As most of you know, Daniel Star and Stephen Kearns have recently made an interesting, original proposal about what reasons are, generally speaking (see, for instance, their ‘Reasons as Evidence’ in OSME 4). On their view, to be a reason is to be evidence for an ought. I’ve started to wonder how compatible this view is with Dancy’s reasons-holism, which attracts me.
So, I want to start from a triad of claims that seem inconsistent to me in the
light of certain plausible empirical and normative assumptions. I then want to
ask what should be given up.
Here are the three inconsistent claims:
1. Reasons as Evidence: Necessarily, a fact F is a reason for an agent A to phi iff F is evidence that A ought to phi (where phi is either a belief or an action) (Star & Kearns 2009, 216).
2. Probabilism about Evidence: q would be evidence for p just if q raises the probability of p; that is, the conditional probability of p on q is higher than the unconditional probability of p: Prob(p) < Prob(p | q) (Williamson 2007, 227).
3. Holism about Reasons: A feature that is a reason in one case may be no reason at all, or an opposite reason, in another (Dancy 2004, 73).
To see the inconsistency, we need a background story with certain empirical and normative assumptions. So, here’s one very rough example.
Suppose that I have borrowed £20 from Daniel. It must be true that this raises the probability that I ought to give £20 to Daniel. So, the fact that I have borrowed £20 from Daniel is a reason for me to give Daniel £20. And this is invariably so. The likelihood that I ought to give Daniel £20 is higher in the set of worlds in which I have borrowed £20 from him than it is in all worlds.
Now, you might protest that it is not always even true that I ought to give Daniel £20 if I have borrowed £20 from him. True. Imagine that I know that Daniel will use the £20 to build a bomb with which he will kill a lot of people. In that case, it is less likely than it normally would be that I ought to give Daniel £20. So the fact that [I have borrowed £20 from Daniel and he will use the money to build a bomb] is a reason not to give Daniel £20.
Imagine then that Daniel will not actually build a bomb with
the £20 (and that I know this). In this case, the fact that [I borrowed £20
from Daniel and he will not use it to build a bomb] makes it even likelier that
I ought to give Daniel £20. Thus, this more complex fact is better evidence and
a stronger reason. In fact, we can make the evidence even better and the reason
stronger. The fact that [I borrowed £20, I promised to pay it back, I have an
extra £20 in my wallet, no children are starving that I could help, Daniel will
not build a bomb…] will make it almost certain that I ought to give Daniel £20.
At this point, Dancy would of course protest that we have not only ruled out
holism but also lost the differences between the kinds of contributions which
different considerations can make normatively. We have included both the
‘favourers’ and the ‘enablers’ in the reason, and thus lost the distinction.
When I first presented this question to Daniel, he made an interesting observation. Daniel pointed out that it’s not clear whether Dancy avoids this inconsistency either. At least in the case of theoretical reasons, Dancy would accept that evidence that (one ought to believe) p is a reason to believe that p (where the ‘is’ here is the ‘is’ of predication and not that of identity, as it is for Star and Kearns). Dancy certainly does want to accept holism, and it seems likely, in the light of things he says at the end of his book, that he accepts some form of probabilism. So, Dancy too would have to give up one of the claims 1, 2, and 3, if Daniel is right that he accepts them all.
Of course, that Dancy might be committed to inconsistent claims doesn’t help Star and Kearns. So, which of the claims should they give up? Giving up holism would mean accepting that it is always the complex considerations that are the proper, invariant reasons, and that no distinction can be made between favourers and enablers. Of course, many people (such as Brad Hooker and Joseph Raz) do say this. But I do think this would be a theoretical cost of the view.
They could also give up probabilism. Before that, they could first try to modify probabilism into a more subjectivist version on which evidence consists of considerations that raise the probability of the ought relative to what the agent already believes. They might also accept that the ought is likewise relative to the agent’s epistemic situation. I’m not sure how this helps, though – the story about the £20 and Daniel seems to go through even if we modify probabilism in these ways. We can run the story from my initially ignorant perspective, in which I acquire the new information step by step. All the claims in the story would still come out true.
Maybe Star and Kearns could instead accept some non-probabilistic account of evidence. This account would also have to be independent of the notion of reasons for belief. I know there are such alternatives, but they are controversial. So, this seems like a theoretical cost too.
Finally, they could give up claim 1 itself. This would be my own choice: I find both probabilism and holism independently plausible, and 2 and 3 are thus reasons to give up 1, given the inconsistency of 1, 2, and 3. But I know that Daniel and Stephen won’t be convinced.
It sounds like you think that we could solve the problem by moving to a version of probabilism on which what counts as evidence for a proposition P for an agent S depends on S’s background beliefs. But this is no sacrifice – that some version of it is correct is by far the majority view among probabilists, I’d think.
You cite Williamson suggesting that he disagrees with this version of probabilism, but I don’t think he does–he thinks that whether E is evidence for P for an agent S depends on what else S knows; E might raise P’s probability relative to some bodies of knowledge, but lower it relative to others. Other more subjective probabilists might make it depend on what else S believes.
It’s easy to see why we should opt for a view like this. Suppose I learn that a spy has said: “The eagle has landed.” What propositions this is evidence for depends on my background beliefs/knowledge–if I believe/know that “the eagle” refers to the president, then the evidential force of this claim will be very different from what it would be if I believed/knew that “the eagle” refers to the prime minister.
If accepting some sort of relativity of evidence to background belief/knowledge is enough to avoid the inconsistency you’re worried about, then it doesn’t seem too troubling.
Thanks Daniel.
Well, the other Daniel suggested to me that this more subjectivist version of probabilism might solve the problem. If it does, then that’s a nice way out of the dilemma. I remain sceptical, though.
First, I need to say that you are probably right about Williamson’s view. I didn’t mean to suggest that this is his fully-fledged version of probabilism – just a rough, easy, and clear statement of probabilism that he considers.
However, here’s why I doubt whether this will do the trick. The main problem is that, even on this account, there is no way of drawing the distinction between favouring considerations and enabling considerations. Whatever raises the probability that I ought to give £20 to Daniel relative to my previous state of information will count as evidence. This goes for learning that there is £20 in my wallet, for learning that Daniel will not build a bomb, and so on. So, even on this reformulated version of probabilism, these considerations, and complex conjunctions of them, will come out as reasons.
It’s true, on the other hand, that the revised view will allow us to count the same consideration as a reason for in one context and as a reason against in another. But this doesn’t seem enough for genuine holism (and I’m slightly uncertain whether it carves nature at the right joints). Genuine holism, I take it, is the view that tries to draw distinctions between different kinds of normative contributions.
Jussi, really interesting post. Had a question I hoped you (or anyone, really) could help settle for me. Consider:
(OIR) ‘ought’ implies ‘reason’ in the sense that if S ought to phi, there’s a reason for S to phi.
(OIE) ‘ought’ implies ‘evidence’ in the sense that if S ought to phi, there’s evidence that S ought to phi.
If reasons are bits of evidence, should we accept the pair?
Suppose S believes she ought to fi and that belief is mistaken but non-culpably. I’d think that in many cases, we’d say that the belief was non-culpably mistaken because it was, well, false but the proper response to the evidence _available_. But, if it truly is the case that the belief is mistaken and the agent ought to have done things differently, I don’t think we’d want to say that the reasons in light of which the agent should have done things differently could be identified with bits of evidence available to the agent. It was suggested to me that their view was that the relevant evidence is evidence “there is” rather than evidence possessed (i.e., propositions in the agent’s body of evidence that determines which degrees of confidence are rational to invest in various hypotheses). That seems to be in tension with the suggestion above in Daniel’s comment about the relativity of evidence. (Of course, if Daniel = Daniel, I suppose I know what the official view is. But, I don’t know if Daniel = Daniel.)
Clayton, thanks. First, the first comment above was not by Daniel Star but another Daniel.
Second, well, I’m quite attracted to the view that evidence must consist of true propositions. But, in any case, I think Daniel and Stephen discuss the kind of thing you have in mind in section 3.2. They have a case in which John ought to flee a building. The reason for this is that there is a time-bomb in the building. Yet, John has no evidence about the bomb because it is so well hidden. So, (OIR) is true but (OIE) is false.
It is true that in response they say that there is evidence that John ought to flee even if it is not available to John – it’s not John’s evidence. So, here, what you have been told about their view seems accurate. I hadn’t thought about the fact that this makes the agent-centred version of probabilism less available to them. A good point – thanks.
Jussi, it seems to me that your claims 1 and 2 together have implausible consequences, independently of 3.
Suppose you’re certain that you ought to repay Daniel. That is, you have credence 1 in this proposition. Then, as your credence cannot be raised any further (1 is the maximum), claim 2 implies that you have no evidence that you ought to repay Daniel, and therefore claim 1 implies that you have no reason to repay Daniel. But it would be odd if your becoming certain that you ought to do something had the consequence that you no longer have any reason to do it.
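To spell out the step: with Prob(you ought to repay Daniel) = 1, we have Prob(you ought to repay Daniel | q) ≤ 1 = Prob(you ought to repay Daniel) for every q, so claim 2’s condition Prob(p) < Prob(p | q) fails for every q; by claim 1, then, no fact is a reason for you to repay.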
More generally, the problem might be put as follows. If reasons are pieces of evidence, as in claim 1, then it seems natural to equate the strength of a reason with its evidential strength. And if evidence is understood as probability-raising, as in claim 2, then it seems natural to equate the strength of a piece of evidence with the extent by which it raises probability. Strong evidence raises probability more than weak evidence does. We might, for example, say the strength of p as evidence for q is given by a function such as
min{0,(Pr(q|p)/Pr(q))-1}.
But then it follows that the more confident you are that you ought to do something, the less reason you have to do it.
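For illustration: since Pr(q|p) ≤ 1, the strength of any evidence for q on this measure is at most (1/Pr(q)) − 1. So if Pr(q) = 0.5, a piece of evidence can have strength up to 1; but if Pr(q) = 0.9, no evidence can have strength greater than about 0.11. The more probable q already is, the weaker any evidence for it can be.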
Campbell,
thanks – to some degree I agree. I think this depends on how we read 2. One way to read it is in terms of subjective credences. Daniel Star actually suggested that this might be one way to avoid the problem. You might be right (and I actually thought this too) that this reading has more serious problems of its own. Also, they definitely accept the view about the strengths of reasons which you describe.
However, there are other ways to read 2. You might think that 2 has nothing to do with how confident or certain you are. It might also be read in terms of objective probabilities. Take all the worlds in which the purported evidence is true. Then compare how often the ought claim is true in those worlds with how often it is true in worlds generally. If the former probability is higher than the latter, then we have evidence. Now, on this reading, 2 doesn’t create the problem you refer to, but we still get the inconsistent triad.
I also think that there might be a reading of 2 which is in between subjective credences and objective probabilities. So, you might take all the worlds in which the evidence claim is true and which are compatible with everything else you believe. You could then compare the likelihood of the ought claim being true in those worlds with the likelihood of its being true in all the worlds compatible with what you believe. On this reading, even when you are fully certain that the ought claim is true, the evidence can still increase the likelihood of its truth. So, it might be that this view too avoids your worry, even if it still faces the question of holism.
I see. But I’m sceptical about one thing you say. There will be infinitely many worlds of both sorts, won’t there? So how do we compare them?
True. I’m not quite sure how this is supposed to work. Maybe you start from the closest worlds and spread out towards less similar worlds. It might be that you can then see a pattern: the ought claim is false in fewer of the evidence worlds than in a mixed set of worlds in some of which the evidence obtains and in some of which it doesn’t. Maybe you can then make an inductive generalisation on this basis. Of course, this is an epistemic question of how we know. It might be that this is an implausible epistemology for reasons – it could make knowing what reasons there are too difficult.
Sorry, my last question was not well worded. I didn’t mean to be raising an epistemic worry. I want to know what it means to say, e.g., there are twice as many Fs as Gs, when there are infinitely many of both.
Anyway, this is probably not important. Though I doubt we can understand objective probability in terms of counting possible worlds, I guess there must be some way to understand it.
Oh. My mathematics of infinities isn’t what it is supposed to be, but I vaguely remember there are views on which one can talk about some infinities being greater than others. So, for instance, one infinity can be greater than another if the second is a subset of the first. Thus, the set of even natural numbers is an infinite set that’s a subset of the set of all natural numbers. This is why the latter set is greater. Here there’s a one-to-one mapping from the even numbers into all the numbers, but the second set still has leftovers that are not mapped to from the set of even numbers.
I guess something like this could be going on in the evidence case too. You start from the evidence worlds, in which the ought claim is mostly true. You then look at the non-evidence worlds, and the proportion of worlds in which the ought claim is true might begin to decrease.
I don’t know enough about how this is supposed to work. You are right that counting possible worlds doesn’t really make sense. I’ll let you know if I can find anything informative on this.
No, that’s no good. Forget this line, Jussi.
But here’s a way to solve the formal problem Campbell raised. If we have a Bayesian approach, then certainty (credence = 1) is ruled out for some propositions. Could the Reasons=Evidence theorist say that it isn’t permissible to be certain about what you ought to do?
Of course, this will still leave the problem about strengths of reasons and amounts of change in probability. The relation between those would have to be more complicated than identity (or direct proportionality).
I’m happy to. Thanks. I need to remind myself about the Bayesian approach more.
I’m slightly lost about the dialectic. Given the answer that Star and Kearns give in the time-bomb case (see my reply to Clayton), the evidence which the agent has for the ought is irrelevant to the agent’s reasons. What is relevant is rather ‘what evidence there is’ in general. If that’s right, I’m not sure I can see how the agent’s degree of certainty (her subjective credences) could be relevant to reasons either. So, this case seems to push us towards the objective probabilities, which might be problematic in themselves if Campbell’s problem is right.
I actually thought in the beginning that, given supervenience, if it is the case that I ought to do something, then the fact that is the whole state of the world would make the probability of the ought 1. Given the state of the world, it cannot be that I lack the ought if I have the ought in this world. So, the whole world would always be the best possible evidence for the ought and the strongest possible reason.
Now, you might think that because of this the Reasons=Evidence theorist will have to rely on the credences of the agent or make the evidence relative to the agent’s epistemic state. That makes the time-bomb case problematic. And the view still faces the problem of how to rule out full subjective certainty (I still think that there must be some way of representing the worlds compatible with the agent’s state of prior knowledge and how likely it is that the ought claim is true in them). So, there seem to be problems either way.
Star and Kearns do accept direct proportionality: “The strength of reason to phi, R depends on the degree to which R increases the probability that one ought to phi. The more probable it is that one ought to phi given R, the stronger the reason to phi R is.”
I guess my main point is that no matter how we understand the raising of the probability (and I take this to be a question for Star and Kearns, really, given that it’s their view), it’s still the case that anything that raises the probability is going to count as a reason. And, in the light of holism, this might be too crude. Both favourers and enablers raise the probability of the ought, so that distinction would be lost.
Now, it’s true that this problem is downstream from Campbell’s worry. They first need to tell a story about what kind of probability-raising we are talking about. But however they solve that problem, mine still seems to remain.
“The more probable it is that one ought to phi given R, the stronger the reason to phi R is.”
I need to read this paper again, but this seems wrong. Suppose you know of a really, really, really weak reason, one just strong enough to create an obligation in some possible case. Call that X. Just about anything out there could defeat X, but God says that if anything matters to the case, it is X and X alone. You have an obligation that is an all-things-considered obligation, and the best evidence you could have for thinking that your obligation is just where you think it is; but the reason is exceptionally weak, easily defeated, etc.
I think in that case they would say that it is God’s testimony that makes it certain that you ought to phi. This would thus also be the reason to do so. Anyway, there’s a discussion of similar cases in section 3.1 of the paper.
Thanks, Jussi. I just looked at their response, and two things are striking. First, they run a sort of companions-in-guilt argument that is supposed to show that if the fact that the paper says “There’s starvation in Africa” doesn’t count as a reason, the fact that there’s starvation in Africa might not count as a reason either. I’m not entirely persuaded, as I’m sort of willing to say that the _real_ reason is that there’s widespread suffering in Africa. Second, the case they describe is a touch different from the one I’ve offered, because the testimony is evidence of the absence of defeating reasons, and I don’t think this sort of evidence is itself part of a reason. It is, however, part of a body of evidence that makes it quite certain that an act is right; and if you delete that evidence, the evidence that the action is right becomes quite poor. At any rate, it seems like many of our “strength” intuitions have to do with imagined conflicts between reasons, and I’m not entirely convinced that strength in this sense is modeled on evidential strength of rightness. Knowing with a high degree of certainty that some option is right, where R is a reason to do it, just seems to be a different sort of thing from knowing that R is quite likely to defeat other reasons you might come across elsewhere.
I’m happy with that. In fact, this:
‘the testimony is evidence of the absence of defeating reasons, and I don’t think this sort of evidence is itself part of a reason.’
is just the kind of distinction between ruling out defeaters and reasons that I had in mind in my original post. The worry is that this is a distinction for which their view cannot easily leave room. Of course, there might be views about evidence available to them that would enable them to deny that evidence of the absence of a defeater is itself evidence for the ought claim.
Here is perhaps another problem for reasons=evidence.
Suppose you believe that ‘ought’ implies ‘can’. To keep things simple, let’s say you’re certain of this. So you have P(Oφ&~Cφ) = 0 (‘O’ for ought, ‘C’ for can). Suppose also that you’re not certain either that you can φ, or that it’s not the case that you ought to φ. That is, P(Cφ) < 1 and P(~Oφ) < 1. Then it follows that P(Oφ|Cφ) > P(Oφ). That you’re able to do something is a reason for you to do it.
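To spell out the derivation (assuming P(Cφ) > 0, so that the conditional probability is defined): since P(Oφ&~Cφ) = 0, we have P(Oφ) = P(Oφ&Cφ). So P(Oφ|Cφ) = P(Oφ&Cφ)/P(Cφ) = P(Oφ)/P(Cφ). Now P(~Oφ) < 1 gives P(Oφ) > 0, and P(Cφ) < 1 means that dividing by P(Cφ) strictly increases this positive value. Hence P(Oφ|Cφ) > P(Oφ): by claims 1 and 2, the fact that you can φ is evidence that you ought to φ, and so a reason for you to φ.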
(By the way, I wrote ‘min’ instead of ‘max’ in one of my earlier comments.)
Again, I’m happy with that. This just fits what Dancy says about enabling conditions. According to Dancy, we should think of abilities to do things (the ‘cans’) as enabling conditions rather than as reasons. When the enabling condition obtains, it is likelier that you ought to do the thing. Yet Dancy wants to be able to keep the distinction between enabling conditions such as abilities and the favourers that are reasons. The point of the original post was just to show that this distinction collapses if one accepts the probabilistic account of evidence.
Yet, even here there are moves which Star and Kearns could perhaps make. One thing they could do would be to find an alternative view about evidence. Another would be to agree with those who really think that abilities are part of the more complex consideration that is the reason (or perhaps contributing reasons in their own right). So, if we want to make the objection stick, we have to be able to argue that the probabilistic views about evidence really are the most plausible and that considerations such as abilities cannot themselves be reasons (but only enablers).
Hello, this is Stephen Kearns, of the above-mentioned “Star and Kearns” fame (though I personally like to think of us as Kearns and Star, as it is in the paper and in our crime-fighting duo (the cape fit me better)). Thanks very much for Jussi’s comments and all the responses so far. I thought I’d chime in with my reply to Jussi.
First, I don’t see at all how Jussi’s example shows that 1) and 2) are inconsistent with 3). In fact, I think that RAE (the reasons as evidence thesis) ENTAILS that a feature that is a reason in one case may be no reason at all, or an opposite reason, in another. That is, RAE entails holism.
This is intuitively easy to see. In a world like ours, where people who go without food for weeks generally suffer and perhaps die (and this is well known), the fact that (say) John has gone without food for a week is (often) evidence that one ought to give him food. It is thus (according to RAE) a reason to give him food.
In a different world, where people going without food for weeks actually makes them really happy and healthy (and this is well known), the fact that John has gone without food for a week is NOT evidence that one ought to give him food (indeed it will often be evidence NOT to give him food). Given RAE, it is thus not a reason to give him food (and is perhaps a reason to not give him food).
This is a clear example according to which RAE entails that a reason for Aing in one case is no reason at all for Aing, or a reason against Aing, in another.
I thus see no conflict between RAE and holism, even given the probability-raising conception of evidence we suggest.
I think Jussi’s main problem should rather be cast as a problem that RAE doesn’t distinguish between enablers and reasons correctly. I confess I’m not much perturbed by the example given. Indeed, I find myself thinking that the following quote from his post (wherein is supposed to lie the problem) is just obviously correct:
“Imagine then that Daniel will not actually build a bomb with the £20 (and that I know this). In this case, the fact that [I borrowed £20 from Daniel and he will not use it to build a bomb] makes it even likelier that I ought to give Daniel £20. Thus, this more complex fact is better evidence and a stronger reason. In fact, we can make the evidence even better and the reason stronger. The fact that [I borrowed £20, I promised to pay it back, I have an extra £20 in my wallet, no children are starving that I could help, Daniel will not build a bomb…] will make it almost certain that I ought to give Daniel £20.”
Isn’t this all just clearly right? Of course, eventually we’ll get to a point where the reason stated is a huge unwieldy conjunction. Perhaps THIS reason can’t change its valence, but holism need not be so extreme as to claim that THAT kind of reason could change its valence (otherwise it would end up denying that the normative supervenes on the non-normative!).
In general, isn’t it obvious that, of two agents who both owe Daniel £20, in a case where one has the money and one doesn’t, the former has (all other things being equal) a stronger reason to give Daniel the money than the latter (even though both have some reason to do it)? (Psst! The answer is “Yes”.)
Other problems have been raised in the comments, but I think for now I’ll throw this out there. I know Daniel (Star) is planning to comment (we’re hitting you with all we’ve got!) and I think he’s addressing some of those other comments.
Jussi’s (1) and (2) together entail
4. A fact F is a reason for an agent A to phi iff Prob(A ought to phi) < Prob(A ought to phi | F).

Holism about reasons introduces the notion of “a reason [for A to phi] in a case”. I suggest we understand “in a case” to mean “in the context of other considerations” or, more precisely, “with some (possibly null) set of propositions conjoined to the reason”. Thus holism about reasons will be true just in case there is some set of considerations C such that Prob(A ought to phi) < Prob(A ought to phi | F) but it is not the case that Prob(A ought to phi) < Prob(A ought to phi | F&C). And there is every reason to believe that there will be such situations, given ordinary probability behavior, so I don’t see any kind of inconsistency here at all.

The only problem I can see is one that Stephen Kearns suggested, namely that the long F&C might lose distinctions we want to keep, between favorers and enablers (or better, I think, between evidence and facts about the absence of defeaters for that evidence). There might be an interesting question whether these distinctions can be cashed out probabilistically, and, if not, whether that is a problem. But that, I think, is the interesting question.
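To illustrate the point about ordinary probability behavior with made-up numbers: let F be the fact that I borrowed £20 from Daniel, and let C be the consideration that he will use the money to build a bomb. We might have Prob(A ought to phi) = 0.1, Prob(A ought to phi | F) = 0.8, and Prob(A ought to phi | F&C) = 0.05. Then F is a reason on its own, but no reason at all (indeed, evidence against the ought) in the context of C. That is just the pattern holism describes.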
Stephen and Heath,
thanks. And I hope you guys are well. I take your points. A couple of points anyway. Having now read a bit of Salmon and Reichenbach on evidence, I think your first point is right to a degree. It’s more usual to formulate the evidence view a bit like this:
“q would be evidence for p just if q raises the probability of p given our background knowledge b; that is, the conditional probability of p on q&b is higher than the unconditional probability of p: Prob(p) < Prob(p | q&b).”

So, now, you might think that whether or not we are in a world where not eating entails suffering can be seen as part of our background knowledge. This means that in our world the fact that someone has gone without food is evidence (given our background knowledge), but not so in the other world (given our different background knowledge there).

Note, though, that this does not capture holism as Dancy intends it. It still entails that in *our world* the fact that someone has gone without food for a week is always a reason. This is what Dancy wants to deny. For him, it can also be true in our world that the fact that someone hasn’t eaten for a week is no reason at all to give them food. So, true, there is no direct contradiction between 1, 2, and 3, but if we understand 3, as Dancy does, to be about our world, then there is still an incoherence.

I also wouldn’t set aside the problem of not being able to distinguish enablers and favourers. If you think that it is the unwieldy long conjunctions that are the reasons, then it is difficult to see how one could think that the conjuncts play different kinds of normative roles. They all seem to make the same kind of contribution. And this, I think, is problematic. Anyway, there’s a long discussion of this between Dancy and others. I find it interesting that your view commits you to the other side against Dancy. I don’t claim that this is a KO argument against you guys, though, but it does seem like a cost.

Heath, I think my comments for Stephen apply to you too, but thanks for clarifying this.
Thank you Jussi, and everyone else, for the very interesting comments. I agree with much of what Stephen says in his post, although I would admit that I am basically an atomist at the end of the day (to respond to Jussi’s response to Stephen); but an atomist who hopes to capture many of Dancy’s intuitions, rather than an atomist who simply denies Dancy’s intuitions. I have thought for some time that RAE may enable us to better understand (superficial) holism about reasons.

In my NDPR review of McKeever and Ridge’s Principled Ethics, I focus on holism about reasons, and argue that the holism thesis needs strengthening if it is to provide a particularist alternative to traditional principle-based ethical theory (but I will assume in the present context that we are not talking about a radicalized holism thesis). I also argue that McKeever and Ridge dismiss two-level accounts of normative reasons much too quickly. In the (still rough) draft of “Two Levels of Moral Thinking” that is up on my webpage, I suggest that it is possible to locate the appearance of holism on the level of ordinary derivative reasons, yet claim that atomism prevails on the level of ultimate reasons of the kind described (and, in principle, discovered) by traditional ethical theory, and that RAE can also explain how a two-level account of reasons can avoid a double counting problem that one might otherwise think arises if one can add the strength of a derivative reason to the strength of an underlying ultimate reason.

My thinking on these and many other issues has benefited a great deal from talking and working with Stephen. There is not much more I want to add at this point, except I should perhaps relay some of the ideas I discussed with Jussi in our earlier correspondence (I have added one or two sentences to what follows just to clarify my earlier thoughts).
I recall that Dancy originally wanted to extend holism to reasons for belief. He somewhere discusses the following example: usually, the fact that an object appears green is a reason to believe it is green; however, suppose one has knowingly taken a drug that makes red things appear green and green things appear red – now, according to Dancy, the fact that a particular thing appears green in this context would be no reason at all to believe it is green, while, rather unusually, the fact that it appears red would be a reason to believe it is green (of course, one might well disagree with the “no reason at all” claim here). I guess the fact that you have taken the drug can be considered a disabler here. Presumably, Dancy would say that the fact that one has taken a drug is not, in itself, a reason to believe anything about the color of particular objects. So far so good. However, all this should work with evidence too, since the direct connection between reasons for belief and evidence is not the controversial part of RAE. The fact that you have taken the drug presumably isn’t itself evidence that any particular object is a particular color. However, this fact in conjunction with the fact that an object appears green is evidence that it is red. So the question for Dancy is: given that it seems fine to equate reasons for belief with evidence (at least when we are putting pragmatic reasons for belief to one side, which we are here), why can’t one say that the fact that one has taken the drug can be part of the reason to believe that an object is red (when it appears green)? It seems his only options are to: (1) deny reasons for belief and evidence are closely linked; or (2) claim that reasons for action and reasons for belief are fundamentally different when it comes to the enabling/disabling picture. Either way he loses out on a possible attraction of his view.
If Dancy accepts that a reason to believe P = evidence that P (or a roughly similar claim) then either whatever he wants to say about enablers and defeaters and where they figure in relation to the content of reasons will be something we can say as well about reasons for action, or he will have to claim that reasons for action are radically different than reasons for belief when it comes to enablers and defeaters and what relationship they stand in to reasons. This isn’t just a point against Dancy; the more important thing to stress for our purposes is that holism about reasons really does seem to apply to reasons for belief – it seems equally attractive when we consider reasons for belief as when we consider reasons for action (we might want to reject holism at the end of the day, but it seems to have an equal amount of prima facie plausibility in both domains). Perhaps holism about reasons for belief is not something a probabilist (i.e. someone who holds to Jussi’s 2.) can accept; however, this would be a surprising result. If it is true, then either probabilism or holism about reasons for belief needs to be rejected. Alternatively, one might suspect that there must be a way to square holism about reasons and probabilism. (Very minor point: I never suggested that Dancy accepts Jussi’s 1. to 3., as Jussi seems to suggest at one point, since I don’t think Dancy accepts 1. as it applies to reasons for action.)
If we can find examples of enablers or defeaters with respect to mundane reasons for belief where it really doesn’t seem like those enablers or defeaters are part of a reason, then whatever story the epistemologist has to give about how evidence functions in such situations is one that we can adopt for reasons for action as well.
The fact that there is light in this room enables me to see the keyboard, so it is an enabler. Is that fact either a reason or part of a reason to believe there is a keyboard in front of me? Suppose one says it isn’t. Then one is saying that the fact that there is light in the room is not evidence that I ought to believe there is a keyboard in front of me. Someone might object that the fact there is light in the room does raise the epistemic probability that there is a keyboard in front of me, so it must be evidence that I ought to believe there is a keyboard in front of me. There are three ways one could reply to this objection:
(1) Evidence that P is not the same as evidence that I ought to believe P (recall that our analysis directly focuses on the second rather than the first); perhaps one can have the first without having the second. In fact, this is quite intuitive: suppose I already have enough evidence to justify believing P (fill in the story); then it might be that the probability that I ought to believe P is 1 (while the probability that P is less than 1); now I receive fresh evidence that P; this doesn’t raise the probability that I ought to believe P. (I put some toy numbers on this below, after reply (3).)
(2) Recognizing evidence that P may not involve raising the probability that P; now, we certainly haven’t said this in the past (we’ve said the opposite); however, our central thesis, recall, is not a thesis about probability raising; it is the thesis that a fact is evidence that I ought to X just in case that fact is a reason to X. As we say in a response to John Brunero (“Weighing Reasons”), it is always open to us to reject the probability-raising account of evidence and retain the reasons as evidence thesis. This is not our present plan, but it is an option.
(3) Even in the earlier papers, we pointed out that philosophers who favor probability raising accounts of evidence (e.g. Williamson) need to say that one must focus on a salient subset of one’s evidence, and ask whether fresh information raises the probability of P, relative to that subset of evidence. This is one of the main reasons we presently feel we don’t need to take route (2). Admittedly, more needs to be said about what makes for an appropriate salient subset.
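To put toy numbers on reply (1), purely for illustration: suppose my evidence already makes the probability of P 0.95, and suppose that, in this context, that much support suffices for justified belief, so that the probability that I ought to believe P is already 1. A fresh piece of evidence that raises the probability of P to 0.97 raises the probability that P, but it leaves the probability that I ought to believe P where it was. So something can be evidence that P without being evidence that I ought to believe P.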