Consider the following plausible wide scope requirement, a requirement that many philosophers endorse (including, I believe, M. Bratman, G. Harman, J. H. Sobel, and J. D. Velleman):
(WSR) Subjective rationality demands that S be such that she does not intend to X if she believes that she will not X.
In order to move from this wide scope requirement to some corresponding narrow scope requirement, we might rely on the following principle:
(WSR->NSR) If there is a wide scope requirement to the effect that subjective rationality demands that S be such that she does not phi if she psi’s (e.g., does not believe not-Q if she believes both P and P entails Q), then it follows that there is a narrow scope requirement to the effect that S is objectively required [not to phi] if S is objectively required to psi.
Now, WSR and WSR->NSR together entail:
(NSR) S is objectively required [not to intend to X] if S is objectively required to believe that she will not X.
But suppose that we assume that agents are objectively required to believe what’s true—that is:
(ORBT) For all S and for all p, if p is true, then S is objectively required to believe that p.
NSR and ORBT imply something absurd:
(ABSRD) One can never be objectively obligated to intend to do what one won’t in fact do.
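Spelled out, the derivation runs as follows (step 4 uses an auxiliary assumption, which seems safe here, that objective requirements cannot conflict):
1. S will not in fact X. [supposition]
2. S is objectively required to believe that she will not X. [1, ORBT]
3. S is objectively required not to intend to X. [2, NSR]
4. S is not objectively required to intend to X. [3, no-conflicts assumption]
Since the supposition in 1 was arbitrary, ABSRD follows.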
ABSRD is absurd, because sometimes the only reason why S won’t in fact do something is that S doesn’t in fact intend to do it. And S can’t get out of an obligation to intend to do something simply by not intending to do that something.
Clearly, something has gone wrong along the way to ABSRD. I suspect that WSR->NSR is false, although I’m also open to the possibility that ORBT is false. Unfortunately, though, I’m not well versed in the relevant literature. Can anyone suggest any literature on how we derive narrow scope requirements from wide scope requirements? And can anyone suggest any literature on what we objectively ought to believe? Has anyone explicitly endorsed ORBT? Jacob Ross has said, in conversation, that he’s inclined to endorse it.
I should note that Peter Graham was the one who helped me to see this problem.
Hi Doug,
just a first thought. There’s a standard objection to ORBT, which is that there are infinitely many trivial truths that ORBT would create a requirement to believe, and this is slightly implausible. There’s a nice forthcoming paper on the standards of belief by Daniel Whiting, called ‘Does Belief Aim (Only) at Truth?’, which gives the references to the literature on that triviality objection in footnote 16 (you can get the paper through PhilPapers). Of course, one can try to modify ORBT to get rid of this problem, and Whiting does discuss these solutions.
A second point to make is that whether ABSRD follows from NSR and ORBT depends on what your theory of future contingents is. Many people hold that such claims do not have a truth-value before the events occur, or that, if they do, they have them relative to different contexts of evaluation. John MacFarlane has various nice papers about this. I hope others are able to help you with the scope material.
Thanks, Jussi. This is helpful. I’ll look at these papers. Why, though, is it implausible to believe that, for each of an infinite number of trivial truths, there is an objective requirement to believe that truth? (I know. I should read the paper. But if it’s quick, I would like to know why it’s implausible.)
Hi Doug,
well, I think there’s a worry that objective requirements would in that case be utterly unachievable, so there would be violations of ought implies can. Also, there seems to be a connection between objective requirements and reactive attitudes, which would mean that there would be a lot of people we would and should disapprove of for violating their objective requirements. And, finally, lots of people working in this area have found the proliferation of requirements silly (they’re listed in that footnote of the paper).
I know there’s a tricky bit in getting from WSR to NSR because you have phrased one in terms of subjective requirements and the other in terms of objective ones. I’m not sure why you need this. I take it many people would think that the wide-scope requirement is an objective requirement as well. In any case, just for what it’s worth, NSR seems to follow from WSR given the standard possible worlds semantics for requirements. WSR excludes from the deontically ideal worlds any world in which S intends to phi while believing that she will not phi. If all the deontically ideal worlds are worlds in which S believes that she will not phi, then the ideal worlds will also only include worlds in which S does not intend to phi. And so the narrow-scope requirement follows.
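Schematically (just a sketch, reading O as ‘in all the deontically ideal worlds’, B for belief, and I for intention):
1. O~(I(X) & B(S will not X)) [WSR: no ideal world is one where S intends to X while believing she will not X]
2. OB(S will not X) [supposition: all ideal worlds contain the belief]
3. O~I(X) [from 1 and 2: every ideal world contains the belief but not the intention-plus-belief combination, so no ideal world contains the intention]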
Hi Jussi,
“I think there’s a worry that objective requirements would in that case be utterly unachievable, so there would be violations of ought implies can.”
Okay, maybe ORBT should say ‘for any non-conjunctive proposition p’. After all, for any non-conjunctive proposition p, the belief that p is achievable. What isn’t achievable, perhaps, is believing some infinite set of propositions. But ORBT, so revised, wouldn’t say that for any set of true non-conjunctive propositions, S is objectively obligated to believe that entire set of propositions. Thus, ORBT implies that S has an obligation to believe the entire set of propositions that are true only if obligation agglomerates over conjunction, which it doesn’t (or so I believe).
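(The agglomeration principle I have in mind here would be something like this: if O(Bp) and O(Bq), then O(B(p & q)), together with its infinitary analogue. Without it, ORBT yields a separate requirement to believe each non-conjunctive truth, but no requirement to believe them all at once.)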
“Also, there seems to be a connection between objective requirements and reactive attitudes, which would mean that there would be a lot of people we would and should disapprove of for violating their objective requirements.”
That doesn’t follow unless the connection is so simple as to be: a subject is the appropriate target of reactive attitudes if she violates some objective obligation. But I don’t know of anyone who thinks that.
“I know there’s a tricky bit in getting from WSR to NSR because you have phrased one in terms of subjective requirements and the other in terms of objective ones. I’m not sure why you need this. I take it many people would think that the wide-scope requirement is an objective requirement as well.” I agree, but I thought that some might find this controversial.
I’ll think about your argument from the standard possible worlds semantics for requirements. That sounds quite good. Is there a paper that you would recommend on possible worlds semantics for requirements?
Hi Doug,
You can modify (ORBT) in a variety of ways (e.g., you ought to believe as many truths as you can, you ought to believe any truth you consider, etc.). I’d reject even these modified versions of (ORBT), but not (just) for the sort of reasons that Jussi suggests. Suppose that the number of stars is even. Consider propositions like:
(1) The number of stars is even.
(2) I don’t have any more reason to believe (1) than (~1).
(3) Nobody knows (1).
I take it that (1)-(3) are all true. I think it’s intuitively clear that you shouldn’t believe both (1) and (2) or both (1) and (3). If there are objective requirements in epistemology, they are requirements to refrain from believing falsehoods, what’s not known, what’s not supported by the evidence, etc. Once you move away from requirements to believe towards prohibitions against believing, I don’t know if you can so easily derive (ABSRD).
Hi Clayton,
That was a bit quick for me. I take it that I’m supposed to conclude that there are no objective requirements to believe (even some suitably restricted set of) truths. And that this follows from what you said about the stars examples, which I disambiguate as follows:
Suppose that the number of stars is even. Consider propositions like:
(1) The number of stars is even.
(2) I don’t have any more *subjective* reason to believe (1) than (~1).
(3) Nobody knows (1).
I take it that (1)-(3) are all true. I think it’s intuitively clear that *I* (did you mean ‘you’?) *subjectively* shouldn’t believe both (1) and (2) or both (1) and (3).
Could you spell things out a bit more?
Hi Doug,
It might be too quick to conclude that there are -no- objective requirements to believe truths, but the number of stars example seems to serve as a perfectly good counterexample to ORBT. Suppose the number of stars is even (and that nobody knows that this is true). We might suppose that (1)-(3) are true:
(1) The number of stars is even.
(2) The evidence I have supports (1) and (~1) equally.
(3) Nobody knows (1).
My claims were these:
(4) You shouldn’t: believe (1) and believe (2).
(5) You shouldn’t: believe (1) and believe (3).
It seems to follow from ORBT that:
(6) You should believe (1).
(7) You should believe (2).
(8) You should believe (3).
The basic point is that nobody should believe what they know they don’t know, and nobody should believe p when they know their evidence supports p and ~p equally. As such, I don’t think there are positive requirements to believe truths that you happen to consider. If there are positive epistemic obligations at all, they have to apply to a special class of truths. My own view (which I didn’t argue for) is that epistemic reasons aren’t reasons that require belief. I didn’t use the subjective/objective talk in my example, but I think that there’s no objective obligation or requirement to believe things you’d be crazy to believe, and Moorean absurdities (e.g., God despises my atheism) fall into that category.
Hi Clayton,
It would be helpful to me if you would disambiguate your shoulds. It seems that sometimes you’re talking about the objective should and other times you’re talking about the subjective should.
Why is it that I (Doug) objectively shouldn’t believe both that (1) the number of stars is even and that (2) the evidence you (Clayton) have supports 1 and ~1 equally?
I don’t see that at all.
Now, I agree that *Clayton* subjectively shouldn’t believe both that (1) the number of stars is even and that (2) the evidence Clayton has supports 1 and ~1 equally. And I agree that ORBT implies both that (6) Clayton objectively should believe 1 and that (7) Clayton objectively should believe 2. But there’s no conflict here as far as I can see.
Is the thought, then, that it can’t be that somebody *objectively* should believe p even though they know their evidence supports p and ~p equally? But why is that?
I think the problem here might be a result of asking the question in a particular way. You’re relying on a distinction between objective and subjective obligation. But perhaps the question should be, what obligations should I recognize, on pain of irrationality?
Insofar as I am considering what to do – that is, insofar as I am constructing an intention in deliberation – I am thinking of the action I will ultimately intend as the output of deliberation. Intentions, I take it, are intrinsically prospective. The truth about what I will do, then, is tied to the outcome of deliberation. It isn’t the input. This means that I cannot rationally, within the deliberation itself, take that outcome to be a fixed truth (that I will or will not act in such and such a way).
I think this means that the inference from NSR and ORBT to ABSRD doesn’t work. NSR says that I am objectively required not to intend X if I am objectively required to believe that I won’t X. Fine, as far as it goes. For example, I should not intend to fly unassisted by technology given that I am rationally required to believe that I cannot do so. ORBT says that if p is true, then I am objectively required to believe that p. But ABSRD says something more: I can’t have an obligation to intend to do an action that I won’t end up doing. But this only follows if I am required to believe that I won’t do the thing I don’t end up doing. My argument is that we cannot have a general obligation to believe truths about what we will do, while engaged in the deliberations that will determine our intentions.
Even if I have an obligation of some kind to believe all truths generally (which commenters above have questioned), the truth about what I will eventually do, as such, cannot be included among the set of truths I am required to believe when deliberating about what I will do. In cases like flying, I have reasons to hold that I will not fly that are completely independent of my deliberation, and so these kinds of truths don’t fall afoul of presuppositions intrinsic to the act of deliberation. But, in forming my intentions, I am rationally required to see the truth of what action I will ultimately take as indeterminate. The idea, again, is that this truth is tied to the outcome of deliberation – it is my deliberation, and the resultant intention, that determines whether it is true that I will act (or at least try to act) in such and such a way. So, what I will or will not do, as such, cannot be included in the set of truths that I recognize within deliberation.
Hi Pete,
I find most of what you say plausible, but I’m not sure why what you’re saying means that the inference from NSR and ORBT to ABSRD doesn’t work. Rather, it seems to me that you’re questioning ORBT.
You write: “But ABSRD says something more: I can’t have an obligation to intend to do an action that I won’t end up doing. But this only follows if I am required to believe that I won’t do the thing I don’t end up doing. My argument is that we cannot have a general obligation to believe truths about what we will do, while engaged in the deliberations that will determine our intentions.”
Note that you say that ABSRD follows only if I am objectively required to believe that I won’t do the thing that I won’t end up doing. Well, if it’s true that I won’t do the thing that I won’t end up doing, then that is exactly what ORBT implies that I am objectively required to believe. Now, maybe you want to deny that the proposition that I won’t do x is the sort of proposition that has, at present, a truth value, given your position on future contingents. If so, I think that’s a worry that Jussi already pointed out.
But I take the general thrust of your points. Thanks.
Hi Doug,
I try to never talk about an objective ‘should’ and a subjective ‘should’. I take it that there’s an important ‘should’ (maybe the one we have in mind when we discover that we believed something false and say that we shouldn’t have believed what we did?) and there’s a difficult question as to whether ‘S shouldn’t believe p’ depends upon facts that don’t supervene upon features of S’s perspective. This norm tells us that some prohibitions are grounded in objective features:
KNB: You shouldn’t believe p unless you know p.
If you think belief aims at knowledge, perhaps this gives us a better sense of the objective ‘should’ and ‘ought’ than, say, ORBT does. If I might ask, why do you think there’s an objective ‘should’ according to which someone should believe propositions that they would be crazy to believe (e.g., propositions of the form _p, but I don’t know p_)?
The argument against ORBT I had in mind was something like this: KNB says that objectively you shouldn’t believe what you don’t know. So, you objectively shouldn’t believe that the number of stars is even and objectively shouldn’t believe that god hates your atheism. Given this, any thesis that says that you objectively should believe any truth you consider is false.
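In schematic form, the clash is just this (call the restricted thesis ORBT*, since it covers only truths you consider):
KNB: ~Kp -> O~Bp
ORBT*: (p & you consider p) -> OBp
Let p = the number of stars is even, which we’re supposing is true, considered, and known by nobody. Then ORBT* gives OBp while KNB gives O~Bp.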
What’s the motivation for ORBT? I mean, if you don’t believe p when p is true, is that really a failure to meet your responsibilities in the way that, say, not pulling a drowning child from a pond is? From the epistemic point of view, I don’t think there are any reasons that pressure you to form beliefs, only reasons against forming beliefs in the wrong way. If the only reasons that determine whether you shouldn’t believe are epistemic reasons, I’d reject ORBT.
Hi Doug,
Just a quick follow up from last post. I think we can cause problems for ORBT even if we don’t assume that knowledge is the norm of belief. (I don’t actually believe that it is, but it can be helpful to pretend that it is to cause trouble for certain views.)
The general structure of the problematic inference is something like this:
Step one: Find a wide-scope subjective ought;
Step two: Derive a narrow-scope objective ought (via the WSR -> NSR principle);
Step three: Detach conditional objective ought;
Step four: Apply ORBT;
Step five: Derive problematic conclusion.
Suppose the wide-scope subjective oughts govern combinations of beliefs rather than combinations of beliefs and intentions. Suppose that the wide-scope subjective oughts forbid forming irrational combinations of beliefs.
Consider an example:
(a) p but I don’t know p.
I take it that this is an irrational thing to believe, so there’s a wide-scope subjective ought claim telling you not to believe that you don’t know p if you believe p. That gives us:
Step one: O(Bp -> ~B~Kp) [i.e., you ought to see to it that: if you believe p, you don’t believe you don’t know p.]
Here’s the analogue of your WSR -> NSR:
Step two: [O(Bp -> ~B~Kp)] -> [OBp -> O~B~Kp]
Step three: Detach OBp -> O~B~Kp (via modus ponens)
Step four: OBp [assume we’re in a p-world; ORBT]
Step five: Detach O~B~Kp [via modus ponens]
It’s consistent with everything we’ve said that you don’t know p. Suppose we stipulate that you’re in a ~Kp world. Then we get:
Step six: OB~Kp
It seems we’ve derived that you ought to believe that you don’t know p and that you ought not believe that you don’t know p. I think that’s bad. Either the move from wide-scope subjective oughts to narrow objective oughts doesn’t work for belief (but only for belief and intention) or ORBT is false. Does this give you some principled reason for thinking that there’s something wrong with the derivation of ABSRD?
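To put the clash starkly, let q = B~Kp, and grant the weak principle that objective oughts can’t conflict, i.e., ~(Oq & O~q):
Step five gives: O~q
Step six gives: Oq
which together violate that principle.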
I wonder about those who defend ORBT. Would they also say that OBp iff p? If so, we can derive Kp & ~Kp.
Hi Clayton,
“It seems we’ve derived that you ought to believe that you don’t know p and that you ought not believe that you don’t know p. I think that’s bad. Either the move from wide-scope subjective oughts to narrow objective oughts doesn’t work for belief (but only for belief and intention) or ORBT is false.”
Isn’t there another possibility? Namely, that (a) is false.
Indeed, I’m not sure that I would accept:
“O(Bp -> ~B~Kp)”
There are lots of philosophical positions that I believe/accept, but that I don’t believe that I know to be true. Of course, I guess it all depends on what it means to Bp. I take it that I Bp if I’m disposed to assume p in my reasoning, to assent to p, and so on. I don’t think that I have to have a particularly high absolute credence in p (although perhaps my credence in p has to be considerably higher than my credence in ~p) in order to Bp. But I do think that I should not believe that I Kp unless I do have a particularly high absolute credence in p.
Here’s an example. I believe that commonsense consequentialism is true, but I wouldn’t claim that I know that commonsense consequentialism is true. The verdict is not even close to being in on it.
This is not an issue that I’ve reflected on much and I’m not well-versed in epistemological issues. So take these untutored intuitions for what they’re worth: not much. My main point is only that it’s not completely obvious to someone like me that (a) is incontrovertible.
Hi Doug,
“Here’s an example. I believe that commonsense consequentialism is true, but I wouldn’t claim that I know that commonsense consequentialism is true. The verdict is not even close to being in on it.”
I just bought your book (early festivus present for myself). So far, it’s pretty good. Maybe you do know that commonsense consequentialism is true. I don’t think it’s necessarily irrational to believe one thing without also believing that you know it, but to believe that you don’t know something while believing it anyway strikes me as odd. (Adler has a nice discussion of this in his _Belief’s Own Ethics_.)
I thought the example would be helpful for these reasons. First, I would think that the state of mind expressed by an assertion of ‘p, but I don’t know p’ or ‘p, but all my evidence suggests ~p’ is deeply irrational. Second, I was assuming that the state of mind you’d be in if you sincerely uttered these things involves belief (e.g., ‘Bp & B~Kp’ and ‘Bp & B(the evidence supports ~p)’). Third, I was assuming that for any irrational combination of attitudes, there’s a wide-scope (subjective, if you like) ‘ought’ that says in effect that you’re required to avoid that combination.
The trick was this. Following Moore’s lead, identify pairs of propositions that are logically consistent but irrational when believed together. Given this, you’ll get a WSO prohibiting having both beliefs. Given your principle, you’ll get an NSO, and you can detach an obligation to believe p alongside an obligation not to believe p. Even if you don’t think believing p whilst believing p isn’t known is irrational, surely some such pair of attitudes will do the trick (maybe ‘p and all my evidence suggests ~p’?).
So, how’s this example? Suppose there’s a world in which:
(i) p & I’m irrational to believe p.
If (i) isn’t something you can rationally believe, we get:
(ii) O(Bp -> ~B I’m irrational to believe p)
(iii) [O(Bp -> ~B I’m irrational to believe p)] -> [OBp -> O~B I’m irrational to believe p]
(iv) OBp -> O~B I’m irrational to believe p.
(v) OBp [ORBT, (i)]
(vi) O~B I’m irrational to believe p [MP]
(vii) OB I’m irrational to believe p. [ORBT, (i)]
And (vi) and (vii) clash: you’d be objectively required both to have and to lack the very same belief.
I suspect there are other ways of generating trouble given that people’s beliefs about what they ought/oughtn’t believe are fallible, but it’s a bit noisy here so I’ll have to try to work that out later.
Hi Clayton,
Good example. Thanks.
Oh, and thanks for buying the book. A happy festivus to you.
“(WSR) Subjective rationality demands that S be such that she does not intend to X if she believes that she will not X.”
Hi Doug,
I didn’t read the above, but it looks to me like all you need to get the narrow scope conclusion is the modal principle K. K is about as weak as it gets with modal principles, but it can still generate worries. Here’s K.
K. L(p -> q) -> (Lp -> Lq)
Your interpretation of L is ‘demanded by subjective rationality’. So, principle K in your case ensures closure under rational demands. My worry about the use of this principle is that it validates strengthening antecedents. So suppose from the wide scope principle you get to the conclusion that it is rationally demanded that p only if it is rationally demanded that q:
1. Lp -> Lq
The problem is that (1) entails (2), for any r at all.
2. (Lp & r) -> Lq
So, from (1) and any r at all: if Lp and r, then it is rationally demanded that q. That can’t be right. If it is rationally demanded that I not believe I will do x, and r = I learn that I’ll do x, then it is rationally required that I not intend to do x? What you’ll need for the inference, I think, is some conditional that does not validate strengthening antecedents. A subjunctive, maybe.
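To spell out why (1) entails (2), it’s just the propositional logic of the material conditional:
(a) Lp -> Lq [premise (1)]
(b) (Lp & r) -> Lp [conjunction elimination]
(c) (Lp & r) -> Lq [(a), (b), hypothetical syllogism]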