First things first. I want to thank Doug, Dave, Dan, and Josh for inviting me to come on as a contributor. I’m interested in connections between reasons for action and belief. For a while, I’ve been content to argue that at a certain high level of abstraction, we ought to expect similarities between reasons for action and belief. So, for example, if we can show that reasons for action belong in a certain ontological category, it would be surprising if the right account of reasons for belief located those kinds of reasons in an entirely different ontological category. If there’s a gap between reasons and rationality on the practical side, it would be surprising if there were no similar gap on the theoretical side. (Of course, if there’s no gap between reasons and rationality on the theoretical side, we ought to reconsider the suggestion that there’s a gap on the practical side.) You get the idea.
What justification is there for thinking that claims about reasons for action justify claims about reasons for belief? I suppose you might say that the arguments that (purport to) show that there’s a gap between reasons and rationality on the practical side show that there’s nothing to the concepts of normative reason or rationality that requires them to go hand in hand. If someone wishes to defend the view that there’s no gap between reasons and rationality on the theoretical side, the onus would be on them. To paraphrase a remark of John Gibbons’ from a forthcoming paper of his, there’s a built-in explanation of the similarities, since both reasons for belief and reasons for action are reasons.
I’m interested to see if we can establish something stronger than just the claim that there’s a burden of proof on those who wish to insist that reasons for action and belief differ in important ways. I’ve been kicking around an idea for the past few months and thought I’d see what sort of reaction it would receive here. Consider:
Link: If you oughtn’t φ, you oughtn’t believe that you ought to φ or that you may φ.
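In case it helps to see it spelled out, here is one rough way to regiment Link, reading ‘oughtn’t’ as $\mathsf{O}\lnot$ and using $\mathsf{B}$ for belief and $\mathsf{P}$ for ‘may’ (the notation is just my gloss on the principle, and reading the disjunctive belief content as two separate belief prohibitions is one choice among several):

$$\mathsf{O}\lnot\varphi \;\rightarrow\; \big(\mathsf{O}\lnot\mathsf{B}(\mathsf{O}\varphi) \,\land\, \mathsf{O}\lnot\mathsf{B}(\mathsf{P}\varphi)\big)$$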
The basic idea behind Link is simple enough. Suppose you accept some sort of motivational internalism and think that there is a necessary, but defeasible, connection between the judgment that you should φ and the motivation to φ. Suppose further that you accept some sort of cognitivism, so that in judging that you should φ you believe that you should φ. If we combine the two then, bracketing the exception cases that cause trouble for crude formulations of motivational internalism, necessarily, if you believe you should φ, you will thereby be motivated to φ. It seems that something in the neighborhood of Link ought to follow from these metaethical assumptions.
If the theoretical assumptions don’t move you, maybe an example will. (Apologies to Judith Thomson.) Suppose a pilot comes to us with a request for advice: “See, we’re at war with a villainous country called Bad, and my superiors have ordered me to drop some bombs at Placetown in Bad. Now there is a munitions factory at Placetown, but there is a children’s hospital there too. Some people tell me that I should drop the bombs to help with the war effort, but some tell me that we should avoid killing innocents. I am so confused, I just do not know whom to believe.” Now, suppose we say, “Look, given what you have said, it is clear that you should appreciate that dropping the bombs is a necessary evil”. The pilot drops the bombs. The next time we see him we confront him and say, “That was a terrible thing to do!” Confused, the pilot says, “But you told me that dropping the bombs was a necessary evil”. “No”, we say, “We only said that you should believe that you ought to drop the bombs. You never asked us what you should do. That is an entirely different matter.” What a queer performance this would be! Can anyone really think that what the pilot should believe about what he should do depends on considerations other than those that determine whether the pilot should drop the bombs?
Well, apparently some people do believe it. I’m looking at a paper of Richard Feldman’s right now (‘Subjective and Objective Justification in Ethics and Epistemology’) and he defends a view that looks for all the world to be incompatible with Link. On his view, facts that are obscure to an agent can make it all things considered wrong for the agent to perform a given course of action (e.g., the fact that the man approaching is a jogger rather than a mugger means that you should not mace him). However, facts that are obscure to the agent can have no bearing on the permissibility of beliefs such as the belief that the man approaching is a mugger or that you should spray him with mace. But, it looks like Feldman’s view gives precisely this sort of advice: believe that you should spray him but do not spray him with mace. Madness!
There are two ways to bring Feldman’s view in line with Link. First, we might say that facts obscure to an agent can make it wrong to believe certain things (e.g., the fact that the man is a jogger rather than a mugger makes it wrong to believe you should mace him. Since you know that if the man is a mugger you are within your rights to mace him, you should not hold the non-normative belief that the man is a mugger no matter how good your evidence is.) Second, we might say that since the facts that are obscure to an agent have no bearing on the justification of our attitudes and our attitudes are necessarily connected to our actions, these facts might be facts in light of which our actions are unfortunate, but they are not facts in light of which our actions are wrongful. (Since job season is just around the corner, I’m not going to say here which response I believe is correct. I’ll just say that the first one is correct.)
My first question is just this. Is there something in the neighborhood of what I’ve said above that serves as a decent rationale for Link? Is there some objection to Link that I’m missing, apart from the obvious ones (i.e., objections from non-cognitivists, externalists about motivation, etc.)?
I think Link is interesting for a number of reasons. Among them: if you look at the standard rationales offered for adopting externalist views in epistemology (e.g., a view that treats reasons for belief as facts beyond those that strongly supervene on our non-factive mental states, or says that the right to believe similarly depends on such external facts), they really have nothing to do with the role that belief plays in practical deliberation. It seems that Link might serve as the basis of a novel argument for externalism in epistemology: the right way to think about reasons for action and permissible action is in externalist terms, and this requires a parallel externalism in the theoretical domain.
Hi Clayton –
Two things: a) this is really interesting; b) welcome aboard! You probably have a knock-down response to this, but I wonder if the following isn’t a counterexample to Link. Imagine that I’m something like a member of the Three Stooges; whenever I intend to do something, I end up accomplishing the opposite. So whenever I intend to rescue a drowning child, I end up drowning it more quickly. Basically, I’m incompetent to actually accomplish the ends I intend. (And this includes intending not to do things; I end up doing them out of sheer incompetence.) Wouldn’t I be more likely to actually behave rationally if I believe that I don’t have a reason to do something when in fact I do? Or vice versa? All things considered, it seems to me that in this case I ought not to believe that I have reason to x, if in fact I do. Furthermore, this would be compatible with judgment internalism: I intend to accomplish that which I have reason to do, but always fail, and end up behaving irrationally.
One instance of Link (as it currently stands) that is a little dubious is:
If you oughtn’t to drink the glass of arsenic on the table (that all the evidence suggests is water) then you oughtn’t to believe that you ought to drink the glass of arsenic on the table.
You could agree that you oughtn’t to believe that you ought to drink the glass of arsenic on the table (despite the fact that there is plenty of evidence that it is not a glass of arsenic), or deny that you oughtn’t to drink the glass of arsenic on the table (despite the fact that doing so will, after all, kill you), or restrict Link to a claim about the connection between something like subjective reasons for action and epistemic justification (although, depending on how it is done, this last course risks making Link trivial).
Hi Clayton,
Interesting post. What’s your view on pragmatic reasons for belief? Is a pragmatic reason to believe that P a genuine reason to believe that P, or only a genuine reason both to want to believe that P and to intend to do what will cause one to believe that P? It seems that if you accept Link, you have to go with the latter. Consider a revised version of Kavka’s toxin puzzle. Suppose a billionaire will give you a million dollars if, at 11:59 PM tonight, you believe that you ought to drink the toxin tomorrow afternoon. But, of course, if you drink the toxin tomorrow afternoon, you’ll get terribly sick. So you have a good reason not to drink it. And since drinking the toxin doesn’t affect whether or not you get the million dollars, there doesn’t seem to be any good reason to drink it. So if pragmatic reasons are genuine reasons, then, at 11:59 PM tonight, it’s true both that you ought to believe that you ought to drink the toxin tomorrow afternoon and that you ought not to drink the toxin tomorrow afternoon.
Hi Clayton,
I’m a bit dubious about Link for epistemological reasons. I’m assuming that “you ought not to do A” and “you ought to do A or may do A” are contradictory. If so, then Link is a special case of the following:
Superlink: If p, then you oughtn’t believe that not p.
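Spelled out in the notation used earlier (a sketch, taking the contradictoriness assumption to license $\lnot\mathsf{O}\lnot\varphi \equiv (\mathsf{O}\varphi \lor \mathsf{P}\varphi)$; the substitution is my gloss):

$$p \;\rightarrow\; \mathsf{O}\lnot\mathsf{B}(\lnot p)$$

Putting $p = \mathsf{O}\lnot\varphi$ gives $\mathsf{O}\lnot\varphi \rightarrow \mathsf{O}\lnot\mathsf{B}(\mathsf{O}\varphi \lor \mathsf{P}\varphi)$, which is Link, near enough.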
But Superlink is dubious. After all, I might have excellent evidence that not p, even if in fact p is true. (I’m assuming that you’re interested in epistemic rather than pragmatic reasons for belief.) Perhaps we should make exceptions to Superlink for beliefs about what one should do. But of course that would weaken the parallel between practical and theoretical reasons.
I’m sorry it’s taken so long to respond. I’m visiting my family in California and there’s little quiet time in front of the computer. So, from the bottom up…
Allen,
You are right that Link is a special case of Superlink, but I could offer this (disingenuous) response (which serves as the beginning of the sort of response I’d favor). Maybe judgments about what we ought to do are just special. You can adopt a view along the lines of Barbara Herman’s, on which any failure to do what you ought has to be a failure that can be attributed to a bad will. On that sort of view, I don’t think there can be a gap between doing what one ought and acting on one’s best judgment about what one ought to do.
What we’d need to cause trouble for Link is an argument that the moral ought does not depend on the facts about evidence in the way you think the epistemic ought does. But then you face a problem. You’ll have to explain, for example, how there could be such a thing as faultless wrongdoing in the moral domain. You’ll have to explain how it’s possible for a conscientious agent to rely on her rational powers to come to a judgment about what ought to be done in a way that is flawless while failing to do what she ought. You’ll then have to explain why it’s wrong for me to just crib this story and apply it to the theoretical domain. (You’ll note that when pressed for a justification for internalism about the epistemic ‘ought’, the standard responses appeal to connections between obligation, responsibility, fault, etc. that you will have just denied hold in the practical domain, in which case you’ll have to say that it’s the epistemicness of the epistemic ought that explains the impossibility of faultless epistemic wrongs.)
Anyway, my own response is to bite what others take to be a bullet. I’ve never seen a good argument for the claim that epistemic permissibility cannot depend on factors beyond those that supervene on the subject’s evidence. Just as people have come around to the idea that you oughtn’t assert what’s false (even if you have excellent evidence), I think it’s time to take seriously the idea that you oughtn’t believe what’s false. If someone asserts p when ~p but has good evidence, we judge that they are reasonable and responsible. That’s why we excuse them. I’d say the same for belief.
Doug,
My view on pragmatic reasons for belief is, I think, the second one. They’re really reasons to cause yourself to believe and so reasons for actions rather than reasons for the beliefs that result. So, my initial response is simply that the practical reasons you point to are reasons to cause yourself to believe rather than reasons to believe. So, while I agree that you oughtn’t drink the toxin, I think the case is one where you ought only cause yourself to believe that you ought to drink the toxin but not one where you ought or are permitted to believe that you ought to drink the toxin.
It’s an interesting case, however, so I’ll have to give it further thought. I think that an initial reaction to Link might be that the justification for Link requires that there are attitude-related reasons that really do bear on whether to believe. Since I don’t believe there are attitude-related reasons that bear on whether to believe, that might seem like a problem for the view. [Quickly, the thought might be this. I’ve adopted a view on which it’s facts about _epistemic reasons_ that determine whether someone’s beliefs are permissible. It’s facts about non-epistemic reasons that determine whether someone’s actions are permissible. How, then, could the facts in virtue of which an action is wrongful make it wrong to believe that the action is not wrongful? Well, on the best test I know of for distinguishing the right kind of reasons from the wrong ones, a consideration constitutes the right kind of reason if acceptance of it can settle the questions that figure in practical and theoretical deliberation. Acceptance of the very same considerations can settle the questions ‘What should I believe about that?’ and ‘What should I do about that?’, so the same considerations constitute both practical and epistemic reasons in the special case where the belief is a belief about what ought to be done.]
Angus,
I like the example. I think it’s important to separate my treatment of the example from possible treatments of the example that cause us to give up Link. Just to put my cards on the table, suppose I serve you the arsenic but have excellent evidence that it’s gin (sans arsenic) and know that you wanted a gin and tonic. I think my action is wrongful, but I think that when it’s known what my evidence was and what my aims were, people will agree that it’s an instance of excusable wrongdoing. (Why wrongdoing? Suppose you live and suppose I help with your medical expenses. Mere beneficence? I don’t think so. I think it’s a reparative duty, and a reparative duty is a response to a previous wrong of mine.) I’d say the same for the belief that I ought to give you the stuff in the glass.
All I have to say, however, is that the belief about the ought and the actions that I perform ‘sway together’. If I were really, really convinced that the belief couldn’t be impermissible to hold in virtue of misrepresenting how things are, passing off non-reasons as reasons in deliberation, etc., I’d be tempted to describe the action as an instance of right action that is nevertheless unfortunate. I’d say, ‘Look, if the relevant reasons demanded that I not hand the glass to you, I suppose they demanded something else. They would demand that I simply refuse to hand the glass to you in spite of your having asked for it and wrestle it from your grip while saying “I have no idea” when you asked repeatedly why I won’t give you the gin.’ If reasons required such unreasonable behavior from me, they wouldn’t be reasons. Once again, belief and action would be in line.
My view about Link, fwiw, is that it borders on the trivial. I think Link is interesting because it allows us to draw on ethics to do epistemology or draw on epistemology to do ethics. I don’t think Link is interesting because it’s open to serious challenge. I can see the attractions of the two views described above, but not of a view that makes the epistemic stuff subjective while making the practical stuff objective. Just think about how such a view would work. You engage in a bit of deliberation to settle the theoretical question ‘What should I believe (about what I should do)?’ Let’s call the things that figure in this bit of deliberation ‘reasons-1’. You’d engage in a bit of deliberation to deal with the practical matter. Let’s call the things that figure in this bit of deliberation ‘reasons-2’. Now, I’d like to say that from my point of view, some of the very same things figure in both practical and theoretical deliberation. I don’t see how I could do that, however, if we denied Link. Either we mean the same thing by ‘figure’ but different sorts of things figure in practical and theoretical deliberation, or we equivocate when we say ‘Reasons figure in practical reasoning’ and ‘Reasons figure in theoretical reasoning’. Both options seem completely crazy to me. It seems just obvious that the very same considerations are in mind when we try to settle the questions ‘What should I believe?’ and ‘What should I do?’, and it seems odd to think that ‘Reasons figure in theoretical reasoning’ means that mental states that provide representations of certain considerations are involved in reasoning, whereas ‘Reasons figure in practical reasoning’ means that considerations represented by the subject’s mental states are involved in reasoning. My view: reasons-1 and reasons-2 figure in deliberation in the same way and are thus sometimes the exact same thing.
Dale,
Thanks and thanks.
I really like your question, but in spite of your confidence that I’ll be able to give you an answer you’ll like, I’m not sure you’ll be happy with what I’ve come up with.
We need a case in which Moe oughtn’t X but should believe he ought to X, or something like that. And, you think that such cases could be constructed if we add that whatever Moe tries to do, he fails to do and whatever he tries to avoid he causes. Is that the idea?
I have to confess that with someone so cursed, it’s hard for me to know what Moe should do. Suppose that Moe is filling the tub with buckets of water when he sees there’s a baby in it. So, Moe oughtn’t continue to fill the tub. Now, if I understand the example, if Moe tries to save the baby or avoid killing it, it’s now likely that the baby will be killed. So, maybe Moe ought to try to save the baby by inducing the belief that he ought to drown the baby and continue filling the tub, but he’ll fail at that as well! But now I just don’t know what he should do or believe in light of the fact that he oughtn’t drown the baby by continuing to fill the tub.
Maybe the case you had in mind was more like this. Suppose Moe oughtn’t drown the baby and the only way to avoid drowning it (given his essential klutziness) is for him to try to drown the baby, which requires believing he ought to drown the baby. Or something like that. Let’s suppose he’s bad at manipulating his environment in the ways he judges he ought, but good at manipulating himself, so that he can modify his mind (indirectly) to make himself believe he ought to do something so that his deeds lead him to do what he should. Now it seems like the response to give is similar to the response I gave to Doug. Moe has a reason to cause himself to believe X, but it’s not the case that he should believe X. Rather, he should cause himself to be in a mental state such that he oughtn’t be in that mental state. But now I don’t see that it’s a counterexample to Link.
I think I’m getting lost in the details of your case.
Anyway, thanks Doug, Dale, Allen, and Angus for your questions. I’m sure I haven’t answered each of them satisfactorily, so if you get the chance, tell me why. Sorry for the slow response.
Interesting post, Clayton!
I’m totally on your side in thinking that the term ‘reason’ must mean the same thing when we talk about “reasons for action” and “reasons for belief” (and similarly ‘rational’ means the same thing when we talk about “rational action” and “rational belief”), and that in consequence it is prima facie plausible that at some level of abstraction, the same fundamental principles apply to both reasons for action and reasons for belief.
I also think that the principle that you label “Link” is true, at least on a couple of possible readings of what it means. But it may be that my idea of why it’s true is rather different from yours.
I think that there’s compelling linguistic evidence that all the central normative terms (including ‘should’, ‘ought’, ‘reason’, ‘rational’ and ‘justified’) have both an “objective” and a “subjective” sense.
(There may also be a range of intermediate senses of these terms.)
It seems to me implausible that either of these senses would be unavailable either when we are making normative claims about belief or when we are making normative claims about action. So I’m inclined to think that there are both “objective” and “subjective” senses of these terms both as they apply to belief and as they apply to action.
So Link is true, both when it involves an objective reading of the ‘ought’ in both ‘ought to do’ and ‘ought to believe’, and when it involves a subjective reading of the ‘ought’ in both of these occurrences. It is false only if we switch between an objective ‘ought’ and a subjective ‘ought’ between the antecedent and consequent of the principle.
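Schematically, with subscripts marking the two readings (‘o’ for objective, ‘s’ for subjective; the subscripting is a bookkeeping device of this summary, not Ralph’s notation), the uniform readings

$$\mathsf{O}_o\lnot\varphi \rightarrow \mathsf{O}_o\lnot\mathsf{B}(\cdots) \qquad \text{and} \qquad \mathsf{O}_s\lnot\varphi \rightarrow \mathsf{O}_s\lnot\mathsf{B}(\cdots)$$

hold on this proposal, while the mixed readings, $\mathsf{O}_o\lnot\varphi \rightarrow \mathsf{O}_s\lnot\mathsf{B}(\cdots)$ and $\mathsf{O}_s\lnot\varphi \rightarrow \mathsf{O}_o\lnot\mathsf{B}(\cdots)$, are the ones that fail.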
Indeed, if what we in the objective sense “ought not” to believe is just whatever it is not “correct” for us to believe, and what it is not correct for us to believe is whatever is not true, then the objective reading of Link is almost trivial — it just says that if a certain sort of proposition is true, then it is not correct to believe certain propositions that are incompatible with it.
The subjective reading of Link is a bit more tricky. This is because there are cases (like Donald Regan’s “mineshaft” case) where precisely because one lacks adequate justification for any outright belief about what one objectively “ought” to do, one subjectively “ought” to do something that one knows to be (objectively) second-best. Still, these are cases (or so I’d argue) in which one has some justification for some kind of belief about the positive normative status of the act that one subjectively ought to do. So something like Link is true here too.
Clayton,
Dale’s case is just another case of state-given reasons for belief. Moe has very good state-given reasons to believe that he shouldn’t save the child when he in fact should; viz., that by so believing he will be more likely to actually save the child. But those reasons obviously aren’t epistemic reasons. They are reasons for him to get himself to believe that he shouldn’t save the child (or that he doesn’t have reason to, or whatever). So, if you have an answer to the state-given reasons question, then I think you have an answer for Dale. (Fwiw, Andrew Reisner thinks that the standard Parfitian answer (the one that Doug gave and you and I just endorsed) cannot work. See ‘The Possibility of Pragmatic Reasons for Belief and the Wrong Kind of Reasons Problem.’)
Hi Clayton –
The second case was something like what I had in mind. But your response strikes me as odd. You say: “he should cause himself to be in a mental state such that he oughtn’t be in that mental state.” That sounds odd to me. How could it be the case that I have a reason to cause myself to be in mental state x, but no reason to be in that state? I suppose this is just a reflection of my intuition that pragmatic reasons for belief really should take the form of reasons to believe, rather than reasons to get myself to believe, etc. (That doesn’t mean there aren’t also non-pragmatic (i.e., epistemic) reasons not to be in that state; as we all know, reasons compete.) The latter just delivers some strange-sounding verdicts to me.
Dale,
Like Errol, I think yours is a case of what he’s calling ‘state-given reasons’, and while I think there might be such reasons (e.g., there might be state-given reasons for imagining or supposing), I don’t think there are such reasons for belief. So, while it might sound paradoxical to say that you ought to cause yourself to be in such and such a state but you oughtn’t be in that state, it might sound less paradoxical if we say that morally you ought to perform actions that have as their consequence your believing a proposition, while epistemically that proposition oughtn’t be believed. I don’t have a knock-down argument for the thesis that there are no state-given reasons for belief (barring exceptionally odd cases), but here’s the beginning of an answer. First, I don’t think that in a case in which there’s a reason to cause yourself to believe p but you oughtn’t believe p on epistemic grounds we have a conflict of reasons. From the epistemic point of view, there’s nothing to regret and no rational remainder, so to speak, when you don’t believe p. So, it’s missing one of the marks of cases of conflicting reasons. Second, I think there’s something exceptionally odd about the suggestion that there can be reasons for belief awareness of which can never bring you to believe. But reasons to cause yourself to believe that are grounded in considerations that show that there’s something good about the belief that you do not take to be connected to the truth of the belief would be like that. But you’re right, I think, that a proper defense of Link will involve an argument that there are no state-given reasons for belief, and I’ve not really addressed that worry sufficiently.
Errol,
I think I agree with your diagnosis. I haven’t read the Reisner paper you mention, but I’ve been meaning to. I have read one of his pieces in which he argues that if we don’t recognize state-given reasons for belief it might be the end of the world, and I’ve dutifully been hitting myself in the head with a hammer in the hopes that I’ll come to believe he’s right.
Hey Ralph,
I’m glad to see that we seem to be on the same page. The subjective case is tricky and I haven’t figured out quite what to say about the mine shaft case except that it seems to show that the subjective ‘ought’ cannot be understood in terms of things turning out for the best if the subject’s beliefs about the situation were correct.
I’m not yet quite sure what to make of your claim about the different readings of ‘ought’. I have been trying various things to do away with the need for distinguishing between an objective and a subjective reading of ‘ought’, but I’ve checked out a copy of The Nature of Normativity and might come around in the next month or so. It’s always struck me as strange that the literature on epistemic justification and reasons for belief either neglects entirely the connection between the evaluation of belief and the objective ought, or just assumes, as Feldman does, that the objective epistemic ‘ought’ depends on nothing but subjective conditions, even though he acknowledges that this isn’t so for the ‘ought’ that deals with action.
Dear Clayton (et al.),
I read your post with interest. I’m sorry that something I’ve written may be leading you to experiment dangerously with hammers… I’ve been thinking about Link a bit.
One thought that comes to mind is a view like Daniel Star and Stephen Kearns’s view. They think that (normative) reasons for one to phi (some action) are analysable as evidence that one ought to phi. They take what one might call an externalist line on evidence to fit with an externalist picture of reasons.
Link says: If you oughtn’t φ, you oughtn’t believe that you ought to φ or that you may φ.
Here, it looks like the Star/Kearns proposal would be congenial to Link. When you ought not to phi, there won’t be sufficient evidence that you ought to phi, and presumably there won’t be sufficient evidence for you to believe that you ought to phi in the same case.
Although there are things that worry me about the Star/Kearns view, if it’s right, it might help explain Link, or at least a very close cousin. The close cousin would be:
If you oughtn’t to phi, then you oughtn’t to believe that you ought to phi, unless a good deal hangs on the belief. People who think there are state-given reasons for belief can still think that, except under high-stakes circumstances, evidence determines what one ought to believe. Doing a little extrapolating, I think this is the view expressed in Sven Danielsson and Jonas Olson’s recent paper in Mind.
I suspect the degree to which something like the Star/Kearns view is helpful will depend on what account of evidence you plug in.
Interesting post and comments! About the view I have developed with Stephen Kearns: Clayton started out by mentioning that a unified account of reasons seems like an attractive thing to pursue. Stephen and I agree, and promote an analysis of reasons that we take to be both unified and informative (the two papers Andrew refers to are in press, and can be downloaded from my web site). This analysis leaves room for pragmatic reasons for belief, and is thus compatible with examples like those that Dale Dorsey is worried about (above), without the need to construe the reasons in such cases as reasons to bring it about that one believes P (rather than, more simply, reasons to believe P). Thus, Link may be false on our unified account. We don’t need to think that evidence that one ought to believe P is always evidence that P. We say a reason to believe is basically evidence that one ought to believe (and a reason to act is basically evidence that one ought to act). We don’t say a reason to believe is basically evidence that P. So a fact that provides a merely pragmatic reason to believe P is a fact that is evidence that one ought to believe P but is not evidence that P. Any fact that provides a non-pragmatic reason to believe P will also be evidence that P. I am inclined to accept that there are pragmatic reasons for belief, if only in high-stakes cases (as Andrew has suggested). However, it is worth noting that our analysis is also compatible with the rejection of pragmatic reasons for belief (since we could say that a reason for belief is always, rather than just sometimes, evidence that one ought to believe P simply in virtue of being evidence that P).
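In schematic form (one rough rendering of the analysis just stated, with $E(r, q)$ abbreviating ‘$r$ is evidence that $q$’; the abbreviation is mine, not part of the published analysis):

$$r \text{ is a reason for } S \text{ to } \varphi \;\leftrightarrow\; E(r,\; S \text{ ought to } \varphi)$$

A merely pragmatic reason to believe P is then a fact $r$ such that $E(r,\, \text{one ought to believe P}) \land \lnot E(r,\, \text{P})$, while a non-pragmatic (epistemic) reason to believe P satisfies $E(r,\, \text{P})$ as well.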
In response to Ralph, Clayton says he is inclined to think there is only one sense of ought. I am inclined to agree with Clayton about this. A hopefully helpful suggestion: rather than talking of a subjective and an objective ought, one can distinguish between what agents are required to believe or intend to do according to the “requirements of rationality” and what one ought to believe or do (a la John Broome). Given certain beliefs and desires, I may be rationally required to intend to drink from a glass in front of me that appears to contain water, but (surprisingly) actually contains poison; however, it is (intuitively) not the case that I ought to drink from this glass. Regarding the mine shaft case: if there is only one sense of ought, then surely one ought to go down the mine shaft that one knows will be suboptimal from the purely objective perspective; this isn’t just an option that one is rationally required to adopt. Stephen and I discuss the mine shaft case in one of our papers (the one where we compare our own account of reasons with John Broome’s).
Hey Andrew and Daniel,
Sorry it’s taken so long to respond. I went on what was originally to be a quick trip to Portland that morphed into a much less quick trip to San Francisco as well. Not that I’m complaining, it was fantastic.
Andrew,
I’ve only read Star and Kearns’s papers once and don’t have them here. I am not quite sure if I’d agree to this:
When you ought not to phi, there won’t be sufficient evidence that you ought to phi, and presumably there won’t be sufficient evidence for you to believe that you ought to phi in the same case.
I might, but this depends, of course, on what our background judgments are about whether we ought to phi, and also on what ‘sufficiency’ comes to. It seems we can imagine cases where there’s exceptionally good misleading evidence that one ought to phi when one oughtn’t (e.g., a jogger looks like a mugger and a frightened person out on a walk might mace them). Unless we’re willing to say that whether one ought to phi supervenes on the subject’s evidence, we might end up either saying that ‘sufficiency’ cannot be cashed out in epistemic terms or having to adopt an infallibilist view on which ‘sufficient evidence for believing p’ entails p. (Myself, I’m happy to say that whether one ought to believe or act does not supervene on just that individual’s evidence, but while some take that view seriously for action, few do for belief.)
I don’t have much to say about state-given reasons and Link. My worry is that Link will need too many qualifications to be interesting if we recognize state-given reasons, but that’s a worry rather than a view. I’ll have to get a look at Danielsson and Olson’s paper.
Daniel,
I’m glad to see that you’ve chipped in. I was wondering whether your view of reasons as evidence could be brought in line with Link, and I suppose it can. (I think this is good for Link.) Like you, I’m somewhat fond of Broome’s use of normative requirements for dealing with cases some take to motivate distinguishing an objective and subjective ‘ought’. I’ll have to get a look at your treatment of the mineshaft case.
Anyway, I’ve written a few papers that use Link (and Link-like principles) in various places. In one, I’ve argued that externalism about justified/permissible action ought to lead us to adopt externalism about justified/permissible belief. (I’ll defend some of this at the Eastern, where my friend Leo assures me I’m headed for big trouble.) In another, I’ve argued that we ought to reject psychologized accounts of reasons for belief and action. I’m now using it to try to show that evidentialism in epistemology is untenable, and that an argument for it fails because it fails to appreciate the way to respond to the value that attaches to false but evidentially supported belief. I’ve already suggested in one paper that X-philes can use Link to draw on observations concerning moral intuition to test epistemic theories. It seems that if the principle holds, there’s much in epistemology that could be revised if the ethicists are right. (Of course, if this were an epistemology blog, I’d say that epistemologists have lots they could teach the ethicists.)
Dear Clayton,
Sounds fair enough. Your point about evidence seems right to me. However, it seems to me that giving up Link might be as good as revising one’s account of evidence, or of reasons for belief at any rate. I’m trying to write a paper about something related (more or less about normative unity if one rejects something like Link). So, I’ll babble out some random thoughts…
It’s a bit of a tough thing to sort out. Skorupski/Dancy-style epistemic accessibility constraints for theoretical reasons aren’t meant to apply to practical reasons on their views. But otherwise, the reasons are formally similar (e.g., for Skorupski, facts that count in favour, for an agent at a time, to a strength, of believing a proposition). I argued in a paper that one can treat the degrees of strength for evidential and pragmatic reasons for belief in the same kind of way, despite the bounded range of strengths for evidential reasons and the unbounded one (at least at the upper end) for the pragmatic reasons. This suggests that one can treat degrees of strength for practical reasons in the same way (in some sense of ‘the same’, at least) one does for epistemic reasons.
But these similarities have their limits, even before thinking about accessibility. The aggregation of evidential theoretical reasons may well work differently to that of reasons for action (even if one is not a strict evidentialist, the difference in aggregation will at least show up when only evidential considerations are in play). Differences in accessibility constraints might be treated not as a dissimilarity between reasons for belief and reasons for action as far as what it is to be a reason and how that entity is structured. Instead, one might treat accessibility as a sort of side-constraint (perhaps like differences in aggregation; I realise the use of ‘side-constraint’ invokes a technical concept that doesn’t quite fit).
All that said, for what it’s worth, I agree that there are serious problems for evidentialism, even if one rejects pragmatic reasons for belief. I have a paper, ‘Evidentialism and the Numbers Game’, in which I try to show that evidence can’t do the work it needs to in some cases. If Link is true, then it seems better to me to revise our epistemology rather than our ethics. Is the point of method in considering whether to give up Link to search for an argument as to which kinds of intuitions are more reliable: those about the strong commonality of reasons, or those about substantial features of epistemic and practical reasons respectively?
I find myself slightly amazed that so many ethicists (including some of my best friends…) are tempted by the view that there is only one sense of ‘ought’.
They wouldn’t claim that there’s only one sense of the paradigmatic modal terms ‘must’ or ‘may’ or ‘can’, would they? But surely it’s clear that ‘ought’ (and its near-synonym ‘should’) are very closely related to these modal terms.
After all, many languages don’t have any clear distinction between ‘must’ and ‘ought’; and ‘may’ and ‘can’ can clearly be used to express permissibility as well as possibility. Furthermore, everyone accepts that ‘ought’ and ‘should’ have so-called “epistemic” uses (as in “The solution ought to turn blue”), just as all the other modal terms (like ‘might’ and ‘must’ etc.) have “epistemic” uses as well. In addition, in English, ‘should’ (though not ‘ought’) has a “purely modal” use (as in the first line of Rupert Brooke’s sonnet, “If I should die, think only this of me”), and the same is true of the equivalent terms in many other languages (like ‘devoir’ in French, ‘sollen’ in German, etc.).
All of this makes it absolutely overwhelmingly plausible that ‘ought’ and ‘should’ are modal terms, and capable of bearing many different senses, depending on context — just like all other modal terms.
So the claim that ‘ought’ is utterly univocal just seems to me a monumentally implausible claim about the linguistics of English. Nothing but muddle will result from relying on this implausible claim.
Dear Ralph,
I agree: the context sensitivity of auxiliary modal verbs in English (and no doubt many other languages), like ‘ought’ and ‘must’, is, I imagine, uncontroversial. I don’t want to speak for other univocalists about ought, but to the extent that I am one, it’s not a linguistic claim. I don’t see how someone can deny the empirical linguistic facts. I think there’s a particularly interesting use of ‘ought’ that picks out a particularly important concept or property. It’s useful to have a special way of denoting that usage, and so it seems useful to me to regiment the use of ‘ought’ in philosophical usage to pick out that use.
Univocalists shouldn’t, it seems to me, be hung up on arguing about empirical linguistic claims. A single word can change its meaning in context, but presumably what we’re interested in philosophically are properties and concepts. Many philosophers adopt, tacitly or explicitly, a bridging principle that says something to the effect of ‘as goes the word, so goes the concept/property’. But, there’s a good deal of work to be done to show that those links are reliable.
Recognising the context sensitivity of ‘ought’ may be very useful for reminding us to avoid conflating a cluster of concepts, all picked out in English by the same word, with each other. I think what I’m saying is friendly to your view; I’m often confused why people want to deny that auxiliary modal verbs are context sensitive, but I’m also not so sure that there is so much to learn about concepts and properties from words (not that there is nothing, of course). So, I don’t think it’s inconsistent to be a univocalist of a kind about ought (holding that there’s one particularly interesting concept that shouldn’t be conflated with the others), while accepting whatever the linguists say about the way the verb works in English.
Ralph (and following on from Andrew),
Thank you for setting us right about this. That was a bit sloppy. I know (although this was only at the back of my mind when writing my post) that your book contains an excellent discussion of the various senses of ‘ought’. What I should have said is that I am inclined to think that there is only one sense of ‘ought’ that both is of primary importance to agents making practical decisions or justifying decisions after the fact, and is of primary importance to philosophers (I don’t mean to imply that there isn’t anything philosophically interesting to say about other senses of ought).
In particular, I wonder whether the distinction between “subjective” and “objective” senses of ought, which you introduced above and which is popular in contemporary philosophy, is helpful. I’m undecided about this, but am inclined to think it isn’t, for the reasons Timothy Williamson elaborates on in section 4 of his “Knowledge, Context, and the Agent’s Point of View” (available on his website). Actually, Tim focuses on ‘wrong’, but I think the same point can be made about ‘ought’: practical deliberation and justification (after a decision) depends on being able to keep one sense of ‘ought’ (or ‘wrong’) fixed across contexts.
I don’t think ordinary people say things like “well, in one sense of ought you did what you ought to have done, but in another sense you didn’t”. If someone said that to me in a non-philosophical context, I’d probably say something like “Hang on… now tell me, what should I have done?” (i.e., don’t give me two possibilities; just give me one). I think this is something that worries Clayton too, looking back at the beginning of this thread.