Here’s an interesting quote from Stephen Darwall’s entry on “Normativity” in the Routledge Encyclopedia of Philosophy:
On one popular view, morality is normative for action by its very nature, so that to say that an action is wrong is to imply that one ought not to do it. …But it may be that morality is indeed normative, only not for action as the first view supposes. It may be, as Mill proposed, that the concept of moral wrong is tied to the appropriateness of certain sanctions and ‘sanctioning emotions’ such as blame, guilt, indignation and so on…. On this view, morality’s normativity directly concerns not the acts that are said to be right or wrong, but certain reactive emotions and their natural expressions. If this is right, an action could be wrong (and so something one morally ought not to do) without being something one (categorically) ought not to do. Its being morally wrong would consist in its being something that warrants blame and guilt, and that might be true even if there were [sufficient or decisive] reason to do it.
Now I would have thought that morality was normative for both actions
and reactive attitudes, such that an act’s being wrong implies both
that there is decisive reason not to do it and that there is decisive
(or, at least, sufficient) reason to have certain reactive attitudes
(sanctioning attitudes) towards it and its agent. So why should we
think that it has to be one or the other, as Darwall seems to be
suggesting? Perhaps, we would think that morality cannot be normative
for both actions and reactive attitudes if we thought that whether
certain reactive attitudes were appropriate towards a given action was
something entirely independent of whether that action is one that the
agent had a decisive reason to perform. But, arguably, we don’t think
that the two are independent of each other. For instance, I think that
we would be reluctant to wholeheartedly blame someone for doing what we
admit that she had decisive reason to do. Indeed, it seems
inappropriate to blame someone for doing what she had
decisive reason to do–at least, it is if we know this to be the case. This thought–the thought that there is a connection between
the appropriateness of certain reactive attitudes towards an action and the reasons that the agent has to perform it–appears in Sobel’s post “Subjectivism
and Moral Criticism” and can also be found in Shafer-Landau’s Moral Realism. For instance, Shafer-Landau finds it
implausible to suppose that the “proper moral evaluation of an agent
has nothing to do with the agent’s attentiveness to reasons” (p. 193). I agree.
So why think that morality cannot be normative for both actions and reactive attitudes? Any thoughts?
Doug,
I agree entirely with you that it is unintelligible to blame someone for doing something one says he had good (enough) reason to do. My point was that it is possible to hold, as Mill, Brandt, and Gibbard all seem to, that the proposition that an action is wrong entails that it warrants reactive attitudes (lacking adequate excuse) but does not entail that there is reason for the agent not to perform the action. In The Second-Person Standpoint, I try to vindicate the claim you and I agree about by arguing, first, that to blame someone is to address a demand that he not do it and is part of holding him accountable for doing it and, second, that this way of relating to him is incompatible with accepting that he had good reason to do it. (If he could establish that, after all, he would have adequately accounted for himself.) The incoherence comes not from believing that someone might have reason to do what is morally wrong. (Alas, it seems depressingly easy for people to believe that.) The incoherence comes from having the (second-personal, as I term it) attitude of blaming someone while at the same time believing that he had good (enough) reason to do what one is blaming him for.
A quick addendum to clarify my point: I don’t think that rational egoist amoralism is an incoherent or conceptually confused position, or that someone cannot coherently believe that there is reason to do what is morally wrong when it is in one’s own interest. However, I do think that it would be incoherent for someone holding that position to blame those who do wrong in their own interest.
Steve,
Thanks for your thoughtful comments. Now would you agree that it is possible to hold that the proposition that an action is wrong entails both that it warrants reactive attitudes (lacking adequate excuse) and that there is a reason (I would say: a decisive reason) for the agent not to perform that action? It sounds like you don’t hold this position, as you seem to deny what I would call “moral rationalism” (MR): the view that the proposition that S’s doing X would be wrong entails that S has a decisive reason not to do X. But, of course, you can deny the position while allowing that it is a coherent one. So one of my points was just to say that there are more than just the two views you described. One view is that morality might be normative for actions, but not for reactive attitudes. Another is that morality is normative for reactive attitudes, but not for actions. But a third is that morality is normative for both.
Nevertheless, I take your point: “it is [also] possible to hold, as Mill, Brandt, and Gibbard all seem to, that the proposition that an action is wrong entails that it warrants reactive attitudes (lacking adequate excuse) but does not entail that there is reason for the agent not to perform the action.” But I wonder why hold this view, especially given that whether or not an action warrants reactive attitudes (lacking adequate excuse) seems to depend on whether we think that the agent had sufficient reason to do what she did. Of course, I see now that you have an interesting way of accepting both that morality is normative (only?) for reactive attitudes and that whether or not an action warrants reactive attitudes (lacking adequate excuse) depends on whether we think that the agent had sufficient reason to do what she did. And it seems that you can do so without having to accept moral rationalism or that morality is normative for action. So I’m guessing that you don’t find moral rationalism attractive.
I’m also wondering whether anyone has any thoughts about the cases of blameless wrong-doing that Parfit discusses in Reasons and Persons. Do such instances of blameless wrong-doing speak against the notion that an act’s being wrong entails that it warrants certain reactive attitudes (lacking an adequate excuse)? Or are these just cases where the agent has an adequate excuse? The cases of blameless wrong-doing are, if I recall correctly, cases where the agent was psychologically determined to act wrongly as a result of certain psychological dispositions that she had decisive moral reasons to develop.
Doug,
What I’m denying is that ‘X is wrong’ analytically entails that there is reason (for anyone) not to do X. I do agree with the substantive normative thesis that there is always conclusive reason not to do what is wrong. Actually, a central agenda in The Second-Person Standpoint is to argue that any satisfying argument for this claim must appreciate the second-personal character of moral obligation AND then to provide such an argument. What I deny is the conceptual thesis that ‘X is wrong’ analytically entails that there is reason (for anyone) not to do X. If that were a conceptual truth, then rational egoist amoralism would simply be conceptually confused.
About the Parfit cases, I would have thought that they don’t put the analytical entailment between wrongdoing and warranted reactive attitudes (lacking adequate excuse) in jeopardy, since, for example, if the agent had sufficient moral reason to develop the relevant attitude, then that constitutes an adequate excuse.
Steve,
Okay, thanks for clearing that up. For myself, I can’t decide whether I accept the conceptual thesis or only the substantive normative thesis. I’m attracted to the conceptual thesis. But that’s another debate. I’ll be sure to read The Second-Person Standpoint. Maybe if I were convinced that the normative thesis could be adequately secured absent the conceptual one, I wouldn’t be so drawn to the conceptual thesis.
Regarding the Parfit cases: Yes, that sounds right. Thanks.
Doug, you say,
Indeed, it seems inappropriate to blame someone for doing what she had decisive reason to do–at least, it is if we know this to be the case.
Steven D. adds,
The incoherence comes from having the (second-personal, as I term it) attitude of blaming someone while at the same time believing that he had good (enough) reason to do what one is blaming him for.
It’s hard to see why these could not pull apart, especially for a utilitarian. Clearly, S might have decisive reason to do A and S’ might recognize this and, still, S’ might have decisive reason to blame S for doing A. It might be true that if S’ were to blame S for doing what S had most reason to do, S’ would thereby maximize overall utility. Or, it might be that were S’ not to blame S for failing to do what S had little reason to do, S’ would thereby maximize overall utility. But then,
to blame someone is to address a demand that he not do it and is part of holding him accountable for doing it and, second, that this way of relating to him is incompatible with accepting that he had good reason to do it.
How is that incompatible? I recognize that A is what you had most reason to do, but blaming you is something I do. And I might have reasons to blame you for A-ing that do not derive entirely from your reasons for A-ing. Indeed, I might have good moral reasons to blame you for doing A that you did not have for refraining from doing A.
I don’t know. These seem perfectly possible to me. So if utilitarianism is not itself incoherent, then blame seems right for some actions that are not independently wrong.
Mike writes:
“It’s hard to see why these could not pull apart, especially for a utilitarian. Clearly, S might have decisive reason to do A and S’ might recognize this and, still, S’ might have decisive reason to blame S for doing A. It might be true that if S’ were to blame S for doing what S had most reason to do, S’ would thereby maximize overall utility. Or, it might be that were S’ not to blame S for failing to do what S had little reason to do, S’ would thereby maximize overall utility.”
Indeed, that is a version of what Strawson has his “pragmatist” say in “Freedom and Resentment.” Strawson replies, in my view correctly, that the beneficial consequences of blame are a reason “of the wrong sort” to warrant blame. Reasons of the right kind for blame must tend to show that the agent has done something culpable, and the beneficial consequences of blame can no more show culpability than the beneficial consequences of belief can show credibility. This is an instance of the “wrong kind of reason phenomenon” noted in Prichard’s “Does Moral Philosophy Rest on a Mistake?” and in recent work by D’Arms and Jacobson, Rabinowicz and Ronnow-Rasmussen, Hieronymi, and others. Just as it is psychically impossible to believe p for the reason that believing p would be beneficial (that is, for that to be one’s reason for believing p), so also is it psychically impossible to have a reactive (blaming) attitude toward someone for the reason that doing so would be beneficial (that is, for that to be one’s reason for, say, being indignant at someone or blaming him or her “in one’s heart”).
Reasons of the right kind for blame must tend to show that the agent has done something culpable, and the beneficial consequences of blame can no more show culpability than the beneficial consequences of belief can show credibility
But the analogy fails (doesn’t it?) in the case of utilitarianism. Suppose the aim of belief is truth (not benefit). Then perhaps showing that a belief (or the acquisition of a belief) is beneficial (say, as Pascal argued) does not show that the object of belief is credible. I actually don’t believe that, but ok. With utilitarians, I think we can agree that beneficial consequences are the goal. Now suppose there are beneficial consequences of blaming S for A. So the goal that is the basis of blame doesn’t differ from the utilitarian goal. Is S blameworthy for doing A? Wouldn’t the answer have to be yes from a utilitarian viewpoint? Why couldn’t a utilitarian view it as an interesting conceptual consequence of his moral theory that blameworthiness is (in unusual circumstances) unexpectedly forward-looking?
Mike A.,
It seems perfectly analogous to me. If my being in the state of believing that P (a false proposition that I have no evidence for) would have good consequences, then there is a reason for me to want to be in that state and to intend to do what will bring it about, or make it more likely, that I am in that state, but there is no reason for me to believe that P. Likewise, if my being in the state of blaming S (someone who isn’t blameworthy) would have good consequences, then there is a reason to want to be in that state and to intend to do that which will bring it about, or make it more likely, that I am in that state, but there is no reason for me to blame S. This also seems consistent with act-utilitarianism. On my understanding of act-utilitarianism, it implies (assuming that morality is normative for actions) only that there is reason to intend to do that which will bring it about, or make it more likely, that I am in the states of blaming S and believing that P. It does not imply that there is a reason to blame S or believe that P. In other words, act-utilitarianism is normative for the attitude of intending to do, not for the attitudes of blaming and believing.
Actually, I’m not happy with that last line. What I should have said is that act-utilitarians are committed only to the view that whether one has reason to intend to do X is a function of the utility of X’s outcome. Act-utilitarians are not committed to the view that whether one has reason to blame someone or believe some proposition is a function of the utility of the outcomes associated with being in those states.
Likewise, if my being in the state of blaming S (someone who isn’t blameworthy) would have good consequences, then there is a reason to want to be in that state and to intend to do that which will bring it about, or make it more likely, that I am in that state, but there is no reason for me to blame S.
I have no idea how I might be “in the state of blaming S” without actually blaming S. Imagine this denial: “I didn’t actually hit you with the mallet, I was rather in the state of hitting you with the mallet”. “Right, but I still have this bump”.
Act-utilitarians are not committed to the view that whether one has reason to blame someone or believe some proposition is a function of the utility of the outcomes associated with being in those states.
Blaming someone is an action. It might be an action that maximizes overall utility. Since act utilitarianism applies to actions (unless you want to restrict it to some actions and not others) it applies to instances of blaming. Suppose an instance of blaming S for doing A would in fact maximize overall utility in a given situation. Do I have an act utilitarian reason to blame S for doing A? I would say that I have a decisive reason to blame S for A. Presumably, you’d say that I have a reason to intend to blame S for A. I can’t make sense of that. After all there are circumstances in which both (i) and (ii) are true: (i) Simply intending to blame S for A minimizes utility and (ii) Actually blaming S for A maximizes utility. So the fact that blaming S for A would maximize overall utility does not give me reason to simply intend to blame S for A. That intention alone might well make things much worse. How could I have a good utilitarian reason to form that intention?
The disanalogy between the belief case and the blame case is that there are two goals for belief (having true beliefs and having utility-maximizing beliefs). Not (necessarily) so for blame. Steven D. notes that the fact that “believing p” would be utility-maximizing is not relevant to p’s credibility. Or, the fact that believing p would be utility-maximizing does not make p belief-worthy. Similarly, he argued, the fact that blaming S would be utility-maximizing does not make S blameworthy. I denied the latter on behalf of (act) utilitarians (this is where, I’m urging, the analogy breaks down), since nothing other than the fact that blaming S would maximize utility could be morally relevant to S’s blameworthiness (for AU). I noted that it does make the concept of blameworthiness unusually forward-looking. But instead of looking at that as a counterexample, I suggested looking at it as an interesting conceptual consequence of the theory.
Mike A.
You write, “I have no idea how I might be ‘in the state of blaming S’ without actually blaming S.”
I agree that you can’t be in the state of blaming S without actually blaming S, but so what? I wasn’t suggesting that you could be in the state of blaming S without blaming S. What I was suggesting was that you could have reasons to want to be in the state of X-ing and to do what will make it more likely that you’ll be in the state of X-ing without having a reason to X.
You also write, “Blaming someone is an action.”
Is believing that P an action of the sort that act-utilitarianism evaluates? Is desiring that P an action of the sort that act-utilitarianism evaluates? Do you think that act-utilitarianism evaluates non-voluntary mental actions?
Mike,
how do you perform the action of blaming someone? I thought blaming was having a resentful reactive attitude towards someone on account of their actions. Of course, there are various actions one can perform to express such an attitude. But a pure action of blaming?
Doug,
I can see how the reasons for actions that make it more likely that you will be in some state of X-ing and the reasons for X-ing can come apart. But I’m not sure how the reasons for being in the state of X-ing and the reasons for X-ing could. What would be a good example of this? It doesn’t sound plausible that I have a reason to be in a state of playing football without having a reason to play football. However, it may be that I have a reason to prepare to play football (to impress friends, for instance) without having a reason to actually play.
Incidentally, Crisp in his new book is led to an odd position here. He thinks that all theoretical reasons, i.e., evidence, count in favour of being in some state of believing. However, he realises that the formation of a belief, a judgment, is an action. All reasons there are for actions are practical reasons based on maximizing one’s own enjoyment (and occasionally that of others). So, all reasons there are for judging that something is the case are based on the consequences of the judgments for one’s enjoyment. Evidence is not a reason at all to judge that something is the case. So, there’ll be a lot of cases for him where you have good reasons to judge that something is the case but those are not reasons for believing that thing at all – the evidence that favours beliefs may favour something completely different. This sounds like a reductio to me.
Jussi,
Did I say or imply that reasons for being in the state of X-ing and the reasons for X-ing could come apart? Where? What I said (or, at least, what I intended to say) was that one could have reasons for wanting to be in the state of X-ing without having reasons to X. For instance, an evil demon could threaten to punish me if I don’t intrinsically desire a plate of mud. In that case, I have a reason to want to be in the state of intrinsically desiring the plate of mud, but I don’t have a reason to intrinsically desire the plate of mud.
Doug,
true. Sorry, I misread your last message. BTW, about that Ronnow-Rasmussen & Rabinowicz case, I never understood what intrinsically desiring means there. What is the contrast class there? I mean, either you desire to eat mud or you don’t. In this case, it seems that you do have a reason to desire to eat mud, full stop. That’s a state-given reason rather than an object-given one. I know they try to argue that there is no distinction between state-given and object-given reasons by referring to Cambridge properties, but I was never convinced.
I do still agree that the reasons for actions and the reasons for desiring to do those actions can be different.
Jussi,
As I understand the contrast, to intrinsically desire X is to desire X for its own sake, whereas to extrinsically desire X is to desire X as a means. Thus I have an intrinsic desire for more pleasure, but only an extrinsic desire for more money. Clearly, if the evil demon threatened to punish you only if you didn’t desire the plate of mud, then there would be a good reason to extrinsically desire the plate of mud, for doing so would be a means to avoiding punishment. But there would still be no reason to intrinsically desire the plate of mud. You may say, “but if the evil demon threatened to punish you if you don’t intrinsically desire the plate of mud, then you do have a reason to intrinsically desire the plate of mud–only it’s a state-given reason.” But I don’t think that “state-given reasons” are genuine reasons. I’m with Parfit: genuine reasons have to be something that you can directly respond to, and I don’t see how a state-given reason to X could, in any immediate way, get you to X. Rather, if you respond at all, it seems that you will respond to the object-given reasons you have for wanting to be in the state of X-ing and for intending to do that which will make it more likely that you will X.
Here’s Parfit (Climbing the Mountain):
Doug says,
You also write, “Blaming someone is an action.”
Is believing that P an action of the sort that act-utilitarianism evaluates? Is desiring that P an action of the sort that act-utilitarianism evaluates? Do you think that act-utilitarianism evaluates non-voluntary mental actions?
It might be. But what does that have to do with blaming someone?
Jussi says,
how do you perform the action of blaming someone? I thought blaming was having a resentful reactive attitude towards someone on account of their actions. Of course, there are various actions one can perform to express such an attitude. But a pure action of blaming?
Jussi, as far as I can see, the reactive attitude has to be appropriate for the situation or it must tend to be elicited in such situations. I doubt I actually have to have such an attitude in order to blame someone. Certainly someone psychically incapable of responding in the right way emotionally can nonetheless rightly blame.
Doug says,
What I was suggesting was that you could have reasons to want to be in the state of X-ing and to do what will make it more likely that you’ll be in the state of X-ing without having a reason to X.
And what I was suggesting is that if you have reasons R to put yourself in state S, and being in state S entails blaming someone, then prima facie, you have reasons R to blame someone.
Doug,
thanks. Now I’m getting into this stuff yet again. I never found the response requirement plausible, if I even understood what it was. To me, it seems plausible that I have a reason to be healthier or to know how to get home. It seems that there are things that favour being healthier or knowing how to get home. It may be that I need to do some other things first to get into these states, but I don’t see how that makes them any less favoured by reasons. So, I don’t see an argument there against state-given reasons.
I mean there are very few actions we can do directly. To do most things you have to do something else first. I cannot directly respond to the reason I have for getting a PhD in philosophy. I have to do many actions first. But, why would we want to say as a result that I have no such reason at all?
So, in the demon case I think the situation I am in gives me a state-given reason to desire to eat mud for its own sake and not as a means to anything. It may be that I can’t do this directly and that I have to do other things to trick myself into wanting to eat the mud for its own sake. This might be difficult, but it should not be impossible (otherwise I’m dead anyway, whatever the reasons favour in the case).
In addition, about this:
“But I don’t think that “state-given reasons” are genuine reasons. I’m with Parfit: genuine reasons have to be something that you can directly respond to, and I don’t see how a state-given reason to X could, in any immediate way, get you to X”
I don’t think that being able to get you to X can be a requirement for the existence of reasons. At least not for Parfit. That would, after all, be accepting a kind of reasons-internalism that would be even stricter than Williams’s (which Parfit is against, I take it). Being able to directly respond must mean something else.
Mike,
about this:
“Jussi, as far as I can see, the reactive attitude has to be appropriate for the situation or it must tend to be elicited in such situations. I doubt I actually have to have such an attitude in order to blame someone. Certainly someone psychically incapable of responding in the right way emotionally can nonetheless rightly blame.”
So, if I don’t have the blaming reactive attitude towards someone, how do I blame them? If I sulk and say nasty things to someone without having the reactive attitudes at all, isn’t that more like acting as if you were blaming someone than actually blaming them for anything? If that doesn’t count, what would be successful blaming without the attitudes? Can’t I blame someone without doing anything, just by having the reactive attitude? By the way, I don’t think we should necessarily identify that attitude with any sort of phenomenological emotion with a distinct feel. It could be a functional state like many other mental states.
Jussi,
here’s a relevant thing to consider in relation to your remark about reasons-internalism.
In the cases of being healthier and of knowing how to get home, I couldn’t achieve these things simply by wanting to. But in the case of some, say, morally required act, I could often do it simply by wanting to. So, if what is stopping me from doing something I have moral reason to do is my not wanting to do it, then we can plausibly say that I should do this thing, because I would be able to if only I wanted and tried to. In the case of being healthier or of knowing something unknown to me, by contrast, wanting to is not enough. So there is a relevant difference.
No. Williams of course accepts that you can do something if you want to do it. But, in the case that you don’t want to do it, you cannot, according to him, start to want to do it without having prior motivations that would lead you to want to do that kind of thing. If this is true, then in the case that you are claimed to have a reason to do something, but you don’t want to do it and you don’t have other prior motivations that would lead you to want to do it, you cannot respond to the reason you are claimed to have for the action directly by wanting to do that thing. On Parfit’s response requirement, this would imply that you don’t have a reason. But this is in conflict with his arguments against Williams’s internalism. Hey, I see a paper coming. I wish he would publish the thing soon so we could get started.
Jussi,
Remember that, in Parfit’s response requirement—and in any plausible requirement of a similar kind—the requirement is that we must be able to respond to the kind of reason in question if and insofar as we are rational. (That is, if there is some kind of supposed reason which, even though we are rational, we cannot directly respond to—since our awareness of this kind of fact cannot directly and in a reliable way lead to our responding in the relevant way—then, according to the requirement, we don’t have such reasons.) Remember next that, on Parfit’s view, to be rational is to be substantively rational, which means that you must care about the things and aims you have reasons to care about. So, if you are rational, then your awareness of reasons for action will make you want to do these things, since our reasons for acting are given by facts about how we can achieve the aims we have reasons to want to achieve. In the case of not knowing how to get home, or of not being healthy, our being substantively rational is not helpful in the same kind of way. Thus the difference between the kind of response requirement that Williams appeals to and the kind that Parfit appeals to has to do with what kind of rationality—merely procedural in Williams’s case and substantive in Parfit’s case—is assumed. So, I am not so sure that you will be able to argue that the claims of Parfit’s that you are discussing are incoherent. (Sorry for the lack of clarity in this post!)
Jussi,
You write, “I cannot directly respond to the reason I have for getting a PhD in philosophy.”
Strictly speaking, I don’t think that you have a reason to get a Ph.D. Rather, you have, perhaps, a reason to intend to get a Ph.D. And that’s a reason that you can directly respond to, specifically, by intending to get a Ph.D. Since you’re a big fan of Scanlon, I thought that you might be sympathetic to the idea that the things we have reasons for are what Scanlon calls “judgment-sensitive attitudes,” such as desiring, intending, believing, etc.
You also write, “I don’t think that being able to get you to X can be a requirement for the existence of reasons. At least for Parfit. That would be after all accepting the kind reasons-internalism that would be even stricter than Williams’s (which Parfit is against I take it). Being able to directly respond must mean something else.”
Yes, I should have been more careful. Williams would hold that you must be able to get there from your subjective motivational set by a strictly procedural deliberative process. And that’s not what I meant when I suggested that a reason to X must be able to get you to X. I should have said that a reason to X must be able to get you to X provided that you are both procedurally and substantively rational. I didn’t mean to suggest that you had to be able to get to that judgment sensitive attitude derivatively through a process of deliberating on your current beliefs and desires independent of whether or not you are substantively rational.
I don’t think that the fact that I would be better off if I were healthier could count as a genuine reason for my being healthier. I can’t respond to this fact by being healthier. It seems that what it is a reason for is intending to do what will make it more likely that I’ll be healthier. I can respond to this fact by intending just that. Reasons have to be normative for something. And if I cannot respond to the reason, then I don’t see how it could be normative.
Mike,
You write, “And what I was suggesting is that if you have reasons R to put yourself in state S, and being in state S entails blaming someone, then prima facie, you have reasons R to blame someone.”
Okay, I see. Suppose, though, that an evil demon has threatened to kill me if I don’t believe that 2+2=5. And suppose that I can take a pill that will induce this belief in me. It seems to me that I have a reason to intend to take the pill but no reason to believe that 2+2=5. In defense of this, I would just appeal to Parfit’s response requirement and what I said to Jussi above.
You also ask: what does the stuff about believing and desiring being non-voluntary mental actions have to do with blaming?
Well, it seems to me that blaming is a non-voluntary mental act. As Jussi puts it, blaming is “having a resentful reactive attitude towards someone on account of their actions.” There are many voluntary actions that we can perform to express our attitude, but the attitude itself seems to be just as non-voluntary as the attitudes of desiring and believing.
Doug,
I don’t have a reason to get a PhD. Good to know. Why would I be here then? I think I’m off home. I’m sure my supervisors will be happy about this too. If Brad asks, I’ll tell him you said so 😉
That is one thing I’m not sure about in Scanlon. I do think that there are reasons for actions, full stop. For there to be a reason for something is for that thing to be normatively favoured, and some actions are. It’s too much of a revision of our language to say that what we really mean when we talk about reasons for actions are reasons for intentions. Philosophy should leave everything as it is. It’s just too natural to talk about reasons for actions. In fact, I never think about the reasons I have for intending to do things. This is just what Blackburn calls the basic mistake. We don’t usually think about our attitudes but about the world and actions. I think about the reasons for actions, and my intentions hopefully follow my conclusions about them.
Doug and Sven,
I think you both take the same line towards the threat of collapsing into reasons-internalism, namely:
“a reason to X must be able to get you to X provided that you are both procedurally and substantively rational”
This is fine, but then it seems that we are back with state-given reasons. Return to the evil demon’s threat and the state-given reason it provides to desire to eat the mud for its own sake. I don’t see why I could not respond to this reason if I am substantively rational. As Sven puts it, substantive rationality is defined by caring about what I have reason to care about. In this situation, I have a reason to care about the mud for its own sake. If I am substantively rational, then this is what I do. All the case then shows is that not many of us can be substantively rational in this case, because it is hard to care about the mud for its own sake even when one knows one has reason to do so.
Jussi,
thanks for this reply.
Consider another example: namely, Parfit’s torture example. I am not so sure that some substantively rational person could want, as an end, to be tortured, even if that is what the evil despot wants her to want. It is part, I think, of being rational in this way to care about one’s future well-being. This seems to mean that, to be substantively rational in the torture example, what I should want is that I have this desire, since this is how best to secure my future well-being. Since the despot wants me to want this for its own sake, I am rational in this case if I want, and try, to bring it about that I have such a telic desire for torture. To say that I have a reason to want, for its own sake, to be tortured in this case seems to me to mistakenly conflate reasons for first-order desires and reasons for second-order desires. We can, without making any great mistake, say that we here have a reason to want to be tortured, but this is in one way misleading, since we have no telic reason to want to be tortured.
Also, about reasons for acting. Since we respond to such reasons by wanting, intending, or trying to act, all of which are mental states, our claim should be that such reasons—that is, reasons for mental states—are fundamental, or most important. These are the reasons that, if we are rational, we can respond to whether or not the world, as we might put it, plays along. Whether we succeed in acting on our rational desires to act in certain ways often depends, not on whether or not we are sensitive to reasons, but rather on whether we are sufficiently and relevantly skilled, on whether we are lucky, and/or on whether no external forces hinder us. Whether we succeed in responding to reasons for mental states when we are aware of the facts that give us these reasons does not, in the same way, seem to depend on luck, but is rather determined by our degree of rationality. Or so it seems to me.
Also, your getting a PhD seems to consist in your performing a number of actions which, if nothing gets in the way, earn you a PhD. Since your getting a PhD consists in your doing these things, it seems that saying that you have reason to get a PhD is just saying that you have reason to do these things. Sorry for the double post, but I forgot to add this point to my last post.
Sven,
I don’t get the first point. You defined the substantively rational person as someone who cares about what she has reason to care about. Now you deny that she could care about being tortured. That assumes that she does not have reason to want to be tortured. However, that was the conclusion we wanted, so it can’t be the assumption the argument is based on. And it looks like the agent has a perfect reason to want to be tortured. And if she is rational and does what she has reason to do, then she desires to be tortured. I just don’t see the problem.
I think much of the confusion is with the ambiguity of desiring something for its own sake, which Stratton-Lake clears up well. Desiring something for its own sake is just to have a certain attitude, desire [X for its X-ness], instead of desire [X for the sake of getting Y]. The reason for being in the state desire [X] that gets you to that state can be something other than X (Y, for instance) without it being the case that the acquired state is desire [X for the sake of Y]. If everything goes well, you can still have a desire [X for its X-ness] even though your reason for getting into this state was something else.
I’m not sure I see the second point either. Why isn’t how rational you are just as much a matter of luck too? I mean, things like tiredness, alcohol, depression and so on can influence whether you are able to acquire the attitudes you judge you have reason to have just as much as external things can affect whether you are able to do what you judge you have reason to do. I don’t see the difference. I also don’t see why having a reason would require infallible control over responding to that reason.
Okay, I see. Suppose, though, that an evil demon has threatened to kill me if I don’t believe that 2+2=5. And suppose that I can take a pill that will induce this belief in me. It seems to me that I have a reason to intend to take the pill but no reason to believe that 2+2=5
Doug, there are so many words above, I’m lucky to have seen this! The question about what gives one reason to believe p, for any given p, is vexed. I’m inclined to disagree with your conclusion that you have no reason to believe p in the case you describe. You do not have a reason that is evidential, but you do have a reason that is pragmatic. So I’m inclined to believe that you can have good pragmatic reasons (and therefore, in my view, good reasons) to believe (or to dispose yourself to believe) propositions for which you don’t have a lot of evidence. I think you have good reasons to believe, for instance, that you will survive your upcoming surgery, though chances are you won’t, since such a belief improves your chances of surviving. I believe (this is an example from Rich Feldman) that you have good reasons to believe the proposition: I will get a hit every time I bat. Cultivating this belief will increase your chances of getting hits. Of course I agree that pragmatic reasons for p are not in general evidential reasons for p. But it is also true that I can increase the probability that p is true (i.e., I can increase the evidence for p) by cultivating a belief in p that is not based on the evidence for it.
But I was trying to distinguish the belief case from the blame case. But this post has already gotten too long.
Jussi, you write,
So, if I don’t have the blaming reactive attitude towards someone, how do I blame them?
That the situation elicits the proper response (understood emotionally or functionally—say, by your feet inflating, as Lewis suggests somewhere in ‘Martian Pain’) is not, I think, a necessary condition for any particular instance of blaming to succeed. Someone could correctly inform me that it is appropriate for me to blame S for doing A, even if prima facie (or secunda facie) I’m not inclined to do so. I could well be mistaken, and I could well blame S for A after taking that advice. We offer such advice all the time, I think, to people who do not respond appropriately to others who are obviously taking advantage of them. They too can appropriately blame, I’m sure, despite their unhealthy attitudes.
But getting back to what I think was the point: even if (contrary to what I think) blaming is necessarily partly reactive, I still choose whether to expressly blame or not. My point is the same: the only morally relevant reason for an AU-ian to expressly blame S for A is the utility forthcoming from that action. More generally, the only morally relevant reason to cultivate a blaming response R (for an AU-ian) is the utility of R.
Jussi,
The first point, I think, can helpfully be made in a Moorean way. That is, by making use of ideas about how, in order to count as relevantly rational, we must be drawn to what is good. Here goes.
Someone, we could say, is substantively rational if and insofar as she cares about things that are good. Her not being tortured is good, and so is anything that advances her future well-being. So to be substantively rational we should want not to be tortured and whatever would advance our well-being, which are both good things. Now, in the despot example, our wanting, as an end, to be tortured is a good thing, since this advances well-being. So, given that substantively rational people care about what is good, what we would want in this case, if we are substantively rational, is to have a telic desire to be tortured, which would be something that would promote our well-being, and thus be good.
Our being tortured is not good, so we have no reason to want this, and a substantively rational person would not, since she cares about what is good, have a telic desire to be tortured, but would want and try to have such a desire. So, if she were to have that desire, then it would need to be the result of her having succeeded in bringing this about if she is to count as substantively rational.
Translating this back to ‘Scanlonian’, we get that, in the torture example, there is no reason to want, as an end, to be tortured, although there is reason, in this extraordinary circumstance, to want to have such a desire. Again, our having that desire would be good, but our being tortured would in this case not even be instrumentally good. As before, we must not conflate reasons for first order desires with any reasons we might have to want to have certain first-order desires. I was surprised, actually, that Stratton-Lake claimed that we had these supposed state-given reasons.
I may return later to your objection to the second point I made. For now I will only say that I don’t see why it would make any difference to the point I tried to make whether or not our being sensitive to reasons in the first place is mostly a matter of luck.
Mike,
You can only *express* blame if you blame someone and that seems to be an attitude. In the case you give you talk about saying that blaming is appropriate. Saying that is not blaming though.
The AU you give seems to be an account of when it is justified to act as if one would blame someone. Utilitarian criterion may be somewhat plausible for that – but that doesn’t count as a theory of blameworthiness. Rather only as ‘blameacting-worthiness’.
Sven,
I still don’t follow. You write first that
‘Now, in the despot example, our wanting, as an end, to be tortured is a good thing, since this advances well-being.’
Then you write that:
‘Our being tortured is not good, so we have no reason to want this, and a substantively rational person would not, since she cares about what is good, have a telic desire to be tortured, but would want and try to have such a desire.’
Yes, being tortured is not good but it does not follow that we do not have a reason to want to be tortured. In fact, you give a perfectly good reason in the first quote. Wanting it is good.
Jussi,
While you seem to take it for granted that, if our having some attitude would be good, then we have reason to have this attitude, I myself feel no inclination to believe this. My sense, as I have said above, is that the pragmatic view which you suggest is based on a mistaken conflation of reasons for first-order attitudes and reasons to have certain second-order attitudes: attitudes about our own actual, or possible, present or future attitudes.
Consider these two possible chains of events:
(1) I come to have a telic desire to be tortured and am, therefore, not tortured by the evil despot
(2) the war ends, and the killing of innocent civilians ceases
I think that these two possible ways in which things could go are alike in how I have reason to want things to go in these ways. And, these are both good things. My having some desire is just like some war’s ending. It is a possible event which, because of its consequences, I have reason to want, and to try, to achieve. What may be confusing here is that, while wars cannot have reasons to end, we can have reasons to have desires. But, this should not lead us to conclude that, for this reason, we have reason to want to be tortured in this case. My having this desire is, in this case, more like the ending of some war. It is a possible event which, because it has desirable consequences, I have reason to want to achieve.
As these remarks suggest, it seems to me that the kind of pragmatism that you refer to rests on an odd, and mistaken, view about what it is to have reasons to have desires.
Here is another argument for the same conclusion. If we say that, in cases like the despot example, we have reason to want these good things, then we seem to be using ‘we have reason to x’ to mean the same thing as ‘it would be good if x’. But, these claims clearly seem to have different meanings. Therefore, in these examples, we don’t have reasons to have these desires which it would be good if we had.
Here is yet another argument. Our supposed state-given reasons to want to be tortured would be self-interested reasons. We have some self-interested reason to want something if this thing would (i) promote our well-being, or (ii) be one of the things our well-being consists in. Our being tortured would neither promote our well-being, nor be one of the things our well-being consists in. Therefore, we have no self-interested reason to want to be tortured. Therefore, since our supposed state-given reasons to want to be tortured in the despot example would have to be self-interested, we have no reason to want to be tortured. But, our having a telic desire to be tortured would, in this case, promote our well-being. Therefore, we have self-interested reason to want to have this telic desire to be tortured.
Lastly, and as before, thanks for responding to, and challenging, my posted arguments.
The AU you give seems to be an account of when it is justified to act as if one would blame someone. Utilitarian criterion may be somewhat plausible for that – but that doesn’t count as a theory of blameworthiness. Rather only as ‘blameacting-worthiness’.
It does, for AU. Indeed it is the only thing that could so count. As I said above,
“More generally, the only morally relevant reason to cultivate a blaming response R (for an AU-ian) is the utility of R.”
Further you say,
You can only *express* blame if you blame someone and that seems to be an attitude.
This seems mistaken on several counts. My expression might well constitute my blaming (or a part of my blaming). Expressing p is not in general reporting p: when I express blame I am not reporting a blaming state. Further, I can certainly refuse to blame someone who deserves blame (some of the most painfully sanctimonious Pecksniffs go unscathed!) and whom I recognize deserves blame. I have no idea where the idea comes from that blame is purely involuntary, if that is what you’re suggesting. A priori, blaming is certainly not remotely involuntary.
Mike,
what I was trying to say is that it is the attitude we have that *constitutes* blaming. This attitude is *expressed* with different kinds of linguistic and non-linguistic actions. These actions definitely do not report your attitude. Furthermore, *expression* is a factive term – you can only express an attitude you have. If you don’t have the attitude of blaming, then the actions that usually would express blame do not do so. In that case you are only acting as if you were blaming someone. And, yes, you can withhold your attitude even in the case that you realise that the attitude would be appropriate given the norms that guide who is to blame.
But AU must think that blaming is something completely different – maybe only the actions that I take to be merely expressive of the real blame. I think that is implausible. You can perform the actions without blaming someone. And you still haven’t given an account of what blaming is as an action.
Jussi,
Furthermore, *expression* is a factive term – you can only express an attitude you have. If you don’t have the attitude of blaming, then the actions that usually would express blame do not do so.
Perhaps this is where we disagree. What I’ve been trying to say is that the expression of blame, too, is (at least partly) constitutive of blaming. I don’t blame anyone unless I express that blame: and I can choose not to blame someone who has it coming. Incidentally, expression is probably not factive. All sorts of “feelings” get expressed on stage by people who are simply acting: these are not (and certainly need not be) the genuine feelings of the actors. The racial loathing expressed by the last person to play Hitler was (I hope) not his genuine feeling.
Mike,
I think you are right that this is where the disagreement lies. I don’t see how the expression of X could even in part constitute X itself. I do think you can blame someone without expressing it. I think I do it all the time. I don’t think actors do express their feelings on stage – well, sometimes they do. We as spectators interpret their movements as expressions of the feelings of their character. But the idea that there is a fictional character who has the real attitudes is itself part of the make-believe we are engaged in.