The following is conceivable:  the features that make an action right are not the features which one ought to attend to when reasoning about whether to perform the action.  In consequentialist lingo (I think I’m getting this correct), what’s right-making is not necessarily a blueprint for a decision procedure.  For example, it might be best, on utilitarian grounds, if everyone was a Commonsense Moralist and never even attempted to maximize general utility.  This divergence is possible whenever one’s theory of right-making is consequentialist of whatever stripe, and maybe even more often than that.

With that as background, I am wondering what a “reason for acting” is.  Most definitions of reasons that I know of say they are something like “considerations counting in favor,” so a reason for acting in a certain way (a reason for the action) is a consideration which counts in favor of acting in a certain way (the action).  My question is this:  are these “considerations” the considerations that figure in the right-making theory, or the considerations that figure in the decision procedure?

Option 1:  reasons for action are right-makers for the action.  In that case, it is possible that, when reasoning practically, one never thinks about the reasons for action one has.  And when one does act rightly, one will not be acting on the reasons for the action.  Indeed, it would be possible to never act for good reasons at all, and still act rightly all the time.  Bizarre.

Option 2:  reasons for action are premises in decision procedures for the action.  Then sometimes (when the decision procedure is imperfect) you have most reason to do what’s wrong to do, and no reason to do what’s right to do.  That’s peculiar at least.  Further, since what the best decision procedure is may depend heavily on contingent features of human psychology or society, what is a reason for doing X is contingent on features of human psychology or society.  Moreover, reasons for action could change as human psychologies or societies change, and we might need to relativize them to cultures or even individuals, even if we are not relativists about the right-making features.  Practical rationality (and probably theoretical rationality) becomes an empirical matter.  I’m not totally averse to this, but it’s odd.

Option 3:  we should distinguish between reasons for performing an action, which are right-makers, and reasons for deciding to perform an action, which are premises in a decision procedure.  (In effect, the decision procedure for actions becomes a right-making theory for decisions to act, and reasons are fundamentally right-makers.)  It would be odd if the answers to “Why should I do X?” and “Why should I decide to do X?” diverge, but maybe this is the best option.

Option 4:  there is something incoherent about this whole situation, which implies or entails that allowing right-making considerations to diverge from decision-procedure considerations is a bad idea.  Reasons have to be “public” in the sense that these are not allowed to diverge.  I have (cautious) sympathies in this direction too.

I’m curious what others think. 

17 Replies to “What is a reason for action?”

  1. For what my two cents are worth, I think that we would argue that a person should do only those things that they have a right to do. Having a right to do something means that I have good reasons for doing that thing (action); I can explain and justify my action based on reasons that I think reasonable people will accept.
    An example might be that I will give money to feed ‘needy’ people at Thanksgiving because I think that I should perform actions that reduce, if not completely eliminate, harm if possible (all other things being equal).
    The person who is being fed would continue to be harmed if not fed, so my reason for acting overlaps with the empirical situation in that the wrongness (harm) of being hungry coincides with my reason (expressed negatively or positively) for feeding that person.
    So, I think I agree with option 3.

  2. I tried to argue for understanding reasons as truth-makers and rationality as an assessment of decision-procedures in “Subjective Accounts of Reasons for Action” Ethics 2001. It is surprising such questions do not get more play.

  3. I’m sort of on the other side from David Sobel, in that I think that a reason has to be the sort of thing that a person can act on so that when the agent acts on the reason the reason justifies their action. The long thread here about the buck passing account a few weeks back contained some of my reasons for liking this idea (as well as reasons offered by others that I also agree with). And I have some related arguments in a paper on internalism that was in Phil Quarterly around 2000, responding to Robert Johnson’s nice conditional fallacy paper. The basic idea is really Williams’ thought that reasons need to be able to both explain and justify actions.
    I’m off to the Central meetings today where I’ll see several of you so I don’t really have time for a sustained defense of my thoughts on this.

  4. “Indeed, it would be possible to never act for good reasons at all, and still act rightly all the time.”
    This isn’t true if a “good reason” is a reason that leads one to act rightly.
    “Moreover, reasons for action could change as human psychologies or societies change, and we might need to relativize them to cultures or even individuals.”
    Just because reasons for action can differ between individuals doesn’t imply relativization, right? I like white wine, you don’t. Though the desire motivates me and not you, we needn’t relativize “someone likes white wine” unless we’re stipulating reasons for actions are to be understood as reasons “possessed” for action. But then relativization turns out to be a trivial requirement.
    “It would be odd if the answers to “Why should I do X?” and “Why should I decide to do X?” diverge.”
    Why do they diverge?
    I’m not sure I see the divergence yet. Suppose the correct moral view is one according to which the act one ought to perform is the act available that causes the outcome with the greatest value, where this outcome’s value is determined by one’s subjective probability that the outcome will be caused by the act, multiplied by the value of the outcome. I like this view. Morality is just a species of decision theory (not restricted to preferences). On this view, reasons for acting (beliefs about value and likelihoods) don’t diverge from reasons that generate right-making properties.

  5. Heath,
    There’s an additional possibility, or it seems so, one that combines 1 & 2. Reasons could be just those right-making features of action that can figure in decision procedures.

  6. I have a number of questions. Here are just a few:
    (a) When you use the word ‘right’ are you using it to mean ‘morally permissible’, ‘rational’, ‘in accordance with reason’, or what? It seems to me that you equivocate in your use of the word ‘right’.
    (b) When you use the word ‘reasons’ are you referring to normative reasons or motivating/operative reasons?
    (c) Are options 1-4 supposed to be options for everyone or just options for those who hold that “what’s right-making is not necessarily a blueprint for a decision procedure”?
    (d) Are you assuming that “the features that make an action right” constitute decisive reasons for performing that action? Why can’t someone hold that what makes an act right is that it maximizes utility, but that only facts about what would fulfill one’s present desires constitute normative reasons for action?
    (e) Why do you think that it would be “bizarre” if “it would be possible to never act for good reasons at all, and still act rightly all the time”? I think that people often act both in accordance with what morality requires and in accordance with what they have sufficient reason to do as the result of some motivating reason that doesn’t constitute a good normative reason for them acting that way.
    (f) When you said that, on Option 1, “when one does act rightly, one will not be acting on the reasons for the action,” did you mean to say, “On Option 1, it is possible that, when one does act rightly, one will not be acting on the reasons for the action”?

  7. Doug,
    I hope the following helps.

    (a) When you use the word ‘right’ are you using it to mean ‘morally permissible’, ‘rational’, ‘in accordance with reason’, or what? It seems to me that you equivocate in your use of the word ‘right’.

    I am trying to use the word ‘right’ the way everybody does. ‘Morally obligatory’ will do as a translation. But the problem arises anytime that the considerations that make an action right (whatever that means) are different from the considerations that someone using the best decision procedure would consider.

    (b) When you use the word ‘reasons’ are you referring to normative reasons or motivating/operative reasons?

    Normative reasons.

    (c) Are options 1-4 supposed to be options for everyone or just options for those who hold that “what’s right-making is not necessarily a blueprint for a decision procedure”?

    The issue doesn’t arise if what’s right-making is a blueprint for the decision procedure.

    (d) Are you assuming that “the features that make an action right” constitute decisive reasons for performing that action? Why can’t someone hold that what makes an act right is that it maximizes utility, but that only facts about what would fulfill one’s present desires constitute normative reasons for action?

    I’m trying not to assume anything about reasons for action, since that’s the puzzle. I am assuming that getting to the right action is the goal of a decision procedure, and the reason that a decision procedure might diverge from a theory of the right is that reasoning on the basis of what makes an action right could be an inefficient method of getting to that goal. AFAICT, if someone held the combination of theories you suggest, I would be very puzzled what role they had for the concept of ‘right’.

    (e) Why do you think that it would be “bizarre” if “it would be possible to never act for good reasons at all, and still act rightly all the time”? I think that people often act both in accordance with what morality requires and in accordance with what they have sufficient reason to do as the result of some motivating reason that doesn’t constitute a good normative reason for them acting that way.

    I agree that people do this all the time—they do the right thing (and what the best decision procedure would yield) because they want others to like them or because their Mom told them to or whatever. The bizarre picture is the one where you are as practically rational as can be, following the best decision procedure all the time, and yet you never wind up acting for a good reason. The considerations which figure in your reasoning, and in terms of which you would explain your action, are never good reasons.

    (f) When you said that, on Option 1, “when one does act rightly, one will not be acting on the reasons for the action,” did you mean to say, “On Option 1, it is possible that, when one does act rightly, one will not be acting on the reasons for the action”?

    When one does act rightly, one will not be acting on the reasons for the action, in the cases where the decision procedure that recommends the right action does not use, as premises or inputs, the right-making features of the action. This may happen only sometimes; so, yes, “it is possible that…”.

  8. Heath,
    It seems to me that you’re assuming that the facts that constitute reasons for action have to be (a) the facts that make acts morally obligatory, (b) the facts that help us decide to do what is most likely morally obligatory, or (c) the facts that do both. It seems to me, though, that they are often none of the above. The fact that drinking a beer would make me feel better is a reason for me to do so, but it’s not (a), it’s not (b), and it’s not (c).

  9. Oh, the answer to Doug’s question shows that I did not understand the original post. I should retract my claim to have a view about the matter.

  10. Two cents:
    Many philosophers accept what I call the ‘deliberative constraint’: they hold that a normative reason for action has to be the kind of thing that it makes sense to pay attention to (take as a premise) in non-enthymematic deliberation about what to do. I think Steve Darwall in Impartial Reason is a good example.
    It is also natural to think that your normative reasons go together to explain what you ought to do. This view is shared by those who think that what you ought to do is just a matter of the weight of your reasons when put together (for example, me), as well as those who think that a normative reason is just whatever plays the right sort of role in explaining what you ought to do (for example, Broome and Stephen Toulmin).
    There’s nothing inconsistent about these two views; nor do they raise any difficulties for any particular moral theory. The problems come in if you bring in a third view: that anything which helps to explain what you ought to do must be a normative reason for you to do it. This view, together with the second view, that all normative reasons play a role in explaining what you ought to do, entails that nothing can play a role in explaining what one of your reasons is, without itself being a reason. This is what I call the no background conditions view, and it is advocated by, for example, Nagel in The Possibility of Altruism, but also by Crisp and Raz and others.
    Here’s the problem: if everything that helps to explain what you ought to do is a normative reason for you to do it, and normative reasons are the kinds of thing that you pay attention to in good, non-enthymematic deliberation about what to do, then from any explanatory moral theory it will follow that agents ought to pay attention to the features cited in the explanations provided by the theory.
    But most – all, I would argue – explanatory moral theories cite features that it makes sense to think agents shouldn’t be thinking about in their deliberations – even when they are deliberating well and non-enthymematically. It is not plausible that good and non-enthymematic reasoning should always involve thinking about the value of results, as would follow from consequentialism; it is not plausible that it should always involve thinking about what you desire, as a Humean theory of reasons would predict; nor is it plausible that it should always involve thinking about what others could reasonably object to, as would follow from some kind of contractualism. Cut it any way you like, I think every brand of explanatory moral theory is subject to some version of the “Objectionable Reasoning” objection (though some may clearly be worse than others).
    How do you get out of this problem? As I understand Railton’s response on behalf of consequentialists, he denies the deliberative constraint: complete and non-enthymematic reasoning may proceed without thinking about your normative reasons. I say: reject the “no background conditions” view. It is highly plausible that your reasons explain or determine what you ought to do, but it is not equally plausible that everything that plays a role in explaining this is a reason.
    In particular, I hold that there may be background conditions on reasons: just as someone has to satisfy certain conditions – such as being inaugurated – to be president of the United States, a consideration has to satisfy certain conditions in order to be a normative reason, and those conditions are no more normative reasons themselves than the fact that George Bush was nominated is a president of the United States. If that’s true, then some things that serve in the explanation of reasons, and hence of what you ought to do, are not themselves reasons, and hence even if the deliberative constraint is true, you need not pay attention to them even in good, complete, and non-enthymematic deliberation.
    So I’m with Robert. Reasons are considerations that play a role in explaining what you ought to do, but not all considerations that play a role in explaining what you ought to do are reasons – some are background conditions on reasons, and you need not pay attention to them in deliberating or incorporate consideration of them into your “decision procedure”.
    I discuss all of these things in detail Chapters 2 and 7 of Slaves of the Passions, forthcoming from OUP. I agree that they’re terribly important questions, because these different background assumptions about how reasons are related to explanation of what we ought to do and to good deliberation can pull in very different directions. Again, I think one of the best illustrations of this is given by Darwall’s Impartial Reason, in his discussion of Nagel’s treatment of subjective (agent-relative) reasons – which is well-worth studying.

  11. Heath,
    I just want to chime in with Mark’s comments. There may be right-making features of actions that cannot be reasons, say, because they’re too complex for agents such as ourselves to comprehend. But you can look at reasons as that set of right-making features that are accessible to rational agents such as us. That would mean that rationalism of a certain sort is false, viz., the sort that holds that every right-making feature of an action is a reason to perform it.

  12. This is a great post, Heath – and a terrific series of comments following it too!
    Mark Schroeder’s point is certainly elegant and perceptive, but I don’t think that it really constitutes a fully adequate response to Heath’s original point.
    First, it seems to me that there is no reason to assume that all the considerations that will be attended to by someone who is following a reasonable decision procedure will be “right-making reasons” of any kind. Suppose that you have reasonable but false beliefs about what is right. Then surely it could be that you are following a perfectly reasonable decision procedure, but the features that you are attending to aren’t right-making reasons at all!
    Secondly, it also seems to me that there is no reason to assume that every “right-making reason” must be attended to by any reasonable decision procedure. Suppose that even though the right thing for you to do is to do A, you are not in a position to know this. (We must accept that this is possible unless we accept some very strong view to the effect that facts about what it is right to do are necessarily “luminous” or “transparent” or “self-intimating” or the like.) Then it seems to me that there will be some “right-making reason” which you are not in a position to know about. So, it seems to me, there is no reasonable decision procedure that you could follow that would involve attending to this reason.
    In general, the solution that I would advocate to Heath’s puzzle is that we have to recognize that there are two different kinds of normative reasons:

    • There are the normative reasons that make actions right (which may or may not include all the factors that explain why those actions are right);
    • There are the normative reasons that serve as the starting points for reasonable decision procedures.

    I remain deeply sceptical of any approach, like Mark’s, that seeks to identify these two different kinds of normative reason.

  13. Heath,
    I’m coming to the party a little late, but I wanted to say a few things about the first view (a view that is near and dear to me).
    The first is a minor point, but I don’t think that it is a clear implication of the view that, “when one does act rightly, one will not be acting on the reasons for the action”. If you have a view like Jackson’s decision-theoretic consequentialism, you have a view on which it seems someone could do the right thing without acting from the reasons that make the action right. Nevertheless, the reasons that make an action right are internal to the agent’s perspective on the situation and I can’t see why he’d have to say that the reasons that make an action right are not reasons we can act from. Even if you had an account of right action on which the rightness of an action was not sensitive to facts about your perspective on the situation, it seems to be a contentious interpretation of what it is to act from a reason that someone who, say, knows that X-ing is best (where X-ing is best in virtue of facts about the external world) and acts because she judges that X-ing is best does not thereby act from good reasons.
    I don’t think that it would be bizarre to hold a view on which “it would be possible to never act for good reasons at all, and still act rightly all the time”. There are two ways of trying to motivate a view on which this is a possibility.
    #1
    Suppose we start by looking at competing accounts of what it is that reasons demand of rational agents. Some natural candidates might be:
    (a) Successful conformity.
    (b) Trying.
    (c) Successfully conforming by trying.
    There are examples in which it seems quite intuitive to say that someone acted, accidentally managed to conform to what some reason demanded, and nothing of the reason’s demand is left. If such examples show that (b) and (c) are not the right way to understand what reasons demand of us, it wouldn’t be at all odd to say that someone could act rightly without ever acting for a good reason. If what reasons demand in the first instance is successful conformity, and they do not in addition demand that they figure in our deliberation or that our actions show respect for their status as reasons, this view seems perfectly natural.
    [e.g., Suppose I have a reason to see to it that my boss knows that I’ll be in the office today–it will help set his mind at ease. As I stumble into my office with my morning coffee reading the morning paper and not paying attention to what is taking place around me, he says ‘Good morning’. I realize that he knows that I’ll be in the office and there is nothing left over that the reasons demand of me].
    #2
    Rather than focusing on competing accounts of what reasons demand of us, consider Kant’s shopkeepers. Do we really want to say that the shopkeeper who was honest but acted from the motive of self-interest failed to do what the moral reasons required of him?
    This strikes me as the wrong thing to say, but suppose someone said it. I suppose they’d say that because they thought that the reasons that make an act right demand that we act in ways that show our respect for these reasons. It seems to be a consequence of such a view that moral reasons are not reasons for things that we do. Acting in a way that shows respect for the reasons would have to be understood as acting from some motive, M. Acting-from-motive-M is not itself an action. If that’s what the reasons insist upon, reasons aren’t reasons for action or for doing things.
    If we reject the view that the shopkeeper failed to do what the moral reasons required and allow that you can do what the moral reasons require while acting from non-moral motives, the claim that you can act rightly without acting on the reasons that make the action right isn’t all that surprising.

  14. Kant is actually a good figure to bring up. Philip Stratton-Lake struggles with a similar problem for the Kantians in his Kant, Duty, and Moral Worth. I wonder if some of his lessons could be adopted by consequentialists.
    Before that, another way of trying to avoid this problem would be to say in the spirit of Moore that consequentialism is not a theory about what makes actions right but rather a theory about what it is for the action to be right. For an action to be right is for it to have optimific consequences. If that’s the view, then the right-makers could be the kind of considerations that we normally think of as good moral reasons. That I can prevent someone from drowning is a good reason to act and yet that drowning is prevented is also what makes the consequences of life-saving good.
    But, to Stratton-Lake. Roughly, he says that the problem for Kantians is that an action seems, for them, to be right if it’s done from respect for the moral law – because, as Kantians have thought so far, the action is one that you ought to do. Yet, that you ought to do something is not a good reason to do anything. The first-order considerations are. Stratton-Lake’s solution is to understand acting from the motive of duty in a new way. It requires having two sorts of motives. The first are the considerations we usually think of as good moral reasons. The second is a higher-order motive based on recognising the role of the first-order considerations as moral reasons that are valid irrespective of your personal motivations.
    In any case, I wonder if the consequentialists could say something along the same lines. Morally good consequentialist persons would be people who act from the right reasons. These are the first-order considerations we normally take to be good moral reasons. However, in addition, their reflection on these reasons is shaped by a commitment to act in a way that leads to the best results. This doesn’t require that the agents adopt a consequentialist decision procedure but rather that they, for instance, don’t take certain first-order considerations to be sufficient reasons when there are better options available in terms of the value of the consequences. The consequentialist principle can also play a fixing role in determining which first-order considerations are good reasons – the ones that lead agents to act in a way that creates the most general good.

  15. Hmmm. I feel like I’m about to get stuck defending Ralph’s use of the word ‘normative’ all over again. Lest I be misunderstood, let me be clear: I believe that there are two senses of ‘reason’ – both an objective sense, in which your reasons depend on how things are, and a subjective sense, in which your reasons depend on what you believe. I don’t seek to ‘identify’ these two senses of ‘reason’, and never have, although I think they are related to one another.
    Ralph calls these both ‘normative’, and I agree that this is natural given an appropriate sense of ‘normative’ (see recent thread), but it is only the objective sense of ‘reason’ that philosophers usually have in mind when they talk about ‘normative reasons’. It was that sense I had in mind in my post.
    Agreed, it is not plausible that the deliberative constraint requires you to pay attention to the reasons you don’t know about in complete, non-enthymematic deliberation. Nor will good deliberation bar you from paying attention to things which, unbeknownst to you, are false. For brevity, I left out that qualification in my post.
    Still, distinguishing between the two senses of ‘reason’, as Ralph does, does not solve the problem. The philosophers I mentioned – prominently including Darwall and Mark van Roojen in his post, but I think including very many others – think that the deliberative constraint holds for objective normative reasons when you are fully informed, and no more think that the objective and subjective senses of ‘reason’ can be identified than I do.

  16. Here’s a point against Option 1. Some acts count as the type of acts they are only if they are performed for a specific kind of reason. For instance, suppose you have a reason to express your gratitude to your adviser for helping you, perhaps by writing a thank-you note. But you fail to do this if you instead decide to write a thank-you note out of annoyance. Another example: understood one way, it is possible to make love only if one decides to act on specific grounds; same bodily motions + different decision procedure = different intentional action.
    Whether this is devastating to those who sharply distinguish between right-makers and decision procedures depends largely upon the specific content of the actions for which we have reasons.

  17. Eric,
    I really don’t want this thread to die, so I wanted to say something on behalf of the first option to see if it handled your concern.
    Suppose we grant that some acts count as a type of act only if they are performed for a certain kind of reason and that sometimes you have reasons to perform such actions.
    Can’t the defender of Option 1 say that sometimes there are actions that satisfy the demands that the reasons place upon us regardless of the motives for which we perform the action? That would suggest that there was nothing to the concept of reason for action per se that suggests that reasons for action insist upon figuring in practical reasoning. Instead, the cases you offer tell us that there are special kinds of reasons that demand more than other kinds of reasons that are satisfied with mere conformity.
    The basic point is just that I don’t see that defending Option 1 requires defending anything quite as strong as the claim that reasons for action never demand to figure in practical reasoning.
    I suppose if someone who defended Option 1 was to be very ambitious, they might go further and say two things.
    First, if reasons of, say, gratitude make X-ing right only if such reasons figure in deliberation, admitting that such reasons must figure in deliberation is not a counterexample to the claim that all reasons are reasons because of their right-making power. It is simply to admit that some reasons have to do many things if they are to make an action right.
    Second (and I think this is terribly speculative and independent of the previous points), I suppose someone could try to account for why it is that it seems that doing what R requires involves, inter alia, doing what R requires from R, as follows.
    Even those who defend Option 1 might grant that sometimes S fails to do what the reasons require of S because S acts from an inappropriate motive in X-ing that makes it the case that the X-ing involves acting against some reason when X-ing from a different motive wouldn’t have this consequence. That’s very abstract, so let me offer some examples. Maybe there’s nothing wrong with sex, but having sex in the hopes of being paid might make an otherwise unproblematic course of action problematic. Maybe there’s nothing wrong with refusing to sell your house when you listed it for sale, but doing so because you don’t like the skin color of the prospective buyer makes it wrong (I owe these examples to Steve Sverdlik). Maybe there’s nothing wrong with mowing your lawn at 8 a.m., but mowing it at 8 a.m. in the hopes of waking your neighbors is wrong.
    In these examples, an action of a certain type could be performed for a variety of reasons without violating the demands the reasons place upon us although there are some motives which if acted upon change that. I suppose there might be cases in which all but one motive for X-ing would violate the constraint that you shouldn’t act from certain motives the acting from which is wrongful (e.g., the motive of malice, motives that show that one is negligent, reckless, etc…).
    You might think that in the cases you offered, we have cases in which there happens to be but one kind of motive you could act on without violating the requirement to refrain from actions that show ill will or disrespect for another. In the example of the thank-you note, it is true that you cannot both (a) do what you should do (i.e., write a thank-you note) and (b) act from any old motive. But that’s because if you wrote the note out of annoyance (or perhaps any motive other than the motive of expressing gratitude) that would make the otherwise unproblematic course of action wrongful. It’s just an odd coincidence, as it were, that you can only satisfy this type of reason’s demand by acting on one kind of motive. Perhaps the cases you offer can be accommodated in the Option 1 framework by this sort of indirect method because even in these cases, we aren’t compelled to say that the act is right (in part) because of the motives that gave rise to it, but that the motives are necessary because any alternative motive that would have led to the same action would have been a motive it is wrongful to act upon.
