One
of the great works of recent moral philosophy is Don Regan’s Utilitarianism and Co-operation (New
York: OUP, 1980). Although the theory that Regan argues for in the book (viz.,
cooperative utilitarianism) is, I believe, untenable, I think that Regan makes
a very important discovery: that no exclusively
act-orientated
moral theory will be tenable. Now, unfortunately, Regan
gives us nothing much beyond the following rough characterization of what it
means for a theory to be exclusively act-orientated: “a theory is
exclusively act-orientated if it can be stated in the form ‘An agent should
perform that act which…’.” (p. 10). But perhaps we can do better and define
the notion as follows: A theory T is not exclusively act-orientated if and only
if T requires of agents something more than just the performance of certain
voluntary acts. With this definition in hand, let me explain how any moral
theory that is exclusively act-orientated will have counterintuitive
implications in the following case, which I borrow with revision from Regan (p.
18) and which I name The Buttons. I name it this because it involves two
individuals, Coop and Uncoop, each with a button in front of them. Depending on
whether or not each pushes their respective buttons, the consequences will be as
depicted in Table 1.

Table 1: The Buttons

                    Uncoop: Push    Uncoop: Not-push
Coop: Push               10                 0
Coop: Not-push            0                 6

Assume
that Coop is willing to cooperate with Uncoop in
bringing about the best possible outcome—the one where each pushes,
resulting in ten units of value. Thus, Coop will push his button so long as
Uncoop is willing to cooperate by pushing his button. And let’s assume that it
is transparent to both Coop and Uncoop whether the other is willing to
cooperate. For let’s suppose that they both belong to a strange breed of
hominids that are like us in every way except that a ‘C’ appears on their
foreheads when, and only when, they have a desire to cooperate with others in
bringing about the best possible outcome (I borrow this idea from our own
Chappell).

Unfortunately,
Uncoop is unwilling to cooperate and will not push his button regardless of
whether or not Coop’s forehead displays a ‘C’ and regardless of what Coop may
do. And, of course, Coop knows that Uncoop is unwilling to cooperate, for
Uncoop doesn’t have a ‘C’ on his forehead. Thus, Coop knows that Uncoop will
not push his button and that he is powerless to change this.

Any exclusively act-orientated theory can require only that agents perform
various voluntary acts, such as pushing or not pushing. But, in The Buttons,
the only way for Uncoop to ensure that they both push, thereby
ensuring that they achieve the optimal ten units of value, is for Uncoop both
to push his button AND to desire to cooperate. If he pushes without desiring to
cooperate, then Coop will not push, for unless he desires to cooperate, no ‘C’
will appear on his forehead. And unless a ‘C’ appears on his forehead, Coop
will not push. But it seems to me that a moral theory should require Uncoop
both to push and to desire to cooperate. After all, it seems that he should
desire to cooperate, and it is only by both intending to push and desiring to
cooperate that he will ensure that the optimal ten units of value are achieved.
But, of course, desiring to cooperate is not a voluntary act. Admittedly, it is
a judgment-sensitive attitude, an attitude that is responsive to the subject’s judgments
about reasons. But, unlike raising an arm, it is not something one does by
intending to do it. One involuntarily forms a desire to cooperate in response to
one’s awareness of reasons or perceived reasons.
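
To make the structure of the case explicit, here is a minimal sketch in Python (mine, not anything from Regan). It assumes the payoffs in Table 1 and models the forehead-‘C’ mechanism as Coop pushing exactly when Uncoop desires to cooperate:

    # The Buttons: payoffs indexed by (coop_pushes, uncoop_pushes), per Table 1.
    PAYOFF = {
        (True, True): 10,   # both push: the optimal outcome
        (True, False): 0,   # Coop pushes alone
        (False, True): 0,   # Uncoop pushes alone (e.g., on an officer's orders)
        (False, False): 6,  # neither pushes
    }

    def outcome(uncoop_pushes, uncoop_desires_to_cooperate):
        # Coop pushes iff a 'C' appears on Uncoop's forehead, i.e., iff
        # Uncoop desires to cooperate. Coop's act thus depends on Uncoop's
        # attitude, not on Uncoop's act.
        coop_pushes = uncoop_desires_to_cooperate
        return PAYOFF[(coop_pushes, uncoop_pushes)]

    # Pushing without the desire secures nothing:
    assert outcome(uncoop_pushes=True, uncoop_desires_to_cooperate=False) == 0
    # Only the act together with the attitude secures the optimal outcome:
    assert outcome(uncoop_pushes=True, uncoop_desires_to_cooperate=True) == 10

The point of the sketch is just that no choice of Uncoop’s voluntary act alone reaches the value of ten; only the act-plus-attitude combination does.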

Another reason to think that a theory shouldn’t
be exclusively act-orientated (this one comes from me, not Regan) stems from
the following case. I’ll call it The Lever. Suppose that Smith can save five lives if and only if he pulls the
lever that’s in front of him. And suppose that the world is such that if he
intends to pull the lever there is a 90% chance that he’ll pull the lever and a
10% chance that he won’t. Assume that if he doesn’t intend to pull the lever,
there is a 0% chance that he pulls the lever. It seems that a theory that is
exclusively act-orientated could only require Smith to pull the lever or to
perform some other alternative voluntary act. But whether he pulls the lever is
not entirely up to him, so this seems wrong. And it also seems wrong to say
that he’s not required to pull the lever and leave it at that. For if he
doesn’t care about the five and doesn’t even intend to pull the lever, then it
seems that he has not done all that is required of him. So it seems to me that
we should say that he is required to attempt to secure the world in which the
five are saved by intending to pull the lever. But, again, intending to pull a
lever is not a voluntary act. So, again, it seems that no exclusively act-orientated
theory will be adequate.
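
The arithmetic behind the case is simple; here is a small sketch in Python (again mine), using just the probabilities stipulated above, to show that it is the intention, not the pull itself, that raises the expected number of lives saved:

    # The Lever: expected lives saved, given the stipulated probabilities.
    def expected_lives_saved(intends_to_pull):
        # If Smith intends to pull, he pulls with probability 0.9;
        # if he doesn't intend to pull, he pulls with probability 0.
        p_pull = 0.9 if intends_to_pull else 0.0
        return p_pull * 5  # pulling the lever saves the five

    assert expected_lives_saved(True) == 4.5   # 0.9 x 5
    assert expected_lives_saved(False) == 0.0

Forming the intention is the only factor here that is up to Smith, and it alone moves the expected value from zero to 4.5 lives; the residual ten percent chance of failure is not.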

34 Replies to “Moral Theories Should Not Be Exclusively Act-Orientated”

  1. Doug, the Coop example is not entirely convincing to me. In a D’Arms/Jacobson case, do you think the person ought to stop envying his wealthy friend? (Reminder: because if the friend is envied he will be upset and stop donating lots of money.) In a Crisp case, that the person ought to prefer the saucer of mud? (Threatening demon.) In a Rabinowicz/Rønnow‐Rasmussen case… etc.?
    When you say Uncoop ought to desire to cooperate, the ‘ought’ seems to be a WKR-ought (a wrong-kind-of-reasons ought).

  2. Hi Jamie,
    I’m assuming that Uncoop ought to desire to cooperate for the right kind of reasons. I take it that the reason that Uncoop has to want himself to cooperate with Coop is that his cooperating with Coop would be good, and that X would be good seems to be the right kind of reason for wanting X to obtain. In any case, tell me what you take to be the right kind of reasons for Uncoop to want to cooperate and tell me why we can’t assume that Uncoop has reasons of that kind.

  3. Oh, okay. Then what’s the point of the ‘C’ on the foreheads? I thought that was supposed to give Uncoop a reason to desire to cooperate (namely, that if he had a desire to cooperate he would have a ‘C’ on his forehead).

  4. Good question.
    The point of having the ‘C’ appear on the foreheads of those who have a desire to cooperate is to make whether or not Coop pushes causally depend on whether or not Uncoop has a desire to cooperate. Coop will push if and only if Uncoop has a ‘C’ on his forehead, and Uncoop will have a ‘C’ on his forehead if and only if he has a desire to cooperate. (Instead, I could have made Coop a telepath.)
    Suppose, then, that Uncoop doesn’t have a desire to cooperate but pushes only because he is ordered to do so by his commanding officer. In this case, Coop won’t push and the result will be that zero units of value are produced. On an exclusively act-orientated theory that requires Uncoop to push, Uncoop will, in this instance, have done all that is morally required of him. But, intuitively, it seems that what is morally required of him is to secure 10 units of value by intending to push his own button and desiring to cooperate with Coop — the former ensures that he pushes and the latter ensures that Coop pushes as well. But this is to require of him more than just certain voluntary acts. Hence, such a theory must not be exclusively act-orientated.

  5. “I think that Regan makes a very important discovery: that no exclusively act-orientated moral theory will be tenable.”
    Ok, I can’t resist: isn’t this discovery something that virtue theorists (at least those who don’t simply present “virtue ethics” as an alternative account of right action) and, possibly, Kant have been telling us since well before 1980?
    😉

  6. Hi Matt,
    There is, of course, some truth to what you say. But, oftentimes, contemporary virtue theorists and Kantians formulate their theories in such a way that they turn out to be exclusively act-orientated as well. That is, they put their theory in the following form: “An agent should perform that act which….”
    But, more importantly, Regan has shown that even consequentialists who are concerned solely with the consequences should not accept an exclusively act-orientated theory. And so it’s only Regan who has shown, on theory-neutral grounds, that no exclusively act-orientated moral theory will be tenable. That is, Regan has shown that regardless of whether we are consequentialists, deontologists, virtue ethicists, or anything else, we should not formulate our moral theory so that it only tells us what acts we should perform. It must also tell us what attitudes we should have. I take it that no virtue theorist or Kantian has shown this. And that’s why I take Regan’s book to be so important.

  7. Doug,
    Right, I understood that. I will have to be more long-winded in my question.
    A right-kind reason (RKR) to desire to cooperate is (something of the form) that cooperation is such-and-such, where such-and-such is desirable. (You suggest that it’s the desirability itself, pace buckpassing, which is fine — I don’t mean to take sides on that.)
    In your story, I thought, it is not desirable for Uncoop to cooperate, because Coop is not going to push the button.
    On the other hand, a WKR to desire to cooperate is that desiring to cooperate will have a such-and-such consequence, where such-and-such is good. In your story, Uncoop’s desiring to cooperate will indeed have a good consequence (the ‘C’ will appear on his forehead) supposing that he does cooperate.
    So, I thought you intended the prospect of the ‘C’ to be a WKR for Uncoop to desire to cooperate. But, you say you intended Uncoop to have a RKR to desire to cooperate. So then I wondered what role the prospect of the ‘C’ was playing.
    Maybe you can clear this up for me by saying what reason Uncoop has to desire to cooperate, in your story.

  8. Hi Doug,
    In the Coop / Uncoop story, could the act-oriented theorist say the following: Uncoop should do her part to bring about the best consequence. Desiring to push is a necessary means to doing her part (because it’s a necessary means to making the ‘C’ appear on her forehead — I presume a felt pen won’t do?). And so Uncoop has decisive instrumental moral reason (instrumental to a moral end) to bring it about that she desires to push. (Where bringing it about is a voluntary action, like visiting a sophisticated neurosurgeon.)
    Also, what is the act-oriented theorist’s notion of a “voluntary” act, if it’s not an intentional act? Just a movement of one’s body? But what makes it “voluntary”?

  9. Hi Jamie,
    I’m thinking that we must have different understandings of what it is to desire to cooperate. Perhaps you’re thinking that to desire to cooperate it is sufficient to desire to do one’s part in some possible cooperative scheme. But I’m thinking, and I think that this is how Regan thinks of it, that to desire to cooperate is to desire that we coordinate our behavior in pursuit of a jointly valued outcome (see pp. 126-127 of Regan). Thus, for Uncoop to desire to cooperate is for Uncoop to desire that he and Coop both value the achievement of ten units of value and work together (coordinating their behavior) so as to achieve this jointly valued outcome. And I think that it is desirable that he and Coop both value the achievement of the ten units and work together to achieve this jointly valued outcome whether or not Coop or Uncoop are going to push. So this is Uncoop’s reason to desire to cooperate: it would be good for them to cooperate.
    It would probably be clearer, then, if I say that in order for a ‘C’ to appear on their foreheads they must desire that they work together and coordinate their behavior in an attempt to achieve the ten units of value.
    Does this help or am I still missing your point?

  10. Hi Hille,
    I don’t think that an exclusively act-orientated theorist can say that. Such a theorist can only require agents to perform certain voluntary actions. Doing one’s part, you’re suggesting, involves having a desire to cooperate (not just to push, as you said). But desiring to cooperate is not a voluntary act. And let me just stipulate that in this case the only way for Uncoop to come to desire to cooperate is to appreciate and respond appropriately to the reasons that he has to desire to cooperate and to thereby involuntarily come to desire to cooperate in response to those reasons. So what makes desiring to cooperate involuntary is that Uncoop can’t (we’re assuming) come to desire to cooperate by intending to do something. Nothing that he can do will cause him to desire to cooperate. There is no pill that he can take to cause him to desire to cooperate just as there is no pill that will cause flat earthers to believe that the Earth is not flat. Nevertheless, flat earthers (at least those who have the relevant evidence as well as the relevant rational capacities) ought to believe that the Earth is not flat. They can’t come to this belief by intending to do anything. Nevertheless, they should have come to this belief in response to the reasons that they have for this belief. Likewise, I want to say Uncoop ought to have a desire to cooperate (given that it would be good for them to cooperate) despite the fact that there was no way for him to have come to this desire by performing an intentional action.

  11. One more thought Hille.
    Do you agree that (suitably informed and rational) flat earthers ought to believe that the Earth is not flat? Or do you think only that flat earthers ought to do what might cause them to believe that the Earth is not flat?
    Likewise, do you agree that Uncoop ought to desire that the two of them cooperate? Or do you think only that Uncoop ought intentionally to do that which will bring it about that he desires that the two of them cooperate? If the latter, and if we assume that there is nothing Uncoop can intentionally do to bring it about that he has this desire, then would you say that Uncoop does all that is required of him?

  12. Doug,
    That’s a fair answer (and your remark about many virtue theories is why I hedged my comment above). I’m inclined to think that some of Iris Murdoch’s work (in The Sovereignty of Good) goes to the main point here (loosely, that there’s more to morality than action), but whether she does that on “theory-neutral grounds,” I’m not sure. I’m a little bit suspicious of the notion of theory-neutral grounds, but at any rate, it is noteworthy to see this result coming from a utilitarian perspective. So thanks for bringing Regan’s book to my attention!

  13. The ‘C’ doesn’t matter to whether Uncoop has a reason to desire that they cooperate. It matters only in the way that you say you understand it to matter: that is, it matters only as one possible device for explaining how Coop will know what Uncoop’s desires are and will determine how to act depending on what Uncoop’s desires are.

  14. Right.
    On the rest, you’ve lost me. Besides not being familiar with that mechanism (although thanks for the link), could you say what your ‘that’ refers to, and what it is a counterexample to?

  15. The same example, but without the forehead-C thing.
    It’s a counterexample to any moral theory that is exclusively act-oriented. (Sorry, I could not bring myself to type the other word.)
    The abstract idea exemplified by the forehead-C mechanism is familiar in evolutionary biology as a Green Beard. Reciprocal altruism can evolve easily if the gene for it also provides its bearer with a recognizable and impossible-to-fake trait. Like, for example…

  16. Hi Doug,
    These are some interesting cases. I have a few questions about them.
    (1) You write that “And, of course, Coop knows that Uncoop is unwilling to cooperate, for Uncoop doesn’t have a ‘C’ on his forehead. Thus, Coop knows that Uncoop will not push his button and that he is powerless to change this.”
    and
    “the only way for Uncoop to ensure that they both push, thereby ensuring that they achieve the optimal ten units of value, is for Uncoop both to push his button AND to desire to cooperate. If he pushes without desiring to cooperate, then Coop will not push, for unless he desires to cooperate, no ‘C’ will appear on his forehead. And unless a ‘C’ appears on his forehead, Coop will not push.”
    How can Coop know that Uncoop won’t push his button if Uncoop can push the button without desiring to push the button? It seems that all Coop could know is that Uncoop does not desire to push the button and so it is unlikely that Uncoop will push the button.
    I thought the point of the “C” was to make it so that the way each person will act is transparent to the other, but I don’t see how that is possible given that Uncoop can act in ways he doesn’t desire to act.
    (2) Given (1), I can imagine a defender of an exclusively act-oriented theory holding that Uncoop’s objective obligation is to push the button iff Coop will push the button and Coop’s objective obligation is to push the button iff Uncoop will push the button. Then, since it’s apparently a possibility that Uncoop will push the button without desiring to push the button, Coop might have a subjective obligation to either push or not push the button, depending on what will maximize expected utility. In this case, Coop would presumably have a subjective obligation to not push the button and I don’t see anything obviously counter-intuitive about this.
    (3) I also am not sure that I want to say that flat-earthers have an epistemic obligation to believe that the earth is spherical. Whether they do depends on their rational capacities. But supposing, as you seem to, that they have sufficient evidence that the earth is not flat and the requisite rational capacity to form the right beliefs if they examined the evidence to the best of their ability, then I am perfectly happy saying that they are obligated to examine the evidence of the earth’s shape to the best of their ability. Following their epistemic obligations in this case would entail that they would form the correct belief, but having that belief doesn’t seem to be an obligation of theirs.
    I think we can say something analogous about Coop and Uncoop. Uncoop might have an obligation to act in the way necessary to develop the best kind of moral character he can. (Maybe he should take an ethics class, etc.). If he has the capacity to respond to the right kinds of reasons, then fulfilling his obligations will result in him forming the right kinds of desires.
    In short, a good strategy for a defender of an exclusively act-oriented theory would be to hold that the agent’s obligations consist of acts they could voluntarily perform that would result in them forming the right kind of non-voluntary mental states (e.g. beliefs, desires).
    (4) Can you say more about what you take it to mean for an agent to desire to perform some act? Except in cases of akrasia (which I also find puzzling), I have a hard time understanding how an agent can choose to ϕ without, in some sense, desiring to ϕ. If Uncoop were to freely ϕ and ϕ-ing is not an instance of akrasia, does he think, of the options that he (believes he) can actualize, ϕ-ing is likely the best one? Why would he freely ϕ if he doesn’t think ϕ-ing is the best option he can actualize and is not subject to any kind of weakness of the will?
    (5) I must be missing something obvious in the Smith case. Why isn’t intending to pull the lever a voluntary act? Maybe Smith can’t make himself care about the five and if he doesn’t care about the five he won’t pull the lever. But couldn’t he still voluntarily form the intention to pull the lever?
    (6) At any rate, supposing it’s not a voluntary action, what do you think about applying the strategy I suggest in (3)?
    – Travis

  17. Hi Travis,
    Re. (1): You ask, “How can Coop know that Uncoop won’t push his button if Uncoop can push the button without desiring to push the button? It seems that all Coop could know is that Uncoop does not desire to push the button and so it is unlikely that Uncoop will push the button.” Let’s just assume that Coop has sufficient justification, of the non-Gettier type, for believing that Uncoop has no reason to push unless he wants there to be cooperation between the two of them. Thus, Coop knows not only that Uncoop doesn’t desire to cooperate but also that he won’t push.
    Re. (2): I’m talking about objective obligations. Now it’s true that (i) Uncoop has a conditional obligation to not-push if Coop is not going to push. And as a matter of fact (ii) Coop is not going to push (for Uncoop does not have a ‘C’ on his forehead). But from (i) and (ii) we cannot infer that (iii) Uncoop has an unconditional obligation to not-push. If you need some examples that suggest that sort of inference is bad, let me know. So long as you agree that Uncoop has an unconditional (objective) obligation both to push and to desire to cooperate, then you’ll have to reject exclusively act-orientated theories of objective obligation.
    Re. (3): You say, “I am perfectly happy saying that they [flat earthers] are obligated to examine the evidence of the earth’s shape to the best of their ability…, but having that belief doesn’t seem to be an obligation of theirs.” Would you say the same thing about reasons? That is, would you say: I am perfectly happy saying that they have a reason to examine the evidence for the Earth’s being non-flat, but they don’t have a reason to believe that the Earth is not flat? I wouldn’t say this. And since one’s obligations are, I believe, a function of one’s reasons, I wouldn’t say what you said.
    Re. (4): As I noted in reply to Jamie, the relevant desire is not a desire to push but a desire that the two of them (Coop and Uncoop) coordinate their actions so as to bring about some jointly valued outcome — in this case, the one in which ten units of value are achieved by them both pushing. This is the desire that Coop has and Uncoop lacks. And this is the desire that one must have in order for a ‘C’ to appear on one’s forehead. So I’m not talking about a desire to perform an act, but a desire for a state of affairs: the one in which Coop and Uncoop work together so as to achieve ten units of value.
    Re. (5): Why isn’t intending to pull the lever a voluntary act? Because you don’t form the intention to pull the lever by intending to form the intention to pull the lever. I’m moved here by Toxin Puzzle Cases. Let me know if you want me to spell out such a case.
    Re. (6): I don’t like it, for some of the reasons that I explained above.
    Thanks for all these. If I’ve been too quick in responding to some of these, just let me know and I’ll expand on those that seemed too quick to you.

  18. Hi Doug,
    Thanks for the responses! You asked:
    “Do you agree that (suitably informed and rational) flat earthers ought to believe that the Earth is not flat? Or do you think only that flat earthers ought to do what might cause them to believe that the Earth is not flat?”
    Yes, I do agree. I do think there are reasons for belief, not just reasons for action. (And that the right response to reasons for belief is generally just to believe on their basis, not to act so as to make oneself believe.)
    “Likewise, you you agree that Uncoop ought to desire that the two of them cooperate? Or do you think only that Uncoop ought intentionally to do that which will bring it about that he desires that the two of them cooperate? If the latter and if we assume that there is nothing Uncoop can intentionally do to bring it about that he has this desire, then would you say that Uncoop does all that is required of him?”
    Yes, I personally agree that Uncoop ought to desire, and that he has reasons to desire that might be distinct from reasons for action. But I was thinking that the exclusively act-oriented theorist shouldn’t agree, and so she should think in terms of bringing about the desire through voluntary actions, in the same sort of way as one might bring about any other state of affairs.
    E.g. (Suppose) I can put the cup on the table. The cup itself isn’t a voluntary action. But I can have reasons for action to put the cup on the table. Likewise, I was thinking, the act-oriented theorist could say: (Suppose) I can make myself desire to coordinate. The desire itself isn’t a voluntary action, but I can have reasons for action to make myself have this desire. (Here the desire is like the cup, and I am like the table.)
    That’s how I was thinking that the act-oriented theorist should respond. But then you stipulate that there’s nothing Uncoop can do to intentionally bring it about that he desires to cooperate. In that case, if the act-oriented theorist believes that moral oughts governing action obey “ought implies can,” then it seems like she should indeed hold that Uncoop has done all that’s morally required of her even when she doesn’t desire to push. (At least, Uncoop has done all that’s required of her as concerns her states of desire. If Uncoop can push the button without desiring to coordinate, then it remains open that she should push. Or maybe she should push and take out that felt pen to draw a ‘C’ on her forehead?)
    I do personally think that’s an unwelcome consequence about desire. But would the exclusively act-oriented theorist think so? I’m not sure how counterintuitive it is to someone who doesn’t think there are reasons for desire. (Am I right in thinking that that’s how you’re thinking of the exclusively act-oriented theorist — as someone who doesn’t think that there are reasons for desire, only reasons for action?)

  19. Hi Hille,
    Thanks for continuing the discussion.
    I don’t see why the exclusively act-oriented theorist should deny that Uncoop has good (indeed, decisive) reason to desire that the two of them cooperate with each other. Nor should such a theorist deny that the reasons for Uncoop to have this desire are distinct from whatever reasons that Uncoop may have to perform any action that would cause him to come to have this desire. Lastly, I don’t think that such a theorist should deny that Uncoop is rationally required to have this desire.
    Now, what such a theorist must deny is that Uncoop is morally required to have this desire. For given that such a theorist must accept a theory that is exclusively act-orientated, she must deny that agents are bound by any moral requirements besides requirements to perform certain actions.
    You ask, Am I right in thinking that that’s how you’re thinking of the exclusively act-oriented theorist — as someone who doesn’t think that there are reasons for desire, only reasons for action?
    No. The act-utilitarian (which I take to be someone who accepts that the one and only fundamental moral requirement is to act always so as to maximize utility) accepts a theory that is exclusively act-orientated. But act-utilitarians don’t (and shouldn’t) deny that there are reasons to believe certain propositions, to desire certain states of affairs, and to intend to perform certain actions and not just reasons to perform certain actions. What act-utilitarians (as I just conceived of them) are committed to denying is only that agents are morally required to believe certain propositions, to desire certain states of affairs, and to intend to perform certain actions. But what they overlook is that sometimes the only way to see to it that the best consequences are achieved is not only to perform certain actions but also to have certain desires, beliefs, and/or intentions that they are, in any case, rationally required to have. And if the act-utilitarian accepts that an agent can be rationally required to have certain desires, beliefs, and intentions, then that puts them in the uncomfortable position of having to explain why there can be rational, but not moral, requirements to have certain desires, beliefs, and intentions.

  20. Sorry, yes, I meant “moral reasons,” not just any reasons. (For desire.)
    Or are you thinking that there might be moral reasons for desire, but there couldn’t be moral requirements to desire, for the act-oriented theorist? (I’m guessing not — even though on your theory of moral requirements (for actions), non-moral reasons affect our moral requirements.)
    Thanks!

  21. Hi Doug,
    Thanks for your helpful reply. Here is a quick response to your comments.
    Re. (2): I don’t think anything I said committed me to making that inference. Did I commit myself to this? I was suggesting that an exclusively act-oriented theory doesn’t obviously (to me) generate counter-intuitive results if we understand the objective and subjective obligations in the way I suggested.
    I don’t necessarily agree that Uncoop has an unconditional (objective) obligation both to push and to desire to cooperate, so I don’t think I have to reject exclusively act-orientated theories of objective obligation.
    Re. (3): I would say that flat-earthers have a reason to believe that the earth is not flat if they have the rational capacity to recognize the evidence available to them. If they lack the rational capacity to recognize the reasons others are able to recognize, then I don’t think they have reason to believe the earth is not flat.
    For instance, I think that I have reason to believe the earth is not flat and my dog Leela doesn’t. So, I have a rational obligation to believe that the earth is not flat and Leela doesn’t. I want to say something analogous about moral obligation.
    But if I accept that an agent’s obligations are a function of her reasons, then maybe I need to say that flat-earthers with the requisite rational capacity have a (rational) obligation to believe that the earth is not flat. And then I should probably say something analogous about moral obligation.
    But, it still seems to me that someone prone to an exclusively act-oriented moral view could reasonably disagree without obviously counter-intuitive consequences.
    Do you think that an agent can be obligated to form some desire (or intention, belief, etc.) that she could not form unless she performed some series of voluntary actions she could not perform?
    If so, then one might worry that this non-exclusively act-oriented theory would violate the right version of ought implies can. (I take it that this is close to Hille’s worry). For how can an agent be obligated to have a desire when there is nothing she could do that would result in her having that desire?
    If not, then I think a defender of exclusively act-oriented theories is still on relatively solid ground. They just need to insist that an agent can be obligated to perform the voluntary acts she can perform that would result in her forming the right desires (beliefs, intentions etc.). An agent only needs reasons to perform these acts and not reasons to form involuntary mental states. Nevertheless, the agent would necessarily acquire the correct involuntary mental states whenever she fulfills her obligations (which could just be a function of her reasons).
    Re. (4): That’s helpful. Even understanding what the content of the desire is supposed to be, I guess I have trouble understanding why someone would freely ϕ if she doesn’t desire the state of affairs that would result from ϕ-ing and is not subject to any kind of weakness of the will.
    Re. (5): If I remember correctly, Toxin Cases purport to show that one cannot voluntarily form an intention to perform an act that she believes she will not perform. Suppose S is offered a large sum of money to form the intention tonight to drink toxin tomorrow. She can’t form the intention at midnight to drink the toxin tomorrow because after midnight she will either already have the money (and has no reason to drink the toxin) or doesn’t have the money (and has no reason to drink the toxin). So, she can’t form the intention to drink the toxin when she knows that she won’t drink the toxin (and recognizes that she has no reason to drink the toxin). Is that right?
    Smith’s case seems relevantly different. Smith doesn’t know that he won’t pull the lever. In fact, it is very likely that he will pull the lever if he intends to do so. So, there doesn’t seem to be the same impediment to forming the intention to pull the lever as there is in forming the intention to drink the toxin. But perhaps I am misremembering, or missing something important about, the Toxin Case.
    I sometimes seem to be able to form intentions merely by intending to perform certain acts that I am only very likely to succeed at. For instance, there might be a 90% chance that I can kick a soccer ball in a goal if I intend to do so. I can form the intention to kick the ball in the goal without kicking the ball in the goal.
    Maybe it would help me if you filled in the details of the Smith case a bit more and/or said why Toxin Cases are relevantly similar to the Smith case.
    – Travis

  22. Thanks, Hille. That clears things up. The exclusively act-orientated theorist must deny that there are moral requirements to desire, to believe, and to intend. And you’re right that insofar as such theorists accept that moral requirements stem from moral reasons they must also deny that there are moral reasons to desire, to believe, and to intend. This means that not only will they have counter-intuitive implications in cases like The Buttons and The Lever, but they will also have to explain why there can be only non-moral reasons for desiring, when there can be both moral and non-moral reasons for action. And they will have to deny that there is a moral reason to desire equality for all people regardless of their race, creed, sex, gender, etc.

  23. Hi Travis,
    In what follows, assume that I’m talking about objective obligations and assume that the relevant agents are aware of all the relevant reason-providing facts and have the relevant rational capacities such that they ought to respond appropriately to their reasons in whatever the relevant ought-implies-can sense of ‘ought’ is.
    Now, I plan on responding in detail to your other points later, but first I want to get clear on what’s at the heart of our disagreement. You say, “I don’t necessarily agree that Uncoop has an unconditional (objective) obligation both to push and to desire to cooperate, so I don’t think I have to reject exclusively act-orientated theories of objective obligation.”
    So you seem to be sympathetic to exclusively act-orientated theories of obligation. Such a theorist must accept one of the following two claims about Uncoop in The Buttons:
    (C1) Uncoop is morally impeccable. He has fulfilled all of his moral obligations and is, thus, without moral fault. This is true even though he did not push and even though he could have seen to it that he and Coop both pushed if only he had formed both the intention to push his own button and the desire that they cooperate (a desire that he was rationally required to have anyway).
    (C2) Uncoop is not morally impeccable. His fault lies in the fact that he did not push. Had he pushed, he would have been morally impeccable. That is, had he pushed he would have fulfilled all of his moral obligations. This is true even if he had pushed without desiring that they cooperate. That is, this is true even if he had pushed only because his commanding officer ordered him to. Thus, if he had pushed without desiring to cooperate, he would have fulfilled all of his moral obligations even though this resulted in zero units (the least amount) of goodness being achieved.
    I think that (P1) the exclusively act-orientated theorist must accept either C1 or C2. And since I think that (P2) C1 and C2 are both highly counterintuitive, I conclude that (C) exclusively act-orientated theories have highly counterintuitive implications and should be rejected.
    You disagree, for you reject P2. Is that right? Is that the source of our disagreement? If so, which do you accept, C1 or C2?

  24. Hi Doug,
    I am becoming less sympathetic to exclusively act-oriented views, but I haven’t written them off yet. I don’t accept C2. At least, I don’t if Coop won’t push the button regardless of whether Uncoop pushes, and that is supposed to be the case, right? This is so because Uncoop will not have a ‘C’ on his forehead since he does not desire to cooperate.
    But I don’t think I need to accept C1 either. Uncoop could have fulfilled his moral obligation by pushing the button (since that is what will maximize utility in this case), but Uncoop can still be deserving of moral blame. I don’t accept that fulfilling one’s objective moral obligations necessitates that one is without moral fault. I don’t think you do either.
    Maybe you want to say that fulfilling all of one’s subjective moral obligations necessitates that one is without moral fault. To accommodate certain cases of moral luck, I deny this. You probably think it implausible, but I might be inclined to say that Uncoop fulfilled his moral obligation by not pushing the button, but is morally criticizable because he would not have cooperated even if Coop was going to push the button and Uncoop was aware of this fact. This is also assuming that Uncoop could have freely chosen to perform voluntary acts that would have led to him developing the right kind of moral character and consequently forming the correct involuntary mental states.
    Do you think that one is necessarily free from moral fault if she has fulfilled all of her (subjective) moral obligations? Imagine that in the Buttons case, Uncoop is removed from his button-pushing position of power at the last minute and replaced by SUPERCoop, who has a ‘C’ on his forehead and pushes the button. Now, Uncoop’s desires and actions will no longer affect the outcome of the Buttons case. Also, let’s suppose that Uncoop is never put in a position like this again.
    (1)Do you think Uncoop fulfilled all of his moral obligations and is without moral fault?
    – That strikes me as counterintuitive. Uncoop seems deserving of moral blame to me because he (let’s assume) would not have fulfilled his moral obligation were he in a position of power in the Buttons case.
    (2)Or do you think that Uncoop failed to fulfill some moral obligation of his and so is morally criticizable? Perhaps you think he is obligated to have the disposition to form the right kinds of desires, even if he is never put in a position where he forms desires about the matter in question.
    – This also strikes me as counterintuitive. I reject the idea that we can be morally obligated to have the disposition to form certain desires.
    (3)Or do you accept some third option?
    This raises the more general question of when we are obligated to form certain desires (and other involuntary mental states). On your view, is it just whenever it would change the outcome of some voluntary act (as in the Buttons case), or are we always obligated to have these desires, or is it some other option?

  25. Hi Travis,
    Okay, I’m thinking now that C1 and C2 are not the best way to diagnose the source of our disagreement given that, as you point out, blame and objective obligations can come apart. So let me try a different tack.
    You say,
    I don’t think I need to accept C1 either. Uncoop could have fulfilled his moral obligation by pushing the button (since that is what will maximize utility in this case), but Uncoop can still be deserving of moral blame.
    Just to be clear: In The Buttons, Uncoop does not push and does not desire that he and Coop cooperate. (And Coop not-pushes, because no ‘C’ appears on Uncoop’s forehead.) The question, then, is: Has Uncoop fulfilled all of his moral obligations?
    Now either he has fulfilled all of his obligations or he hasn’t. I claim that he has not fulfilled all his obligations. Although I accept that he has fulfilled his conditional obligation to not-push if no ‘C’ appears on his forehead (after all, he not-pushes and no ‘C’ appears on his forehead given that he lacks the desire that he and Coop cooperate), I think that there is an obligation he has not fulfilled: specifically, the obligation to desire that he and Coop push. But the exclusively act-orientated theorist cannot say what I just said, because the obligation that I say that Uncoop has failed to fulfill is not an obligation to perform an act but to have a certain desire.
    So do you think that Uncoop has fulfilled all of his obligations or not? If you think that he has, then this is where we disagree. If you think that he has not, then I need to know which obligation you think that he has failed to fulfill. Do you want to say that he has failed to fulfill his obligation to push even though this would have bad consequences given that Coop will not-push, given the lack of a ‘C’ on Uncoop’s forehead?
    I’m guessing that you’ll say that Uncoop has fulfilled all of his (objective) moral obligations but that he is blameworthy for something. But for what? I would say that he is blameworthy for not desiring that he and Coop cooperate. And I would explain this by saying that he had an obligation to have this desire and failed to fulfill it although he had the capacity to form this desire and had no suitable excuse for not doing so (such as being ignorant of the fact that it would be desirable for him and Coop to cooperate). So if you want to say that he fulfilled all of his (objective) obligations but is blameworthy, then can you tell me what he is blameworthy for?

  26. Hi Doug,
    I am currently agnostic about whether moral theories should be exclusively act-oriented. As such, I’m not sure what to say about these cases exactly, but here are two things I think I can say that seem plausible to me.
    (1) Uncoop has fulfilled all of his (objective) moral obligations in The Buttons case, but he failed to fulfill some obligations in the past. Those obligations are the kind of voluntary acts he could have performed that would have resulted in him developing a better moral character and then forming the right kind of involuntary mental states in the Buttons case.
    If there were no voluntary acts that Uncoop could have performed in the past that would have resulted in him forming the right kind of involuntary mental states, then he has both fulfilled his (objective) moral obligations and may not be morally blameworthy (assuming he fulfilled all of his subjective moral obligations).
    (2) Uncoop has fulfilled all of his (objective) moral obligations in The Buttons case, but he is blameworthy because he would not push the button even if Coop was going to push the button (regardless of Uncoop’s actions). So, he is morally criticizable for the fact that he would fail to fulfill his moral obligation in this nearby possible world.
    What do you think about my revised Buttons case with SUPERCoop? Has Uncoop then failed to fulfill an obligation to form certain kinds of desires? Is he blameworthy for anything? If so, what?

  27. Hi Travis,
    Regarding (1), do you want to say the same thing about rational obligations and desires? Suppose that Smith has the capacity to respond appropriately to his perceived reasons and is fully informed such that his perceived reasons are his actual reasons, but suppose that he fails to desire in accordance with his reasons. That is, he doesn’t desire to quit smoking even though the fact that his continuing to smoke is very likely to significantly reduce the length and quality of his life constitutes decisive reason for him to desire to quit smoking, a fact that he is well aware of.
    I want to say that Smith has failed to fulfill his rational obligation to desire to quit smoking. Are you going to deny this and say instead that he had only a past rational obligation to act so as to become the type of person that would not have made this rational error? But why think that there are only rational obligations to act so as to be the person who doesn’t form inappropriate attitudes? Why not think that there are also rational obligations not to form inappropriate attitudes (desires, beliefs, intentions, etc.)? And if you find it implausible to make this sort of move with respect to rational obligation, then what about moral obligations makes them different such that you are willing to make this move with respect to moral obligations? In other words, why not think that there can be moral reasons not only to perform certain actions but also to form certain desires, intentions, and other attitudes? This would then allow you to say what is to my mind much more plausible: that Uncoop has not fulfilled all of his moral obligations in The Buttons.
    Regarding (2), this is to say that he is blameworthy for having a certain disposition. This, itself, commits you to denying an exclusively act-oriented theory. But I worry that this particular theory is implausible, because it seems implausible to hold him blameworthy for possessing a disposition that he cannot see to it that he doesn’t possess.
    Regarding SUPERCoop, it seems orthogonal to our debate because it’s about what I accept specifically and not about whether a theory should be exclusively act-orientated. But, in any case, my view is that the only fundamental moral obligation an agent has is to see to it that certain possible worlds are realized if they can do so by forming, or continuing to possess, certain scrupulous and in some sense “feasible” attitudes. But if Uncoop’s having the desire to cooperate would not be instrumental in securing some desirable world, then I would deny that he has a moral obligation to have the desire and hold only that he has a rational obligation to have that desire.

  28. Hi Travis,
    Another thought: So it seems to me that subjects are held accountable not only for their actions, but also for their attitudes: that is, for their beliefs, desires, intentions, etc. It seems, then, that there must be some attitudes that subjects should and shouldn’t have. For instance, I ought to desire to avoid agony on future Tuesdays. And it seems that there are some combinations of attitudes that subjects should and shouldn’t have. For instance, I ought not to believe both P and ~P. So ask yourself: what attitudes should Uncoop have? Well, it seems rather obvious to me that he should have a set of attitudes that includes both his intending to push his button and desiring that he and Coop cooperate with each other. And it’s a stipulation of the case that if Uncoop has such a set of attitudes then he and Coop will both push. And it seems crazy to say that Uncoop is required to have a set of attitudes but is not required to perform the act that is necessitated by that set of attitudes. So I want to say that Uncoop has an obligation to intend to push, to desire that he and Coop cooperate with each other, and to push. And if that’s right, then clearly Uncoop has not fulfilled all his obligations. So at what step do you balk? Do you deny that Uncoop should have the set of attitudes that I say he should have or what else?

  29. Hi Doug,
    Thanks for the informative response. As I alluded to in my comment on 8/14, I might come to reject exclusively act-oriented moral views so that my moral views mirror my views on rational obligations. As you point out, I see no reason why the move I made should be a plausible one for morality, yet an implausible one for rationality.
    At the same time, I could make that very same move with respect to rational obligations. Smith might have a rational obligation to quit smoking if he could perform some voluntary act (i.e. deliberate to the best of his ability) that would result in him forming the appropriate involuntary attitudes (i.e. that he rationally ought to quit smoking). If there is nothing he could voluntarily do that would result in him forming the appropriate attitudes for the right kind of reasons, then maybe he is not failing to fulfill some kind of rational obligation he has.
    It’s also true that if Smith could have performed voluntary acts in the past that would have resulted in him forming the appropriate involuntary attitudes for the right kind of reasons, then we can locate his irrational behavior in those past actions (e.g. suppose he took some kind of drug in excess that incapacitated his rational capabilities).
    You wrote “But why think that there are only rational obligations to act so as to be the person who doesn’t form inappropriate attitudes? Why not think that are also rational obligations not to form inappropriate attitudes (desires, beliefs, intentions, etc.)?”
    The answer would be that the view I am suggesting in broad strokes is consistent with a not-implausible formulation of ‘ought implies can.’
    You wrote “Regarding (2), this is to say that he is blameworthy for having a certain disposition. This, itself, commits you to denying an exclusively act-oriented theory.”
    I don’t see why this is, given that the view holds that one can be blameworthy without having failed to fulfill any moral obligations.
    You add “But I worry that this particular theory is implausible, because it seems implausible to hold him blameworthy for possessing a disposition that he cannot have see to it that he doesn’t possess.”
    The view I suggested needn’t hold that one is blameworthy for possessing a disposition that he cannot see to it that he doesn’t possess. One might be blameworthy only for dispositions that one could have avoided possessing by performing voluntary actions that were in one’s power (in the relevant sense) to perform.
    Suppose I grant your worry for the sake of argument. Why would this be any more implausible than holding that someone is blameworthy for possessing an attitude that he cannot see to it that he doesn’t possess (e.g. Uncoop not wanting to cooperate)?
    I agree that SUPERCoop is orthogonal to the issue at hand (i.e. whether the correct moral theory is exclusively act-oriented). So I will only add the following. Imagine that whether SUPERCoop is able to take Uncoop’s position depends upon the outcome of a coin toss. I find it incredibly implausible to hold that Uncoop is blameworthy if, for instance, the coin lands on heads and is free of moral blame if it lands on tails. I think he is deserving of criticism for how he *would* act if it landed on tails, regardless of the outcome of the coin toss.
