I interrupt our regularly scheduled programming to ask whether the following objection to consequentialism is at all original. (Given the vast literature on consequentialism, I suspect it probably isn’t, but I’m hoping those who know that literature better can tell me.)
1. In circumstances C, S would produce X amount of good by doing A.
2. In circumstances C, S would produce (X+Y) amount of good by doing B.
3. S’s doing B rather than A would result in Y amount of good to S (i.e., S would receive Y amount of benefit by doing B, in addition to whatever benefit S might receive by doing A).
4. Of all the actions available to S in C, B produces the maximal amount of good.
5. So according to consequentialism, S is morally required to B in C.
6. So S is morally required to benefit himself in C.
7. But no one is ever morally required to benefit himself.
——————————-
Consequentialism is false.
I don’t know who first came up with that objection, but Ted Sider came up with a version of utilitarianism to deal with it (“Asymmetry and Self-Sacrifice,” Phil Studies, 1993). If I remember right, Sider’s version allows the agent to take himself out of the equation. You’re allowed to maximize overall utility, but you’re also allowed to maximize utility for the whole world minus yourself. His paper might be one place to start.
When I teach utilitarianism I run an objection about self-sacrifice not being wrong that involves giving a larger slice of cake to someone else, even though, given my love for icing, I’m sure I would enjoy it more. I’m sure I got this from somewhere, but the general idea is that self-sacrifice is not wrong, even when it would decrease overall utility.
I don’t myself think this is determinative of whether utilitarianism is any good. But I think the fact that I teach it this way is evidence that someone who taught me had a similar or nearly identical objection.
Mike,
No interruption at all. I hope that everyone is clear that there is no reason people can’t post on other topics besides Parfit’s _Climbing the Mountain_.
I believe that the objection comes from Michael Slote in his _Common-Sense Morality and Consequentialism_. I’m on the road, so I can’t check for sure. But I’m pretty sure that Slote raised this objection.
As Ben points out, Ted Sider, in “Asymmetry and Self-Sacrifice,” proposes Self/Other Utilitarianism as a solution to the problem. Clay Splawn in “The Self-Other Asymmetry and Act-Utilitarianism” (Utilitas 13:1) argues that even Sider’s Self/Other Utilitarianism fails to fully capture the self-other asymmetry in that it entails that it is sometimes impermissible to make self-sacrifices that make things no worse for others. And I’ve been working on a paper, “Dual-Ranking Act-Consequentialism,” that argues for a kind of consequentialism that meets the objection from the self-other asymmetry and responds to Splawn’s worries.
I actually don’t see the objection. Run the same argument with one qualification. Let agent S not know who or what the recipients of his benefaction are. S does not know whether it is himself, his neighbor, or some other sentient being. S then chooses the action that is maximally beneficial. In this particular case, he happens to be the beneficiary.
“Everyone counts for one,” says Bentham, and I don’t see why we would not want to count the moral agent himself as one. I do understand why you would not want to count the agent himself as *more than one*. I also understand why you would not want to count the agent himself as *less than one*. Either assumption will guarantee the failure of utility maximization in some situations.
Maybe you have in mind cases where there is some alternative to B (say B*), where B* has S maximize utility and pass the utility Y onto some agent S* instead. But that is no objection to utilitarianism, since it’s permissible on that theory.
Like Mike, I’m not sure I see the force of this objection. Mike’s response reminds me of something Mill says:
All honour to those who can abnegate themselves the personal enjoyment of life, when by such renunciation they contribute worthily to increase the amount of happiness in the world; but he who does it, or professes to do it, for any other purpose, is no more deserving of admiration than the ascetic mounted on his pillar. He may be an inspiring proof of what men *can* do, but assuredly not an example of what they *should* (Util, 2, 15, 17-23).
And if I remember correctly, Slote argues, in *From Morality to Virtue* that both virtue theory and consequentialism have an advantage over both Kantian and common-sense morality precisely because the agent’s own happiness counts directly towards the moral status of an action.
In short, I don’t feel the force of this objection.
Mike A.,
Here are a few cases to pump your intuitions:
Case 1: You sacrifice your life to save the life of an older person whose potential future life contains less expected utility than yours. Or perhaps you give a kid a foul ball that you caught at the ballpark, but you know that you would probably get more utility from having it than the kid would get from having it. Commonsense says that such altruistic self-sacrifices are at least permissible if not noble, whereas act-utilitarianism says that such actions are wrong.
Case 2: You stay at home and just laze around the house. Yet you know that you would have got more utility had you gone for a bike ride along the beach. Assume that, in this case, there is nothing you can do to affect the utility of others. Commonsense says that what you did was permissible, whereas act-utilitarianism says that what you did was wrong.
Case 3: You keep thousands of dollars (which you won’t need for a number of years) in a savings account that earns little interest, whereas you could have put that money in a CD or money market and earned twice as much interest. Commonsense says that what you did was imprudent but not wrong. Act-utilitarianism says that what you did was wrong.
Case 4: You’re going diving. You know that given your lung condition it is very risky for you to dive below 200 ft. But you like to live dangerously and dive to 250 ft. This seems stupid, but not wrong — at least, not if you don’t have any dependents counting on you. Act-utilitarianism, though, says that such risky behavior, which risks a loss of great utility for a relatively slight gain, is wrong.
Because commonsense morality, unlike act-utilitarianism, embodies an asymmetry between self and others, giving little to no moral weight to the agent’s own utility, act-utilitarianism has these counter-intuitive implications.
Scott,
I agree with Mill that “the ascetic mounted on his pillar…may be an inspiring proof of what men *can* do, but assuredly not an example of what they *should*,” but, intuitively speaking, the should here is that of rationality or prudence, not the should of morality.
Thanks everyone for the prompt and helpful responses. I’ll certainly re-read Slote’s article and check out Sider.
I’m not sure where I sit with this objection, but Doug’s examples capture the force of it, I think. The objection originates in the maximizing element of consequentialism: Mike reminds us of the Benthamite slogan, which steers consequentialism away from both egoism and pure altruism. But it’s one thing to give the agent’s well-being equal worth, another to insist an agent must seek their own well-being. In this sense this objection is the flip side of the familiar objection, often emanating from adherents of agent-centered moralities, that consequentialism is too demanding in the amount of impartiality it requires. Consequentialism doesn’t allow an agent to favor his own interests, but it doesn’t allow an agent to forgo them either.
A couple of other ideas about this objection:
[1] Is there a connection with the objection (extensively discussed in response to David Sobel’s solicitation for favorite objections to consequentialism) that consequentialism has no place for supererogation? Typical examples of supererogation are like Doug’s case 1, where a person engages in self-sacrifice. But wouldn’t this objection show not only that such a case is not an example of supererogatory conduct but that it is actually wrong? That seems like a surprising result.
[2] Could consequentialists develop a response making use of Scheffler’s agent-centered prerogative? The ACP allows individuals to give greater weight to their own personal concerns in deciding what to do than the weight that would be accorded those concerns in an agent-neutral consequentialist calculation. In other words, could the ACP be broadened into a permission that agents can assign greater or lesser weight to their own personal concerns, etc.?
Doug,
I don’t doubt that commonsense intuitions about morality come into apparent conflict with the deliverances of the principle of utility. The counterexamples are countless, as certainly everyone here knows. You easily provide three or four above.
But I was hoping for something other than a few more counterexamples. Here’s what I’d like to know. What rationale is there for not counting the agent as one (and not more or less than one)? Here’s why I’d like to know it. When an agent wants to give his own interests *greater weight* than he gives to the interests of others, we have this argument.
1. S is not permitted to give his interests greater weight or significance UNLESS there is some morally relevant reason for doing so.
2. For any moral agents x and y, there is no morally relevant reason to favor the interests of x over y.
3. S is not permitted to give his interests greater weight.
The same kind of argument can be given against agents who insist on NOT giving their own interests the same weight as the interests of others. They are not permitted to do that UNLESS there is something morally *unspecial* about them. But there isn’t anything morally unspecial about them. Just as there is a violation of impartiality requirements in favoring yourself, there is a violation of impartiality requirements in disfavoring yourself.
So what rationale is there for disfavoring your interests? What argument could there be for doing this? One other interesting point: groups of altruists face prisoner’s dilemmas no less debilitating than groups of egoists. So there is actually a collective disadvantage to disfavoring your interests.
Mike A.,
To my mind, asking, “What rationale is there for not counting [morally] the agent as one (and not more or less than one)?” is like asking, “What rationale is there for not counting, prudentially, others besides the agent?”
Doug, I guess I can’t follow that. In the prudential case the presumption is that interests of others do not count directly. In the moral case the presumption is *not* that the interests of the agent do not count directly. On the contrary, the presumption is that everyone’s interests count directly and count for the same.
But to be clear, I’m not *arguing* that there is no basis for your position. I’m just observing that I don’t know what the argument for your position is or how it might go. I don’t know how it could avoid violations of impartiality and such. But I’d like to see such an argument.
Mike A. (and others),
I’m puzzled by these apparent asymmetries too, but my diagnosis would be that Bentham and other consequentialists think that treating everyone’s interests equally (which results in some self-benefitting acts being morally compulsory) is required by impartiality (where impartiality is fundamental to the moral point of view and so must be taken into account by any plausible normative moral theory). But might they be wrong about impartiality, in that one does not violate impartiality when engaging in self-sacrifice? That is, impartiality demands that each agent give each person’s interests no more weight than anyone else’s, not that she may never discount her own interests. After all, the self-sacrificing agent is not being “partial” to her own interests. If impartiality ≠ equal treatment of interests, then perhaps the consequentialist can honor the requirement that the theory be impartial while allowing self-sacrifice.
Mike A.,
Sorry, I guess that wasn’t very clear. It seems to me that, on any ordinary conception of morality, there is no pro tanto duty to promote one’s own utility. The interests of the agent do not count as the kind of consideration that can generate a moral requirement, although they do count as the kind of consideration that can justify not doing something that, but for the cost to the agent in terms of her interests, would be obligatory. It seems, then, that the agent’s own interests have no requiring force, but only justifying force (to use J. Gert’s terminology). Asking “What rationale is there for the agent’s interests not having any requiring force, morally speaking?” is like asking “What rationale is there for the interests of others having no requiring force, prudentially speaking?” That’s just the nature of the beast.
Just to be clear, to say that some consideration has justifying force is to say that it can make it permissible to perform an act that it would otherwise be impermissible to perform. And to say that some consideration has requiring force is to say that it can make it obligatory to perform an act that it would otherwise be permissible to refrain from performing.
Mike C.,
I like your understanding of what impartiality requires, but why think that impartiality is fundamental to the moral point of view?
Doug,
Well, the idea has an impressive pedigree in Smith, Kant, maybe Mill. A stab: The logic of morality is universalistic in a kind of reciprocating fashion. On the side of agents, to prescribe that a person morally ought to ø is to say that any similarly situated person morally ought also to ø. Now if we expect this kind of universal performance, we should expect universal consideration, i.e., we should expect that those agents whose behavior is constrained by moral prescriptions will also be taken into consideration in others’ moral deliberation. So the question then becomes ‘how are each person’s concerns, interests, etc. to be taken into account morally?’ I think “impartiality” is the word that philosophers have settled on as the *concept* we’re aiming to capture here. Where they differ is in their various conceptions of impartiality. As I said before, one consequentialist conception of impartiality is strict equality (to the point of condemning self-sacrifice), but that seems to me wrong. Of course, what the right conception is may be elusive, as even the ethical egoist can claim that her view offers a conception of impartiality that is at least logically consistent.
Michael C.,
As you see it, then, ‘impartiality’ is just a placeholder for ‘how each person’s concerns, interests, etc. ought to be taken into account, morally speaking’, and thus impartiality is, as you point out, potentially compatible with substantive conceptions of morality that require partiality toward oneself and/or those to whom one has close ties.
Such a conception of impartiality is quite empty and can’t do any work in helping us decide between different substantive moral views about how people’s interests ought to get weighed.
So I guess we don’t disagree. We both think that Bentham and other utilitarians are wrong about what “impartiality” requires (I’m using the word in your stipulative sense here). What threw me off the track was your saying, “After all, the self-sacrificing agent is not being ‘partial’ to her own interests.” It seemed that you were using ‘partial’ in its ordinary sense, not as the logical contradictory of your stipulative sense of ‘impartial’.
I was referring to impartiality in the following (I take it, fairly standard) sense. Mike C. and I might agree on impartiality.
I. A moral principle is impartial iff. the principle does not favor or disfavor any individual or group for reasons that are morally irrelevant.
Impartiality, as I see it, is the distinctive formal feature of moral principles. Impartiality allows all sorts of unequal treatment (such as favoring yourself or others) provided that there is a morally relevant reason for doing so. But deciding what matters morally (what is a morally relevant consideration) is where moral principles disagree. It is the fundamental way, I think, in which principles disagree. And it is, as far as I’m concerned, an extremely difficult metaphysical problem.
Mike A.,
I agree, of course, that moral principles must not allow an agent to favor or disfavor, for morally irrelevant reasons, any individual or group that is, in the particular choice situation, owed moral consideration. But, as I see it, the issue here is whether the agent is herself owed moral consideration. The self-other asymmetry points to the fact that, on ordinary morality, the agent herself is not an entity that she must give moral consideration to. So I think that the issue of whether or not to accept the self-other asymmetry is more of an issue of scope than an issue of favoring or disfavoring. It may be that there are beings that have interests whose interests can be discounted. And it seems that, on ordinary morality, an agent may discount her own interests.
I’ve been thinking about a similar argument. I’ve seen Scanlon and others discuss it in the context of contractualism but don’t seem to recall consequentialist discussions of it. I’m sure there’s been plenty. Have we even talked about this before? I borrow Michael’s argument schema and slightly alter it:
1. In circumstances C, S would produce X amount of good by doing A.
2. In circumstances C, S would produce (X+Y) amount of good by doing B.
3. If S did A, he would bring about some significant good (X) for Jones (and nothing for Smith). If S did B instead, he would bring the same good plus a trivial additional good (X+Y) to Smith (and nothing for Jones).
4. Of all the actions available to S in C, B produces the maximal amount of good.
5. So according to consequentialism, S is morally required to B in C.
6. So S is morally required to bring about the trivial good.
7. But no one is ever morally required to bring about trivial goods.
So, the case might be one where you can save either Smith or Jones, but you also happen to have an aspirin which you could give to Smith, who has a bit of a headache. Should you therefore save Smith so that you can also relieve his headache, other things being equal between him and Jones? Consequentialism seems to say that you should. But that seems counterintuitive – we do not think we are required to heal small headaches. I guess this is a question of how consequentialists can argue for options. Satisficing is one way, but as Ben has shown, that doesn’t seem too reliable. I might fancy rule-consequentialism in this case.
Anyway, I’m just off to Southampton for the Joint Session and BSET. Have a good weekend everyone.
I don’t think we have a counterexample yet to the relevant utilitarian claim. Utilitarians are making these two claims.
1. Morality requires the maximization of utility.
2. No moral agent is permitted to violate what morality requires in order to benefit another over himself.
There are two sorts of counterexamples to (2). Only one matters.
A. Cases in which commonsense morality permits an agent to fail to maximize utility in order to benefit another over himself.
This is what has been provided so far. But these are very likely contributions to the endless counterexamples to (1). Nothing new there. In order to show that the utilitarian is wrong about (2), that is, in order to show that there is some important asymmetry that the utilitarian has wrong, you have to provide (B).
B. Cases in which commonsense morality permits an agent to violate what morality requires in order to benefit another over himself.
Does commonsense morality allow agents to benefit others over themselves in violation of what morality requires? I don’t think so. And (B) is precisely what the utilitarian is claiming cannot happen. So there is no asymmetry. Or, so say I.
Mike A.,
Of course, no coherent moral view “permits an agent to violate what morality requires,” not in order to benefit another over himself, not for any reason. After all, the phrase ‘to violate what morality requires’ is a paraphrase of ‘to act impermissibly’, and it would be incoherent to permit an agent to do what is impermissible (i.e., not permitted). But no one said utilitarians were wrong about (2). (2) is trivially true, and I can’t think of any moral theory that denies (2). What utilitarians get wrong is the kind of force that one’s own utility has in determining the deontic statuses of one’s actions. Utilitarians think that one’s own utility has the same force that others’ utility has. But, on any ordinary conception of morality, one’s own utility has only justifying force and no requiring force, whereas the utility of others has both justifying and requiring force.
You may say, well, this self-other asymmetry objection just adds to the long list of counter-intuitive results that utilitarianism has. Maybe not. Maybe this objection is of a more fundamental nature, pointing to the fact that utilitarianism misconceives what morality is about. Even if it were just one more counter-intuitive implication, though, that’s another nail in the coffin. Moreover, I don’t think the usual tricks that utilitarians employ to deal with the counter-intuitive implications that plague their view will work here. For instance, a utilitarian can plausibly argue that it is only in the most unrealistic sorts of cases that utilitarianism permits framing an innocent man. But with the self-other asymmetry, we are talking about very real and indeed common situations where utilitarianism has counter-intuitive results, implying that it’s wrong to forgo a greater benefit for oneself in order to provide someone else with a lesser benefit.
“. . . no coherent moral view ‘permits an agent to violate what morality requires,’ not in order to benefit another over himself, not for any reason.”
First, it is just false that no coherent moral theory allows an agent to violate what morality requires. Moral theories are not in general governed by principles requiring agents to do everything that they are required to do. To get something incoherent from that you’d need all sorts of controversial principles.
Second, you’ve just misread (2). (2) says that no moral agent is permitted to violate what morality requires in order to benefit another over himself. (2) does not say that no moral agent is MORALLY permitted to violate what morality requires in order to benefit another over himself. This reading is presumably where you got the triviality claim. You can have views *about* morality (such as the commonsense picture of morality) according to which agents are permitted to violate the requirements of morality in order to benefit another person. So this is not trivial. Off the top of my head, retributive justice might demand from Smith that he receive his punishment on the rack. But the commonsense picture of morality might urge that we need not fulfill the requirements of justice if the universe is about to collapse. That’s a view about when the requirements need not be fulfilled.
Again, to be fair to the utilitarians (and I am incidentally not one) we need a case where we can fail to fulfill what morality requires in order to benefit another over ourselves. It is not hard to find cases where we can benefit others at a cost in utility to ourselves and overall. We can find those easily since commonsense views of morality do not take utility as the moral standard.
Finally, again in defense of the utilitarians, their responses go much deeper than tricks or rationalization. The utilitarian response to counterexamples goes to the heart of our moral epistemology. The idea that counterexamples from commonsense are important or decisive against utilitarianism is inherited from a Kantian and (more recently) Rawlsian credulity about ordinary moral consciousness. The response of utilitarians is skeptical of ordinary moral consciousness. That is a deep epistemological difference, it seems to me. It’s no trick. Why have we been so much more epistemically receptive to our naive views in moral theory than we have been in epistemology generally? As I read them, the utilitarians deny that we are justified in taking our naive moral convictions as weighty objections to their view.
Mike A.,
If “(2) does not say…that no moral agent is MORALLY permitted to violate what morality requires in order to benefit another over himself,” then in what sense of ‘permitted’ is (2) saying that no moral agent is permitted to violate what morality requires in order to benefit another over himself? Prudentially permitted, legally permitted, what?
Mike A.,
One more thought: I’m not sure what you mean by “ordinary moral consciousness,” but I think that generally what utilitarians call into question is not all moral intuitions but only our moral intuitions about particular cases. If they took all moral intuitions to be suspect, then on what basis would they have to argue for utilitarianism in the first place? As Brad Hooker has said, we need to start somewhere and it is better to start somewhere intuitive. Thus I think that most utilitarians are willing to give some credence to our intuitions about the nature of morality. And what the self-other asymmetry shows, I think, is that utilitarianism is counter-intuitive at this abstract level. Utilitarians conceive of morality as being about protecting and promoting everyone’s interests (including our own), while our intuitions about morality tell us that morality is about protecting and promoting the interests of only others. We don’t even have to consult our moral intuitions about particular cases to have this intuition.
Mike A.,
Okay, one more last thing. You say, “You can have views *about* morality (such as the commonsense picture of morality) according to which agents are permitted to violate the requirements of morality in order to benefit another person.” By this, do you mean that it may be that commonsense morality holds that agents are morally permitted to violate, not what it takes to be the requirements of morality, but what are in fact the requirements of morality?
I’m sorry to be so thick, but I’m really having a hard time making sense of some of your claims. For one, I want to know what sense of permitted you have in mind if not the moral sense. For another, I want to know whether you think that a coherent moral view can hold that agents are permitted to violate what, on that view, morality requires, or whether the view can only hold that agents are permitted to violate what are in fact the requirements of morality. Lastly, I want to know whether you’re talking about permitting the violation of pro tanto moral requirements or permitting a specific agent to do what she is, in fact, morally required not to do in her circumstances. Your example regarding violating the requirements of justice in a situation where the world is about to collapse sounds like the violation of a pro tanto moral requirement.
Ok, here are your questions.
1. “. . . I want to know what sense of permitted you have in mind if not the moral sense.”
I’m asking whether there is *any* sense of ‘permitted’ in which this can happen. You can include the moral sense if you like. Utilitarianism denies that there is any sense of permitted in which this is possible. I’m less sure about what we’ll find in commonsense morality, since it covers so much more territory. But as I said, Kant is infamous for maintaining that every requirement of justice would have to be fulfilled, even if the world were soon coming to an end. I don’t think this is true on commonsense morality (but I’m not certain). You’re asking me: so in what sense does commonsense morality say “no, you’re permitted not to execute every last prisoner, though justice demands it”? I say that’s an open question, but there does seem to be some sense in which common sense tells us it’s permissible not to fulfill a duty of justice in the situation Kant envisages.
2. “. . .I want to know whether you think that a coherent moral view can hold that agents are permitted to violate what, on that view, morality requires. . .”
Yes. There can be a coherent (by which I take it you mean, consistent) moral view on which it is true that obligation does not entail permissibility. You might be obligated to do A without A being permissible. And I am using ‘obligation’ and ‘permissibility’ univocally: I mean moral obligation relative to a single theory. So, yes, there needn’t be any inconsistency in denying that obligation entails permissibility. Strange, yes. Inconsistent, no.
There are more familiar ways in which this can happen. Most plausible versions of utilitarianism are formulated subjunctively. A simple version:
U. You ought to do A iff. were you to do A it would produce better consequences than would its alternatives.
But principles of this sort entail that you are not permitted to do everything that you ought to do. It can be that OA and OB and ~P(A & B). So you ought to do A and you ought to do B, but you’re not allowed to do both. The important thing here is that no inconsistency follows.
3. You also added: “Lastly, I want to know whether you’re talking about permitting the violation of pro tanto moral requirements or permitting a specific agent to do what she is, in fact, morally required not to do in her circumstances. Your example regarding violating the requirements of justice in a situation where the world is about to collapse sounds like the violation of a pro tanto moral requirement.”
I mean the second disjunct. I mean a moral theory might generate no contradictions and, at the same time, countenance situations in which an agent is morally required to do A and not permitted to do A.
About whether the principle of justice in the Kantian case is pro tanto, I don’t know. I guess it could be interpreted that way. It would be nice to see a motivation for this interpretation.
I think some of the discussion has kind of missed the point — maybe Michael C. encouraged this by the form in which he put the argument. The real point is not that we have (yet) another reason to deny utilitarianism and its near kin. It’s that having no room for self-denying options is an interesting and important way in which agent neutral theories are deficient. The interesting criticisms of utilitarianism have shown how it departs from common sense structurally, for instance when Nozick explained how side-constraints work, or Williams emphasized the systemic and extreme ‘demandingness’ of utilitarianism.
The asymmetry that Michael C. notes in the entry, while (as he suspected) not entirely original, is more interesting than just another grenade lobbed into the smoldering ruins of utilitarianism. It reveals something about the structure of ordinary moral thought which, from the perspective of utilitarianism, looks very odd and irrational.
Jamie,
I’m less sure the point you note here was missed in the discussion.
“It’s that having no room for self-denying options is an interesting and important way in which agent neutral theories are deficient.”
I took issue with that characterization of the problem with utilitarianism. I’ve been urging that utilitarianism is not unique in the way that it leaves no room for self-denying options. Utilitarianism leaves no room for self-denying options in cases where acting in a self-denying way is in violation of what morality requires (viz., for them, utility maximization). But commonsense morality also leaves no room for self-denying options in cases where self-denial is in violation of what morality requires. Or rather I haven’t seen an example where it does.
So the discussion evolved away from the initial formulation of the problem in this way (and of course in other ways). In any case, that’s how I’ve tracked it.
Suppose that utilitarianism leaves no room for self-denying options in cases where acting in a self-denying way is in violation of what morality requires, and suppose that no theory leaves any room so described. That does not seem to me to answer the point. The point is that utilitarianism leaves no room for self-denying options. (Utilitarianism contains no options at all.) Common sense morality does. That’s a structural difference, and it’s an important one.
What if a utilitarian replied to McCloskey’s famous example of the sheriff like this: in your case, utilitarianism requires the sheriff to do something that is morally mandatory. But that is no distinguishing feature of utilitarianism, since all moral theories require us to do what is morally mandatory.
That would be a misunderstanding of the point of the example.
“The point is that utilitarianism leaves no room for self-denying options. (Utilitarianism contains no options at all.) Common sense morality does. That’s a structural difference, and it’s an important one.”
Not true. Utilitarianism leaves room for self-denying options in just the way that common sense morality does. It leaves room, just as with common sense morality, when facing a set of morally permissible options (for utilitarianism specifically, in cases of ties among available actions). It does not leave room, just as with common sense morality, when facing a morally required option. Among permissible options, utilitarians agree that choosing the self-denying option is fine. So I don’t yet see the alleged structural difference.
I’m not sure why you refer to the McCloskey example. Clearly McCloskey’s counterexample is supposed to show that, at least intuitively, maximization of utility is not all that matters morally. I think I see that. But the asymmetry objection, as I read it, is not supposed to do that. If it were an attempt to do that, then the asymmetry objection would not establish the alleged asymmetry. That’s what I’ve been arguing.
A small reductio. Suppose the alleged asymmetry were really a problem due to maximizing utility. In that case, utilitarians could simply turn the tables. Common sense morality claims that we have special obligations to those nearest and dearest to us that we do not have to strangers. Let S be nearest and dearest and let S’ be a stranger. Suppose that you can save either one, but not both. Suppose further that, overall, the utility of saving either is the same. In this case the utilitarian lets you generously and self-denyingly save the stranger. Common sense morality does not permit you to do that.
For the same reasons as above, I don’t take this to be a serious asymmetry problem for commonsense morality.
Picking among ‘tied’ acts might be ‘exercising an option’ in some sense, but I meant it in the sense that our friend Doug Portmore (“Position Relative Consequentialism”, Ethics Jan. 2003) explains:
Utilitarianism does not allow for these.
McCloskey’s example does show that maximization of utility is not all that matters morally, but you have again described the objection at a level so general that the interesting feature disappears. The example shows that there can be something morally wrong with maximizing agent-neutral value. That is a structural point, not a point about which things are agent-neutrally good.
Naturally, any of the important objections can be redescribed at a level so abstract that the interesting point is lost. I’m trying to understand them in such a way that the point stands out.
“[A]lthough it is what one has most moral reason to do, it is not what one is morally required to do”
If there is some conceptual room between (i) x is morally required and (ii) S has most moral reason to do x, such that (i) does not entail (ii), then there ought to be conceptual room for utilitarians as well. Unless the claim is that we are not all using the same moral terms.
“The example shows that there can be something morally wrong with maximizing agent-neutral value”.
You’d need a very good argument to show that the McCloskey problem generalizes to all agent-neutral theories of value. There are countless ways in which a theory can be agent-neutral, including theories on which justice is an intrinsic value. As it stands the southern sheriff does not come close to generalizing in this way.
In any case, I’m still waiting for an alleged asymmetry against utilitarianism that does not beg any questions against the view. The more specific descriptions you insist on just beg the question against the utilitarian theory of value. And as I said, utilitarians can beg the question right back in the ‘nearest and dearest’ examples.
Mike A.,
You write, “If there is some conceptual room between (i) x is morally required and (ii) S has most moral reason to do x, such that (i) does not entail (ii), then there ought to be conceptual room for utilitarians as well.”
There is, of course, conceptual room between (i) x’s being morally required and (ii) S’s having most moral reason to do x. The problem is that utilitarians don’t avail themselves of this conceptual space. As I use the term, a ‘moral reason’ is the kind of reason that can generate a moral requirement. (Perhaps, a ‘requiring reason’ might be a better term.) Utilitarians hold that the fact that x increases any being’s utility always counts as a moral/requiring reason to perform x. And utilitarians hold that agents are always required to do what we have most moral reason to do, that is, maximize utility. On commonsense morality, by contrast, the reason S has to benefit herself is not a moral/requiring reason, although it is a justifying reason, a reason that can make it permissible to do other than what there is most moral/requiring reason to do. Thus commonsense morality allows for what are called agent-centered options, options to do other than what one has most moral/requiring reason to do. Utilitarianism doesn’t allow for such options, not because the conceptual space isn’t available, but because they don’t avail themselves of the conceptual space.
The McCloskey example as stated already shows that it is sometimes wrong to maximize justice, since the acts prevented by frame-up include large numbers of injustices. So it doesn’t have to be generalized.
“The more specific descriptions you insist on just beg the question against the utilitarian theory of value.”
They don’t beg any question. They are counterexamples. Maybe you think Gettier’s counterexamples also beg the question against the JTB theory of knowledge?
“. . . . They don’t beg any question. They are counterexamples.”
They are *purported* counterexamples. But the specific cases that have been offered so far are simply not counterexamples displaying some interesting symmetry in utilitarianism that is not found in commonsense morality. In general the proposed counterexamples make these claims:
1. On commonsense morality, you can favor others at a cost to yourself, even if you do not maximize overall utility.
2. On a utilitarian view, you cannot favor others at a cost to yourself, even if you do not maximize overall utility.
This does NOT show that utilitarian reasoning does not include the same asymmetries as commonsense morality. Utilitarian reasoning includes the very same asymmetries, if we just bother to look at how the utilitarian is reasoning and how the commonsense moralist is reasoning. The utilitarian is saying that moral agents are permitted to favor others in cases where (i) they must make a moral decision that favors themselves or favors others and (ii) it is not morally required that they favor themselves. The very same is true for the commonsense moralist.
Let me anticipate re-entering this discussion loop: “Oh, but you’ve described the problem too generally. The asymmetry is here: Moral agents are permitted to favor others when (i’) moral agents must make a decision to favor themselves or favor others and (ii’) it maximizes utility that they favor themselves.
Commonsense moralists say you can do that, utilitarianism says you can’t. There’s the alleged damning symmetry you find in utilitarianism that you do not find in commonsense morality.”
Utilitarians rightly cry foul! You haven’t shown any interesting symmetry in utilitarianism. You’ve shown that maximizing utility sometimes REQUIRES that we do not observe an asymmetry that commonsense morality observes. But there is certainly no surprise there. After all two can play at that game. We utilitarians can show that the moral requirements in commonsense morality REQUIRE that commonsense moralists do not observe an asymmetry that utilitarians observe. The asymmetry is here: Moral agents are permitted to favor others over themselves when (i”) moral agents must make a decision to favor themselves or favor others and (ii”) it violates a special obligation to favor others. Utilitarians can self-deny in an extremely generous way and save the lives of persons to whom they have no special obligation at a large cost to themselves. Commonsense moralists cannot do this.
Now the commonsense moralists cry foul! There is no damning symmetry in commonsense morality that is not found in utilitarianism.
All you’ve shown is that certain moral requirements in commonsense morality REQUIRE that we do not observe an asymmetry that utilitarianism observes. But there is certainly no surprise there.
I hope that finally clears up the problem with the alleged counterexamples. Both cries of foul are right. Neither alleged counterexample shows any interesting symmetry on one theory that is not found in the other. The bad descriptions of the counterexamples are the problem, as I have been saying for a while now.
Mike A.,
I can’t figure out what role you think symmetry is playing in Michael C’s argument. (Note that Michael didn’t mention symmetry at all; Ted Sider does; I used the term ‘asymmetry’ in passing, thinking mainly of Ted’s paper.)
A peculiar argument would be this: utilitarianism contains a symmetry not shared by common sense. Symmetries are bad, so common sense is better than utilitarianism.
This would be a strange argument, for the obvious reason. Fortunately, nobody has given that argument. But at least it would then make sense to reply by showing that common sense also contains a symmetry, one not shared by utilitarianism. But in the absence of such an argument, I can’t see why you are pointing out a symmetry that common sense has and utilitarianism doesn’t share.
Why is anyone in this thread talking about symmetry at all? Because common sense morality treats sacrifices imposed on oneself differently from sacrifices imposed on others. This asymmetry, between self and others, seems quite significant. Its significance, I’d say, falls into the category of the general significance of centeredness of moral norms, but it is interestingly different from the usual examples of centeredness: for example, its significance seems to be different from that of side constraints, and different from the significance of the agent-centered options of Scheffler (I guess he calls them ‘prerogatives’), which are options to benefit oneself.
Redescriptions of cases that obscure the centeredness can make us lose sight of this significance. It would be different if the ‘self-sacrifice’ centeredness were in some way incidental or inessential, but it’s fairly clear that it’s neither. Theories whose norms are all agent-neutral are not going to be able to capture the feature of morality that Michael Cholbi lays bare in the entry. So there is a new (not entirely new, since Doug and Ted have discussed it before) agent-centered aspect of morality that separates it from systems of agent-neutral norms (I think this is what Michael C. means by ‘consequentialism’).
Jamie, I agree completely with this,
“Why is anyone in this thread talking about symmetry at all? Because common sense morality treats sacrifices imposed on oneself differently from sacrifices imposed on others. This asymmetry, between self and others, seems quite significant.”
So let me take a very small step. The criticism of utilitarianism, from the point of view of common sense, is that it does not incorporate or observe that significant asymmetry about sacrifice you describe. Am I right about that?? If so, then precisely where utilitarian reasoning ought to be asymmetrical, it is unfortunately symmetrical (specifically about sacrifices). Is that right so far?? If it is right, then the criticism of utilitarianism is due to this sort of symmetry in utilitarian reasoning about sacrifices (where the reasoning really ought to be asymmetrical). That’s the problematic symmetry I was referring to and you seem to be asking about. Of course, my main point is that commonsense morality suffers from the same problem.
So, we agree that the problem (at hand) from which utilitarianism suffers is one that has essentially to do with agent-centeredness. (That’s what I said, and you seem to be agreeing.)
Good!
Common sense morality does not suffer from that problem; it does not even have the feature (perfect agent neutrality), problem or not. No doubt it contains symmetries, but we agree that is not a problem per se.
Jamie,
I agree with what I quoted. I agree that utilitarianism (most forms) is agent-neutral. I disagree that the problem you cite for utilitarianism is due to its agent-neutrality. I disagree because I can generate the very same problem for agent-relative theories such as commonsense morality: specifically, a counterintuitive asymmetry in sacrifice for commonsense morality. But I frankly don’t see us ever agreeing on that.
Ah!
Well, let me explain why I think the feature that Michael C. noted does have essentially to do with agent-centeredness.
Suppose A, B, and C, working together, have discovered a quantity of some perishable good; pineapples, say. Each likes pineapples equally (so they are equally good at turning pineapples into utility), but each gets less out of each pineapple than out of the pineapple before (so they have diminishing marginal utility for pineapples). The pineapples have to be divided.
Now the utility maximizing way is to divide them equally. But, if A divides them, he is permitted to give himself less; if B divides them, she is permitted to give herself less; if C divides them, she is permitted to give herself less.
(x)(x is permitted to bring it about that x gets less)
But there is no way to eliminate the second occurrence of ‘x’. So the norm is agent-centered.
There are other ways of testing for centeredness, but this one seems clear in this context.
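To make the arithmetic concrete, here is a minimal sketch with assumed numbers (six pineapples in total, and per-person utilities of 3, 2, and 1 for a person’s first, second, and third pineapple; these figures are illustrative assumptions, not part of the example above):

\[
\underbrace{(3+2) + (3+2) + (3+2)}_{\text{equal split: } 2,\,2,\,2} = 15 \;>\; \underbrace{(3+2+1) + (3+2) + 3}_{\text{unequal split: } 3,\,2,\,1} = 14
\]

Any move away from the equal split trades a higher marginal unit for a lower one, which is why, given diminishing marginal utility, equal division maximizes total utility here.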
I see.
Now suppose that Sue and Bob are both drowning. I am related to Bob and like him. You are related to Sue and like her. I have a special obligation to Bob that I do not have to Sue. I cannot save both. Other things are equal. Commonsense morality says that my obligation is to save Bob. Since other things are equal–including the utility of saving each–utilitarianism tells me that I can self-deny and the following is true for every agent (viz. for me, as I decide who to save)
(x)(x is permitted to bring it about that x gets less utility)
I am permitted to save Sue even though I don’t know her and it would produce less utility for me. I think it would be a very generous thing to do for you. Commonsense morality says I am not permitted to self-deny here. But commonsense morality is agent-relative. So the problem is not essentially related to agent-neutrality.
I don’t follow.
What is the principle
(x)(x is permitted to bring it about that x gets less utility)
doing in the example? Is it supposed to be an agent-centered norm that common sense cannot endorse but utilitarianism can? It doesn’t seem to be that, but that’s what you have to come up with, right?
The utilitarian principle in the Save-Sue-Or-Bob example entails the less general permission in P,
P. (x)(x is permitted to bring it about that x gets less utility)
According to (P) I can produce less utility for myself by saving Sue instead of Bob. So utilitarianism permits me to self-deny. But commonsense morality does not allow me to follow (P). It requires me to save Bob (recall, saving Bob produces more utility for me than saving Sue) and thereby NOT self-deny. Saving Bob is required by commonsense morality for the reasons given (viz., I have a special obligation to Bob and other things are equal).
Right, common sense morality says that (P) is false in the Bob & Sue example.
So, uh, how does that show that “the problem is not essentially related to agent-neutrality”? It looks to me like you have come up with a different problem (I don’t agree it’s a problem, but that’s beside the point), and as a matter of fact it has nothing to do with agent-neutrality although (P) doesn’t tell us one way or the other.
The Cholbi-Sider-Portmore problem does have essentially to do with agent-neutrality, because, as I said above, there is no way to eliminate the second occurrence of ‘x’ in the norm that distinguishes the two views. Do you agree that’s a good test for agent-centeredness?
Jamie,
I’m not sure that the issue does have essentially to do with agent-centredness.
In your example of the pineapples, suppose A is doing the dividing and B voluntarily forgoes her share. ‘Don’t worry about me,’ B says, ‘divide all the pineapples between the two of you.’ (We assume as before that B would derive just as much utility from the pineapples as the other two.) In this case, I think, commonsense morality, though not utilitarianism, permits A to not give any pineapples to B.
This suggests that a problem for utilitarianism is that it is incompatible with the following norm of commonsense morality:
(x)(y)(If y consents to getting less, then x is permitted to give y less)
But this is not an agent-centred norm. And it seems that your agent-centred norm,
(x)(x is permitted to bring it about that x gets less),
is a fairly straightforward consequence of the above non-agent-centred norm.
Campbell,
Can you consent to a state of affairs such as “getting less”? Don’t I have to consent to some particular action performed by some particular person? For instance, I don’t consent to being touched sexually; I consent to my wife’s touching me sexually.
Thus we might better describe the norm of commonsense morality as:
(x)(y)(If y consents to x’s giving y less than y’s fair share, then x is permitted to give y less than y’s fair share).
After all, suppose that there are two cakes of which I’m entitled to a fair share. One that you are dividing up and one that Jamie is dividing up. If I consent to your giving me less than my fair share, that doesn’t mean that Jamie can permissibly give me less than my fair share.
So this is an agent-centered norm and the issue does have essentially to do with agent-centredness.
“Don’t I have to consent to some particular action performed by some particular person?”
I don’t think so. You could, for example, sign a ‘living will’ consenting to the removal of life support should you fall into a permanent vegetative state, but without specifying any particular person to pull the plug.
Campbell,
Fair enough, although I think that your re-description of the relevant commonsense norm in the pineapple case is a bit of a stretch. But, in any case, it seems that you are pointing to a different problem for utilitarianism. You’re pointing out that utilitarianism can’t accommodate the fact that agents can permissibly discount the utility of those who have consented to their doing so. The rest of us (Jamie, Cholbi, Sider, and myself) are pointing out that utilitarianism can’t accommodate the fact that agents can permissibly discount their own utility. The latter isn’t just an instance of the former. That is, agents may permissibly discount their own utility even where their consent is not given (not even tacitly). For instance, I can negligently put myself at risk of great utility losses. Such negligent behavior is not, on commonsense morality, wrong, and, in such cases, it would be false to say that I consented to being put at such risk.
So we have two commonsense norms:
A: (x)(x is permitted to bring it about that x gets less utility)
B: (x)(y)(If y consents to getting less, then x is permitted to bring it about that y gets less)
However, I think that B is better stated as the following, more general, agent centered norm:
B*: (x)(y)(If y consents to x’s doing A to y, then x is permitted to do A to y)
“Right, common sense morality says that (P) is false in the Bob & Sue example. So, uh, how does that show that “the problem is not essentially related to agent-neutrality”?”
Well, uh, here’s why. Commonsense morality is NOT AGENT-NEUTRAL. Whereas utilitarianism IS AGENT-NEUTRAL. In the Bob-and-Sue example, utilitarianism (recall, AGENT-NEUTRAL) allows us to self-deny and commonsense morality (recall, AGENT-RELATIVE) does not allow us to self-deny. The problem I describe for commonsense morality is the image (in all relevant respects) of the problem you describe for utilitarianism. Only, the problem I describe arises for commonsense morality (AGENT-RELATIVE) and the problem you describe arises for utilitarianism (AGENT-NEUTRAL). That’s why the problem is not essentially one for agent-neutral theories. That could not be much clearer than it already is.
Doug, you write,
“The rest of us . . . are pointing out that utilitarianism can’t accommodate the fact that agents can permissibly discount their own utility.”
You mean of course that utilitarians can’t accommodate that fact in some cases, right? There are examples in which utilitarian agents are unable to do this. But then there are examples in which they can. For instance if overall utility would be the same whether I benefited you more or myself more, then I can choose to benefit you more.
But then there are examples in which commonsense moralists cannot discount their own utility, too, and examples where they can. For instance there are cases in which keeping a promise benefits me more and breaking it benefits you more. I’m not allowed to discount the utility to me and break the promise in order to benefit you. Or do you think that commonsense moralists can discount their own utility in every case?
Mike A.,
Yes, to be precise, I should have said that maximizing act-utilitarianism can’t accommodate the fact that agents can always permissibly discount their own utility.
More generally, maximizing act-utilitarianism (MAU) cannot accommodate agent-centered options, options to do other than what one has most moral/requiring reason to do. These agent-centered options come in both agent-sacrificing and agent-favoring varieties, and maximizing act-utilitarianism can accommodate neither.
The reason MAU cannot accommodate agent-sacrificing options is because, unlike commonsense morality, MAU holds that agents have a requiring reason to promote their own utility. And the reason MAU cannot accommodate agent-favoring options is because, unlike commonsense morality, MAU denies that the reason agents have to promote their own utility by n amount has greater justifying force than the reason agents have to promote the utility of others by n amount.
You don’t deny any of this, do you?
Doug,
Most of this I think is right. I think it’s also a problem for cs-morality. But let me skip that for a moment. I want to ask you about this,
“. . .to be precise, I should have said that maximizing act-utilitarianism can’t accommodate the fact that agents can always permissibly discount their own utility.”
But commonsense moral agents–cs-agents–cannot always do that. They cannot always act in a way that discounts their own utility. Sometimes they must act in ways that benefit themselves at an expense to other agents they might have benefitted instead.
Maybe I’m misreading what you said above.
Mike A.,
You write, “But commonsense moral agents–cs-agents–cannot always do that. They cannot always act in a way that discounts their own utility.”
What do you mean by “act in a way that discounts their own utility”? Are you implying that sometimes agents have to act in a way that promotes their own utility? For instance, I am morally required to act in a way that enables me to continue to provide for my two-year-old daughter, who depends on me, and this sometimes requires me to act in ways that promote my utility (i.e., not to kill myself). Sure, my special obligation to my daughter will sometimes require me to act in ways that promote my own utility, but sometimes it will require me to act in ways that don’t promote my utility.
When I say “maximizing act-utilitarianism can’t accommodate the fact that agents can always permissibly discount their own utility,” I mean that their own utility needn’t ever enter into their moral deliberations except derivatively so, as where, say, I promised someone else to promote my utility. The best way to put it is that utilitarianism cannot account for the fact that agents have no requiring reason to promote their own utility per se. They may, of course, have a requiring reason to do something else that, as a matter of fact, promotes their utility. But the fact that doing x promotes S’s utility never constitutes a requiring reason for S’s doing x. This is something that utilitarianism cannot account for.
What exactly do you think is also a problem for commonsense morality?
Mike A.,
Here are some numbered claims:
(1) Maximizing act-utilitarianism (MAU) cannot accommodate agent-centered options, options to do other than what one has most moral/requiring reason to do.
(2) These agent-centered options come in both agent-sacrificing and agent-favoring varieties.
(3) MAU cannot accommodate agent-sacrificing options because, unlike commonsense morality, MAU holds that agents have a requiring reason to promote their own utility.
(4) Intuitively speaking, agents have agent-sacrificing options.
(5) If MAU cannot accommodate agent-sacrificing options, then this constitutes a pro tanto reason for rejecting MAU.
Which, if any, of these claims do you reject?
Mike A.,
Insofar as I understand what “the image” means, it is not the image in all relevant respects. It doesn’t have anything to do with a prerogative to sacrifice oneself. That prerogative is essentially agent-centered. So, it is different in that very significant respect.
Campbell, good point. Hm. I’m not convinced yet, but I’d better think about it a little before I decide.
Doug, “discounting” might mean two different things (in the sentence “agents can always permissibly discount their own utility”).
Against a background that presumes consequentialism, it’s very clear: discounting is reducing the contribution that the person’s own welfare makes to the goodness of the outcome, and so to the potential permissibility of the act. And in that case, I think you are certainly right that utilitarianism can’t accommodate such discounting. But common sense morality is (probably) not consequentialist, so the background assumption lapses. And then the sentence seems like it means that agents can always act in a way that reduces their utility. But it’s clear that according to common sense, there are some cases in which an agent cannot act in a way that reduces her utility (maybe she promised to maximize her utility, or a demon will destroy Phoenix if she fails to maximize her utility, etc.).
So, to sum up, I think you have to put the point a different way. (Which you now have done, I guess.)
Discussion of this counterexample is getting pretty complicated! Now it turns on a technical definition of ‘discounting utility’. Ok, you say,
“. . . I mean that their own utility needn’t ever enter into their moral deliberations except derivatively so — as where, say, I promised someone else to promote my utility.”
I honestly can’t see how that could be true. Certainly for commonsense moralists the utility to the agent can and does figure into his moral deliberations. Utility matters, even if not decisively, for cs-moralists. Don’t you see cs-morality as including contributing to one’s own well-being as one among many morally worthy goals? Won’t it be the case that a cs-agent who finally gives up smoking for the sake of his own well-being has done something not only prudent but morally good? Won’t he have shown a healthy self-respect? Won’t it be the case that a stranded cs-agent who has the option either to improve his well-being with X or to waste X altogether has the obligation to improve his well-being? I can’t believe that he would not. That all looks very much a part of commonsense morality to me.
It is another question altogether whether a utilitarian agent MUST always count utility-to-himself as equal in value to utility-to-others. It can after all be true that the only way to maximize overall utility is that I do not count utility-to-myself as equal in value to utility-to-others. In that case, I can in this sense completely discount the value of utility-to-myself.
Mike A.,
You ask, “Don’t you see cs-morality as including contributing to one’s own well-being as one among many morally worthy goals?”
No! What evidence do you see for such a claim? We don’t typically morally praise those who act prudently.
You ask, “Won’t it be the case that a cs-agent who finally gives up smoking for the sake of his own well-being has done something not only prudent but morally good?”
Perhaps, he has done something morally good insofar as he is no longer frivolously risking the loss of something of great potential value. Perhaps, there is on commonsense morality a duty not to entirely waste what’s of great potential value and a human life is often something of great potential value. But I don’t think that there is any duty to promote one’s utility per se. After all, we wouldn’t think it is morally wrong for me to choose, other things being equal, to watch one TV game show rather than another even if I know that I would get more utility from watching the other.
You ask, “Won’t it be the case that a stranded cs-agent who has the options to improve his well-being with X, or waste X altogether, has the obligation to improve his well-being?”
Again, there may be a duty not to waste altogether something of great potential value, but that’s not to say that there is a duty to promote one’s utility per se.
Duties not to destroy what’s valuable, not to waste certain value-promoting talents (talents like that of Mozart as opposed to my talent for counting blades of grass), and the like are distinct from a duty to promote one’s utility. So the telling cases are NOT cases where the agent is destroying his life or wasting his talents. Rather, the telling cases are ones where the agent is only choosing between two self-interested courses of action, one of which is more beneficial to the agent than the other. Examples are watching one TV show rather than another, eating one dessert rather than another, flossing once a day versus twice a day, going to the dentist sooner rather than later, etc. It doesn’t seem that commonsense morality holds that one is morally required to choose the course that will promote more utility for oneself even if there are no countervailing reasons not to.
Mike A.,
I would still like to hear which, if any, of claims (1)-(5) you reject.
Doug,
Perhaps I shouldn’t have put the point in terms of consent. The more general issue I have in mind is whether a person may voluntarily forego the status or consideration that morality would otherwise grant her. On a broadly utilitarian view of morality, the question is whether a person may forego having her utility counted in the sum of utilities to be maximised. But the issue arises also for other moral views. A Kantian, for example, may ask whether a person can voluntarily forego being treated as an end.
This issue doesn’t essentially have to do with agent-centredness, because a person other than the agent may wish to forego such consideration (as in my example where it’s B, rather than the agent, A, who wants her utility left out of the calculation).
Let’s see. I deny (3).
3) MAU cannot accommodate agent-sacrificing options because, unlike commonsense morality, MAU holds that agents have a requiring reason to promote their own utility.
As I have tried to show, there are cases in which agent-relative theories cannot accommodate an agent-sacrificing option (as in the Bob-and-Sue example).
But then you say,
“You ask, “Don’t you see cs-morality as including contributing to one’s own well-being as one among many morally worthy goals?”
No! What evidence do you see for such a claim? We don’t typically morally praise those who act prudently.”
Why, I have the same evidence you have. It is just that, every time you talk about producing utility to oneself you use examples like this,
“After all, we wouldn’t think it is morally wrong for me to choose, other things being equal, to watch one TV game show rather than another even if I know that I would get more utility from watching the other.”
But of course that is only one case. You’re making this universal claim about utility. And when we come to more serious cases such as,
“Duties not to destroy what’s valuable, not to waste certain value-promoting talents . . . and the like are distinct from a duty to promote one’s utility.”
Says who? There is no argument here. Just assertion. There is every bit as much reason to say that these are valuable because of the utility that they in general produce for the agent. In fact, that is very close to what even Kant says about the value of developing one’s own talents.
But there are other cases in which it seems clearly to be utility to the agent that matters. Cases where the agent allows himself to degenerate below a certain level of well-being. Here it is not maximizing TV shows, but maintaining a level of well-being for oneself. That is something that each person is primarily morally responsible for. When people begin to self-destruct in this way, even when it affects no one else, they are doing something wrong.
Finally, it can also be wrong in commonsense morality to sacrifice too much.
It is just morally wrong to sacrifice your life to slightly prolong the life of Louie’s toad. There is just no way in which that comes out as praiseworthy.
Campbell,
RE: your proposal about agents foregoing their moral status. Could consequentialists (or more precisely, utilitarians) incorporate this suggestion by treating it as an agent’s preference and then assigning value to the agent’s preferences such that the agent’s “preference” not to have his status treated equally can sometimes have the result that his not being so treated turns out to be optimal? That would seem like the natural way to accommodate the compulsory self-benefit examples I was concerned with. (Though I’m not sure this move would satisfy me, since what the consequentialist would accomplish thereby would not be to align her position with the “common sense” intuition that self-sacrifice is morally permitted. Rather, the consequentialist succeeds in showing that such acts aren’t really suboptimal and so aren’t genuinely self-sacrificial within a consequentialist framework.)
Mike A., You write, “As I have tried to show, there are cases in which agent-relative theories cannot accommodate an agent-sacrificing option (as in the Bob-and-Sue example).”
How does the fact that there isn’t an agent-sacrificing option in the Bob-and-Sue case show that (3) is false? The Bob-and-Sue case may show that the fact that S’s doing x fulfills S’s special obligation constitutes a requiring reason for S to do x, but it doesn’t show that the fact that doing x promotes S’s utility constitutes a requiring reason for S to do x.
You also write: “but that’s only one case” (referring to my TV game show case). Here’s why I think that one case is sufficient.
P1) If the fact that doing x promotes S’s utility constitutes a requiring reason for S to do x, then whenever doing x as opposed to y promotes S’s utility more and S has no reason to do y instead of x, S would be required to do x as opposed to y.
P2) It is not the case that whenever doing x as opposed to y promotes S’s utility more and S has no reason to do y instead of x, S would be required to do x as opposed to y. Take the TV game show example.
C) Therefore, it is not the case that the fact that doing x promotes S’s utility constitutes a requiring reason for S to do x.
Furthermore, I think that if it is not the case that the fact that one’s doing x promotes one’s utility constitutes a requiring reason for one to do x, then there is no pro tanto duty for one to promote one’s utility. There can, of course, be other pro tanto duties (like the duty not to waste one’s life) that at times require one to act in ways that do promote one’s utility. I never denied that agents are sometimes required to do that which promotes their own utility, but that is not to say that agents have a pro tanto duty to promote their own utility per se.
Campbell,
If there is such a norm, I don’t see how this shows that the norm that Jamie states in the pineapple case isn’t essentially agent-centered. After all, what the point about the permissibility of acting negligently in a way that is detrimental to one’s self-interest shows is that whether one has an agent-sacrificing option doesn’t depend on whether or not one has given consent OR “voluntarily forego[ne] the status or consideration that morality would otherwise grant her.” Or do you think that when one acts negligently with respect to one’s welfare, one thereby “voluntarily forego[es] the status or consideration that morality would otherwise grant her”?
Doug, you use this premise in the argument,
P1) If the fact that doing x promotes S’s utility constitutes a requiring reason for S to do x, then whenever doing x as opposed to y promotes S’s utility more and S has no reason to do y instead of x, S would be required to do x as opposed to y.
My point was that commonsense morality requires agents to, among many other things, show concern for their own well-being. What contributes to one’s own well-being includes a particular interpretation of individual utility. Watching TV is probably not valuable in this respect. On the other hand, not squandering your life with drugs or alcohol probably does have utility: it pretty obviously does contribute to your well-being. In short, your TV example assumes a theory of value according to which watching TV has utility. I agree that the view that watching TV is valuable is not a part of commonsense morality. But that does not show that cs-morality does not require agents to show concern for their utility (where utility is the more narrow cs notion).
So, cs-morality (I am claiming) does require agents to show concern for their own well-being. Agents are required to do what contributes to their well-being and what contributes to their well-being has utility for them. But what has utility or value for the individual in cs-morality is not as broad as it is for crude forms of utilitarianism. Further I am not saying that it is a maximizing notion or that agents must maximize this utility. I am not saying that it is the fundamental principle of cs-morality. There are lots of other competing moral requirements on agents and other requirements can trump the concern for one’s well-being.
Now to (3).
3) MAU cannot accommodate agent-sacrificing options because, unlike commonsense morality, MAU holds that agents have a requiring reason to promote their own utility.
The Bob-and-Sue case is one in which MAU accommodates an agent-sacrificing option and cs-morality does not. MAU says that, in this case, I can save your friend rather than mine, when I cannot save both. I can so act from the motive of wanting to benefit you rather than benefit myself. Such a choice would contribute to your utility and diminish my utility. CS-morality says that I cannot save your friend rather than mine. I cannot contribute to your utility and diminish mine. So (3) does seem false. Given (3), it should be MAU that cannot accommodate the sacrifice, for the reasons given in (3).
Mike A.,
Regarding (P1):
I’m confused. You quote (P1), but then go on to argue against the example that I used to support (P2). Did you mean to quote (P2)?
The TV show example was just one example. There was also the case of going to the dentist later rather than sooner. If you go later rather than sooner, you will experience more pain and thus have less utility. Commonsense says that there is nothing morally wrong with going later rather than sooner even if there is no reason not to go sooner and even if that’s not what’s in your best interest. Surely, pain counts as dis-utility and pleasure counts as utility. So (P2) seems pretty plausible to me on the basis of such examples. Perhaps, then, you were rejecting (P1), but, if so, I’ll need you to explain how what you said tells against (P1).
Regarding (3):
The Bob-and-Sue case doesn’t show that MAU accommodates an agent-sacrificing option. It shows that MAU can accommodate a moral option to act in a way that is self-sacrificing. But, as I and others define an agent-sacrificing option, it is a species of agent-centered options, options to do other than that which the agent has most requiring reason to do. See (1) and (2) above. Perhaps, then, you reject (1) or (2) and/or (3). In any case, MAU never allows for such agent-centered options. On MAU, the fact that x-ing promotes someone’s utility (anyone’s utility) always counts as a requiring reason, and, on MAU, agents are always required to do what they have most requiring reason to do. They never have the option to choose what they have less requiring reason to do over what they have more requiring reason to do. Commonsense morality and other agent-centered theories do accommodate such agent-centered options.
Doug, you say,
“But, as I and others define an agent-sacrificing option, it is a species of agent-centered options, options to do other than that which the agent has most requiring reason to do.”
That’s interesting. But the idea that “most requiring reason” and “moral obligation” always coincide in utilitarianism might not be true. If they did always coincide, then we should not expect to find act-utilitarian prisoner’s dilemmas. But there are act-utilitarian prisoner’s dilemmas (strictly, au-near-PD’s). There are situations in which the moral goal of utilitarianism–utility maximization–is worse achieved if everyone follows their act-utilitarian obligations. So, in short, fulfilling those obligations pulls us away from what we, as utilitarians, have most reason to bring about. Here’s one such case (I’ll try not to bog you down with details; I provide a reference below for a more detailed argument).
Act utilitarianism is a forward-looking theory in the sense that it requires us to produce the best of the available futures. Take a hedonistic theory of value along the lines of Mill, where value is measured along the dimensions of both quantity and quality. And suppose that X and Y are act utilitarians acting at different times t and t’. X’s options include causing pleasure p at t for patient Z or causing nothing for Z. Y’s options are causing p at t’ for Z or causing q at t’ for Z. Let pleasure p exceed in quantity pleasure q. But suppose that (pt,qt’) is the best sequence in the sense that the overall quality resulting from varying the experiences makes the sequence exceed in value the sequence (pt,pt’). Act utilitarians X and Y can fulfill their obligations only if they bring about the less-than-best sequence (pt, pt’). But why? Because when X looks to the future, the best sequence (from t onward) requires that he does p at t. But when Y looks to the future, the best sequence for her (from t’ onward) also requires that she does p at t’. Were she to consider the past, she would then do q at t’. But for forward-looking theories, the past is irrelevant (except, of course, as it affects the future). So fulfilling those obligations pulls X and Y away from what they, as utilitarians, have most reason to do (viz., realizing the best utilitarian sequence (pt,qt’)).
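If a toy illustration helps, here is a minimal sketch of the kind of case I have in mind, with entirely made-up numbers (the particular quantities, the variety bonus, and the little sequence_value helper are my own illustrative assumptions, not anything from Rabinowicz’s paper):

# Toy numbers for the X-and-Y case above; all values are invented for illustration.
P, Q = 10, 8              # pleasure p exceeds pleasure q in quantity
VARIETY_BONUS = 5         # extra value (quality) when the two experiences differ

def sequence_value(first, second):
    # Value of a whole sequence: sum of the quantities plus a quality bonus for variety.
    return first + second + (VARIETY_BONUS if first != second else 0)

print(sequence_value(P, Q))   # 23 -- the best sequence, (pt, qt')
print(sequence_value(P, P))   # 20 -- the sequence the two forward-looking agents produce

# Y's deliberation at t' ignores the past, so the variety bonus (which depends on
# what X already did at t) drops out and only the remaining quantities get compared.
y_choice = "p" if P > Q else "q"
print(y_choice)               # "p" -- hence (pt, pt') rather than the best sequence (pt, qt')

The point is just that, once the past is set aside, nothing in Y’s forward-looking calculation registers the variety bonus that makes (pt, qt’) the better sequence.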
Naturally the response to these sorts of dilemmas has been to make act-utilitarianism a temporally neutral theory. Most utilitarians resist this move (see J. Howard Sobel in several places and Lars Bergstrom). But that is beside the current point. Anyway, credit where credit is due. For additional cases and details see W. Rabinowicz, ‘Act-Utilitarian Prisoner’s Dilemmas’, Theoria 55 (1989).