So here is an idea that seems potentially fun. Some people explain their favorite objection to consequentialism (or at any rate an objection they think worth taking seriously) and then others (or I suppose the same person) attempt to explain why they think the objection does not damage consequentialism’s prospects. It would be good if people tried to keep track of whether their favorite consequentialist dodge can help itself to the previous dodges that others have offered or whether it precludes some of them. Perhaps we should number the objections in the order presented.
I’ll start things off. Objection 1: Consequentialism recommends harming one person severely if doing so would prevent many people from suffering very small harms.
David,
I have a question about your objection. Is the following what you have in mind? Consequentialism recommends causing one person to suffer one mild, non-debilitating headache a week for the next sixty years (for a total of roughly 3,120 headaches) rather than causing each of 3,500 people one mild, non-debilitating headache? And let’s assume that having one mild, non-debilitating headache a week for the next sixty years has no negative effects beyond the unpleasantness of the headaches themselves. If it’s this kind of thing that you find objectionable, then I’m not sure that I see the force of the objection. If instead you have something else in mind (say, causing one person the pain and anguish that comes from losing a child versus causing many people mild headaches), then I doubt that consequentialism is necessarily susceptible to the objection. Perhaps there are lower and higher pains that work similarly to Mill’s higher and lower pleasures.
David,
One more thing: perhaps, you should say what the ground rules are. Can the consequentialist appeal to whatever value theory she likes in responding to objections? Also, how are we defining consequentialism here? Is it, by definition, an agent-neutral theory, as some would have it?
Boy, you try to ask questions and all you get are questions in return. Fair enough.
First off, I am a consequentialist so I am not too taken with my own question. But I worry more about the second version you mention. I take it your reply involves saying that a consequentialist could hold that it is not a better state of the world to have 1 person have terrible troubles rather than an arbitrarily large number of people have small troubles. Maybe. But suppose we were convinced that it would be better for a single person to have the big problem rather than lots and lots of paper cuts. Would you resist that thought? Or maybe the thought is that we change the nature of the harm when we bring it together into one person?
I would recommend that defenses of consequentialism appeal only to value theories that people actually find attractive, rather than ones designed merely to get the C out of a fix.
And yes, if people care what my view is on this, I would say that C is necessarily agent-neutral. But if others think otherwise, please use your own understanding of C.
Dave,
Sorry to respond with yet another question, but I can’t resist. You write: “I would recommend that defenses of consequentialism appeal only to value theories that people actually find attractive, rather than ones designed merely to get the C out of a fix.”
What else could make a value theory attractive but its being apt to get C out of a fix?
Dave, are you thinking of one of Larry Temkin’s examples? John Broome’s Weighing Lives discusses a couple of them, and he does a little work to clean them up first. Anyway, those examples do not seem to me to count against consequentialism at all. So maybe you have others in mind.
My favorite objection to consequentialism is that it cannot make sense of supererogation.
Satisficers can account for supererogation, can’t they? Just say we are obligated to bring about consequences that are “good enough”, and to do any better is supererogatory.
Cool idea btw, David.
I mostly object to consequentialism on the basis of what it would mean if I did. HAH!
Jamie wrote: “My favorite objection to consequentialism is that it cannot make sense of supererogation.”
It’s not clear to me that this objection works, even against non-satisficing versions of consequentialism.
Consider a standard formulation of utilitarianism according to which an action is morally right iff no alternative to that action would bring about a higher net utility. You might think that this version of utilitarianism can’t accommodate the intuition that sometimes we do things that go “beyond the call of duty”, and hence deserve extra praise. But it’s not obvious that this is the case.
Suppose Joe has two choices, action A or action B. Both would bring about 100 UTES (the standard measure for utility, of course), and no alternative to either A or B would bring about more UTES. So Joe is morally required to do either A or B, and each of A and B is morally permissible.
If Joe does A, then he gets all the UTES. If Joe does B, then a bunch of deprived, sad little kiddies will get the UTES. Joe does B. He wasn’t required to do B; he could have done A instead without doing anything wrong. But since Joe is less well-off than he would have been had he done A, I think we can all agree that Joe deserves some extra praise, since he did something “beyond the call of duty.”
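Here is a minimal Python sketch of the maximizing criterion just stated; the UTES figures are the ones from Joe’s case, and the function name is my own scaffolding, nothing more:

    # The maximizing criterion above: an act is permissible iff no
    # alternative would bring about more net utility.
    def permissible(act, utilities):
        return all(utilities[act] >= u for u in utilities.values())

    utilities = {"A": 100, "B": 100}  # UTES from Joe's two options

    for act in utilities:
        print(act, "permissible:", permissible(act, utilities))
    # Both A and B come out permissible; the criterion is blind to
    # whether Joe or the kiddies end up with the UTES.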
What’s wrong with this sort of response?
Kris,
I believe that this is the same line that our own Campbell Brown took in his post “Supererogation for Maximisers”. In any case, I don’t think that Joe’s doing B, in your example, is supererogatory. I would argue that in order for P’s doing X to count as supererogatory the following three conditions must be met:
(1) P could permissibly have done some other act, Y.
(2) P is worse off for having done X instead of Y.
(3) P’s doing X is morally superior to P’s doing Y. That is, P has more moral reason to do X than to do Y.
Now, condition (3) isn’t met in your example. On the theory that you’re considering, Joe’s doing B isn’t morally superior to Joe’s doing A. Joe has no more moral reason to do B than to do A. And if condition (3) isn’t met, I don’t see how you can say that Joe has gone beyond the call of duty. To say that he has gone beyond the call of duty is to say that he has done something morally better than what duty required him to do. But, on the theory you’re considering, the performance of B isn’t morally better than the performance of A.
Campbell,
Your question seems a good one to start a new line of discussion. I recommend doing that.
But until then, consider the case of a single person. I think it makes sense to say that what is most valuable for that person, or best for her interests or well-being, can be different from what should represent that person in moral contexts. So, for example, Scanlon and lots of other people have held that some things that are good for me need not make a moral claim on others while other things that are good for me would make such a claim (for example, especially optional goods make no moral claim on others while necessities do make such a claim). I find this intelligible. And I see the sort of intuitions that would drive one to distinguish what was good (here, for a person) from what morally matters about that person. If one combined this with a view about how one moves from “good for” to “good overall” that held that the latter is just added up from the former, then it seems we could have an intelligible story about how what was “best overall” differed from what morality required us to bring about.
Is the thought that this story is unintelligible or just that one could have a notion of “good overall” which had no other ties but to what morality recommends?
Richard: yes, I guess satisficers are consequentialists, and they can handle supererogation. I wasn’t counting satisficing as consequentialist, but most people do.
Kris: I agree with Doug that a supererogatory act is supposed to be better than the (merely obligatory) alternative. It’s interesting, though, that a consequentialist theory can deliver the right verdict on supererogatory cases — my insistence that supererogation has to be better is theoretic gloss, so it’s not as costly to a consequentialist to reject it.
David: I don’t see how your point about ‘good for’ is relevant. Suppose we agree about a complete theory of what is good for a person. Now we need to know whether and in what way the good for persons contributes to what is good. Maybe it’s at that point that we choose our theory of the good just to make sure we can maintain C. Anyway, that’s my plan.
Jamie,
I am unsure if you are saying that one can construct a notion of “the good” that never comes apart from what morality recommends (which I would not disagree with) or that there is no coherent notion of the good that avoids the above. I was trying to sketch an instance of the latter. So of course one “could” choose a theory of the good merely in order to maintain C. The question, I am thinking, is if there is enough content to our ordinary notions of “the good” to present us with coherent alternatives to doing that. I was presenting the view that “overall good” is just determined by adding together “good for” as such a coherent alternative.
Kris,
As a good friend of mine is fond of saying, great minds think alike but dipsticks seldom differ. (I wonder if y’all know what a dipstick is.)
Dave,
I like your question. I’ll have to think about it, and maybe answer in a new discussion thread.
For now, let me try to respond to Objection 1. I take it the objection is that Consequentialism has the following allegedly counterintuitive implication:
(P) There exists some number N such that it’s permissible to allow one person to suffer severe pain in order to prevent N people from suffering mild headaches.
One response would be to argue, as follows, that (P) isn’t as counterintuitive as it initially may seem. Consider the following sequence of states of affairs. In X1 one person suffers severe pain; in X2 one thousand people suffer slightly less severe pain; in X3 one million people suffer slightly less severe pain; and so on. If we go far enough along the sequence we’ll arrive at a state of affairs in which people are suffering only mild headaches. Now, the consequentialist may argue as follows:
(1) For any n, it is permissible to allow Xn in order to prevent Xn+1.
(2) For any X,Y,Z, if it is permissible to allow X in order to prevent Y, and it is permissible to allow Y in order to prevent Z, then it is permissible to allow X in order to prevent Z.
Therefore,
(3) There exists some Xn such that (i) it is permissible to allow X1 in order to prevent Xn, and (ii) in Xn people suffer only mild headaches.
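To see the arithmetic a maximizing consequentialist would put behind premise (1), here is a toy Python sketch; the step sizes (pain drops by a tenth, population multiplies by a thousand) are invented purely for illustration, and I assume, as the consequentialist would, that pain simply sums across people:

    pain, people = 100.0, 1
    for n in range(1, 8):
        print(f"X{n}: {people} people at pain {pain:.1f}, total {pain * people:.1f}")
        pain *= 0.9      # each step: slightly less severe pain...
        people *= 1000   # ...suffered by vastly more people
    # Total pain rises at every step (since 0.9 * 1000 > 1), so the
    # maximizer prefers each Xn to Xn+1, and transitivity (premise 2)
    # chains those judgments into conclusion (3).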
Oh, I should add that I can’t claim any credit for the argument above. Broome gives a similar argument in Weighing Lives, as Jamie already noted, though Broome’s argument is cast in terms of betterness (of course). And, as I recall, he borrows the argument from Temkin. Interestingly, Temkin puts the argument to a different purpose: namely, to discredit the second premise (i.e. the one about transitivity) — one man’s modus ponens is another’s modus tollens.
Campbell: Sorry for posting without representing. I didn’t catch your earlier post. What is a dipstick? Is that what you use to check oil in your car?
Doug, Jamie: Small question. Doug wrote, “(3) P’s doing X is morally superior to P’s doing Y. That is, P has more moral reason to do X than to do Y.”
I’m not sure I understand the clause. Maybe I don’t understand what a moral reason is supposed to be. I thought that the following claim was generally accepted, regardless of whether one is a consequentialist, Kantian, Rossian, egoist, etc.:
S’s doing A is morally obligatory iff S has more reason to do A than any alternative to A.
(That is, S’s reasons for doing A are collectively stronger than S’s reasons for doing any alternative to A.)
How could S have two options, X and Y, have more moral reason to do X than Y, and yet both be permissible? It seems to me that something has to give. So I’m suspicious that you’ve given the correct account of what it is for an action to be supererogatory.
My thought was that the correct account would look something like this obviously rough analysis:
(1) P could permissibly have done some other act, Y.
(2) P is worse off for having done X instead of Y.
(3) P deserves praise or displays virtue or deserves to be thought of highly by others who are aware of what P has done because P did X instead of Y.
Supererogatory acts are acts that we have reason to evaluate in certain ways; they aren’t acts that carry “extra moral weight”, that is, stronger reasons for performing them than other permissible actions, because, if they were, those other actions wouldn’t actually be permissible. Or so it seems to me.
The utilitarian can consistently adopt this account of what it is to be supererogatory. And if she does, then she can apply this account to Campbell-style cases, and get what seems to me to be the correct results.
Jamie: related question. What does ‘better’ mean in this context? You don’t mean “have a higher utility”, I guess?
(There are good reasons for a non-consequentialist not to adopt this account of supererogatory acts, which is of course only suggested by what you said:
(1) P could permissibly have done some other act, Y.
(2) P is worse off for having done X instead of Y.
(3) X has a higher utility [or brings about more intrinsic value] than Y.
Suppose I have two options, benefit myself (I get 50 UTES) or benefit small, sad child (she gets 25 UTES). No other options are available, no other consequences of the options are relevant, blah blah. I can imagine some non-consequentialist claiming that both are permissible, but that I deserve extra praise if I give the UTES to the kid. But then I don’t do something “better” in the sense of generating more utility; in fact, I do something worse in that sense. )
sh*%*t. Doug on the other posts says this as well, I think. So I’m gonna shut up for a while and read the old post. My apologies.
Dave,
You wrote: “I was presenting the view that ‘overall good’ is just determined by adding together ‘good for’ as such a coherent alternative.”
That is an alternative, in the sense of being a coherent, substantive view about what overall good is. However, I’m afraid I don’t see your point. That there is such a view doesn’t seem to me to call into question in any way the methodology of picking a value theory so as to repair defects in C.
Suppose you say, “I think that Aristotle was correct about what makes a human life better.” Suppose I agree with you. Then you say, “So that means that we ought to maximize the net sum of Aristotle-eudaimonia in the world.” I reply, “No, that doesn’t seem right, because it would mean we’d have to kill innocent people in order to prevent terrorists from blah blah blah.” You agree with me. You add, “I guess that means C is false.” But now I say, “Not at all, it just means that the good for persons is not the whole story about the general (overall) good.”
Isn’t this the right general description of the dialectic?
Kris,
I think it is pretty plain that in ordinary cases of supererogation the supererogatory act is better than the merely obligatory one. For example, it is common sense that an assistant professor of philosophy who earns $48,000/year is not morally obligated to give more than $10,000/year to famine relief. However, giving $15,000 this year rather than $10,000 is patently better, morally speaking, and there is no moral reason against it.
It seems to me that if we add Consequentialism to this — I should now say maximizing Consequentialism — we find that there can’t be any examples of supererogation. But this could be wrong. Maybe I just haven’t seen how to understand or represent supererogation within Consequentialism.
Kris asked: “What does ‘better’ mean in this context? You don’t mean ‘have a higher utility’, I guess?”
Well, that would be one substantive view about which things are better than which, but it isn’t a definition. ‘Better’ is the comparative of ‘good’. There are lots of substantive ethical theories about what makes things good. Are you asking me to endorse one, or are you asking me something else?
Jamie and Campbell,
Maybe I need to back up and try to understand the claims you guys want to make about goodness and morality.
One claim would be that there is no attractive and coherent notion of “the overall good” that could come apart from what one morally recommends.
If that were the claim, then lots of people have thought that they were using the term “the overall good” in a way that allows there to be such space. They thought the notion of “overall good” came with rules for its use (e.g., that it is nothing but the sum of individual goods) such that it made sense to say that one might or might not morally recommend maximizing the good. I talk to people all the time who feel that they are using these notions such that they can come apart.
I take it you guys want to say that although these people think they have an attractive coherent notion of the good which can diverge from their moral recommendations, in fact they have no such notion.
This is why I thought it relevant to come up with a seemingly attractive and coherent notion of the good which can come apart from a person’s moral recommendations.
But let me come at it another way. What would one have to show to show that your view is wrong?
Hi Jamie,
I think I may have asked a bad question. Here’s (maybe) a better question. You wrote, “the supererogatory act must be better than the alternatives.” I want you to tell me what you mean by “better”. Telling me that it is the comparative of “good” doesn’t help either! 🙂 There are lots of kinds of goodness, for example, aesthetic goodness, extrinsic goodness, perfectionist value, signatory value, goodness for Kris, etc., and accordingly there are lots of ways in which an action can be better than another. Some people talk about the “moral value” or “deontic value” of an action, and claim that some actions enjoy more of this kind of value. Which kind of way of being better is necessary for an action to be supererogatory?
That’s the question I wanted to ask.
One way of answering it would be to say that you are focusing on Moorean intrinsic value, and that in order for an act to be supererogatory, it has to generate more Moorean intrinsic value than its alternatives. That answer seems to me problematic for the same reason that “has a higher utility” seemed to me problematic.
In the example you gave in the previous post, one obvious sense in which giving $15,000 is better is that it generates more intrinsic value than giving $10,000. But in general (it seems to me) we non-consequentialists don’t think supererogatory acts have to be better in this sense (Doug’s point, if I understand him correctly, from the old post, which I endorsed without realizing Doug beat me to it a few posts ago), so it seems odd to use something that most non-consequentialists don’t believe in, in an argument against consequentialism.
So I assumed that there was some other kind of value you were focusing on, and so I’m asking you to tell me what it is.
Kris,
Oh, I see. I meant ‘better’ in the ethical sense, not the aesthetic sense or any of the others you mentioned. Certainly not intrinsic value, but maybe whatever Moore meant by that minus the ‘intrinsic’? But I have a very hard time figuring out what Moore did mean.
In the example, I thought it was obviously intuitively better (ethically better) to give the $15,000. That is much more obvious, as far as I’m concerned, than any particular axiology, so I think it’s a good idea to rely on it when you’re trying to figure out what axiology to accept.
Dave,
Just by the way, I doubt that the ordinary concept of “individual good” has enough content to make sense of the idea of summing individual goods. We have to give it more content for our purposes. But suppose your friends have managed to do that.
So now your friends, having identified the overall good as they understand it, ask themselves whether it is morally right to maximize it. Right? Suppose they think it isn’t. For example, suppose one of them suggests that instead of maximizing it, we should prefer the state in which it is most equally distributed. Doesn’t this mean that the egalitarian friend thinks the equal distribution of individual good is itself good? And isn’t he saying that what he’d identified as the overall good isn’t really good after all — equality in its distribution is what’s really good? It would be very puzzling if he continued to insist that the realization of the overall good doesn’t matter but only its distribution. That’s because our contact with the overall good comes via its connection with choice.
Hi, sorry for butting in, but the textbox on the bottom just seemed too inviting 🙂
I am a… kind of utilitarian myself, and the main problem I see with utilitarianism is death.
How would killing all the miserable people in the world rate in UTES?
Jamie,
Well suppose (forget about what is actual) that the majority of people enmeshed in these sorts of debates in fact distinguished what they thought created the most good and what was morally best. Suppose they thought it mattered how the same amount of good (or bad) was brought about, whether it was caused or allowed, intended or foreseen. And suppose they said loudly that the right was not determined by the good and that what was most good need not be what was most right and suppose they said this was what was distinctive of their view.
In that case the problem would not be that what such people said was puzzling for it would be the common way of talking. And such people would be asserting that they had other and stronger connections to the notion of overall good besides its connection to choice.
Now in such a world would you say that your view was just wrong? Or is your view that even in such a world there is nothing for them to mean by “overall good” except for its connection with choice?
One other way of putting it. I suppose at least that the above way of talking was the common way of talking 20 or 30 years ago. Do you think that despite being the common way of talking, there was just no clear alternative idea connected with “overall good” aside from its connection with choice even back then?
Or is the thought that consequentialists, as it were, lost their nerve, and started allowing lots of considerations to start counting under the heading of “goodness” that once were thought to be disallowed under that heading (distribution considerations, eliminating “nasty preferences,” and such). And that once this was done, the only remaining connection with goodness is “what is most worth choosing”. On this story, those who insist on a distinction between goodness and “what is most worth choosing” simply have not kept up with the changing times.
Dave,
As to the first question:
Well, look, in some possible world, people mean ‘avuncular’ by ‘good’. This modal fact does not seem to me to count against my view.
As to the second question:
I think there was some pretty serious confusion starting in the early seventies, myself. Partly Rawls’s fault, maybe partly due to Bernard Williams. Philippa Foot (“Utilitarianism and the Virtues”) tried to straighten everyone out. Campbell and I are helping her.
As a generalisation of Christer’s question, and more directly inspired by John Broome’s forthcoming article in the Journal of Political Philosophy: how should we value population? How do we evaluate births, deaths, and policies that affect the rates of these, from a consequentialist perspective?
My own intuitions for consequentialism tend to presuppose the existence of persons: I want to maximise some (appropriately defined and aggregated) measure of welfare precisely *because* there exist people with an interest in it. I’m not sure how that allows me to make judgements about whether these people should exist, without:
(a) a new metaphysical conception of the person; or
(b) a non-consequentialist theory.
On the supererogation issue, I wonder whether consequentialism even needs a notion of obligation in the sense being sought here. Why can’t it simply say:
(1) some things are (morally) better than others;
(2) some things rank low (high) enough on this spectrum that we inflict social sanctions if people do (don’t do) them;
(3) some things rank high (low) enough that we give people social rewards if they do (don’t do) them; and
(4) we draw these lines on the basis of where (from our consequentialist perspective) they would be most usefully drawn, taking into account both their direct effects and effects on incentives.
On this account, the line between “morally required” and merely “permitted” isn’t actually a moral one at all: it just needs to appear to be one in order to work.
Kris,
I take a moral reason to be a consideration that counts morally in favor of performing the act. So, for instance, on utilitarianism the fact that doing X will increase someone’s utility counts as a (not necessarily decisive) moral reason for doing X, but the fact that doing X will make someone, say, more knowledgeable does not in itself count as a moral reason to do X. Note also that on utilitarianism the fact that doing B will distribute a given amount of utility in one way rather than another (amongst a number of others rather than amongst only you) does NOT count as a moral reason in favor of doing B. Of course, on commonsense morality, things are quite different. On commonsense morality, there is a self-other asymmetry, and so the fact that doing B will distribute a given amount of utility to others as opposed to yourself does count as a moral reason for doing B.
Now here’s what I think is going on when you and Campbell claim that utilitarianism can accommodate supererogatory acts. You cite examples where (1) an agent may permissibly do either A or B, (2) the agent would be better off if she does A, and (3) others would be better off if she does B. You then claim that doing B is supererogatory because doing what’s better for others as opposed to yourself is, we think, something that it’s morally good to do and hence morally praiseworthy. But distributing a given amount of utility to others rather than yourself isn’t something it’s morally good to do if utilitarianism is right. And note that appealing to moral praiseworthiness doesn’t help, because it’s not morally praiseworthy if it’s not morally good to do. If there’s no moral reason/consideration in favor of doing B over A, why should we consider it morally praiseworthy to do B rather than A?
So in order to show that utilitarianism can accommodate supererogatory acts, one must show that conditions (1)-(3) below can all be met on utilitarianism:
X is supererogatory iff…
(1) P can permissibly do either X or Y.
(2) P’s doing X as opposed to Y is in some way more costly, demanding, or challenging for P.
(3) P’s doing X is morally superior to P’s doing Y. That is, there is some moral reason/consideration that speaks in favor of P’s doing X rather than Y.
As I see it, you’ve shown that utilitarianism can meet conditions (1) and (2) but not (3). And I think that you’ll never show that utilitarianism can meet all of (1)-(3), because, in any case where, on utilitarianism, condition (1) is met, condition (3) will not be met.
Kris,
One more thing: You say, “Supererogatory acts…aren’t acts that carry ‘extra moral weight’, that is, stronger reasons for performing them than other permissible actions, because, if they were, those other actions wouldn’t actually be permissible. Or so it seems to me.”
I think that that’s exactly what they are. Take Jamie’s example of giving to famine relief. The fact that giving more would save more lives does carry extra moral weight. Now you want to claim that if there is, on balance, more moral reason to give $15,000 as opposed to $10,000, then you would be required to give $15,000. Well, this is what the consequentialist must say, but there lies the problem for consequentialism in accommodating supererogation. The consequentialist has no way to accommodate the idea that it could be morally permissible to do either of two acts where there is more moral reason to perform one rather than the other.
Doug,
Suppose that the utilities in X and Y are as follows (there are only two people):
X: me 10, you 10
Y: me 20, you 0
And suppose I choose X. Then it seems that, on utilitarianism, your conditions (1) – (3) are all met. Perhaps you’ll say that (3) is not met. But, by your own admission, utilitarianism implies that “the fact that doing X will increase someone’s utility counts as a (not necessarily decisive) moral reason for doing X.” It follows that, according to utilitarianism, there is some moral reason that speaks in favour of my choosing X rather than Y, because my choosing X will increase your utility. Am I missing something?
Campbell,
I apologize for being sloppy. What I meant to say, and what I did say earlier, was that P has to have more moral reason to do X than to do Y. The fact that there is some moral reason in favor of doing X as opposed to Y is, as you point out, insufficient for making P’s doing X morally superior to P’s doing Y.
So, condition (3) should read: “P’s doing X is morally superior to P’s doing Y. That is, P has more moral reason to do X than to do Y.”
So do you see any way for utilitarianism to meet conditions (1)-(3) now? And do you accept this definition of supererogation?
Doug,
Thanks for the clarification. To answer your questions: I agree that utilitarianism is inconsistent with the conjunction of (1), (2), and (3); but I’m reluctant to accept your definition of supererogation. It seems to me that your definition makes supererogation impossible, because (1) is inconsistent with (3). Though I think that just takes us back to old disagreements.
Let me ask you a different question: what’s wrong with your original (sloppy) definition of supererogation? Why isn’t it sufficient that there’s some moral reason in favour of X?
I want my def. of supererogation to contain the idea that it involves doing more than one is required to do. I think of that as the main thing about the concept.
And so I like the set up where we contrast X with Y and say both are morally permissible (for an act is not supererogatory if it is the only permissible act because then it is not a case of doing more than one is obligated to do) and the super act is, in some sense, more morally recommended than Y.
I suppose we could have a notion of supererogation in other areas as well–prudential super, etc. but if we are talking about moral super, then it seems the act must be better on that dimension.
I hesitate a bit, or just need to think more about, the requirement that the super must be somewhat costly. Suppose I could press button A or B and that both are acceptable and it makes no difference to me, but B makes the world somewhat better than A. Then pressing B seems plausibly to be super to me.
Doug wrote (sorry about the long quote): “Kris, You say, ‘Supererogatory acts…aren’t acts that carry ‘extra moral weight’, that is, stronger reasons for performing them than other permissible actions, because, if they were, those other actions wouldn’t actually be permissible. Or so it seems to me.’
I think that that’s exactly what they are. Take Jamie’s example of giving to famine relief. The fact that giving more would save more lives does carry extra moral weight. Now you want to claim that if there is, on balance, more moral reason to give $15,000 as opposed to $10,000, then you would be required to give $15,000. Well, this is what the consequentialist must say, but there lies the problem for consequentialism in accommodating supererogation. The consequentialist has no way to accommodate the idea that it could be morally permissible to do either of two acts where there is more moral reason to perform one rather than the other.”
If you are right about this, then it seems to me that the consequentialist is not the only person who has a problem making sense of the superogatory. Any person who accepts the following claim has this problem:
(RO): S’s doing A is morally obligatory iff S’s reasons for doing A are stronger than S’s reasons for doing any alternative to A.
According to RO, you ought to do what you have the most reason to do.
You don’t have to be a consequentialist to accept RO. For example, I think Ross accepted RO, and he wasn’t a consequentialist. I think I accept RO, but since I think that there are many other sources of reason for action besides facts about the intrinsic value generated by the actions, I don’t like consequentialism.
I would have also thought that RO is antecedently much more plausible than consequentialism. (Consequentialism seems to imply RO but the converse isn’t true.) When I squint at RO long enough, it even sometimes looks analytic! (Maybe I need to get my eyes checked.) But doesn’t RO face the same troubles accounting for the supererogatory if you are right? If so, that’s an interesting result: there’s a problem for non-consequentialists as well.
Kris,
You say, “Any person who accepts the following claim has this problem [the problem of accounting for supererogation as I’ve defined it]:
(RO): S’s doing A is morally obligatory iff S’s reasons for doing A are stronger than S’s reasons for doing any alternative to A.”
I don’t see this at all. Take Jamie’s famine relief example again. Perhaps, given my modest salary, giving $15,000 to famine relief would mean that I couldn’t purchase the books and journals that are crucial to my professional development. And suppose that I have more reason, all things considered, to provide for my professional development than I do to contribute to famine relief. This is consistent with my having more moral reason to contribute to famine relief. Now, given your RO, I wouldn’t be morally required to give $15,000 to famine relief since I have more reason, all things considered, to provide for my professional development than I do to contribute to famine relief. Nevertheless, I can account for the fact that giving $15,000 to famine relief is morally permissible, for surely the following is also true.
(MO): S’s doing A is morally permissible if S has more moral reason to do A than some other morally permissible alternative.
Thus, giving $15,000 will be supererogatory on my definition. And not only have I shown that this is compatible with your RO; indeed, I’ve shown that appealing to your RO helps me in accommodating supererogation.
Campbell: Would you accept that an act must be morally superior to some permissible alternative in order to be supererogatory? It seems that you must accept this even if you don’t accept the way I’ve tried to cash out the idea of moral superiority. In any case, it seems clear that utilitarianism can never hold that it is morally permissible to perform an act that is morally inferior to some other permissible alternative. My main point, then, is that it’s not enough for you or Kris to say that utilitarianism allows for the possibility that an act that produces more utility for others can be a permissible alternative to some other permissible act. You have to show in addition that, on utilitarianism, an act that produces more utility for others is in some sense morally superior to one that produces just as much utility overall but less utility for others. But it seems to me that utilitarianism is committed to the view that in no sense is an act that produces just as much overall utility as another ever morally superior to that other.
David: I guess that I should probably think about condition (2) more as well. In any case, I’m not insistent on it, but I am insistent on conditions (1) and (3).
Kris,
I’m sorry, but I misread your ‘iff’ as ‘if.’ Okay, I see that anyone who accepts RO has the problem. But I don’t find RO as obvious as you do, for I believe that it’s at least conceptually possible that non-moral reasons can sometimes override moral reasons, in which case RO loses all of its plausibility.
Doug,
Fair enough. If we allow for the possibility that non-moral reasons can sometimes override moral ones, RO does seem less plausible. (However, I’m not sure I want to follow you here. Maybe more on why later.) But there are two theses in the neighborhood that still seem plausible even granting this possibility. And both these theses face the same sort of worry we’ve been talking about.
MRO: S’s doing A is morally obligatory if and only if S’s moral reasons for doing A are stronger than S’s moral reasons for doing any alternative to A.
I think it’s clear at this point that MRO faces the worry we’ve been discussing just as much as consequentialism does; it’s more plausible than consequentialism; and MRO isn’t problematized by the possibility you mentioned in the way that RO is.
If we are willing to allow that there are moral reasons and non-moral reasons, and that sometimes one kind of reason can trump another, then it seems to me that we should also be willing to allow that there is a kind of oughtness — call it “all things considered” oughtness (sometimes also called “just plain ought[ness]”) — that some actions enjoy: some actions we just plain ought to do. Accordingly, we can entertain the following theory:
ARO: S’s doing A is “all things considered” obligatory if and only if S’s reasons for doing A are stronger than S’s reasons for doing any alternative to A.
ARO also seems plausible; when I squint at ARO, it looks analytic; and it still looks like this even when I open my eyes all the way!
Now let’s consider a suggestion by David, “I suppose we could have a notion of supererogation in other areas as well–prudential super, etc.”
I think this is right; if we have a notion of duty in other areas, we should also have the notion of “going beyond that duty” as well. So if we have a notion of an all-things-considered just-plain-ought, we also have the notion of an all-things-considered supererogatory action.
And ARO, despite its plausibility, will have trouble accounting for this as well — provided that we assume (as I don’t think we should) that supererogatory acts are ones that we have more reason to do than their alternatives.
Kris,
I think that MRO is problematized by the possibility that non-moral reasons can override moral reasons, because it seems to me that both of the following are true:
(RO*): S’s doing A is morally obligatory only if S has most reason, all things considered, to do A — where we weigh both moral and non-moral reasons to determine what S has most reason, all things considered, to do. (It would be very odd if fulfilling one’s moral obligations might turn out to be contrary to what one has most reason to do, all things considered.)
(MO): S’s doing A is morally permissible if S has more moral reason to do A than some other morally permissible alternative. (It would be very odd if doing what one had more moral reason to do over what one had less moral reason to do might turn out to be morally impermissible.)
Now, if these two, RO* and MO, are true, and if non-moral reasons can override moral reasons, then the following must be false.
(MRO*): S’s doing A is morally obligatory if S has most moral reason to do A.
And so the biconditional expressed by MRO would be false. So it seems that MRO is problematized by the possibility that non-moral reasons can override moral reasons. And whether this is indeed a possibility is probably where we disagree.
I am inclined to think the initial question, “What is your favorite objection to consequentialism?”, is a poor one.
Sorry David 😉
I take consequentialism to refer to a type of ethical theory (more like a genus, really), namely theories that hold that what is important in assessing the morality of actions is their consequences. (And only their consequences? I’m not sure on this.)
Consequentialist theories are then made up of three bits:
a theory of scope: when to count that which has value;
a theory of value: what has value;
a theory of aggregation: how you count it.
So for example the distinction between classical utilitarianism and ethical egoism is simply a difference in their theories of scope: the utilitarian (plausibly in my opinion) insists that anything which generates the thing of value must be counted; the ethical egoist insists that just the individual making the decision should be counted.
Satisficing is a theory of aggregation, so the distinction between a conventional maximising utilitarian and a satisficing utilitarian comes down simply to their theories of aggregation.
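Here is a small Python sketch of this three-part decomposition; the class and the toy outcome are scaffolding of my own, offered only to make the genus/species point concrete:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ConsequentialistTheory:
        scope: Callable      # whose value counts
        value: Callable      # what has value
        aggregate: Callable  # how the values are combined

        def evaluate(self, outcome, agent):
            counted = [x for x in outcome if self.scope(x, agent)]
            return self.aggregate(self.value(x) for x in counted)

    # Classical utilitarianism and ethical egoism differ only in scope.
    util = ConsequentialistTheory(lambda x, a: True,
                                  lambda x: x["pleasure"], sum)
    egoism = ConsequentialistTheory(lambda x, a: x["person"] == a,
                                    lambda x: x["pleasure"], sum)

    outcome = [{"person": "me", "pleasure": 5},
               {"person": "you", "pleasure": 10}]
    print(util.evaluate(outcome, "me"))    # 15: everyone counts
    print(egoism.evaluate(outcome, "me"))  # 5: only the agent counts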
I suspect that some of the confusion above was generated by sloppy usage of both the terms “utilitarianism” and “consequentialism”; instead we ought to try to say, for example, “maximising hedonistic utilitarianism” when we mean that and the criticism is aimed at that theory.
I am not sure how or indeed if you can go about criticising consequentialism in general. (How do you criticise a type of theory?)
David, to answer your question, I would say that one criticises a type of theory by criticising whatever it is that every token of that type has in common. (There must be something they all have in common; otherwise there would be no type of which they all were tokens.) In the case of Consequentialist theories, which, as you put it, “hold that what is important in assessing the morality of actions is their consequences”, one may criticise all theories of this type by showing that that’s not what is important. Perhaps that’s a tall order; I’m inclined to think it is. But I don’t think any difficulty arises merely from the fact that “Consequentialism” names a type. Surely, we criticise types of things all the time.
It is perhaps an interesting question how possible it is to criticize C as a class rather than merely this or that version. I suppose this has to do with how much common content we think gets packed into the meaning of C and how much is allowed to vary between instances of C. This, I suppose, partly explains the vast literature on what is constitutive of C and what is not.
It should be noted that some of the most influential writers here aim to be arguing for or against C itself. Suppose, for example, that there were a problem with agent-centered restrictions. If that were so, it would force the correct ethical theory to be, in important ways, close to a C-type view.
Surely, I would say, even if C is too unwieldy to argue against all at once, it is possible to argue for or against views that are closer or further away from the core of the view.
Thank you Campbell, in particular for making the type token distinction, I knew there was a better way to put this, and my students have been struggling with the distinction in terms of genus/species.
And apologies David Sobel, you are of course right that C can be argued against.
I suspect my skepticism about a grand objection to C is a reflection of a general skepticism about grand theorising, I’m inclined to think if you want to object to C then you are best to chip away at it token by token…
On that note here are several common objections to varieties of C:
Demandingness objection:
That C is too demanding. I think there are three variants of this objection ranging from least interesting to most:
1. C is quite hard work
2. C requires me to do things which I think I am obliged not to do (i.e., the family obligations objection)
3. Poverty of the moral universe (i.e., on some varieties of C every decision, whether it is to play the banjo or the flute, turns out to be a moral decision)
Responses?
David Hunter: I don’t understand objection (3). Could you elaborate a bit more?
Doug: you are right that we disagree about whether moral reasons can be over-ridden in this way. I want to think about this some more.
Returning to the issue of whether supererogatory actions have to be “better” in some ethically relevant sense, I ask you to consider the following case.
Joe and I are equally deserving of some UTES. We both loves us some UTES. In fact, we are pretty much indiscernible with respect to any morally relevant feature. I have a UTES distributor: I can either give Joe the UTES or I can give them to me. I haven’t promised to give Joe any UTES or promised that I wouldn’t. Joe doesn’t even know that I have this choice. There’s no one else to give them to. Those are my two choices. I give Joe the UTES.
What a nice, unselfish thing for me to do. I seem to deserve some praise because I gave them to Joe. I didn’t have to give them to Joe; he deserved them no more than I did, and the total amount of UTES produced is the same either way. So both actions are permissible, and neither seems morally better than the other. Still, giving the UTES to Joe seems to merit a kind of praise, which is why I want to say it is supererogatory.
So it’s not the case that all supererogatory actions have to be “ethically better” than their alternatives.
Kris,
I agree that you deserve moral praise for giving the UTES to Joe instead of yourself, but that’s because I’m not a utilitarian. And, in order to show that there can be supererogation on utilitarianism, you need to show that the utilitarian should think that your act of giving the UTES to Joe instead of yourself is praiseworthy. But, on utilitarianism, a disposition to prefer distributing UTES to others rather than yourself when the total utility will be the same either way isn’t a disposition that increases your likelihood of maximizing utility. So, in what sense is it praiseworthy on utilitarian grounds? In order to show that utilitarianism can accommodate supererogatory acts, you must show:
Not only…
(1) that, according to utilitarianism, S’s doing X and S’s doing Y are both morally permissible.
But also…
(2) that, according to utilitarianism, S’s doing X is morally better than S’s doing Y.
As far as I can tell, you haven’t demonstrated (2), but rather you’ve demonstrated only that, according to our commonsense moral intuitions, your distributing the UTES to Joe is better than your distributing the UTES to yourself.
Kris,
Exchange “more morally praiseworthy” for “morally better” in (2) above. I don’t see that it makes any difference.
Kris,
One more thing: I do think that giving the UTES to Joe is ethically better than giving the UTES to yourself; it’s just not ethically better if utilitarianism is true. I agree with commonsense morality, here, in accepting what Slote calls the self-other asymmetry. As Slote puts it, “our ordinary thinking about morality assigns no positive value to the well-being or happiness of the moral agent of the sort it clearly assigns to the well-being or happiness of everyone other than the agent.”
Maybe I’m beating a dead horse. (Kris: when did you stop beating your dead horse? Dumb joke. Anyways…)
Doug wrote, “I agree that you deserve moral praise for giving the UTES to Joe instead of yourself, but that’s because I’m not a utilitarian. And, in order to show that there can be supererogation on utilitarianism, you need to show that the utilitarian should think that your act of giving the UTES to Joe instead of yourself is praiseworthy. But, on utilitarianism, a disposition to prefer distributing UTES to others rather than yourself when the total utility will be the same either way isn’t a disposition that increases your likelihood of maximizing utility. So, in what sense is it praiseworthy on utilitarian grounds?”
Here’s something that seems to me important and worth emphasizing. Utilitarianism, as standardly formulated, is a theory about what makes right actions right, wrong actions wrong, and obligatory actions obligatory. It’s not a theory about what makes it appropriate for agents to have certain evaluative attitudes. Utilitarianism as standardly formulated looks like this:
(UTE): An action is wrong iff some alternative to that action generates more utility.
This theory doesn’t say anything about what evaluative attitudes we should have. I take that to be obvious.
To say that something is praiseworthy is to say that it deserves a positive evaluation; it is to say that certain reactive attitudes towards the thing in question are justified or appropriate.
So we might naturally wonder when our evaluative attitudes are appropriate or justified. We might even want to formulate theories that answer the question “when is an attitude towards something appropriate or justified?” But note again that UTE is absolutely silent on this question. It doesn’t have a clause that says that an attitude towards an action is appropriate just in case …..
Now one *could* *supplement* UTE with another theory:
(UTE-EVAL): An action is praiseworthy to the extent that it promotes utility; the more utility an action promotes, the more deserving of positive evaluation it is.
UTE-EVAL and UTE do form a nice package, and it may be that you can argue from UTE to UTE-EVAL, but you would obviously need additional premises. The two views are logically independent of each other.
The moral I draw is that the utilitarian’s account of right and wrong actions and the utilitarian’s account of when actions deserve positive evaluation can come apart; it may be that not all permissible actions are equally praiseworthy — and this possibility is consistent with utilitarianism. So even if we insist that there must be some sense in which a supererogatory action is morally better (and Doug says, “Exchange ‘more morally praiseworthy’ for ‘morally better’ in (2) above. I don’t see that it makes any difference.”), it’s not obvious that the utilitarian can’t account for this difference.
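A toy model may make the independence vivid; the “praise tracks utility for others” supplement below is invented, chosen only to show that a wrongness clause like UTE settles nothing about evaluation:

    utilities = {"A": {"agent": 100, "others": 0},
                 "B": {"agent": 0, "others": 100}}

    def total(act):
        return sum(utilities[act].values())

    def wrong(act):  # UTE: wrong iff some alternative has more total utility
        return any(total(b) > total(act) for b in utilities)

    def praise(act):  # a non-UTE evaluative supplement (invented)
        return utilities[act]["others"]

    for act in utilities:
        print(act, "wrong:", wrong(act), " praise score:", praise(act))
    # A and B are both permissible by UTE, yet they differ in praise
    # score: the two theories answer different questions.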
Now, just for the record, I’m not a utilitarian or a consequentialist. I guess I’m some sort of messy pluralist about duties, values, reasons, all of which can conflict, override, overwhelm, and perplex. But I’m not convinced that the supererogation argument against consequentialism works.
Kris,
Fair enough. I agree with most everything that you’ve said in your most recent comment. So, in order to account for supererogation, UTE would need to be supplemented with some sort of account of when an act is morally praiseworthy, specifically, an account that would allow one act to be more morally praiseworthy than another despite the two being equal in their utility production. And what I doubt is that a utilitarian would be happy with any such account. Furthermore, I doubt that any such account would be consistent with the spirit of utilitarianism. But, in any case, I concede, then, that I haven’t shown that utilitarianism cannot accommodate supererogatory acts. To do so, I would need to go back to and defend my original definition of supererogation, which holds that there has to be more moral reason to do X than to do Y, and then further argue that utilitarianism is committed to an account of moral reasons that doesn’t allow for there to be more moral reason to do X than to do Y when X and Y are equal in their utility production. Perhaps I should think about this more and then post something on this topic later.
Hi Doug,
” And what I doubt is that a utilitarian would be happy with any such account. Furthermore, I doubt that any such account would be consistent with the spirit of utilitarianism.”
Here’s an account that’s worth considering (if you are a utilitarian!):
An action is praiseworthy iff the action is caused by a character trait C, the possession of which tends to lead to utility-maximizing actions.
(This is very rough, but it is in the “spirit of utilitarianism”.)
Arguably, if one has a long-standing preference for helping others as opposed to oneself, one will be more likely to maximize utility.
Let’s stipulate that in the case just described, my action was caused by the relevant utility-producing character trait. It then counts as praiseworthy, even though it does not produce more utility than its alternative.
Obviously, very rough. But if something like this could be developed, then the utilitarian could avoid the objection from supererogation w/o violating the “spirit” of utilitarianism as well. Just a thought.
Doug and Kris,
I am enjoying this exchange. I was thinking of a possible response to Doug when he writes,
“UTE would need to be supplemented with some sort of account of when an act is morally praiseworthy, specifically, an account that would allow one act to be more morally praiseworthy than another despite the two being equal in their utility production. And what I doubt is that a utilitarian would be happy with any such account. Furthermore, I doubt that any such account would be consistent with the spirit of utilitarianism.”
I wonder if it is available to the C to say something like the following. What traits we should praise depends not only on how well the praised action does, compared to rivals, in creating utility, but also on the effect of praising such acts on others as well. Generally, from a C point of view, a main vice agents have is excessive concern for their own well-being. Praising people in a way that reduces this trait might create positive effects. It might do so even if the praised act is defined as one that creates no more utility than the less praised act. Generally, a C could say we should use blunt tools to hammer out the chief flaws in human nature. Cs can take Aristotle’s advice and guard against the nearer vice.
Kris and David,
To be truly in the spirit of maximizing utilitarianism we should say,
An action is praiseworthy iff the action is caused by a character trait C, the possession of which has a greater tendency than any other C to produce utility (N.B. this could be through our own actions or through various influences on others).
Now compare C1 and C2:
C1 is the disposition to help others as opposed to oneself, always.
C2 is the disposition to help others as opposed to oneself, except when doing so is unlikely to increase total utility.
It seems that C2 as opposed to C1 has a greater tendency to produce utility. In which case, your act of giving Joe the UTES rather than yourself, an act stemming from C1, is not morally praiseworthy.
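One can check this tendency claim with a toy Python simulation; the distribution of cases is invented, and the only assumption is that helping sometimes lowers and sometimes raises total utility:

    import random
    random.seed(0)

    def average_utility(disposition, trials=100000):
        total = 0.0
        for _ in range(trials):
            gain = random.uniform(-5, 10)  # net utility of helping vs. not
            if disposition == "C1":        # C1: help always
                total += gain
            else:                          # C2: help only when it pays
                total += max(gain, 0.0)
        return total / trials

    print("C1:", average_utility("C1"))  # about 2.5
    print("C2:", average_utility("C2"))  # about 3.3: C2 skips harmful help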
Doug,
I disagree. An action can be praiseworthy, for a C, not only because the action praised creates utility, but also because the praising of it creates utility. This is the sort of story I was gesturing at in my post above.
Dave,
Just to interject with a quick question: wouldn’t the praising of the praiseworthy act be enabled by the praiseworthy act, so that the utility created by the praising of it would be included in the total utility produced by the praiseworthy act?
Josh,
I’m not sure I understood that. But let me say something anyway.
There need not be an act of type X that has occurred in order for me to praise act-type X. And even if it has occurred, for the C the question of whether or not to praise it does not hinge on how well X promotes utility, for that is now water under the bridge. Now the question is, what is the C-value of praising it? And this is determined by the causal consequences of the praise, not the causal consequences of the act being praised.
Ah, right. Thanks–I think you did understand the question, and I just didn’t write it in a very praiseworthy way. I was focused on act-tokens and you were writing about types.
Jeez–even that statement was blameworthy. You were talking about types and tokens. Apologies.
Sorry, David. I read you too quickly. It’s true that utilitarianism is committed to the view that the act of praising (as with any act) is right iff it maximizes utility. But it’s not clear to me that saying that act X should be praised is equivalent to saying that act X is praiseworthy, and that’s what it has to be if it’s going to be supererogatory, right? But, again, I think that I want to retreat to my initial definition of supererogation, in which I claimed that to say that an act is above and beyond the call of duty is to say that there is more moral reason to perform it than some other morally permissible alternative. However, I still need to think about it more and hope to post something on it later.
Doug,
That seems right to me. I can think of lots of ways to fuss, but in general, what you say seems right. I can confidently say this: if there is trouble with supererogation for C, it is not trouble with supporting having people say that folks did more than required.
The relevant criteria for determining which is the best or most attractive theory of goodness depend on what purpose one intends that theory to serve. If we are consequentialists, then the relevant purpose is to give an account of what people ought to do in terms of maximising goodness (or value). And relative to that purpose, one theory of value, X, is better than another, Y, just in case it is more plausible that we ought to maximise goodness according to X than it is that we ought to maximise goodness according to Y. Hence, the consequentialist’s sole criterion for evaluating the attractiveness of a theory of goodness is whether or not the theory is apt to get consequentialism “out of a fix”, as David puts it.
Now, one is tempted to say that non-consequentialists have no need for a theory of goodness. But supposing they do have such a need, they presumably will have different criteria for evaluating theories of goodness. And it would be no objection to consequentialism to claim that we ought not to maximise goodness according to whichever theory of goodness is best relative to the non-consequentialist criteria, because consequentialists never signed up to those criteria.
Oops. My previous post went out a little prematurely; I accidentally hit “post” instead of “preview”. If my pointer hadn’t slipped, I would have added a few sentences at the start explaining that the comments to follow were intended to address the issues raised earlier in this thread during the exchange between David and Jamie.
People tend to think that consequentialist theories can’t incorporate supererogation in a plausible way. I think they can. The “Self-Other” Asymmetry can be grafted onto any consequentialist theory. I think Slote introduced the S-O Asymmetry in his “Morality and Self-Other Asymmetry”. Ted Sider incorporates it into a consequentialist theory in his “Asymmetry and Self-Sacrifice.” Here’s the story: Critics have charged that utilitarianism still requires us to act selfishly in cases where it should be morally permissible for us to make a sacrifice so as to benefit others. Let ‘S’ refer to the agent of the act. The following case illustrates this charge of “obligating selfishness” against act utilitarianism (and its various mutations).
Alternative A outranks alternative B iff (i) HU(A) > HU(B) and (ii) HU-for-Others(A) > HU-for-Others(B).
SOU: An act A is morally right iff no alternative to A outranks A.
Consider S’s alternatives:

      HU   HU-for-S   HU-for-Others
a1   100      100            0
a2    90       20           70
a3    70        0           70
a4    55       25           30

U obligates S to perform a1, the selfish act. SOU does not: a1, a2, and a3 are each permissible, since none of them is outranked; only a4, which a2 outranks, is wrong.
But SOU seems to imply that none of the acts is supererogatory. Is that intentional?
Doug,
One way to close the gap between it being right for a C to praise an action and the action being praiseworthy is to say that an act X1 caused by character trait C1 is praiseworthy if:
(1) praising it will maximize utility and
(2) much of the utility benefit of the praise occurs because the praise causes people to have character trait C1 or to commit acts like X1.
In other words, it’s praiseworthy if it’s right to praise it in order to encourage it (where “it” can refer to either the act or the underlying character trait). I don’t know if this is the only way that a C could count an action as praiseworthy, so I won’t make the ‘if’s into ‘iff’s.
Jamie,
I think of supererogatory acts as those in which the agent goes beyond what morality requires in efforts to benefit others. In this case,
      HU   HU-for-S   HU-for-Others
a1   100      100            0
a2    90       20           70
a3    70        0           70
a4    55       25           30
both a2 and a3 are supererogatory (in my opinion), though a3 seems a bit irrational in light of a2. Regardless, a2 seems supererogatory. None of the alternatives is morally obligatory according to SOU; a1, a2, and a3 are each m. permissible according to SOU. Since S is m. permitted to reap the 100 hedons (or whatever) accessible via a1, I want to claim that S goes beyond what morality requires when, in doing a2, S “sacrifices” 80 hedons so that others will experience 70. Here is what I’ve been thinking about consequentialism: with the incorporation of the self-other asymmetry comes supererogation of an interesting sort.
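For concreteness, here is a minimal Python sketch of the SOU test applied to this table (the (HU, HU-for-Others) pair representation and the helper names are just my own bookkeeping; HU-for-S isn’t needed for the test itself):

# SOU, as defined above: A outranks B iff A has strictly more total
# utility AND strictly more utility-for-others; an act is morally
# right iff no alternative outranks it.

def outranks(a, b):
    return a[0] > b[0] and a[1] > b[1]

def sou_permissible(act, alternatives):
    return not any(outranks(alt, act) for alt in alternatives)

acts = {"a1": (100, 0), "a2": (90, 70), "a3": (70, 70), "a4": (55, 30)}

for name, act in acts.items():
    print(name, sou_permissible(act, acts.values()))
# a1, a2, a3 come out permissible; a4 does not (a2 outranks it).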
J-P,
Do you take SOU to be a version of maximizing, agent-neutral act-consequentialism, where an agent-neutral theory would insist that the ranking of a given set of states of affairs is the same no matter who the evaluator is? If so, what agent-neutral value or values are we all supposed to be maximizing on SOU?
J-P,
SOU is interesting. It is, of course, quite close to utilitarianism (U). The set of actions that are right according to U is a proper subset of the set of actions that are right according to SOU. So, some of the standard objections to U apply also to SOU. Consider, for example, these two actions (both are actions of mine, so S = me):
X: me on 200, you on 0
Y: each of us on 100
SOU implies that both X and Y are right. Yet some will feel that only Y is right.
But there are also possible objections that apply to SOU but not U. Consider:
X: me on 99, my son on 300, two strangers each on 0
Y: each of these four people on 100
Again, SOU implies that both X and Y are right. Yet some will feel that only Y is right.
J-P,
Supererogatory acts must go beyond what morality requires; I certainly agree with that part. It is a substantive moral view that they must exhibit effort to benefit others. (Imagine a moral view that requires everyone to contribute to the Great Moral Project of building a tower to the sun. Doing your share is obligatory, and contributing even more is supererogatory, but none of it benefits anyone.)
My point was that SOU does not seem to count the other-benefiting acts as ‘beyond’ what morality requires. It’s just one of the ways of meeting obligations, and all of those ways are ranked together. (Actually, they are not really ranked together, but none is ranked above any other.) Supererogatory acts must go beyond. That means they must be ranked above the merely obligatory acts.
Perhaps what J-P meant to say was that a supererogatory act is one that produces greater benefits for others than is required by morality. In his example, SOU implies that the minimum benefit for others that the agent can permissibly produce is 70 (the only way to produce less would be to perform a4, which is impermissible). Hence, if the agent chooses either a2 or a3 he produces more benefit than the permissible minimum, thereby going “beyond the call of duty” with respect to benefiting others.
If that’s right, then it seems J-P endorses the same definition that Kris and I have each floated earlier: an act is supererogatory iff it is morally permissible and produces greater benefits for others than the minimum that morality requires.
But then it’s not obvious that SOU has any advantage over plain old utilitarianism (U) with respect to supererogation. On the above definition, any action that is classified as supererogatory by U will also be classified as such by SOU. But the converse doesn’t hold: some actions are classified as supererogatory by SOU, yet not by U. However, these “extra” supererogatory actions all seem to be cases of irrationality, where the agent sacrifices more than he achieves in benefits for others. To take another example, consider:
X: me 20, you 10
Y: me 15, you 15
Z: me 10, you 15
U implies that only X and Y are permissible, and only Y is supererogatory (on the above definition). SOU implies that all three are permissible, and both Y and Z are supererogatory. But is it any advantage to be able to say that Z, which seems plainly irrational, is supererogatory? I guess this depends on one’s view of the relationship between morality and rationality — in particular, whether one allows that it may be rationally impermissible to do something that is morally permissible.
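For what it’s worth, these verdicts can be checked mechanically. A rough Python sketch, under the working definition just given (supererogatory = permissible and better for others than the least beneficial permissible alternative; the helper names are mine):

# Acts are (total utility, utility-for-others) pairs.

def u_permissible(act, acts):
    return act[0] == max(a[0] for a in acts)   # U: maximize total utility

def sou_permissible(act, acts):
    return not any(b[0] > act[0] and b[1] > act[1] for b in acts)

def supererogatory(act, acts, permissible):
    perm = [a for a in acts if permissible(a, acts)]
    return act in perm and act[1] > min(a[1] for a in perm)

acts = [(30, 10), (30, 15), (25, 15)]          # X, Y, Z
for name, act in zip("XYZ", acts):
    print(name, "U:", supererogatory(act, acts, u_permissible),
          "SOU:", supererogatory(act, acts, sou_permissible))
# U counts only Y as supererogatory; SOU counts both Y and Z.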
Oops. Looking back I see that I misdescribed J-P’s example. I should have said that the minimum permissible benefit to others is 0, which also happens to be the minimum possible benefit to others. Everything else I wrote is entirely without error.
OK, allow me one more comment on this, and then I’ll shut up … for a while.
I was beginning to like J-P’s SOU, but I’ve just discovered a problematic feature of the view: it implies an intransitive betterness relation.
I take the following to be an equivalent formulation of SOU:
1. A is at least as good as B iff either HU(A) is greater than or equal to HU(B), or HU-for-Others(A) is greater than or equal to HU-for-Others(B).
2. A is morally right iff A is at least as good as every alternative to A.
The problem is that the relation “at least as good as” defined in 1 is intransitive, as the following example shows. (Notice that “at least as good as” is equivalent to “not outranked by” as defined by J-P above.)
X: me 1, you 0
Y: me 0, you 1
Z: me 1, you 1
Assume, as before, that these are actions of mine; so utility-of-others equals your utility. Then SOU, as defined above, implies that X is at least as good as Y, and that Y is at least as good as Z, but that X is not at least as good as Z — a plain violation of transitivity. This has the following peculiar consequence. Given a choice between only X and Y, it’s permissible for me to choose either. But if we add Z to my available options, then it’s still permissible for me to choose Y, but no longer permissible for me to choose X. How can adding a third option change the permissibility of one action but not the other when they were equally permissible to begin with?
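The failure of transitivity is easy to verify mechanically; a quick sketch (pairs are (total utility, utility-for-others), a representation of my own):

# "At least as good as" = "not outranked by", per the formulation above.

def at_least_as_good(a, b):
    return not (b[0] > a[0] and b[1] > a[1])

X, Y, Z = (1, 0), (1, 1), (2, 1)
print(at_least_as_good(X, Y))   # True
print(at_least_as_good(Y, Z))   # True
print(at_least_as_good(X, Z))   # False: transitivity fails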
I love PEA Soup!
Doug,
It doesn’t seem that SOU is a maximizing theory in the same way that U is, though it permits maximizing in any situation. Just like Campbell says: U’s morally right actions are a proper subset of SOU’s morally right actions. And I think it’s agent-neutral. Note also that it’s much closer to a maximizing theory than satisficing views are.
Campbell,
I’m not quite sure that I understand the example.
X: me on 99, my son on 300, two strangers on zero
Y: each of the four on 100
It seems that Y maximizes both total utility and utility for others, thus outranking X, implying X to be m. wrong according to SOU. So let’s adjust it slightly, making the total utilities of each 400. Then I take your objection to be one of the many objections from justice. Strap the SOU framework onto some justice-adjusted version of consequentialism and you might have an attractive view.
Let me alter your second case:
W: me 10, you 18
X: me 20, you 10
Y: me 15, you 15
Z: me 5, you 23
I think this might better illustrate the advantages of SOU over U when it comes to supererogation (and yes: I accept the Kris and Campbell principle). U won’t permit me to do W or Z. According to SOU, each of the alternatives is permissible.
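If it helps, the verdicts here can be checked the same way as before; a small sketch (pairs are (total utility, utility-for-others); the representation is mine):

acts = {"W": (28, 18), "X": (30, 10), "Y": (30, 15), "Z": (28, 23)}

best_total = max(t for t, _ in acts.values())
for name, (t, o) in acts.items():
    sou_ok = not any(t2 > t and o2 > o for t2, o2 in acts.values())
    print(name, "U:", t == best_total, "SOU:", sou_ok)
# U permits only X and Y; SOU permits all four.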
I have to go to a talk, though I want to think about your new post.
Jamie,
Note that what I’m calling “supererogatory acts” according to SOU do go “beyond” what is morally required in some sense. We’re morally obligated to perform morally permissible acts. According to SOU, some morally permissible acts benefit others to a greater degree than other morally permissible acts. Thus, they go beyond (I think).
J-P,
In Campbell’s example:
X: me on 99, my son on 300, two strangers on zero
Y: each of the four on 100
Both X and Y are permissible according to SOU, since they involve equal utility for others (300). Y has more total utility (400 vs. 399) because it gives me one more HU. X violates the spirit (though not the letter) of the self-other distinction because it captures all of the utility-for-others and gives it to my son. The same intuition that makes us think that morality should include a self-other distinction also makes it seem like it should include a distinction between people who are close to me and others.
If we change option X to
X’: me on 99, my son on 302, two strangers on zero
then now Y is impermissible for me. If it was my son who had to choose between X’ and Y, though, then Y would be not only permissible but supererogatory. And if my son and I were trying to make the decision together, then I don’t know what SOU would recommend. So I don’t see how SOU can be agent-neutral.
I’m finding it easiest to understand SOU by imagining all of the alternatives as points on a two-dimensional graph where one dimension (the x-axis) is total utility and the other dimension (the y-axis) is utility for others. A given alternative is permissible iff there are no available alternatives above and to the right of it (i.e. bigger on both dimensions).
This way, Campbell’s intransitivity objection is easy to picture. If you sketch it out, X is at (1,0), Y is at (1,1), and Z is at (2,1). Z is above and to the right of X, but it’s not above Y.
The reason that SOU allows this intransitivity is that it treats the two dimensions (total utility and utility-for-others) as incommensurable. The space it deals with is irreducibly two-dimensional, so it can’t have a nice transitive ordering. Its structure is similar to that of Pareto efficiency, in that it tries to maximize on multiple dimensions, but it doesn’t allow comparisons between the dimensions.
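In code, this is just the familiar undominated-point test used for Pareto frontiers; a small sketch (mine):

# An act is SOU-permissible iff no alternative lies strictly above
# and strictly to the right of it in (total utility,
# utility-for-others) space.

def undominated(points):
    return [p for p in points
            if not any(q[0] > p[0] and q[1] > p[1] for q in points)]

print(undominated([(1, 0), (1, 1), (2, 1)]))
# [(1, 1), (2, 1)]: X = (1, 0) drops out once Z = (2, 1) is available,
# which is exactly Campbell's menu-dependence point.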
Campbell and Blar,
You’re absolutely right about SOU being intransitive. I hated this feature at first. I even wrote to Sider about it and presented another SOUish view (that ended up being a load of crap). The intransitivity is a puzzling but seemingly acceptable property of SOU. That’s the self-other asymmetry: we aren’t morally permitted to sacrifice others’ goods in the way that we are permitted to sacrifice our own. I love how SOU operates in our cases. I hope you folks love it too.
And Campbell, I blundered in my commentary on one of your cases. Sorry. In the “son” example, it seems that both X and Y maximize HU-for-others. Again, I think a justice adjustment might be the way to proceed (within a consequentialist framework) in efforts to meet the objection.
I’m not sure what to think about the intransitivity of SOU and I’m concerned about agent neutrality, but I think that it is possible to deal with Campbell’s objection that SOU separates rationality from moral permissibility. His example is:
X: me 20, you 10
Y: me 15, you 15
Z: me 10, you 15
Here Z is permissible according to SOU (and in fact supererogatory), but it’s irrational, since it’s worse for me than Y and it has no compensating benefit. If you are untroubled by this example and you think that separating morality from rationality is a good feature of SOU then you can leave SOU as it is, but if you think that it’s an unfortunate bug and you want to reconnect morality and rationality then you can do so with only a minor change to SOU. Call the new theory SOU’:
Alternative A outranks alternative B iff (i) HU(A) > HU(B) and (ii) HU-for-Others(A) is greater than or equal to HU-for-Others(B)
SOU’: A is morally right iff no alternative to A outranks A.
The only difference between SOU’ and SOU is that part (ii) of the definition of outranking now reads “greater than or equal to” rather than “strictly greater than.” The effect of this change is that SOU’ requires maximizing utility-for-self whenever this can be done without lowering utility-for-others. In Campbell’s example, Y involves more total utility than Z and the same amount of utility for others, so Z is impermissible according to SOU’. X and Y are both permissible.
(SOU’ is still intransitive, as a slight modification of Campbell’s example of intransitivity will show. For instance, you can change his Z from “me 1, you 1” to “me .8, you .8”.)
If you were following me when I said in my last post that an alternative is permissible according to SOU iff there are no available alternatives above and to the right of it, you can draw the same coordinate system for SOU’ and visualize permissibility in the same way, except that one of the boundaries of the region above and to the right needs to be included in that region.
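Here is SOU’ in the same style, checked against Campbell’s X/Y/Z case (representation and names are mine):

# SOU': A outranks B iff A has strictly more total utility and at
# least as much utility-for-others.

def outranks_prime(a, b):
    return a[0] > b[0] and a[1] >= b[1]

def sou_prime_permissible(act, acts):
    return not any(outranks_prime(b, act) for b in acts)

acts = [(30, 10), (30, 15), (25, 15)]   # X, Y, Z (total, for-others)
for name, act in zip("XYZ", acts):
    print(name, sou_prime_permissible(act, acts))
# X: True, Y: True, Z: False -- Y now outranks Z, as claimed above.
# With Z changed to (1.6, 0.8) in the earlier example, the
# intransitivity of "at least as good as" still goes through.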
J-P,
But the theory does not allow that benefiting others more is ‘going beyond’, because it doesn’t count benefiting others more as a good thing.
What if a utilitarian said that acts that take a longer time ‘go beyond’ other permissible acts? Or acts that are performed at a higher altitude, or by people with longer surnames? Sure, there are properties that some acts have to a greater degree than other acts, but what makes these degrees matter to ‘going beyond’ is whether or not they are degrees of goodness. To ‘go beyond’ an act has to be better. Otherwise it’s just longer, or higher, or whatever.
Apologies for the belated reply; I was swamped with work. (Still am, but I’m fleeing to PEA Soup for some sanity.)
Kris, by “3. Poverty of the Moral Universe” I meant that, at least typically, the common-sense view of the world recognizes a bunch of things we should or shouldn’t do (let’s call these duties), things that would be good to do but that we wouldn’t be blameworthy for failing to do (the supererogatory), and a whole bunch of decisions that are non-moral, such as whether I should play the flute or the banjo.
Classical utilitarianism subsumes all of these categories into the first, i.e., things we should or shouldn’t do; there is no scope for the supererogatory or for the non-moral.