Welfare Egalitarianism (hereafter, Egalitarianism) is the view that it
is intrinsically good for people to be equally well off. Or more
generally: it is intrinsically better for people to be more nearly
equal in welfare.
A survey question, out of curiosity: how many readers think that’s true? How
many find the claim intuitively obvious? (Think of your answer before
reading the rest.)
Following is an argument against Egalitarianism that I’d like to hear
comments on. It uses some premises from population axiology:
- The Unrepugnant Premise: For possible worlds x and y, if y has a lower population than x, a higher average utility, a higher total utility, and a perfectly equal distribution of utility, then y is better than x.
- The Benign Addition Principle: If worlds x and y are so related that y would be the result of increasing the utility of everyone in x by some amount and adding some new people with worthwhile lives, then y is better than x.
- Transitivity: If z is better than y and y is better than x, then z is better than x.
(Assume that there’s an "other things being equal" clause in the first two premises, and that in the following examples, there are no morally relevant features not required by the descriptions of the situations.)
The Unrepugnant Premise is so named because of its relation to Parfit’s "Repugnant Conclusion." But note that the Unrepugnant Premise is much less controversial than the denial of the repugnant conclusion; the Unrepugnant Premise seems to be accepted by everyone in population ethics, even those who endorse the Repugnant Conclusion. The latter people endorse the RC because they think some weight should be assigned to total utility (or because they endorse the Mere Addition argument in Parfit’s ch. 19), but even these people wouldn’t favor population increases that come with decreases in total utility.
I say a "benign addition" occurs when some people with positive welfare are added to the world, with positive effects on all the original people’s welfare. (Like Parfit’s "Mere Addition", except that in the latter you have no effect on the original people.) The Benign Addition Principle is motivated by the thought that, when a benign addition occurs, all the original people are better off, and the new people (if it makes sense to ask whether they are better off or worse off) are also better off.
Now consider three possible worlds:
World x contains 2 million people at welfare level 50.
World y contains 1 million people at welfare level 101.
World z contains 1 million people at welfare level 102, plus 1 million people at welfare level 1.
Here’s an argument:
1. y is better than x. (from the Unrepugnant Premise)
2. z is better than y. (from the Benign Addition Principle)
3. z is better than x. (from 1, 2, Transitivity)
World z contains a mere 3 million more units of total utility than x, and only 1.5 more points of average utility, while it exhibits gross inequality. If world z is nevertheless better than x, then I think egalitarianism must be false; equality has no intrinsic value.
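For concreteness, here is a quick check of the arithmetic behind steps 1-3 (a minimal Python sketch; the figures come straight from the world descriptions above):

```python
# Each world is a list of (group size, welfare level) pairs,
# as described in worlds x, y, and z above.
worlds = {
    "x": [(2_000_000, 50)],
    "y": [(1_000_000, 101)],
    "z": [(1_000_000, 102), (1_000_000, 1)],
}

for name, groups in worlds.items():
    population = sum(n for n, _ in groups)
    total = sum(n * level for n, level in groups)
    print(f"{name}: population={population:,} total={total:,} "
          f"average={total / population}")

# x: population=2,000,000 total=100,000,000 average=50.0
# y: population=1,000,000 total=101,000,000 average=101.0
# z: population=2,000,000 total=103,000,000 average=51.5
```

So z beats x by 3 million units of total utility and 1.5 points of average utility, exactly as claimed.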
Hi Michael,
In answer to your survey questions, I don’t believe in egalitarianism or find it intuitively plausible.
I think your argument is interesting. I might be inclined to reject the Benign Addition principle. Suppose that the value of a world depends on the extent to which people get what they deserve. (I find this view more plausible than egalitarianism.) Then adding some people with very low welfare levels, even if their lives would be worth living, might make things worse, even if those who already exist get some extra benefits too. This would be the case if those extra people were getting a lot less than they deserved to get.
Great. I’m not sure I’m an egalitarian myself, but here is something someone who is might say. There is a worry that the Benign Addition Principle assumes the falsity of egalitarianism. That is, if equal distribution had sufficient intrinsic value, then that principle would not be true. By adding people with worthwhile lives in a way that introduces inequality, we are making the world worse, so much so that this outweighs the other good-making we are doing. Of course, someone who accepts the Benign Addition Principle is going to say that this does not make sense in the first place, and so egalitarianism is false more directly. So I’m more inclined to think that what is at issue is whether the Benign Addition Principle or egalitarianism is more plausible, and the rest is probably not needed.
I think the argument could be improved. The problem right now is that it shows (even if successful) only that the value of equality must be less than 1.5 points of average utility. But we have no idea at all what 1.5 points of average utility amounts to.
Try this. Let the total in y be (100+ε) and the total in z be (100+2ε), and note that z is better than x as ε approaches 0. Then the argument shows that the value of equality must be less than every positive quantity of utility.
I guess I still think there are problems, but I might just leave them to others. (Calling Campbell Brown…)
To strengthen Jussi’s response a bit, it seems like the Benign Addition Principle is going to be incompatible with more than just egalitarianism. As I mentioned, it also seems incompatible with a desert-based view. It might be incompatible with other views according to which intrinsic value is determined partly by how things are distributed. So I’d need an argument for Benign Addition in order to be willing to rule out all these sorts of views.
Okay, then I’ll add that Benign Addition will cause trouble in concert with a rather weaker principle than Egalitarianism, namely,
Weak Eq: For possible worlds x and y with the same population, if y has a higher total utility than x and a perfectly equal distribution of utility, then y is better than x.
Weak Eq plus Benign Addition will lead to the Repugnant Conclusion.
Michael,
First of all, kudos on the graphics! I think that’s a first for PEA Soup. 🙂
Second, while others have raised issues with the Benign Addition Principle, I think I want to get off the boat at the Unrepugnant Premise. To be honest, I’m not sure that the sort of moral principles that I think are well-founded are suited to making judgments about the goodness or badness of worlds as such. So while I’m skeptical that y is better than x, this might be due to a general sort of skepticism about our ability to make sound judgments about cases of this general sort. (plug: I talk about some of these issues in a paper with David Schmidtz on “Virtue Ethics and Repugnant Conclusions” in Phil Cafaro and Ronald Sandler’s book, Environmental Virtue Ethics)
But there might be something more specific too. My initial reaction to UP is that y isn’t better for the people who don’t exist, and that it would clearly be impermissible to move from x to y by killing those people off. Now, that doesn’t necessarily show that y is not better than x. It just shows that y’s being better than x doesn’t make all moves from x to y right.
But I’m not sure that this is the only thing that bothers me about UP. Consider the extreme case of moving from world x in your example to a world y populated by 1 person with 101 total utility and (obviously) 101 average utility. Are we really willing to say that y is better than x in this case? What if it was 2 people instead of 1? Would the world as it is today be a better place if it were transformed into a handful of really, really (and equally) happy people?
I’m not sure how to account for this intuition. Perhaps the underlying idea is that there’s something intrinsically good about people existing that’s not sufficiently accounted for by UP. But something about it definitely rubs me the wrong way.
Hi Mike,
It is really hard to assess the Benign Addition principle. Suppose I have a choice to distribute utility evenly in x or distribute slightly more utility wildly unevenly in y. The assumption here, I take it, is that there is nothing else relevant to the justice of the utility distribution in x or y (i.e., no matters of desert, merit, effort, need, etc.). It seems to me that it would be extremely difficult to justify that radical inequality to those who receive less in y (supposing they might have been among those who received more).
But suppose instead that the choice of y does not leave open the choice of who gets less utility in y. In that case, it is easier to justify the choice of y over x. To those who receive less utility, you explain that it is not a possibility that they receive more utility. So, for them, it is either a life worth living–but just worth living–or nothing. In the former case, I would choose x. In the latter case, I think I’d choose y.
Thanks for the comments, everyone. Miscellaneous short replies:
1. I accept Jamie’s improvement. I say something like that in the paper from which this is drawn. Also, all possible criticisms of my argument are successfully answered in the paper.
2. (For Jamie:) "Weak Eq" is what people in population ethics call "Non-Anti-Egalitarianism." I accept Non-Anti-Egal. Non-Anti-Egal, Transitivity, and Benign Addition jointly imply the Repugnant Conclusion. But this isn’t a problem, since the RC is true. I can’t discuss this fully now (I’d be happy to discuss it in another thread), but here’s a summary of why I say that:
3. (Jussi) Benign Addition is supported by:
The Modal Pareto Principle: For possible worlds y and z, if z would, from the standpoint of rational self-interest, be preferred over y by every possible being who would exist in either world, then z is better than y.
In a case of benign addition, the original people would favor the addition, since they’re made better off. The added people would also favor the addition, since they’d rather live at a positive (even if only slightly positive) welfare level than not exist at all.
It’s true that inequality can be created through a benign addition. Nevertheless, it seems that we should not reject Modal Pareto on grounds of egalitarianism. This is because egalitarianism is supposed to be motivated by concern for the worse-off members of society. It seems unreasonable to argue, on the basis of concern for some group, against preferring a state of affairs that the very people about whom you are concerned would prefer. If, on the other hand, one says that egalitarianism is not motivated by concern for the worse-off members of society, then I’d say that I don’t see the appeal of egalitarianism to begin with.
4. (Ben) Desert-based theories:
Suppose that the inhabitants of worlds x, y, and z are all non-moral agents, such as children or animals, so that they have no level of desert. Wouldn’t it then be at least initially plausible that equal distribution of welfare would be good? But interestingly, it turns out that it wouldn’t be: the argument above goes through unchanged for such inhabitants, so even here equality has no intrinsic value.
I don’t find the suggested desert-based theory (if I understand it) very plausible in any case, specifically the part about how the addition of some worthwhile lives would be bad if these people would have less welfare than they deserve.
5. (Matt) The intrinsic value of population:
Maybe when the population level is very low, increases in population are good, even if they come with decreases in total utility. (Hurka seems to think so.) But suppose that in worlds x, y, and z, all the population numbers are very large. Probably no one thinks that if the population is already 10 billion, it would be better to instead have 20 billion people with less than half the average utility level.
Presumably the egalitarian would still say z is worse than x. So we can continue the argument as before.
By the way, I’m not sure I understood your question about the case of the 1 person with 101 total utility. My own view is the total utility principle: the value of a world is just determined by its total utility. So a world with 1 person at level 101 is much, much worse than a world with 2 million people at 50. But I don’t know if that’s what you were asking. If you’re asking whether 1 person at 101 is better than 2 people at 50, then I’d say yes; however, some might disagree, which is why I propose to move to the case of 10 billion people versus 20 billion people.
6. (Matt) About making overall world-evaluations:
Perhaps some interesting meta-ethical theory shows that the overall evaluations of worlds that I’m using don’t make sense. (This might be so if all value is agent-relative.) But then I’d say egalitarianism itself probably doesn’t make sense. At least, I don’t know how to formulate it without using some sort of overall world-evaluation.
7. (Matt) Moving from x to y:
Maybe moving from x to y by killing people would be impermissible because it would violate rights, even though y is better. Consider the matter from the other side: if y is the present state, should you move to x by making more people? I take it the answer is no. This suggests that my explanation is correct: y is better than x, but there are some deontological concerns that arise (only) when you’re already in x.
8. (Mike A.) Moving from y to z:
I’m assuming that better-than is (only) a 2-place relation, so that to decide whether A is better than B, you only have to look at the nature of A and B; you don’t have to consider any other alternatives. So just looking at my worlds y and z, I think one should be able to decide which is better, and the answer is z. Maybe there’s another alternative, call it w, which is even better: the one in which there are all the same people as in z, with the same total utility, but everyone is about equally well off. But, I’m stipulating that we’re not considering that option. We’re just considering y and z at the moment. The fact that there’s something even better than either of them doesn’t show that z isn’t better than y, right?
An analogy: Suppose I have a cake to distribute between Jussi and Ben. Consider three options:
D1: I give all of it to Jussi.
D2: I give two-thirds to Jussi and one-third to Ben.
D3: I give half to Jussi and half to Ben.
You might well think D3 is best and that I should choose D3 rather than D2. But that shouldn’t stop you from agreeing that at least D2 is better than D1. I don’t think that depends on whether D3 is available or not. If D3 is available, then choosing D2 might be wrong because there’s something better available, but it would still be true that D2 isn’t as bad as D1.
So by the same token, whether or not world w is available, we should still agree that my world z is at least better than world y.
If egalitarianism is just the view that inequality is always wrong, then the benign addition principle is a straightforward denial of egalitarianism, and the anti-egalitarian argument begs the question. If the egalitarian thinks that equality is a good, but not the only good, then she can endorse a version of BAP which specifies some acceptable rate of trade between extra utility for the lucky members of world x, and the amount of inequality between them and the extra people in world y. But I think egalitarianism can mean something else: a concern to keep people out of bad lives, where lives can be bad both 1) because they’re below some objective and quite high threshold of wellbeing; and 2) because the people who live them know that others have much better lives. I’m an egalitarian in that sense. (2) can be motivated by concern for fairness, or recognition that positional and comparative goods are very important for social creatures like us, or both.
Michael,
Nice post — all very interesting!
Regarding your survey question, I confess to having egalitarian intuitions, but wouldn’t consider myself a committed egalitarian. And I’m not sure I’d say that “it is intrinsically good for people to be equally well off”, mostly because I’m not sure what that means.
Regarding your example, it’s hard for me to have innocent intuitions, uncorrupted by theory. So let me just tell you what my view is. (Well, I’m not sure this really is my view. I’m partly playing devil’s advocate.)
First, I assume that all the people who exist in y — call them the a-people — also exist in x and z, and that all have a welfare level of 102 in z. Further, I assume that all the people who exist in x but not y — call them the b-people — also exist in z.
Now, before getting to which worlds are better or worse, let me tell you what I think you ought to choose. This depends on what your alternatives are, as follows:
(a) Given a choice between x and y only, you ought to choose y.
(b) Given a choice between y and z only, you ought to choose z.
(c) Given a choice between x and z only, you ought to choose x.
Here are my reasons. Suppose first your choice is between x and y only. If you choose x, then some people, the a-people, will be worse off than they would have been if you’d chosen differently. But if you choose y, no one will be worse off; in particular, the b-people won’t be worse off because they won’t exist. So you ought to choose y. Similar reasoning supports choosing z, given a choice between y and z only. The a-people will be worse off if you choose y, but no one will be worse off if you choose z. So you ought to choose z. But this reasoning doesn’t apply to the choice between x and z only. In this case, someone will be worse off, no matter which you choose. Although the extent to which the a-people will be worse off if you choose x is slightly greater than the extent to which the b-people will be worse off if you choose z, the b-people are much worse off in z than the a-people are in x. So the b-people get priority; you ought to choose x.
I think the last judgement qualifies this as an egalitarian view. I say you should choose an equal world over an unequal world, even though the equal world has lower total welfare.
The conjunction of (a), (b), (c), and Transitivity is inconsistent with the following principle:
Strict Consequentialism. You ought to choose x when the alternatives are A iff, for every y in A, if x is not better than y, then x = y.
I certainly don’t want to give up Transitivity. So I reject Strict Consequentialism. But I don’t reject consequentialism altogether. I accept a less strict version:
Less Strict Consequentialism. You ought to choose x when the alternatives are A iff, for every y in A, if (x,A) is not better than (y,A), then x = y.
Here the ordered pair (x,A) represents x when the alternatives are A.
That was all rather quick. I hope it makes some sense.
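For readers who want the two principles spelled out side by side, here is a minimal rendering of them as choice predicates (a Python sketch of my own, not Campbell’s; `better` and `better_in_context` are assumed placeholders for the relevant value relations, which are left unspecified above):

```python
def ought_strict(x, alternatives, better):
    """Strict Consequentialism: you ought to choose x from the
    alternatives iff every alternative y that x is not better than
    is x itself."""
    return all(better(x, y) or y == x for y in alternatives)

def ought_less_strict(x, alternatives, better_in_context):
    """Less Strict Consequentialism: the same form, except that the
    relata of "better than" are (world, alternative-set) pairs rather
    than bare worlds."""
    A = frozenset(alternatives)
    return all(better_in_context((x, A), (y, A)) or y == x for y in A)
```

On the strict version, a single better-than ordering over worlds settles every choice; on the less strict version, the same world can be choiceworthy against one set of alternatives but not another, which is just the flexibility claims (a)-(c) require.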
“Also, all possible criticisms of my argument are successfully answered in the paper.”
WOW. This must be some paper.
I like the idea of getting an argument for benign addition from the Modal Pareto Principle. But, again, the Modal Pareto Principle is incompatible with egalitarianism. The argument you give for the Modal Pareto Principle, based on concern for the worse off, is in effect an argument against egalitarianism. If that argument is successful, then egalitarianism is false. This might be good news. But if the argument holds and egalitarianism is false on these grounds, then the argument from benign addition and the other principles is redundant: egalitarianism must already be false on the basis of the argument for one of the premises. If, on the other hand, the argument for Modal Pareto doesn’t succeed, then the egalitarian can reject the Benign Addition Principle. So the need for the main argument still doesn’t seem to exist.
I have also found this claim of the Pareto argument:
“The added people would also favor the addition, since they’d rather live at a positive (even if only slightly positive) welfare level than not exist at all.”
always dubious. This claim seems to assume that ex ante we have obligations to take into account merely possible people and their interests. Actual non-existence is worse for them than actual, worthy existence. This may be. But I find few reasons to be concerned about their interests.
Michael, I just wanted to say that I liked this response: “all possible criticisms of my argument are successfully answered in the paper.” I plan to borrow that response in my future work.
Thanks for your response to my comments, Mike.
Regarding moving from x to y — I think I misread your original graphic. I assumed that x contained a fairly large population, not just 2 people. My own intuition is that moving from a large population to a very low one, even if that lower one has higher total/average utility, is not necessarily a good thing. I’m not sure what I think when you limit the option set to worlds which all contain a “lot” of people. Moving from world x of population 50,000,000 and 500,000,000 total utility to world y of population 20,000,000 and 600,000,000 total utility is not as clearly bad (in my mind) as moving from world x of population 50,000,000 and 500,000,000 total utility to world y of population 10 and 600,000,000 total utility. But it’s not clearly good either, even if the distribution is equitable.
Like I said, maybe I simply don’t have strong confidence in my intuitions about the goodness or badness of states of affairs of this sort. My thoughts on the goodness or badness of population seem to be along the lines of Nozick’s thoughts regarding the justice of holdings – it’s a matter of pure procedural justice (pure procedural goodness?) such that there is no independent criterion for saying whether a population distribution is good or bad without knowing how we got there. I’m sure I might bend on this principle if presented with extreme enough examples, but on the whole it seems to capture my intuitions, or lack thereof, regarding these sorts of cases.
Three interesting issues:
Modal Pareto vs. Egalitarianism?
As I see it, the motivation for Egal. is concern for the worse off (exceeding one’s concern for the better-off). So I would call Campbell Brown’s “prioritarianism” a form of egalitarianism. (Some in the literature dispute this, but I think it’s the classification that is kind to egalitarianism.)
This being so, I think it is not obvious that an egalitarian would just immediately say “I want to reject the Modal Pareto Principle.” A reasonable egalitarian would see that saying that would make his position implausible. What the egalitarian is most concerned about is choices between options like x and z: given a fixed group of people, the egalitarian cares how you distribute utility among those people; it’s not immediately obvious what (if anything) he should think about creating new people. Moreover–again thinking about what is plausible to say–it is much more plausible initially to say that it would be good to eliminate inequality by redistributing utility, even with some loss in total utility, than it is to say that it would be good to eliminate inequality by eliminating the worse-off people. If not for my argument, one could happily maintain the former without the latter.
As I say, it’s not at all obvious that the egalitarian wants to reject Modal Pareto: this is illustrated by our own Campbell Brown here, saying that actually, z is better than y, even though x is better than z. So I turn now to that.
Better-than relative to a set of alternatives?
It seems that you (Campbell) are denying that “better than” is a 2-place relation, and instead making it a 4-place relation (where two sets of alternatives must be involved). I don’t accept this mainly because I just don’t think I have a concept of this 4-place relation. I think when I decide whether x is better than y, I’m just thinking about x and y. I don’t think how they compare depends on looking at some third and fourth thing.
But let’s avoid that dispute. Let’s say that in all my comparisons, the available alternatives are {x, y, z}. This way we have the same alternatives for all three world-comparisons, and so (I assume) we can apply the Transitivity principle. So, in this case, would you say y is better than x? That z is better than y?
Should potential people count?
A couple of people echo the Narveson line, roughly that you shouldn’t count the interests of the merely potential people when comparing worlds. I call this “the Actualist Bias”, because it’s prejudiced in favor of actual people. So consider two worlds:
World W: One million people, welfare level 60.
World V: The same 1 million people, at welfare level 50, plus another 1 million people at level 50.
In one variant of the view, you say: V is better than W if V is actual, but W is better than V if W is actual. This is because if V is actual, the interests of all 2 million people count, and the loss of a million people at level 50 outweighs the gain of 10 units of welfare for the others. But if W is actual, then only the one million people in W count, and V is worse for them.
In another variant of the view, you say instead: W is just better than V, period. This is because the people who live in V but not in W have no welfare level at all in W (not even level 0), so W is neither better, nor worse, nor equally good for them as anything else. But W is better for the 1 million who live in W. So W is better for some and worse for no one.
I think you should renounce your actualist bias. Consider this alternate possible world:
Alien Invasion: a world just like the actual world, except that some actual people are slightly better off, and vast numbers of additional people exist who are horribly tortured for their entire lives.
Q: Which is better: the actual world, or the Alien Invasion world? According to the Actualist Bias, the Alien Invasion world is better, because it’s better for some actual-world people, and it’s not worse for any actual-world people. The people who are tortured in that world don’t count, because (i) they don’t actually exist, (ii) they have no level of welfare (not even level 0) in the actual world, so the actual world is neither better, nor worse, nor equally good as the Alien Invasion world for them.
I think this is ridiculous.
That’s definitely not what Campbell wrote.
You mention two versions of the Actualist Bias. The first is properly so-called. But the second is not an Actualist Bias at all. Here’s the second ‘version’: “W is just better than V, period. This is because the people who live in V but not in W have no welfare level at all in W (not even level 0), so W is neither better, nor worse, nor equally good for them as anything else.”
That can’t be an Actualist Bias, since neither W nor V is actual.
Finally, I can’t begin to figure out what it is that Campbell said that you (Mike H) believe to entail the ridiculous conclusion about the Alien Invasion case. Is it (this is a wild guess) his Less Strict Consequentialism? If so, then I think you have misunderstood what Less Strict Consequentialism says.
Jamie,
I think you may have missed the bit where Michael stopped talking to me and started talking to some other, unnamed person. I don’t think the stuff about “Actualist Bias” was addressed to me.
Michael,
1. I don’t deny that “better than” is a 2-place relation. I just think that the two places might be filled by things a little more complex than possible worlds alone. Compare, for example, the “subset of” relation. This holds between quite complex things, infinite sets and so on, but it’s still a 2-place relation.
2. If all three worlds are alternatives, then I say you ought to choose y. In this case, like the case where the alternatives are x and z only, no matter which world you choose some group of people will be worse off than they would have been if you had chosen differently. The a-people will be worse off iff you choose either x or y, and the b-people will be worse off iff you choose z. But the a-people are only slightly worse off in y, whereas they are a lot worse off in x, and the b-people are a lot worse off in z. So you ought to choose y. (Perhaps, then, I have to reject your Benign Addition Principle.)
This brings out what might seem an odd consequence of my view. I say you ought to choose z out of {y,z}, but y out of {x,y,z}. But how, you might ask, could the addition of a third alternative, x, make a difference to whether you ought to choose y or z? I agree that it’s odd, but think it can be explained. There’s a certain bad feature that z has only when x is present as an alternative: the b-people are worse off in z than in some alternative. It’s this bad feature that makes the difference between, as we might put it, y being better than z when x is an alternative and worse than z when x is not an alternative.
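For what it’s worth, all four of Campbell’s verdicts (the pairwise (a)-(c) and the three-way choice of y) can be reproduced by one simple rule. The following Python sketch is my own reconstruction, not Campbell’s formulation: choose the world whose worst-off “losers” (groups made worse off than under some alternative) sit at the highest welfare level, treating a world with no losers as best of all.

```python
WORLDS = {
    "x": {"a": 50, "b": 50},   # a-people and b-people both at 50
    "y": {"a": 101},           # only the a-people exist
    "z": {"a": 102, "b": 1},   # a-people at 102, b-people at 1
}

def losers_level(world, alternatives):
    """Welfare level of the worst-off group that is worse off in this
    world than it would be in some alternative; infinity if no existing
    group is made worse off (non-existent groups are simply absent)."""
    levels = [level for group, level in WORLDS[world].items()
              if any(WORLDS[alt].get(group, level) > level
                     for alt in alternatives)]
    return min(levels, default=float("inf"))

def choose(alternatives):
    return max(alternatives, key=lambda w: losers_level(w, alternatives))

print(choose(["x", "y"]))       # y: no one is worse off in y
print(choose(["y", "z"]))       # z: no one is worse off in z
print(choose(["x", "z"]))       # x: losers at 50 beat losers at 1
print(choose(["x", "y", "z"]))  # y: losers at 101 beat 50 (x) and 1 (z)
```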
Campbell,
Three more interesting issues, about your view:
1. What are the relata of “better than”?
I don’t think “better than” applies to the sort of things you identify. When I say that Star Trek: The Next Generation is better than the Original Series, I’m not saying that an ordered pair is better than another ordered pair. I’m saying that one television program is better than another television program. Similarly, when deciding what to do, people often talk about which actions or outcomes are better than which. It would be quite strange for them to start talking about whether some mathematical object is better than another one.
I think you could modify your view to not have this undesirable consequence.
2. Independence of Irrelevant Alternatives
As you mention, your view violates this principle of decision theory (the “odd consequence”). I don’t have much to add, just that I wasn’t convinced by the explanation offered. It seems to me that the allegedly bad feature that z has only when x is available is a Cambridge property and does not actually alter the nature of z, and so does not change how good z is. The fact that some alternative is introduced that would be better for the b-people wouldn’t change how good z is for the b-people. So it wouldn’t change how good z is. This seems to me a fair analogy: If some really tall person is created, that won’t make me any shorter. I’ll be shorter than him, but I won’t thereby be any shorter than I am now–I won’t lose a couple of inches, for example.
3. Transitivity
In some sense you’re preserving Transitivity: I think that on your view, the relation “x is better than y” is transitive, though it turns out that the relation “(x,{x,y}) is better than (y,{x,y})” is not transitive. I’m not sure you gain any plausibility over standard intransitive theories (as in Temkin and Rachels). And you might have the same problems. One traditional problem is the Money Pump: Suppose that you prefer A out of {A,B}, B out of {B,C}, and C out of {A,C}. You presently have A. I offer to let you pay a small amount of money to be allowed to trade A for C. I assume you’d accept, provided the money was small enough. I then, similarly, let you pay a small amount of money to trade C for B. You accept. I then let you pay a small amount of money to trade B for A. You accept. You are now back to the same A you started with, only with less money. It seems that something has gone wrong here. Since you acted in accordance with your preferences at each stage, it seems that there’s something wrong with those preferences.
(Note: in the scenario, each time I offer you a trade, you don’t know if I’m going to offer you any more trades in the future. That’s supposed to make it so that your sets of alternatives in the respective stages are {A,C}, {B,C}, and {A,B}, and not, at any stage, {A,B,C}.)
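To see the pump run mechanically, here is an illustrative Python sketch under the stated assumptions (the starting money and fee are made-up numbers):

```python
# Pairwise preferences from the scenario: A out of {A,B}, B out of {B,C},
# C out of {A,C}. PREFERRED[(held, offered)] is what the agent keeps.
PREFERRED = {("A", "B"): "A", ("B", "A"): "A",
             ("B", "C"): "B", ("C", "B"): "B",
             ("C", "A"): "C", ("A", "C"): "C"}

holding, money, fee = "A", 100.0, 1.0   # illustrative numbers
for offered in ("C", "B", "A"):
    # The agent sees only {holding, offered} at each stage and pays a
    # small fee whenever he prefers the offered item to what he holds.
    if PREFERRED[(holding, offered)] == offered:
        holding, money = offered, money - fee

print(holding, money)  # A 97.0 -- back to the original A, three fees poorer
```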
Here’s some more about the actualist bias:
Narveson says (in a couple of articles from 1967 and 1973), roughly, that only actual people count (sometimes, he seems to mean only present actual people), so in deciding whether to have children, you should ignore their future welfare and only look at the effects on present actual people. He also says that since these prospective people have no welfare level at all in the alternative in which we don’t create them, it’s not the case that they’ll be better off if we create them than if we don’t. Parfit also discusses (and takes seriously) the view that “existence cannot benefit”.
A couple of things in this thread are reminiscent of Narveson’s view. One is Jussi’s remarks: “This claim seems to assume that ex ante we have obligations to take into account merely possible people and their interests. … I find few reasons to be concerned about their interests.”
Also, Campbell’s remarks: “If you choose x, then some people, the a-people, will be worse off than they would have been if you’d chosen differently. But if you choose y, no one will be worse off; in particular, the b-people won’t be worse off because they won’t exist.”
My point here isn’t really to criticize particular people. My point is more like this: there’s a certain interesting constellation of views. One or more of them might initially be tempting. But they’re refuted by the Alien Invasion example. Thus, look at these interesting views:
(1) In evaluating worlds, only the interests of actual people count.
(2) If one world is better than another for some people and worse for no one, then it is better.
(3) Merely possible people have no welfare levels.
(4) Existence cannot benefit a person.
(2) and (3) support the claim that Alien Invasion is better than the actual world. To refute (1), just imagine a scenario in which one actual person is slightly better off, but a trillion new people are created who are horribly tortured for their entire lives; (1) implies that this would have been better than the actual world. I think that if (4) is the case, then it’s also the case that [that these people would have negative welfare] (even if true) is no reason against creating some people. (Aside: I know Narveson denies this last point. I’ll elaborate on that if someone cares.) So, (1)-(4) are all false.
I think (3) is true. People who don’t exist don’t have heights, or weights. It seems unlikely that they have welfare.
I think there is something right about the view that the goodness of an option depends on its alternatives. But I find it very confusing, especially the way Campbell put it, so that it seems like relativity rather than dependence.
Here’s the kind of dependence on alternatives that I understand. In John Broome’s example (in Weighing Goods), Maurice is choosing between vacation prospects. If the choice is between rock climbing and a museum tour, he thinks the museum tour is better (because rock climbing is scary). If the choice is between museums and lounging at home, he thinks staying home is better (museums are boring). But if the choice is between lounging at home and rock climbing, he thinks home is worse, because it would be cowardly. (The museum tour wouldn’t be cowardly but rather cultured.)
I understand this example. The outcomes (staying home when the alternative was rock climbing) and (staying home when the alternative was museums) are different outcomes. The first contains cowardice while the other doesn’t. This is not relativity. If Maurice stays home, then one or the other outcome will have occurred.
Finally, I like the idea of a bias in favor of the actual. Campbell put it nicely, too, by making the relation in question the “you ought to choose x when the alternative is y” relation (on x, y). Then the bias in favor of the actual seems more plausible. We don’t have to worry about the comparison between a pair of merely possible worlds, in that case. Still, I think it is probably wrong. Too bad.
Michael,
Firstly, I should answer your question by saying that I have a strong egalitarian leaning in terms of equality of resources (due to diminishing marginal value of wealth arguments), but have no egalitarian leaning when it comes to distribution of welfare (the sense in question in your argument).
I like the argument a lot. However, one thing that hasn’t been noted is that the unrepugnant premise will be denied by many prioritarians. For example, if they think that the total good is the sum of the square roots of the individual goods, then the good of x is 2 million times root 50, and the good of y is 1 million times root 101. Thus the good of x is much higher than that of y.
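For concreteness, a quick check of those figures (a Python sketch using the square-root weighting just described):

```python
import math

# Prioritarian "total good" as the sum of square roots of individual
# goods, with the populations and welfare levels of worlds x and y.
good_x = 2_000_000 * math.sqrt(50)    # about 14,142,136
good_y = 1_000_000 * math.sqrt(101)   # about 10,049,876
print(good_x > good_y)  # True: this prioritarian ranks x above y,
                        # and so denies the Unrepugnant Premise
```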
Note that this seems puzzling due to a tension in the term ‘total good’. You have used it to mean the sum of the individual goods (quite a reasonable definition!). I have used it here to mean the overall good after all appropriate adjustment and aggregation (also a reasonable definition). Used in the second way, prioritarians will agree with the premise, but find the argument invalid. Used in the first way, they will reject it.
1. You seem happy to say that possible worlds may be relata of “better than”. What do you take (non-actual) possible worlds to be? I assume you’re not a modal realist. Perhaps you think they’re abstract objects. But then, I ask, if abstract objects of one kind (i.e. merely possible worlds) can be relata of “better than”, why not those of another (i.e. ordered pairs of worlds and sets of worlds)?
You might say that while it’s fairly clear what possible worlds represent (i.e. “ways the world might be”, as it’s often put), it’s not at all clear what my ordered pairs represent. I admit this part of my view is a bit obscure; I need to think more about it. Perhaps I could say something like the following. These ordered pairs represent “ways in which the world might come to be a way”. Let x be a possible world; it represents a way the world might be. There are different ways the world might come to be the way x represents it as being. Here’s one way: you could choose to make it that way when your only alternative was to make it the way y represents it as being. Here’s another: you could choose to make it that way when your only alternative was to make it the way z represents it as being. The former way is represented by the ordered pair (x,{x,y}), the latter by (x,{x,z}).
2. The analogy with Broome’s example, suggested above by Jamie, is useful here. Maurice’s lounging at home when he could go rock climbing has a certain property which is not shared by his lounging at home when he could not go rock climbing: the property of being cowardly. Similarly, your choosing z when you could choose x has a certain property which is not shared by your choosing z when you could not choose x: the property of making the b-people worse off. If calling something a “Cambridge property” implies that it is morally insignificant, then I deny that this is a Cambridge property. Surely it’s morally significant that something makes people worse off.
3. I haven’t said anything to imply that “better than” is intransitive. Let me clarify. Earlier I made three claims — (a), (b), and (c) above — about what you ought to choose depending on what your alternatives are. Given my Less Strict Consequentialism, those three claims imply the following:
(a’) (y,{x,y}) is better than (x,{x,y}).
(b’) (z,{y,z}) is better than (y,{y,z}).
(c’) (x,{x,z}) is better than (z,{x,z}).
There’s no intransitivity there.
Of course I’m aware of the money pump argument against intransitive preferences. But I think that’s a red herring here. I don’t see how my view makes anyone vulnerable to money pumping.
Campbell, here is (one spot) where I don’t feel that I understand what’s going on.
z is a possible world. Either you make the choice in question, in z, or you don’t. So, however significant your choice might be, that significance is part of z. So why do we need to have the alternatives added to the relata of better than?
Your kind of sensitivity to alternatives must be different from the kind that shows up in the example of Maurice, I think.
Campbell, I think I didn’t explain the points about Transitivity clearly enough, so here’s some elaboration.
(I)
This is what I take transitivity to be: Suppose "Phi(x,y)" is an open sentence with 2 free variables. Then Phi(x,y) (the relation denoted by "Phi(x,y)") is transitive if and only if: for all x, y, and z, if Phi(x,y) and Phi(y,z), then Phi(x,z).
I’m taking this as a general condition applicable to relations of the form Phi(x,y), not just atomic relations like Rxy.
Now let "Phi(x,y)" be "(x,{x,y}) is better than (y,{x,y})". Then doing the appropriate substitutions, the above definition of transitivity yields:
The relation [(x,{x,y}) is better than (y,{x,y})] is transitive if and only if: for all x, y, and z, if (x,{x,y}) is better than (y,{x,y}) and (y,{y,z}) is better than (z,{y,z}), then (x,{x,z}) is better than (z,{x,z}).
This is more complicated than the usual characterization of the transitivity of better-than, but it is a correct application of the above general definition, for the case of the more complicated view of the better-than relation. Now, what you said in your last comment–that (y,{x,y}) is better than (x,{x,y}), that (z,{y,z}) is better than (y,{y,z}), and that (x,{x,z}) is better than (z,{x,z})–is just a counter-instance to the transitivity condition (rearrange the variables a little).
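And a mechanical check of the counter-instance (a Python sketch; here xRy abbreviates "(x,{x,y}) is better than (y,{x,y})", and the three pairs encode Campbell’s (a′)-(c′)):

```python
# Campbell's verdicts (a')-(c'), written as a relation R where
# ("a", "b") in R means (a,{a,b}) is better than (b,{a,b}).
R = [("y", "x"), ("z", "y"), ("x", "z")]

# Transitivity would require: if aRb and bRc, then aRc.
violations = [(a, b, c)
              for (a, b) in R
              for (b2, c) in R
              if b == b2 and a != c and (a, c) not in R]
print(violations)
# [('y', 'x', 'z'), ('z', 'y', 'x'), ('x', 'z', 'y')] -- e.g. zRy and
# yRx hold, but zRx fails (indeed xRz holds), so R is not transitive.
```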
(II)
I didn’t see how you planned to avoid the Money Pump problem. Let me be more explicit about that. Assume:
(1) The agent prefers A out of {A,B}, B out of {B,C}, and C out of {A,C}, and these preferences are not reversed by the loss of a small enough amount of money.
(2) The agent starts out holding A.
(3) At each point, a single trade is offered, and the agent doesn’t know whether any further trades will be offered, so his alternatives at each point are just the two options then on the table.
(4) At each point, the agent chooses in accordance with his preferences over the alternatives then available.
From all that, it follows that the agent will, at the first point, choose to trade A and a small amount of money for C; then, at the second point, he’ll trade C and a small amount of money for B; and at the third point, he’ll trade B and a small amount of money for A.
That’s what I was trying to get at earlier. Let me know if you think one of those assumptions (1-4) isn’t right, or if you think the money-pumping doesn’t result from them.
Michael,
I can’t quite put my finger on this, but it seems compatible to say both of these things:
1. We have no reason always to prefer or bring about a world with more people with worthy lives, other things being equal. At least, we have no such reason of this kind: that the people who would live those lives would prefer their worthy lives to non-existence, and that it is good to satisfy the interests of possible people in the same way as those of actual people. Saying this would be to deny that satisfying the preferences of possible people about existence is a good-making feature.
I worry that if we accepted the claim that satisfying the preferences of possible people is good-making, and that we have some obligation to bring about a better world, we would get obligations to make a lot of babies. I don’t want to say that there can never be such obligations, but I’m not sure we get them so easily.
2. The alien invasion world is not better than the actual world. We can say this because we can think that constant agony is bad-making, full stop, no matter whose it is and no matter what is in their interests.
You are probably right that the egalitarian doesn’t want to reject Modal Pareto. But it seems that she has to. If some intrinsic value is given to equality, then it looks like we can always construct distribution scenarios in which the value of adding merely worthy lives does not outweigh the value lost through a less equal distribution. In these cases the egalitarian is forced to make a choice.
I also wonder whether the main motivation for egalitarianism can be the concern for the worst-off. What many egalitarians want to say is that equality is intrinsically good, and not because it is a means to improving the condition of the worst-off. If the concern were merely to make the worst-off better off, then surely there are more effective views for that purpose (like the prioritarian views, which were oddly classified as egalitarian).
Jamie:
Good point. I see that what I’ve said above is a bit confusing. This is due to a misleading use of the word “world”. What I above call “worlds” are really better thought of not as worlds, but as sets of worlds of a certain kind: roughly, worlds with a common distribution of wellbeing.
Recall the diagram in Michael’s original post. What does the rectangle labelled “World x” represent? The label suggests it represents a single world. But that’s not quite right. There are very many, probably infinitely many, possible worlds in which wellbeing is distributed in the way represented by the rectangle. More accurately, then, the rectangle represents a set of worlds: the set of all and only those worlds in which wellbeing is distributed in the way represented. In effect this set is a wellbeing distribution. So let X, Y, and Z be the wellbeing distributions (where these are sets of worlds) represented in the diagram by “World x”, “World y”, and “World z” respectively.
This clarification reveals a hidden assumption in Michael’s argument. He assumes that the goodness of a world depends solely on its wellbeing distribution. So, for example, if one X-world is better than one Y-world, then every X-world is better than every Y-world. This assumption I deny. I think that some Y-worlds are better than some Z-worlds, and some Z-worlds are better than some Y-worlds.
To see which are which, we need to consider also the wellbeing distributions of the worlds’ alternatives. As you suggest, which worlds are alternatives to a given world will depend on facts about that world itself. Thus, for example, some but not all X-worlds have Y-worlds as alternatives. Let us write (X,A), where X is a wellbeing distribution and A a set of wellbeing distributions, to denote the set of every world x such that x is an X-world and, for all Y in A, x has a Y-world as an alternative. Then, in Michael’s example, I say every (Z,{Y,Z})-world is better than every (Y,{Y,Z})-world, but every (Y,{X,Y,Z})-world is better than every (Z,{X,Y,Z})-world.
Michael:
It’s true that my view implies that a certain relation is intransitive; I never meant to deny that. All I deny is that my view implies that “better than” is intransitive. Just to clarify, let us introduce a new relation R, defined by the following: for any x and y, xRy iff (x,{x,y}) is better than (y,{x,y}). (It’s misleading to say, as you did, that R is the relation “(x,{x,y}) is better than (y,{x,y})”, because that sentence — the one in quotes — is more naturally used to express what is, according to me at least, a different relation, namely, “better than”.) Then although I believe that R is intransitive, I deny that R is “better than”. For one thing I hold that, in your example, zRy and z is not better than y; so the two relations, R and “better than”, cannot be the same. Since I deny that R is “better than”, it doesn’t bother me that my view implies R is intransitive. The world is full of intransitive relations. What’s the harm of one more?
Now, about the money pump. As I’m using it here, “choose” is a “success term”. That is, you choose a world x only if x obtains. (Maybe it would be better to say “bring about”, rather than “choose”.) But no more than one world can obtain; there’s only one actual world. So you cannot, say, choose y at one time and choose z at another time; otherwise both y and z would obtain, which is impossible because they’re distinct. Therefore, the money pump can’t get started.
Right. Now it doesn’t look weird anymore.
We often think that a pair of alternatives that would look the same if only certain kinds of features mattered are in fact different because other features matter, too. You think the causal ancestry of a distribution matters, and in a particular way: its having been chosen when certain other distributions could have been chosen instead is a significant feature. (This reminds me of the Peter Diamond-inspired examples in which an outcome’s being the result of a coin toss is significant to the value of the outcome.) Now just one more thing: you can recover the intuitive idea that better than is a relation between (sets of) worlds, or states of affairs. It doesn’t have to be between those funny ordered pairs. So, for instance, we can straightforwardly say that X brought about by a choice whose alternative was Y is an outcome distinct from X brought about by a choice whose alternative was Z. And these outcomes are just sets of worlds (propositions, states of affairs, etc.).
I have my doubts about your defense against the Money Pump argument, but I have to think about that (it will be a splendid way for me to procrastinate).
Jussi,
I think it might just be true that we have lots of baby-creating obligations. This would be a revisionary ethical claim, but I see it as in line with many revisionary ethical claims that you get from consequentialism, particularly claims about our many strong, positive obligations to help others, which we’re both violating right now by wasting time blogging when we could be saving lives.
I say that *might* be true. Or, maybe (impartialist) consequentialism is false. In that case, maybe we lack baby-creating obligations because, even though (very often) the world would be much better were we to create another child, we aren’t obligated to make the world better (at least not in this way).
About the Maurice vacation example: I was unsure what it is supposed to show. I haven’t read Broome’s discussion, so I don’t know how he intended it to be taken.
My initial reaction is that there’s a mistake in moral reasoning there. If M actually finds museums boring but he’d still prefer a museum tour over rock climbing because rock climbing is scary, then I’d say that his choosing the museum tour over rock climbing would be just as cowardly as choosing staying-at-home over rock climbing; nor would that choice be exhibiting a moral virtue such as appreciation of culture.
So then I thought of a way to clearly make the moral reasoning good: Let’s say that M is justifiably concerned about appearing cowardly to others. Let’s say that if M were to choose staying-home over rock-climbing, then M’s uncle would think M is cowardly and would kill M because he hates cowards. Let’s say that M would choose museum-tour over rock-climbing because he’s rationally afraid of hurting himself while rock-climbing, and the uncle will not think he’s a coward if he chooses museum-tour in this case. M would also prefer staying-home over museum-tour, because he finds museums boring. But he would prefer rock-climbing over staying-home because he rationally wants to avoid his uncle killing him.
So now the normative reasoning is all sound. Does the new example show something interesting? Very superficially, it looks like a counterexample to Transitivity:
Museum is better than Rock Climbing (alternatives: Museum, Rock Climbing)
Rock Climbing is better than Home (alternatives: Rock Climbing, Home)
Home is better than Museum (alternatives: Home, Museum)
(The first line is read: When the alternatives are going to the museum and going rock climbing, going to the museum is better than going rock climbing.) But of course, it’s not a real counter-example to Transitivity (in any interesting sense), because what we’re doing is changing the consequences of one of the options in different parts of the story: in the second comparison, Home has death as a consequence; in the last comparison, Home does not have this consequence. So no advocate of Transitivity would count the "Home" options in those two comparisons as genuinely the same option.
You might say, what the example shows is that the value of some option can depend on what the alternatives are, and that’s something interesting. But note something about this sort of dependence: we’re just imagining that changing what alternative(s) are present changes the causal consequences of the Home option. Of course that’s possible, and of course if you set up a scenario like that, then the goodness of the Home option can also differ depending on what alternatives are present.
So my version of the example doesn’t have a normatively exciting or controversial lesson. What would be interesting would be if you can change the value of an option, O, by changing what the alternatives to O are, without changing the causal consequences of O, and without changing any of O’s intrinsic, non-moral properties. The original version of the example would do this, but as I say, I don’t buy the normative reasoning there.
Mike,
Maurice thinks going to the museums when the alternative is rock climbing isn’t cowardly. You think he’s wrong. I don’t think it matters who’s right here. The idea was to see whether Campbell’s example was like Maurice’s example as it appears to Maurice. Apparently it is, so the mission of the example is accomplished.
Now we can ask who’s right about the case at hand, namely, Campbell’s. So, let W(x, y) be the world in which the distribution is x and this was chosen when y could have been chosen instead,
W(y, x) be the world in which the distribution is y and this was chosen when x could have been chosen instead,
and likewise W(x, z), W(z, x), and so on.
Is it true that W(y, x) is better than W(x, y), W(z, y) better than W(y, z), and W(x, z) better than W(z, x)?
Notice that there is no violation of transitivity here; furthermore, the relata of better than are worlds.