Actualist Utilitarianism (AU) is, roughly stated, the view that we ought to act so as to maximise the sum total of actual people’s utilities. (By utility is here meant a numerical representation of a person’s level of wellbeing, or welfare.) It is distinguished from regular utilitarianism in that it excludes the utilities of “merely possible people” from figuring in our moral judgements. And, for this reason, it might be motivated by various “person-affecting” intuitions to the effect that merely possible people are morally insignificant. I shall not, however, try to develop that line of motivation here. Rather, I want to focus on an objection to AU advanced by John Broome in his recent book Weighing Lives. (Although Broome doesn’t consider the case of AU in particular, he does object to “actualist axiologies” more generally, and his objection is applicable to AU. With that clarification noted, I shall for simplicity proceed as though Broome’s objection is aimed specifically at AU.)
The objection, in short, is that AU is incapable of giving practical advice. As Broome understands AU, it implies that what the agent ought to do in a given situation of choice may sometimes depend on what he actually does in that situation. Thus, if the agent were to ask “ought I to do X?”, then the best practical advice that AU could give him would be prefaced with “well, that all depends on whether or not you actually do X.” But that would be no practical advice at all; usually we want to know whether or not it’s permissible to do something in advance of our having done or not having done it. Understood in this way, then, AU will be practically impotent — of no use at all in deciding what to do.
As I shall argue, however, AU need not be understood in this way. Below I suggest two formulations of AU, and show that only one of these is vulnerable to Broome’s objection.
It will be helpful to introduce a simple formal framework.
We shall write
Perm_@,W(p)
(where @ is a possible world, W is a set of possible worlds of which @ is a member, and p is a proposition) to mean “It is permissible, relative to <@,W>, that p”. We interpret W as the set of worlds that are open, or available, to the agent at the time of acting. These represent the different outcomes that the agent’s action might have — the different ways the world might be, depending on how the agent chooses to act. (If we picture modal space as a kind of tree diagram, with each branch of the tree representing a distinct possible world, then we can think of the agent as being located at a node on the tree and faced with the choice of which branch to go down. The branches of the tree that go through the node in question are the possible worlds in W.) And we interpret @ as the actual world. We assume that the actual world is always open to the agent; hence, @ is a member of W.
Broome’s objection may then be stated in terms of the following condition.
Actual World Independence (AWI): For any worlds @ and @’ in W, Perm_@,W(p) iff Perm_@’,W(p).
This states that, if we hold fixed the set of available worlds W while allowing the actual world to vary between @ and @’, then the permissibility of any proposition must remain unchanged. A mere difference in which world is actual is insufficient to make a difference to permissibility. Broome’s objection is that AU violates AWI.
Now, for any possible world w, let “the w-people” refer to all the people who exist in the complete history of w. Then we have the following two versions of AU:
Actualist Utilitarianism 1 (AU1): Perm_@,W(p) iff there exists some world w in W such that (1) p is true at w; and (2) for any world w’ in W, the sum total of @-people’s utilities in w is at least as great as that in w’.
Actualist Utilitarianism 2 (AU2): Perm_@,W(p) iff there exists some world w in W such that (1) p is true at w; and (2) for any world w’ in W, the sum total of w-people’s utilities in w is at least as great as that in w’.
The difference between these two can be understood as follows. Both AU1 and AU2 say that it is permissible that p iff p is true at the best (or one of the best) available worlds, where the goodness of a world is given by the sum total of actual people’s utilities in that world. However, on AU1 we hold fixed the actual world when evaluating different worlds, whereas, on AU2, we evaluate each available world w under the hypothesis that w is the actual world. In the terminology of “two-dimensionalism”, AU1 evaluates worlds considered as counterfactual, whereas AU2 evaluates worlds considered as actual.
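To make the difference concrete, here is a small Python sketch of the two definitions. It is only an illustration, and it rests on some simplifying assumptions of mine that go beyond the definitions above: a world is represented as a dictionary from the people who exist in it to their utilities, a proposition as a list of the worlds at which it is true, and a person who does not exist in a given world is assumed to contribute nothing to the utility sum for that world.

def total(world, people):
    """Sum, in `world`, the utilities of those of `people` who exist there."""
    return sum(world[p] for p in people if p in world)

def au1_permissible(prop, actual, W):
    """AU1: p is permissible iff p holds at some available world that
    maximises the sum of the actual people's utilities."""
    people = set(actual)                           # the @-people, held fixed
    best = max(total(w, people) for w in W)
    return any(total(w, people) == best for w in W if w in prop)

def au2_permissible(prop, actual, W):
    """AU2: p is permissible iff p holds at some available world w which,
    evaluated under the hypothesis that w itself is actual, is at least as
    good as every alternative. The `actual` argument is accepted only for
    parallelism with AU1; it does no work, which is why AU2 satisfies AWI."""
    return any(all(total(w, set(w)) >= total(v, set(w)) for v in W)
               for w in W if w in prop)

The only difference between the two functions is whose utilities get summed: the @-people's throughout (AU1), or, for each candidate world, that world's own people's (AU2).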
Clearly, AU2 implies AWI. (To see this, it should be sufficient to note that the term “@” does not appear on the right-hand side of the biconditional in AU2.) It’s also fairly clear, I think, that AU1 is inconsistent with AWI, given some very uncontroversial assumptions. But rather than trying to give a formal proof, let me instead offer the following illustration.
Suppose that Adam and Eve are contemplating whether or not to have a child. To keep things simple, assume that there are only two worlds open to them at this time: world w, in which they have a child; and world w’ in which they don’t. And assume no other children will be born in either case. Hence, the w-people are Adam, Eve, and Cain (their child); and the w’-people are just Adam and Eve. Suppose further that, although Cain has a happy life in w, Adam and Eve find caring for Cain rather burdensome, so their utilities are slightly diminished by his existence. Hence, the sum total of w-people’s utilities is greater in w than in w’ (the gain in Cain’s utility outweighs the losses to Adam and Eve); whereas the sum total of w’-people’s utilities is greater in w’ than in w (because Cain’s utility is ignored in this case).
Now, let W= {w,w’}, and let c be the proposition “Adam and Eve have a child”. Then AU1 implies that Perm_w,W(c) and ~Perm_w’,W(c). That is, if Adam and Eve actually have a child, then it is permissible for them to do so, but if they do not actually have a child, then it is not permissible for them to do so. Clearly this violates AWI. On the other hand, AU2 implies that Perm_w,W(c) and Perm_w’,W(c). Hence, on AU2, it makes no difference what Adam and Eve actually do; in either case, it is permissible that they have a child.
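For concreteness, here is the example run through the sketch above. The numbers are hypothetical; any assignment fitting the description (Cain happy, Adam and Eve slightly worse off for having him) would give the same verdicts.

w = {"Adam": 9, "Eve": 9, "Cain": 10}    # they have a (happy) child
w_prime = {"Adam": 10, "Eve": 10}        # they don't
W = [w, w_prime]
c = [w]                                  # "Adam and Eve have a child"

au1_permissible(c, w, W)         # True:  permissible if w is actual
au1_permissible(c, w_prime, W)   # False: impermissible if w' is actual, so AWI fails
au2_permissible(c, w, W)         # True
au2_permissible(c, w_prime, W)   # True:  same verdict whichever world is actual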
So AU2 dodges Broome’s objection. But is there anything else to be said for it? Let me briefly note two further features of AU2, one of which seems good, and the other perhaps not so good. As Broome observes, most people’s intuitions seem to support a certain asymmetry. In general, couples have a “free choice” whether or not to have children. They are not obligated to have children, nor are they obligated not to have children. But in special circumstances, where it is known that the child would have a life “not worth living”, couples are obligated not to have the child. Thus, the asymmetry: if the child would be happy, there’s no obligation to have the child; but if the child would be unhappy, there is an obligation not to have the child.
The further good feature of AU2 is that it accommodates this asymmetry. We’ve already seen an example of the happy child case. In the example of Adam and Eve, where it was supposed that Cain would be happy, AU2 implies that Perm_@,W(c) and Perm_@,W(~c) — i.e. either course of action is permissible. But suppose we alter the example so that Cain would be very unhappy, receiving a negative utility in w. Then AU2 implies that ~Perm_@,W(c) and Perm_@,W(~c) — i.e. it is obligatory that Adam and Eve do not have a child.
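In terms of the sketch above, the altered case just makes Cain's utility in w negative (again, the particular numbers are merely illustrative):

w = {"Adam": 9, "Eve": 9, "Cain": -5}    # unhappy Cain
w_prime = {"Adam": 10, "Eve": 10}
W = [w, w_prime]
c, not_c = [w], [w_prime]

au2_permissible(c, w_prime, W)       # False: having the child is impermissible
au2_permissible(not_c, w_prime, W)   # True:  not having the child is obligatory
# (The second argument is idle under AU2; the verdicts are the same
#  whichever world is taken to be actual.)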
But now to the further bad feature. Given very plausible assumptions, AU2 is inconsistent with the following condition:
No Moral Dilemmas (NMD): ~Perm_@,W(p) implies Perm_@,W(~p).
This states that for any proposition p, it must be the case either that p is permissible or ~p is permissible. If this condition is violated, then there will be situations in which, no matter what the agent does, he will do something impermissible. Of course, the issue of moral dilemmas is hotly debated, with at least some philosophers arguing that such dilemmas are unproblematic. So it’s unclear how bad (if at all) this feature of AU2 is.
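For concreteness, here is one way of generating such a dilemma in the sketch above. Suppose, hypothetically, that Adam and Eve would be happier having Cain, but Cain himself would be so unhappy that his loss outweighs their gain:

w = {"Adam": 12, "Eve": 12, "Cain": -10}   # parents better off, Cain miserable
w_prime = {"Adam": 10, "Eve": 10}
W = [w, w_prime]
c, not_c = [w], [w_prime]

au2_permissible(c, w, W)       # False: judged by the w-people, w' is better
au2_permissible(not_c, w, W)   # False: judged by the w'-people, w is better

Neither having the child nor not having it comes out permissible, so NMD fails.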
I like (AU1). What’s so great about Actual World Independence, anyway?
(AU2) seems to me to rank worlds incorrectly, although its obvious failures do not appear when it only has to rank available worlds. Take an uninhabited world; (AU2) seems to say that it is one of the best. (It is not available to any agent, of course, since it is deserted for all of its history.)
(AU2) reminds me of Jeffrey-ratifiability, by the way, which always struck me as a strange property (of acts) to care about.
Hey, this is off topic, but somebody at PEA Soup should post a notice of the up-coming (September) Second Madison Metaethics Workshop. This year’s program features Tim Scanlon and Judy Thomson as keynote speakers, and the rest of the line-up is irresistible (PEA Soup regular Robert Johnson, for one).
The program doesn’t seem to be up yet, but I have an advance copy if someone wants to post it here.
I agree with Jamie that it isn’t obvious that (AWI) is correct. But I have pretty anti-utilitarian intuitions in general, so I’m not sure what they count for in this game.
What I don’t yet see is why (AU1) doesn’t have some of the advantages claimed for the other version. There does seem to be a related problem if we modify the Adam, Eve, and Cain case, and maybe that is why you prefer (AU2).
Take the unhappy Cain example. If Adam and Eve have Cain, then the actual world is less good for some actual person than an alternative would have been for that person (after all, he would be better off never having existed) and better for no one, so it is a worse world than that one. Hence having Cain is impermissible according to (AU1). If they don’t have Cain, the actual world is better for some actual people (Adam and Eve) and worse for none; hence it is permissible to have no children in this world. So far we have the advantages of (AU2). For the case described, the two seem on a par.
Note also that this seems to show that the theory does give advice even if the facts determining the advice may depend on which world is actual. It says do not have a kid, because if you do you will have done something impermissible and if you don’t you will have done something permissible.
I suppose that the worrisome case for (AU1) is the one in which Adam and Eve would be happier to have a Cain who would be better off not existing. For if the world in which only Adam and Eve exist is actual, there does exist a world in which they (the only actual people) are happier than in the actual world, so in the actual world having Cain would be permissible. On the other hand, were the actual world to be one in which they had Cain, one of the actual people, Cain, would be worse off than in an alternative where he did not exist, enough so as to outweigh the benefits to Adam and Eve. In that world having Cain would be impermissible. I guess this is bad for the theory and not a consequence of (AU2). It seems that (AU2) doesn’t have this consequence because it would not say that the permissible thing to do in an actual world where Cain does not exist is to have the kid, because we are to worry about the people who would be actual if we were to do the action in question, not just those who are actual given what we actually do.
Have I missed something?
Mark,
You write:
“It says do not have a kid, because if you do you will have done something impermissible and if you don’t you will have done something permissible.”
I’m not sure that’s quite right. The truth-values of the last two conditionals you mention will depend on which world is actual, according to AU2. That is, if the world where Adam and Eve have Cain is actual, then the conditional “if you were not to have Cain, you would do something impermissible” comes out true. But if the other world is actual, then the same conditional comes out false. Things start getting pretty weird in this area, because of the “two-dimensionalism”. It’s one thing to say “if you have a child, …”, yet another to say “if you actually have a child, …”. These correspond to the two ways of thinking of possible worlds — as different counterfactual possibilities, or as different ways the actual world might be. My brain starts to hurt when I think about this for too long.
I think AU2 implies that the last example you describe is a moral dilemma. Neither having the child nor not having it is permissible. I discuss an example very much like this in a paper of mine, “Consequentialise This”. So, I think the example might be a problem for AU2.
Campbell writes:
“I’m not sure that’s quite right. The truth-values of the last two conditionals you mention will depend on which world is actual, according to AU2. That is, if the world where Adam and Eve have Cain is actual, then the conditional “if you were not to have Cain, you would do something impermissible” comes out true. But if the other world is actual, then the same conditional comes out false.”
Should ‘impermissible’ here be ‘permissible’? Remember, this is the example in which Cain is better off never having existed, so that if Cain exists, the alternative world where he does not is better for some actual person, namely Cain.
In any case, I may be having trouble with the interpretation of the @-people. I was thinking of it in such a way that my actions change who the actual people are. And then my thought was that it was true that, had I done something leading to the existence of Cain (say), Cain would have been an @-person, and further that there would then have been an actual person (Cain) doing significantly less well than he would have been in the world where I do not bring about his existence. And that leads me to think I should say that if I bring about Cain’s existence I will have done something impermissible.
I can see that there might be another reading on which ‘the @-people’ functions as a name for the people in the actual world, so that in other worlds, even those where they don’t exist, the name still refers to them. Read that way, there is no actual person in the Cain-creation scenario who would be doing worse if I brought Cain about. On that reading it would be false to say that, had I brought Cain into existence, I would have done something wrong according to (AU1) (in those cases where I don’t bring Cain into existence).
This stuff makes my head hurt too, so I may be (embarrassingly publicly) making a simple error and not seeing what it is.
No, nothing can change that. It would be like changing which day is today, or which place is here. You can have some effect on the reference of “the @-people”, though; maybe that’s what you meant.
The conditionals, as Mark formulates them, are indicative conditionals, so we ought to be able to decide on their acceptability by the Ramsey test. So,
(i) If you do have a child, then you will have done something impermissible.
(ii) If you don’t, you will have done something permissible.
(This is the unhappy Cain case, remember; also, we will take them as intended, so that (ii) isn’t made trivial by the obvious fact that Adam and Eve will do plenty of permissible things in their lives.)
These seem to me both to come out as acceptable, according to (AU2). That is, if I adopt (AU2), then I find that
pr(Eve does something impermissible | Eve has unhappy Cain) >> .5
pr(Eve does something permissible | Eve declines to have unhappy Cain) >> .5
Campbell, you may have been addressing a slightly different question.
Jamie kindly said,
“No, nothing can change that. It would be like changing which day is today, or which place is here. You can have some effect on the reference of “the @-people”, though; maybe that’s what you meant.”
If I said, “Yeah, that is what I meant,” it would be of indeterminate truth value. Now that Jamie has clearly demarcated the two things I could have meant to say, it is obvious to me that I intended the one that makes sense. But I suspect that the actual thought I had in mind was a bit more muddled. My general idea was to contrast uses of ‘@’ in ‘@-people’ that were more like our ‘here’ and ‘now’ with uses that were more like names.
Mark, you’re right: I said “impermissible” when I meant “permissible” (forgot which case I was discussing).
Jamie, I suspect you’re right too: I was addressing a different question. I was thinking of the conditionals as subjunctive: “If Adam and Eve were to have a child, they would do something impermissible.” And I was thinking of this conditional as meaning, roughly, that the nearest possible world at which they have a child is worse than some other world available to them where they don’t have a child. So, on AU1, the truth of the conditional depends on which world is actual, because the goodness of possible worlds depends on which world is actual.
I’ve discussed quite a few examples with my teachers at university regarding utilitarianism, and I’m yet to come to a conclusion.