What is utilitarianism? That may seem like an easy question — the kind of thing one might include on a quiz in one’s Intro to Ethics class with the expectation that every minimally attentive student will get it right. However, I think that the question is trickier than that. There are two answers commonly given — two definitions of utilitarianism — but these, at least on their face, are not equivalent. This raises a puzzle about utilitarianism: in particular, about its relation to another widely discussed ethical doctrine, consequentialism. The puzzle can be resolved. But the resolution has important implications for our understanding of consequentialism. In this post, I’ll set out the puzzle. Later, I’ll explain my favoured solution.

[UPDATE: I had initially intended to give my solution in a later post. But I found that I couldn’t resist letting the cat out of the bag (pulling the rabbit out of the hat?) in response to comments on this post. So there will be no later post. If you’re interested in seeing the cat (rabbit?), see the discussion below.]

Let’s begin with the two definitions of utilitarianism advertised above. Here’s the first:

(U1) An action X is permissible if and only if the total utility in the outcome of X is at least as great as that in every alternative.

There may be some minor quibbles about the wording of (U1). But, setting those aside, I hope that most would agree that this is the standard definition of utilitarianism. (No undergraduate should have points deducted for writing this as a definition of utilitarianism in her Intro-to-Ethics quiz.)

The second definition is suggested by a claim that is commonly made about utilitarianism. It’s often said that utilitarianism can usefully be decomposed into two parts. (Sometimes it’s three parts; see Amartya Sen’s numerous writings on the subject. But I’ll stick to two here, to keep things simple.) The first part is a doctrine concerning the relation between “the right” and “the good”: roughly, that the right consists in maximising the good. And the second part is a doctrine about the good itself: roughly, that the good consists in total utility. The former doctrine is usually called consequentialism (the latter doesn’t really have a name). Often, the purpose of “decomposing” utilitarianism in this way is to show that one can reject one part — usually the second part — without rejecting the whole thing. Consequentialists are apt to say that the trouble with utilitarianism is not that it misunderstands the relation between the right and the good, but rather that it has a naive account of the good.

This suggests that utilitarianism may be defined as the conjunction of the following two claims:

(U2a) An action is permissible if and only if its outcome is at least as good as that of every alternative.

(U2b) One outcome X is at least as good as another Y if and only if the total utility in X is at least as great as that in Y.

On this second definition — call it (U2) — utilitarianism has two parts, as given by the two conjuncts, (U2a) and (U2b). It is in virtue of the first part that utilitarianism counts as a consequentialist view.

Now, here’s the puzzle: although both definitions, (U1) and (U2), are commonly given, they appear not to be equivalent; in particular, it seems that one might consistently accept (U1) while denying (U2). Suppose, for example, that someone — call him Eugene — says this: “I don’t believe in any such thing as ‘the goodness of outcomes’. I deny that any outcomes are such that one is at least as good as the other. Nonetheless, I do believe in permissibility, and I think that an action is permissible just in case it maximises total utility.” This seems to imply that Eugene is committed to accepting (U1) yet rejecting (U2); if what he said were true, then both conjuncts in (U2) would be false. So, is Eugene a utilitarian or not? If utilitarianism is defined by (U1), then he is. But if it’s defined by (U2) then he isn’t. But neither answer is very satisfying.
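
To make the logical relations vivid, here is a rough formalization (the notation is mine: Perm(X) for “X is permissible”, Alt(X) for the set of X’s alternatives, O(X) for X’s outcome, u for total utility, and ⪰ for the at-least-as-good-as relation on outcomes):

\[ \text{(U1)} \quad \mathrm{Perm}(X) \iff \forall Y \in \mathrm{Alt}(X):\ u(O(X)) \ge u(O(Y)) \]
\[ \text{(U2a)} \quad \mathrm{Perm}(X) \iff \forall Y \in \mathrm{Alt}(X):\ O(X) \succeq O(Y) \]
\[ \text{(U2b)} \quad A \succeq B \iff u(A) \ge u(B) \]

Substituting (U2b) into (U2a) yields (U1), so the conjunction (U2) entails (U1). The converse is the sticking point: (U1) nowhere mentions the betterness relation, and Eugene, in these terms, accepts (U1) while denying that any such relation exists.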

Suppose we settle on (U1) as our definition of utilitarianism. Then we should say that Eugene is a utilitarian. But this is odd, because Eugene is not a consequentialist; he rejects (U2a). And this contradicts the received wisdom that utilitarianism is a form of consequentialism — i.e., that utilitarianism is a particular instance of the broader category of consequentialist views. It would be misleading to say that consequentialism is a part of utilitarianism, as we commonly do say. Alternatively, suppose we settle on (U2). Then we should say that Eugene is not a utilitarian. But, again, that’s odd. After all, Eugene does believe that we ought to maximise total utility; and normally, I think, we would regard that as in itself sufficient to make him a utilitarian.

So, that’s the puzzle. Perhaps one tempting solution is the following. We might go with (U1) and concede that, strictly speaking, consequentialism is not part of utilitarianism; rather, it’s part of some utilitarian views, but not of others. We might distinguish two kinds of utilitarianism, one being consequentialist and the other nonconsequentialist. But we might add that the former kind is by far the more common, or more plausible. Strictly speaking, Eugene is a utilitarian; but he endorses a non-standard brand of utilitarianism, which is so unusual or implausible as not to be worth considering.

However, I’m not inclined to favour that solution. It seems to sacrifice too much of our common understanding of utilitarianism. To countenance the possibility of nonconsequentialist utilitarianism — even the bare conceptual possibility — would be very revisionary. A solution that is so revisionary should be adopted only as a last resort. But it need not come to that, since there is, I believe, a better solution.

I’ll save my favoured solution for another time. (To briefly anticipate: the basic strategy is to show that, despite appearances, (U1) and (U2) are equivalent, after all.) For now, I’m interested to know what others make of all this. Do others find it puzzling too?

49 Replies to “A Puzzle about Utilitarianism”

  1. Campbell – wouldn’t a utilitarian like Eugene just deny that consequentialism is a view about the connection between rightness and goodness? Or, at least, that there’s a way to think about consequentialism according to which it does not involve goodness? (I don’t know what that other way might be – maybe just the view that the rightness of an act depends on *some features* of its outcome and the outcomes of its alternatives, not necessarily their values.)
    I think there are other problems with either U1 or U2 as a definition of utilitarianism. There are versions of utilitarianism that deny both, such as Fred Feldman’s “World Utilitarianism.”

  2. Ben,
    You make an interesting suggestion. I think that, in a certain way, it’s quite similar to the solution that I have in mind. Consider a third definition. Let (U3) be the conjunction of the following:
    (U3a) There’s something that ought to be maximised.
    (U3b) If there’s something that ought to be maximised, then total utility ought to be maximised.
    I take it that (U3) is equivalent to (U1), which says, in other words, that total utility ought to be maximised. Now, if we said that consequentialism is (U3a), we could hang on to the common idea that utilitarianism is a sub-category of consequentialism. The problem, however, is that (U3a), on its face, does not seem to capture what philosophers tend to mean by “consequentialism”. In particular, whereas consequentialism is normally understood as a view about goodness, (U3a) does not mention goodness at all.
    So, here’s my solution: despite appearances, (U3a) does make a claim about goodness. To see this, we need to see more clearly what “goodness” means. My proposal, put very roughly, is that “goodness” means “that which ought to be maximised” (or, less succinctly, that which it is impermissible to fail to maximise). Thus, (U3a) says, in other words, something like this: there is such a thing as goodness. Without going into details, I think it can be shown that, given this definition of “goodness”, (U3) is equivalent to (U2), and hence that (U2) is equivalent to (U1), after all.
    So, what should we say about Eugene? I think we should say that he’s conceptually confused: in particular, that he doesn’t understand what “goodness” means. He claims to believe both that (i) there’s something that ought to be maximised (i.e. total utility), and (ii) that there’s no such thing as goodness. But the conjunction of (i) and (ii) is a contradiction.
    Anyway, that’s the rough idea.
    As regards Feldman’s World Utilitarianism, I intended (U1) to be a generic statement of utilitarianism, covering all the various precisifications of the view that have been worked out within the utilitarian camp. In particular, I intended the notion of an outcome to be construed rather broadly, so that it would be compatible with interpreting “the outcome of X” as “the world that would obtain if X were performed”, for example. Do you think that some such interpretation would bring Feldman’s view within the scope of (U1)?

  3. Campbell,
    If I’m following, you hold that consequentialism (roughly) equals “(U3a) There’s something that ought to be maximised,” and then you hold that goodness is just whatever ought to be maximized. But what about these two views:
    (E) Evil and badness ought to be maximized.
    It seems like (E) is a coherent view, but on your criteria, (E) would be consequentialist (which strikes me as counterintuitive, but maybe I could be talked out of that) and (more counterintuitively) evil and badness are identical to goodness!
    Then consider
    (M) Morally right actions ought to be maximized.
    (M) not only seems coherent, but seems to be the view of all moral theories (or at least those that don’t include options, where perhaps options aren’t themselves built into rightness…let’s avoid that complication) including (some forms of) deontology. This is why, as I’ve always understood it, consequentialism must be construed as (roughly) the view that it’s obligatory to produce non-moral goodness, such as utility or welfare or whatever. These are non-moral in the sense that they themselves aren’t morally good (in the way a person who produces good consequences, or is motivated by duty, or whatever, is morally good); they are nonetheless morally relevant in the sense that they are what makes right acts right and wrong acts wrong. That is, everyone agrees that you should produce moral goodness, so the distinctive claim of consequentialism is that the right action is the one that produces non-moral (but, obviously, morally relevant) goodness. If that’s correct, then that makes it tough to eliminate “goodness” from a characterization of consequentialism.

  4. Campbell,
    I agree that your solution solves the puzzle, but I’m still skeptical. I wouldn’t be happy saying that you can’t believe in goodness without being a maximizer. But if the goal is to show that Eugene is mistaken, you can get that with a weaker claim; as long as there is a conceptual connection between goodness and reasons for action, Eugene is contradicting himself if he thinks there are reasons for action and denies the existence of goodness.
    Still, it seems to me that the puzzle you present is a puzzle that arises just from the ways philosophers use terms of art like “utilitarianism” and “consequentialism.” As I see it, it’s to a large degree arbitrary whether a particular view should count as a sort of consequentialism or not. There’s definitely some value to sorting out the way we use those terms, but I wouldn’t want to base any conclusions about goodness on that sorting out.
    As for Feldman, I don’t think his utilitarianism is compatible with U1, but I could be wrong. Here’s how he puts his view: “An act is right for an agent at a time iff he performs that act in an optimific life history world then open to him.” (From his “World Utilitarianism,” reprinted in Utilitarianism, Hedonism and Desert.) There’s no comparison of the consequences of a particular act with the consequences of its alternatives. So the view can’t be seen as an instance of the U1 schema, which uses a concept of maximizing that appeals to acts and alternatives. The view is motivated by the problems about complex actions that people like Castaneda and Bergstrom worried about a lot in the ’70s.

  5. Josh – what about the view that we ought to maximize virtue? Does that count as non-consequentialist on your view?

  6. Ben,
    “We ought to maximize virtue” might be taken in two ways, I think.
    (V1) We ought to maximize virtue, and virtue = moral goodness.
    (V2) We ought to maximize virtue, and virtue = the actualization of our potential (or something close to that)
    (V1), I would argue, is neither uniquely consequentialist nor non-consequentialist, since Kant, Mill, Aristotle, and just about every other mainstream moral theorist have subscribed to it. (V2), by contrast, is a consequentialist view, because the virtues are here being defined non-morally, that is, independently of reference to moral terms, and our obligation is to produce these non-moral (but morally relevant) goods. So this kind of view (I’m thinking of perfectionism here) would be consequentialist.

  7. Josh,
    I would say that (E) is incoherent. On my proposal, “badness” means something like “that which ought to be minimised”. But it surely cannot be the case that we ought to maximise that which ought to be minimised. Consulting my own linguistic intuitions, this seems correct: (E) is indeed incoherent.
    Things are a little more tricky with (M). If you mean by (M) that we ought to maximise the number of right actions, then I’d say that it’s both false and inconsistent with at least some common moral views — utilitarianism, for example. Utilitarianism instructs us to maximise total utility, but there are conceivable circumstances in which maximising utility will conflict with maximising the number of utility-maximising actions. In such circumstances, utilitarianism says that we ought not to maximise the number of right actions.
    Perhaps more to the point, though, it may be helpful for me to explain (U3a) more precisely. What I mean by (U3a) is this: there exists some ordering of outcomes R such that, for any action X, X is permissible iff R ranks the outcome of X at least as highly as that of every alternative. If there is such an ordering R, then R represents that which ought to be maximised — i.e., it represents goodness. There are some moral views that imply that there is no such ordering of outcomes; on these views, there is no such thing as the goodness of outcomes. (Actually, strictly speaking, these views imply that there’s no neutral goodness. I think there’s a broader notion of goodness, which encompasses also various kinds of relative goodness. I’ve argued elsewhere that every moral theory implies the existence of goodness in this broader sense.)
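    To be fully explicit, the proposal in symbols (a rough sketch, reusing the notation from the post, with R ranging over orderings of outcomes):
    \[ \exists R\ \forall X:\ \mathrm{Perm}(X) \iff \big( \forall Y \in \mathrm{Alt}(X):\ O(X)\ R\ O(Y) \big) \]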

  8. Ben,
    You write:

    Still, it seems to me that the puzzle you present is a puzzle that arises just from the ways philosophers use terms of art like “utilitarianism” and “consequentialism.” As I see it, it’s to a large degree arbitrary whether a particular view should count as a sort of consequentialism or not. There’s definitely some value to sorting out the way we use those terms, but I wouldn’t want to base any conclusions about goodness on that sorting out.

    I don’t think we’re really in disagreement here. The view I’m defending is a view about what “goodness” means: in particular, what it means in moral philosophy. (So perhaps I’m thinking of “goodness” as a term of art too — not sure.) In order to work out what philosophers mean by “goodness”, we need to pay attention to the things that they say about it. And they very often say each of the following: (i) utilitarianism is the view that we ought to maximise total utility; (ii) utilitarianism implies consequentialism; and (iii) consequentialism is the view that we ought to maximise goodness. As I’ve argued, this suggests that, for them, “goodness” means “that which ought to be maximised.” But I’m not trying to draw any substantive conclusions about the nature of goodness.
    Regarding Feldman’s view, I assume that there’s a typo somewhere in the quotation you gave, but I notice that he uses the expression “optimific life history”. Doesn’t this entail some kind of comparison with alternatives? My understanding is that to be optimific is to be at least as good as the alternatives.

  9. Ben,
    I forgot to comment on another of the points in your last post. You write:

    I wouldn’t be happy saying that you can’t believe in goodness without being a maximizer.

    I’m not entirely happy about this either. And I’m sure that a number of philosophers would be most unhappy about it: namely, those who endorse some variety of satisficing view. On a satisficing view, it’s sometimes permissible to fail to maximise goodness. But, on my proposal, goodness just is that which it is impermissible to fail to maximise; hence, satisficing simply makes no sense. Granted, this is a troubling implication of my proposal. But it may be a bullet that I’m prepared to bite. I’m not all that averse to saying that, when satisficers object to the requirement of maximising goodness, they’ve got themselves into a conceptual muddle regarding goodness. (I’m also hopeful that satisficing views can be recast as maximising views without significant cost.)

  10. Campbell – right, Feldman’s view involves comparing alternative life histories – *not* alternative act tokens. What does “alternative” mean in U1?

  11. Campbell,
    You say: “I would say that (E) is incoherent. On my proposal, “badness” means something like “that which ought to be minimised”. But it surely cannot be the case that we ought to maximise that which ought to be minimised. Consulting my own linguistic intuitions, this seems correct: (E) is indeed incoherent.”
    Of course, I also meant to suggest that (E) is inconsistent with your view — that was the point! So far, then, whether that requires abandoning your view or denying the coherence of (E) is merely a matter of consulting our intuitions (presumably independently of any related theoretical commitments, if we’re not begging any questions), and it seems like our intuitions clash on this one. Is there any further, non-circular argument that one might point to here, in order to arbitrate between the clashing intuitions?
    You also write, “Things are a little more tricky with (M). If you mean by (M) that we ought to maximise the number of right actions, then I’d say that it’s both false and inconsistent with at least some common moral views — utilitarianism, for example.” Let’s grant that (M) is inconsistent with utilitarianism. Let’s also grant that it’s false. In fact, and in contrast to what I said originally, let’s assume that (M) is inconsistent with most moral theories, and even all mainstream moral theories. (I add this because you interpreted (M) agent-neutrally, when I was intending it agent-relatively, but we can put that aside under the foregoing assumptions and just treat it agent-neutrally — I wonder now if that might be built into your U3a. Anyway…) Even granting all those assumptions, surely (M) is held by some non-consequentialist views, e.g., a view according to which right actions = ones that comport with the keeping of promises (under the non-consequentialist recognition that this might make for an otherwise [non-morally] worse state of affairs, with little happiness or welfare or whatever). If so, then if (M) is an instance of some general characterization of consequentialism, that characterization would be incomplete. (My diagnosis above for why it’s incomplete is that it doesn’t make specific reference to non-moral goodness, but there may be other diagnoses.)

  12. Campbell, I think it’s true that the Feldman view is not compatible with (U1). Let me put them both in this comment.
    (U1) An action X is permissible if and only if the total utility in the outcome of X is at least as great as that in every alternative.
    (Feldman) An act is right for an agent at a time iff he performs that act in an optimific life history world then open to him.
    You can sometimes choose an act whose consequences are at least as good as any alternative, even though one such consequence will be that in the future you will perform a second act whose consequences are worse than one of its alternatives. The first act will then be permissible according to (U1), but the second will not be. However, the second act will be right, according to (Feldman).
    Similarly, (U1) is not compatible with your typical indirect utilitarianism. You should say that (U1) captures all direct versions of utilitarianism.

  13. Campbell,
    Are the following your views?
    (1) Consequentialism: Agents ought to maximize goodness.
    (2) “Goodness” just means “that which agents ought to maximize.”
    In that case, consequentialism is just the trivial position that agents ought to maximize that which agents ought to maximize. Do you accept, then, that consequentialism is an entirely trivial position?

  14. Doug,
    That’s a good question. Here’s my answer: I accept that consequentialism is the position that agents ought to maximise that which agents ought to maximise (call this position CC, for Campbell’s Consequentialism); but I deny that this is a trivial position. That is, although I’m inclined to say that CC is true — indeed, necessarily true — I deny that it’s true a priori. A person might believe that CC is false without thereby committing any conceptual error; in particular, this would be the case if the person believed that there is nothing that agents ought to maximise. As I understand it, CC is equivalent to the claim that there is something that agents ought to maximise. But this claim is not trivial.
    An analogy may be helpful. Consider this claim:
    (1) The steepest hill in Bowling Green is at least as steep as every hill in Bowling Green.
    Perhaps this looks trivially true. But most residents of Bowling Green would say, I believe, that it’s false, because they know that there are no hills in Bowling Green and, hence, that there’s no steepest hill. On one influential analysis of definite descriptions (due to Russell), (1) would be analysed roughly as follows: there exists some X such that X is the uniquely steepest hill in Bowling Green and, for all Y, if Y is a hill in Bowling Green, then Y is no steeper than X. And this is not trivial; rather, it’s a posteriori and false.
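    To display the Russellian analysis (my notation: H(x) for “x is a hill in Bowling Green”, s(x) for x’s steepness):
    \[ \exists x \Big( H(x) \wedge \forall y \big( H(y) \wedge y \neq x \to s(x) > s(y) \big) \wedge \forall y \big( H(y) \to s(x) \ge s(y) \big) \Big) \]
    The first two conjuncts unpack the description “the steepest hill in Bowling Green”; the third is the predication. With no hills to witness the existential quantifier, the whole claim comes out false rather than trivially true.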
    At any rate, when I define consequentialism as the view that agents ought to maximise goodness, I do not intend this to be understood as:
    (2) For any property F-ness, if F-ness is goodness, then agents ought to maximise F-ness.
    Rather, I intend it to be understood as:
    (3) There exists some property F-ness such that F-ness is goodness and agents ought to maximise F-ness.
    The problem with (2) as a definition of consequentialism is that those who deny the existence of goodness must accept (2), but I don’t think we want to call such people consequentialists. I believe that, although (2) is trivial, (3) is not. And when I talk about consequentialism I mean by this (3).

  15. Jamie and Ben,
    I’m still not seeing the incompatibility between (U1) and Feldman’s view. I confess that, when I first read the quotation from Feldman, I couldn’t see how it was a grammatical sentence (hence, my comment to Ben about there being a typo). But I’ve sorted that out now.
    Rather than tackling Jamie’s example, let me instead ask a question of clarification. What does Feldman mean by “an optimific life history world then open to him”? I would guess that it’s something like this: W is an optimific life history world open to an agent S at a time T iff (i) W is a life history world open to S at T, and (ii) for any V, if V is a life history world open to S at T, then W contains at least as great total utility as V. Is that roughly right? If so, then it seems that (U1) could be seen as compatible with Feldman’s view by interpreting “outcomes” and “alternatives” in a suitable way. Specifically, we might interpret “the outcome of X” as the life history world in which X is performed, and we might interpret “every alternative” as every life history world that is open to the agent at the time at which X is performed. Would that do the trick?
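    In symbols, my guess is this (writing Open(W, S, T) for “W is a life history world open to S at T” and u(W) for W’s total utility; the labels are mine, not Feldman’s):
    \[ \mathrm{OptLHW}(W, S, T) \iff \mathrm{Open}(W, S, T) \wedge \forall V \big( \mathrm{Open}(V, S, T) \to u(W) \ge u(V) \big) \]
    where OptLHW(W, S, T) abbreviates “W is an optimific life history world open to S at T”.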

  16. Campbell,
    I take it that the ought in (3) is not merely a prima facie ought but an all-things-considered ought; otherwise, a moral pluralist (which is a kind of non-consequentialist) could accept (3). Still, couldn’t someone accept (3) and be a non-consequentialist? For example, suppose someone a) accepts (3), b) holds that caging white tigers is good and is, in fact, the only good, c) holds that one ought to take care of one’s children when all white tigers have already been caged. I know that this is a silly example, but I want to suggest that what’s essential to consequentialism is not merely the view that agents morally ought to maximize goodness, but the view that this is the only thing that agents morally ought to do. (3) seems to miss this essential feature of consequentialism.

  17. Doug,
    I agree that “what’s essential to consequentialism is not merely the view that agents morally ought to maximize goodness, but the view that this is the only thing that agents morally ought to do.” So, when I write “agents ought to maximise goodness,” I mean that agents act rightly (i.e. as they ought to act) if and only if their actions maximise goodness. However, for the reasons given above, I deny that someone could coherently hold the beliefs described in your example. If a person holds that “one ought to take care of one’s children when all white tigers have already been caged”, then she cannot also hold that “caging white tigers is … the only good.” She must hold that taking care of one’s children is good too. Don’t you think it would be rather strange for someone to say “caring for one’s children doesn’t do any good, but one ought to do it anyway”?

  18. Campbell,
    You say, “When I write ‘agents ought to maximise goodness’, I mean that agents act rightly (i.e. as they ought to act) if and only if their actions maximise goodness.” Okay, that clears things up for me. You also ask, “Don’t you think it would be rather strange for someone to say ‘caring for one’s children doesn’t do any good, but one ought to do it anyway’?” Yes, but then I’m a consequentialist. I’m not sure that a non-consequentialist should find it strange at all.

  19. Just to add a thought to Doug’s comment, to say ‘caring for one’s children doesn’t do any good, but one ought to do it anyway’ does sound strange to me (a non-consequentialist). But it does not sound *incoherent*. Rather, the theory of value sounds wrong, i.e., caring for one’s children does do good. So the strangeness isn’t the right kind of strangeness. What sounds less strange (and, still, coherent) to a non-consequentialist like myself is to say that ‘caring for one’s children would be the action by which, if performed, the agent would maximize goodness, but it would still be wrong for the agent to do so in this case, because doing so means X’ (where X = would disrespect humanity [presumably because in this case caring for one’s children would conflict with some more significant respect-for-humanity action] or would require breaking a promise, or some such).

  20. Campbell,
    Well, it is now unclear to me what Feldman means. I thought I knew, but your last posting (on the subject) led me to read the Feldman criterion more carefully and now I’m confused.
    (Feldman) An act is right for an agent at a time iff he performs that act in an optimific life history world then open to him.
    Suppose I am today (Dec. 6th) thinking about a choice I will face tomorrow (Dec. 7th). When I judge whether the acts I might perform on Dec. 7th are right, am I supposed to be thinking about whether they are right-for-me-now, or whether they are right-for-me-Dec. 7th?
    Ben, can you help with the interpretation?

  21. Jamie,
    I don’t think Feldman wants to say that an act could be right relative to one time but not right relative to another time. If you’re evaluating an act you could perform tomorrow, you look at what life history worlds will be available to you at the time of the act – not the worlds available to you now.
    Campbell,
    I don’t think Feldman’s view can be seen as a version of U1, but I’m still thinking of a way to explain why. I’ll post something soon.

  22. Campbell,
    Here’s U1 again:
    (U1) An action X is permissible if and only if the total utility in the outcome of X is at least as great as that in every alternative.
    I would have thought U1 means the same as this:
    (U1*) An action X is permissible if and only if the total utility in the outcome of X is at least as great as that in each of X’s alternatives.
    If so, then Feldman’s view is not a version of U1. If you wanted to rewrite Feldman’s view so that alternatives are mentioned explicitly, I guess it might look something like this:
    (Feldman*) An action X is right for an agent at a time t iff he performs X in a life history world W open to him at t such that no alternative to W available to the agent as of t has greater value than W.
    (Feldman*) does not entail (U1*), and in fact they are incompatible, as Jamie’s example shows.
    Maybe you don’t want to read U1 as U1*, but instead as a less specific view that is entailed by (U1*) and (Feldman*). But then you have to say exactly how we are to read U1, because I take it the standard reading is (U1*).

  23. Ben and Jamie,
    Ben writes: “(Feldman*) does not entail (U1*), and in fact they are incompatible, as Jamie’s example shows.” But I don’t understand how Jamie’s example shows this. I suspect that the problem is that I don’t understand what it is for a life history world to be “open” or “available” to an agent, in Feldman’s view.
    Here’s Jamie’s example:

    You can sometimes choose an act whose consequences are at least as good as any alternative, even though one such consequence will be that in the future you will perform a second act whose consequences are worse than one of its alternatives. The first act will then be permissible according to (U1), but the second will not be. However, the second act will be right, according to (Feldman).

    Let’s say that the outcome of an act is the life history world in which the act is performed. And let X be the second act in Jamie’s example. By hypothesis, there’s some act Y such that (i) Y is an alternative of X, and (ii) the life history world in which Y is performed contains greater total utility than that in which X is performed. Thus, if (Feldman) implies that X is right, as Jamie says it does, then the life history world in which Y is performed is not open to the agent at the time at which X is performed; otherwise, X would not be performed in an optimific life history world. But why is that? This is the part that I’m not seeing.

  24. Doug and Josh,
    If someone were to say “caring for one’s children doesn’t do any good, but one ought to do it anyway”, that would strike me as strange, not merely in the sense of being implausible, but in the sense of being incoherent. Perhaps, then, a principle of charity would tell me to interpret the person as using the word “good” in a way that I’m unfamiliar with; what she means by “good” differs from what I mean by “good”, so that we’re talking past each other to some extent. But this leads me to wonder what on earth she does mean by “good”.
    Let goodness* be whatever property is picked out by this person’s use of the word “good”. Now, goodness* seems quite mysterious to me — quite unlike anything that I’d associate with “good”. Presumably, this person thinks that there are some moral considerations that speak in favour of caring for one’s children; otherwise she wouldn’t believe that one ought to care for one’s children. But, if we are to interpret her consistently, then we must suppose that these considerations are irrelevant to the question of whether caring for one’s children is good* (i.e. whether it instantiates goodness*). But what could it be about goodness* that makes this the case? I find it hard to see how that question might be answered in such a way that her use of the word “good” does not turn out to be quite arbitrary and idiosyncratic.
    At any rate, those are my linguistic intuitions. Of course, it might be that others don’t share them. (Josh has already indicated that he doesn’t.) But this suggests that a large part of the debate between self-proclaimed consequentialists and non-consequentialists amounts to no more than a pseudo-debate, since the opponents are speaking past each other, attaching different meanings to the word “good”. In that case, my view might be seen as a proposal for terminological legislation: from now on, let’s all use “goodness” to mean “that which ought to be maximised”. That way, we would at least all be on the same page — and, as an added bonus, we could make sense of the widely held view that utilitarianism implies consequentialism.

  25. Campbell,
    I think that’s the right way to look at it, in general. I’m still not sure, though, that the main debate has been a pseudo-debate. Since I’m not sure how to make sense of the ‘caring for one’s children’ example (because of its odd theory of value), let me use the promise-keeping example. A non-consequentialist might say ‘P should keep her promise in case C, even though it doesn’t maximize goodness*.’ The (non-Campbellian) consequentialist might respond in one of two ways: ‘If keeping the promise doesn’t maximize goodness*, then (contrary to the non-consequentialist) P shouldn’t keep her promise;’ or (2) ‘If P should keep her promise, then (contrary to the non-consequentialist) this must mean that keeping her promise maximizes goodness*.’
    So the issue, then, is what goodness* is here. As you put it, Campbell, “But, if we are to interpret [the non-consequentialist] consistently, then we must suppose that [the moral] considerations [in favor of having P keep her promise] are irrelevant to the question of whether [keeping her promise] is good* (i.e. whether it instantiates goodness*). But what could it be about goodness* that makes this the case? I find it hard to see how that question might be answered in such a way that her use of the word ‘good’ does not turn out to be quite arbitrary and idiosyncratic.”
    I think there’s actually a non-arbitrary and non-idiosyncratic way of answering this question: goodness* refers to specifically non-moral (but morally relevant) goodness, such as happiness or welfare or whatever. So, the consequentialist says that we should maximize goodness*, and the non-consequentialist denies this.
    I think this is neither arbitrary nor idiosyncratic, since it seems to capture the traditional debates between Mill and Ross and Kantians and contemporary consequentialists, among others. And it’s not a pseudo-debate, since the parties are all referring to goodness*. You might be right, though, that this can’t solve the puzzle with which this post began. If that’s so, then rather than motivating your conception of goodness from a perspective of terminological legislation, you might use that puzzle itself as a way of motivating a shift in the terms of the debate towards your construal (if I’m right that the traditional debate is over maximizing goodness* rather than all-inclusive goodness).

  26. Campbell,

    Let’s say that the outcome of an act is the life history world in which the act is performed. And let X be the second act in Jamie’s example. By hypothesis, there’s some act Y such that (i) Y is an alternative of X, and (ii) the life history world in which Y is performed contains greater total utility than that in which X is performed. Thus, if (Feldman) implies that X is right, as Jamie says it does, then the life history world in which Y is performed is not open to the agent at the time at which X is performed; otherwise, X would not be performed in an optimific life history world. But why is that? This is the part that I’m not seeing.

    My thought was that the world in which Y occurs is not open to the agent at the earlier time, the time of the first act. This is what I thought Feldman intended. But Ben’s gloss suggests that it is not what Feldman intended. So I am at a loss. I can’t see why Feldman puts his view in this strange way.
    I originally thought that the point was this. When the agent, at the earlier time, looks forward to the time at which he will choose the second act, he will say to himself, “That will be the right choice”, even though he can see that there will be an alternative with more utility in its consequences. This is because there is no alternative available to him ‘now’, at the time of his first choice, that will contain more utility than the act that brings about the world in which he does what you are calling ‘X’.
    I bet this is hopelessly unclear. If there’s still any interest I will give a non-schematic example and start again.

  27. Josh,

    The (non-Campbellian) consequentialist might respond in one of two ways: [(1)] ‘If keeping the promise doesn’t maximize goodness*, then (contrary to the non-consequentialist) P shouldn’t keep her promise;’ or (2) ‘If P should keep her promise, then (contrary to the non-consequentialist) this must mean that keeping her promise maximizes goodness*.’

    Hm, I think that’s only one way, since (1) and (2) are logically equivalent (they are contrapositives).

  28. Josh,

    I think there’s actually a non-arbitrary and non-idiosyncratic way of answering this question: goodness* refers to specifically non-moral (but morally relevant) goodness, such as happiness or welfare or whatever. So, the consequentialist says that we should maximize goodness*, and the non-consequentialist denies this.

    I think it’s important to distinguish between what is good for a person and what is good. It is certainly quite coherent to say that there are other important moral considerations than what is good for persons. But that might just be to say that more things go into goodness than merely what is good for persons.
    What is not so clear, to me anyway, is that there is such a thing as goodness (simpliciter, rather than goodness for a person) that is independent of what one ought to do. (In fact, I am understating my view. It seems pretty clear to me that there is no such independent sense of ‘good’.)

  29. Jamie,
    I confess I’m not getting it. Let me give an example. I’m not sure if this is the sort of example you were thinking of, but in any case it’s the sort of example that partly motivates Feldman’s view.
    Doctor needs to perform heart surgery to save Patient. Heart surgery is complicated. First, you have to cut open the patient (a1). Then you have to break some ribs (a2). Then you have to do some other things. Consider (a1) – what good does (a1) bring about? None, by itself. In fact, just considered by itself, it brings about some pain to Patient when he wakes up later. Of course, there’s a more complicated act, consisting of (a1), (a2), and lots of other little acts, that brings about good consequences. That more complicated act is obligatory. So its parts must be obligatory too. The problem is that some of the parts, like a1 and a2, don’t maximize utility – they are just necessary parts of a utility-maximizing act. If I’m remembering things correctly, that’s the sort of problem Castaneda and others pointed out for U1. Feldman’s view entails that a1 is right, while U1 (at least in its traditional formulation) entails that a1 is wrong.
    Campbell, at this point you’ll say: “Let’s say that the outcome of an act is the life history world in which the act is performed.” If we could say that, then I guess we could say, in the heart surgery case, that the good things that happen as a result of the whole act of surgery are part of the outcome of a1. And those good things wouldn’t happen if any alternative to a1 were performed. So a1 would maximize utility after all.
    I have to think more about this. There’s at least one problem with your suggestion, which is the assumption that there is a unique life history world in which a1 is performed. There isn’t. There’s one where a1 is performed, then the rest of the surgery is performed; there’s also one where a1 is performed but Patient is left to bleed to death. Maybe there’s a way around this by dropping the uniqueness condition in U1.
    (I’m not sure why it matters whether all views held by people who consider themselves consequentialists entail some more general view like U1. Isn’t the important thing whether any of those views are right? But it’s my own fault, since I started the argument by bringing up Feldman.)
    (Chris Heathwood – if I’m getting Feldman’s view wrong in some way, please correct me – I haven’t thought about this stuff in a while.)

  30. Thanks, Ben. That’s helpful. I think I finally get it! In order to assimilate Feldman’s view to (U1) we would need to identify the outcome of an action with the best available LHW in which the action is performed (if there’s no uniquely best, we can arbitrarily choose among the tied-best; it’ll make no difference). Suppose, for example, that there are only two actions, A1 and A2, that an agent S might perform at a time T. If we assume — as seems plausible — that a LHW is available to S at T only if either A1 or A2 is performed in that LHW, then the following holds: A1 is performed in some optimific LHW open to S at T iff the best available LHW in which A1 is performed is at least as good as the best available LHW in which A2 is performed.
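    Schematically (my notation: W(A) for the set of LHWs available to S at T in which A is performed, u for total utility), the identification makes (U1) come to this:
    \[ \mathrm{Perm}(A_1) \iff \max_{W \in \mathcal{W}(A_1)} u(W) \ \ge\ \max_{W \in \mathcal{W}(A_2)} u(W) \]
    Since every available LHW lies in one of the two sets, the right-hand side holds just in case A1 is performed in some optimific available LHW, which is, on my reading, Feldman’s condition.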
    Of course, that may not be the most perspicuous way in which to represent Feldman’s view. And I’m not really hell-bent on massaging his account to fit the (U1) mould. At this stage, I’m just intrigued by the view.

  31. But surely (a1) does bring about some good. (I don’t know whether it brings about some good by itself — though I am inclined to think that it does, I am not exactly sure what it means.) The simple way to see this is to notice that if Doctor performs (a1), then Doctor will perform (a2) (and so on). We know this, and so does Doctor. And this fact enters into our judgment of how good (a1) is. Compare: if we knew that Doctor would not perform (a2) even if she performed (a1), then of course (a1) wouldn’t bring about any good.
    It helps, but isn’t necessary (I think!) that (a1) brings it about that Doctor performs (a2).
    Here’s what I thought the interesting sort of case was.

    The Newlyweds decide to have a child. They know that if they become parents, they will then be very partial to their own child. And from time to time, they will unfairly favor their own child. Sometimes, they know, they will do some things that are not best, and do not have the best consequences, because they will be so devoted to their own child. Still, on the whole, they judge that having the child will be better than not having the child.
    Later, on one of those occasions when they act too partially toward their own child, what they do will be impermissible at the time. But it will be permissible relative to the earlier time. It will be an act that they perform in the best world open to them at the earlier time, but not an act that they perform in the best world open to them at the later time.

    I had thought that Feldman’s criterion might have something unusual to say about that sort of case. Maybe not.

  32. Let me suggest an example, borrowed from Frank Jackson, that might reveal a distinctive, and perhaps problematic, implication of Feldman’s view.

    Professor Procrastinate is asked by the editor of a journal to referee a paper that has been submitted. He knows that he is particularly well suited to referee the paper in question. But he also knows that, due to his tendency to procrastinate, if he were to agree to referee the paper, he would never get around to doing so. Rather, he would keep putting it off, and eventually the editor would get fed up and send the paper to some other referee instead. Should Prof Procrastinate accept the assignment?

    Jackson believes that he should not. According to the brand of consequentialism that Jackson endorses, the outcome of an action is the possible world that would, in fact, obtain if the action were performed. In this case, the outcome of accepting the assignment is a rather shabby world in which Procrastinate never gets around to refereeing the paper, annoys the editor, frustrates the career of the author, damages the reputation of the journal, and so on. And that world is clearly worse than that which would obtain if he were to decline the assignment. Hence, he ought to decline the assignment.
    Feldman’s view (as I understand it) implies, by contrast, that Prof Procrastinate ought to accept the assignment. There are optimific worlds open to him in which he accepts the assignment: namely, those in which he accepts and then promptly referees the paper without procrastinating. But there are no optimific worlds open to him in which he declines, since these worlds are all worse than the optimific worlds in which he accepts. Thus, he ought to accept.
    Of course, this all rests on a certain account of what it is for a world to be open to an agent. And perhaps the account that I’ve assumed is not one that Feldman would endorse. Perhaps he would say, in particular, that, since Procrastinate knows that he would not referee the paper, those worlds in which he does do so are not open to him. But if he did say that, then I worry that his view might not be so distinctive, after all.

  33. Jamie,
    I’m a bit confused about the implication of your point (five comments above, assuming no one’s posting while I’m composing this). I’m under the impression that the distinctions between moral and non-moral goodness, on the one hand, and good-for-a-person and good simpliciter, on the other, are orthogonal. So maybe goodness simpliciter always encompasses moral goodness [EDIT: should say “maybe goodness simpliciter always encompasses moral rightness”], but doesn’t that leave intact the difference between moral and non-moral goodness?

  34. As I was construing it above, I take non-moral goods to be goods that can be defined without reference to moral terms. So, assuming that happiness is a good, it would be non-moral if we can explicate happiness without referring to any moral concepts (or at least ‘thin’ moral concepts like right and wrong). The same would be true of welfare or perfection. However, while non-moral, any of these goods might be morally relevant in the sense that they provide the moral criteria for some theories of rightness, i.e., they are what make right acts right and wrong acts wrong.

  35. I see.
    But now I don’t understand your main point. You think a non-consequentialist should say that we are not to maximize goodness*, although the same non-consequentialist can agree that we are to maximize moral goodness. (Have I got you right?) But I can’t see what this amounts to. Moral goodness can be defined in moral terms. Does your non-consequentialist have to maintain that moral goodness cannot be characterized in non-moral terms? What sort of view says this? Would non-consequentialists have to be particularists?

  36. I don’t think non-consequentialists have to be particularists. (Or at least I hope not, or else I’ve gone horribly wrong somewhere!) But many views, at least, do characterize moral goodness in part by reference to moral terms, I think. So, for Kant (for example), roughly, a person is morally good just when she is disposed to do her duty from the motive of duty. Maybe it would be safe to say that many theories hold a similar, but more modest, view: a person is morally good when she is disposed to perform right actions. So, I guess one way of construing one of my points is that both consequentialists and non-consequentialists can agree with this characterization of moral goodness, and hold that (agent-relatively) each agent ought to maximize her moral goodness (I’m assuming here that the disposition to perform right action comes in degrees), so that, per Campbell’s characterization of consequentialism, there is something that we ought to maximize (though, again, he might have meant it to be exclusively agent-neutral). But consequentialists and non-consequentialists still must disagree about whether we should maximize non-moral goods, such as welfare. But I’m not sure whether this is the point you were asking me to clarify.

  37. Huh. But isn’t there a non-moral substitute for “do her duty”? There’s some way to characterize in non-moral terms what doing one’s duty amounts to, unless particularism is true. Suppose someone does her duty just in case she does N, where N is the non-moral characterization. So why couldn’t your Kantian say that there is non-moral goodness, namely the disposition to do N, and that each agent ought to maximize her non-moral goodness? Then your Kantian turns out to be a consequentialist, too.
    Your way of drawing the line sounds sort of Rawlsian, by the way. No doubt you knew that.

  38. Jamie,
    You’re right — that characterization of non-moral goodness doesn’t work. In fact, it’s been bugging me since I wrote the comment (so much for judicious use of the Preview function). So I’m not sure how to cash out the difference between moral and non-moral goodness. But I want to keep the distinction, since I want to say that (most) Kantians are non-consequentialists, and I want to say that Kantians can agree with consequentialists that we should maximize our virtue and that Kantians can hold that we should maximize our right actions (two elements of moral goodness, whatever moral goodness is), without saying that what makes right acts right – your N – is a function of maximizing some (putatively non-moral) good. That seems sensible to me, since it preserves the distinction between consequentialism and non-consequentialism (in terms of what makes right acts right) and seems to rightly capture some core elements of Kant, for famous example. But those desiderata mean, still, that we can’t say consequentialism is (U3a) “There’s something that ought to be maximised,” because, again, Kant would agree, and Kant’s not a consequentialist. So maybe (in a Rawlsian spirit, though I’ve got other issues with Rawls’ characterization of deontology), those desiderata are best satisfied by going back to the old characterization of consequentialism as holding: “What makes right acts right is that they maximize some good.” Of course, that doesn’t go any distance to solving the puzzle with which Campbell’s post began.

  39. OK, I’m way behind. You guys are too fast.
    Campbell – Jackson’s case is interesting, but I don’t think it’s a problem for Feldman in particular. After all, doesn’t it seem problematic to say that Procrastinate gets off the hook by being lazy? He seems to have an (unconditional) obligation to accept the assignment and then referee the paper. But he also seems to have this conditional obligation: not to accept the assignment *if* he’s not going to referee the paper. I don’t have anything interesting to say about this, but people wrote about it a lot 20-30 years ago. In fact, it’s one of the main topics of Feldman’s *Doing the Best We Can.*

  40. Jamie,
    You might be right about the Doctor case. I think there’s room for disagreement about whether the later acts are *caused* by the earlier ones; it seems better to say they are made possible by them. Maybe you’d be less inclined to make the causation claim if I’d given a different example, where it is left open what the agent will do after doing the first part of the complex act.
    There’s an interesting question here, that relates to Campbell’s worry: namely, whether we want to include our own future decisions and actions, and their outcomes, in the outcomes of our current actions. And I think Feldman denies that we should do so – what we’ll do in the future should be treated as open, not determined by our present choices. But I’m not sure about that. In any case, I really wasn’t planning on trying to defend Feldman here – I just thought Feldman’s view didn’t fit the utilitarian mold that Campbell described in the beginning.
    The newlywed case is interesting, but I’m not sure I entirely understand it. You say there’s some act that is available to the Newlyweds as of the later time, but not as of the earlier time. Why, exactly, is it not available to them as of the earlier time? Presumably it doesn’t follow from the fact that they *won’t* do the act that they *couldn’t* do the act. And if it did, it would be equally unavailable to them at the later time. So I’m not seeing the motivation for saying that the same act is permissible relative to one time but not another.

  41. Hm, Ben, I don’t think I said that there’s an act available to the Newlyweds as of the later time but not as of the earlier time. You are thinking of something else that I said, presumably, and I will be glad to tell you what I meant by it, but you’ll have to identify it first.
    Of course, it is true that there are acts available to them at the later time that are not available to them at the earlier time. I think this is obvious, isn’t it? Putting their child to bed is available at the later time, but at the earlier time they do not have a child, so they could not put it to bed. But I am sure this is not what you had in mind.

  42. Here’s what you said:
    [the act of favoring their own child] will be an act that they perform in the best world open to them at the earlier time, but not an act that they perform in the best world open to them at the later time.
    Here are three worlds available to Newlyweds as of the earlier time: (1) have kids, don’t wrongly favor them; (2) have kids, wrongly favor them; (3) don’t have kids. I take it (1) is the best world open to them, (2) is second best, (3) is third best. As of the later time, only (1) and (2) are available. At both times, (1) is the best world available to them. So what I don’t see is why you say that wrongly favoring their own child is an act they perform in the best world available to them at the earlier time.

  43. Oh, I get it.
    In my story, (1) is not open. There is nothing that the Newlyweds can do at the early time that will bring about (1). If they had really good control over their future selves, then they could bring about (1), but like most of us, they don’t.

  44. Yes, it is (at least if we understand (1) in one plausible, relevant way). They do choose to have the child, in my story. Suppose the difference between (1) and (2) is just one occasion. Then, on that occasion, it is up to the couple whether they give undue favor to their child, so (1) is then available to them. (Of course, as my story goes, they in fact will give undue favor to the child, so (2) is the world that will happen.)

  45. I think the reason I was puzzled is we’re using “open” in different ways. I was thinking that a world is open to someone at a time just in case it’s possible, as of that time, that the person will undertake actions (now and in the future) that will actualize that world. You seem to be thinking of it in this way: a world is open for a person at a time just in case that person, at that time, can perform an act that will ensure that that world is actualized. Newlyweds can’t, at the earlier time, see to it that (1) is actualized; at the earlier time they can only eliminate (3). Nevertheless, in the sense I had in mind, (1) is still open to them at the earlier time, since they haven’t done anything to rule it out.

  46. Yes, Ben, I think you’re right that we are using ‘open’ in different ways.
    Suppose that you have $1000 and give all of it to OxFam. Now someone points out that you could have bought a PowerBall ticket, cashed it in when it won, and sent the proceeds to OxFam. This, she notes, would have been much, much better than what you actually did.
    According to Feldman’s view plus your interpretation of the ‘openness’ of worlds, your critic is correct. What you did was wrong, and buying a lottery ticket would have been right! For at the time in question, it was possible that you would buy the lottery ticket, win, cash in the ticket, and send the proceeds to OxFam. Therefore, the world in which you did all those things was open to you. That world is a whole lot better than the world in which you don’t buy the ticket. We may assume, in fact, that it is the best of the worlds that are open at the time. So, you perform the ticket-buying act in an optimific life history world then open to you. So it is right to buy the ticket.
    But this seems wrong. Don’t you think?

  47. Right, it’s wrong to buy the ticket. I think the sense of “possible” relevant to openness has to be tied in some way to the abilities of the agent. In order for a world to be open for an agent, it has to be up to the agent whether that world becomes actual or not. This would take a lot more spelling out, I think. But there seems to be a clear difference between the newlywed case and the lottery case – there’s a course of action Newlyweds can undertake starting at the early time that will ensure they get into (1), but there’s no course of action I can undertake that will ensure I send a million dollars to Oxfam.
    I was wrong before when I said this: “I don’t think Feldman wants to say that an act could be right relative to one time but not right relative to another time.” I was looking at his paper again and he does say exactly that. At this point I want to stop trying to explain Feldman’s view here. I’m not doing a good job.
