I’m interested in defending consequentialism against allegations that it represents an inherently perverse perspective, or that the consequentialist agent would have a morally bad character. For example, critics allege that the consequentialist agent would have ‘one thought too many’, that they would treat others as replaceable ‘value receptacles’, that they would be cold and calculating, untrustworthy, and incapable of genuine personal relationships. I aim to rebut these charges.

By way of background: I assume that for any given moral theory, we can reconstruct what it would take to be a fitting agent, i.e. one who exemplifies the theory's moral perspective. Roughly: the fitting agent is one who believes, and has fully internalized, the moral truth. They thus desire just what’s genuinely desirable. Importantly, to call a character ‘fitting’ in this sense is not to say that the theory in question recommends adopting it. We might do better, by the theory’s lights, to believe and act upon moral falsehoods.

The paradox of hedonism is a great example of this. According to egoistic hedonism, the fitting agent would desire only his own happiness. But such an agent might predictably be happier were he to come to care non-instrumentally about other things and other people. So the hedonist would want to change his character to become a happier non-hedonist instead. Even so, that doesn’t change what the fitting hedonistic character or mindset looks like. We can assess the hedonistic mindset, independently of its consequences, for whether it seems to constitute a morally accurate perspective — this is just an indirect way of assessing whether the theory of hedonism is true. Nonetheless, I think it’s a helpful methodology, because — at least in some cases — we may have stronger intuitions about the appropriateness or perversity of concrete psychologies than we do about the truth of abstract theories.

Now, everything I just said about hedonism also applies to impartial consequentialism. We can grasp the consequentialist mindset, or what it would take to be a fitting consequentialist agent, and we can assess whether this seems to constitute a morally accurate perspective, or a morally perverse one.

Past defenders of consequentialism have typically neglected this challenge, content to gesture at the distinction between criteria of rightness and decision procedures and to point out — correctly enough — that consequentialism needn’t recommend that we adopt a consequentialist decision procedure. But this is non-responsive to the kind of objection I’m considering here. The objection is not that consequentialism recommends an instrumentally bad mindset, but that it exemplifies an inherently misguided one. There are two ways to defend against this objection. One is to bite the bullet and insist that what critics claim to be perverse is not really so. (Maybe it’s actually completely appropriate to treat people as value receptacles!) The second, which I pursue, is to argue that the critics are mistaken to attribute the psychological feature in question to the fitting consequentialist agent.

For example, some claim that the fitting consequentialist agent has but a single desire — to maximize utility — and that the welfare of particular individuals is merely instrumental to this end. To see why this is objectionable, compare the way in which we treat money: I don’t care if you switch the $20 bill in my hand with another, because I don’t care about the particular bills — I just care about my total net worth. But it’d seem terribly perverse to treat individual people as fungible in this way. Any theory that attributes intrinsic value only to aggregate welfare, and not to individuals, is, I think, clearly false.

But consequentialism need not have this implication. The fitting agent has distinct intrinsic desires corresponding to each thing that is intrinsically good or desirable. So if we think it’s fitting to desire each individual’s welfare separately, that just goes to show that we’re committed to a value theory that is in one sense pluralistic: rather than holding that there is only one token good, namely aggregate welfare, it says that what’s good is the welfare of this person, and that person, and so on, for each person. Consequentialism can comfortably take this form. This may not sound like a big difference, but it has concrete psychological implications. When weighing two equally good instruments to the same end, the appropriate response is indifference: this reflects the fact that the particular identities of instruments are not of any normative significance. But faced with two equally weighty intrinsic goods, one responds not with indifference but with ambivalence: one has distinct desires pulling in opposite directions. Even if it doesn’t alter one’s outward behaviour, this internal conflict reflects one’s recognition of the distinct and irreplaceable values in play.
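To make the contrast vivid, here is a minimal toy model of the two desire structures; it is only a sketch, and the names, welfare numbers, and two-world setup are all stipulated for illustration:

```python
# Toy contrast between a single aggregate-welfare desire and a
# pluralistic desire structure with one desire per person.
# All names and numbers are stipulated for illustration.

worlds = {
    "A": {"Billy": 10, "Suzy": 0},   # Billy flourishes, Suzy doesn't
    "B": {"Billy": 0, "Suzy": 10},   # the reverse
}

def aggregate_value(world):
    # One desire: total welfare. Worlds A and B look exactly alike,
    # so the fitting response is mere indifference: persons are fungible.
    return sum(world.values())

def per_person_gains(world, baseline):
    # One desire per person. A and B are equally good overall, but each
    # satisfies one particularized desire and frustrates another,
    # so the fitting response is ambivalence, not indifference.
    return {person: world[person] - baseline[person] for person in world}

baseline = {"Billy": 5, "Suzy": 5}
print(aggregate_value(worlds["A"]) == aggregate_value(worlds["B"]))  # True
print(per_person_gains(worlds["A"], baseline))  # {'Billy': 5, 'Suzy': -5}
print(per_person_gains(worlds["B"], baseline))  # {'Billy': -5, 'Suzy': 5}
```

The equal totals explain why outward behaviour can tie, while the conflicting per-person gains are exactly what ambivalence registers.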

So that's how I think consequentialists should respond to this version of the value receptacle / separateness of persons objection. In my full paper [pdf] I defend my framing and methodology in more detail, and address several other character-based objections to consequentialism. It's very much a work in progress, so any feedback would be much appreciated!

14 Replies to “Fitting Consequentialist Agents”

  1. A couple of points. First, how psychologically plausible is that? I mean, if I were a good consequentialist agent on your view, I should currently have 6,890,646,738 desires or thereabouts.
    Also, why shouldn’t we assess what desires an agent ought to have on consequentialist grounds – i.e., which desires are such that having them has the best consequences? Why then assume that the optimific desires directly correspond to the value theory of consequentialism? Going this way might allow some appealing desires – caring more about those close to us than about each other individual in the world…
    I guess the worry is that if the utilitarian agent has the moral desires, he will be less effective in promoting the objects of those desires. Of course, as Parfit famously says, this isn’t exactly self-undermining, but it sure seems paradoxical.
    Things will probably get easier if you accept agent-centred consequentialism. Then you can run the story the other way. You can start from the ideal agent of the philosopher who is making the objection. You can then construct a value theory on which the objects of that agent’s desires have value. In this case, the consequentialist agent and the objector’s agent have identical desires, so the objection really cannot be run. However, if the consequentialist defends any other axiology, then of course you’ll get the objection that the consequentialist agent desires the wrong things.

  2. Hi Jussi, I definitely don’t “assume that the optimific desires directly correspond [to] the value theory of consequentialism”. I explicitly distinguish the ‘fitting’ mindset from the ‘recommended’ or optimific one. But it’s not enough to only talk about the latter, because the character-based objections to consequentialism are strongest when formulated in terms of the former. (As explained in the paper, I don’t think the self-effacing recommendations of consequentialism pose any serious objection.)
    if I were a good consequentialist agent on your view, I should currently have 6,890,646,738 desires or thereabouts.
    Yeah, the summary offered in the main post is just a rough gloss. In the paper, I suggest that we need to refine the view so that the fitting consequentialist agent has a generic fill-in desire for the welfare of the unidentified masses. It’s just when you are capable of referring to an individual as distinct from others that you instead form a particularized desire for their welfare.

  3. I’m not sure about what you say in the second-to-last paragraph.
    Typically, the distinction between indifference and ambivalence does show up in behavior, and I suspect that strong believers in the “separateness of persons” will want to stress the importance of these behavioral differences.
    If I have a choice between two options between which I am indifferent (e.g., having one $20 bill versus a different $20 bill), then slightly improving one of the options (throwing in an extra $1 bill if I pick the $20 bill on the left) will break the tie, and I’ll no longer be indifferent.
    However, if I’m ambivalent between two options (in the way I suspect separateness of persons advocates would think is appropriate in certain cases involving conflicts of interest), ties won’t be so easily broken. For instance, suppose I have a choice between saving Billy from a horrible death and saving Suzy from a horrible death, and I’m appropriately torn (ambivalent). It’s plausible that I should still be torn even if we change the case so that if I save Billy from a horrible death, I’ll also get a $1 bill thrown in for good measure.
    The upshot of this, I think, is that we don’t need to put as much weight on phenomenological individuations of desire profiles as you’re suggesting: behavior can do a lot more than you might initially think. But given a plausible behavioral characterization of the difference between indifference and ambivalence (and of the corresponding difference between treating people as fungible and treating them as non-fungible), the fitting consequentialist agent will treat people as fungible. A rough sketch of the behavioral test I have in mind follows below.
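    Here is a minimal sketch of that behavioral test; the numbers are stipulated, and the interval representation of values that are merely ‘on a par’ is just one simple way to model resistance to sweetening:

    ```python
    # Sketch of the sweetening test. Precise values model indifference;
    # (low, high) intervals model options that are merely 'on a par'.
    # All figures are stipulated for illustration.

    def precise_choice(a, b):
        # Indifference between equal precise values: any sweetener breaks the tie.
        if a > b:
            return "choose first"
        if b > a:
            return "choose second"
        return "indifferent"

    def imprecise_choice(a, b):
        # Neither option is determinately better while the intervals overlap,
        # so a small sweetener need not settle the choice.
        (a_lo, a_hi), (b_lo, b_hi) = a, b
        if a_lo > b_hi:
            return "choose first"
        if b_lo > a_hi:
            return "choose second"
        return "no determinate winner"

    print(precise_choice(20, 20))      # indifferent
    print(precise_choice(20 + 1, 20))  # choose first: a $1 sweetener breaks the tie
    print(imprecise_choice((90, 110), (90, 110)))  # no determinate winner
    print(imprecise_choice((91, 111), (90, 110)))  # still none: $1 doesn't settle it
    ```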

  4. Hi Daniel, that’s an interesting suggestion.
    I actually think the sort of resistance to ‘sweetening’ you describe in the Billy/Suzy case is a fundamentally different phenomenon from ambivalence (in the relevant sense). If we resist sweetening, I’d interpret that as treating the objects as not precisely comparable in value. But then couldn’t we have a case of mere instruments that are likewise ‘on a par’ in terms of their effectiveness, such that a slight boost to one would not make it determinately better than the other? In that case, we might have a similar behavioural profile to what you describe in the Billy/Suzy case, but this time – I take it – we would have a case of persisting indifference between sweetened options, rather than persisting ambivalence between sweetened options. (Admittedly, I’m not sure about this: You might think that instrumental effectiveness is not the sort of thing that can admit of the relevant kind of imprecision.)
    But even if resistance to sweetening turns out to be sufficient for ambivalence, I definitely don’t think it’s necessary. You could be genuinely torn between two options, even if your decision “hangs in the balance” and could be swayed to either side by the slightest additional reason. You could still feel “torn”, or regretful about the valued object that was lost, even if you now judge that you have most reason to choose the other. Insofar as you have conflicting ultimate desires for the competing particular objects, you are thereby not treating them as ‘fungible’ in the way that we treat money as fungible (i.e. only valuing the aggregate, rather than valuing each bill in its particularity).
    But I take your point that many people who talk about fungibility might actually be concerned with this other (also intuitively appropriate) phenomenon of resistance to sweetening, so I’ll add a section dealing with that. Thanks!

  5. A couple of comments:
    (1) Your distinction between two kinds of responses, indifference and ambivalence, is neat, but I wonder if ambivalence is enough to capture our intuitions about the separateness of persons. By way of analogy, I can be ambivalent between an apple and an orange, but desire having both the apple and the orange over having just a peach. The suggestion I want to make here is that (for all I know) a fitting consequentialist agent can value different persons as distinct and irreplaceable, and still think it OK to violate the rights of one to promote the welfare of many. So it seems to me that the problem is the commensurability and prioritization of different kinds of value, i.e., the intrinsic value of persons as compared to the intrinsic value of welfare. I read the Kantian (“upholder of separateness of persons”) as denying that these two kinds of value can be compared on some scale where gains in welfare can outweigh the value of a person, and as affirming that honoring the kind of value that persons have takes absolute priority over promoting the kind of value that welfare has.
    (2) Recognizing the separateness of persons, in the sense just explained, seems to require modifying the consequentialist criterion of rightness. It cannot be understood as claiming that it is right to maximize some single homogeneous additive value like welfare. There are different kinds of value, some having absolute priority over others, so there should be an additional principle stating this, perhaps establishing a lexicographic ordering of otherwise incommensurable kinds of value (a toy sketch follows at the end of this comment).
    P.S. I haven’t yet read through your paper; I hope to do so soon. Looking over the comments, I see Daniel anticipated my first point.
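    Here is a minimal sketch of what such a lexical-priority rule might look like; the two-component representation of outcomes, and the particular scores, are assumptions for illustration only:

    ```python
    # Sketch of a lexicographic criterion: the first component (say, respect
    # for the value of persons) has absolute priority; welfare only breaks ties.
    # The two-component outcomes and the scores are illustrative assumptions.

    def lex_better(outcome1, outcome2):
        # Each outcome is a (persons_score, welfare_score) pair.
        persons1, welfare1 = outcome1
        persons2, welfare2 = outcome2
        if persons1 != persons2:
            return persons1 > persons2  # no welfare gain outweighs a persons-value loss
        return welfare1 > welfare2      # welfare decides only when persons-value is equal

    # Violating one person's rights (persons-score -1) for a large welfare gain:
    print(lex_better((0, 0), (-1, 1000)))  # True: the no-violation outcome still wins
    ```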

  6. On second thought (and in light of Boram’s and Richard’s comments), I’m not sure if there’s anything problematic for consequentialism per se here.
    Couldn’t you just be a consequentialist who denied that all states of affairs were precisely comparable in value? As Boram says, you’d have to complicate your criterion for right action, but it’s not clear that the ways in which you’d have to complicate it would be hostile to the spirit of consequentialism.
    For instance, there’s a lot of work in decision theory on decision-making with vague or indeterminate probabilities (Brian Weatherson has a paper on his website that provides a nice survey of this stuff; here’s the address: http://brian.weatherson.org/vdt.pdf ). There’s also some more recent work along the same lines on decision-making with vague or indeterminate utilities (see Caspar Hare’s “Take the Sugar,” for example). None of this work seems hostile to the spirit of consequentialism, at least to me.

  7. Daniel – “Couldn’t you just be a consequentialist who denied that all states of affairs were precisely comparable in value?”
    Yes, exactly! (I’m not yet sure that I want to go that route myself, but that’s what I had in mind for those who want to accommodate resistance to sweetening.)
    Boram – “a fitting consequentialist agent can value different persons as distinct and irreplaceable, and still think it OK to violate the rights of one to promote the welfare of many”
    Right, I think any consequentialist worthy of the name has to endorse (e.g.) killing one person to save several. (I don’t want consequentialism to collapse into deontology here!) I’m just wanting to establish that this consequentialist commitment is compatible with recognizing the separateness of persons (given my analysis of what this requires in principle).

  8. Richard,
    thanks for the response. You say:
    “I explicitly distinguish the ‘fitting’ mindset from the ‘recommended’ or optimific one. They thus desire just what’s genuinely desirable.”
    You also wrote earlier that:
    “the fitting agent is one who believes, and has fully internalized, the moral truth”
    That sounds to me like the optimific mindset. I thought the moral truth in question would be the value theory of consequentialism. But now the fitting mindset is supposed to be something different, so what is it?
    Also, be careful with this kind of stuff: “the fitting consequentialist agent has a generic fill-in desire for the welfare of the unidentified masses”. That really sounds like the kind of de dicto desire that seems objectionable to your opponents.

  9. I’m not sure I see the problem.
    You distinguish two relations a ‘mindset’ may bear to a moral theory: it may be ‘recommended by’ the theory, or it may ‘fit’ the theory. I think I understand the former: a moral theory T recommends a mindset M iff T implies that our having M is permissible. Now suppose that T recommends a ‘bad’ mindset, by which I mean a mindset that is impermissible for us to have. Clearly that would be a mark against T, indeed a decisive one, I’d say. T would then be false, because it implies something false.
    But suppose now that T is merely fitted by a bad mindset. I don’t see how this counts against T. For one thing, it seems compatible with T’s being true.

  10. Hi Campbell, there are a couple of different senses in which a mindset could be ‘bad’. It could be (all things considered, including instrumentally) disvaluable, in which case a sensible theory will recommend against its possession. But it’s no objection to a theory that it is merely fitted by an instrumentally bad mindset, I agree.
    The more interesting sort of evaluation, for my purposes, assesses a mindset for whether it is ‘bad’ in the internal sense of being perverse, misguided, or failing to fit with the moral truth. For example, if we know that a morally accurate perspective precludes treating people as fungible, but theory T does treat people as fungible (in the sense that the T-fitting agent or mindset would see people as fungible), then it follows that T is inaccurate.
    Jussi – are you using ‘optimific’ to mean something other than ‘has the best consequences’? Because there’s no reason to think that believing the truth (and desiring just what’s good) must have the best consequences. See my discussion of the paradox of hedonism.
    I agree that care must be taken with the use of de dicto fill-in desires. I say some things in the paper about why, given the role they’re playing, we shouldn’t find these fill-in desires objectionable. (See my ‘objections’ section, around p. 17.) But I’d be interested to hear more if you find my arguments there unsatisfactory.

  11. No. By ‘optimific’, I mean the truest: the set such that, for every object to which the correct consequentialist axiology assigns value, the set contains a corresponding desire. This set will probably not make things go best. For one thing, it’s going to require a lot of instrumental practical deliberation, which will be costly.
    Now, I thought that this is the fitting mindset you started with. But in your response you wanted to distinguish the two. The original point was: what’s the point of requiring this mindset of the consequentialist agent when it results in less of what really matters for the consequentialist?
    If you can distinguish the fitting set from the set which is optimal with respect to truth but suboptimal with respect to good consequences, then you could say that the fitting one is the one that has the best consequences. But then we are back with the old criterion of rightness / decision procedure view – in fact, we are back with Mill.
    I’ll have a look at page 17 🙂 thanks!

  12. OK, I was misled by your non-standard usage of the term ‘optimific’. It sounds like you do have in mind what I mean by the ‘fitting’ (rather than the fortunate or recommended) mindset: reflecting the true value theory, etc. So it’s good to clear that up.
    You ask, “what’s the point of requiring this mindset of the consequentialist agent when it results in less of what really matters for the consequentialist?”
    I’m not sure I understand the question. I don’t “require” the mindset in any normative or exhortative sense. If an agent has a choice between adopting the fitting mindset or the recommended one, they’re required to choose the latter!
    Of course, once they successfully inculcate the fortunate-but-inaccurate mindset, they will no longer ‘fit’ the theory of consequentialism, or exemplify the consequentialist mindset. So in that sense they will no longer be a “consequentialist agent”. So maybe you’re referring to the fact that I ‘require’ the fitting consequentialist mindset in the merely criterial sense that, without this mindset, an agent is no longer what I’m talking about when I use the term “consequentialist agent”. But then your objection is easily answered: the point of using the terms in this way is to allow us to talk about the consequentialist mindset, and to assess whether it seems to exemplify a morally accurate or a morally perverse (/inaccurate) perspective.

  13. Hi Richard,
    Hope this is not repeating what has been said…
    You might want to consider the objection that the consequentialist perspective embodies a perversely indiscriminate form of benevolence. In Civilization and its Discontents, Freud makes this objection to the Christian injunction to love your neighbor as yourself (he thinks indiscriminate love is an impractical and perverse ideal). Confucians make an analogous objection to Buddhist views of compassion (which some think give rise to a consequentialist view). And one can imagine various defenders of partiality lodging the objection in a more contemporary setting.
    Just a thought!

  14. Thanks Brad, yeah that’s definitely something to consider. I suspect that this is one attribute that many consequentialists would be happy enough to acknowledge: we’ll just insist that universal and impartial concern (were it possible) would be a genuine ideal, and not perverse at all.
    But for those more partial to partiality, it’s probably worth noting the option of agent-relative consequentialism (where you just weight the welfare of your loved ones more heavily than that of strangers). At least, such agent-relativity will be a coherent option for those of us who accept a fitting-attitudes analysis of value. Mooreans might have more trouble with it.
