Setiya argues that moral side-constraints (e.g. against killing as a means) are best understood as agent-neutral: “In general, when you should not cause harm to one in a way that will benefit others, you should not want others to do so either.” (97) So, if we should not kill one even to prevent five killings, we should also prefer, as mere bystanders to either case, that Five Killings occur rather than Killing One to Prevent Five (K1P5). This verdict is awkward on its face. It only gets worse when we add further cases for comparison.

Consider Killing One but Failing to Prevent Five (K1FP5). This is just like K1P5, except that — disastrously — killing the one fails to achieve the desired goal of preventing the five other killings. So six killings happen in total. K1FP5 is clearly vastly worse than K1P5. It’s presumably comparable in undesirability to Six Killings (where all the same killings occur, but none were intended to prevent others), perhaps slightly better, since at least one of the killings was better-intentioned.

Now, if K1P5 is vastly preferable to K1FP5, which in turn is no worse than Six Killings, then presumably K1P5 is likewise vastly preferable to Six Killings. Five Killings is also preferable to Six Killings, of course, but only by a much smaller margin (one killing’s worth). Since K1P5 beats Six Killings by a larger margin than Five Killings does, we should conclude that K1P5 is preferable to Five Killings after all, contrary to the assumption of a(n agent-neutral) moral side-constraint.
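
To make the inference explicit, here is a rough sketch in schematic notation. (The disvalue function D and the unit of “one killing’s worth” are purely illustrative bookkeeping devices of mine, not anything from Setiya’s paper.)

```latex
% Let D(X) stand for how undesirable outcome X is (higher = worse).
\[
\begin{aligned}
D(\text{K1FP5}) - D(\text{K1P5}) &\approx 5 && \text{(five killings prevented in K1P5)}\\
D(\text{K1FP5}) &\approx D(\text{Six Killings}) && \text{(the same six killings occur)}\\
D(\text{Six Killings}) - D(\text{Five Killings}) &\approx 1 && \text{(one extra killing)}\\[2pt]
\Rightarrow\quad D(\text{Five Killings}) &\approx D(\text{K1P5}) + 4 \;>\; D(\text{K1P5}).
\end{aligned}
\]
% So Five Killings comes out more undesirable than K1P5, contradicting the
% agent-neutral side-constraint ranking on which K1P5 is the worse outcome.
```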

Can we say more to establish that the preferability of K1P5 doesn’t lie between Five Killings and Six? Well, the general worry is that any such intermediate ranking understates how much worse K1FP5 is than K1P5. The implication would be that it doesn’t matter all that much, once the one has been killed in an attempt to save the five, whether or not the five are actually saved. Setiya appears to say as much: “Ethically speaking, the damage has been done” (104) once the five have been put under threat (such that they can only be saved by killing an innocent), rather than only when they are actually killed.

This seems like an odd claim to make about what we really ought to prefer.  Surely the difference (of five killings prevented) between K1P5 and K1FP5 should, for example, outweigh the difference (of just one fewer killing) between Five Killings and Six, given that “all else is equal” between the respective members of each pair — whether the lives are saved or not makes no difference to what other harms occur in the paired scenarios.

Consider how much you should care about one of the lives saved in K1P5.  I would have thought that this should be more or less constant.  But on the view under consideration, you must instead care much less about preventing existing lethal threats than you do about the introduction of a new lethal threat (at least if the prevention attempt was itself wrongful), even holding all else equal (such that the same harms result from the prevention attempt whether it is successful in saving this life or not).  And that seems wrong.

I can understand a deontologist thinking that the importance of saving this life is outweighed by the importance of not killing a new innocent victim. But it’s something else entirely to downgrade the importance of saving this life, such that once the new victim has been killed, it matters less whether or not our target life is saved as a result. Yet that seems to be what the agent-neutral deontologist is committed to.

Or am I missing something?

8 Replies to “Preferring Side Constraints and the Paradox of Deontology”

  1. “It is indeed mysterious how a stronger claim of the many is supposed to arise. Moreover, it seems unfair simply to leave the minority without any chance to be saved. Counting the numbers does not decide the moral question.” [Timmerman 2004].

    I agree that “side-constraints” and “agent-neutral deontology” are not that different from one another – I thought they were kind of implicit in categoricality, and both come into play in the setting of being obliged to give moral advice. Two possible answers: (a) deontologists can’t order preferences in terms of dreadfulness (consider Taurek 1977); (b) lotteries, as per Timmerman et seq.

  2. Hi David, yeah, I guess one way out is to refuse to play the evaluation game at all. That seems a pretty hefty cost, though. (You shouldn’t have to be a consequentialist to think that some outcomes are morally worse — more *undesirable* — than others.)

    It would be a surprising and striking result if it really wasn’t feasible to combine side-constraints with the sensible view that the numbers DO count (for something, even if not everything, supposing the side-constraints matter too).

  3. Shouldn’t Setiya just say that K1FP5 is indeed much worse than Six Killings? Though I’ve only skimmed the paper, I take it that his view is that if one innocent person is killed as a means to prevent others from being killed, this is (somehow!) a much worse state of affairs than one in which a potentially much larger number of people are killed full stop. If so, why should it matter whether the end to which this is a means is actually achieved?

    If it doesn’t matter, I take it Setiya gets to say that K1FP5 is way worse than K1P5 (since five more killings happen), but both are themselves way worse than either Five Killings or Six Killings (since neither of these contain a killing performed in order to prevent others).

  4. Hi Ben, you may be right, but that route is very costly too. If K1P5 is worse than Six Killings, then that implies that if the sixth killer is deliberating between either (i) killing someone for the sheer gratuitous joy of it (and refraining from preventing any other harms), or (ii) killing someone in order to rescue the other five victims, then we should all prefer and hope for the former. But that’s completely absurd. If some killing is to happen anyway, it is not better to refrain from preventing other killings as a result!

    There’s also some textual evidence that Setiya would resist that move. He writes: “The situation in which someone is going to be killed unless they are saved in this way [by killing an innocent] is as bad as the situation in which they are going to be killed. Ethically speaking, the damage has been done. […] It makes things worse, not better, that the button is pushed, so that the innocent stranger dies. That is why One Killing to Prevent Five is worse than Five Killings: it starts out the same and then declines.” (104)

    This talk of how “the damage has been done” makes it sound like it doesn’t much matter whether the threatened five ultimately live or die: that K1P5 is not hugely better than K1FP5. It also sounds like the innocent stranger’s death (by button push) just makes things worse by one death, not some massively outsized amount. It’s just that we don’t get to count the lives thereby saved as pro tanto good (at all?). The immorally rescued five remain morally as good as dead, and a new killing occurs on top of that, yielding a result morally akin to Six Killings.

    (I may be misinterpreting Setiya here though. Regardless, the more important question is that raised in my first paragraph above, of whether it could really be substantively defensible to hold a beneficial killing to be morally *worse* than a vicious killing, all else equal.)

  5. Yes, I completely agree—both that the first question is the more important one, and that it really could not be substantively defensible to hold that.

    I do think the passage you quote is interesting, though, because it illustrates how badly the intuitions it sounds like Setiya is trying to accommodate fit into an agent-neutral consequentialist framework. The natural way to understand “the situation in which someone is going to be killed unless they are saved in this way [by killing an innocent] is as bad as the situation in which they are going to be killed” is as referring to a silencing consideration (à la McDowell/Dancy): it’s not at all implausible that, if the only way for me to survive is through the death of an innocent, this fact weakens or negates reasons I would otherwise have for wanting to live. But it is very implausible that it does so by making *the state of affairs in which I live* worse per se!

  6. “it’s not at all implausible that, if the only way for me to survive is through the death of an innocent, this fact weakens or negates reasons I would otherwise have for wanting to live…”

    Hmm, I wonder. I mean, if you learn that the additional innocent person is going to be killed anyway, and the only remaining question is whether you’ll at least be saved along the way (at no further cost), or whether the one’s death will be entirely gratuitous, it seems to me that our reasons for wanting you to live shouldn’t now be weakened at all!

    Is there a way to make sense of the reasons such that the reasons for wanting you to live are *only* silenced when and insofar as their satisfaction depends upon an additional killing beyond what would otherwise occur? That might get around my objection. That way, the disputed reasons can get *un-silenced* once the killing of the one is a done deal (so to speak). We have strong reasons to want you to be saved *rather than that the killing be gratuitous*; but silenced reasons for wanting you to be saved *rather than that the additional innocent one be left in peace*.

  7. > Is there a way to make sense of the reasons such that the reasons for wanting you to live are *only* silenced when and insofar as their satisfaction depends upon an additional killing beyond what would otherwise occur?
    Maybe! I wonder whether you can make sense of it in terms of transmission conditions (it’s been a while since I read the McDowell, but I also wonder whether this is close to his view), say along roughly the following lines:

    Typically, if I have reason to want E, and M is a necessary means to E, I thereby have reason to want M too. But silencing cases can be exceptions: I can have reason to want to survive, and an event involving an innocent person’s death can be a necessary means to my survival, without having any reason to want this event to happen. So we get the result you point to: when I consider whether I want the event involving the innocent person’s death to happen, I shouldn’t view the fact that this is necessary for my survival as deliberatively relevant (yielding a sense in which, from the point of view I take in considering *that* question, it’s *as if* living and dying are equally bad). But when I learn the event is going to happen anyway, the question whether I should want it becomes moot, and my reasons for wanting to live become deliberatively relevant again.
