Threshold deontology is a theory which holds that an act that is intrinsically wrong, even when it produces the best consequences, can still be morally justified if those consequences surpass a certain threshold of seriousness. It could, for instance, be wrong to torture one innocent person in order to save five innocent people, but right to do so in order to save an entire city of one million. Somewhere between five and one million lies the threshold at which an action that was previously wrong becomes permissible, and perhaps even obligatory.
Threshold deontology is a hybrid, or pluralist, theory. Below the threshold deontology is supposed to rule, but above that point consequentialism reigns supreme. Unsurprisingly, this creates the huge problem of pinpointing the exact cut-off point at which deontological reasoning must give way to consequentialism. Indeed, some philosophers claim to have shown that any such threshold must necessarily be arbitrary, since deontology and consequentialism are antagonistic frameworks that do not share a common measure.
I share this pessimism with regard to threshold deontology, but I will argue that we don’t need the theory, because Kantian absolutist deontology can account for thresholds without any appeal to consequentialism. In the following I will sketch a scenario in which the threshold can be shown to be exactly 4: it would be morally impermissible in our example to kill one person in order to save the lives of three, but right to do so in order to save four.
Assume that ten people are doing something which involves a certain risk to their lives. It doesn’t matter whether the risk is high or low, or whether it is a natural risk or involves murderous human beings. Nor does it matter whether the ten act as a group with a common purpose or as individuals. All that matters is that the activity they are engaged in carries some risk which they all understand.
Now this danger can materialize to different degrees. Call the scenario in which one of the ten is killed an Event 1, the scenario in which two are killed an Event 2, and so on, up to an Event 10, in which all ten are killed. Assume also that everyone’s activity is equally dangerous, so that each has a 10 percent probability of being the one who is killed in an Event 1, a 20 percent risk of being killed in an Event 2, and so on.
Finally, assume that we have a bystander who can reduce the number of deaths to only one by killing one person who would otherwise have survived, so that in an Event 6, for example, the bystander can save the six by killing one of the remaining four. If the bystander is committed to the principle of saving as many lives as possible by killing one, this has the implication that everyone’s risk of being killed is reduced to only 10 percent in an Event 2 or higher. In an Event 4, for instance, each person’s risk is reduced from 40 to 10 percent. To use Kantian language, we could say that all of the ten could rationally will a universal law for a bystander to minimize deaths by killing one in an Event 2 or higher, because such a law would promote their end of survival.
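The arithmetic behind this claim can be checked with a short simulation. The sketch below is mine, not part of the argument, and it assumes the symmetric model just described: the victims of an event, and the survivor whom the bystander kills in their place, are chosen at random.

```python
import random

def risk(event, person=0, n_people=10, bystander=False, trials=200_000):
    """Estimate one person's probability of dying in an Event `event`.

    Without the bystander, `event` randomly chosen people die.
    With the bystander (who acts in an Event 2 or higher), the
    would-be victims are saved and one randomly chosen survivor
    is killed instead, so exactly one person dies.
    """
    deaths = 0
    for _ in range(trials):
        victims = set(random.sample(range(n_people), event))
        if bystander and event >= 2:
            survivors = [p for p in range(n_people) if p not in victims]
            victims = {random.choice(survivors)}
        if person in victims:
            deaths += 1
    return deaths / trials

# In an Event 4, each person's risk falls from about 40 to about 10 percent.
print(risk(4), risk(4, bystander=True))
```

Whatever the size of the event, the bystander’s policy leaves exactly one person dead, and by symmetry each of the ten bears the same 10 percent chance of being that person.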
Let’s now introduce Tom, the only person we need to know by name. Tom is one of the ten, but he is much more careful than the others. His risk of being killed in the different scenarios is as follows:
Event 1 – 3 percent
Event 2 – 6 percent
Event 3 – 9 percent
Event 4 – 12 percent
…
Event 10 – 100 percent
Now, if the bystander is prepared to minimize deaths by killing one of the ten, including Tom, then in an Event 2 Tom’s risk is increased from only 6 percent to 10 percent, and in an Event 3 the presence of the bystander likewise increases Tom’s risk, from 9 to 10 percent. Only in an Event 4 does the bystander reduce Tom’s risk, from 12 to 10 percent. In other words, Tom can rationally will a universal law for a bystander to minimize deaths only in the case of an Event 4 or higher. Consequently, the deontological threshold with respect to Tom is exactly 4: killing Tom to save three people is morally impermissible, but killing him to save four is not. (Note that the threshold for the other nine is still only 2.)
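The threshold can be read off mechanically. Here is a minimal sketch (the function name and parameters are mine), assuming, as in the example, that a person’s baseline risk grows linearly with the size of the event while the bystander’s policy fixes everyone’s risk at 10 percent:

```python
def threshold(per_event_risk, bystander_risk=10, max_event=10):
    """Smallest Event n at which the bystander's law strictly lowers
    this person's risk, i.e. per_event_risk * n > bystander_risk.
    Risks are given in whole percent; returns None if no event qualifies."""
    for n in range(1, max_event + 1):
        if per_event_risk * n > bystander_risk:
            return n
    return None

print(threshold(3))   # Tom: 3, 6, 9, 12, ... percent -> threshold is 4
print(threshold(10))  # each of the other nine -> threshold is 2
```

The strict inequality matters: at an event where a person’s baseline risk exactly equals the 10 percent the bystander’s law would impose, the law does not yet promote that person’s end of survival.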
We see now what it means to treat a person always as an end in himself and never merely as a means. We treat someone as an end when the maxim of our action, conceived as a universal law, is such that it promotes that person’s ends. When we kill Tom in an Event 4, the maxim of our action, seen as a universal law, is one that promotes Tom’s end of survival. On the other hand, when we kill Tom in an Event 3 or an Event 2, we promote the ends of the other nine, but only by demoting Tom’s ends. Tom is treated as a mere means.
Secondly, we have shown that it is possible to derive deontological thresholds without engaging in consequentialist reasoning at all. What justifies the threshold is not that the value of the lives of four people is just enough to offset the badness of the act of killing Tom, but that only at a threshold of 4 is everyone treated as an end and not as a mere means to other people’s ends. Kantian deontology needs no assistance from consequentialism.
And finally, perhaps most remarkably, by rejecting the hybrid theory of threshold deontology we gain a method of locating a precise, non-arbitrary threshold for every conceivable scenario, something we were told was an impossibility and forever out of reach.
Suppose Sam faces zero risk (aside from the risk of our deciding to sacrifice her to save others’ lives). If I understand your view correctly, it is then never permissible to sacrifice Sam, even to save six billion lives (i.e. from some event that Sam was never at risk from). Is that right?
Hi Richard, and thanks for your question.
The simple answer is yes, the numbers don’t count. All that counts is whether the action is one of treating another as an end or as a mere means.
However, if you think of a situation in which 6 billion people would die unless one person is sacrificed, and this person would otherwise have survived, the reason for her not being one of the 6 billion must surely be sheer luck, and not any conscious decision on her part. You can hardly conceive of a situation in which 6 billion people have foolishly ignored a serious risk while only one person has taken the necessary precautions and is safe.
So, in conclusion: if you can save 6 billion people by killing one, it is almost certainly the morally right thing to do.
I find this calculus a little hard to credit given the example of the bystander who won’t lie to save one life. Otherwise we could universally will that the bystander in our particular case is not a Kantian.