PEA Soup Blog is pleased to be hosting this Ethics discussion with Erik Zhang (University of Leeds) and Theron Pummer (University of St Andrews).

This discussion focuses on Zhang’s paper “Individualist Theories and Interpersonal Aggregation”, with a critical précis from Theron Pummer.

Précis of Erik Zhang’s “Individualist Theories and Interpersonal Aggregation”

Theron Pummer

It is an absolute pleasure to introduce Erik Zhang’s wonderful article. I believe he provides one of the most promising solutions to the numbers problem to date. The article is extraordinarily rich, and I can’t offer a complete summary, but I will outline the central ideas and end with a few exploratory questions.

Consider the following cases:

Life vs. Lives: We can, at no cost to ourselves, save either one stranger from death or a greater number of others (say, two) from the same fate.

Life vs. Headaches: We can, at no cost to ourselves, save either one stranger from death or very many others (say, a billion) from each incurring a minor headache.

Many of us have the intuition that we’re morally required to save the greater number in Life vs. Lives. At the same time, many of us also have the intuition that we’re morally required to save the one in Life vs. Headaches. It is notoriously difficult to provide a plausible theory that captures both intuitions. For example, according to a simple consequentialist view, we’re required to save the greater number in Life vs. Lives because this is what would maximize aggregate value. This simple view has the counterintuitive implication that we’re required to save the many from headaches rather than save the one from death in Life vs. Headaches, insofar as this would maximize aggregate value. Indeed, not just simple consequentialism, but any view that says we’re required to save the greater number in Life vs. Lives based on adding together the separate moral reasons to save each member of the larger group faces the challenge of explaining why no number of headaches could in aggregate morally compel us to allow the one to die in Life vs. Headaches.

By contrast, according to an individualist moral framework, “the moral permissibility of an action depends only on its implications for single persons, not on its implications for collections of people” or aggregate value (p. 480). We are to consider only each individual’s personal reasons or complaints, “that is, reasons [or complaints] having to do with a person’s own well-being, interest, or status” (p. 482). An individualist framework can straightforwardly explain why we are required to save the one in Life vs. Headaches. After all, the complaint of the one is the weightiest personal complaint. However, this framework struggles to capture the intuition that we’re required to save the greater number in Life vs. Lives. After all, it may appear that, since the well-being at stake for each is the same, no member of the larger group has a weightier personal complaint than the one.

Like Tim Scanlon, Zhang aims to solve the numbers problem by expanding the basis of personal complaints beyond well-being (p. 484). I can legitimately complain, on my own behalf, not just in virtue of having my well-being diminished, but also in virtue of being disrespected or treated unfairly. Zhang’s core idea is that failing to give equal consideration to equal interests constitutes one such form of disrespect or unfair treatment. He writes:

Suppose we decide that if A’s life is ever threatened, we will devote X amount of valuable resources to the saving of A’s life. Suppose we also decide that if B’s life is ever similarly threatened, then we will devote to B only half the amount of resources we are willing to devote to A. Holding fixed that the lives of A and B do not differ in any morally relevant respect, our decisions reveal an inequality in the consideration we show to the interests of A and B. B may rightly complain, on her own behalf, that her interest in avoiding a lethal harm is unjustifiably given less importance than A’s interest in avoiding the same. Moreover, the inequality at issue is revealed by a difference in the amount of value that we are prepared to forgo for the respective interests of the parties involved. Call this understanding equal consideration in the value-forgoing sense. (p. 496)

Zhang proceeds to argue that, if we save the one in Life vs. Lives, we violate equal consideration in the value-forgoing sense. In this case we can either save A’s life or instead save the lives of both B and C. If we save A’s life, we forgo the value of two lives saved, whereas if we save B (along with C), we forgo the value of one life saved. Assuming that saving two generates more value than saving one, saving A would violate equal consideration in the value-forgoing sense. As Zhang writes,

In effect, if one decides to save A in Life vs. Lives, then each of B and C may raise a personal complaint in the following terms: “If you treat A’s life as being so important that you are willing to let two other people die for A’s sake, then since my life is just as important, giving equal consideration to my life means that you should treat my life as giving you a comparatively stronger reason to forgo just a single life for my sake. Failing to do so unjustifiably undervalues my life in comparison to A’s life.” Or, even more pithily, “Why are you willing to save A at the cost of two lives but no more willing to save me at the cost of just one life?” (p. 497)

Each member of the larger group has a weightier personal complaint than the one. Accordingly, the moral reasons to save each member of the larger group are individually stronger than the moral reasons to save each member of the smaller group. An individualist framework can in this way capture the intuition that we’re required to save the greater number in Life vs. Lives. Not only this, but it supports a plausible answer to Elizabeth Anscombe’s famous “who is wronged?” question, namely that saving the one wrongs each member of the larger group (p. 501). Zhang formulates the relevant underlying principle as follows:

Equal Consideration 1 (EC1): If X’s interest and Y’s interest are equally weighty and V1 is more valuable than V2, equal consideration entails that if there is reason of a certain strength to forgo V1 for the sake of X’s interest, then there is comparatively stronger reason to forgo V2 for the sake of Y’s interest. (p. 497)

According to EC1, in Life vs. Lives there is stronger moral reason to save B than there is to save A (likewise, there is stronger moral reason to save C than there is to save A).
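
Put schematically (my gloss, not Zhang’s own notation): writing $w_Z$ for the weight of $Z$’s interest and $R(V, Z)$ for the strength of the reason to forgo value $V$ for the sake of $Z$’s interest, EC1 says:

\[
\big(w_X = w_Y \ \text{and}\ V_1 > V_2\big) \implies R(V_2, Y) > R(V_1, X).
\]

In Life vs. Lives, $V_1$ is the value of the two lives forgone in saving A, and $V_2$ is the value of the single life forgone in saving B (or C), so the schema delivers exactly these comparative verdicts.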

It is important to observe that Zhang’s argument for a requirement to save the greater number based on EC1 appeals to at least a limited form of axiological aggregation. For example, it depends on the claim that saving two generates more value than saving one. While many of those who reject a requirement to save the greater number are happy to concede this axiological claim, famously some, including John Taurek, are not. Still, unlike deontically pluralist views (e.g., Johann Frick’s, p. 493), Zhang’s argument remains thoroughly individualist with respect to deontic assessment in that aggregate value plays only an indirect role (p. 495). Aggregate value does not directly ground moral reasons; it grounds them only via personal complaints taken individually.

Next, Zhang addresses the following sort of case.

Life vs. Paraplegias: We can save either one stranger from death or many others from each incurring a lesser but still substantial harm, such as permanent paraplegia.

If the many are sufficiently numerous, we are intuitively required to save the many. Zhang argues that, with a suitable conception of equal consideration, this intuition can also be captured within an individualist framework. Zhang writes,

Suppose we decide that if A’s life is ever threatened, we will devote X amount of valuable resources to the saving of A’s life. Suppose we also decide that if B is ever threatened with paraplegia, no amount of resources will be devoted to the alleviation of B’s plight. These decisions reveal an inequality in the consideration we show to the interests of the two parties. B may rightly complain, again on her own behalf, that while her interest is indeed less weighty than A’s interest, we should nevertheless be prepared to allot to her an amount of valuable resources that is proportional to the relative weightiness of their respective interests. (pp. 497-8)

In addition to EC1, Zhang adopts:

Equal Consideration 2 (EC2): When X’s interest is weightier than Y’s interest, equal consideration entails that the values we have reason to forgo for the sake of their respective interests are not disproportional to the relative weightiness of their interests. (p. 498)

When there is a big enough difference in value between saving the one from death and saving the many from paraplegia, each of the many can legitimately complain that her interest isn’t given equal consideration.

Zhang is aware of the danger here: if EC2 implies a requirement to save the many in Life vs. Paraplegias, couldn’t it likewise imply a requirement to save the many in Life vs. Headaches? Suppose we’d prevent far more than a billion headaches—some ungodly number. Suppose the aggregate value of preventing all these headaches is sufficiently large, so that it dwarfs the value of saving a single life. Wouldn’t it then be disproportional to be willing to forgo this very large amount of value for the sake of saving a life but be no more willing to forgo much less value for the sake of preventing a person’s headache? (Perhaps the interest in being saved from death is roughly 100,000 times weightier than the interest in being saved from a headache, but preventing the ungodly number of headaches generates 100 million times more value than saving one life.)
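
To spell out the parenthetical arithmetic (a rough sketch using only the numbers just stipulated, writing $v$ for the value of saving one life):

\[
\frac{\text{value forgone for the one's interest}}{\text{value forgone for each headache sufferer's interest}} = \frac{10^{8}\,v}{v} = 10^{8} \;\gg\; 10^{5} = \frac{\text{weight of the interest in avoiding death}}{\text{weight of the interest in avoiding a headache}},
\]

a mismatch by a factor of a thousand, which is precisely the sort of disproportion EC2 seems to condemn.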

In response, Zhang argues that EC2 does not apply to cases like Life vs. Headaches (pp. 498-9), citing his earlier argument (section III, pp. 486-493) for the crucial claim that, while there typically are moral reasons to prevent minor headaches, these reasons are normatively disabled (or silenced) in contexts in which we can prevent these headaches only if we let others die. Minor headaches don’t provide reasons in such contexts. Given this crucial claim, Zhang contends that “no one facing a minor headache is in a position to complain if we simply bar her interest from being part of the aggregate value in the application of the argument from equal consideration” (p. 499). The only moral reason present in Life vs. Headaches is the reason to save the life, so that we’re morally required to save the one.

Though it is powerful, not everyone will accept Zhang’s normative disabling argument (section III). But it is worth observing that even without this argument we can still block the inference from EC2 to the claim that we are required to save the many in Life vs. Headaches if we hold that the value of saving a single life is greater than the aggregate value of preventing any number of headaches. Indeed, not everyone accepts a fully aggregative view at the axiological level.

In sum, Zhang provides a promising new solution to the numbers problem within an individualist framework. Appealing to a value-forgoing conception of equal consideration, as stated by EC1 and EC2, Zhang captures the intuitions that we’re required to save the greater number in cases like Life vs. Lives and Life vs. Paraplegias. Appealing to his normative disabling argument, Zhang captures the intuition that we’re required to save the one in Life vs. Headaches. Finally, section V addresses further issues that arise in structurally more complex cases, involving competitions between “nonhomogenous” groups (as when we can either save one person from death and save one other person from incurring a headache or save ten others from paraplegia and save a million others from each incurring a headache). I lack the space here to summarize this further section, but it makes several interesting and important contributions to the ongoing debate about partially aggregative views.

I will end with a few exploratory questions about Zhang’s value-forgoing conception of equal consideration, particularly EC1. I wonder whether or how EC1 should apply to some cases Zhang doesn’t discuss in his article. First consider:

Many vs. Many & One: We can save either 99,999 strangers from death or 99,999 other strangers from death and 1 from paraplegia.

Many find it plausible that we are not required to save the greater number in this case. But, given the difference in value between these alternatives, EC1 appears to imply that the moral reasons to save each of the 99,999 in the larger group are individually stronger than the moral reasons to save each of the 99,999 in the smaller group. Absent further details, EC1 appears to imply a requirement to save the greater number. Next consider:

Risky Case Three: We can either save A’s life or save B’s life and prevent C from facing a tiny risk of death.

This case is a variant of Zhang’s Case Three (p. 509, originally from Frances Kamm)—I’ve just replaced C’s sore throat with a tiny risk of death. I suspect that, just as many find it plausible that we’re not required to save the greater number in Case Three, many will find it plausible that we’re not required to save the greater number in Risky Case Three. But if (unlike the reason to prevent a sore throat) the reason to prevent the tiny risk of death is not disabled in this context, and if EC1 is sensitive to the (admittedly small) difference in expected value between the alternatives in Risky Case Three, EC1 may imply that the moral reason to save B is stronger than the moral reason to save A. It may therefore imply a requirement to save the greater number.

One possible response to these cases is to invoke parity, claiming that, in Many vs. Many & One and Risky Case Three, the alternatives are evaluatively on par, so that EC1 does not apply. But it may be difficult to argue this while maintaining that, in Life vs. Lives, saving two generates more value than saving one. Finally, consider:

Bored vs. Joyful: We can save either A’s life or B’s life. Their lives have gone roughly the same up until now, but their possible futures differ dramatically. If saved, A would have a mediocre (but definitely worth living) future. If saved, B would have an absolutely amazing future.

It is plausible that, while A and B have equally weighty interests in being saved, saving B would generate much more value than saving A would. If that is correct, EC1 would seem to imply that we are required to save B. Many find that implausible. More counterintuitively still, EC1 may imply that we’d be required to save B rather than save multiple people like A, if saving B would generate more value than saving all these people like A.

Should EC1 apply to the three cases raised above in the ways I have suggested it might? Or should EC1 be revised, or supplemented with some other principle, to avoid potentially counterintuitive implications in some of these cases? Would such revision or supplementation remain compatible with the work EC1 is supposed to do in core cases like Life vs. Lives? I for one suspect Zhang can offer plausible responses to these questions, but I’m curious to know what he thinks.

36 Replies to ““Individualist Theories and Interpersonal Aggregation.” With Erik Zhang and Theron Pummer.”

  1. Many thanks to PEA Soup for hosting this discussion of my paper; I feel extremely honoured and grateful. And special thanks to Theron for his excellent critical précis, which raised many thought-provoking issues that, frankly, were not on my radar when I was writing the paper!

    My aim in this first post is to respond, to the best of my ability, to the critical questions posed by Theron’s piece. Before I do so, however, let me say a few words about what led me to write the paper in the first place (and thank some people along the way).

    My interest in the numbers problem stems from three seminars I took as a student. The first two were on Scanlonian contractualism, taught by Rahul Kumar at Queen’s and Liam Murphy at NYU respectively. The third seminar was taught by Johann Frick at Princeton, which introduced me to the recent literature on partial/limited aggregation. I have these individuals to thank for making vivid to me just how multi-faceted the numbers problem is, and just how important a proper solution to the problem is for the development of both consequentialist and nonconsequentialist theories of morality.

    As I see it, the numbers problem can be generally characterized as a challenge to bring into reflective equilibrium the following three levels of theorizing. First, there is the level of foundational and comprehensive moral theory, including (but not limited to) standard forms of consequentialism and moral contractualism, with the former having an aggregative mode of justification and the latter an individualist one. Second, there is the level of the criterion of adjudication, where the standard options include full aggregation, partial aggregation, and no aggregation. Third, there is the level of cases, including the three leading cases of Life vs. Lives, Life vs. Paraplegias, and Life vs. Headaches, as well as more structurally complex cases that fill the recent literature on partial aggregation.

    My main motivation for writing the paper is to take up the challenge posed by the numbers problem on behalf of individualist moral theories, such as moral contractualism (consequentialism, of course, faces its own version of the numbers problem). There are two main tasks here. First, how do we plausibly connect the level of theory and the level of adjudication; that is, proceeding from an individualist mode of justification, how can we allow the numbers to count in cases in which they clearly should (and not allow them to count in cases in which they clearly should not)? This task is taken up in sections 3 and 4 of the paper. Second, once the theoretical foundation for a generally partially aggregative view is secured within an individualist moral framework, how do we then develop an extensionally adequate version of partial aggregation, one that can handle the more structurally complex cases advanced by its recent critics? This is the task of section 5 of the paper.

    Another main aim of my paper, which comes out explicitly only in the concluding section, is to enumerate some conditions of adequacy for a proper solution to the numbers problem. In my view, in addition to capturing the correct verdicts in the leading cases of Life vs. Lives, Life vs. Paraplegias, and Life vs. Headaches, a satisfactory solution should attend to Anscombe’s question, address the plausibility of the lottery strategy, capture the idea that sometimes individuals have increasingly stronger claims to being saved as the disparity in numbers increases between competing groups, and help us navigate more structurally complex situations of trade-off.

    It seems to me, for example, that even if consequentialists can avoid giving the implausible verdict in Life vs. Headaches, their preferred theory is still poorly suited to give the correct answer to Anscombe’s question, which asks who in particular is wronged if we do not save the larger group in Life vs. Lives (and the correct answer, I think, is that each person in the larger group is wronged; each may complain, on her own behalf, that she in particular has been treated in an unjustifiable manner). Similarly, many other proposed solutions to the numbers problem, including ones that appeal to the ex ante perspective, and ones that posit a rational or moral requirement to satisfy as many individual claims as possible, seem to me to violate one or more of the conditions of adequacy above.

    Theron has provided an excellent summary of my main argument in section 4, which has to do with the central question: How can we make the numbers count without directly counting the numbers? In case it is helpful for the subsequent discussion, let me briefly note some other key claims I make in the paper.

    – In section 2, I make two moves. First, we should clearly distinguish two levels at which aggregative considerations may enter: they may enter at the level of *justification* or the level of *adjudication*. Moral individualism bars their entrance at the former level but not the latter. Second, as Theron notes, I emphasize that individualists should broaden their understanding of the basis of personal reasons to include not only considerations of individual well-being but also considerations of fairness and equal consideration.
    – In section 3, I argue that the notion of *relevance* which features prominently in the literature should be explained in terms of the phenomenon of normative disabling. Disabling, I submit, takes place at the level of reasons, and it is sourced in the contextual character of the reason relation itself. I further submit that we have good reasons for accepting that disabling occurs in the practical domain because of its intimate connections with the nature of value and the conditions of respect.
    – In section 4, the two key claims I make before advancing my argument from equal consideration are (a) axiological aggregation and moral individualism are distinct doctrines, and (b) though aggregative considerations may not directly figure in the content of people’s moral claims, they may nevertheless serve as the *background* against which individuals may derive personal reasons for accepting or rejecting a criterion of adjudication.
    – In section 5, my main claim is that though the strongest individual claims in a trade-off situation do not enjoy lexical priority, they may nevertheless have pride of place in our moral adjudication by setting the *context* and *aim* of the process of offsetting. I develop this idea in the form of my Partial Primacy Account (PPA), which addresses many of the extensional objections to partial aggregation.

    Let me now turn to the critical questions posed by Theron’s précis. Though Theron says they are merely exploratory, they have definitely provoked a lot of thought on my part. From the outset, I want to note an unstated assumption in my paper. I assume, as is standard in the literature, that the individuals (and the options) involved in the trade-off situations do not differ in any morally relevant respects, other than the immediate harms that the imperiled individuals stand to suffer. That is to say, I assume, for example, that in Life vs. Lives it is not the case that saving the larger group is more risky or more difficult for the rescuer, or that saving either group would involve harming an innocent bystander, or that some people have far more years to live if saved than some others. And so on.

    Nevertheless, Theron is absolutely right that my EC1 and EC2, as developments of the idea of equal consideration in the value-forgoing sense, rely on comparing and/or aggregating the values of the lives of different individuals. So, let me address the three cases Theron poses in turn (to the best of my ability).

    I begin with the case of Bored vs. Joyful. Here, I want to say, at a first pass, that though Bored and Joyful may have different amounts of value contained *in* their respective lives (admitting that subjective experience like happiness is indeed a value-contributing element in the lives of people), the value *of* saving their lives need not exactly track the amount of valuable elements their respective lives contain. To make this claim plausible, I am indeed inclined to reach for the notion of parity, as Theron suggests. That is, I want to say that the value of saving Bored’s life and the value of saving Joyful’s life are on a par.

    One attractive feature of this proposal is that parity can be broken by sufficiently large improvements and worsenings. Hence, if we assume, in a variant of the case, that Bored’s psychological state renders her life not worth living (or barely worth living) if saved, then I think it is only right that we should save Joyful instead. Here is a related case. Suppose that we can either save A, who is 15 and has 70 more years to live, or save B, who is 80 and has only 5 more years to live. In this case, I submit, the parity that generally holds between the lives of people is broken, and it is only right that we should save A. Indeed, suppose that we can either save A, who is again 15 and has 70 more years to live, or save B *plus* C, who are both 80 and have 5 more years to live each. I think a plausible case can be made that we should still save A in this case.

    I count it as a strength of EC1 that it can capture these verdicts. I don’t think appealing to parity here jeopardizes the claim that, in Life vs. Lives, saving two generates more value overall. In the standard presentation of the case, each of the three individuals involved has many more years to live if saved, so parity can definitely be broken when the aggregate value of saving two is brought into the picture.

    Let me then turn to Risky Case Three. One thing I do not do in the paper is to connect my discussion to the complex and ever-growing literature on trade-offs involving risk. I do have some things to say about the debate, but I do not yet have a fully worked out view. Nevertheless, here is what I am inclined to say about Theron’s case. I share Theron’s intuition that we are not required to leave A to die just because the option of saving B produces the additional benefit of preventing a tiny risk of death to C. My suggestion is that C’s interest in avoiding a tiny risk of death is indeed normatively disabled in the situation at hand.

    But here is a potential problem with my suggestion. Suppose that we can either save A’s life or protect each of a very large number of people from a tiny risk of death, where, if we do not protect them, 10 people will die (though we cannot know in advance who they will be). If a tiny risk of death is individually disabled by the value of saving a life, then doesn’t my suggestion implausibly imply that we ought to save A? I say No, because I think the ex post fact that 10 people will die (though we cannot know in advance who they will be) is morally dispositive. In Theron’s Risky Case Three, by contrast, the risk of death is tiny even from an ex post perspective, so normative disabling can happily occur. I do realize, however, that my reply lands me squarely in the debate between ex ante and ex post approaches to risk; the reply should then be seen as a promissory note (I do hope to contribute to the debate one day).

    Finally, let me turn to the case of Many vs. Many & One. I must admit that this case gives me the greatest pause. On the one hand, I am happy to accept the implication of a straightforward application of EC1, namely that we ought to save the larger group, since my intuition about the case is less firm than Theron’s. On the other hand, being someone who takes other people’s firm intuitions seriously, I do want to explore how my framework might accommodate the verdict that it is permissible to save either group. Here is one rough-and-ready suggestion.

    We may begin with the observation that a key difference between One vs. Two and Many vs. Many & One is the *ratio* size difference between the disvalues that the members of the two groups stand to incur (I credit this idea to Michael Rabenberg). In the former case, the ratio size difference is, say, 2:1, whereas in the latter case, the difference is, say, 99,999.75:99,999.

    Next, in my paper, I do make the claim that the strength of an individual’s equal-consideration complaint is sensitive to the difference in the amount of value a rescuer is willing to forgo for the respective parties involved. As I write, “In Life vs. Lives, for example, when B is in a group of two, and we decide to save A instead, we are in effect saying that A’s life is important enough to give us sufficiently strong reason to forgo the value of saving two other lives, but it is not the case that B’s life gives us comparatively stronger reason to forgo the value of saving just one life. But when B is in a group of one million, and we once again decide to save A, the inequality in the consideration we show to the lives of A and B is even greater. In effect we are saying that, compared to A’s life, which we judge to be worthy of forgoing the value of saving one million lives, B’s life gives us no stronger reason to forgo the value of saving just a single life. B therefore has an even stronger personal complaint, based on the ideal of equal consideration, when she is in a group of one million than when she is in a group of two.”

    We can then add that the strength of an individual’s equal-consideration complaint is also sensitive to the *ratio* size difference between the competing groups. We may then run my equal-consideration argument in ratio terms. In One vs. Two, each of the Two may say: “Look, why are you willing to forgo the aggregate value of saving two lives for the One, but no more willing to forgo *half* of that value for me?” In Many vs. Many & One, by contrast, ratio size difference is smaller, so the complaint by any one person in the larger group seems considerably weaker (if the ratio understanding is adopted).
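
    Schematically (an illustrative sketch, stipulating that a paraplegia is worth 0.75 of a life, as the ratio above presupposes), the relevant forgone-value ratios are:

    \[
    \text{One vs. Two: } \frac{2}{1} = 2, \qquad \text{Many vs. Many \& One: } \frac{99{,}999.75}{99{,}999} \approx 1.0000075.
    \]

    On the ratio understanding, each member of the larger group in the latter case asks us to forgo for her sake almost exactly what we already forgo for the smaller group, so her equal-consideration complaint comes out correspondingly weak.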

    But in order for this suggestion to work, I need to do two things. First and most obviously, I need to provide a deeper rationale for why ratio size difference matters in the way I suggest. Second, I need to say by how much the strength of equal-consideration complaints is tempered/modified by factors about ratio. These questions I will have to think more about.

    Now, being my own worst critic, let me end this post by noting two (further) unsolved issues within my paper; I do not yet have fully satisfactory answers to them, and they sometimes keep me up at night. The first issue has to do with the conception of reasons and values that underlies a major part of my argument. In my paper, I grant that suffering a minor headache carries *disvalue* (indeed, I grant that preventing sufficiently many headaches can generate more aggregate value than saving a single life). But I also maintain this kind of minor harm provides us no *reason* for its alleviation (in the context of Life vs. Headaches at least). This position, it seems to me, puts serious pressure on the buck-passing account of value, to which I am broadly sympathetic. One may rightly ask: Can we really make sense of something being valuable or disvaluable if it provides us with no reason for its preservation or prevention?

    Second, my argument assumes that an individual’s equal-consideration complaint is normally dispositive or conclusive in settling the justifiability of a criterion of adjudication. But nowhere in the paper do I provide independent support for this claim. As Gideon Rosen once said to me, even if we accept that each member of the larger group has an equal-consideration claim that we ought to save them, why think this claim is so strong as to *conclusively* dictate which group we ought to save? (To be sure, I could make a companion-in-guilt move and say that the tie-breaking argument also assumes without further argument that an individual’s claim that her additional presence must be allowed to break the tie is conclusive; but companion-in-guilt arguments only get us so far).

    In any case, I am really looking forward to the subsequent discussion, and I want to thank Theron once again for the time and attention he has devoted to my paper! I am currently on the Pacific Coast of Canada, so apologies in advance if the time zone difference delays my response!

  2. Hi Erik (my deontology buddy from Singer’s consequentialism seminar): What about the following two cases? (Case 1) Life vs. Thumbs: We can, at no cost to ourselves, save either one stranger from death or a billion other strangers from the loss of a thumb. (Case 2) Many vs. Many & One Billion: We can, at no cost to ourselves, save either 99,999 strangers from death or 99,999 other strangers from death and one billion other strangers from the loss of a thumb.

    I have the clear intuition that we’re required to save the larger group in Many vs. Many & One Billion but the smaller group in Life vs. Thumbs. This suggests to me that we’re weighing two ends against each other: the end of respecting each person’s right to have a say in decisions affecting them that’s proportional to what they have at stake in the decision and the end of promoting the impersonal good.

    In Life vs. Thumbs, it seems that the vast difference between the loss of a life and the loss of a thumb silences the moral reason that we have to give each of those who stand to lose a thumb a say (even a tiny say that’s proportional to their relatively small stake) in our decision of whether to save the one life or the billion thumbs. So, in this case, the moral reason we have to pursue the end of respecting people favors our saving the one life rather than the billion thumbs. And, in this case, the moral reason we have to pursue the end of respecting people outweighs the moral reason we have to pursue the end of promoting the impersonal good—or so it seems to me.

    But, in Many vs. Many & One Billion, we can equally well pursue the end of respecting people by either saving the 99,999 strangers from death or the 99,999 other strangers from death and the one billion from the loss of a thumb. For, in this case too, the moral reason to give those with a thumb at stake a say (even a tiny say) is silenced. Thus, we have one group of 99,999 saying “save our group” and another group of 99,999 (with just as much say) saying “no, save our group.” So, in this case, it comes down to which is best in terms of pursuing the end of promoting the impersonal good, which we can best do by saving the larger group.

    (This may not be a fair question, because I’m challenging the idea that we can do justice to our intuitions on a purely individualist moral framework, whereas you’re most interested in seeing to what extent we can do justice to our intuitions on such a framework. It’s just that I think that examples like these show that what we need is a framework that weighs our pro tanto obligation to respect people against our pro tanto obligation to promote the impersonal [aggregate] good. Utilitarians go wrong in failing to acknowledge the former obligation, and those working within a purely individualist framework go wrong in failing to acknowledge the latter obligation.)

  3. Many thanks to Erik for such an illuminating first post! There is lots to discuss here, and I look forward to seeing what others think about Erik’s very rich paper. For now, I want to follow up on two relatively small points, both of which concern cases I raised in my précis.

    First, about Many vs. Many & One, Erik suggests that we can accommodate the intuition that it’s permissible to save either group if we claim that “the strength of an individual’s equal-consideration complaint is also sensitive to the *ratio* size difference between the competing groups”. He writes, “In One vs. Two, each of the Two may say: ‘Look, why are you willing to forgo the aggregate value of saving two lives for the One, but no more willing to forgo *half* of that value for me?’ In Many vs. Many & One, by contrast, ratio size difference is smaller, so the complaint by any one person in the larger group seems considerably weaker (if the ratio understanding is adopted).” This strikes me as a plausible way to go, but I wonder if it will go far enough to accommodate the intuition that it’s permissible to save either group. After all, this response concedes that in Many vs. Many & One each person in the larger group still has an equal-consideration claim to be saved. Why doesn’t that require us to save the larger group, as it does in Life vs. Lives? Perhaps the relevant equal-consideration claim has to be *sufficiently strong* to ground a requirement to save the greater number? This potentially relates to the challenge that Erik attributes to Gideon Rosen: “even if we accept that each member of the larger group has an equal-consideration claim that we ought to save them, why think this claim is so strong as to *conclusively* dictate which group we ought to save?”

    Second, about Bored vs. Joyful, Erik says that the “value of saving Bored’s life and the value of saving Joyful’s life are on a par” so that his EC1 doesn’t apply. Fair enough. He adds that parity is broken by sufficient worsenings or improvements, suggesting that saving a 15-year-old who would then go on to live 70 more years would generate more value than saving an 80-year-old who would then go on to live 5 more years. But if the number of years of life can in this way differ enough to break evaluative parity, why couldn’t the quality of life per year? Suppose that in Bored vs. Joyful, if saved, each person would live another 50 years, and that, while Bored’s 50 years would be well worth living, Joyful’s 50 years would be *so much better* that evaluative parity is broken. But even if more value is generated by saving Joyful, it still seems to me that we’re not required to save Joyful. By contrast, in Erik’s 15-year-old vs. 80-year-old case, I agree that we’re required to save the 15-year-old, but I don’t think this stems from the fact that doing so would generate more value (plausibly the 15-year-old has a stronger claim to being saved on egalitarian, prioritarian, and/or sufficientarian grounds, and perhaps increasing years of life can sometimes strengthen claims to be saved in ways that increasing quality of life can’t).

  4. Hi Dough, my way more senior and accomplished buddy from Singer’s seminar, thank you for taking part in the discussion!

    Your cases are excellent as always. Let me (a) report what I say about a related case in my paper, (b) raise a worry about your suggestion, and (c) discuss an alternative.

    In my paper, I discuss Kamm’s sore throat case, which invites us to suppose that we can either save A’s life or save B’s life *plus* cure C’s sore throat. Of this case, I side with Kamm’s intuition that we are not morally required to leave A to die just because saving B’s life produces the additional benefit of curing C’s sore throat. Indeed, I go further and say, in footnote 39 (where I hide all the good stuff), that we are not morally required to save B *no matter* how many more sore throats we can prevent. I am also inclined to say, then, that in your (Case 2) it is permissible to save either group, despite the vast number of minor benefits we can bring about if we save the larger group.

    Your alternative suggestion, which I take to be that we ought to take the option that better promotes impersonal goodness if the competing options do equally well at respecting the relevant claims of individuals, seems to give the wrong verdict in Kamm’s original sore throat case. This is because, on my understanding of your characterization, the option of saving A’s life and the option of saving B’s life do equally well at pursuing the end of respecting people. The question then comes down to the consequentialist question: Which option maximizes the impersonal good? And we seem to be led to the (wrong) conclusion that we ought to leave A to die and save B and C instead.

    Here is an alternative. When two options do equally well at respecting people’s claims, then we allow the promotion of impersonal good to break the tie, but only if the impersonal good at issue is *sufficiently large* (in some as yet underspecified sense). This would capture the desired intuition in Kamm’s original case as well as your desired intuition in (Case 2). However, I am generally allergic to positing competing modes of justification within a single domain of morality (in this case interpersonal morality), which in my view introduces schism and undermines theoretical unity (I make this claim as part of my response to Voorhoeve’s framework in footnote 24). But I don’t think your challenge is unfair to my project; if what I say turns out to be revisionary, at least with respect to widely shared intuitions, then I need to muster a better defense than I can right now!

  5. Terrific piece, Erik. This is a massive contribution to the debate on the numbers problem! And thanks to Theron for a helpful summary.

    A few queries.

    Here’s the first. You appeal to normative disabling to distinguish between Life vs. Paraplegias, on the one hand, and Life vs. Headaches on the other hand. You think this represents an improvement on Scanlon’s distinction between ‘relevant’ and ‘irrelevant’ harms, since it provides an ‘explication’ which is missing in Scanlon’s own account.

    It provides *apparatus*, but does it really give us an *explication*? You want to say:

    The disvalue of headaches is normatively disabled when we are comparing headaches to lives; it is improper, not simply inaccurate, to appeal to the disvalue of headaches when these are compared to the disvalue of death. That is not so with paraplegia.

    Scanlon wants to say:

    The disvalue of headaches is irrelevant to the comparison with lives. That is not so with paraplegia.

    Aren’t you just saying the same as Scanlon in a slightly different or more complicated or roundabout way?

    Second query. There’s an internal relationship between the concept of normative disabling and the category of impropriety which threatens to put a squeeze on its explanatory value. When is it improper, not just inaccurate, to compare value B with value A? You say: when value A disables the appeal to value B. Fine – so when does value A disable value B? You say: when it is improper, not just inaccurate, to compare value A with value B. Is there more to it than this?

    Third query. According to the value-forgoing approach to equal consideration, if we propose to save A rather than B and C, then the value we place on A’s life makes it equivalent to two lives; and we are treating B with less than equal consideration if we fail to save her at the cost of only one life (i.e. A’s life). Thus, we should save the greater number. Moreover, the larger the group is, the more blatant the unequal concern we show to each person in the larger group, compared to the one whom we save.

    Now the Taurekian account (on a more standard interpretation of it) will say that we needn’t, and we shouldn’t, measure the concern in this value-forgoing way: the relevant value is the chance of being saved, which is 50% for each individual, and which we are forced to adopt when each person has a claim to be saved, but not everyone can be saved. Taurekians are entitled to specify the way in which each individual is being treated the same as everyone else. Why is the value-forgoing account preferable to the Taurekian account?

    Now you do point out that Taurek embraces axiological anti-aggregationism, while you don’t (pp. 494-5). But I don’t see why this axiological commitment prevents the Taurekian from pressing this question. Even if one wasn’t hostile to axiological aggregationism, one might still think that moral individualism provides a rationale for the coin-flipping proposal—and moreover shields it from having to incur any commitment to the value-forgoing approach, even if the result of adopting the coin-flipping proposal is that we might find ourselves rescuing one rather than two (or more).

  6. Sorry about the many typos in my response, Doug, not least of which is the misspelling of your name! I stayed up late last night in anticipation of the discussion going live and working on/revising my original response to Theron, so my mind is a little hazy right now!

  7. I’m very pleased to see this paper getting the attention it deserves, and I’m pleased to have the opportunity to make a few comments and pose a few questions here. My first comment is that my thought while reading my way through the paper was: you’re just taking a back door to aggregation; drop the individualism and just embrace aggregation. But then I got to p. 501 where I saw the light: you (Erik) are motivated to be able to answer Anscombe’s challenge: who is wronged if we save the smaller group (or, more accurately, the group for whom we forgo more if we save the other group)? I really appreciate your answer, which I want to repeat here: “if the reason for saving the greater number lies directly in considerations of an aggregative kind, then no one in the larger group is in a position to say, on her own behalf, that she in particular has been treated in an unjustifiable manner.” I really like that. Thanks.

    I should also say, I think your solution to the Tomlin/Horton problem in section 5 is GREAT.

    But now for a few questions: (1) It seems that you think that consequentialists (straightforward agent-neutral aggregationists) cannot embrace limited aggregation (see top p. 493). But I don’t see why individualism is essential to limited aggregation. It may be that consequentialists tend to reject limited aggregation. But it seems to me that it’s open to them to say that reasons (values… whatever, I don’t have a horse in the buck-passing issue) weigh up in a contextual way, one that they can frame just as you frame your solution in section 5. Am I wrong? If so, what am I missing?

    (2) Re silencing, does it have to go from full value to no value without passing through a phase of falling off in a way that is non-linearly quick? You could take me to be referring to my idea of weak aggregation here, and you wouldn’t be wrong. But I’m also referring to your discussion on pp. 488-491. There you discuss “defeaters.” And I wonder if you need the defeated reasons to go to zero. To be clear, this is not the same as my ideas about weak aggregation; they’re just cousin points. Indeed, if anything, the point I’m making here is more connected to my previous thought, that defeater status does not depend on individualization; in addition, I’m suggesting, it need not be complete.

    Let’s consider your example of the Rembrandt. You say that its “potential to stabilize a wobbly table provides no reason, much less sufficient reason, for its destruction.” What I want to know is: do you need “no reason”? Why not completely inadequate reason In This Context? Here’s a hypo. Shims are cheap, but let’s imagine that the paint Rembrandt used had a very rare chemical in it (it takes a few hundred years for it to form), which is especially good at curing migraine headaches. Many people would happily pay $100 to get a pill that would cure their headache as well as a small piece of the painting would, and, as a matter of fact, there are no such pills, but there’s the painting! Suppose there are a million people who you could satisfy in this way if you cut it up. No reason? Certainly no reason that registers if there is only one such person, but suppose you’ve got a million. It’s a different kind of value, but is it out of bounds? I can’t see why.

    Finally, you stay in the relatively safe waters of save-save cases. But I’d like to press you to think about how you move into cases in which you might harm some to save others, or you have to deal with violations of the means principle. Can you extend your ideas into those contexts? It seems TO ME that the thing to do is to shift into a rights framework and talk not about reasons but about the strength of competing claims, where interest is just one factor, but the nature of the claim (e.g., positive claim for aid vs negative claim not to be harmed), as well as forfeiture, waiver and the rest, are all relevant. Do you have any thoughts on that?

  8. Thanks for the astute follow-up questions, Theron! I will respond to your first point right now and think more about the second point.

    On your first point, I am tempted once again to reach for the notion of parity. Here, the thought is that the value of saving the lives of 999 people and the value of saving the lives of 1,000 people are on a par, whereas the value of saving 2 people and the value of saving 1 person are not. If this claim can be maintained, then EC1 is triggered in the latter case but not the former.

    Here is an intuitive illustration of how I think parity would work in a non-rescue case. Suppose I am offered two competing, one-off gigs: the first pays $200, whereas the second pays $700. Here, the $500 pay difference is quite significant, and (if things just come down to pay) the second gig is better. Suppose next that I am choosing between two long-term careers (say, one in philosophy and the other in stand-up comedy), and the life-long pay difference is again $500. Here, I want to say, (if things just come down to pay) the two careers are on a par. My inclination is to say that 2 vs. 1 is more like the former case, and 999 vs. 1000 is more like the latter.

  9. What a wonderful discussion! Congratulations to Erik on the publication of this truly trailblazing paper, and thanks to Theron, too, for such a clear and efficient introduction to the discussion.

    Erik, we’ve spent many, many hours over the years discussing these issues, and at this point I probably can’t think of a question that you haven’t already heard from me before. So let me instead attempt to draw you out on some related issues that haven’t yet, I think, been fully dealt with in your paper. Let me start with a first question. I’ll post a few more in subsequent comments.

    Suppose that we have a national health system (one can always dream…) and you’re the head bureaucrat who has to decide how many resources to allocate for the treatment of various health conditions in our population, from the most severe to the most trivial. And, because our health budget is finite, trade-offs will have to be made: spending more money on life-threatening conditions will mean less money for aspirins, and vice versa.

    On your view, is it ever justifiable to take away resources that could go towards saving the *lives* of some patients and allocate them instead to treating much less severe conditions, such as migraines or shortsightedness?

    If yes, could you explain how this is compatible with the partial aggregation framework of your paper? If not, could you tell us why this isn’t an implausibly revisionist commitment of your view? (I think I have a fairly good sense of how you’ll answer this, but I think it would be helpful to get you on the record.)

  10. Here’s my next set of questions: Since none of us will live forever, “saving a life” really means “conferring on someone some additional number of years of life”. On your view, does it matter at all to the strength of our reasons to save someone’s life, and how they compare in strength to our reasons to spare people from other burdens, how many extra years (or, if you want to take things a step further, how many *quality-adjusted* life years) they are expected to live? (Your initial response to Theron suggests that it does, but I’d love to hear more.)

    What if I could either save the life of A, who would live an extra 40 years, or save the lives of 7 other people, each of whom would live an extra five years? Should we think of this as a forced choice between saving ‘one life’ vs saving ‘seven lives’? Or a choice between saving ‘40 life years’ and ‘35 life years’? Or some other way?

    And I wonder whether there are interesting differences between “inter-” and “intra-personal” aggregation here. Does it matter whether extra life years are contained within the life of a single person or distributed across multiple people?

    To see the distinction I have in mind, contrast the following two cases:

    **Case Y**:
    I can either
    – Save A’s life (giving him an extra 10 years of life)
    or
    – Save B’s life (giving her an extra 10 years of life) *and* C’s life (giving him an extra 2 years of life).

    **Case Z**:
    I can either
    – Save A’s life (giving him an extra 10 years of life)
    or
    – Save D’s life (giving her an extra 12 years of life).

    In either case, if we forgo the saving of A’s life for the alternative, we will save 12 years of life rather than 10. But in Case Y, in so doing, we save two lives rather than one. In Case Z, we save the life of only one other person, who will live for slightly longer than A. Does this matter morally?

  11. I should have said that the pro tanto duty to respect people’s autonomy favors our flipping a coin (or, otherwise, giving each an equal chance of being saved from death) in Many vs. Many & One Billion. By contrast, the pro tanto duty to promote the impersonal good favors our saving the larger group (the one with the additional billion people who will otherwise lose their thumbs). But it seems to me that, given that we’re talking about a billion people losing a thumb, the pro tanto moral obligation to promote the impersonal good (by the great extent that saving each of a billion from the loss of a thumb would promote it) outweighs the pro tanto moral obligation to respect people’s autonomy by giving everyone facing the possible loss of life an equal chance of being saved from death. That is, given how much impersonal good is at stake in this case, the pro tanto duty to promote the impersonal good triumphs over the pro tanto duty to respect people’s autonomy.

    But, in Kamm’s case, it seems that the pro tanto obligation to respect both A’s and B’s autonomy by giving each of them an equal chance of being saved from death triumphs over the pro tanto obligation to promote the impersonal good, given how little curing one person of a sore throat would do. Now, I’m not sure what to say if we’re talking about saving a billion as opposed to just one from a sore throat, because a sore throat is rather trivial. But the loss of a thumb is far from trivial. And so I would say that if enough thumbs were at stake we should certainly save B’s life as well as all these thumbs rather than A’s life. Sure, respect for people’s autonomy is important. So, other things being equal, we should give A as much of a say as B (and give C no say) in which group we save. But it’s not as important as the prevention of billions, trillions, or billions of trillions of lost thumbs.

    So, my suggestion is that we ought to take the option that better promotes the impersonal good if the competing options do equally well at respecting the relevant claims of individuals, but I don’t think that we do equally well at respecting the relevant claims of individuals if we choose to save the larger group in Many vs. Many & One Billion on the grounds that this would do more to promote the impersonal good. (Admittedly, though, I don’t think that I worded things quite right in my previous comment.) I think that we respect the relevant claims of individuals only if we give each person with their life at stake an equal chance of being saved (or an equal say in whether they are saved). Also, isn’t saying that we should save the larger group, at the expense of not giving everyone an equal chance of being saved (or an equal say in whether they are saved), whenever the amount of impersonal good at stake is sufficiently large just to posit two competing modes of justification within a single domain of morality?

  12. My third question follows on from my second.

    Your paper, like much of the literature on aggregation and the numbers problem, focuses on the forward-looking question of how much each individual would benefit, if we helped them, and how this affects the comparative strength of their claims to be aided (alone or as part of a larger group).

    But should the question whom to save be settled on the basis of such considerations alone?

    Here’s a case to pump the intuition that the answer may be no.

    You have one dose of a scarce drug that could be used to save either A’s life or B’s. Assume that if you give the drug to A, he’s expected to live another 15 years, whereas if you give the drug to B, she is expected to live only another 7. (I’ve chosen these numbers so that the forward-looking benefits to A and B are plausibly not “on a par”; the benefit to A seems more significant. If these numbers don’t do it for you, then feel free to make the gap wider). Whom should you help?

    Before you give your verdict, here’s a further fact: A is already 80 years old; if he gets the drug, he is expected to live to 95. B, by contrast, is 17. Even if she gets the drug, she is not expected to live past 24. Call this case Young or Old.

    Looked at in one way, it might seem that A has the stronger claim to receive the drug than B. He is expected to live another 15 years, B only 7. But many have the intuition that this isn’t all that matters here. At 80 years old, A has already enjoyed a lot of the ‘goods of life’. He has enjoyed, to borrow a term from bioethics, his “fair innings”. B, by contrast, is only barely setting out in life. Even if she receives the drug, and gets to live another 7 years, she will have enjoyed far fewer of the goods of life than A. Shouldn’t that strengthen her claim?

    There are different ways in which one might attempt to capture these intuitions. One might hold that, in addition to the *forward-looking* question of how much someone will benefit if we help them, we should also consider *backward-looking* or *holistic* questions about how much of these or other relevant goods they have already enjoyed in the past or how well their life will have gone as a whole if they are helped or not helped. Alternatively, while preserving a purely forward-looking orientation, one might attempt to capture these intuitions by building a prioritarian element into one’s theory of claims: a person’s claim to be helped is stronger, all else equal, the worse off they are.

    I wonder whether you agree that in deciding whom to help, how much we could benefit each person is often not the only morally relevant consideration, and if yes, whether you have a preferred way of capturing our intuitions about cases like Young or Old.

  13. Thanks for your excellent and challenging questions, Gerald!

    I will take your first two queries together, since they both challenge me to say more about the explanatory relations between notions such as relevance, value, impropriety, and disabling. Here is what I have in mind. As someone who is sympathetic to the reasons-first approach, I think any explanation in the normative domain terminates in facts about reasons (or the is-a-reason-for relation). Disabling occurs, in my view, when a consideration that is typically reason-providing loses its reason-giving force. At this level, things are explained by primitive facts about reasons (primitive in the normative sense, not the metaethical sense, since nothing I say in this paper commits me to non-naturalism about reasons). Hence, I want to say, while I do have a reason to alleviate another’s minor discomfort under normal circumstances, that reason relation fails to obtain in circumstances in which the alleviation comes at the expense of another’s life.

    Now, with these remarks about reasons at hand, we can explain other normative phenomena. For example, on this proposal, the value of human life is partly grounded in the fact that, in cases of conflict, a person’s interest in avoiding a minor headache is not a genuine reason that counts in favor of the destruction of a life. It would in this sense be inappropriate for the alleviation of a headache to weigh against the preservation of a person’s life. And when it comes to respect, plausibly, respecting a value requires, at the very minimum, that we not sacrifice it for no good reason at all.

    Now, insofar as the above remarks are what people like Scanlon already have in mind when they say an interest is irrelevant, I am happy to admit that saying an interest is disabled is just another way of saying an interest is irrelevant (although, even here, I think disabling has greater reach, since it is capacious enough to include undercutting defeaters in the epistemic domain). My contribution, then, as you note, is to provide an apparatus for understanding the notion of relevance. However, insofar as your challenge is for me to provide a deeper explanation than the one I have just offered, I don’t think this challenge can be answered (at least not by me). This is because, on my view, normative explanations simply bottom out at the level of primitive facts about reasons.

    I should note, however, that many people find Voorhoeve’s account of relevance more illuminating. The account links relevance to permissible partiality to oneself. I resist this link. I engage with this part of Voorhoeve’s paper in footnote 21 (his theoretical pluralism is addressed in footnote 24, and his main account of partial aggregation is directly addressed in section 5 of the paper).

    Your third query, which is again really excellent and challenging, has to do with the question: Why prefer my understanding of equal consideration over a Taurekian one? I want to say a few things.

    First, Taurek’s understanding leads to a criterion of adjudication that is highly revisionary. It is deeply implausible to maintain that, in a choice between saving 1 life and 1 million others, we ought to flip a coin. As a methodological matter, then, the quest for reflective equilibrium should counsel us to wait for a more plausible solution to the numbers problem before taking the drastic measure of endorsing Taurek’s.

    Second, Taurek’s proposal has limited applicability. Take a case where we can either save 10 lives or prevent 2 paraplegias. Do we still do a Taurekian lottery in this case? If the answer is Yes, then I submit the worry I expressed above about the proposal being revisionary and counterintuitive is strengthened. If the answer is No, then a Taurekian needs some other proposal for dealing with different-harm cases. My proposal, by contrast, offers a more unified solution.

    Third, Taurek’s proposal strikes me as not adequately motivated. Take a case where we can save X’s life or prevent Y’s arm-loss. I think everyone will agree that a lottery is out of place. Once we arrive at the correct conclusion that there is more moral reason to save X, we should simply do so. In the same way, in Life vs. Lives, once we arrive at the conclusion that there is comparatively stronger reason to save the lives of B and C over A, as the ideal of equal consideration requires, we should simply go ahead and rescue the larger group. Quite generally, the comparative strength of our practical reasons is meant to settle the issue of which option is to be taken.

  14. Hi Alec! Thanks for the great questions!

    On your first question, I think it very much depends on some foundational issues for consequentialism. For example, in my analysis of the cases, I always begin by examining the harm that a *single* individual would suffer if unaided (and arrive at a conclusion about relevance that way). This way of analyzing the cases fits well within a moral theory that has an individualist mode of justification, since we are concerned with what would be justifiable to *each* of the individuals affected. Insofar as consequentialism has a different moral ideal, such as that of making the world go best, its proponents need to explain why it is appropriate to approach the cases from an individualist perspective first, rather than simply focusing on the amount of aggregate value that would be brought about if either group is saved. To be sure, there is definitely room in conceptual space for a kind of individualist-analysis-first, aggregation-second theory. My only concern is that such a theory would appear quite ad hoc, lacking a rationale deeper than simply wanting to get the cases right.

    Perhaps your suggestion is instead that consequentialism is always about making the world go best; it is just that, in Life vs. Headaches, for example, it is not the case that preventing the many headaches produces more value overall. I believe Dale Dorsey explores this suggestion, and he argues that the better-than relation is discontinuous. One task for proponents of this kind of view, then, is to provide a satisfactory solution, at the level of axiology, to the spectrum argument, which seeks to establish that there must be some number of headaches the alleviation of which generates more value than the saving of a life.

    On your point about whether it is necessary for my account to invoke an “on-off” notion like relevance, I think you are right that the answer seems to be No. Life vs. Headaches may well be a limiting case, where the reason to satisfy a headache-related interest reduces to zero. But there may well be “a phase of falling off [in terms of the strength of individual reasons] in a way that is non-linearly quick.” This admission would still allow me to capture the correct verdicts in the three leading cases of Life vs. Lives, Life vs. Paraplegias, and Life vs. Headaches. But I will have to think more about whether it threatens some other claims I make in the paper.

    On your invitation for me to step into the realm of rights, I have no objection in principle to the use of rights-talk. However, as a contractualist sympathizer, I do think rights themselves need to be explained in terms of the reasonable rejectability of principles that regulate our interpersonal conduct; rights are not primitive. Insofar as rights can be given a plausible reduction, I have no problem at all with appealing to the distinction between positive and negative rights in dealing with aiding vs. harming cases, for example.

  15. Wow, look at all these fantastic comments—thanks, everyone!

    Erik, about cases like Many vs. Many & One, you wrote, “the value of saving the lives of 999 people and the value of saving 1000 people are on a par, whereas the value of saving 2 people and the value of saving 1 person are not. If this claim can be maintained, then EC1 is triggered in the latter case but not the former.”

    The idea here seems to be that (1) the value of saving A is on a par with the value of saving B, but an additional life is enough to break this parity: the value of saving B and C is greater than the value of saving A. And (2) when we scale things up to the case of saving 999 A’s or saving 999 B’s, much more is needed to break the parity, so that just adding C to the side with 999 B’s won’t make it the case that saving this now larger group generates more value. This may be correct! (I don’t know if it’s correct, but it isn’t unreasonable.)

    Nonetheless, I think part of what makes me nervous about EC1 is that it seems to me that, *even if* the value of saving the 1000 *were* greater than the value of saving the 999 others, it would *still* seem to me that we’re not required to save the greater number.

    Thanks to everyone, especially Erik, for all the terrific comments.

  16. Thank you for the fantastic questions, Johann! I can’t help but feel a little emotional: it seems like only yesterday that I was a prospective graduate student nervously sitting in your office and talking about the ethics of self-defense! The only difference is that I am now nervously sitting in front of my own desk answering your questions about the ethics of interpersonal aggregation!

    On your first question, you are right that a position like my own has upshots for policy decisions regarding the allocation of scarce resources for health. Let me note an example. In 1990, the Oregon Health Services Commission proposed to prioritize funding for health interventions based on the cost per quality-adjusted life-year (QALY) saved. But it was quickly pointed out to the public that, on the commission’s proposal, tooth capping received higher priority than treatment for deadly appendicitis. In the wake of subsequent public outrage, the commission withdrew its proposal. My account of interpersonal trade-offs supports this withdrawal. On my account, it would be unjustifiable to each of those facing deadly illness for us to prioritize funding for dental restorations, even if the benefits produced in the aggregate lead to a higher score on the QALY metric. I think this is a plausible result to reach.
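
    To see how a ranking like this can arise, take some purely hypothetical figures (mine, not the commission’s actual numbers), with priority set by cost per QALY:

    \[
    \frac{\$20}{0.2\ \text{QALYs}} = \$100\ \text{per QALY (tooth capping)}, \qquad \frac{\$20{,}000}{40\ \text{QALYs}} = \$500\ \text{per QALY (appendicitis treatment)}.
    \]

    On the metric, the many cheap cappings outrank the life-saving treatment, even though no individual dental patient has remotely as much at stake as a patient with appendicitis.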

    On your next set of questions, I do wish to maintain that the number of distinct individuals involved, and not just the number of years they have to live, matters morally. One way to maintain this suggestion, as I have tried to do in response to Theron’s questions, is to reject any straightforward inference from the value *contained* in a person’s life to the value *of* saving that person’s life. Within limits, I wish to say that the value of saving any one of two lives is on a par (while allowing, of course, that the value of saving *two* lives is greater than that of saving one life).

    One way to provide further support for this point is to draw on something John Harris says in “The Value of Life.” Here, I will just quote Harris: “All of us who wish to go on living have something that each of us values equally although for each it is different in character, for some a much richer prize than for others, and we none of us know its true extent. This thing is of course ‘the rest of our lives’. So long as we do not know the date of our deaths then for each of us the ‘rest of our lives’ is of indefinite duration. Whether we are 17 or 70, in perfect health or suffering from a terminal disease, we each have the rest of our lives to lead. So long as we each fervently wish to live out the rest of our lives, however long that turns out to be, then if we do not deserve to die, we each suffer the same injustice if our wishes are deliberately frustrated and we are cut off prematurely.”

    This quote, of course, is to be tempered by the “fair innings” argument, which comes later in Harris’s paper. In the *standard* Young or Old case (where the younger person has way more years to live if saved), I do have the firm intuition that we ought to save the younger person, all else being equal (thereby making me an “ageist” in Harris’s sense). I therefore accept your excellent suggestion that backward-looking considerations do matter, at least in some cases. This gives me an opportunity to say something I should have made more explicit in the paper: EC1 and EC2 should be read as each containing a “ceteris paribus” clause.

    Your version of Young or Old is of course more difficult. Reflecting on the case myself, the relevant considerations are indeed the ones you enumerate, and they pull in different directions. I do like the backward-looking way of capturing the “fair innings” strand of my intuition, focusing on “how much of these or other relevant goods they have already enjoyed in the past or how well their life will have gone as a whole if they are helped or not helped.”

  17. Quick replies, Erik:

    First, you write: “Perhaps your suggestion is instead that … it is not the case that preventing the many headaches produces more value overall. I believe Dale Dorsey explores this suggestion, and he argues that the better-than relation is discontinuous. One task for proponents of this kind of view, then, is to provide a satisfactory solution, at the level of axiology, to the spectrum argument, which seeks to establish that there must be some number of headaches the alleviation of which generates more value than the saving of a life.”

    My response: I don’t know Dale’s argument, but yes, something like that. And I don’t see that the spectrum argument is especially hard for them. But I say that because I think they can do a comparative assessment between options and frame certain goods as not relevant in that context… just as you can. Perhaps I’m wrong, but that option seems open to me.

    Second, on the strength of reasons dropping off, rather than going with an on-off switch… I don’t see why it should be a problem for you.

    Third, on my invitation to use a rights framework: I think one can always convert an argument into a reasons argument. For a rights theorist, you just say: you have a reason, one that is almost always decisive, to respect the rights of others (and not violate them). But the interesting question is: how can rights theory *inform* your sense of the strongest reasons? Scanlon makes some tepid points about this when he says that responsibility for past actions and fairness matter, not just welfare. I think you should take that on. And in my recent Separate Peaks paper I argue that you (and Tim) should go further and take on a more systematic rights framework. In a phrase: you can’t tell who has the strongest objection to a proposed course of action until you frame a concern with welfare *inside* a rights framework that tells you how to weigh their welfare. Welfare alone is not the basis for finding the strongest objection… as you well know.

  18. Thank you, Doug, for the further clarifications about your original suggestion! They are really helpful!

    Just to make sure I am understanding you correctly: On your picture, when confronted with a trade-off situation, we first identify two normative factors. First, we identify a pro tanto duty to respect people’s autonomy by giving each an equal chance of being saved (or an equal say in whether they are saved). Second, we identify a pro tanto duty to maximize the impersonal good. Once these two pro tanto duties are identified, we then assess whether one is sufficiently weightier than the other, in cases in which they point to conflicting options.

    Thus, in Kamm’s case (Life vs. Life & Sore Throat), though saving the larger group does generate more impersonal good overall, the difference is only very slight, so the pro tanto duty to respect people’s autonomy takes precedence. Hence, we should give each group an equal chance of being saved. On the other hand, in (Life vs. Life & Very Many Thumbs), because taking the latter option generates so much more value in the aggregate, the pro tanto duty to maximize the impersonal good takes precedence over the competing pro tanto duty to give each individual an equal chance of being saved.

    One worry I have about this proposal is this: Take the simple (1 Life vs. 2 Lives) case. Here, on your picture, there is a pro tanto duty to promote impersonal good, which counsels in favor of saving 2. On the other hand, there is a pro tanto duty to respect people’s autonomy. It is my understanding that this latter pro tanto duty is an individualist one, so this duty says give each person an equal chance of being saved (am I wrong about this? Is some form of aggregation already built into the pro tanto duty of respecting individual autonomy?)

    If my characterization is correct, then I think we run into trouble with respect to answering Anscombe’s question. Suppose that in (1 Life vs. 2 Lives), instead of saving the 2 over the 1, I toss a coin and save the 1. I think each person in the larger group is wronged; each may complain, on her own behalf, that my conduct has treated her in an unjustifiable manner. But what complaint do they have on your picture? It cannot be one that is based on the understanding of individual autonomy (since aggregation is not built into this notion). Nor may they appeal to the impersonal worseness of leaving them to die, since this is not a complaint that each can make *on their own behalf*.

    Perhaps I’ve once again misunderstood your suggestion. Perhaps some measure of aggregation is already built into your understanding of respecting the autonomy of individuals, in which case I’d love to hear how the pro tanto duty to respect *each* person gets us to a pro tanto duty to save the larger number in 1 Life vs. 2 Lives.

  19. Thanks, Theron, for your follow-up! I’m glad to hear that you don’t think the idea is completely unreasonable! I do see the force of your lingering concern, though; so perhaps I should come up with another way to supplement EC1. I do just want to emphasize that I owe the ratio understanding of parity to Michael Rabenberg, who shared his working paper with me while he was a postdoc at Princeton. I think the paper is now published under the title “Imprecision in the Ethics of Rescue” (I can’t be too sure because I don’t currently have any institutional subscription that allows me to access journal articles).

  20. Hi all,

    Thanks for the excellent paper, Erik, and for Theron’s great comment and the discussion so far. I’m on holiday and so haven’t been able to read through everything; I look forward to doing that more carefully when I return.

    I wanted to pick up on something Theron raised at the start: whether the explanation of the powerful intuition in Life v Headaches (that we should save the life) is at root axiological or deontological. In my own paper on this I’d offered a fundamentally deontological solution that is akin to Erik’s, but I’m now less sure.

    Consider:

    Watch variation: A natural event is about to occur. It will either save X’s life or prevent a billion headaches, but not both. I’m not sure which it will be.

    The intuition that I should hope that X’s life is saved is just as powerful, I think, as the intuition that I should save the life in Life v Headaches. I have every reason to value the saving of the life over the prevention of the headaches, and so on.

    What explains this similarity? Here are some possibilities:

    1) it is better that the life is saved, which explains all the cases, because the deontic is grounded in the axiological.
    2) it is better that the headaches are prevented, but just as reasons to act can be disabled, so can reasons to hope and reasons to value, and none of these disablings ground each other.
    3) as in 2), but reasons to hope or value ground, or are grounded in, reasons to act.

    Some will reject 2), and perhaps also 3), as incoherent, because they think betterness just is ‘reason to value more’. I have my doubts – we could think: if the headaches are prevented, a greater total magnitude of things of value occurs, so in that sense the world is better, but we ought to value this outcome less. So I think 2) is a live possibility, and I guess I’m tempted by it.

    Anyway, I wondered what Erik thought about these questions, as it seemed that the paper was fundamentally deontological and so doesn’t give an account of the similar intuitions that we have about valuing and hoping.

  21. Erik, thanks so much for the great paper! It’s been a great discussion so far. I especially like Victor’s question above (and, like him, I’m tempted by 2, at least in my limited aggregation moods).

    Here are a couple of Qs that I’d be really interested to get your thoughts on, when you have a chance:

    1. Suppose we must either save A or save B, and we save A. Suppose that B then asks, in a ghostly voice, “Why were you willing to save A at the cost of one life, but not willing to save me at the cost of one life?” Presumably, we can reasonably reply, “We were just as willing to save you, B, as to save A. But we couldn’t save you both. We had to choose.”

    Suppose next that we must either save A or save B and C, and we save A. Suppose that B then asks, in a ghostly voice, “Why were you willing to save A at the cost of two lives but no more willing to save me at the cost of one life?” I’m wondering why we can’t reasonably say to B something like what we said before. “We were just as willing to save you, B, as to save A. Indeed, just as we were willing to save A at the cost of two lives (and perhaps even more), we were willing to save you at the cost of two lives (and perhaps even more). But we couldn’t save you both. We had to choose.”

    I guess this comes down to me not yet being sure that choosing to save A from [A v B+C] necessarily means valuing A more than B (or C).

    2. You say (501) that aggregative explanations of an obligation to save the greater number can’t answer Anscombe’s challenge—they can’t explain why saving the lesser number wrongs everyone in the larger group. I’m not so sure about this. Consider, as an example, Daniel McDermott’s view, on which an individual has a decisive objection to an act just in case there is an alternative act that is better for her and that produces a lesser sum of relative harms (or what others call complaints). This view implies that, in [A v B+C], A has no decisive objection to you saving B+C, because saving B+C generates a greater sum of relative harms (or complaints), whereas B+C each have a decisive objection to you saving A, because that would be better for them and would produce a lesser sum of relative harms (or complaints). If we say that an individual is wronged just when they have a decisive objection, then we get the result that saving A would wrong both B and C. And we also get the right order of explanation, in that saving A is wrong because it wrongs B and C.
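
    Schematically (my notation, not McDermott’s): person $i$ has a decisive objection to act $a$ just in case

    \[
    \exists\, a' : \quad a' \text{ is better for } i \quad \text{and} \quad \sum_j c_j(a') < \sum_j c_j(a),
    \]

    where $c_j(a)$ is $j$’s relative harm (complaint) under $a$. In [A v B+C], the complaint sum of saving A is two lost lives while that of saving B+C is one, so B and C each satisfy the condition against saving A, whereas A does not satisfy it against saving B+C.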

    I think there are problems with McDermott’s view, but it seems to illustrate a more general strategy by which aggregative explanations could answer Anscombe’s challenge. I think ‘decisive objections’ could also have degrees of strength, such that a version of this view could “capture the idea that sometimes individuals have increasingly stronger claims to being saved as the disparity in numbers increases between competing groups”.

    What do you make of this kind of approach? If it had a limit aggregation component built in, would there still be reasons to prefer your approach?

  22. Thanks, Erik, for a terrific paper that I got to read at different stages, and thanks everyone, especially Theron, for a fantastic discussion. I want to pick up on something that was part of the exchange between Erik and Theron above, concerning the “Many vs. Many & One” case. A lot of what you say here sounds like we should attribute diminishing marginal value to saving a single life. I.e., in “Life vs. Lives,” saving the two lives rather than just one is more valuable and not on a par. But in “Many vs. Many & One” [replacing the case of paraplegia with an additional death], saving 1001 lives is on a par with saving 1000 lives. This seems hard to square with the plausible assumption that each life saved is equally valuable. Erik, in your reply to Theron you refer to a non-rescue case concerning salary as an illustration. This example works precisely because money has diminishing marginal value, but we usually don’t and probably shouldn’t think that saved lives do.

    Moreover, such views seem especially troublesome in dynamic contexts. For example, we might have to say that the value of saving a life in a sequence of life-savings depends on where in the sequence this life can be saved. I.e., saving one additional life when you have saved only one life so far is more valuable than saving one additional life when you have already saved a thousand. All of this seems to go against the egalitarian spirit of the rest of the discussion. What do you think?

    Thanks again for a great paper and discussion!

  23. Thanks everyone – what a fun discussion! Erik, I wonder if you can capture the desired intuitions in the cases Doug raised within a thoroughly individualist framework. Here are three key cases:

    (Case 1) Life vs. Thumbs: We can save either A from death or a billion other strangers from the loss of a thumb.
    (Case 2) Life vs. Life & One Thumb: We can save either A from death or B from death and one other stranger from the loss of a thumb.
    (Case 3) Life vs. Life & One Billion Thumbs: We can save either A from death or B from death and one billion other strangers from the loss of a thumb.

    I’ll say the ‘desired intuitions’ are as follows (they are not universally shared!): We are required to save A in Case 1. We are not required to save the greater number in Case 2. We are required to save the greater number in Case 3.

    It seems to me that you can capture all three intuitions within a thoroughly individualist framework, if you (i) take a more nuanced approach to normative disabling and (ii) go for evaluative parity (as you already suggested in response to my Many vs. Many & One case).

    Regarding the more nuanced approach to normative disabling, here’s what I mean. In Case 1, the losses of thumbs cannot ground any personal complaints at all. If we save A, no one of the billion has a legitimate personal complaint. So far, this matches what you say in the article. However, in Cases 2 and 3, the losses of thumbs *can potentially* ground personal complaints, *if* they trigger your EC1 (so in this way they are not fully normatively disabled). And arguably, they do trigger EC1 in Case 3, since A and B have equal interests in being saved, and saving A forgoes more aggregate value than saving B (I assume that a billion thumbs break the evaluative parity between saving A’s life and saving B’s life). So, in Case 3, B has a legitimate equal-consideration claim that you save her, which provides an individualist case for the verdict that we’re required to save B (along with the billion thumbs). You can maintain that none of the people who stand to lose thumbs could legitimately complain on their own behalf if you save A, but I wonder if *B* would have a legitimate equal-consideration complaint based on the discrepancy in the value you forgo in saving A rather than B.

    However, EC1 is not triggered in Case 2, because the value of saving A’s life is on a par with the value of saving B’s life and one other stranger’s thumb. So, you can capture the verdict that we’re not required to save the greater number in this case.

  24. Hi Erik, you understand my proposal perfectly. Thanks for putting it so clearly and succinctly. But there is one other aspect of my proposal that I have been a bit unclear about. Really, what matters, on my proposal, vis-à-vis respecting people’s autonomy is that we give each person a say in a decision that affects them that’s proportional to their stake in that decision, not that we give each person an equal chance of being saved. I was using the equal-chance talk merely as a heuristic, where we assume everyone votes in accordance with their self-interest. That is, I like Tim Henning’s voting proposal in his “Numbers without Aggregation.”

    So, whenever I don’t give someone a vote (a vote that’s proportional to their stake), that pro tanto wrongs them. If, in 1 Life vs. 2 Lives, I toss a coin and save the 1, then I wrong each of the 2 in that I didn’t give them a vote regarding which group I save. (I’m assuming that they would each vote for me to save their own group [the larger one]. If they were instead each to vote for me to save the 1, and I save the 1 instead of the 2 out of respect for their say in the matter, then I don’t wrong them by saving the 1.)

    So, I agree with you that, where I just flip a coin and save the 1, each person in the larger group is wronged and each may complain, on their own behalf, that my conduct has treated them in an unjustifiable manner. But my proposal accounts for this. Their complaint is that they weren’t given a say in a decision that affects them that’s proportional to what they have at stake in the decision. So, one of the merits of my proposal (I believe) is that it enables us to give a plausible answer to Anscombe’s question.

  25. Theron and Erik: I think that Theron’s suggestion works. But I wonder if it is better than my own suggestion in which we don’t take a purely individualist approach but instead weigh two competing normative factors: maximizing the impersonal good (a non-individualist factor) and respecting people’s autonomy (an individualist factor). As I’ve been keen to argue elsewhere, moral theories are very ecumenical in their ability to account for various case verdicts. You just have to tweak them sufficiently. The question, then, is which gives the best explanation for these various case verdicts. And I think that my non-purely-individualist theory is better in that it better accounts for our phenomenology. For when I consider such cases, it feels like I’m trying to balance competing normative factors (respecting people’s autonomy and maximizing the impersonal good). It doesn’t feel like I’m just trying to look at each individual’s complaint, where some of these complaints get silenced.

  26. Thanks to Victor, Joe, Korbinian, Theron, and Doug for the new comments! I will respond to them in the coming hours after grabbing something to eat! 🙂

  27. Thank you for the thought-provoking question, Victor! In response, I am inclined to adopt something like (4): reasons to hope and to value do not themselves exclusively respond to axiological considerations.

    To make this claim plausible, I wish to draw on something Sam Scheffler says about what it is to value X. Scheffler provides the following analysis:

    “Somewhat more precisely, it seems that valuing any X involves at least the following elements:
    1. A belief that X is good or valuable or worthy,
    2. A susceptibility to experience a range of context-dependent emotions regarding X,
    3. A disposition to experience these emotions as being merited or appropriate,
    4. A disposition to treat certain kinds of X-related considerations as reasons for action in relevant deliberative contexts.”

    Here, I wish to focus on 4, which establishes a link between reasons to value and reasons for action. My (very preliminary) thought is that “what I have reason to do if I were in a position to bring about such-and-such state of affairs” at least in some cases colours our judgement about “what reason do I have to value the occurrence of such-and-such state of affairs.”

    So, we could say, echoing your statement: Yes, the prevention of a great many headaches indeed makes the world better, but I value it less. I value it less because, if I were in a position to prevent either a death or that many headaches, I would have decisive reason to save the life.

    To give an example: Suppose a doctor is thinking about killing an innocent patient so she can transplant the patient’s organs into two other dying patients. If I am asked: What do you hope the doctor will do? I will respond: I hope the doctor refrains. And I will give this response even if I truly believe that it is a better state of affairs if the two other patients are saved. (But this case now involves constraints, so perhaps it is a bit too disanalogous to Life vs. Headaches.)

  28. Thank you for the very perceptive questions, Joe, and thank you once again for providing me invaluable feedback on an earlier version of my paper!

    Your first question gives me an opportunity to further clarify an issue about how to understand my argument from equal consideration. As I see it, the key to this argument is the plausibility of EC1 and EC2, which have to do with the comparative strength of *reasons* individuals have for adjudicating between the competing interests of others.

    My talk of *willingness*, as your question perceptively suggests, has a psychological connotation, drawing our focus to the psychological states of the rescuer. I only use the willingness talk in a pithy formulation of the argument from equal consideration, since I think it gives readers an intuitive taste of my argument.

    What really matters for my argument is the plausibility of the following process of reasoning:

    First, there is reason of some strength to save A’s life at the cost of forgoing the aggregate values of two lives.

    Second, since B’s life is just as important, equal consideration dictates that there is reason of the same strength to save B’s life at the cost of forgoing the aggregate values of two lives.

    Third, if there is reason of some strength to save B at the cost of two lives, then there is reason of a comparatively greater strength to save B at the cost of just a single life.

    Finally, Life vs. Lives describes precisely a situation in which the controlling moral question is whether there is stronger moral reason for us to choose the option of saving A’s life at the cost of two deaths, or the option of saving B’s life (together with C’s) at the cost of just one death. Equal consideration for the lives of A and B implies that the moral reason to choose the latter option is stronger.
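
    Schematically, writing $r(\cdot)$ for the strength of a reason and $s$ for an arbitrary strength value (the notation is mine, introduced here only for illustration):

    \[
    \begin{aligned}
    r(\text{save } A \mid \text{forgo 2 lives}) &= s,\\
    r(\text{save } B \mid \text{forgo 2 lives}) &= s \quad \text{(equal consideration)},\\
    r(\text{save } B \mid \text{forgo 1 life}) &> s \quad \text{(lesser cost, stronger reason)}.
    \end{aligned}
    \]

    Life vs. Lives pits the first of these reasons against the third (the option that saves B also saves C), so the reason to save the larger group is the stronger one.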

    Now, this line of reasoning may not be *recognized* or *accepted*, at the psychological level, by a would-be rescuer. But that is OK for my purpose, since what really matters for the argument are facts about the comparative strength of reasons in Life vs. Lives, as dictated by the ideal of equal consideration, and not psychological facts about the rescuer.

    On your second question, I would resist the connection between having a decisive objection, understood in McDermott’s sense, and wronging someone, in Anscombe’s sense. A claim that an option produces a lesser or greater *sum* of relative harms compared to another option is not a claim that an individual may raise *on her own behalf* (while a claim that one in particular is harmed is one that a person can raise on her own behalf, this claim can be equally raised by each of A, B, and C in Life vs. Lives).

    I therefore do affirm the following claim: A is wronged by B’s Φ-ing only if A has a decisive claim, grounded in A’s *personal* reasons, against B’s Φ-ing. Personal reasons, as I understand them in the paper, are reasons having to do with a person’s *own* well-being, interest, or status.

    Indeed, I am inclined to affirm something stronger, adding to the above claim “and B’s Φ-ing is impermissible.” But some smarter minds have thought that we can have cases of wronging without wrongdoing (though I remain somewhat unconvinced).

  29. Thanks, Korbinian, for the great question, and thank you for the many conversations we’ve had about interpersonal trade-offs!

    In my initial response to Theron about Many vs. Many & One (my first post), I noted that I am happy to accept the implication that a straightforward application of EC1 entails that we ought to save the larger group, since my intuition about the case is less firm than Theron’s. I think part of my hesitance about saying that it is permissible to save either group stems precisely from the thought, which you make explicit, that the value of saving a person’s life should not diminish just because there are many more individuals involved.

    On the other hand, I don’t think the ratio understanding of parity is committed to the idea that saving lives has diminishing marginal value. It just relies on the thought that, in assessing whether two competing sets of values are on a par, it matters what the ratio size difference is between the two sets. Hence, in the 1001 Lives vs. 1000 Lives case, we can maintain that each life in the two groups contributes the same amount of value to the saving of the respective group. But since the ratio size difference is quite small, the value of saving the one group and the value of saving the other are on a par.
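
    To illustrate with the arithmetic itself (this is just my gloss on the ratio idea):

    \[
    \frac{1001}{1000} = 1.001, \qquad \frac{2}{1} = 2.
    \]

    The absolute difference is a single life in both comparisons, but the proportional difference is tiny in the first and large in the second. On the ratio understanding, parity is preserved in the first case and broken in the second, without the 1001st life contributing any less value than the first.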

  30. Thanks for your excellent paper, Erik, and thanks everyone for a fantastic discussion!

    My main worries are (1) that your proposal begs the question against numbers skeptics and (2) that the kind of equal consideration or impartiality that matters entails a lottery requirement, after all.

    Take a rescue conflict involving three persons, P1 vs. P2 & P3, ceteris paribus. If I’m indifferent between the two options, your view accuses me of partiality toward P1. This verdict seems to presuppose something like the following model of the normative situation: If the weight of the axiological reason to save P1 is x, the weight of the axiological reason to save P2 & P3 is 2x (assuming summative aggregation). Add to this the weight of the deontological reasons to save each person. There are three such reasons, each of weight y. These reasons do not aggregate, such that the deontological reason to save P1 balances the weight of the deontological reasons to save P2 & P3. Given that the axiological reasons favor saving P2 & P3, there is more reason to save P2 & P3 overall. If, contrary to this, I’m indifferent between the options, and if I accept that the axiological reasons favor saving P2 & P3, I must consider the deontological reason to save P1 weightier than the deontological reason to save P2 and the deontological reason to save P3. Hence the accusation of partiality toward P1.
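
    In symbols, the model amounts to this (a rough restatement, nothing more):

    \[
    \text{Reason}(\text{save } P_1) = x + y, \qquad \text{Reason}(\text{save } P_2\ \&\ P_3) = 2x + y^{*},
    \]

    where the non-aggregating deontological component $y^{*}$ exactly balances the single $y$ on the other side, so the overall comparison reduces to $2x > x$. Indifference between the options, given the axiology, is then possible only by assigning P1’s deontological reason extra weight.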

    Why does this beg the question? Some numbers skeptics deny the axiological claims involved in the above or consider them meaningless. Others accept them but deny that the deontological reasons can be combined with the axiological ones in the aggregative fashion presupposed. They hold that deontological reasons trump axiological ones (and equivalently, that claims or rights trump utilities). Consequently, they judge that, in P1 vs. P2 & P3, one may save either group or must give equal chances.

    Are numbers skeptics impartial? I think so; they just disagree with Erik about the kind of impartiality that matters. In particular, if they judge that one may save either group in P1 vs. P2 & P3, they impartially grant each person the power to establish ties with any other group of persons no matter how large (ceteris paribus). And if they judge that one must give equal chances, they grant each person the greatest chance to be saved consistent with impartiality. Taurek suggested that giving equal chances is what impartiality entails but seems to have denied that impartiality is required in rescue conflicts between strangers. (This latter claim, I think, is a mistake driven by Taurek’s David argument, which is supposed to establish that one may save either group in rescue conflicts between strangers and thus need not give equal chances. But the David argument is flawed, and if impartiality is required anywhere, it surely is required in life-and-death situations involving strangers we can rescue costlessly.)

    Here’s an argument that Taurek’s notion of impartiality is the one that matters. In a single-person case, what reasonably matters to the person we can rescue is that we maximize her rescue chance (100% is best, 99% is worse but better than 98%, etc.). In a conflict case, this maximization is constrained by impartiality, such that we should give each person a 50% chance. Indeed, if we fail to do so, at least one person will have the following complaint: “You could have done more to save me – you could have given me a greater chance – without partiality towards me! Why didn’t you do so?” We have no good answer here and must conclude that we owe each person a 50% chance, or so it seems to me.
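
    Put as a small optimization problem (my gloss, not Taurek’s): in a two-party conflict we choose rescue chances $p_1$ and $p_2$ with $p_1 + p_2 \le 1$; each person wants her $p_i$ maximized; and impartiality requires $p_1 = p_2$. The unique solution is

    \[
    p_1 = p_2 = \tfrac{1}{2},
    \]

    since any other feasible assignment leaves at least one person with a chance below one half, and that person has exactly the complaint above.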

    You strongly appeal to anti-Taurekian case intuitions to resist this. But not everyone shares these intuitions. My guess is that about half of practical philosophers from German-speaking countries are numbers skeptics, as are most legal philosophers and scholars. This is due to Kantian influences and should, as a matter of social epistemology, trouble anyone wedded to their own intuitions about numbers cases. German criminal law scholars invented the trolley problem in the 1930s and seem to have taken it for granted that the numbers don’t count – you can’t treat persons the way you treat valuable objects! A form of Taurekianism is the legal orthodoxy taught to students in constitutional and criminal law classes. (Incidentally, Luptakova & Voorhoeve, “How Do People Balance Death against Lesser Burdens?,” recently found that many people indeed have Taurekian inclinations. The result was so unexpected that their study design did not even allow them to detect the full extent of people’s Taurekianism.)

    You write that “everyone will agree that a lottery is out of place” between saving P1’s life and P2’s arm. But a significant number of people disagree. My own starting intuitions were strongly anti-Taurekian, but these days I’m no longer sure. If P2 has a right that I save his arm, then I must, perhaps, unconditionally do something to save his arm (as much as impartiality allows). At the same time, P2 must accept that P1 has more at stake. Consequently, I should perhaps do a stakes-weighted lottery to decide the case.

    Does that mean that the numbers never count? No – they may count in many cases, but not as a matter of what we directly owe to each other. They may count indirectly (e.g., when we act behind an empirically real veil of ignorance), and they may count for various extramoral reasons.

  31. Hi Theron and Doug! Thank you both for the follow-ups and the exchange!

    Theron: Your suggestion is really helpful, and who would have thought that appealing to evaluative parity would help me address so many objections in this discussion (not me)! My only concern is that I am not an expert on parity, so I do not want to fully take on board something that I have not adequately thought through. However, *if* the claim that, within limits, the values of saving two people’s lives are on a par can be maintained, then I do very much agree that it would help me get the desired intuitions in your Cases 1, 2 & 3.

    Doug: I have not yet read the Henning paper. But I do think it violates another condition of adequacy I introduced in the paper, which is that a satisfactory solution to the numbers problem should capture the idea that sometimes individuals have increasingly stronger claims to being saved as the disparity in numbers increases between competing groups.

    On the each-person-gets-a-vote-proportional-to-their-stake suggestion: it seems to me that my complaint is the same if the rescuer decides to save the smaller group, whether I am in a group of two or a group of one million (two variants of Life vs. Lives). It is unclear why, on this suggestion, *how many more* people are in the larger group should matter. (I make the same objection to Scanlon’s tie-breaking argument.) By contrast, my solution allows the strength of individual complaints to vary as the disparity varies between the competing groups (I explain this point on pp. 499-500).

  32. Thank you for the insightful questions, Adriano! I will address several issues that your comments raised in my mind.

    1. In my view, my account doesn’t allow for the *combination or aggregation* of axiological and deontological reasons. Rather, axiological considerations serve as the background against which individuals may raise personal reasons against a criterion of adjudication. To make this plausible, I give the following illustration, which I don’t think in any sense appeals to the *combination* of axiological and deontological reasons:

    “Suppose we decide that if A’s life is ever threatened, we will devote X amount of valuable resources to the saving of A’s life. Suppose we also decide that if B’s life is ever similarly threatened, then we will devote to B only half the amount of resources we are willing to devote to A. Holding fixed that the lives of A and B do not differ in any morally relevant respect, our decisions reveal an inequality in the consideration we show to the interests of A and B. B may rightly complain, on her own behalf, that her interest in avoiding a lethal harm is unjustifiably given less importance than A’s interest in avoiding the same. Moreover, the inequality at issue is revealed by a difference in the amount of value that we are prepared to forgo for the respective interests of the parties involved. Call this understanding equal consideration in the value-forgoing sense.”

    2. Even in the (1 Life vs. 1 Life) case, I am not convinced that we should give each person a maximal, equal chance of being saved. My intuition is that it is morally permitted to save either person. Of course, some people think a coin flip is justified because it *behaviorally signals* to each person that the decision process is fair. But I don’t think this is a principled reason why a coin toss is preferable to simply saving one of the two (imagine that a heavenly saint who is known to be impartial is deciding between….)

    3. You are right to say that I harbour certain firm intuitions which serve as the starting point of my theorizing (theorizing has to start somewhere, right?). My firm intuition is that, in (1 Life vs. 1000 Lives), we ought to save the larger group, and that in (1 Life vs. 1 Arm), a lottery is out of place, weighted or not. In my quest for reflective equilibrium, it counts decisively against the Taurekian position that it cannot adequately capture these intuitions of mine.

    But of course, I very much agree that not everyone shares my intuitions about these cases. And that is OK with me. We may then think of my project as asking: How may we develop an individualist moral framework that can capture these intuitions *for those who share them*?

    4. As a methodological matter, I think the best way to approach the Taurekian position (for someone like me) is to ask: Sure, the position gives a *different* interpretation of equal concern/impartiality/equal consideration. But does it give a *better* one? If so, in which ways is it *better* than a less revisionary position (less revisionary for those who share my intuitions)? And can we also capture its appealing features without saying that we ought to flip a coin in (1 Life vs. 1000 Lives), etc.?

    In the many defenses of Taurek I’ve read, I can definitely see that it is a *different* position, but it is unclear to me why it is a *better* position, especially since its implications are held (by me and, I dare say, many others) to be greatly counterintuitive. And I think the general attitude we should take with respect to this debate, as with philosophical debates in general, is to *wait and see*: let defenders of each philosophically interesting position develop their preferred position into a mature form, and then we can do a comprehensive assessment later. My paper tries to develop an individualist moral framework in a direction that (to me) better captures certain commonsense intuitions and theoretical desiderata about numbers, though whether it is successful remains to be seen.

  33. Before the comments close, I just want to thank everyone once again for participating in this discussion of my paper! I learned a lot from the exchanges, and I had a lot of fun engaging with the excellent comments from everyone! Special thanks to Theron once again for being the précis writer and an active contributor, and thanks also to PEA Soup for hosting this discussion!
    If anyone wishes to follow up on any strand of the exchanges above, please do not hesitate to reach me by email at zhangerik92@gmail.com . Fair warning though: I am in the process of undergoing a big relocation, and my flight departs for the UK on the 17th. So the next few days will likely be hectic for me; apologies in advance if I do not respond to your emails promptly!

  34. Thanks, Erik! With apologies for the late reply:

    Sure, theorizing has to start somewhere, but one would hope that there is a dialectically effective argument for views as central to our moral outlook as the numbers (not) counting. In any case, though, I’m all for “[letting the] defenders of each philosophically interesting position develop their preferred position into a mature form” and doing a comprehensive assessment later. As others have noted, you make a massive contribution to this overall project!

    I realize that you don’t take your account to be committed to the aggregation of axiological and deontological reasons. But I was wondering if you can really avoid such a commitment (in some sense of “aggregation” that numbers skeptics should take issue with). In your reply to Joe, you write:

    “First, there is reason of some strength to save A’s life at the cost of forgoing the aggregate values of two lives.
    Second, since B’s life is just as important, equal consideration dictates that there is reason of the same strength to save B’s life at the cost of forgoing the aggregate values of two lives.
    Third, if there is reason of some strength to save B at the cost of two lives, then there is reason of a comparatively greater strength to save B at the cost of just a single life.”

    As I suggested, numbers skeptics should, if they accept your axiological claims, hold that the deontological reasons here at play trump the axiological ones. I.e., they should say that the deontological reason to save B is “infinitely” stronger than the reason grounded in the axiological cost of one or two lives. This would block your argument, I think; it entails that your third point above is false.

    The principled reason for a coin flip I find most compelling is roughly as follows: It reasonably matters to each separate person in a rescue situation that their rescue chance be maximized. If we fail to maximize it without good reason, we wrong the respective person. In conflict cases, equal consideration/impartiality constrains the degree to which we can maximize each person’s rescue chance and provides us with a good reason for giving each person a 50% chance (instead of giving someone more). If we fail to do so, someone will have the complaint that we could have done more to rescue them and failed to do so for no good reason.

    Incidentally, I also think (pace Henning 2023 and Doug above) that an axiomatization of the coin flip – not the axioms behind majority rule – provides the best operationalization of the idea of “giving each person an equal say.” If you want to be rescued in a Taurek case, you can cast your +1 vote for the respective option. But the social choice rule is probabilistic and grants each +1 voter an equal maximal chance of securing their preferred outcome.
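
    To spell that out for the three-person case (my gloss on the rule): suppose P1 votes for option $O_1$ while P2 and P3 vote for $O_2$, and the rule selects $O_1$ with probability $q$. Each voter’s chance of securing their preferred outcome is then $q$ for P1 and $1-q$ for P2 and P3. Equalizing these chances forces

    \[
    q = 1 - q, \qquad \text{i.e.,} \quad q = \tfrac{1}{2},
    \]

    which is the coin flip, and it is maximal subject to the equality constraint. Majority rule, by contrast, sets $q = 0$ and gives the lone voter no chance at all.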
