Welcome to our highly anticipated discussion of Joe Horton’s “Aggregation, Risk, and Reductio.” The paper is published in the most recent issue of Ethics; you can find it here. Johann Frick’s critical précis is immediately below. Please join the discussion!
Johann Frick writes:
It is a great pleasure to kick off our discussion of Joe Horton’s extremely rich and thought-provoking article “Aggregation, Risk, and Reductio” (Ethics, 2020). I will begin with a brief synopsis of some of Joe’s main claims, followed by a few critical comments.
Synopsis
Consider the following two questions:
QUESTION 1: Is there some number n such that I should save n people from a substantial burden, such as paralysis, rather than saving one person from a severe burden, such as death?
QUESTION 2: Is there some number m such that I should save m people from a minor burden, such as a migraine, rather than saving one person from a severe burden, such as death?
Proponents of full aggregation answer both questions in the affirmative. Proponents of full anti-aggregation answer both questions in the negative. Proponents of partial aggregation (PA) seek to justify an affirmative answer to the first and a negative answer to the second question.
The stated objective of Joe’s paper is nothing if not ambitious. He aims to offer a reductio trilemma for proponents of any partially aggregative view. (Though Joe does not say this, it seems to me that, if his arguments are sound, they would likewise cut against all versions of full anti-aggregation. So, the potential implications of Joe’s discussion are even more far-reaching than he advertises).
To be acceptable, Joe maintains, a partially aggregative moral view must satisfy three desiderata:
D1: It must avoid implausible implications in individual cases.
D2: It must avoid implications that are in tension with the intuitions that incline people toward partially aggregative views.
D3: It must avoid inconsistent implications across cases that are in all morally relevant respects equivalent.
However, Joe argues, no partially aggregative view can satisfy all three desiderata. Hence, no partially aggregative view is acceptable.
Joe seeks to demonstrate this by examining the verdicts that a PA view would render in the following trio of cases:
VILLAIN 1: A villain has kidnapped A and B. He will either (1) inflict a migraine on A, or (2) inflict a one-in-a-zillion chance of death on B. You must choose which.
VILLAIN 2: A villain has kidnapped ten zillion X people and ten zillion Y people. He will either (1) inflict a migraine on each X person, or (2) randomly select and kill ten Y people. You must choose which.
VILLAIN 3: A villain has kidnapped ten zillion X people and ten zillion Y people. He pairs each X person with a Y person. For each pair, the villain will either (1) inflict a migraine on the X person, or (2) give the Y person a ticket for a lottery with ten zillion tickets. You must choose between these options for each pair in turn. You know that, after you have chosen for each pair, the villain will randomly select ten tickets and kill anyone who has a corresponding ticket.
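(A minimal numerical sketch, in Python, may help fix the shared risk structure of these three cases. It is purely illustrative: “a zillion” is of course not a number, so the script substitutes Z = 10**6, and nothing in the argument hangs on that choice.)

```python
Z = 10**6            # illustrative stand-in for "one zillion"
TICKETS = 10 * Z     # the lottery in Villain 3 has ten zillion tickets
DRAWN = 10           # the villain draws ten tickets

# Villain 1: choosing (2) imposes a one-in-a-zillion chance of death on B.
risk_villain_1 = 1 / Z

# Villain 3: a single ticket is drawn with chance 10/(10*Z) = 1/Z, so each
# ticketed Y person faces exactly the risk that B faces in Villain 1.
risk_per_ticket = DRAWN / TICKETS
assert risk_per_ticket == risk_villain_1

# Villain 3 with (2) chosen for every pair: every ticket is then held by a
# Y person, so all ten drawn tickets kill; ten deaths with certainty, the
# very outcome that choosing (2) brings about directly in Villain 2.
deaths_if_all_pairs_get_tickets = DRAWN
print(risk_per_ticket, deaths_if_all_pairs_get_tickets)
```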
What verdicts should a PA view give about each of these three cases? Joe reasons as follows:
Lest it violate the first desideratum, a PA view must imply that we should (or at least can permissibly) choose (2) in Villain 1. Any other answer would be simply implausible. As Joe puts it, “we frequently impose tiny chances of death on some people as a side effect of sparing others from minor burdens, and this behavior seems clearly permissible.” (p. 516)
With regard to Villain 2, Joe argues that a PA view must imply that you should choose (1). Denying this, he thinks, would violate his second desideratum: “It would be bizarre to hold that, in cases involving certainty, there is no number of people you should save from migraines rather than saving one person from death, and yet also hold that, in some cases involving risk, you should save a huge number of people from migraines rather than saving ten people from death.” (p. 517).
But finally, in Villain 3, Joe thinks that a proponent of PA is committed to choosing (2) for each pair. After all, he reasons, what you confront in Villain 3 is a series of choices, each of which is exactly like the choice you faced in Villain 1. And in that case, we said, a proponent of PA would choose (2). So, this is also what they should (or at least permissibly can) choose for each pair in Villain 3.
But now notice that this judgment, combined with the verdict about Villain 2 above, seems to lead to a violation of Joe’s third desideratum: “[C]hoosing (2) for each pair in Villain 3 is choosing that the villain randomly select and kill ten Y people rather than inflicting a migraine on each X person. And that is the same choice PA will condemn in Villain 2. So, PA will be inconsistent, in the sense that it will have different implications across cases that are in all morally relevant respects equivalent.” (p. 517). Of course, if a proponent of PA tried to avoid this problem by revising their judgments about either Villain 1 or Villain 2, this would bring them into conflict with one of the other two desiderata instead. Hence, there is no partially aggregative view that is acceptable, because there is no partially aggregative view that satisfies all three of Joe’s desiderata.
Joe considers a way in which a proponent of PA might attempt to justify a different verdict about Villain 3. Perhaps, the proposal goes, we shouldn’t think of the choice you must make for each pair of X and Y people as analogous to the decision you had to make for A and B in Villain 1. This is because the right way to think of the choices in Villain 3 is not as individual acts, but as parts of a sequence of acts. And it is at this level that partial aggregation applies:
[A]s some proponents of partially aggregative views have suggested, PA could be a view that applies not to individual acts, but rather to sequences of acts. In Villain 3, if you choose (2) for each pair, you perform a sequence of acts that you know will result in ten deaths. If PA applies to sequences of acts, it could condemn this sequence. Suppose PA does apply to sequences of acts, and that it forbids you from choosing (2) for each pair in Villain 3. What will it imply you should do in this case? There are two possibilities. PA will imply either that you should choose (1) for every pair, or that you should choose (2) for some number of pairs and then choose (1) for the others. (p. 518)
For ease of future reference, I will dub this the “Sequence Proposal”. Such a proposal has indeed been put forward in the literature. Joe’s article mentions discussions by Lazar (2018), Tadros (2019) and Lazar and Lee-Stronach (2019). The most explicit recent discussion of this idea is Alec Walen’s, in his article “Risks and Weak Aggregation: Why Different Models of Risk Suit Different Types of Cases” (forthcoming in the next issue of Ethics). Walen writes, in response to Joe:
If the serial version of the case really is morally equivalent to the holistic version of the case, then [the agent] can modify what she would do in the individual cases (…). She could do that in two ways: she could decide that the villain should [inflict a migraine on each X person], or, better, she could decide that there is some acceptable risk of death that she could inflict for the sake of preventing migraines and give out lottery tickets [to Y people] until she reached that risk of death. (…) After that, [the agent] would have to choose that the villain [inflict migraines on the remaining X people].
Joe accepts that applying PA not to individual acts but to sequences of acts would permit the proponent of PA to avoid the third horn of his trilemma. But this is a Pyrrhic victory, Joe contends, because the Sequence Proposal will have implausible implications in other cases, thereby violating Joe’s first desideratum instead. One such case, Joe contends, is
LONG LIFE: You will live an extremely long time—zillions of years. As you look ahead at your long life, you know there will be frequent opportunities to spare some people from minor burdens, or give them minor benefits, by acting in ways that expose others to tiny chances of death. Given the extreme length of your life, it is a statistical certainty that, if you take these opportunities, eventually you will kill someone.
As Joe writes: “If PA applies to sequences of acts, it will forbid you from taking these risky opportunities. It will imply either that you should never take these opportunities, or that you should take some and then refuse to take any more. Both implications are implausible.” (p. 518).
Having laid out his reductio trilemma, Joe proceeds to review a number of extant PA views from the literature (put forward by Alex Voorhoeve, Mike Otsuka, Sophia Moreau, and Seth Lazar), seeking to demonstrate that they all find themselves impaled on at least one of the horns of his trilemma. While this part of Joe’s paper is interesting and impressive, I won’t discuss it here. Instead, it is my hope that some of those targeted by Joe’s critique will jump into the comments to defend themselves. 😉
Suppose all of this is right: what follows? The most plausible way out of his trilemma, Joe argues, is to swallow the second horn and accept that you should choose (2) in Villain 2. And the most natural explanation for why this is the case is that the aggregate of ten zillion migraines morally outweighs ten deaths. But if we accept this explanation, Joe argues, we should accept a fully aggregative view.
So much for my quick synopsis of Joe’s paper. I will now present a few critical comments about Joe’s argument, to get the discussion started.
Comments
A persuasive argument for full aggregation or an impossibility result?
Let us begin in a maximally concessive spirit. Let us suppose, for the sake of argument, that Joe is right and his trilemma is indeed inescapable for any partially (or fully) aggregative moral view. Even granting this, should we agree with Joe’s further claim that full aggregation emerges as the victor?
This isn’t clear to me. I am not convinced that embracing full aggregation would constitute a satisfactory way of resolving his trilemma either, by Joe’s own lights. For, given the highly counterintuitive implications that full aggregation has in many cases (TRANSMITTER ROOM; LOLLIPOPS FOR LIVES, etc.), it seems that a fully aggregative view cannot avoid violating Joe’s first desideratum. This matters, because avoiding implausible implications about cases surely is a desideratum for any moral view, not just for proponents of partial aggregation.
If this is right, then even if Joe’s other arguments are sound, they should perhaps be construed, not as establishing that we must accept full aggregation, but instead as suggesting an impossibility result. If Joe is right, there is no way of fully satisfying his three desiderata.
But are all of Joe’s other arguments sound? It seems to me that there are at least three different ways (not all of them mutually consistent) in which proponents of PA views might seek to challenge Joe’s trilemma argument:
CHALLENGE 1: Would a view that told us to choose (2) in Villain 2 necessarily be in tension with the intuitions that incline people to accept partially aggregative moral views, as Joe claims?
CHALLENGE 2: Is Joe right that the Sequence Proposal, which promises to avoid the third horn of his trilemma, would commit us to unacceptable verdicts about cases like Long Life (thereby impaling us on the first horn instead)?
CHALLENGE 3: Contra Joe, is there perhaps some morally relevant difference between Villain 2 and Villain 3, such that choosing (1) in the former case and (2) in the latter case would not, in fact, violate Joe’s third desideratum?
In what follows, I will briefly discuss the first of these challenges and will then spend a little more time developing the second. I will not here examine the third challenge, though I flag it as a dialectical possibility, in the hope that someone might take it up in the discussion.
Challenge 1:
Consider a view such as Seth Lazar’s (2018), which tells us to choose (2) in Villain 2. Would such a view necessarily be in tension with the intuitions that incline people to accept partially aggregative moral views, as Joe maintains? Recall how Joe argues for this claim:
“It would be bizarre to hold that, in cases involving certainty, there is no number of people you should save from migraines rather than saving one person from death, and yet also hold that, in some cases involving risk, you should save a huge number of people from migraines rather than saving ten people from death.” (p. 517).
This reasoning strikes me as unpersuasive, since it seems to clearly overgeneralize. Consider the following pair of cases:
MIGRAINES VS DEATH (CERTAINTY): We can either (1) save 10 known people from death or (2) save 1 zillion known people from a migraine.
MIGRAINES VS DEATH (RISK): We can either (1) withhold our aspirin tablets. Then 1 zillion people will suffer a migraine, but no-one dies as a result of taking aspirin. Or we can (2) distribute our aspirin tablets to whoever needs them. Then approximately 10 people will die as a result of taking aspirin (we can’t know who), but 1 zillion people are spared a migraine.
Many people, myself included, judge that you should choose (1) in the former case, but (2) in the latter. Indeed, Migraines vs Death (Risk) is simply a stylized version of reality: aspirin tablets cure many people of migraines, but sometimes have fatal side-effects. Yet we don’t think it would be impermissible for a public health authority to distribute aspirins, even to a very large population.
Would it be a problem for an account of partial aggregation if it supported this combination of views (as the above passage from Joe suggests)? On the contrary. This would be a virtue, not a vice, of a partially aggregative view. There seems no tension in our judgments because, at least in cases like Migraines vs Death (Risk), the presence of risk clearly does seem to make a morally relevant difference: It is one thing to expose a zillion people to a tiny risk of death (even foreseeing that ten of them will die), when doing so is in their own interest ex ante, because it promises to cure them of a migraine. It is quite another to expect 10 people to accept the severe burden of certain death, so that others are spared the comparatively trivial burden of a migraine. While the kind of moral outlook that endorses partial aggregation should balk at the latter action, it need detect no problem with the former.
Admittedly, Villain 2 is a different sort of case. Whereas the interests of the parties in Migraines vs Death (Risk) are not “competitive ex ante” (in the terminology of my 2015 paper), in Villain 2 there are two groups of people, the X’s and the Y’s, whose interests are in conflict from the get-go. Given this, many (myself included) are less sure about Lazar’s claim that choosing option (2) in this case, too, would indeed be the right thing to do, all things considered.
If our main misgiving, here, is simply one of intuitive fit, then this is a problem that Lazar shares with proponents of full aggregation (such as Joe), who reach the same verdict albeit for different reasons. But perhaps one can dig deeper and find further reasons why a commitment to choosing option (2) in Villain 2 should be especially hard to stomach for proponents of partial aggregation.
My point, however, is that the specific argument to this effect which Joe has given us is not compelling. Rejecting a view like Lazar’s on the grounds that it would in general be ‘bizarre’ for proponents of partial aggregation to draw a sharp moral distinction between cases involving certainty and some cases involving risk seems mistaken. For there are pairs of cases, such as Migraines vs Death (Certainty) and Migraines vs Death (Risk), where this seems like exactly the thing to say.
Let us turn now to the second challenge.
Challenge 2:
We sketched above the Sequence Proposal, which (at least in some cases) applies partial aggregation not to individual acts, but rather to sequences of acts. As we saw, this idea promises a way of escaping the third horn of Joe’s trilemma. Joe’s objection, however, is that this proposal would have unacceptable implications for other cases, such as Long Life.
But is this so? Joe is surely right in what he says about Long Life: it would be implausible to always refrain from risky acts of beneficence, or indeed to put a limit on how many such actions you can perform over your lifetime. The question is whether accepting the Sequence Proposal for cases like Villain 3 would commit its proponents, as a matter of consistency, to these implausible conclusions about Long Life. In what follows, I will try to make a case, on their behalf, that this is not so. Despite superficial similarities, the choice situations in Villain 3 and Long Life are importantly different, in ways that make it plausible for proponents of the Sequence Proposal to apply it to the former case but not the latter.
In Villain 3, you first decide how many Y people to put at risk of death by choosing (2) rather than (1) for their pair. Let n be the number of times that you choose to pursue the risky option. Then, once all these choices have been made, the villain’s lottery determines the fate of all Y-individuals whom you have placed in jeopardy. The structure of this case thus instantiates the following type of pattern:
PATTERN 1: Choice 1; Choice 2; Choice 3; etc. → Lottery.
Notice that while the risk to each particular Y person of being killed is never greater than one-in-a-zillion, no matter the size of n, the risk that some person or persons will be selected to be killed increases as n grows. According to one family of partially aggregative views (“ex post” views), this fact is of inherent moral significance. It matters, according to such views, that the risk that some Y person(s) are killed, in pursuit of moderate benefits for X people, be held within acceptable bounds. For proponents of ex post views, it is therefore natural to consider the relevant object of moral assessment in a case like Villain 3 to be the totality of the agent’s decisions prior to the lottery, not each individual decision considered in isolation. To ensure that the risk that some Y person(s) will be killed remains within acceptable bounds, the agent must make it the case that n remains below a certain threshold over the sequence of decisions. (That threshold may, in some cases, be 0; in other cases, it will be some positive integer.) As a result, the permissibility of enrolling a given Y person in the villain’s lottery cannot be settled in isolation. Rather, we must know how often in the sequence we have already made (or will make) this type of risky choice.
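(Reusing the illustrative stand-in Z = 10**6 for “one zillion” from the sketch above, a short calculation makes the ex post point vivid: each enrolled Y person’s own risk is pinned at 1/Z, while the chance that somebody or other is killed grows roughly linearly with n.)

```python
from math import comb

Z = 10**6                      # illustrative stand-in for "one zillion"
TICKETS, DRAWN = 10 * Z, 10    # ten zillion tickets; the villain draws ten

def p_someone_dies(n):
    """Chance that at least one of n handed-out tickets is among the ten
    drawn (hypergeometric: ten draws from ten zillion, no replacement)."""
    return 1 - comb(TICKETS - n, DRAWN) / comb(TICKETS, DRAWN)

for n in (1, 100, 10_000, Z):
    print(n, p_someone_dies(n))
# Roughly n/Z for n well below Z: each enrolled Y person's own risk stays
# at 1/Z, but the chance that somebody or other is killed climbs with every
# further enrolment.
```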
The choice situation in Long Life, however, is crucially different. Your life will contain many points at which you have to decide whether to spare someone from a minor burden (or give them a minor benefit), by acting in ways that expose others to tiny chances of death. If you decide to take the risk, a ‘natural lottery’ takes place, which determines whether the risk for those endangered by this decision materializes or not. Over the subsequent course of your life, further opportunities for risky beneficence present themselves, and the story repeats. (Absent special circumstances, we can think of the different natural lotteries as probabilistically independent). The choice structure of this case thus instantiates the following pattern:
PATTERN 2: Choice 1 → Lottery 1; Choice 2 → Lottery 2; etc.
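(On the same illustrative numbers, one can see why Joe may call it a “statistical certainty” that you eventually kill someone in Long Life: with independent lotteries, the chance of ever killing anyone over n choices is 1 − (1 − 1/Z)^n, which creeps toward 1 once n outruns Z, even though each individual choice adds a risk of only 1/Z.)

```python
Z = 10**6          # illustrative stand-in for "one zillion"
p = 1 / Z          # per-choice risk in each independent "natural lottery"

def p_ever_kill(n):
    """Chance that at least one of n independent risky choices kills."""
    return 1 - (1 - p) ** n

for n in (1, Z, 10 * Z, 100 * Z):
    print(n, p_ever_kill(n))
# Approximately 1e-06, 0.63, 0.99995, and 1.0: near-certainty over a
# zillions-long life, though the marginal risk added by any single further
# choice stays fixed at 1/Z.
```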
I submit that to proponents of an ex post view this difference in the choice structures of Villain 3 and Long Life should make an important moral difference.
To see why, consider first a modification of Long Life that would make it relevantly like Villain 3, by the lights of the Sequence Proposal:
LONG LIFE*: At the start of your life, you have to enter a binding pre-commitment concerning whether or not you will take various opportunities to benefit others that will arise over the course of your long life. Each act of beneficence will come with a tiny risk of killing a third party. Let n be the number of times that you precommit to perform such risky acts of beneficence over the course of your life. You have to settle on n before any of the corresponding natural lotteries have played out.
This choice situation could be represented as follows:
PATTERN 1*: Choice 1; Choice 2; etc. → Lottery 1; Lottery 2; etc.
In the morally relevant respects, this is like the choice situation in Villain 3. (That there is a single lottery in Villain 3, which settles the outcomes of all risky decisions at once, whereas the outcomes of the risky decisions in Long Life* are settled by different, probabilistically independent, lotteries is of no consequence, I believe. What matters is that, in both cases, all decisions have to be taken before the lottery/lotteries which resolve them take place). In Long Life*, as in Villain 3, the greater you make n, the higher the likelihood that you will kill someone over the course of your life. Hence, as in Villain 3, a proponent of an ex post view will think of the totality of your decisions as the appropriate object of moral assessment, and will instruct you to keep the value of n below a certain threshold. (To fix ideas, let’s suppose that you ought to make it the case that n < 50).
By contrast, I believe that taking an analogous approach to the original Long Life case (by, say, following a rule that instructs you to keep your number of risky acts of beneficence below 50) would be implausible, even by the lights of the ex post view.
To see this, suppose you have already helped 49 times so far, and nothing has gone wrong. Can you help some more or have you ‘used up’ your permissible occasions for helping? Clearly, it is permissible to go on helping. After all, the probability that someone will be killed as a result of your helping a further time is no higher at this point than at any earlier point in the sequence. The probabilities ‘reset’ after each lottery. “The dice have no memory”, as the gambler’s adage goes. Hence, how often you have already gambled and been lucky is irrelevant to what you are permitted to do now.
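(A quick Monte Carlo check of the “no memory” point, purely for illustration and with the per-act risk inflated to p = 0.01 so that a modest simulation suffices: conditional on 49 harm-free acts, the fiftieth act harms someone at just the base rate.)

```python
import random

random.seed(0)
p, trials = 0.01, 200_000            # per-act risk inflated for illustration
survived_49 = killed_on_50th = 0
for _ in range(trials):
    if all(random.random() >= p for _ in range(49)):   # 49 lucky lotteries
        survived_49 += 1
        if random.random() < p:                        # the fiftieth act
            killed_on_50th += 1
print(killed_on_50th / survived_49)  # ~0.01: past luck shifts nothing
```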
Likewise, suppose that you have been unlucky early on. Tragically, the very first time you chose to perform a risky act of beneficence, things went wrong and someone was killed as a result. Should we conclude that you are now “done with helping” for the rest of your life? Again, the answer is clearly ‘no’. It would be a mistake, akin to the sunk cost fallacy, to treat the fact that past gambles haven’t paid off as being, in itself, a reason to refrain from taking (probabilistically independent) gambles in the future. Whether a further act of risky beneficence is permissible depends solely on facts about the future (does the promise of moderately benefiting someone justify the tiny risk of severely harming some third party?), not on what has already happened up to this point.
The general lesson is this: unlike situations that instantiate Pattern 1 or 1*, in cases that instantiate Pattern 2 your past decisions to engage in risky beneficence are irrelevant to the permissibility of deciding to do so again, even by the lights of the ex post view. This is why such cases naturally fall outside the scope of the Sequence Proposal. Your actions, at each choice point, should be assessed individually and not as part of a sequence. Given this, there would be no objection to your always opting for the risky beneficent option.
A proponent of the ex post view could thus embrace the Sequence Proposal as a plausible way of analyzing cases like Villain 3, without being committed to endorsing the implausible implications that such a proposal would have for cases like Long Life. If this is correct, then perhaps the Sequence Proposal represents a way of avoiding the force of Joe’s trilemma.
In his paper, Joe seems to anticipate – albeit in highly compressed form – some of the moves I have just sketched. However, he does not appear to believe that they can fundamentally blunt the force of his trilemma. He suggests that his trilemma argument could just as well be presented in a way that substitutes for Villain 3 a modified version of this case that makes it, in crucial respects, more like Long Life, namely
VILLAIN 3*: A villain has kidnapped ten zillion X people and ten zillion Y people. He pairs each X person with a Y person. For each pair, the villain will either (1) inflict a migraine on the X person, or (2) inflict a one-in-a-zillion chance of death on the Y person. You must choose between these options for each pair in turn.
But I am not so sure. As I read this case, the decision structure now conforms to Pattern 2, i.e. it is
Choice 1 → Lottery 1; Choice 2 → Lottery 2; etc.
This would indeed make the case relevantly more like Long Life than the original Villain 3, and therefore suggests that choosing the risky option (2) for each pair now ought to be permissible, since our choices for each pair should now be assessed individually and not as part of a sequence.
What I question is that this produces any kind of problematic inconsistency with our judgments about Villain 2 (assuming that we agree with Joe that a proponent of PA should choose (1) in that case). For our judgments across cases are only inconsistent if the cases are indeed alike in morally relevant respects. But this is precisely what a proponent of the Sequence Proposal ought to deny. To the extent that Villain 3* has indeed become more like Long Life than the original Villain 3 case was, it has become less like Villain 2. So the divergence of our judgments across these cases need not trouble us.
This concludes my critical précis of Joe’s excellent paper. Hearty thanks for having given us all so much to chew on. I very much look forward to the discussion!
Thanks to Joe for the excellent paper and thanks to Johann for the excellent critical précis! I’d like to try out what Johann calls ‘CHALLENGE 3’. Joe writes:
“I claimed earlier that choosing (2) for each pair in Villain 3 is “in all morally relevant respects equivalent” to choosing (2) in Villain 2. This claim might also be challenged. It might be argued that, because the permissibility of our acts is determined by more than just their outcomes, there could be an important moral difference between bringing about an outcome via a sequence of choices and bringing about the same outcome via a single choice.
But even if this difference is sometimes morally important, it is very implausible that it makes a decisive moral difference in these villain cases. If it did, it would follow that, although it is permissible to sequentially choose (2) for each pair in Villain 3, it is wrong to cut to the chase, by making a single choice for the villain to give a lottery ticket to each Y person.” (519-20)
But it might seem there is a decisive moral difference here. The choice of (2) in Villain 2 imposes a sufficiently high risk of death (choosing (2) in Villain 2 is well over the line as it imposes a one hundred percent chance of ten randomly selected Y people being killed). No individual choice of (2) in Villain 3 imposes a sufficiently high risk of death.
Of course, the entire sequence of choices of (2) in Villain 3 imposes a sufficiently high risk of death, and you might make each such choice in the sequence. But in what sense can you choose to make this sequence of choices? Suppose that in Villain 3 you confront each choice in the sequence on its own, so that diachronic coordination is not possible. It might then seem less problematic to claim that, although it is permissible to sequentially choose (2) for each pair in Villain 3, it is wrong to choose (2) in Villain 2. Or suppose that, rather than the same agent confronting each of the choices in the sequence in Villain 3, there are as many different agents as there are choices, each confronts only one, and it is not possible for these agents to coordinate. It might not seem problematic to claim that, although it is permissible for each agent to choose (2) for each pair in this variant of Villain 3, it is wrong to choose (2) in Villain 2.
Such iteration issues arise in many other contexts. Consider, for example, ‘anti-aggregationist’ principles like Temkin’s Disperse Additional Burdens View, collective harm puzzles like Parfit’s Harmless Torturers, and various diachronic Dutch books. And it strikes me as an increasingly popular—though by no means uncontroversial—view that there can be decisive moral or rational differences between bringing about an outcome via a sequence of choices and bringing about the same outcome via a single choice. Suppose we already claim that this view applies in a wide range of contexts. What would we have to lose by claiming that it applies here too? Is there something *especially* implausible about claiming that there is this sort of moral difference between Villain 2 and Villain 3?
I am extremely grateful to Johann for writing this excellent critical précis. I am especially happy that he was willing to write it because it was his objectively wonderful 2015 PPA article that got me thinking about these issues (and thereby gave me, finally, a topic for my PhD dissertation).
Johann offers three main responses to my argument. First, he argues that, because fully aggregative views have very counterintuitive implications in certain cases, we can plausibly see my argument not as a reductio of partially aggregative views but rather as an impossibility theorem. On this interpretation, my argument shows that no moral view can avoid implausible implications in individual cases, avoid implications that are in tension with the intuitions that motivate the view, and avoid inconsistent implications across cases that are in all morally relevant respects equivalent. Second, he argues that it might still be plausible for proponents of partially aggregative views to accept that you should choose (2) in Villain 2, because my explanation of why they cannot plausibly accept this implication is unpersuasive. Third, he argues that proponents of partially aggregative views might be able to escape my reductio by adopting “the Sequence Proposal”, without thereby committing themselves to implausible conclusions about cases like my Long Life.
Response 1
Johann is right that it is plausible to interpret my reductio as an impossibility theorem rather than as an argument for fully aggregative views. But it seems to me more plausible to interpret it as an argument for fully aggregative views. This is because the most counterintuitive implications of fully aggregative views seem to me less counterintuitive than the most counterintuitive implications of non-aggregative and partially aggregative views, and also less counterintuitive than accepting that we should reject all moral views.
I partly defend this interpretation of my argument in the paper, by emphasising the counterintuitiveness of the positions available to proponents of partially aggregative views (assuming my argument succeeds). I think there is not much that can be said to make the most counterintuitive implications of fully aggregative views more palatable. But it might be worth emphasising that, even if the most counterintuitive implications of fully aggregative views were on a par with the most counterintuitive implications of partially aggregative views, there would still be strong reason to prefer fully aggregative views. For the idea that, other things equal, we should maximize total well-being, or minimize total complaints, is simpler and more intuitive than the ideas that motivate partially aggregative views.
Response 2
Johann is again right that my initial explanation of why proponents of partially aggregative views cannot plausibly accept that you should choose (2) in Villain 2 is unpersuasive, and he is right about why. However, this explanation can, I think, be repaired.
My initial explanation is this:
“It would be bizarre to hold that, in cases involving certainty, there is no number of people you should save from migraines rather than saving one person from death, and yet also hold that, in some cases involving risk, you should save a huge number of people from migraines rather than saving ten people from death.”
I should instead have said this:
“It would be bizarre to hold that, in cases involving certainty, there is no number of people you should save from migraines rather than saving one person from death, and yet also hold that, *in cases with the structure of Villain 2*, you should save a huge number of people from migraines rather than saving ten people from death.”
I offer this more careful version of the explanation in the final section of the paper.
Seth Lazar has recently developed a partially aggregative view that does imply that you should choose (2) in Villain 2, and he suggests that proponents of partially aggregative views should embrace this implication. Johann defends Lazar’s position as follows:
“If our main misgiving [with the implication that you should choose (2) in Villain 2] is simply one of intuitive fit, then this is a problem that Lazar shares with proponents of full aggregation (such as Joe), who reach the same verdict albeit for different reasons. But perhaps one can dig deeper and find further reasons why a commitment to choosing option (2) in Villain 2 should be *especially* hard to stomach for proponents of partial aggregation. My point, however, is that the specific argument to this effect which Joe has given us is not compelling. Rejecting a view like Lazar’s on the grounds that it would *in general* be ‘bizarre’ for proponents of partial aggregation to draw a sharp moral distinction between cases involving certainty and some cases involving risk, seems mistaken.”
Despite what the above quote from me suggests, my reason for rejecting partially aggregative views like Lazar’s is not that it is bizarre to draw a sharp moral distinction between cases involving certainty and cases involving risk, but rather that it is bizarre to draw a sharp moral distinction between cases involving certainty and cases with the structure of Villain 2.
I discuss Lazar’s view in the final section of the paper. I there offer a slightly different presentation of my grounds for rejecting the view. Consider the following case:
Lollipops for Lives: If you grant the Devil permission to kill a few people in one half of the universe, he will give everyone in the other half a lollipop.
Since the Devil has not told you who will be killed, Lollipops for Lives is relevantly like Villain 2. So, Lazar’s view implies that, if enough lollipops are on offer, you should make the trade. If we are willing to accept this implication, can we really reject fully aggregative views because they direct us to exchange lives for lollipops in cases involving certainty?
Response 3
Johann proposes a way for proponents of partially aggregative views to use the Sequence Proposal to escape my reductio, without thereby committing themselves to implausible conclusions about cases like my Long Life. I will need to think more about this proposal, but here is my initial reaction.
Suppose you often take opportunities to spare some people from minor burdens by acting in ways that expose others to tiny chances of death. Though it seems to you that you have never killed anyone by taking these opportunities, you cannot be sure. Perhaps you did clip someone with the car on your way to the pharmacy? So, as you go through life taking these opportunities, the chance you have killed someone, from your perspective, rises steadily.
Suppose next that you know you will live an extremely long time—zillions of years. As you look ahead at the rest of your long life, you know there will be many more opportunities to spare some people from minor burdens by acting in ways that expose others to tiny chances of death. I think the view Johann proposes forbids you from taking these opportunities. It implies either that you should never take these opportunities, or that you should take some and then refuse to take any more. Both implications seem to me implausible.
Johann might respond by arguing that what matters on his view is not keeping the subjective chance of you having killed someone sufficiently low, but rather keeping the objective chance of you having killed someone sufficiently low. I think his view would then draw an implausible distinction between Villain 3 and a version of Villain 3 in which the villain runs the lottery before you make any decisions (and does not tell you which tickets were drawn).
Thanks for a great paper, Joe. I learned a lot from it. And thanks Johann for a really excellent precis. I’m sympathetic to your (Joe’s) conclusion but want to offer a reply on behalf of the partial aggregationist, which builds on some of what Johann says in Challenge 2. My reply takes for granted that PA can apply to sequences of acts. The crucial further assumption is that the future is metaphysically open or unsettled, such that, for example, it is now indeterminate whether or not there will be a sea battle tomorrow, because it has not yet happened or failed to happen.
I share your judgments about Villain 1 and Villain 2. But even in a partial aggregationist mindset I find it intuitively permissible to pick (2) in Villain 2*, thereby inflicting a 1/zillion chance of death on all 10 zillion Y people. What can the partial aggregationist say distinguishes Villain 2 and Villain 2*? I agree with Johann that one important difference is that in Villain 2* the gambles are probabilistically independent, while in Villain 2 they are dependent. This could matter because it means that when you choose (2) in Villain 2 it is determinately the case that ten people will die, but when you choose (2) in Villain 2* there is (on the crucial assumption I’m making) no fact of the matter how many people, if any, die. (Just to be clear, I’m making a metaphysical claim, not an epistemic one.) The most that is determinately the case when you choose (2) in Villain 2* is: 10 zillion people each face a 1/zillion chance of death. That is also true of choosing (2) in Villain 1.
So, I wonder whether the partial aggregationist could say that when it comes to Villain 3, it is impermissible to choose (2) ten zillion times because performing that sequence of acts determinately brings it about that 10 people die, while the most that performing the corresponding sequence in Villain 3* (determinately) brings about is that ten zillion people face a 1/zillion prospect of death.
One more scattered idea, regarding how many lottery tickets you may hand out in Villain 3. In Villain 1, if we increase the probability of death that choosing (2) will inflict on B, at some point it will become impermissible to inflict it and you should instead inflict the migraine on A. Call that threshold probability p. One explanation for that impermissibility appeals to the fact that your act inflicts a p chance of death on B (de re). But an alternative explanation will appeal to the fact that your act makes it the case that there is a p chance that someone will die (de dicto). If we take on the latter explanation, maybe we can apply the same reasoning to Villain 3: the threshold at which it becomes impermissible to hand out another ticket is roughly p*(one zillion) tickets, because once you hand out the (p*zillion)th ticket, you have made it the case that there is (approximately) a p chance that someone will die.
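(For what it’s worth, here is the arithmetic behind that de dicto threshold, again on the illustrative stand-in Z = 10**6 for “one zillion”: since each ticket carries a 1/zillion chance of being drawn, k outstanding tickets put the chance that someone dies at roughly k/Z, so the threshold p is crossed at about p·Z tickets, at least for small p.)

```python
from math import comb

Z = 10**6                      # illustrative stand-in for "one zillion"
TICKETS, DRAWN = 10 * Z, 10

def p_someone_dies(k):
    """De dicto chance that at least one of k outstanding tickets is drawn."""
    return 1 - comb(TICKETS - k, DRAWN) / comb(TICKETS, DRAWN)

p = 1e-3                       # an illustrative threshold probability
k_at_threshold = round(p * Z)  # about p * (one zillion) tickets
print(k_at_threshold, p_someone_dies(k_at_threshold))   # ~0.001, i.e. ~p
```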
Quick erratum: at the end of the second paragraph I wrote “That is also true of choosing (2) in Villain 1”. I meant something like “Similarly, when you choose (2) in Villain 1 you inflict on each person (in this case one person) a 1/zillion chance of death”.
Theron, thanks very much for joining. I think that is a plausible reason to think there is an important moral difference between choosing (2) in Villain 2 and choosing (2) for each pair in Villain 3. But, ultimately, accepting that it is wrong to choose (2) in Villain 2 and yet permissible to choose (2) for each pair in Villain 3 seems to me implausible, for the reason given in the passage from me that you quote. It would follow that, although it is permissible to sequentially choose (2) for each pair in Villain 3, it is wrong to cut to the chase, by making a single choice for the villain to give a lottery ticket to each Y person. I cannot see how it could be permissible to string the decisions out and yet wrong to cut to the chase.
Todd, thanks for these two suggestions. I think your second suggestion commits us to implausible implications in cases like the version of Long Life that I outline in my response to Johann above. The first suggestion seems to me a tempting way to go. But consider a version of Villain 2 in which, after you make your choice, the villain will, before doing anything else, roll a zillion-sided die. If the die shows one, the villain will release everyone unharmed. If the die shows any other number, the villain will, as in the original case, randomly select and kill ten Y people. I think that, on the view you suggest, it is permissible to choose (2) in this case, even though it is wrong to choose (2) in the original case. But it seems to me that adding the zillion-sided die cannot make such a significant moral difference.
Thanks Joe! What do you think about my variant of Villain 3 involving one agent per decision? There no one is stringing the decisions out.
Hi Joe, thanks for the great paper (and thanks Johann for the great précis)! One reaction I had to your comparison of Villain 3 to Long Life is that you know in Villain 3 that choosing (2) every time will cause ten people to die. But you don’t know in Long Life (or in Villain 3*) that you will end up causing anyone to die. There seems to be a morally relevant difference between a course of action that you know will kill someone and one that you don’t. (This is a version of Johann’s Challenge 2.)
Since you say that it’s “statistically certain” that you will kill someone in Long Life, you might respond that you do know this (assuming it ends up being true). This gives rise to lottery-paradoxical results. For example, it’s also statistically certain that each person you are in a position to affect will survive the tiny risk you impose. So by parity of reasoning you are in a position to know of each person who will survive that they will survive. If you can really know this, then great, go ahead and benefit them! It seems at least permissible not to run the risk for the remaining folks who you don’t know will survive (because they won’t) or who you know (by multi-premise closure) will survive. That’s really weird, so probably better to say that you just don’t know whether you’ll kill someone in Long Life. But then the seemingly relevant difference between Villain 3 and Long Life/Villain 3* remains.
Great discussion so far.
Joe, you suggest that a fully aggregative view is less counterintuitive than the kinds of risk principles that are consistent with partially aggregative views, such as certain sequence proposals.
Suppose that is right. I don’t think that this counts very heavily in favour of the fully aggregative view, because I don’t think that we should assess principles simply in virtue of how intuitive their implications are.
I think that principles concerning outcomes are basic and principles concerning risk are derivative. The moral value or disvalue of an outcome explains the value or disvalue of risking the outcome, and not the other way around. If that is right, it is not clear how the counterintuitive implications of restricted aggregation principles for risk could explain why restricted aggregationism is wrong.
Consider the view that if restricted aggregationism is true, the best risk-based principle in a sequence case involves imposing a very small risk of death on some people in a group of identically placed people, where those people are certain to receive a small benefit, but we must then stop rather than treating all members equally. This might be defended on the basis that the conjunction of our acts will impose too large a risk of death overall, and any death caused won’t be justified by benefits to other people. That is inconsistent with restricted aggregationism. That result about risk is strongly counterintuitive. But it is not clear how that fact about risk is supposed to explain what is wrong with restricted aggregationism, given that restricted aggregationism cannot be made true or false by its implications for risk (assuming that the morality of risk is derivative of the morality of outcomes).
In contrast, restricted aggregationism can explain why we should take certain attitudes to risk, because restricted aggregationism is an outcome-based principle, such principles are basic, and they make true the morality of risking.
As a result, I’m inclined to think that there are theoretical reasons to believe that a strongly counterintuitive principle concerning risk might be true, if that principle is explained by a principle concerning outcomes that we have good reason to endorse, even where the intuitive basis of the outcome-based principle is less strong.
As a result, I think that your argument for the fully aggregative view is of the wrong kind.
Jake, great to have you involved. I think you’re right that we don’t know you will kill anyone in Long Life. But we can make the same true in Villain 3. Suppose that, after you make all your choices, the villain will, before doing anything else, roll a zillion-sided die. If the die shows one, the villain will release everyone unharmed. If the die shows any other number, the villain will proceed as he would have in the original version of the case. We now don’t know that, by choosing (2) for each pair, you will kill someone. But our confidence that you will kill someone can be arbitrarily high. So, it seems to me that, if it is wrong for you to choose (2) for each pair in my original Villain 3, it must also be wrong for you to choose (2) for each pair in this new version.
Theron, my view about your multi-agent case is that each agent should impose the risk of death, just as the single agent should always impose the risk of death when asked to choose multiple times. This case might present a further problem for proponents of partially aggregative views, though.
Hi Joe, Johann, and others. Great discussion, as I expected. But I want to add two things, both basically laid out in my forthcoming paper (which Johann mentioned). And I have to add, Johann quotes me as saying that an agent can respond to Joe’s challenge with regard to the sequence version of Villain 3 in two ways, but he gives only one. In fact, I give three ways to respond to the argument, one of which has two branches (both of which Johann mentioned).
So, the first thing to say, concerning the sequence case, is that I develop the point Johann makes about how Long Life involves a series of independent choices, but in a different way. My point (this is the last section of my paper) is that not all choices have to be a choice of a policy, as one has in Villain 3. One can have a different sort of choice, which is simply to endorse a certain kind of choice as morally acceptable, the implication being that even though it is repeated a zillion times, we don’t take the overall effect as a choice in its own right. What is relevant are only the smaller scale choices. One needs an argument to show that one should take all the similar smaller scale choices as part of a larger whole which is being chosen for itself. I think that argument is often missing in cases like driving people for minor goods, imposing TINY risks on others. There is no need to frame this as a single choice to permit a limitless number of deaths (over enough space and time) for the sake of a limitless number of small benefits.
Second, I think one can be an ex post theorist about cases in which we know who will suffer the harms, and yet allow those cases to be transformed into ex ante, conflict-free cases when we don’t know who will suffer. This is true even if we know that people can be divided into two groups, the Xs and the Ys, who face different risks. The reason is that I think people have a duty to waive their potential ex post claims if, at the time an agent has to act, the odds of their suffering the relevant harms are small enough compared to the benefit the others would enjoy. So, I come to the view that in both Villain 2 and Villain 3, one can defend the position that the right choice is to allow the 10 deaths. The reason is that none of the Ys who will die face much chance of dying, and the migraine is a real problem. They can all view the situation through the lens of Villain 1 (or going to the store to get migraine meds) and say: we should waive our ex post claim not to be killed because we are SO unlikely to suffer it, and the migraine is a real problem.
Note, this does NOT lead to unlimited aggregation. In Transmitter Room, we know who Jones is, and his claim beats all those who want to watch the World Cup match. And in various risk cases, if the risk is high enough, those who face the risk of a serious harm can refuse to waive it for the sake of another who will certainly suffer a nearly equivalent harm. Aggregation might not come up with the same result. So, PA survives as important, but the dilemmas are dissolved.
That, at any rate, is the thesis of my forthcoming paper.
Thanks Joe, Johann, and everyone else for the illuminating discussion.
I think Joe’s argument really puts pressure on PA proponents to bring to the surface a number of important background assumptions they may be carrying around. Some that have come up so far:
1) Inseparability: Joe’s argument assumes that we can assess the permissibility of acts separately, without considering what other acts will be performed. As I read Johann, Theron, and Todd’s suggestions, a proponent of PA may deny this separability assumption, holding that the permissibility of an act may depend on what other acts are performed. However, it seems that at this point of the dialectic, we need to look at the PA ‘manual’ to determine, upfront and not post hoc, when these moral interactions kick in and when they do not. I think Johann’s appeal to the ex post view is helpful here, as are the ideas given by Theron and Todd. But I think a fuller account is needed, given how clearly Joe has put the challenge. The crucial question, it seems to me, is whether Joe (with this PA manual in hand) will be able to find some context where there is a set of risky acts that can be separably evaluated as permissible, while also together being morally equivalent to some impermissible risk.
2) Contextual Variance: When switching between contexts (e.g. Villain 3, Long Life, Long Life*, Villain 3*), I share Johann’s thought that the Long Life cases are relevantly different in ways that block the moral equivalence. But, again, anyone taking this view needs to provide the details about what contextual factors matter and how (Johann again suggests plausible ways forward). [As an aside, I happen to think, though, that the requirement to deprive benefits in Long Life is not implausible: if I am an eternal taxi driver who prioritises the safety of my passengers, I will deny them the benefit of the thrills of a faster ride (assuming faster rides don’t save their lives or promote other relevantly important goods), because of the overall safety risk this policy will incur.]
3) Levels of Act Description: Although Seth and I conjectured in our 2017 paper that in some contexts particular levels of act description should be privileged, e.g. sequences over individual acts, or vice versa (as Alec may be suggesting in his forthcoming work?), to my mind, at least, this raises a lot of questions that are related to, but different from, inseparability and contextual variance: e.g. how long is a sequence? Can there be sequences of sequences? In any case, more needs to be said to determine whether this really is an avenue through which PA proponents can escape Joe’s argument. This is, of course, to say nothing of Seth’s (2018) explorations of this debate.
Overall, whatever the eventual fate of PA, I think this latest critique by Joe will again help to push new work that will be beneficial to moral philosophy as a whole.
Thanks everyone,
Chad
Hi Joe, I think Alec’s paragraph starting with “So, the first thing to say” captures much of what I had in mind in my initial comment. I brought up the multi-agent variant of Villain 3 to highlight the fact that sometimes, “What is relevant are only the smaller scale choices.” And that, “One needs an argument to show that one should take all the similar smaller scale choices as part of a larger whole which is being chosen for itself.” Against the claim that [while each choice of (2) in the original Villain 3 is permissible, the choice of (2) in Villain 2 is wrong] you objected that you cannot see how it could be permissible to “string the decisions out” and yet wrong to “cut to the chase”. But suppose that, in Villain 3, the agent lacks the “larger scale” option to bring about the outcome in question via a series of “smaller scale” decisions; the agent’s only options correspond to the individual “smaller scale” decisions in the series, made one at a time and with blinders on to other (past or future) decisions in the series. For your “string the decisions out”/”cut to the chase” objection to apply, won’t the agent need to have some option above and beyond the individual “smaller scale” options? If not, then I don’t see what the objection adds—it would then seem to be saying nothing more than that it is implausible that [while each choice of (2) in the original Villain 3 is permissible, the choice of (2) in Villain 2 is wrong].
I had a question for Johann about his interesting suggestion about pre-commitment cases. Sorry if I’m misunderstanding.
Suppose that I have a huge pot of aspirin, where I know that over the course of my very long life I will be able to prevent zillions of migraines with the aspirin, but that if I do so I will be virtually certain to cause a few deaths. I could either destroy a large number of these aspirin now, so that the chance of my later causing any deaths is low, or not destroy them. If I don’t destroy my stockpile I will be able to help people with migraines as I come across them.
I wasn’t sure whether your analysis of Long Life* implies that I must destroy lots of aspirin now. In this case, I can (but need not) prevent myself from later taking a sequence of acts that will together almost certainly result in some deaths. Your suggestion might be read as requiring me to destroy my stockpile (or a large part of it). It might be argued that it is strongly counterintuitive that we are permitted to give out aspirin that is available, but we must destroy our stockpiles of aspirin to ensure that we don’t give them out.
Also, suppose that I get the decision wrong in a pre-commitment case. I don’t pre-commit to restricting the number of migraine-preventing acts I will perform sufficiently. Your suggestion seems to imply that I am now permitted to do a sequence of acts that I was required to pre-commit not to do simply in virtue of the features of that sequence of acts. There seems to be a tension there.
Thanks very much for the paper, Joe, and précis, Johann! I’m wondering what you make of the following response on behalf of the partial aggregationist: Joe’s right about all three of his cases, but death is not the worst of evils and migraines are pretty bad. So, imagine instead the following analogues, involving hangnails and torture:
Villain 1**: A villain has kidnapped A and B. He will either (1) inflict a hangnail’s worth of pain on A, or (2) inflict a one-in-a-zillion chance of agonising century-long torture on B. You must choose which.
Villain 2**: A villain has kidnapped ten zillion X people and ten zillion Y people. He will either (1) inflict a hangnail’s worth of pain on each X person, or (2) randomly select and inflict agonising century-long torture on ten Y people. You must choose which.
Villain 3**: A villain has kidnapped ten zillion X people and ten zillion Y people. He pairs each X person with a Y person. For each pair, the villain will either (1) inflict a hangnail’s worth of pain on the X person, or (2) give the Y person a ticket for a lottery with ten zillion tickets. You must choose between these options for each pair in turn. You know that, after you have chosen for each pair, the villain will randomly select ten tickets and inflict agonising century-long torture on anyone who has a corresponding ticket.
Suppose a partial aggregationist chooses (1) in each of these cases. That combination of judgements seems plausible, motivationally coherent, and consistent. This hypothetical partial aggregationist thus seems to have a view that meets all three of Joe’s desiderata.
Thanks to Joe for the great paper and Johann for the great précis! I’d like to raise two questions that are related to things that have been brought up.
The first question concerns Long Life and the implausibility of putting a limit on how many actions of a certain type it is permissible to perform within a lifetime. I agree that such a limit is not exactly an attractive position to hold. But I wonder if, after all, it might be a lesser evil than adopting the fully aggregative view. Of course, there will hardly be any number x of actions such that performing x actions is permissible but performing x+1 actions is impermissible. But perhaps the boundary is vague? At least when it comes to prudential rationality, we adopt this kind of reasoning all the time, and it doesn’t seem faulty. For example, it is common for people to start using their bike helmet when they begin to cycle more frequently. And if their aim is not to be injured in an accident over the course of their lives, this is not bad reasoning. Couldn’t it similarly be the case that preventing a zillion migraines takes priority over avoiding the small risk that someone dies, but not over avoiding the large risk that someone dies?
The second question concerns Joe’s claim that Villain 2 and 3 could be replaced by Villain 2* and 3*. Here, I disagree (and I argue for this at somewhat greater length in a manuscript). It seems to me that our intuitive responses to dependent and independent risks differ a great deal in real life (regardless of whether the dependent risks certainly lead to harm or whether there is a chance that nothing happens), and that this is not just a sign of irrationality. To give just one example: in the 1990s, there was a suspicion that someone had managed to place poisoned products on the shelves of a German supermarket. Consequently, the police chose to close the whole store, asking customers not to consume anything bought there. The reaction seems appropriate to me (and it was generally approved by the public), although the number of poisoned products was believed to be very small. At the same time, salmonellosis, which can be contracted from consuming homemade mayonnaise, caused around 60 deaths a year in Germany. Yet it was not considered immoral to serve homemade mayonnaise to people without warning them about its being homemade. My guess is that the dependence/independence of the risks plays a role in shaping our reactions and that this role is not unjustified. Of course, real-life cases are always difficult in that there are a lot of distorting factors (e.g. the bad intention of the person placing poisoned products on the shelves in this case). But the pattern is too pervasive to be meaningless.
Victor, many thanks for this challenge. I agree with you that we should not assess principles simply in virtue of how intuitive their implications are. We should also assess the intuitiveness of the principles. But when a principle has an implication that is strongly counterintuitive, that seems to me a strong reason to reject it.
You say that because “the moral value or disvalue of an outcome explains the value or disvalue of risking the outcome, and not the other way around…, it is not clear how the counterintuitive implications of restricted aggregation principles for risk could explain why restricted aggregationism is wrong.” I agree that these counterintuitive implications cannot *explain why* partially aggregative views are wrong, but it seems to me they can still be *strong evidence that* partially aggregative views are wrong.
Is your view that, because principles concerning risk are derivative of principles concerning outcomes, the intuitiveness of the implications that partially aggregative views have in cases involving risk cannot give us *any* strong evidence that those views are false? That seems wrong to me. Surely some claims about cases involving risk are so implausible that we should reject views that imply these claims, regardless of how plausible these views are otherwise. If that is right, while my argument could be wrong in degree (because the implications it highlights are not sufficiently counterintuitive), it cannot be wrong in kind.
Alec, great that you could join. I’m really looking forward to reading the final version of your paper in print next month. You say above that you
“…come to the view that in both Villain 2 and Villain 3, one can defend the position that the right choice is to allow the 10 deaths. The reason is that none of the Ys who will die face much chance of dying, and the migraine is a real problem. They can all … say: we should waive our ex post claim not to be killed because we are SO unlikely to suffer it, and the migraine is a real problem.”
Here are a few questions and thoughts about this view:
First, you say the view does not have fully aggregative implications in Transmitter Room, because in Transmitter Room “we know who Jones is”. Suppose the case is revised such that, though we know who each of the zillions of World Cup viewers is, we don’t know who Jones is—he must be one of the zillions of people who hate watching the World Cup. Does your view then imply that we should leave Jones to suffer rather than cutting the transmission? If we accept that result, can we really reject fully aggregative views because they imply that we should leave Jones to suffer in the original version of the case?
Second, is it correct that your view implies that we should trade lives for lollipops in my version of Lollipops for Lives? If so, then again, can we really reject fully aggregative views because they imply that we should trade lives for lollipops in the standard version?
Third, you say in the quote above that it is permissible to choose the deaths in Villain 2 because “none of the Ys who will die face much chance of dying, and the migraine is a real problem” and “we should waive our ex post claim not to be killed because we are SO unlikely to suffer it, and the migraine is a real problem.” Suppose that, in a different case, the villain has kidnapped A and ten zillion others. He will either (1) give A a migraine, or (2) randomly select and kill ten of the others. Does your view imply that we should choose (2), because the migraine is a real problem and none of the ten zillion face much chance of dying?
Chad, thanks very much for getting involved. This is a nice way of framing the debate, and potentially really helpful. I do have a methodological (maybe that word is too grandiose) worry about this framing, though. I am less confident in general background assumptions like Inseparability than in the falsity of the case-level claims that proponents of partially aggregative views must embrace (assuming my argument succeeds). So, even if those general assumptions turn out to be false, that might not persuade me that partially aggregative views are defensible. Inseparability does seem to me false, as it happens, at least as you formulate it above. It seems to me that whether it is permissible for me to give someone a slow-acting poison can depend on whether I (or someone else) will quickly give her an antidote.
Theron, thanks for pressing me on this. You say: “For your ‘string the decisions out/cut to the chase objection’ to apply, won’t the agent need to have some option above and beyond the individual ‘smaller scale’ options?” I don’t think so. My “cut to the chase” remark is meant to make vivid the counterintuitiveness of holding that it is wrong to choose (2) in Villain 2 and yet required, or at least permissible, to choose (2) for each pair in Villain 3. Here is another, similar way of bringing out that counterintuitiveness. Compare two versions of Villain 3. In Version A, the agent has a ‘migraines’ button and a ‘tickets’ button. If she presses the migraines button once, every X person gets a migraine. If she presses the tickets button once, every Y person gets a ticket. In Version B, the agent again has two buttons, but she must choose between them for each pair in turn. Imagine also that the agent is able to move super quickly, so that she can execute all her decisions within a single second. Can we really accept that, although it is wrong for the agent in Version A to press the tickets button, it is required for the agent in Version B to press the tickets button ten zillion times in rapid succession? Could we really commend the agent in Version B while condemning the agent in Version A?
Hi Joe,
Thanks for the reply. Where a principle has counterintuitive implications in a case, the counterintuitiveness of those implications is normally explained by some value or idea that the principle fails to accommodate. So the cases help to reveal how intuitive principles are. When we consider derivative principles, such as principles concerning risk, it is less clear how these implications play this role.
For example, suppose someone argued: restricted aggregationism is false because it fails to accommodate the value of a group of people having better prospects when they are members of a large population, do not know whether they will be the one suffering the very severe harm or one of the many who will benefit, and have outcomes that are made independent of what happens to the others. The fact that this is valuable is revealed by the Villain cases.
This argument doesn’t seem too good. Our valuing the prospective outcomes of our actions just depends on how we value the outcomes of our actions. And we obviously can’t justify killing a person for the sake of many people getting lollipops on the basis of the fact that, in some other world, this person would have had better prospects were this kind of thing justified. This fact about prospects just seems to do nothing to unsettle the view that killing one person cannot be justified by giving lots of lollipops to many other people.
So I think it’s not a matter of measuring the intuitive costs. It is a matter of showing that the counterintuitive implications of restricted aggregationism for risk can explain why restricted aggregationism is false.
You suggest that their role is evidential – the fact that restricted aggregationism has these counterintuitive implications in risk cases (if it does) is evidence that restricted aggregationism is false. But what are they evidence of that then more directly explains why restricted aggregationism is false? I couldn’t see how that argument is supposed to go.
Now, you are right that sometimes strongly counterintuitive implications concerning risk might incline us to think that some argument can be mounted against some outcome-based principle. But that’s just an inclination – we need to find the arguments. And one thing that contemporary moral philosophy teaches us is that conventional views about morality are likely to be strongly inconsistent with each other, and many will fail.
In contrast, our intuitions about cases where we provide a small benefit to many whilst failing to save one person from death can clearly explain why seemingly appealing principles about risk are false – we have good reason to think that it’s not a good justification for killing a person that we will benefit many, and certain principles concerning risk that seem initially very appealing might fail because they fail to respond appropriately to this idea. And that idea, important though it is, tends to fade into the background in certain risk cases, which helps to explain our intuitions about them.
So when assessing the picture on restricted aggregationism overall, I think that there is at least a very heavy thumb on the scale in favour of a theory that has intuitive implications for outcomes even if it has strongly counterintuitive implications for risk.
Elliott, welcome to the discussion. I think you’re right that a more limited partially aggregative view, on which only the most trivial harms cannot be aggregated (when in competition with the most severe harms), could escape my reductio. I take it this limited view would be too limited for most proponents of partially aggregative views, though. I also think the intuitions that support this view could be better captured (more intuitively and simply) by holding that the most trivial harms are entirely morally irrelevant—perhaps we have no moral reason whatsoever to spare people from harms as trivial as hangnails (though we might have reasons of other kinds). Of course, we might instead think that even the most trivial harms are morally relevant. I think we should then hold that there is some very tiny risk of severe harm that we should choose over allowing these trivial harms. Though denying this seems plausible to me, accepting it seems plausible to me as well.
Johanna, these are great questions and examples. I think you’re right that, if we are going to accept that you should at some point stop taking the risks in Long Life, it is more plausible to hold that it is vague how many of these risks it is permissible for you to take. I am not sure that makes the position *sufficiently* plausible, though. As you say, some people do practice something like this position with respect to cycling. But it seems to me they should always be wearing their helmet, even when they are riding less frequently, at least if it is rationally required for them to wear their helmet when they are riding more frequently.
I look forward to reading your manuscript on dependent and independent risks. These cases are super interesting. I am inclined to think the patterns of response across them are explicable either by appeal to irrationality or by appeal to other distorting factors (like the one you mention). But I’ll withhold judgment on that until I’ve considered your arguments.
Thanks Joe for a great paper and everyone else for a very interesting discussion! Since I’m a bit late to the party, most things I wanted to say have already been covered. I’ll just add a thought to the above exchange between Victor and Joe.
(This is a question for Victor rather than Joe.) Victor writes that “principles concerning outcomes are basic and principles concerning risk are derivative. The moral value or disvalue of an outcome explains the value or disvalue of risking the outcome, and not the other way around. If that is right, it is not clear how the counterintuitive implications of restricted aggregation principles for risk could explain why restricted aggregationism is wrong.”
Here’s my thought: We might think that *axiological* principles concerning outcomes are basic and *axiological* claims concerning risk are merely derivative. However, when we are talking about the normative or deontic in a more direct way, as we are here and Joe is in his paper, it is unclear why this should be the case. After all, *all* choices are made under some degree of uncertainty. It thus seems implausible that our most basic normative principles should disregard risk and uncertainty entirely.
Take the following two principles.
1) Weak Ex Ante Pareto. If prospect A has higher expected value than prospect B for everyone, then we ought to choose A over B.
2) Weak Stochastic Dominance. If prospect A yields a better outcome than prospect B in all states of the world, then we ought to choose A over B.
These are principles concerning risk. Still, they are intuitively very plausible, and independently so of any principles concerning the more direct evaluation of outcomes. We also know that many (or all?) not fully aggregative theories violate one or the other of these two principles. This is usually seen as a problem for such views, and rightly so. It wouldn’t be a good reply on the part of these views to just shrug one’s shoulders and say that the view seems plausible under unrealistic conditions of certainty.
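To make the conflict vivid, here is a schematic illustration (with invented numbers, not an example from Joe’s paper). Suppose each of $N$ people assigns disutility $m$ to a migraine and $D$ to death, with $D \gg m$. Option A: every person certainly suffers a migraine, for an expected disutility of $m$ per person. Option B: one person, selected by a fair lottery, is killed, for an expected disutility of $D/N$ per person. Whenever $N > D/m$, Option B carries lower expected disutility for every single person, so Weak Ex Ante Pareto directs us to choose it. Yet ex post, Option B trades one death for $N$ averted migraines, precisely the trade that partially aggregative views are built to forbid.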
Addendum: Especially since we know that all thus far proposed views of partial aggregation have intuitively implausible implications even under conditions of perfect certainty.
Victor, thanks for elaborating your challenge and approach to the debate. You write: “You suggest that … the fact that restricted aggregationism has these counterintuitive implications in risk cases (if it does) is evidence that restricted aggregationism is false. But what are they evidence of that then more directly explains why restricted aggregationism is false? I couldn’t see how that argument is supposed to go.” I think these counterintuitive implications are evidence either that our principles concerning outcomes are wrong, or that the correct principles concerning risk are not derivative of principles concerning outcomes. Of course, we might be so confident that our principles concerning outcomes are correct, and that principles concerning risk are derivative, that it is rational for us to just accept the counterintuitive implications of our principles concerning risk. But my view is that we do not yet have arguments for any principles concerning outcomes, or for the derivativeness of principles concerning risk, that are so persuasive as to justify us in accepting these counterintuitive implications.
Korbinian, great response, and thanks for joining.
Hello Korbinian
Thanks for the question. What I meant is just that when we assess whether it is wrong, permissible, justified or required to act in a way that creates a chance of some outcome, our assessment depends on some independent assessment of the value of the outcome. And the converse is not true – our assessment of the value of an outcome does not depend on an independent assessment of the value of a chance of it. The fact that uncertainty is pervasive doesn’t make it any more basic. So we still have a problem of working out how judgements about derivative chance-based principles can illuminate the truth or falsity of basic outcome-based principles. Joe, I think, is right to suggest that they might provide us with evidence that outcome-based principles are false. But that’s just an invitation to show us that they really are evidence of the falsity of outcome-based principles, and to flesh out what they are evidence of.
Hi Joe,
I don’t think that will do! The fact that lots of smart people believe some principle is also evidence of its truth. But it’s not the kind of evidence that we can rely on in philosophical argument without more. What we need is evidence that there is some value or idea that restricted aggregationism fails to attend to, where it is then shown that the idea explains the evidence. As I suggested the counterintuitive implications in risk cases do not plausibly indicate what that idea is directly, but at most provide evidence that there is such an idea. I couldn’t see what it was.
Thanks! Here’s a couple more points. It seems implausible to say that trivial harms are entirely morally irrelevant. If I could prevent everyone on earth from undergoing a hangnail’s worth of pain at very small cost, I think it would be immoral of me not to do so. And I share your view regarding tiny risks of severe harm versus trivial harms for sure. But since the contrary view is also plausible, at least one partially aggregative view has not been reductio-ed. That said, I take your point that this view won’t satisfy some advocates of existing partially aggregative views.
Victor, why not take the counterintuitive implications as evidence that partially aggregative principles concerning outcomes are mistaken, and specifically as evidence that they are mistaken *because they fail to aggregate the value of many small benefits*? The idea that n+1 of a benefit is better than n, regardless of the size of the benefit, is independently plausible. If we embrace this idea (in its simplest form), we avoid counterintuitive implications in my villain cases. So, why not think that my villain cases give us evidence that it is problematic for moral views to fail to attend to this idea?
Correction: Obviously we do not avoid counterintuitive implications in my villain cases—fully aggregative views come with a price tag. I meant to say the implications we get are less counterintuitive.
WOW, what a terrific discussion so far!
This is a comment for Joe:
I am not convinced by your response to my Challenge 2. I suggested that we can apply the Sequence Proposal to cases like Villain 3 (thereby avoiding the third horn of your trilemma), but without having to apply it to Long Life, where its implications are intuitively implausible. In your response, you suggest that this idea runs into trouble in versions of Long Life where I don’t know how past risks that I took have played out:
“Suppose you often take opportunities to spare some people from minor burdens by acting in ways that expose others to tiny chances of death. Though it seems to you that you have never killed anyone by taking these opportunities, you cannot be sure. Perhaps you did clip someone with the car on your way to the pharmacy? So, as you go through life taking these opportunities, the chance you have killed someone, from your perspective, rises steadily.“
You suggest that under these circumstances, the Sequence Proposal commits me to stop taking risky opportunities for beneficence – either right away or at some point in the future – before the subjective probability that I have killed someone gets too high. But this seems implausible.
I agree that if the Sequence Proposal in fact had this implication, that would be a problem. But the proposal doesn’t have this implication. In fact, the question “how likely is it that I HAVE killed someone?” should be *irrelevant* to a supporter of the Sequence Proposal (and indeed to any right-thinking person 🙂) in deciding how to act now. As I explained in my précis, how the risks I took in the past have played out is typically of no consequence to the permissibility of taking a risk now (at least assuming that the risks are probabilistically independent). After all, suppose that the worst-case scenario has transpired and some of my past acts of risky beneficence have in fact resulted in disaster. Even that would not, in itself, be a reason to refrain from acts of risky beneficence going forward. To assume otherwise would be to engage in something akin to sunk-cost reasoning. So, a fortiori, the mere possibility that, for all I know, some of my past risky acts of beneficence have turned out poorly is not, in itself, a reason to refrain from further such actions going forward.
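To put the point in terms of the probabilities (a sketch, with an invented per-act risk $p$ and the acts assumed independent): after $n$ acts of risky beneficence, the backward-looking chance that I have killed someone is $1-(1-p)^n$, which does indeed climb towards 1 as $n$ grows. But the forward-looking chance that my next act kills someone is still just $p$, whatever $n$ is and however the past lotteries have resolved. And on the Sequence Proposal, as I understand it, only the latter quantity bears on what I may permissibly do now.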
The question that should matter to my decision, according to proponents of the ex post view, is “how likely is it that someone WILL be killed, if I act thus and so?”. And with regard to this question, Villain 3 and Long Life (even the version of the case that you describe) are crucially different.
In Villain 3, my past choices are relevant to answering this question. The chance that someone will be killed, once the villain runs his lottery, goes up with every additional person that I choose to enroll in the villain’s lottery. So, how many people I have already enrolled in the villain’s lottery matters to the permissibility of enrolling a further one.
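Concretely, using the case’s own numbers: if I have enrolled $k$ people, each holding one of the $N$ (= ten zillion) lottery tickets, the chance that the villain’s ten draws hit at least one enrolled person is $1 - \binom{N-k}{10}/\binom{N}{10}$, which grows with every additional enrolment and becomes certain once $k$ exceeds $N-10$.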
In Long Life, by contrast, my past choices are irrelevant. Whatever I have done in the past, and however past lotteries have played out, does not affect how likely it is that someone will be killed, if I choose to perform a risky act of beneficence now.
That is why (at least for proponents of the ex post view) it is natural to assess my decisions in Villain 3 in their totality rather than in isolation, but to assess my decisions one-by-one in Long Life.
Hi Joe, thanks for the reply—that’s a great case! Two quick thoughts about it:
1. Is it really so implausible that, once you remove knowledge that you’ll kill people by introducing the die, it must still be wrong to choose (2) every time if doing so was wrong when you did know? It seems wrong to distribute a zillion pills to cure headaches when I know that one contains cyanide but don’t know which. It doesn’t seem to me so clearly wrong when I don’t know but am merely extremely confident that one contains cyanide and don’t know which. Do you find this more implausible than full aggregationism? After all, knowledge makes an enormous moral difference in other contexts—why not this one?
2. I wonder if we could reason about your modified Villain 3 as follows. Suppose first that if the die lands 1, there will be no killings or migraines: all harms are off the table. Then even though you don’t know that you’ll be killing anyone by choosing (2) every time, you know that you will in the event that your choice matters morally, which it almost certainly will, and you shouldn’t knowingly do that. Now add the migraines back in. “Aggregate relevant claims” people might reasonably say that this is not so different from the case just imagined: the one-in-a-zillion event in which you are choosing between migraines and no harm at all doesn’t give rise to a relevant claim in comparison to near-certain death. They might say that this doesn’t work for Long Life because the competing relevance of claims against risks should be restricted to risks that are dependent. They might ask: why should a person’s claim against death be relevant to another person’s claim against a migraine when those harms eventuate from completely independent risks? I’m sure this leads to other problems, but it might be a way of filling out the intuitive difference between dependent and independent risks, which Johanna mentioned above.
Quick response Joe, and I might just be repeating a bit. We can easily test whether we should always aggregate small benefits by considering standard lives for lollipops cases. What would it add to use these much more complicated cases about risk to establish that? I take it that what it is intended to add is that the intuitive view in the initial cases is inconsistent with a powerful intuition in intrapersonal risk cases, and some plausible principles about the relationship between individual cases and sequences. But the sequence cases just surreptitiously introduce interpersonal factors.
To see this, note that we can’t plausibly justify performing a very large number of low-risk-of-death acts together on the basis that this will likely be better for everyone. We must do so on the basis that the benefits to the many outweigh the harms to the few. But the move from purely intrapersonal cases to interpersonal cases itself then needs to be tested by considering whether interpersonal factors are significant. On the plausible assumption that the value of sequences of acts dominates over that of their constituent parts, we should assess whether a death that is virtually certain to result from a sequence can be justified by the fact that the sequence will also result in small benefits to very many people. And the initial lives for lollipops cases tell us that this is not so. The initial intuition that if a single act in the sequence is justified, so must an iteration of the same acts be, is unreliable given that the iteration introduces interpersonal concerns, where any interpersonal values that arise are suppressed by the apparently intuitive idea that repetition can’t make a difference. So rather than providing evidence against the interpersonal factors that restricted aggregationists rely on, these principles must be tested against such factors. That follows from the simple idea that I noted earlier – that our attitudes to risk depend upon independent judgements about outcomes, and not the other way around.
Hi Victor (this is in response to your question to me above),
I think there are indeed situations where I can permissibly perform a sequence of acts that it wouldn’t have been permissible for me to precommit to (indeed, a sequence of acts such that, if I had to settle in advance whether I would perform this sequence of acts, I should commit not to do so).
Consider the following example:
You will have the opportunity to repeatedly roll a fair die. If you roll any number other than 6, you receive a minor benefit. If you roll a 6, something bad happens. You’ve determined that the risk of rolling a 6 must be kept under 50%.
Suppose you are playing a version of this game in which you have to precommit at the outset to the number of times you will roll the die. In that case, you ought to commit to rolling no more than 3 times. (That way, the likelihood of rolling a six is approximately 42%, whereas if you commit to rolling the die 4 times, it rises to approximately 52%.)
But now imagine that you are playing a version of this game without precommitment. You have already rolled the die three times and it hasn’t come up six so far. Are you permitted to roll it a further time? Clearly yes. The die has no memory. So, it is rationally permissible for you to perform a sequence of acts in which you roll the die four times, even though it wouldn’t have been permissible for you to precommit to rolling four times (and indeed, in the version of the game requiring you to decide in advance how often you will roll, you should have committed to rolling no more than 3 times).
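For concreteness, here is the arithmetic behind those figures: the chance of at least one six in $n$ independent rolls is $1-(5/6)^n$, which comes to $1 - 125/216 \approx 0.42$ for $n = 3$ and $1 - 625/1296 \approx 0.52$ for $n = 4$. And once three non-sixes are already on the table, the further roll carries only a $1/6 \approx 0.17$ chance of a six, comfortably under the 50% cap.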
Whether an analogous story is also plausible in Long Life* depends on how plausible we find the idea that it actually matters, in itself, how likely it is that *someone* (as opposed to any *particular* person) will be killed as a result of my risky acts of beneficence. Proponents of the ex post view think so, and that is why, I suggested, they should be sympathetic to the Sequence Proposal. I, on the other hand, have my misgivings about the ex post view, as you know, and I am correspondingly less sure that the story it tells about Long Life* is a good one. But what I was doing in that part of my précis was trying to sympathetically articulate, on behalf of a proponent of such a view, what seemed to me like a good rejoinder to Joe’s objections to the Sequence Proposal.
Good point Johann. That’s clearly right in the dice cases. I’ll need to think more about the relationship with the other cases when I have a bit more time.
Joe: I wonder what you think of the following thought about intuitive resistance to full aggregation. Suppose that no matter what you do in these death vs. headaches cases, the villain will kill a thousand people. For some reason this weakens my untutored intuition that you should impose all of the headaches to prevent one more death. It’s not like I find it obvious that you should allow one more to die; I just find myself less confident that you shouldn’t. But it seems ridiculous to think that a thousand other people dying no matter what makes any difference. If this modification to the case has a similar effect on other people’s intuitions, I wonder if it suggests a debunking strategy against our intuitive resistance to full aggregation: we are somehow especially attracted to the possibility that no one will die if we do something, but of course in any real-life case, lots of people will die no matter what we do. (I imagine someone has already made this point, unless the effect on my intuitions here is totally idiosyncratic.)
Victor: Consider some valid argument to a conclusion about outcomes (e.g., that PA is false) where some of the premises are about risk (e.g., Joe’s). Is your thought that, since the moral facts about risky choices hold in virtue of the moral facts about their outcomes, it will never be the case that the premises of such an argument are collectively more plausible than the negation of their conclusion (in this case, PA)? Or is it instead that such an argument could never explain why its conclusion is true (in this case, why PA is false)? The second thought seems right to me, but irrelevant, since Joe’s aim (I took it) was to show that PA is false, not to explain why it is false, and we can know that something is false without knowing why. The first thought just seems wrong: a claim about F’s can be more plausible than a claim about G’s even though the facts about F’s depend on the facts about G’s. I wonder then if you have in mind a third thought, which is that our reason for believing any claim about risky choice must be some claim about its outcomes. But of course this doesn’t imply that if some plausible claims about risk are inconsistent with some plausible claims about outcomes, then the claims about risk must be less plausible. So, anyway, I’m not sure what you have in mind, unless it’s just that PA in particular is more plausible than the conjunction of Joe’s premises involving risk—which I happen to agree with, but not because of anything about the derivativeness of principles about risk, and which still leaves us with the very interesting puzzle of figuring out where the argument goes wrong.
Thanks Jake. That’s nicely put. It’s a bit late here and I’m running out of steam. My thought was something like: we already have intuitions about aggregation from simple lives for lollipops case. These cases already allow us to evaluate the claim that some very large number of tiny benefits can outweigh some great harm to a person in the most direct way possible. The risk cases add something only if they illuminate some value or idea that we can’t see in the simple cases. But it isn’t clear what that value could be, given that the goodness or badness of risking some outcome is just derivative of the goodness or badness of the outcome risked.
They could help if it were plausible that our outcome-based principles were justified in virtue of the independent value of reasoning in conditions of uncertainty. Then the risk cases could bring to light something valuable that the lives for lollipops cases don’t illuminate, which might lead us to revise our judgements in what initially seemed to be the more straightforward cases. But, I suggested, that isn’t plausible. If that’s right, why shouldn’t we just have more trust in our direct judgement about lives for lollipops cases than in our judgements in derivative cases that point to nothing additional of value, and that have the potential to obscure values that might be relevant in the initial cases?
I am very sympathetic to something Johanna said above, namely that it often seems to make a moral difference whether the risks to different individuals are probabilistically independent or probabilistically dependent. I agree with Johanna that this seems to constitute a further morally relevant difference between Villain 2 and 3, on the one hand, and Villain 2* and 3*, on the other.
Incidentally, the distinction between probabilistically dependent and probabilistically independent risks is connected to, but not equivalent to, a distinction that Jake asked about above, namely the distinction between what Victor somewhere calls “closed” and “open” cases. (Closed cases are cases in which it is certain, and not merely extremely likely, that someone will be harmed; open cases are cases where it is in principle possible that no-one is harmed, even if this is extremely unlikely). I think that all closed cases are cases where risks are probabilistically dependent; but in open cases the risks can be either independent or dependent.
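A quick sketch of why the first claim holds (setting aside the degenerate case in which some particular person is certain to be harmed): if each person $i$ faced an independent risk $p_i < 1$ of harm, the chance that no one at all is harmed would be $\prod_i (1-p_i) > 0$, and the case would be open rather than closed. So closure requires probabilistic dependence among the risks.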
Why might it matter morally whether we are in a case where risks are probabilistically dependent or independent? Here’s a thought (and I’d be curious if you’re thinking along the same lines in your manuscript, Johanna):
Suppose that you and I are among the Y-people in Villain 2 or Villain 3. If the agent chooses option (2), then we will both be exposed to a probabilistically dependent risk. Specifically, our fates are *inversely* correlated: I know that, if things go well for you, it is more likely that things will go badly for me, and vice versa. I therefore have self-interested reasons to hope that things will go badly for you. But this is a morally unattractive attitude to have towards another person; I should not wish ill on another innocent person. I think it’s plausible that we have at least pro tanto reason not to put people in situations where they are pitted against each other in such ugly ways, and where the salvation of one person is purchased at the expense of another’s misfortune (or at least the increased likelihood of such misfortune).
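The inverse correlation can be made precise with the villain’s own numbers: when ten of the $N$ (= ten zillion) tickets are drawn, my unconditional chance of death is $10/N$; conditional on your surviving, the ten fatal tickets must lie among the remaining $N-1$, so my chance rises to $10/(N-1)$. The difference is minute, but its sign is the point: your good news is, probabilistically, bad news for me.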
By contrast, suppose that the risks we face are probabilistically independent, as in Villain 2* or Villain 3*. I can then wholeheartedly wish you well. Your good fortune does not make it any less likely that I, too, will be fortunate. The way in which we relate to one another is not morally problematic in the way it was in Villain 2 or Villain 3.
I think that for any ethical outlook that gives some weight to people standing with one another in relations of *solidarity*, this difference between probabilistically independent and inversely correlated cases is morally significant. While I’m not sure just how much weight we should attach to this factor, it could be part of the reason why choosing option (2) may be somewhat easier to justify in a case like Villain 2* and 3* than in a case like Villain 2 or 3.
Of course, most desirable of all, for a view that prizes solidarity, would be a situation in which the risks to different people are *positively* correlated with one another: in that case, I should wish you well, not just for moral but for purely self-interested reasons.
So many great comments here. Let me try to address some of them.
Johann, I agree that the question “how likely is it that I have killed someone?” should be irrelevant. But if your view treats this question as irrelevant, doesn’t it draw a very counterintuitive moral distinction between my Villain 3 and a version of my Villain 3 in which the villain randomly chooses the lottery tickets before you make your decisions?
If the lottery is run before you make your decisions, and you don’t know which ten tickets were drawn, and the question “how likely is it that I have killed someone?” is irrelevant, then your proposed view implies that it is permissible to give a ticket rather than a migraine to each pair, doesn’t it? But your proposed view also implies that it is wrong to give a ticket rather than a migraine to each pair in my original Villain 3. So, your proposed view implies that it makes a decisive moral difference whether the lottery is run before you make your decisions or after. That seems to me very counterintuitive.
Perhaps you would say that, if the villain does not kill anyone until after you have made your decisions, then you have not actually killed anyone until you make your final decision, regardless of when the lottery is run. But even if you don’t count as having killed anyone until after you make your final decision, you do count as having massively reduced the life expectancy of certain people (possibly to just a few minutes).
Jake, it does seem to me implausible that introducing the die makes a decisive moral difference in Villain 3. Presumably, in any more realistic version of the original case, there would be a one-in-a-zillion chance of the villain dropping dead just before running the lottery. But awareness of that possibility cannot change what it is permissible for you to choose, or so it seems to me.
Victor, I don’t think it’s right that “we can easily test whether we should always aggregate small benefits by considering standard lives for lollipops cases.” The implications that fully aggregative views have in these cases are counterintuitive, but whether that is a reason to reject these views depends on the relative plausibility of alternative views. What my villain cases show is that, if we reject the idea of full aggregation (at the level of principles concerning outcomes), then we get very counterintuitive implications in at least some cases involving risk. Unless there is a way to avoid these implications without embracing full aggregation, these implications give us evidence in support of full aggregation (at the level of principles concerning outcomes), or at least evidence against alternatives to full aggregation. We must then decide whether the intuitive evidence against full aggregation is weaker or stronger than the intuitive evidence in support of it.
Jake, perhaps unsurprisingly, that seems to me a really promising debunking argument. I haven’t seen anything similar in the literature. I might now withdraw my above remark that “there is not much that can be said to make the most counterintuitive implications of fully aggregative views more palatable.”
Johann, that’s a super interesting defence of the moral importance of the distinction between probabilistically dependent and probabilistically independent risks. I think Thomas Rowe proposes something similar in his paper ‘Risk and the Unfairness of Some Being Better Off at the Expense of Others’, Journal of Ethics and Social Philosophy (2019).
As Johann said when he rejoined: “WOW what a terrific discussion!” I’m afraid I don’t have time to think through all the interesting things that have been said, and maybe someone will point out to me how what I’m about to say has already been shown to be silly. But here goes, a response to Joe’s 3 Qs:
JOE: “First, you say the view does not have fully aggregative implications in Transmitter Room, because in Transmitter Room “we know who Jones is”. Suppose the case is revised such that, though we know who each of the zillions of World Cup viewers are, we don’t know who Jones is—he must be one of the zillions of people who hate watching the World Cup. Does your view then imply that we should leave Jones to suffer rather than cutting the transmission? If we accept that result, can we really reject fully aggregative views because they imply that we should leave Jones to suffer in the original version of the case?”
Well, would a reasonable person–who may hate watching the World Cup, but who appreciates that others LOVE watching the World Cup–be willing to run some SMALL risk for the sake of those who love it? If so, then I suppose there might be a number of people large enough such that, if they were the denominator, the risk would be small enough, and then we should allow the risk to be encountered.
BUT (and this is a really important But), in sympathy with Johann’s decomposition test, it might not be permissible to say that we can allow people to accept the risk in such a way that we just stand by and allow whoever Jones turns out to be to suffer because, hey, he waived his ex post claim. There may be some ex post claims that can’t be waived that way. (Think of Shirley Jackson’s “The Lottery.”) So, while it may still be permissible to put on the World Cup even if it is really likely that someone will be electrocuted, I suspect that there’s an independent reason, having to do with the limits of what can be waived, why we can’t tolerate leaving Jones to suffer longer than necessary.
In addition, I want to echo the thought that certainty may make some difference. If it looks like someone HAS to be electrocuted for the World Cup to be shown, and it’s just a Q of who, then I think we get a kind of compound claim not to have to suffer being used as a means. First, there’s the claim not to have to waive an ex post claim for the sake of others’ benefits. Second, there’s the claim that one would suffer as a means of allowing the World Cup to be shown. I think that second claim may be robust enough that anyone could object and their objection would have to be respected.
(I want to flag that this is in some tension with what I say about Johann’s decomposition test in my paper, but trying to finesse that here would take me too far afield.)
JOE: “Second, is it correct that your view implies that we should trade lives for lollipops in my version of Lollipops for Lives? If so, then again, can we really reject fully aggregative views because they imply that we should trade lives for lollipops in the standard version?”
I’m not sure I’ve got your version straight from the standard version, so forgive me if I’m missing the point–and please clarify the point–but my sense is that we might want to say that NO risk of death can be reasonably imposed on anyone for the sake of a lollipop. To that you might reply: but surely I can drive to the corner store to get a lollipop, and in doing so, I impose a tiny risk of death on others. My response, then, would be that there’s more than a lollipop at stake. There’s the freedom to drive and the freedom not to have to question whether every instance of driving falls below some threshold reason for driving that must be met in order to drive permissibly. Make it JUST a lollipop: what risk of death may you impose on another in order to get and enjoy a lollipop, right here, right now? Or, putting it better in terms of my framework: is there any risk so low that I would be DUTY-BOUND to say: sure, go ahead, impose that risk on me? I’m not sure there is. Even if you said the risk is 1 in the number of atoms in the universe, I think I could say: “You know, I understand that that is about as close to zero as possible, but I can still say: no, not just for the sake of a lollipop.”
JOE: “Third, you say in the quote above that it is permissible to choose the deaths in Villain 2 because “none of the Ys who will die face much chance of dying, and the migraine is a real problem” and “we should waive our ex post claim not to be killed because we are SO unlikely to suffer it, and the migraine is a real problem.” Suppose that, in a different case, the villain has kidnapped A and ten zillion others. He will either (1) give A a migraine, or (2) randomly select and kill ten of the others. Does your view imply that we should choose (2), because the migraine is a real problem and none of the ten zillion face much chance of dying?”
I’ll bite: make a zillion big enough and the duty to waive kicks in.
Thanks for a great discussion everyone!
I have two quick questions for you, Joe. Since I am late to the discussion, if you have already addressed my concerns in your previous comments, please feel free to refer me to the answers you have already given.
First, your second desideratum states that, when applied to cases involving risk, any adequate version of PA must “avoid implications that are in tension with the intuitions that incline people toward partially aggregative views” in the first place. On your presentation, this desideratum rules out choosing option (2) in Villain 2 (a risk-involving case), because so choosing produces an outcome in which ten lives are lost at the price of alleviating many migraines, an outcome that PA proponents are keen to avoid in non-risk-involving cases.
My question is whether we are entitled to treat as a desideratum that PA must avoid licensing an outcome in risk-involving cases that it would not license in non-risk-involving cases. As I understand it, your second desideratum rules out ex ante versions of PA from the very outset. It seems to me, however, an open question whether ex ante theorists can successfully defend the claim that the best rationale for PA in non-risk-involving cases leads us to endorse ex ante verdicts in cases like Villain 2 (for example: PA is best supported by the idea that some individual claims are irrelevant; an individual has a claim only if she can press the claim on her own behalf; no one in Villain 2 can press the claim that she will be killed against your handing her a ticket).
The correct dialectical move, it seems to me, is to engage ex ante versions of PA on their merits (particular versions of ex ante PA might run afoul of your first desideratum) rather than ruling them out with your second desideratum, whatever rationale they might offer for treating risky and non-risky cases differently.
Second, as someone who favors the ex post approach in your Villain cases, I had a hard time appreciating the force of your Long Life case. I am drawn to the suggestion that, in Villain 3, we should stop giving out lottery tickets to the Y people once the ex post objections become sufficiently strong. Long Life is supposed to be a counterexample to this suggestion.
Now, in Long Life, am I supposed to imagine that the choice between performing and refraining from risky activities is given to me at the start of my life (or at the start of some segment of my life), which will bind my subsequent behaviour? If so, I don’t see why it is counterintuitive to choose to refrain.
Alternatively, the case is supposed to invite me to imagine making a choice whenever an opportunity to perform a risky action arises. If so, we should distinguish two versions of the case, in order for it to be sufficiently analogous to Villain 3.
A key feature of Villain 3 is that, whether the villain runs the lottery before or after I hand out the tickets, I won’t find out who (if anyone) will be killed until after I have made all my choices. The big reveal, so to speak, comes at the end. Similarly, we can imagine a version of Long Life in which the outcome of all my risky activities will be revealed only on the last day of my life (judgment day). If this is how we are to imagine Long Life, then I don’t find it counterintuitive to restrict the number of risky activities we perform.
On the other hand, we can modify Villain 3 in the following way: for each ticket I give out, the villain immediately reveals whether the person to whom the ticket is given will be killed. If this is how we are to imagine Villain 3, then on the ex post approach, we should give each Y person a ticket (since each person’s ex post likelihood of being killed is only about one-in-a-zillion). Similarly, in an analogous version of Long Life, where the eventuation of risk is revealed immediately upon the performance of a risky action, the ex post approach permits us to perform each instance of these risky actions.
My worry, then, is that once Villain 3 and Long Life are made appropriately analogous, Long Life ceases to serve as a counterexample to the ex post approach.
In any case, thank you, Joe, for such a stimulating article!
I feel I might be belaboring the obvious, but:
1. “MIGRAINES VS DEATH from aspirin” – usually we conceive this as the same individuals taking the risk for their own benefit, with those at higher risk of complications individually minimizing their risk. At the aggregate level this may look like Villain 3, but not from the “consenting adult” POV.
2. “Long Life” is not that different from the career of any medical practitioner – most doctors can think of at least one person who has been killed by, say, an epistemic shortcoming on their part. Fortunately, they can offset this by considering lives actually saved, as opposed to migraines averted.
3. Do more elaborate aggregations such as DALYs/QALYs avert some of these problems, or do they lead to even more clashes, e.g. mask-wearing vs. deaths of those with already short life expectancies?
Erik and David, sorry for the late response. I was away for a few days without internet.
Erik, thanks for these extremely well-presented challenges. On the first… I am not sure why “the correct dialectical move … is to engage ex ante versions of PA on their merits”. A proponent of a fully aggregative view might similarly say that we cannot simply dismiss their view “from the very outset” by appeal to cases like Lollipops for Lives, but it seems to me that would not be a good way for them to defend their view. I also don’t think my second desideratum rules out ex ante partially aggregative views from the very outset, for it is open to proponents of these views to argue that the implications of their views in cases like Villain 2 and my version of Lollipops for Lives are not really “in tension” with their intuitions about cases without risk. However, these implications and intuitions do seem to me in tension.
On the second… You say “a key feature of Villain 3 is that, whether the villain runs the lottery before or after I hand out the tickets, I won’t find out who (if anyone) will be killed until after I have made all my choices. The big reveal, so to speak, comes at the end.” Suppose you are in a version of Villain 3 in which the lottery was run in advance, you have already given out a number of lottery tickets, and you are now deciding whether to give a ticket for the next pair. On the view you describe, it can make a crucial difference whether you know whether any of your previous decisions to give out tickets killed anyone. This seems to me problematic in two ways. First, as Johann says above, the question “how likely is it that I have killed someone?” seems morally irrelevant to your subsequent decisions. Second, your view seems to make the question about your previous decisions morally relevant in a very peculiar way. For you know that, were you to get an answer to this question, it would become permissible for you to give a ticket for the next pair, regardless of what the answer is—that is, regardless of whether you have killed no one, one person, or several people. But, surely, if you know that, were you to learn the answer, it would become permissible for you to give a ticket to the next pair regardless of what the answer is, it is morally irrelevant that you do not know the answer, and hence already morally permissible for you to give a ticket to the next pair.
David, thanks for these thoughts. On the first… “Migraines Vs Deaths” is usually used in this debate to refer to a case in which you can save either some very large number of people from migraines or some much smaller number of *different* people from death. On the second… the choices made by medical practitioners might be importantly morally different from your choices in Long Life because medical practitioners typically have the consent of the people they expose to risk. On the third… I don’t know enough about the use of DALYs and QALYs to answer with much confidence, but my sense is that adopting these measures would not help us avoid puzzles about aggregation.
Thanks again to everyone for the many excellent questions and challenges!