A great deal of ink has been spilled attempting to show that contractualism can, or cannot, accommodate “numbers” in a plausible way. Contractualism aspires to provide an attractive and theoretically robust alternative to consequentialism and the unrestricted interpersonal aggregation that consequentialism implies (at the foundational level, anyway), but the abiding worry about the contractualist approach to aggregation has been that it proves too much: while it rejects appealing to numbers in some cases where that rejection seems correct, it also rejects appealing to numbers in cases where numbers seem clearly relevant or even dispositive. What I want to suggest here is a modestly deflationary way in which contractualism might accommodate the relevance of numbers.
Rahul Kumar and I are working together on a paper on contractualist risk imposition, and while a discussion of aggregation may not make the paper's final cut, I've been trying to figure out a way for a contractualist to consider population risk in addition to individual risk (i.e., the number of people who are subject to risk at any given level, and not just the level of risk that any one person is subject to). This puzzle is of course just one instance of the wider challenge a contractualist faces in dealing with interpersonal aggregation. So here is my proposal: interpersonal aggregation is morally permitted or required, according to contractualism, if, but only if, we first satisfy what we owe to each other. Take the standard life-saving case, where one person is on the first rock and five are on the second, and you can only rescue those clinging to a single rock. On my proposal, an individual claim from each rock cancels out or silences the other, providing, in particular, the recognition that is owed to the single person on the first rock. At that point contractualist reasons run out (they are fully satisfied), and non-contractualist (though not anti-contractualist) moral reasons about states of affairs, or just general goodness, can account for saving the greater number on the second rock. This, it seems to me, captures what is right about Scanlon’s, Kamm's, and Kumar’s approaches to aggregation while also capturing what seems right about Raz's deflationary account of aggregation, which revolves around reasons and values simpliciter. The claim of the one who ends up not being rescued matters, but it is met by a symmetrical reason, so that what we owe to each other has been satisfied; at that point we look to the fact that there are four more people whose lives could be saved, and the world accordingly made better, if we rescue them.
This approach takes seriously Scanlon's claims about the limited moral domain that his theory covers. What we owe to each other remains the most important part of morality, but it is only one part. My proposal holds that what we owe to each other must still be satisfied, but it also recognizes that satisfying it is not always the last word on what to do all things considered, and that if contractualist reasons run out — as I think they do when symmetrical claims cancel out or silence each other — then we can appeal to the domain of non-contractualist moral considerations (whose existence Scanlon implicitly recognizes) to resolve the matter. On this picture, then, contractualism does not *itself* solve all problems of aggregation (though it will solve some, like the Transmitter Room case), nor should it be expected to. It just doesn’t stand in the way of interpersonal aggregation in those recalcitrant cases where it really seems that the numbers do matter. What we owe to each other will sometimes be dispositive in and of itself, but even when it is not, it will nevertheless constrain non-contractualist moral reasoning. This approach differs from Raz's more thoroughgoing deflationary account of aggregation because it accepts the primacy of contractualist reasoning about what we owe to each other, but it shares with Raz the idea that in certain cases (in my view, though not Raz's, those where contractualism itself is indifferent) the numbers will matter simply because we can have reason to do that which preserves more value.
This of course implies that certain aggregation cases are not purely questions of what we owe to each other, but that's an implication that I think we can live with. I for one don't actually have the intuition that we owe it, as a matter of narrow contractualist duty, to each in the larger set to save them qua members of the larger set. Of course I do think that we ought to save the larger number, but it seems to me that we can, as contractualists, unapologetically do precisely that — it just won't be required by what we owe to each other.
I'd be grateful for your thoughts.
Hi John,
It seems to me that your proposal handles the standard life-saving case quite well, and I agree that contractualists should acknowledge that other (e.g. consequentialist) reasons can be morally decisive when contractualist reasons cancel each other out. But I wonder whether your approach can plausibly handle cases in which numbers seem to matter but the contractualist reasons don’t fully cancel each other out.
If faced with the choice between saving one person from losing an arm and a hand, and saving 1000 (or more, if that number isn’t high enough to convince you) people from losing an arm, it seems pretty clear to me that we should save the 1000 from losing an arm. But standard contractualism seems to give the contrary result.
We could attempt to preserve standard contractualism by claiming that the strengths of the objections that the one and each of the 1000 would have to a principle permitting us not to save them in this case are equal – that is, we could claim that the distinctions in the strengths of contractualist reasons are rather coarse-grained – but this seems pretty implausible to me, since losing an arm and a hand is clearly worse than losing an arm.
I’m not sure, then, that your proposal allows aggregation in a wide enough range of cases.
I’m a little puzzled as to what the motivation for the proposal would be. I’ve actually argued, in my ‘What We Owe to Many’ (Social Theory and Practice, 2004; in that paper I also try to say something about population risks), that contractualism can give the right intuitive results in the life-saving cases without interpersonal aggregation. And, as Mike Ridge, Parfit, and others have argued, the contractualist rationale for why we ought not to do the wrong thing seems to fit better with principles that no one can reasonably reject on the basis of their own personal objections to different principles.
So, in this situation, as a contractualist, before being willing to consider interpersonal aggregation, I would want to know what’s wrong with not aggregating claims interpersonally. What’s wrong, for instance, with the Kamm/Scanlon tie-break argument? That argument won’t accept your claim that contractualist reasons run out once the first-order, death-based burdens of individuals cancel one another out. I know Otsuka argues that the tie-break argument actually aggregates claims interpersonally, but this doesn’t seem right, as Hirose and others have argued.
Thanks very much for your comments Brian and Jussi. And I apologize to Jamie for posting so closely on the heels of his post.
Brian, I would want to embrace something akin to the solution you suggest (but reject) to the arm-and-hand loss versus 1000 arm losses case. I don’t think, however, that the basis for that tack has to do with contractualist reasons being “coarse-grained” per se. It’s due rather to what Kamm calls the principle of irrelevant utilities (Scanlon follows her on this point). Now, admittedly, your example is a closer call than the standard one that’s given to substantiate this principle — we’re not choosing between an arm plus a headache, on the one hand, and a million headaches, on the other, for example. I’m not sure what else to say other than to admit that a Sorites argument can indeed be deployed to put a lot of pressure on the principle of irrelevant utilities, and to say that I think Sorites arguments in general are dodgy. Maybe a clarification is in order here, though: in making that move, I wouldn’t be denying the relevance of losing a hand tout court, in the way that one could plausibly deny the relevance of having a headache tout court; rather, I’d be denying its relevance when coupled with the much more severe loss of one’s arm.
Jussi, as a general matter, I think I probably believe that contractualism’s individualist or personal reasons restriction is more robust and also more essential to the theory than you do. But here’s a risk hypo that will test that prediction and which, in any case, motivates my concern: an airline can use one of two planes to transport their passengers from A to B, Plane 1, which is a super jumbo jet like the new double-decker Airbus, or Plane 2, which is a standard jet that carries half as many people. Now, as take-offs and landings are the riskiest part of air travel, it follows that choosing Plane 2 would be the riskier option, as more flights — and thus more take-offs and landings — would be needed to get everyone from A to B. Now, from the perspective of any given passenger, their individual risk would be the same whether they fly on Plane 1 or Plane 2, because taking either plane involves only one take-off and landing — the risk to them would be the same in either case. On the assumption that the individualist restriction permits only individual risk to be taken account of, then, contractualism must be indifferent between the planes used to get people from A to B. And yet if Plane 2 is selected, the population risk will be higher — that is, more people are likely to die being transported from A to B because using Plane 2 will necessitate more risky take-offs and landings. For this reason, it seems to me that we should certainly opt for Plane 1, but it seems no less clear to me that contractualism itself says that we should flip a coin to choose between the planes. And so I find myself wanting to take up Scanlon’s claim that the morality of what we owe to each other does not account for all of our moral reasons, paving the way for appeal to population risk, which in this case tips the balance in favor of Plane 1.
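As an aside, the structure of the plane hypothetical can be made numerically concrete. The sketch below is purely illustrative: the per-flight crash probability `q`, the passenger counts, and the reading of “population risk” as the chance that some fatal crash occurs in the course of the operation are all stipulations of mine, not claims from the thread or real aviation data.

```python
# Illustrative sketch of the individual-risk vs. population-risk contrast.
# All numbers are stipulated for illustration only.

q = 1e-6  # assumed probability that any one flight ends in a fatal crash

# Each passenger flies exactly once regardless of which plane is used,
# so each passenger's personal risk is q under either policy.
individual_risk_plane1 = q  # one flight on the jumbo jet
individual_risk_plane2 = q  # one flight on the smaller jet
assert individual_risk_plane1 == individual_risk_plane2

# "Population risk", read here as the probability that some fatal crash
# occurs while moving everyone from A to B:
p_crash_plane1 = q                    # one risky flight
p_crash_plane2 = 1 - (1 - q) ** 2     # two risky flights, roughly 2q

print(p_crash_plane2 > p_crash_plane1)  # True
```

On these stipulations each passenger faces exactly the same risk under either policy, while the probability that a fatal crash occurs somewhere in the operation roughly doubles when two flights are needed, which is one way of cashing out the distinction between individual and population risk that John is drawing.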
John,
Why do you think the first point? I defend the personal reasons restriction in my paper and argue that, even with this restriction, you can get the right results in the aggregation cases. How much more robust and essential could the restriction get? I don’t really understand…
In the example, I don’t see how contractualism would have to be indifferent between the planes people use to get from A to B. Remember that contractualism assesses acts not by their consequences for individuals in one-off cases but rather by whether the acts are forbidden by principles that no one can reasonably reject. And whether principles are reasonably rejectable is determined by what kinds of standpoints their general adoption would create for individuals.
So, we have to compare two worlds: one in which only planes of type 1 are allowed (world A) and another in which passengers are free to choose between Plane 1 and Plane 2 (world B). If the risk is higher with Plane 2, there will be more fatalities in world B than in world A. If the tie-break argument works, there will be some individuals in world B who have stronger personal objections (intrapersonally aggregated) to the principles they live under than any individual has in world A. This means that the permissive policy can be reasonably rejected whereas the Plane-1-only policy cannot be.
This means that contractualism with the personal reasons restriction gives us the right intuitive result. This is, anyway, my argument in my paper. Of course, there is a further complication: world A severely limits the choices of individuals. If Scanlon’s value-of-choice argument holds, then lacking a choice about how to travel is one objection that must be intrapersonally aggregated with the objections plane-crash victims have. This means it might turn out that objections against the Plane-1-only policy are more serious in the pairwise comparisons.
But nothing so far shows that pairwise comparisons of personal objections, intrapersonally aggregated, fail to give the right results on questions about risky policies.
Thanks for your response, Jussi. You write “[i]f the risk is higher with plane 2, there will be more fatalities in world B than in world A.” But this is not the case unless you build the number of people affected into your assessment of the risk, which is just what I’m trying to avoid by distinguishing between individual and population risk. (I’m of course guilty of presenting a convoluted hypothetical — sorry.) The individual risk to any given passenger is identical whether Plane 1 or Plane 2 is selected, but the population risk is higher if Plane 2 is selected, such that there will be more fatalities in world B. Although contractualist reasoning applies to principles of conduct, and so assesses what Kumar helpfully calls “social worlds” and not individual acts, I don’t believe that move is sufficient to address this case. First, one’s personal reason for rejecting the principle that prevails in world B cannot, of course, be “there will be more fatalities”, as that would be an impersonal reason. Second, the move also cannot be “but I am more likely to die in world B than in world A”, which is what I take you to be saying when you write “there will be some individuals in the world B who will have stronger personal objections…”, precisely because the individual risk is held constant across the two worlds. So I remain worried about trying to handle all numbers cases using only contractualist resources. It seems to me that a more modest contractualism, one which provides something akin to side constraints in certain numbers cases, is not a contractualism we should be embarrassed to uphold.
No – I think you are missing the point. Imagine that in world A, where the Plane-1-only policy has been adopted, 7000 people throughout history die in plane crashes, while in world B, with the riskier policy, there are 10000 fatalities.
Now, an individual, call her Ann, can make an objection to the principles that caused her death in world A based just on the fact that she died. Ben, a person in world B, can make an identical complaint against the riskier policy in his world. So now we are in the tie-break situation.
Then we can apply the tie-break argument. At this point, at first glance, it seems that contractualism would have been neutral between the world-A policy and the world-B policy had there been the same 7000 casualties in world B. But at this point Ben can make the additional complaint that his presence makes no difference to our moral reflection.
As a result, where any individual in world A can complain only about his or her death, some individuals in world B can complain both about their death and about the fact that, if we don’t adopt the save-many policy, their presence has no moral weight at all. This intrapersonally combined objection is more serious than any personal complaint in world A. And this means that the riskier policy is reasonably rejectable even if we accept the individualist restriction on the grounds for reasonable rejection.
This is just the basic application of the Kamm/Scanlon argument that I give in my paper for this type of situation. I still don’t see what’s wrong with it.
Thanks very much, Jussi, I appreciate the clarification and I’ll be sure to read your paper on the issue. I’m certainly open to embracing a more thoroughgoing contractualism than my post suggests. I’ll just have to spend some more time thinking this through.