In a famous passage in *What We Owe to Each Other*, T.M. Scanlon introduced a case where we have to choose between saving one person from a terrible harm and saving an enormous number of people from much smaller harms. This quickly inspired an enormous literature on whether the right approach to such moral questions is aggregative or non-aggregative.
In this debate, I claim that one can simply have one’s cake and eat it. Both aggregative considerations and non-aggregative considerations are genuine sources of reasons. The right response is simply to weigh these two kinds of reasons against each other.
Here is Scanlon’s famous case (p. 235):
Suppose that Jones has suffered an accident in the transmitter room of a television station. Electrical equipment has fallen on his arm, and we cannot rescue him without turning off the transmitter for fifteen minutes. A World Cup match is in progress, watched by many people, and it will not be over for an hour. Jones’s injury will not get any worse if we wait, but his hand has been mashed and he is receiving extremely painful electrical shocks. Should we rescue him now or wait until the match is over?
In this post, I shall simply describe my view of this case – a view that to my knowledge has not yet been discussed in the vast literature on this topic. The task of defending this view will have to await another occasion.
According to my view of this case, then, at least if sufficiently many viewers are watching the match, there are the following two competing reasons, one in favour of each course of action:
- Waiting until the match is over is better for the common good of the whole group consisting of everyone who would be affected, while rescuing Jones now is worse for the common good.
- Waiting until the match is over is unfair to Jones (who is forced to bear the whole cost of this benefit to everyone else), while rescuing him is fairer (since then the cost is shared out among almost everyone).
The notion of the “common good” that I am deploying here is straightforwardly aggregative. Suppose that there is a way of measuring the degree to which each of these courses of action is worse than the alternative for each individual in the group. (If the course of action is optimal for the individual, the degree to which it is worse for the individual = 0.) Then the measure of how bad each course of action is for the common good of the whole group is just the sum of the measures for these individuals. The lower this overall sum, the better the course of action is for the common good of the whole group.
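To make the structure of this measure explicit, here is a minimal formalization; the notation ($d_i$, $G$, $\mathrm{Badness}$) is my own, introduced only for illustration:

```latex
% Minimal formalization of the aggregative "common good" measure described
% above. The symbols d_i, G, and Badness are my own notation, not the post's.
%
% d_i(X): the degree to which option X is worse for individual i than the
%         alternative, with d_i(X) = 0 whenever X is optimal for i.
\[
  \mathrm{Badness}(X) \;=\; \sum_{i \in G} d_i(X), \qquad d_i(X) \ge 0,
\]
% where G is the whole group of affected individuals; the option with the
% lower sum is the better one for the common good of the group.
```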
On the other hand, the notion of “fairness” that I am appealing to here is non-aggregative. Fundamentally, this notion of fairness is the one that John Broome articulated in his 1991 Aristotelian Society paper “Fairness”. This notion of fairness presupposes the idea of the strength of an individual’s claim.
To keep things simple, suppose that there are just two options available to you, A and B. Very roughly, let us assume that an individual has a “claim” that you do act A rather than B if and only if the individual is in relevant respects worse off if you do B rather than A.
The “strength” of this claim is an increasing function of both (a) how much worse off the individual is if you do B than they would be if you did A, and also (b) how much worse off the individual is than other affected individuals if you do B. The degree to which your doing B satisfies the individual’s claim is in inverse proportion to the strength of the individual’s claim that you do A instead of B.
To be perfectly fair, a course of action must satisfy every individual’s claims to the same degree. The degree to which an action is unfair is a measure of how far it departs from this ideal of perfect fairness.
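For readers who like to see the contrast with the aggregative measure spelled out, here is one rough way these ideas might be regimented. The particular functional forms below (including the use of the max-min spread of satisfaction degrees) are just illustrative placeholders of my own; neither Broome’s paper nor the view itself fixes them:

```latex
% A rough regimentation of the non-aggregative fairness measure sketched
% above. The symbols s_i, Delta_i, r_i, sat_i, and Unfairness, and the
% max-min spread, are illustrative assumptions only.
\begin{align*}
  s_i &= f\bigl(\Delta_i,\; r_i\bigr), \qquad f \text{ increasing in both arguments},\\
      &\quad\text{where } \Delta_i = \text{(a) how much worse off $i$ is under $B$ than under $A$},\\
      &\quad\phantom{\text{where }} r_i = \text{(b) how much worse off $i$ is than others under $B$};\\[4pt]
  \mathrm{sat}_i(B) &\propto 1/s_i \quad\text{(the degree to which doing $B$ satisfies $i$'s claim to $A$)};\\[4pt]
  \mathrm{Unfairness}(X) &= \max_i \mathrm{sat}_i(X) - \min_i \mathrm{sat}_i(X).
\end{align*}
% Perfect fairness (Unfairness = 0) obtains exactly when every individual's
% claims are satisfied to the same degree.
```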
In Scanlon’s case, neither course of action is perfectly fair. But it seems clear that the degree to which rescuing Jones now departs from perfect fairness is much less than the degree to which waiting until the match is over departs from perfect fairness.
If you rescue Jones now, then there is only a tiny difference between the degree to which you satisfy Jones’s claims (viz. completely) and the degree to which you satisfy each television viewer’s claims (you fall very slightly short of completely satisfying their claims). On the other hand, if you wait until the match is over, there is a huge difference between the degree to which you satisfy each television viewer’s claims (viz. completely) and the degree to which you satisfy Jones’s claims (you fall extremely far short of satisfying his claims).
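To see concretely how the two measures can come apart in Scanlon’s case, here is a worked example with purely made-up numbers, using the notation from the sketches above; the figures are illustrative only:

```latex
% Purely hypothetical numbers: suppose waiting costs Jones 1,000 units of
% well-being, while rescuing him now costs each of N = 10^7 viewers 1 unit.
\[
  \mathrm{Badness}(\text{wait}) = 1000,
  \qquad
  \mathrm{Badness}(\text{rescue now}) = 10^{7} \times 1 = 10{,}000{,}000,
\]
% so the aggregative measure favours waiting. But because the degree of
% satisfaction varies inversely with the strength of the unmet claim, the
% satisfaction profiles (again with illustrative values) look like this:
\[
  \text{rescue now:}\quad \mathrm{sat}_{\mathrm{Jones}} = 1,\;
  \mathrm{sat}_{\mathrm{viewer}} \approx 0.999
  \;\Rightarrow\; \mathrm{Unfairness} \approx 0.001;
\]
\[
  \text{wait:}\quad \mathrm{sat}_{\mathrm{viewer}} = 1,\;
  \mathrm{sat}_{\mathrm{Jones}} \approx 0.001
  \;\Rightarrow\; \mathrm{Unfairness} \approx 0.999,
\]
% so the fairness measure strongly favours rescuing Jones now. Whether the
% large aggregative advantage of waiting outweighs this large unfairness is
% exactly the weighing question taken up below.
```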
Both promoting the common good and treating people as fairly as possible are good features of actions. Unfortunately, they sometimes pull in opposite directions. In those cases, they simply need to be weighed against each other.
At one extreme, a small loss to the common good can be justified by a sufficiently great improvement in fairness; at the other extreme, a small loss in fairness can be justified by a sufficiently great improvement to the common good. In between these two extremes, it becomes hard to say what to do, but in at least some of these intermediate cases, both courses of action will be permissible.
Hi Ralph,
The degree to which you satisfy a claim is surely specific to that claim. So if we save Jones now, then Jones’ claim is fully satisfied but the claim of any given fan is not satisfied at all. You seem to suggest that each of the fans’ claims is partially satisfied if we save Jones. Could you perhaps explain some more about what you mean by the ‘degree’ to which a claim is satisfied?
Ralph, my comment here may or may not be of any help with your specific approach, but frankly I cannot address the latter without these other thoughts arising first. I have never liked this particular case, because it is so under-described and/or treats the two types of claims as completely isolated from any past context or future consequences. There are some counter-examples to aggregative reasoning which strike me as truly troubling, but this has never been one of them. For from consequentialist aggregation we can derive the television company’s strong obligation to protect its workers from such mishaps, making it strictly liable for these in a way it is not for its promise to broadcast, barring some unusual circumstances to the contrary. If we don’t do that, then companies, who are the primary agents able to affect such risks, will be strongly incentivized to allow such risks in the future, with very bad overall consequences. If we do, then companies must first limit workers’ risks, and then make sure they are providing a valuable product within that constraint. The market can sufficiently incentivize them to do the latter; only liability for workers’ rights can do the former adequately, as history has often shown. Hence we should of course rescue Jones, and let the chips fall where they may, probably making a lot of people angry at the television company, which now has a strong incentive to make sure that such circumstances never again arise. But this conclusion is reached via entirely consequentialist, aggregative reasoning. We need only remember that the common good involves not only the good of the viewers and the good of Jones, but also the good of all future workers whose conditions of employment might put them at risk.
Of course, one can simply take the lazy philosopher’s way out and stipulate that there will be no future effects of this sort, without a detailed explanation of how this is so; but then we have to imagine a very different situation from the one initially described and reasonably expected to hold, or a very different world from the one we live in. If we do vividly describe a very different situation, then both the consequentialist verdict and our intuitive responses might change in lockstep. Say, if Jones had in some way taken on some special responsibility for the broadcast, and the risk that led to his injury was not mandated by his employer: e.g., he put his own unauthorized equipment in (perhaps testing it in order to sell his own version thereof to the company) and his carelessness in doing so resulted in the accident.
I have a worry about this “fairness” factor, that it’s not actually independent of other gains or losses.
If someone knocks down my house, I suffer a loss. I think that loss is already modulated by the extent to which other people are affected. To say that my house (alone) being knocked down and a tornado knocking down the houses of everyone in the town are *the same* loss to me just seems… a bit insensitive to differences.
Or maybe it’s that when you count unfairness, you count again negative utility that has already been counted. If event X causes me -20 utility and my friend -1 utility, to say that there is *also* 19 unfairness there seems to negate the point of the utility measure. If fairness is a real thing, then it should already be included in the utility that I have suffered, shouldn’t it?
I’m not sure if I’ve really got to the heart of the problem yet, but I certainly feel that this fairness measure involves a kind of double-counting.
I also think that Scanlon’s problem suffers from many of the same issues that plague trolley problems. First, the prediction problem – that certainty about how much disutility will be caused is unrealistic. Second, the systems problem – in real life, that problem simply doesn’t arise, does it? The broadcast is stopped, and you rescue the guy in front of you. Because that’s how we build our systems: we don’t actually tolerate that kind of “dilemma”; we fence such situations off, and prevent them from coming into being. That seems like a much more important truth about human morality than “what would happen if…”