Welcome to what we expect will be a very interesting and productive discussion of Gerard Vong’s “Weighing Up Weighted Lotteries: Scarcity, Overlap Cases, and Fair Inequalities of Chance.” The paper is published in the most recent issue of Ethics, and is available here. Nate Sharadin kindly agreed to contribute a critical précis, and it appears immediately below. Please join in the discussion!
Nate Sharadin writes:
Gerard Vong’s “Weighing Up Weighted Lotteries: Scarcity, Overlap Cases, and Fair Inequalities of Chance” has two main aims. The first is to convince you that the fair procedure for distributing benefits in equal conflict cases (cases where all claimants have equally strong claims to the benefits, there are no other morally relevant differences between them, and the benefit cannot, say because of indivisibility, be distributed equally to all) must use a weighted lottery. The second is to convince you that, among weighted lotteries, the fairest weighted lottery procedure is his, what he calls the exclusive lottery procedure. I learned a lot by reading and thinking about Vong’s paper, and I’m pleased to have the chance to introduce it here. I’ve divided this too-long précis into two sections. The first section lays out Vong’s arguments and some of his technical machinery. The second raises a worry with his approach. If you’ve already read Vong’s paper, you can skip right to the second section. If you haven’t, or would like a refresher, read both.
- Unweighted, Weighted, and Composition-Sensitive Lotteries
The argument against unweighted lottery procedures is straightforward. Take an equal conflict case. An unweighted lottery theory tells us that fairness requires we give each and every claimant an equal chance of benefiting. So far, so good. But now imagine that there is some overlap between claimants in different outcome groups, i.e., groups of claimants that can be benefited. Then, an unweighted lottery procedure entails that there is no fair distribution of chance of benefit. Here is why: Suppose we can benefit A&B, A&C, D&E, or D&F, and that benefiting no one is not an option. Then, as Vong points out, while we can give each outcome group equal chances, we cannot give each claimant equal chances. (B’s chance of benefiting just is the chance that A&B is selected, while A’s chance is that chance plus the chance of A&C; equalizing the two forces A&C’s chance to zero, which gives C a zero chance, and so on until every chance is zero.) Hence the unweighted lottery procedure entails that no chance distribution is fair. But that is implausible: pretty clearly, there is a fair way to distribute the chance of benefiting in such cases, viz. to divide it equally across the outcome groups.
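To make the trouble concrete, here is a small Python sketch of the natural equal-division-across-groups proposal and why it fails at the level of claimants. (The function name `claimant_chances` and the use of exact fractions are mine, for illustration, not Vong’s.)

```python
from fractions import Fraction

def claimant_chances(group_chances):
    """Given a map {outcome_group: chance}, return each claimant's total
    chance of benefiting: the sum of the chances of the groups
    containing that claimant."""
    chances = {}
    for group, p in group_chances.items():
        for claimant in group:
            chances[claimant] = chances.get(claimant, Fraction(0)) + p
    return chances

# Vong's example: we can benefit A&B, A&C, D&E, or D&F.
groups = [frozenset("AB"), frozenset("AC"), frozenset("DE"), frozenset("DF")]

# The natural move -- equal chances across outcome groups -- does not
# give claimants equal chances, because A and D each sit in two groups.
equal_across_groups = {g: Fraction(1, 4) for g in groups}
result = claimant_chances(equal_across_groups)
# A and D end up with a 1/2 chance each; B, C, E, F with 1/4 each.
```

Dividing chances equally across the groups is (as the précis says) the intuitively fair thing to do here; the point of the sketch is just that it is not *claimant*-equal, which is what the unweighted lottery theory demands.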
(The natural revision of such unweighted lottery theories — an equal-as-possible unweighted lottery — is, Vong argues, also unacceptable: it implies that, in situations where we can give each and every claimant an equal zero chance of benefiting, doing so is more fair than any alternative non-equal distribution of the chances. But, again, that is implausible.)
Vong diagnoses the problem with unweighted lottery theories as driven by their failure to appreciate two things: absolute fairness and outcome group composition. Take the first of these first. Comparative fairness is a matter of how one claimant’s claims are treated as compared to other relevantly similar claimants’ claims. If all equally strong claims are treated equally (e.g., by contributing equal chances to the unweighted lottery), then comparative fairness is fully realized. If all we’re concerned about when we’re concerned about fairness is comparative fairness, then, in equal zero benefit cases, what it’s most fair to do is give every claimant an equal zero chance of benefiting. Again, that’s implausible on its face. Hence, absolute fairness: absolute fairness is what is promoted when a claimant’s chances of receiving a benefit to which they have a claim are increased, without regard to others’ (potentially equally strong) claims. If all we’re concerned about when we’re concerned about fairness is absolute fairness, then, in equal zero benefit cases, what it’s most fair to do is give the outcome group with the most claimants a 100% chance of benefiting. But that, too, is implausible on its face. What we need to do, then, Vong argues, is pay attention to both notions of fairness: we should be pluralists about fairness and care instead about ‘all-things-considered’ fairness, comprising both notions. I’m going to return to the idea of all-things-considered fairness and the way in which it’s composed of both comparative and absolute fairness below, in raising a worry for Vong’s account.
What about outcome group composition? Vong’s idea is that the fair lottery procedure must be composition-sensitive, i.e., sensitive to the composition of the outcome groups (again, the groups that can be benefited and that in turn contain the claimants). But there are many ways of being sensitive to the composition of outcome groups. How can we constrain our options? Vong’s suggestion: by reflecting on our considered judgments about the relative importance of comparative and absolute fairness in contributing to all-things-considered fairness. Importantly, as we just saw, comparative and absolute fairness can push in different directions when it comes to our selection of an outcome group. In equal conflict cases comparative fairness requires that each claimant contribute equally to the selection of the outcome group. But absolute fairness, since it requires in general that all claimants have a 100% chance of receiving a benefit they are due, will in equal conflict cases simply pick the largest outcome group and assign that outcome group a maximal chance of benefiting (since that is how we can maximize absolute fairness).
Vong’s judgment (and here he agrees with his opponents) is that, in equal conflict cases, it is the most comparatively fair procedure that is most all-things-considered fair; in other words, whatever composition-sensitive procedure we select, it must be such that each equally worthy claimant contributes equally positively to the outcome group selection. Again, I’ll return to this idea below, in raising a worry with Vong’s approach.
This idea, that comparative fairness is what matters most, narrows the field: no simple ‘assign-the-largest-outcome-group-100%’ composition-sensitive procedure will satisfy the requirement to be comparatively fair. But it still leaves a wide range of possible procedures, among them these three: the equal composition-sensitive lottery, the exclusive composition-sensitive lottery, and the iterated individualist composition-sensitive lottery.
Equal composition-sensitive lotteries work like this: the chance contribution of a claimant, A, to each group of which A is a member is a proportion both of the number of claimants, c, and the number of outcome groups, Oa, of which A is a member, i.e., each claimant contributes a chance equal to (1/c)/Oa to every group of which A is a member. Intuitively: if there are 5 claimants divided into outcome groups like so: A&B, A&E, C&D, then B, C, D, and E each contribute their ((1/c)/O = (1/5)/1 =) 20% to their respective groups, while A contributes ((1/c)/Oa = (1/5)/2 =) 10% to both A&B and A&E. The advantages, in terms of comparative fairness, are clear: each claimant makes an equal contribution (hence, equal composition-sensitive) to the chances. But, like its unweighted brethren, it falls to our judgments about overlap cases. Consider Vong’s example. Suppose we have 1,000 people numbered sequentially, and three outcome groups: G1, comprising claimants 1-500, G2, comprising claimants 501-1,000, and G3, comprising claimants 2-999. (The existence of G3 is what makes this an overlap case.)
The equal composition-sensitive lottery delivers the ‘startlingly implausible result’ that it is most fair to give a greater than 50% chance that either G1 or G2 (rather than the 998-member G3) is saved. As Vong points out, this is an ‘affront to absolute fairness’: it simply cannot be that all-things-considered fairness requires giving 998 claimants a worse chance of benefiting than 500.
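A short Python sketch of the procedure, as I’ve reconstructed it from the formula above (the name `equal_lottery` is mine, for illustration), reproduces both the 30/30/40 result and the implausible 1,000-claimant numbers:

```python
from fractions import Fraction

def equal_lottery(groups):
    """Equal composition-sensitive lottery: each of the c claimants
    contributes (1/c)/O to every outcome group they belong to, where O
    is the number of groups containing them. Returns group chances."""
    claimants = set().union(*groups)
    c = len(claimants)
    chances = {g: Fraction(0) for g in groups}
    for a in claimants:
        mine = [g for g in groups if a in g]
        for g in mine:
            chances[g] += Fraction(1, c) / len(mine)
    return chances

# The five-claimant example: A&B, A&E, C&D -> 30%, 30%, 40%.
small = [frozenset("AB"), frozenset("AE"), frozenset("CD")]

# Vong's 1,000-claimant overlap case.
G1 = frozenset(range(1, 501))      # claimants 1-500
G2 = frozenset(range(501, 1001))   # claimants 501-1000
G3 = frozenset(range(2, 1000))     # claimants 2-999
big = equal_lottery([G1, G2, G3])
# G1 and G2 get 501/2000 (25.05%) each, G3 only 499/1000 (49.9%): the
# chance that one of the 500-member groups is saved (~50.1%) exceeds
# the chance of saving the 998-member group G3.
```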
What about exclusive composition-sensitive lotteries? Claimant A is (by stipulation) exclusive relative to claimant B just in case A is in at least one but not all outcome groups of which B is a member. Thus, if a claimant is not exclusive relative to another claimant, then the first claimant is in either all or none of the outcome groups of the second claimant. Exclusive composition-sensitive lotteries then work like this: the chance contribution of a claimant, A, to each group, W, of which A is a member, is a proportion both of the number of claimants in W that are exclusive relative to A (uA,W) and the total number of claimants exclusive relative to A across every group to which A belongs (eA). That is, each claimant divides their equal 1/c share of the total chance across their groups, contributing (1/c) · (uA,W / eA) to W.
Intuitively, the idea is this: in deciding the positive contribution a claimant, A, should make to the chance some outcome group of which she is a member is selected, we should ignore other claimants who will either definitely be benefited if A is benefited, or who will definitely not be benefited if A is benefited. As Vong explains, this disjunctive condition captures the intuitive ideas that claimants who aren’t in any of the relevant claimant’s outcome groups are irrelevant (why should they play a role in determining A’s contribution?) and claimants who are in all of the relevant claimant’s outcome groups are also irrelevant (why should it matter, since they’ll receive the same chance in any case!). In other words, only differences in composition make differences.
You can see the appeal of such an approach. It handles overlap cases such as the one causing trouble for equal composition-sensitive lotteries with ease: doing the math, the result is that G1 and G2 each have a ~.2% chance of benefiting, whereas G3 has a ~99.6% chance of benefiting. That is not an affront to absolute fairness. Moreover, it is comparatively fair: all equally worthy claims had an equal positive impact on the outcome group selection.
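Here is a Python sketch of the exclusive procedure as I read it. The handling of the degenerate case where no one is exclusive relative to a claimant (splitting that claimant’s share equally across their groups) is my assumption, though it reproduces the ~0.2%/99.6% numbers the précis reports:

```python
from fractions import Fraction

def exclusive_lottery(groups):
    """Exclusive composition-sensitive lottery (my reconstruction):
    each claimant splits an equal 1/c share of the total chance across
    their groups, in proportion to how many claimants exclusive
    relative to them each group contains."""
    claimants = set().union(*groups)
    c = len(claimants)
    chances = {g: Fraction(0) for g in groups}
    for a in claimants:
        mine = [g for g in groups if a in g]
        # b is exclusive relative to a iff b is in some but not all of a's groups
        excl = {b for b in claimants
                if 0 < sum(b in g for g in mine) < len(mine)}
        for g in mine:
            if excl:
                share = Fraction(len(g & excl), len(excl))
            else:
                # Degenerate case: no one is exclusive relative to a, so
                # split a's share equally across a's groups (an assumption).
                share = Fraction(1, len(mine))
            chances[g] += Fraction(1, c) * share
    return chances

G1 = frozenset(range(1, 501))
G2 = frozenset(range(501, 1001))
G3 = frozenset(range(2, 1000))
res = exclusive_lottery([G1, G2, G3])
# G1 and G2 each get 999/500000 (~0.2%); G3 gets ~99.6%.
```

For intuition: claimant 2 is in G1 and G3, and the claimants exclusive relative to her are claimant 1 (in G1 only) and claimants 501-999 (in G3 only), so she sends 1/500 of her share to G1 and 499/500 of it to G3.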
This brings us to iterated individualist lotteries. These work exactly as they say on the tin: the procedure for selecting an outcome group is to first run an equal-chance individual lottery, i.e., a lottery where each claimant with an equal claim is given an equal chance at winning. Then, we iterate that equal chance lottery across claimants who are members of outcome groups of which the winner of the first lottery is also a member until we arrive at a unique outcome group. For instance, if the outcome groups are (again): A&B, A&E, C&D, then we first run an equal-chance lottery with 20% chances over A-E. Assume A wins. Then, we run an equal chance lottery with 50% chances over B and E. Assume B wins. Then, the outcome group selected is A&B.
Iterated individualist lotteries have exactly the same plausible results as exclusive lottery procedures in overlap cases. I’ll leave the math to Vong’s paper, but the result is that in our G1-G3 scenario from before, G1 and G2 each receive (on the iterated lottery view) a ~.2% chance and G3 receives a ~99.6% chance.
So: How shall we decide between iterated individualist lotteries and Vong’s preferred account, the exclusive composition-sensitive lottery? Here’s the separating case, according to Vong: overlap cases involving multiple subset-groups. An overlap case has this feature when one or more outcome groups are proper subsets of more than one maximal group (an outcome group that is not a proper subset of another outcome group). For instance, consider a case where you can benefit A, A&B, A&B&C, or D. In such cases, the intuitive judgment is that it’s absolutely fairer to benefit A&B — a subset group — than it is to benefit (just) A — also a subset group — and similarly that it’s absolutely fairer to benefit A&B&C — the maximal group — than (just) A&B or (just) A. Because subset groups (such as A or A&B) can in principle be selected by either the iterated or the exclusive lottery procedures, and because such selections are intuitively unfair, we need a way to proceed. Vong’s suggestion is straightforward: in both cases, we (re)iterate lotteries.
For the individualist lottery, the procedure is simple: just run lotteries until all winners are members of only one maximal group. From our example: If A is selected in the first run, run a lottery on B and C. If C is selected, you’re done. If instead B is selected, run a lottery on C (it’ll win!). Hence, if A (a subset group) is selected in the first run, whatever happens in this case you’ll end up with a maximal group of which A is a member, viz. A&B&C.
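Both the basic iterated procedure and this (re)iterated extension can, on my reading, be captured in one exact recursion (the function `iterated_lottery` is my illustrative reconstruction, not code from the paper): keep drawing equal-chance winners from the claimants still eligible until the accumulated winners are jointly contained in exactly one outcome group.

```python
from fractions import Fraction

def iterated_lottery(groups, winners=frozenset()):
    """(Re)iterated individualist lottery, computed exactly: run
    equal-chance lotteries over the remaining eligible claimants,
    keeping only groups that contain every winner so far, until a
    single outcome group remains. Returns {group: chance}."""
    candidates = [g for g in groups if winners <= g]
    if len(candidates) == 1:
        return {candidates[0]: Fraction(1)}
    pool = set().union(*candidates) - winners
    chances = {}
    for w in pool:
        for g, p in iterated_lottery(groups, winners | {w}).items():
            chances[g] = chances.get(g, Fraction(0)) + p / len(pool)
    return chances

# The A&B, A&E, C&D example from above:
small = iterated_lottery([frozenset("AB"), frozenset("AE"), frozenset("CD")])
# -> A&B: 30%, A&E: 30%, C&D: 40%

# The multiple-subset-group case A, A&B, A&B&C, D: the lottery keeps
# running until the winners pin down a maximal group, so the subset
# groups A and A&B are never the final selection.
subsets = iterated_lottery([frozenset("A"), frozenset("AB"),
                            frozenset("ABC"), frozenset("D")])
# -> A&B&C: 75%, D: 25%
```

This matches the walk-through above: if A wins first, the remaining lotteries over B and C always terminate in A&B&C.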
For the exclusive lottery, things are somewhat, though not much, more complex. The basic idea is exactly the same: we simply iterate — this time exclusive — lotteries until all the winners are members of just one maximal group. In our example from before, we arrive at the same result.
Here, then, is how to use such cases to decide between the (re)iterated individualist lottery procedure and Vong’s preferred (iterated) exclusive lottery procedure. Take a case with multiple subset groups with the following structure: as before, we have 1,000 claimants. The outcome groups that can benefit are, however, more diverse. They are: (as before) 1-500, 501-1000, and (newly this time) each individual and each possible pair contained in 1-1000 (e.g., 1&2, 1&3, … 1&1000, 2&3, 2&4… 2&1000). So: many more possible outcome groups, and many subset groups.
The result in this case (again, I’ll leave the math to the paper) is that the iterated individualist lottery procedure gives the following chances to the outcome groups:
Neither group of 500 benefits, i.e., some maximal group that is a pair (e.g., 2&999) is selected: ~50.050%
Whereas the iterated exclusive lottery procedure gives the following chances to the same outcome groups:
Neither group of 500 benefits, i.e., some maximal group that is a pair (e.g., 2&999) is selected: ~33.3%
Vong’s judgment is that the results delivered by the iterated individualist lottery are “deeply implausible”. This is because, he says, when everyone is a member of a group of 500 it would be “clearly unfair” to make it more likely that two people benefit than that some group of 500 does so. And that is exactly what the iterated individualist procedure does. Not so, happily, with his own view: on that view, the chance that one of the groups of 500 benefits (~66.6%) is double the chance (~33.3%) that just two people do.
Hence, while both theories are indistinguishable in terms of their comparative fairness — they both treat equal claimants’ equally worthy claims equally — they can be distinguished on grounds of absolute fairness. The exclusive lottery procedure is more absolutely fair. Hence, since both are equally comparatively fair, the exclusive lottery procedure is more all-things-considered fair.
- All-Things-Considered Fairness and Outcome Groups
The remainder of Vong’s paper addresses what’s come to be called the ‘awkward conclusion’ in non-guaranteed benefit cases, and his remarks there are worth considering. But in what follows I’ll just focus on his proposal for driving a wedge between the iterated individualist lottery and his preferred exclusive lottery procedure by using cases involving multiple subset-groups.
I’ll put my point two different ways. The first is rhetorical, but it’s meant to warm you up to the more constructive suggestion that follows. Rhetoric, then, first: If we think there’s something perverse, from the point of view of fairness, about it being more likely that some pair of individuals benefits than that one of two groups of 500 benefits (where every member of each pair is also a member of at least one of the groups), then it seems to me this is because there’s something perverse, from the point of view of fairness, about some pair of individuals having any chance at all of benefiting in the face of the possibility of benefiting one of two groups of 500 of which each member of the pair is a member of at least one. In other words: we certainly ought not to accept a procedure that gives some (any) pair that is a maximal group higher chances of being selected than the (much) larger groups of 500, but that in turn is because we ought not to accept a procedure that gives pairs that are maximal groups any chance at all when the groups of 500 (again, of which each member of a pair is a member of at least one) are possible beneficiaries.
Vong, I imagine, would reply that we’re constrained in our choice of procedure first by comparative fairness. And any attempt to zero out the chances of the pairs that are maximal groups will fail to respect the requirement to allow each equally strong claimant to have an equal positive effect on the outcome group selection (the requirement of comparative fairness defended earlier in the paper). But I am not so sure this is right, for two reasons.
First, Vong’s quite sensible suggestion, early on in his paper, is that there’s a “reasonable compromise between comparative and absolute fairness” (332). What’s odd about this is that he then endorses the view that, in equal conflict cases (such as the one we’re now grappling with), “comparative fairness is the most important type of fairness” (ibid.). But saying there’s a reasonable compromise between comparative and absolute fairness, and then judging the fairness of procedures first by what’s comparatively fair, and only then ranking them in terms of absolute fairness doesn’t, to me, seem like a reasonable compromise between comparative and absolute fairness. Instead, it seems to me like comparative fairness shoving absolute fairness out of line, gobbling up its meal, and then telling absolute fairness to eat its fill (of the scraps). So, it’s not clear to me that Vong’s judgments about cases — which I share — show that we ought to be constrained in our choice of fair procedures, in the way Vong thinks we are constrained, by comparative fairness.
Second, I think we can reach a reasonable compromise between comparative and absolute fairness that delivers the judgment that, above, I invited you to join me in making, viz. that the groups of 500 should each receive a 50% chance of benefiting, and that the pairs that are maximal groups should have zero chance of benefiting. Here is how: we simply disallow (i.e., ignore) outcome groups when the members of those groups can be given the same, or a higher, chance of benefiting by disallowing them. I lack the space to defend this procedural requirement in detail here, and to explain fully why I don’t think it’s ad hoc, but I’ll say a couple of things.
First, let’s see how the suggestion works in the present case. (I’ll use the exclusive lottery for illustration, but it works for the iterated individualist lottery too. That, in part, is why it’s a challenge to Vong’s proposal. It removes the wedge he drives between the two procedures.) The chance of (say) claimant 2 being benefited if we allow the pairs that are maximal groups to stand as outcome groups is equal to the chance of the group 1-500 being selected (33.3%) plus the chance of a pair that’s a maximal group containing 2 (e.g., 2&999) being selected (vanishingly small), which sum to less than 50%. The chance of claimant 2 being benefited if we disallow the pairs that are maximal groups to stand as outcome groups is clearly greater than this, since it’s exactly 50%. Similar remarks go for claimant 3. And for claimant 4. In each case, we can improve a claimant’s odds of receiving the benefit to which they have a claim if we disallow the outcome groups that are the maximal pairs containing them from standing as potential beneficiaries in the lottery. My suggestion is that if we do so, we are promoting absolute fairness (by increasing the odds that those who have a claim on a benefit will receive that benefit — the odds of the groups of 500 go to 50%) and, moreover, that we are not treating anyone’s claim on the benefit unequally, since each claimant is subject to the same procedural vetting via their outcome group membership.
You might think that in following this suggestion we’d somehow be treating the pairs that are maximal groups unfairly, since we’re not allowing them to stand as potential beneficiaries in the lottery. But this, I think, misunderstands the nature of outcome groups. To admit that some group is an outcome group that we can benefit (and so enter it into the lottery) is already to make a choice from the point of view of the fair distribution of benefits. The complex convergent case Vong presents that has led us where we are is a case in point. If we allow the pairs that are maximal groups to be considered potential beneficiaries in our lottery, we thereby sacrifice absolute fairness on the altar of (a particular conception of) comparative fairness. But we are not required, by a reasonable conception of all-things-considered fairness, to do so.
One last way of driving home the point. Consider lifeboat cases containing no overlap. Absolute fairness, given its requirement to maximize the chances that each individual receives the benefit they are due, requires us to give 100% chance to the largest outcome group. Vong argues, and I agree, that this is unacceptable. Spreading the chances among the potential outcome groups, assuming they are not proper subsets of maximal groups, is the obeisance absolute fairness must make to comparative fairness. What I’m urging here is that, in cases where absolute fairness can be promoted without worsening any (and in our present case improving all) claimants’ chances of receiving the benefit they are due by dissolving (i.e., refusing to consider distributing the benefit to) certain outcome groups, this is what fairness requires. I do not think it is a violation of comparative fairness, since each claimant’s equal claim is being treated equally in the vetting, and I do not think outcome groups should be understood per se as valid claimants on benefits (it is instead their members who are claimants). But if you think it is an affront to comparative fairness, then think of it as the corresponding tithe comparative fairness makes to absolute fairness in their mutual compromise.