Suppose you find yourself in a situation in which you can either save A and B or save C. A, B and C are relevantly similar – all are strangers to you, none is more deserving of life than any other, none is responsible for being in a life-threatening situation, and so on. John Taurek (“Should the Numbers Count?”) argued that when deciding what to do in such a situation, you should flip a coin, thereby giving each of A, B and C a 50% chance of survival. Taurek seemed to be employing the “Equal Greatest Chance” principle (EGC), according to which, when deciding whom to save, one must give each person the greatest possible chance of survival consistent with everyone else having the same chance. Others, including Jens Timmermann, have endorsed a “weighted lottery” that gives each person a chance to live, but gives a greater chance to those in the larger group. Finally, there is the “Save the Greater Number” principle (SGN), which needs no elaboration.

Here is a story that refutes the EGC and weighted lottery principles. (The story shares important features with one given by Judy Thomson in *The Realm of Rights*. For some reason, Thomson did not think her example refuted EGC. If anyone knows why, I’d like to know.)

McDaniel believes in the EGC, and never leaves home without a fair coin. McDaniel’s nemesis, a murderous bureaucrat named Portmore, has captured three hostages, named Alice, Bob and Carol. Portmore tells McDaniel the following: “I am going to divide these three hostages randomly into two groups – a group of two and a group of one. I will let you determine which group you wish to save, and I will kill only the members of the other group. Indicate your decision by filling out this form, and checking the appropriate box.” As a believer in the EGC, McDaniel would choose to save the larger group, since that decision gives each person a 2/3 chance of survival, and nothing else gives each person a greater chance. If McDaniel were to flip a coin to decide whether to save the larger or the smaller group, he would diminish each person’s chance of survival to 50%. So far, EGC and SGN get the same results.
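The arithmetic here is easy to check with a short simulation (a sketch; the function name and setup are mine, not part of the original puzzle): always saving the larger group gives each hostage roughly a 2/3 chance of survival, while flipping a coin between the two groups gives each roughly 1/2.

```python
import random

def survival_counts(trials=100_000, strategy="larger"):
    """Estimate each hostage's survival probability under a strategy.
    The hostages are randomly split into a group of two and a group of
    one; 'larger' always saves the pair, 'coin' flips a fair coin
    between the two groups."""
    saved = {"Alice": 0, "Bob": 0, "Carol": 0}
    names = list(saved)
    for _ in range(trials):
        random.shuffle(names)
        pair, single = names[:2], names[2:]
        if strategy == "larger" or (strategy == "coin" and random.random() < 0.5):
            group = pair
        else:
            group = single
        for person in group:
            saved[person] += 1
    return {p: n / trials for p, n in saved.items()}

print(survival_counts(strategy="larger"))  # each ≈ 2/3
print(survival_counts(strategy="coin"))    # each ≈ 1/2
```

Under "larger", each hostage survives exactly when she lands in the pair, which happens with probability 2/3; under "coin", survival probability is (2/3)(1/2) + (1/3)(1/2) = 1/2 regardless of the split.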

McDaniel checks the box indicating that the larger group should be saved. Portmore proceeds to divide the hostages randomly into two groups. Alice and Bob are in one group, Carol is in the other. Then Portmore realizes he has lost the form. “I’m sorry, McDaniel, but you’ll have to fill out another form,” he says. If McDaniel is to follow EGC, he must now flip a coin to decide which box to check, since that gives each hostage an equal greatest chance of survival.

This is a decisive counterexample against EGC. No plausible principle entails that McDaniel should fill out the form differently the second time. He has gained no new information that could be relevant to his decision. The example can be adapted in obvious ways to refute weighted lotteries.

Nice one, except I don’t like my name being associated with being a bureaucrat. The being-murderous part, strangely, doesn’t bother me at all.

A question: McDaniel has gained some new information, right? When asked to fill out the form the second time around, he now knows that the three have already been put into two groups. So you must say that this new information isn’t relevant to McDaniel’s decision, and I agree. But isn’t this begging the question? This new information is relevant if one accepts EGC. What am I missing?

I find the argument very compelling. Ben, have you read John Broome’s papers on this topic? There’s one in *Ethics* (1984) and one later, maybe 1990-91, in the *Proceedings of the Aristotelian Society*.

Doug, I guess the point is that it is extremely implausible that *that* information should be relevant. Try this. Suppose McD is told that the people have already been put into two groups, but he hasn’t been told which people have been put into which groups. What is his answer in this case? It looks like he *has* to say that the larger group should be saved. But, when we tell him which people are in which groups, then he changes his mind, and he changes it no matter what we tell him! That’s ridiculous. Isn’t it?

The argument is compelling, but the thought that someone like McDaniel could endorse a principle as crazy as EGC is so counter-intuitive that I had trouble thinking through the example….

Jamie,

Okay, that helps. I see now that the implications of EGC that you and Ben are pointing to are even more ridiculous than the denial of SGN itself. I guess that I was having a hard time, because the denial of SGN seems so ridiculous to me in the first place.

Yeah, Kris, you got the raw end of the deal. I’d rather be a murderous bureaucrat than someone so foolish as to endorse EGC. What did you do to Ben to deserve this?

Let me offer a response on behalf of the defender of EGC.

Before Portmore has divided the hostages into two groups, McDaniel knows that it’s in the interests of every hostage to choose to save the larger group. But after Portmore has done the dividing, McDaniel knows that it’s not in the interests of one of the hostages to choose to save the larger group. Surely information about what is in the interests of the hostages is relevant to McDaniel’s decision. So McDaniel does gain some relevant information.

Jamie says:

I think there’s an interesting analogy with the Two Envelope Paradox here. As I recall, some people, with perfectly good Bayesian credentials, hold the following position: if you look inside your envelope, then no matter how much money you find there, you ought to swap; but if you don’t look inside your envelope, then it’s not the case that you ought to swap. Am I remembering that correctly?

Jamie – thanks for the pointer, I haven’t read Broome’s papers (or many papers at all) on this topic. I will look them up.

Campbell – McDaniel does know, before the division, that it’s against the interests of one of the hostages that he pick the larger group. It’s against the interests of whoever ends up in the smaller group. What he learns after the division is that *Carol is that person*. But that doesn’t seem like relevant information. (Maybe if I told the story in the right way, ‘Carol is the person in the smaller group’ would actually be a tautology. e.g. if ‘Alice,’ ‘Bob’ and ‘Carol’ are just names McDaniel makes up, intending ‘Carol’ to pick out the person in the smaller group. Then the only information McDaniel would gain after the division would be that the division has in fact occurred. I have to think about that.)

Ben,

Suppose that Carol is the hostage that gets put in the smaller group. Consider two claims:

(1) Before the division, it is not in Carol’s interests for McDaniel to save the larger group.

(2) Before the division, McDaniel knows that it is not in Carol’s interests for him to save the larger group.

I’m inclined to say that (1) is false. Before the division, it would be rational for Carol to urge McDaniel to save the larger group, which suggests that his doing so is in her interests. If (1) is false, then (2) must also be false. But even if (1) is true, I would say that (2) is false. Before the division, McDaniel doesn’t know that Carol will end up in the smaller group. So, even if saving the larger group is not in Carol’s interests, McDaniel cannot know that this is so.

However, after the division, McDaniel surely does know that saving the larger group is not in Carol’s interests. So he comes to know something that he didn’t know before. He gets new information, in other words.

Is this new information relevant to his choice? I think it might be. I’ve yet to see a reason to think that it isn’t. Certainly, Carol will think that it is.

Well, one reason to think it isn’t relevant is that McDaniel knew, before the division, that one of the people would end up in the smaller group, and that at that point, it would be against that person’s interests that the larger group be saved. Yet pre-division, he wouldn’t think this: “I need to save the larger group… but what if it’s Carol who ends up in the smaller group? Then I’ll have to change my decision.” He would believe that he’s choosing correctly by saving the larger group, *no matter the results of the division*. But then his holding EGC would require him to change his mind anyway.

You are right that Carol will think it’s relevant that she is in the smaller group. But EGC is supposed to be motivated by a concern for fairness or respect. I don’t see how Carol can complain about being treated unfairly or being disrespected. Especially given that, pre-division, she would have been in favor of saving the larger group.

I’m not sure what to say about (1); it might depend on details of the example that I didn’t make explicit, e.g. whether the division is truly indeterministic. I am inclined to say that (1) seems false given the way things turn out in the story, though I would accept:

(1*) Given what Carol knows pre-division, it is rational for her to prefer that the larger group be saved.

Campbell,

*Am I remembering that correctly?* Well, someone thinks there is an argument to that conclusion, and the argument is valid, and the premises seem impossible to deny. But the conclusion seems impossible to believe. So, it’s a paradox. Which is good, because otherwise it would have a misleading name.

So you think that when McD finds out that it is *Carol* who will be in the smaller group, that is relevant information, in the sense that it will lead him (correctly) to judge that a coin should be flipped. And if he found out that it is *Alice*, that would also lead him to the same conclusion, and likewise if he found out that it is *Bob*. But without knowing which person is in the smaller group, he correctly judges that the larger group should be saved, with no coin flip. And this seems fine?

What if someone has already been assigned to the smaller group, but he doesn’t know which person? What if he then asks Portmore, and Portmore says, “I can’t remember what name the person usually goes by, but I always call the person ‘Leslie’”? What if nobody has yet been assigned, but McD says to himself, “Let me hereby dub whoever will be in the smaller group, ‘Lucky’”?

Ben,

I am persuaded; indeed, I think this is what I inchoately thought all along. EGC and Lottery can both get the Consequentialist outcome when the situation is presented as uncertain. Why not focus on that moment to satisfy EGC and Lottery? This point is enhanced when we see, as you and Jamie are both pointing out, that those principles need to change their answers when the uncertainty is taken away. But (1), as Jamie says, surely it is awkward to know that you will have to change your answer to B after time T, but to insist before then that A is the right answer. I suspect you could money-pump people who thought like this. Imagine the default setting is A, but the proponents of EGC and Lottery recommend paying a bit to switch to B while there is uncertainty, and then paying a bit again to switch back to A when the uncertainty is lifted. (2) Surely also there is a strong intuition that if a plan was fair when there was uncertainty (and all parties supported the plan, knowing that it was the best bet for their interests but also that they might later have cause to wish they had not supported it), then people’s interests have been accommodated and they cannot later cry injustice on the grounds of unluckily losing a good gamble. This would be like rationally agreeing to let a coin flip determine the matter, but then complaining that the outcome in which one loses the flip is unfair.

Look here

http://leiterreports.typepad.com/blog/2005/11/tenure_and_acad.html#comments

for a pretty lame question posed by me, and a pretty lame (but unfortunately best) answer given by David Sobel.

Ahhh, the glory!

Here’s the only defense of EGC I’ve been able to think of. EGC has to have some backwards-looking element to it; you have to look to see whether any fair randomizing process has already taken place. Otherwise, in any of these situations where there’s time to change your mind, you’d have to be constantly flipping coins up until the last second. The idea would be that Portmore’s act of dividing the hostages into groups would count as the fair randomizing process. So you wouldn’t need to flip any coins. Only I don’t know what the view would be anymore. Maybe something like: if you’re in one of these sorts of choice situations, flip a coin/perform a lottery *unless something equivalent to a coin flip or lottery has already occurred*. (In which case, do what?) But that is hopelessly vague.

Either that or wait till the last second and flip the coin once.

Well, I seem to have landed the job of defending EGC. So let me try again.

*Objection.* EGC tells McDaniel to change his mind for no good reason. When he makes his first decision, EGC tells him to choose one option, yet when he makes his second decision, EGC tells him to choose a different option. But between the time of the first decision and the second, McDaniel receives no new information that would justify choosing different options.

*Response.* Given McDaniel’s first choice, it is impossible for him not to change his mind, because the option that he first chose is no longer available when he comes to make his second choice. You might say: “Not true; he first chose the option of saving the greater number, and that option remains open to him when he makes his second choice.” But that’s an incomplete description of the option in question. At the time of the first decision, he has the option of *saving the greater number and giving every hostage a 2/3 chance of survival*, and he rightly chooses this option. But at the time of the second decision, he no longer has that option. If he did have it, he surely ought to choose it; but since he doesn’t have it, he is forced to choose something else instead.

This defense makes sense to me, as does EGC itself initially – if all people are equal, why should a person be penalized for being unlucky enough to land in the smaller group? But if in the original example there were, say, 1000 people divided randomly into a group of 999 and a group of 1, I don’t think I could do anything except save the greater number.

Of the three principles, the weighted lottery is the one that I would least like to follow, as it still penalizes the supposedly equal people in the smaller group (though not as much as SGN does) yet still has a chance of letting the greater number die.

I can’t see why he should flip a coin after the division is made. Suppose I will preserve Red’s life if you choose the box that contains the red item: Box1 or Box2. There is one red item, one green and one blue. Since the distribution is random, each item has a .5 chance of being in Box1 and a .5 chance of being in Box2. No doubt, one of the boxes will contain more items after the distribution, and your chance of finding the red item in the larger group is 2/3. Before the division, you should choose Larger Group (if that is an option).

Now suppose the division has already been made. Now I say, “I’ve already distributed the items randomly, and you have these options: choose the larger group or choose a box randomly”. Red’s chances of being in the larger group are still 2/3 (same for blue and green’s chances). So you should again choose Larger Group.

It is only after you know who is in what group that the best chances for each surviving drop to .5. So suppose I show you which box the red item is in and which box the blue and green items are in. Now, to give Red, Blue and Green the best equiprobable chance of surviving, you have to randomize between Box1 and Box2.

On the other hand, it is perfectly reasonable for Red, Blue and Green to agree to a policy which states that those in the larger group should be saved, under the reasonable assumption that the chances of finding oneself in the larger group are at least as high as the chances of finding oneself in the smaller. That would give each the highest equiprobable chance of surviving. And that will be conceded by all parties to the agreement even after the distribution is known to all.

I don’t understand why McDaniel’s choice is an ethical question at all. Portmore is behaving unethically, depriving his hostages of their rights. Neither Portmore nor any of the hostages has any claim on McDaniel, and McDaniel has no responsibility for Portmore’s actions nor for the outcome of Portmore’s actions. McDaniel has the choice to refuse to participate in any of this.

Apologies for prematurely abandoning this post. Holiday traveling, getting sick… fun times.

Campbell – thanks for taking up the burden. I’m still not finding the view any more plausible. It’s true that his alternatives have different properties before and after the division. But he knows beforehand that he’ll have those options, with those different properties, after the division. Either he’s just going through the motions when he makes the original decision, knowing he’ll have to re-do the decision later, or he genuinely wants to save the larger group beforehand, even knowing he has to re-decide things later. Either way, he is just crazy.

Mike – not sure I followed you, but if I did, your example is another similar argument against EGC, right?

John – I think you’re wrong about that. If I have a chance to influence someone else’s actions to prevent something bad from happening, my decision seems like an “ethical” decision, in the sense that I can behave rightly or wrongly depending on what I do. But that’s not really crucial to the case. The case could be changed so that instead of an evil agent, some natural catastrophe is threatening three people in an otherwise relevantly similar way. Then the issue you’re worried about wouldn’t arise, I think. (Unless you also don’t think questions about rescuing people from natural disasters are ethical ones.)

Ben, you say in the initial argument (in your post),

“Portmore proceeds to divide the hostages randomly into two groups. Alice and Bob are in one group, Carol is in the other. Then Portmore realizes he has lost the form. “I’m sorry, McDaniel, but you’ll have to fill out another form,” he says. If McDaniel is to follow EGC, he must now flip a coin to decide which box to check, since that gives each hostage an equal greatest chance of survival.”

But if McD does not know who is in the larger group and who is in the smaller, your conclusion is mistaken. McD follows EGC only if he chooses Save Larger Group. It does not matter that the hostages have been divided already, the probability of each finding him/herself in the larger group remains .66. And so the probability of each surviving is .66 if you choose Save the Larger Group.

But if McD knows that Alice and Bob are in the Larger Group and Carol is in the Smaller Group, then he follows EGC only if he randomizes.

This is easy to see. Suppose I have already divided three items R,B, & G into two boxes randomly. I give you two options

{Choose a box randomly, Choose the box with the Larger Number of items}. What gives you the better chance of finding item R? Of course, the second option does. And that is true, of course, even though I have already divided them into two groups. Now just suppose I say, “I will save R’s life if you find item R”. What gives R the best chance at surviving? It is the second option. And that is true in the case you describe, too.
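Mike’s point can be checked with a quick simulation (a sketch; the function name is mine, invented for illustration): even after the items have been divided, taking the box with more items finds R about two-thirds of the time, while picking a box at random finds it only about half the time.

```python
import random

def red_found_rate(trials=100_000, option="larger"):
    """Items R, B and G are randomly split into a box of two and a box
    of one *before* you choose. Estimate the chance of finding R under
    each option: always take the larger box, or pick a box at random."""
    hits = 0
    for _ in range(trials):
        items = ["R", "B", "G"]
        random.shuffle(items)
        boxes = [items[:2], items[2:]]  # [larger box, smaller box]
        chosen = boxes[0] if option == "larger" else random.choice(boxes)
        hits += "R" in chosen
    return hits / trials

print(red_found_rate(option="larger"))  # ≈ 2/3
print(red_found_rate(option="random"))  # ≈ 1/2
```

The fact that the division has already happened makes no difference to these numbers, so long as the chooser doesn’t know how it turned out.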

Thanks Mike, now I see what you’re getting at. I think there is some question here about what sort of probability to use. Or maybe there are different versions of EGC employing different notions of probability. You’re assuming the relevant probability is the probability given McDaniel’s evidence or knowledge. I’m not sure that’s the sort of probability EGCers think is relevant. I suspect they might want objective probability. In any case, I think it’s bad enough if McD has to change his mind only in cases where he knows how the division turned out.

One reason to think subjective probability wouldn’t be OK for the EGCer is that there would be no reason to flip coins; he could do things like save the people with the longer names (if he doesn’t know their names), which would give the same subjective probabilities as flipping but wouldn’t *really* be giving some people a chance at all.

Ben, you say,

“You’re assuming the relevant probability is the probability given McDaniel’s evidence or knowledge. I’m not sure that’s the sort of probability EGCers think is relevant.”

It’s hard to see why this is not the relevant sort. It is, after all, McD’s choice, and he has to choose based on what he knows.

About this claim,

“One reason to think subjective probability wouldn’t be OK for the EGCer is that there would be no reason to flip coins; he could do things like save the people with the longer names (if he doesn’t know their names), which would give the same subjective probabilities as flipping but wouldn’t *really* be giving some people a chance at all”

On the contrary, it would. To see it, suppose the hostages themselves do not know who has the longer names but do know that two-thirds of them do. Knowing this, they would plead that McD choose Longer Names. They wouldn’t say, “oh well, it doesn’t *really* improve my chances if he chooses the longer names.” Indeed, it *really* is their best chance – even if the names have already been distributed!

On the other hand, I can’t imagine this problem really depends on nature being indeterministic or chance being objective. If it is true now that the coin will turn up “heads”, am I really being unfair to someone in some group in tossing the coin to choose survivors?

Mike,

I don’t see why what the hostages know is relevant. I thought you just said that it’s McD’s knowledge that determines the relevant probabilities.

In any case, you are certainly right that it *shouldn’t* matter whether there is such a thing as objective chance. Whether it actually matters to EGCers, I couldn’t say. Of course, we can still avoid all these problems by just stipulating that people have the relevant bits of knowledge, in which case EGC still entails that McD has to re-do his decision, as far as I can see.

Ben,

We could avoid some of these problems by stipulating that McD has the additional information. I agree. But I’m a little lost as to why the counterexample remains. EGC would have us give to each hostage the greatest equiprobable chance P of survival. The value of P is going to vary depending on McD’s information. Given no information on who is in the larger group and who is in the smaller, McD can give each the greatest equiprobable chance of surviving only if he chooses the Larger Group, P then equals 2/3. Given the information (I) that A&B are in the larger group and C in the smaller, McD can give each the greatest equiprobable chance of surviving only if he randomizes, P then equals 1/2.

There’s no doubt that the information (I) that McD possesses affects which option of {Larger Group, Random} gives the greatest equiprobable chance of surviving. But how is that a counterexample to EGC? Everyone agrees that the information in (I) *does* affect which option provides the greatest equiprobable chance. The fact that McD changes his mind on information (I) is exactly what we should expect, isn’t it? It is the rational thing to do for anyone concerned with giving each hostage the greatest equiprobable chance of surviving. In any case, as far as I can see, it is the rational thing to do.

Ben,

You say:

“Either [McDaniel]’s just going through the motions when he makes the original decision, knowing he’ll have to re-do the decision later, or he genuinely wants to save the larger group beforehand, even knowing he has to re-decide things later.”

But that’s not quite right, is it? McDaniel doesn’t know that Portmore will lose the first form that he fills out. So, at the time of the first decision, he doesn’t know that he will have to ‘re-decide’ later.

Mike,

I think the objection to EGC is stronger than you realise. Ben’s example shows that EGC is inconsistent with this principle: if (a) the agent does not know whether P, and (b) were he to know whether P, it would be the case that he ought to see to it that Q, then he ought to see to it that Q.

To see the inconsistency, let the propositions P and Q be as follows:

P = Alice is in the smaller group

Q = Each individual has a 1/2 chance of survival.

And suppose that the two groups have been determined but not revealed to McDaniel. Then EGC implies that the antecedent of the above principle — i.e. the conjunction of (a) and (b) — is true. McDaniel, the agent, does not know whether Alice is in the smaller group. But if he were to learn either that she was in the smaller group or that she wasn’t, then, according to EGC, it would be the case that he ought to see to it that each individual has a 1/2 chance of survival. However, EGC also implies that the consequent of the principle is false. Given McDaniel’s current epistemic situation, EGC says it’s not the case that he ought to see to it that every individual has a 1/2 chance of survival.

I think the principle is *very* plausible. Can you think of any non-contentious counterexamples to it?

As I read it, the principle states that,

IF (a) ~Kp & ~K~p

and

(b) (Kp v K~p) []-> Oq

THEN Oq

Read (a): McD does not know p and does not know ~p

Read (b): If McD were to know p or McD were to know ~p then it would be the case that McD ought to see to it that q.

Therefore (you urge) from (a) and (b) we should derive Oq (or that he ought to see to it that q).

But this principle is pretty clearly invalid. We cannot derive the consequent of the counterfactual in (b) (viz. Oq) unless we have the antecedent of that counterfactual true (viz. Kp or K~p). But premise (a) denies that the antecedent of the counterfactual is true.

So, here’s a counterexample. Suppose I’ve taken my final exams and reason this way.

(a) I do not know that I passed my exams and I do not know that I did not pass my exams.

(b) Were I to know that I passed my exams or know that I did not pass my exams, then I would know what classes I should sign up for next semester.

Therefore, I know what classes I should sign up for next semester.

(a) and (b) are true. But it certainly does not follow that I know what classes to sign up for next semester. I won’t know that until I know how I did on the exams.

I don’t think that’s a counterexample. The conclusion, ‘I know what classes I should sign up for next semester’, is not a proposition of the form Oq. Suppose we substitute a proposition of that form:

(a) I do not know that I passed my exams and I do not know that I did not pass my exams.

(b) Were I to know that I passed my exams or know that I did not pass my exams, then it would be the case that I ought to sign up for a philosophy class next semester.

Therefore, I ought to sign up for a philosophy class next semester.

That inference seems fine to me.

The idea behind the principle is this: if what I ought to do knowing that P is the same as what I ought to do knowing that ~P, then knowledge of either P or ~P is irrelevant to what I ought to do; so what I ought to do knowing neither P nor ~P must be the same as what I ought to do knowing either P or ~P.

Another thing occurs to me. Your statement of condition (b) is not quite what I intended.

You write: (b) (Kp v K~p) []-> Oq.

But it should be: (b) (Kp []-> Oq) & (K~p []-> Oq).

My own earlier statement of the principle was probably misleading in this respect.

That’s not the principle that Campbell said was so plausible, Mike, but your exam story does provide a counterexample to his, too.

Campbell’s principle has “the agent ought to see to it that Q” where your example has something of the form “I know…” Still, you can just change your example so that the conclusion is “I ought to see to it that I go and sign up for next semester’s classes right now.”

You (Mike) misinterpreted Campbell’s claim, by the way. You seem to think he was saying that the principle is logically valid; I’m sure he just means that it is ‘normatively valid’, i.e., that it has no normatively plausible counterexamples. But anyway, you’ve given a normatively plausible counterexample.

I still think Ben’s point is very persuasive, but you’ve convinced me that it needs to be reformulated more carefully. I’ll try to think of how to reformulate it.

Oh, whoops, sorry about that — I sent my last posting before I saw Campbell’s last two.

But the point is that sometimes there is something that I ought to do as soon as I know whether P, but I ought plainly not to do it until I find out. Here’s a nice seasonal example: let P be *the Patriots will beat Denver* and Q be *I place a very large bet on whichever team I believe will win*.

The form of the “principle” is obviously invalid. There might be something you should do were you to know p and something you should do were you to know ~p but nothing you should do were you in ignorance concerning p. So, yet another counterexample.

(a) I do not know that Smith has arrived at t and I do not know that Smith has not arrived at t.

(b) Were I to know that Smith arrived at t or know that Smith has not arrived at t, it would be the case that I ought to report his arrival time to you.

Therefore, I ought to report his arrival time to you.

That conclusion doesn’t follow. It is not only the information you learn that matters here. The fact that you have learned the information also matters. If the antecedent of the counterfactual were,

(b) (p v ~p) []-> Oq

then, yes, obviously, you are obligated to do q. But that is not the antecedent. The antecedent requires that you are not in ignorance concerning p,

(b) (Kp v K~p) []-> Oq

When McD is in ignorance concerning Alice and the others, the greatest equiprobable chance of survival is 2/3. When he knows where Alice and the others are, the greatest equiprobable chance of survival is 1/2. So it *matters crucially* to what McD ought to do that he is in ignorance concerning those facts.

Ok. In case you don’t like that counterexample either, I’ll provide a simple countermodel. The countermodel is enough to show that there is a concrete counterexample, even if I haven’t devised one that you happen to like.

(a) ~Kp & ~K~p

Assume (a) is true in our world @ and assume that ~Oq is also true in @. Let there be a unique closest world w to @ at which (Kp v K~p) is true. (It is not necessary that there be a unique closest world, but it keeps matters as simple as possible). Assume it is also true in w that Oq. Since Oq is true in the closest world to @ at which (Kp v K~p) is true, it is true in the actual world @ that,

(b) (Kp v K~p) []-> Oq

So we have (a) and (b) true in the actual world and also ~Oq true in the actual world. So,

w = {Oq, (Kp v K~p)}

@ = {~Oq, ~Kp, ~K~p, (Kp v K~p) []-> Oq}

That’s a perfectly consistent model. Maybe there is an analogue of this principle that is valid. But this one is plainly invalid.
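Mike’s two-world countermodel can be spelled out mechanically (a sketch assuming Stalnaker–Lewis semantics with a unique closest antecedent-world, as stipulated above; the variable names are mine):

```python
# Two worlds, each an assignment of truth-values to the atoms Kp, K~p, Oq.
actual = {"Kp": False, "Knp": False, "Oq": False}  # the actual world @
w      = {"Kp": True,  "Knp": False, "Oq": True}   # closest (Kp v K~p)-world

def antecedent(world):
    """The counterfactual's antecedent: Kp v K~p."""
    return world["Kp"] or world["Knp"]

# (a) ~Kp & ~K~p, evaluated at @
a = not actual["Kp"] and not actual["Knp"]

# (b) (Kp v K~p) []-> Oq, evaluated at @: Oq must hold at the closest
# world where the antecedent holds (here w, by stipulation)
b = w["Oq"] if antecedent(w) else True

# The alleged conclusion Oq, evaluated at @
conclusion = actual["Oq"]

print(a, b, conclusion)  # True True False: premises hold, conclusion fails
```

Both premises come out true at @ while Oq comes out false there, which is just what the countermodel claims.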

Jamie,

I’m not sure what you mean by “normatively valid”. The validity of the principle–I’ve no doubt we agree–is a structural question. I’m happy if more structure is added to the principle or the principle is supplemented with normative theorem(s). I just don’t happen to know of any that would do the work.

And I agree that this is the way to try to rescue the principle. But once all of the modifications are made–once all of the structure is added–it remains perfectly fair game to provide the usual sort of countermodel to the principle.

Jamie,

I don’t think that’s a counterexample to the spirit of the principle.

Is what you ought to do if you learn the Patriots will win the same as what you ought to do if you learn they will lose? In one sense, yes: in either case you ought to bet on the team you believe will win. But in another sense, the sense I intended, no: if you learn the Patriots will win, you ought to bet on them, but if you learn that they’ll lose, you ought to bet on the Broncos.

The tricky thing is to formulate the principle in such a way as to better reflect the intended sense.

Mike,

My response to your new counterexample is the same as my response to Jamie’s. I want to say that what you would be required to do were you to learn that Smith has arrived at t is not the same as what you would be required to do were you to learn that Smith has not arrived at t. Whereas, in Ben’s original example, EGC implies that what McD would be required to do were he to learn that Alice is in the smaller group is precisely the same as what he would be required to do were he to learn that Alice is not in the smaller group. But I don’t see an obvious way to formulate the principle so that it captures this intuitive difference between the two cases.

Campbell,

Maybe this is more in the spirit.

(a) I do not know that materialism is true and I do not know that materialism is not true.

(b1) If I know that materialism is true, then I ought to abandon my skeptical teachings about materialism.

(b2) If I know that materialism is not true, then I ought to abandon my skeptical teachings about materialism.

Therefore, I ought to abandon my skeptical teachings about materialism.

The consequent is exactly the same, in the sense I think you’re after, in (b1) and (b2). Yet the conclusion clearly does not follow.

Mike,

In the same sentence I gave a paraphrase:

Is that not clear enough?

Campbell,

… for the spirit of the principle is that it should have no counterexamples!

I don’t think different ‘senses’ are going to help at all. My gut feeling is that you should leave out the stuff about knowledge. I mean, you are after some kind of dominance principle, right?

Why not just the ‘sure-thing’ principle? Won’t that do here? (An ethical version: if you ought to do A rather than B in case P, and you ought to do A rather than B in case not-P, then you ought to do A rather than B.) Maybe that doesn’t work here but I think it’s headed in the right direction.

Campbell, I couldn’t agree more about the following:

“Whereas, in Ben’s original example, EGC implies that what McD would be required to do were he to learn that Alice is in the smaller group is precisely the same as what he would be required to do were he to learn that Alice is not in the smaller group”

But you’re forgetting the third option. What McD ought to do under the third option–when he is in ignorance about where Alice and the others are–is completely different. Ignorance of the facts is itself a morally relevant fact in this case, as it is in so many others.

Jamie,

I wasn’t certain, even after your gloss, whether you were ruling out the use of abstract countermodels. I’m still not. That’s why I asked.

The sure-thing suggestion is interesting, but I’m not sure how it would go. And I know that this is no more than a suggestion on your part about how things might go. But if p = Alice is in the small group and q = each has a 1/2 chance of survival, then it’s not true that,

(b) (p []-> Oq) & (~p []-> Oq)

If McD does not know what groups the hostages are in (and so does not know that Alice is in the smaller group and does not know that Alice is not in the smaller group), then it is not true that he ought to give each of the hostages a 1/2 chance of surviving.

Oh, well, my point was that since Campbell did not claim that the principle is valid, it is not an objection to point out that it isn’t valid. (Similarly, it is not an objection to point out that there is an abstract countermodel.) I agree with you that if McD doesn’t know what groups the hostages are in, it isn’t true that he ought to give each a 1/2 chance of surviving, but it seemed to me that EGC implied it is true (in that case). It’s now unclear to me what the principle means. It would be nice if someone reading this thread actually endorsed or believed the principle, so we could ask for interpretive help, but as far as I can tell nobody does.

I see. I hope the right interpretation of EGC is consistent with assigning probabilities correctly. It would be an awfully small victory over EGC if the principle entailed that sort of error. Even so, I can’t see much damage to EGC were defenders to modify the principle in the obvious way.

Ok, so let’s see if I can modify it in the obvious way:

(The ‘Cr’ is for ‘Credence’.)

Does that seem right?

That does seem right. It has to be something like,

(EGCr) When deciding whom to save, choose so that your credence of any one’s being saved is the same as your credence of any other’s being saved; subject to that constraint, maximize the expected number of people saved.

or maybe,

EGCr’. When deciding whom to save, always provide everyone with the highest equi-credential chance of surviving.

Maybe that’s a bit more economical. Add the assumption that credence equals objective probability in cases where we possess the objective probability. D. Lewis provides a nice way to do that, but this is probably more than we need to put (explicitly) in the principle.

Anyway I think (EGCr) is in the ball park.

Yours is just as good; I just wanted to make it explicit whose credence is at stake. (Your economical one could be misinterpreted to say that each victim’s credence for being saved must be the same.)

Ok, now: who thinks (EGCr) is correct?

I think EGCr is certainly not correct. Setting aside Ben’s objection, there’s another rather more decisive objection.

Suppose each of our three individuals swallows a lethal poison: Alice swallows poison A, Bob poison B, and Carol poison C. Their only hope of survival is that you supply them with the antidote. You can manufacture only one of two antidotes, X and Y. X is an antidote to both A and B, Y to both B and C. Whichever antidote you make, there’ll be enough for two people. Suppose, finally, that you know all of this.

It might seem that the thing to do is toss a fair coin and make X if it lands heads, Y if it lands tails. But then your credence of Bob’s being saved would be 1, while your credences of Alice’s being saved and Carol’s being saved would each be 1/2. In fact, the only way to equalise the credences is to make no antidote at all, in which case you would be certain that no one is to be saved. So this is what EGCr tells you to do: let everyone die. But that’s absurd.
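The credences under the coin-flip policy can be computed directly (an illustrative calculation; the dictionary encoding is my own):

```python
# Credences in the antidote case under the fair-coin policy (illustrative).
# Antidote X cures poisons A and B; antidote Y cures B and C.
outcomes = {
    "heads: make X": {"Alice": True, "Bob": True, "Carol": False},
    "tails: make Y": {"Alice": False, "Bob": True, "Carol": True},
}
prob = {"heads: make X": 0.5, "tails: make Y": 0.5}

# Each person's credence of being saved = total probability of the
# outcomes in which that person survives.
credence = {
    person: sum(prob[o] for o, saved in outcomes.items() if saved[person])
    for person in ("Alice", "Bob", "Carol")
}
print(credence)  # {'Alice': 0.5, 'Bob': 1.0, 'Carol': 0.5}
```

Bob’s credence is 1 while Alice’s and Carol’s are each 1/2, so the coin-flip policy violates the equal-credence constraint.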

There’s a fairly obvious way to revise EGCr so that it avoids the objection. The revised version is a kind of maximin principle:

MGCr: When deciding whom to save, maximise the minimum credence of any individual’s being saved.

But I don’t think MGCr is correct either, again for reasons other than Ben’s objection. Suppose you must choose between saving one billion people and saving one person (where the one person is not among the one billion). EGCr and MGCr both say that you ought to choose by tossing a fair coin. But that’s also absurd.

Mike,

Your latest example, the one about materialism, is helpful. It avoids my earlier worry. Now I want to hear a plausible normative story according to which the premises are true and the conclusion false.

“You can manufacture only one of two antidotes, X and Y. X is an antidote to both A and B, Y to both B and C. Whichever antidote you make, there’ll be enough for two people”

Well, it isn’t true that I cannot give each an equiprobable chance greater than 0. Off the top of my head, I can certainly use this gamble:

.5(X) x [.5(B), 1(A), 0(C)]

.5(Y) x [.5(B), 0(A), 1(C)]

In this case each has a .5 chance of surviving. But that is not much more intuitive. Let me think more about it.
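Spelled out, the arithmetic of the two-stage gamble checks out (an illustrative reading of the notation; the branch structure is my reconstruction):

```python
# Mike's two-stage gamble made explicit (illustrative).
# Stage 1: a fair coin picks which antidote to make (X cures A and B,
# Y cures B and C). Stage 2: the antidote goes to the non-Bob hostage for
# sure, and a second fair coin decides whether Bob also receives a dose.
from itertools import product

survival = {"Alice": 0.0, "Bob": 0.0, "Carol": 0.0}
for antidote, bob_flip in product(("X", "Y"), ("give", "withhold")):
    p = 0.5 * 0.5  # probability of this branch of the gamble
    survival["Alice" if antidote == "X" else "Carol"] += p
    if bob_flip == "give":
        survival["Bob"] += p

print(survival)  # {'Alice': 0.5, 'Bob': 0.5, 'Carol': 0.5}
```

Equality is bought by sometimes withholding a dose from Bob even when one is available, which is why the result is not much more intuitive.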

Yeah, that’s a good point: maximin is surely a better principle, and yet even pure maximin intuitively gives far too much weight to fairness: we can’t sacrifice even a tiny bit of it for the sake of saving thousands, for instance.

It might be plausible that fairness (in the form of something like EGCr or a maximinish relative) has some importance, to be traded off against other values. To isolate fairness, it would make sense to think about cases where the other values are small or trivial, for instance if it isn’t lives to be saved but small treats to be awarded.

Mike,

Fair enough. There was an unstated assumption in my example. But rather than spell it out, let me give a more abstract description of what I had in mind. Suppose there are only three ‘prizes’: either (1) A and B survive and C dies, or (2) A dies and B and C survive, or (3) all three die. (Here A, B and C are people.) The only lottery over these prizes that gives equal chances of survival to everyone is the one that gives no chance of survival to anyone. Let p be the probability you ‘win’ the first prize, and q the probability you win the second. Then the probability A survives is p, the probability B survives is p+q, and the probability C survives is q. Assuming these probabilities are equal we have: p = p+q = q. But this implies that p = q = 0.
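The same conclusion can be confirmed by brute force (an illustrative grid search of my own over lotteries with probabilities in hundredths):

```python
# Brute-force check of the three-prize argument (illustrative).
# With P(prize 1) = p and P(prize 2) = q, the survival chances are:
#   A: p,   B: p + q,   C: q.
# Search all (p, q) on a 1/100 grid for lotteries giving equal chances.
from fractions import Fraction

equal_chance_lotteries = [
    (p, q)
    for p in (Fraction(i, 100) for i in range(101))
    for q in (Fraction(j, 100) for j in range(101))
    if p + q <= 1 and p == p + q == q
]
print(equal_chance_lotteries)  # [(Fraction(0, 1), Fraction(0, 1))]
```

The only equal-chance lottery on the grid is p = q = 0, the one in which no one survives, matching the algebra above.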

Campbell,

The abstract case is not nearly the case you presented. But more to the point, it is rather obvious that EGC is designed for a limited range of controversial cases. That we can find possible cases in which EGC yields a counterintuitive recommendation does no more than put it in the same league with every other known moral principle (all of which, as you know, do this in spades). This is because EGC is, at best, a subsidiary principle that is applicable (and even here controversially so) under quite specific conditions. What exactly those conditions are is part of the philosophical work. I don’t think, in fairness, that Taurek ever intended more than this. That it does not obviously fail in the well-known case is interesting. What it does in a close range of cases, I don’t know. Applied very broadly, it loses interest altogether. The more interesting question to me is in how many of the narrower range of cases where we thought it gave the wrong answer it in fact gives a plausibly right answer.

Mike,

I’m not entirely sure what your point is. I take it you agree that EGC has an implausible result in the case I described, but you don’t find this very interesting. Fair enough. I happen to think it is interesting (at least interesting enough to warrant posting a comment about it on a blog). But I won’t insist that you find it interesting too.

Campbell,

No, I wasn’t suggesting that it was not interesting. Sorry to give that impression. I was trying to point out that the principle seems to be designed for a limited range of cases. That’s all.

No worries. Point taken.

Thanks everyone, for the great comments.