Consider Newcomb’s Problem: “A psychology professor at your school has a reputation for being brilliant as well as possessed of an enormous fortune she has dedicated to her research. One day you get a request to report to her office at a certain hour. On a table are two boxes. One of them, labeled A, is transparent; in it you can see an enormous pile of $100 bills. The other, labeled B, is opaque. She tells you that there is $10,000 in transparent box A and that in box B there is either $1,000,000 or nothing. She tells you that she is going to give you a choice between:

1. Taking just what is in box B.

2. Taking what is in both boxes.

(Think about what you would choose given this much information.) Then, she tells you that this is part of an experiment. During registration at the beginning of the quarter, you walked under a peculiar device that reminded you of the machines at airports that are used to prevent hijacking. You didn't think much about it at the time. But, she now informs you that this machine was something she designed and that it recorded an instant profile of your basic personality and character traits. On the basis of your profile, she made a prediction about what choice you would make, and she decided what to put in Box B on the basis of this prediction:

1. If she predicted you would take both, she put nothing in Box B.

2. If she predicted you would take only Box B, she put $1,000,000 in it.

(Now, think about what you would do given this much information.) At this point you ask her how accurate her predictions have been. She says that 1,000 students have been given the choice, and she has only been wrong once. In all the other cases, students who chose both boxes got only $10,000, whereas those who chose only box B got $1,000,000. Then she tells you to choose. What do you do?” —Robert Nozick, “Newcomb’s Problem and Two Principles of Choice,” in Essays in Honor of Carl G. Hempel, edited by Nicholas Rescher (Dordrecht, The Netherlands: Reidel, 1969).

For the moment, let’s set aside the question of what you should do and ask what attitudes (beliefs, desires, intentions, and the like) you should have. First, I think that you should believe that by choosing to take what’s in both boxes you’ll end up with more money than you would by choosing to take only what’s in B. After all, you can see that box A has some money in it. And you have good testimonial evidence that box A contains $10,000. You also have every reason to believe that, by choosing to take what’s in box A as well, you don’t thereby magically change what’s in box B. So I think that we should accept claim 1:

(C1) You ought to believe that by choosing to take what’s in both boxes you’ll end up with more money than you would by choosing to take only what’s in B.

And so we should also accept claim 2:

(C2) You ought to believe that your choosing to take what’s in both boxes as opposed to your choosing to take only what’s in B is a necessary means to your ending up with more money rather than less money.

Note that C1 and C2 are compatible with both (C3) you ought to believe that if you take what’s in both boxes, you’ll very likely end up with only $10,000 and (C4) you ought to believe that if you take only what’s in box B, you’ll very likely end up with $1,000,000. So even if, like me, you accept both C3 and C4, that should not deter you from accepting C2 as well.
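
To see why C3 and C4 sit comfortably alongside C1 and C2, it may help to run the numbers. Here is a rough sketch; the 0.999 figure is an assumption on my part, since it simply treats the professor’s 999-out-of-1,000 track record as the probability that her prediction matches your choice:

```python
# Rough arithmetic behind C1-C4. The 0.999 figure is an assumption: it treats
# the professor's 999-out-of-1,000 track record as the probability that her
# prediction matches your choice.
ACCURACY = 999 / 1000
A, B = 10_000, 1_000_000       # contents of box A; possible contents of box B

# C3/C4: what you should expect to walk away with, conditional on each choice.
exp_if_two_boxing = ACCURACY * A + (1 - ACCURACY) * (A + B)   # about $11,000
exp_if_one_boxing = ACCURACY * B + (1 - ACCURACY) * 0         # about $999,000
assert exp_if_one_boxing > exp_if_two_boxing

# C1/C2: whatever amount b is already sitting in box B, taking both boxes
# yields b + $10,000 and taking only B yields b, so two-boxing nets exactly
# $10,000 more in either state of the world.
for b in (0, B):
    assert (b + A) - b == A
```

Both sets of claims can be read off the same numbers, which is why accepting C3 and C4 needn’t stop you from accepting C1 and C2.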

Now, let me just stipulate that the situation is such that you ought to prefer your ending up with more money to your ending up with less money. There might be cases where it would be better if you ended up with less money. But let’s just assume that this is not one of those cases. So we must, given my stipulation, accept claim 5:

(C5) You ought to prefer your ending up with more money to your ending up with less money.

Moreover, it seems that the following is a very plausible normative principle:

(C6) If you ought to prefer E1 to E2 and you ought to believe that your X-ing as opposed to your Y-ing is a necessary means to your getting E1 rather than E2, then you should intend to X and should not intend to Y.

Lastly, it seems to me that we should accept:

(C7) If you should intend to X and should not intend to Y, then you should do X and should not do Y.

So what this all shows, it seems to me, is that those who think that, in Newcomb’s Problem, you should choose only what’s in box B must deny at least one of C2, C6, or C7 (C5, recall, is true by stipulation, so it isn’t a candidate for rejection). For C2, C5, C6, and C7 conjoined entail that you should take what’s in both boxes.

Does rejecting one of these three claims seem like a significant cost to you? And if you are a one-boxer, which of these three would you reject and why? Or do you want to say that this is just a normative dilemma: a case where one cannot perform and have all the acts and attitudes that one should perform and have?

22 Replies to “A Possibly New Take on Newcomb’s Problem”

  1. I’m a little unclear on one thing, Doug. In (C2) and (C5), you use the genitive expression, “your ending up with more money rather than less money.” Can you put that some other way? It looks suspicious to me. More money… than what? (The subordinate ‘rather than less money’ seems to do no work — the contrast with ‘more x’ is always ‘less x’, right? What I’m asking is, more than what, less than what?)
    If you could spell out the actual amounts, instead of ‘more money’ and ‘less money’, it would be clearer, but I think that’s hard to do. I worry that there’s some trick embedded there.

  2. Doug: I’m more than happy to reject (C2). I don’t think it’s true that I ought to believe that choosing to take what’s in both boxes is a necessary means for obtaining more money than by taking one box only. I believe the paradox presupposes backwards causation, and that sincerely intending to take one box only *will* result in you getting more money than two boxes. Because I think I ought to believe the paradox presupposes backwards causation of this sort, I think I ought to believe that taking one box is a necessary means for me to end up with more money — contra (C2).

  3. Hi Marcus,
    Why believe that the paradox presupposes backwards causation? There are other possibilities that would explain the high correlation between the predictor’s predicting you’ll take both boxes and your taking both boxes, such as a common cause (e.g., the personality profile on which her prediction was based).

  5. Hi Jamie,
    I’m making a comparison between choosing what’s in both boxes and choosing only what’s in B. I’ll end up with more money by choosing what’s in both boxes than by choosing only what’s in B: more precisely, I’ll end up with exactly $10,000 more. Do you deny that if I choose both boxes I’ll have $10,000 more than I would have if I had chosen only box B?
    If you want, we can change things as follows (although I think that the original is preferable):
    (C2*) You ought to believe that choosing to take what’s in both boxes is a necessary means to your ending up with an additional $10,000 — additional to whatever is in box B.
    (C5*) You ought to want an additional $10,000.
    (C6*) If you ought to want E and you ought to believe that your X-ing is a necessary means to your getting E, then you should, other things being equal, intend to X. And other things are equal.
    (C7*) If you should intend to X, then you should do X.

  6. Oh, okay.
    Then I think (C6*) is not true, because of the last clause. I don’t think other things are equal.
    I do think that choosing both is a necessary means to getting $10,000 in addition to what’s in box B. And you want and ought to want $10,000 in addition to what’s in box B. But the thing is, if you choose both boxes there will be much less money in box B than there will be if you choose just one box. So it does not seem to me that other things are equal.
    I can see that someone persuaded by the causal reasoning is going to be quite certain that there is some good sense in which other things really are equal, and that a version of (C6) will still be true when that sense is used; it’s just that this is going to be exactly the point at issue between the person persuaded by the causal reasoning and the person persuaded by the evidential reasoning.

  7. Hi Jamie,
    I knew adding an ‘other things being equal’ clause was a bad idea. Would you reject both of the following?
    (C6**) If you ought to want E, you ought to believe that your X-ing is a necessary means to your getting E, and you ought to believe that X-ing will not have any adverse effects and is not intrinsically bad, then you should intend to X.
    (C6b) You ought to believe that X-ing will not have any adverse effects and is not intrinsically bad.

  8. Well, yes, I guess I would reject (C6**). I mean, it in effect says that the outcomes that are connected to your actions by evidence but not cause do not matter. Right? And that’s what is at issue between the evidential theory and the causal theory.
    Have I interpreted (C6**) correctly?

  9. Hi Jamie,
    Maybe that’s right, but I’m not sure. What would you say is the evidentialist’s alternative to C6**? Perhaps it’s this:
    (E6) If you ought to want E (say, utility) and you ought to believe that your X-ing is a necessary means to your getting the most evidentially expected E, then you should intend to X.
    But even E6 doesn’t get you to the conclusion that you should choose only what’s in box B. After all, there is no reason for you to believe that your choosing only what’s in B is a MEANS (let alone a necessary means) to your getting the most evidentially expected E. So the antecedent is false.
    I, of course, concede both that “if you choose both boxes there will [most likely] be” no money in box B and that if you choose only what’s in box B, there will most likely be $1,000,000 in box B, but that doesn’t mean that your choosing only box B is an action by which it is brought about that there is $1,000,000 in B. Choosing only box B isn’t a means to bringing about the outcome in which there is $1,000,000 in B.
    What the evidentialist would need is something like:
    (E6*) If you ought to want E (say, utility) and your X-ing has the most evidentially expected E, then you should intend to X.
    But E6* is false given that the relevant ought here is not the fact-relative ought, but the subjective ought. It is not the case that you subjectively ought to intend to X if you ought to want E (say, utility) and your X-ing has the most evidentially expected E. For suppose that you ought to believe that your X-ing will ensure that you get absolutely no E. In that case, surely, you shouldn’t (subjectively speaking) intend to X.

  10. I’m a little lost.
    You’re right that no substitute principle that uses the idea of ‘means’ is going to be amenable to an evidentialist. ‘Means’ is causal, after all, and so is ‘bringing about’. We evidentialists think causal influence is highly overrated.
    I thought we already had a very good substitute for means/ends principle reasoning, and it was just decision theory. (Conform your preferences to the axioms!) Do we really have to bring back the plodding old dichotomous dogmatism now? If so, I guess the proper evidentialist substitute for “x is a necessary means for y” would be “pr(y|~x) = 0.”

  11. Hi Jamie,
    I should say that I’m a bit out of my depth in that there is a lot that I haven’t read on Newcomb’s problem. Moreover, I’m not well versed in decision theory. So please be patient. I appreciate your help.
    You say that a good substitute for the means/ends principle is to conform your preferences to the axioms. But what do evidentialists use to bridge the gap between preferences and intentions if not a means-end belief? Is it a belief about evidentially expected utility?
    Is the alternative something like this, then:
    (E6**) If you ought to want E (say, utility) and you ought to believe that your X-ing has the highest evidentially expected E, then you should intend to X.
    If so, then my approach is a dead end. The same issues will appear when looking at attitudes as when looking at acts. I don’t find E6** intuitively plausible, but that’s probably just because I’m not an evidentialist.

  12. Just as an aside, since I don’t think you meant this to be important: according to decision theory utility is not really something you want (or ought to want). It is more of a measure of how much you want other things.
    I haven’t thought much about how to bridge the gap between preference and intention. I guess I’d try the simplest approach first: intend the act you prefer. Is there a problem with that?
    Also, are there interesting examples illustrating the gap between preference and intention? It seems like there would be something badly wrong with someone who didn’t intend the thing she preferred. That might be someone who was so irrational that formulating normative advice for her would be pointless. But quite possibly I’m missing something interesting there.

  13. Hi Jamie,
    I doubt that you’re missing something. I think rather that I am probably pursuing a dead end. But let’s see, because I’m still not clear on how the evidentialist should respond.
    Now if you go with “you ought to intend to do what you ought to prefer to do,” then won’t the one-boxer need to hold that you ought to prefer to do the act that has the highest evidentially expected utility? But what I should want is more money, not to perform the act with the highest evidentially expected utility.
    Suppose I choose only what’s in box B. Further suppose that the predictor was wrong and thought that I would choose both boxes. I, then, get $0. Of course, I performed the act that had the highest evidentially expected utility — the act of choosing only what’s in box B. But that’s not, it seems to me, what I care about, nor what I ought to care about. I care, and ought to care, about the money I end up with. And I would have ended up with more money had I chosen to take what’s in both boxes.
    So I guess that I would deny that I ought to prefer to perform the act with the highest evidentially expected utility. I think that I shouldn’t care about whether the act that I perform has the highest evidentially expected utility. Instead, I should care only about whether I ended up with more money than I would have had I performed some alternative act. But I’m guessing that you’ll say that’s just what’s at issue in the debate.
    I was thinking (probably wrongly) that those on all sides of the debate could (or should) agree about how to move from desires to intentions and that there is no reason to desire to perform an act with the highest evidentially expected utility.

  14. I don’t really get it.
    Aren’t all of the oughts indexed to probability distributions? That’s how I think it works, anyway. (Happy to say more about this if you want.) So pick a probability distribution, p, and index the ought to that — no fair switching in the middle. Then surely it will turn out that the act you ought[p] to prefer is the act with the highest expected[p] utility. No? Counterexample?
    If you now pick another probability distribution, q, then it will likely turn out that there is a different act you ought[q] to prefer, and that one has the highest expected[q] utility.
    Do you not like the indexing idea?

  15. I guess that what I’m saying boils down to this: I see why I ought to prefer that my act has the greatest causally expected utility, but I fail to see why I ought to prefer that my act has the greatest evidentially expected utility. And it seemed to me that the claim that I ought to prefer that my act has the greatest causally expected utility was less controversial than the claim that agents ought to maximize causally expected utility. But you’ll point out that the act that we ought to perform and the act that we ought to most prefer are inextricably linked. And so the issue is as much about what we ought to prefer as about what we ought to do, and, thus, my approach makes no headway merely by switching the conversation from acts to attitudes.
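    For what it’s worth, here is how I understand the two quantities in this case. I may well be getting the decision theory wrong, and the 0.999 figure is just the professor’s track record read as a conditional probability:
    ```python
    # A toy comparison (my gloss; corrections welcome) of evidentially vs.
    # causally expected money in Newcomb's case.
    A, B = 10_000, 1_000_000
    acc = 0.999   # track record, read as P(prediction matches choice)

    # Evidential: condition on the act you perform.
    eeu_two_box = acc * A + (1 - acc) * (A + B)   # about $11,000
    eeu_one_box = acc * B                         # about $999,000
    assert eeu_one_box > eeu_two_box              # one-boxing wins evidentially

    # Causal: hold fixed your credence q that the $1,000,000 is already
    # in box B, since your choice cannot affect what is already there.
    def ceu(take_both, q):
        expected_contents_of_B = q * B
        return expected_contents_of_B + (A if take_both else 0)

    # For any credence q, two-boxing comes out exactly $10,000 ahead causally.
    for q in (0.0, 0.5, 0.999):
        assert abs(ceu(True, q) - ceu(False, q) - A) < 1e-6
    ```
    So the disagreement about which act I ought to prefer just reproduces the disagreement about which of these two expectations I ought to care about.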

  16. Neither form of decision theory tells you that you ought to prefer that your act has the greatest expected utility.
    The theories tell you to conform your preferences to the axioms. The two kinds of theories have different sets of axioms, so they give you different advice, although the difference only shows up in very exotic situations.
    Find someone whose preferences conform to Richard Jeffrey’s axioms and I’ll show you someone who prefers the act with the greatest desirability. (He uses ‘desirability’ instead of ‘utility’.) You just follow the axioms, and the maximizing thing takes care of itself.
    I have a chapter of the Oxford Handbook of Rationality (Mele & Rawling) on decision theory and morality. I think I can even send you a nice pdf if you want.

  17. Jamie, I agree with all you said in your last comment, and at some point in my life I used to think that everyone else agreed with these points too. But doesn’t Weirich take utility to be primitive (something like a comparative measure of strength of desire, and strength of desire is not conceptually linked to preference) and a principle roughly like “Choose the act with the greatest (or, such that no other alternative act has, etc. etc.) expected utility” to be a (true) substantive normative principle? I am also pretty sure that he denies that utility should be defined as something like “that which is maximized by rational action”, so I think he would deny that “You just follow the axioms, and the maximizing thing takes care of itself”. At any rate, I haven’t looked back at these parts of Weirich’s book in a while, so I might be mistaken here, and this is obviously not relevant to your debate with Doug, and yet I found myself typing this comment…

  18. Interesting. I did not remember Realistic Decision Theory (the only one of Weirich’s books I have read) that way. I have it on my shelf, but in my office, where I am not.
    However, even someone who thinks of utility as primitive, and believes it is a substantive principle that we are to maximize its expectation, can’t deny that the maximization takes care of itself if only we will conform to the axioms. I mean, that’s a theorem. (In fact it’s the theorem, you might say.)

  19. Jamie, of course you are right that following the axioms will suffice to maximize utility. I thought you had meant to say that *by definition*, you’ll be maximizing utility by following the axioms, since utility is defined as the quantity that is maximized when your preferences conform to the axioms (but perhaps I was reading too much into what you were saying). This is different (I think) from inferring, via the representation theorem, that by conforming your preferences to the axioms you maximize utility.
