Many philosophers (such as J. H. Sobel, R. J. Wallace, G. Harman, M. Bratman, and J. D. Velleman) endorse something along the lines of the following normative requirement regarding intention:

R1: It is impermissible/irrational for S both to intend to X and to believe that she will not X (even if she intends to X).

And I think that most (all?) philosophers endorse something along the lines of this requirement of instrumental rationality:

R2: It is impermissible/irrational for S to intend to Y, believe that X-ing is a necessary means to Y-ing, but not intend to X.

But are there any philosophers who endorse the following requirement?


R3: It is impermissible/irrational for S both to intend to X and to prefer some possible world in which S doesn't X to any possible world in which S does X, and also impermissible/irrational for S both to intend not to X and to prefer some possible world in which S does X to any possible world in which S doesn't X.

[Updated 6:39 AM: Please see third comment below for a clearer formulation of R3.]

Regardless, is R3 plausible or not? And are there any other normative requirements regarding intention that seem at least as plausible as R1, R2, and/or R3?

I would welcome any thoughts or citations regarding these issues.

51 Replies to “A Bleg: Normative Requirements regarding Intention”

  1. I wonder if you could clarify what’s supposed to be held fixed when we’re considering possible worlds with respect to R3. If I’m convinced that millions of innocent children will be tortured mercilessly unless I harm my neighbor tomorrow, then I’m going to form an intention to harm my neighbor tomorrow. And I don’t suppose that there’s anything irrational about intending that and simultaneously preferring many possible worlds in which I don’t harm my neighbor tomorrow (because there are lots of possible worlds in which refraining from doing so has no awful consequences). But I suppose that it probably is irrational for me to intend to harm my neighbor tomorrow to avoid the deaths of millions of innocent children, and simultaneously prefer the possible world that is otherwise just as I take the actual world to be, but in which I don’t harm my neighbor tomorrow.

  2. Can you say a little more about what you had in mind by R3–and, in particular, what sense of “possible” you had in mind?
    Read as “…any metaphysically possible world…” R3 seems to me to be clearly not a requirement of rationality.
    A counterexample: I wish it wouldn’t rain tomorrow, but I believe it will rain. Since I believe it will rain, I intend to take an umbrella tomorrow.
    In this case, there is a possible world in which I don’t take an umbrella that I prefer to any possible world in which I do take an umbrella, viz., some (very good!) possible world in which it doesn’t rain. So I violate R3. But clearly I am not irrational. I’m just planning what to do given that I believe the world isn’t how I would prefer it to be.
    If we read the “possible” in R3 as “epistemically possible,” however, then it’s more plausible:
    (R3*) The following is irrational: S intends to X and prefers some world that is epistemically possible for her in which she does not X to every world epistemically possible for her in which she does X.
    (I’m assuming that a world, w, is epistemically possible for you just in case it is consistent with your beliefs that w is actual–or something along those lines. That by itself surely won’t do.)
    This principle, R3*, strikes me as a kind of “enkrasia” requirement (as John Broome would put it), albeit relating intentions to preferences rather than to normative judgments.
    Preferring some epistemically possible world in which you don’t X to every epistemically possible world in which you do X is analogous to judging that you ought not X (in the actual world). So an agent who violated R3* would be analogous to an akratic agent who intends to do something she judges she ought not do. To talk like a decision theorist: such an agent intends to do something which is not the most preferred option in her feasible set.
    Is R3* a requirement of rationality? I think this will depend on how we conceive of preference–and most of the folks writing in this literature haven’t spent as much time thinking about preference as they have thinking about intention, desire, belief, and so on.
    My two cents are as follows. If we think about preference as just another kind of conative state, like desire, then R3* is plausibly not a requirement of rationality. Lots of folks think that it’s not irrational to intend to do something that is not what you most desire. (Perhaps you believe you ought to do it anyway!) However, if we think of your preferences as encoding a kind of all-out or all-things-considered judgment about “where you stand as an agent,” then R3* plausibly is a requirement of rationality. A rational agent’s intentions cohere with her practical standpoint.
    Is that sort of what you had in mind?

  3. I see that I didn’t formulate things well. I apologize. Let me give it another go.
    R3: It is impermissible/irrational for S both to intend to X and to prefer the prospect of S’s performing some alternative act to the prospect of S’s X-ing, and also impermissible/irrational for S both to intend to perform some alternative act and to prefer the prospect of S’s X-ing to the prospect of S’s performing that alternative act.
    The prospect of S’s X-ing is a probability distribution over the various possible ways that the world may turn out as a result of S’s X-ing. Thus, if the laws of nature are deterministic, the prospect of S’s X-ing is just the unique possible world that would be actualized by S’s X-ing.
    Note that the prospect of S’s X-ing is not the value of S’s X-ing nor the expected value of S’s X-ing.
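    In case it helps to fix ideas, here is a minimal sketch (in Python, with hypothetical names) of one way to represent a prospect and to state the reformulated R3 as a check. The preference over prospects is deliberately taken as a primitive input rather than computed from utilities, since, again, the prospect of S’s X-ing is neither the value nor the expected value of S’s X-ing:

```python
# A minimal, non-committal sketch: a prospect is modeled as a finite probability
# distribution over world-descriptions, and the agent's preference over prospects
# is supplied as a primitive relation rather than derived from utilities.

from typing import Callable, Dict, List

Prospect = Dict[str, float]  # world-description -> probability (should sum to 1)

def violates_r3(intended_act: str,
                alternatives: List[str],
                prospect_of: Callable[[str], Prospect],
                prefers: Callable[[Prospect, Prospect], bool]) -> bool:
    """R3 (reformulated): it is irrational for S to intend to X if S prefers the
    prospect of performing some alternative act to the prospect of S's X-ing."""
    px = prospect_of(intended_act)
    return any(prefers(prospect_of(y), px) for y in alternatives)
```

    Nothing hangs on the particular representation; the point is just that R3 compares the prospect of the intended act against the prospects of its alternatives under whatever preference over prospects the agent has.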

  4. I think R3 is true.
    But maybe you can talk me out of it.
    (Also, the new formulation of R3 is definitely better. It’s much easier for me to grasp principles about preferring prospects than principles about preferring particular possible worlds. The last “Thus” clause in your gloss seems wrong to me, but I imagine that is not important.)

  5. Hi Jeff,
    You write: “If I’m convinced that millions of innocent children will be tortured mercilessly unless I harm my neighbor tomorrow, then I’m going to form an intention to harm my neighbor tomorrow.”
    So the prospect of your harming your neighbor includes your neighbor being harmed by you and millions of children not being tortured mercilessly. And I assume that you prefer this prospect to the one where your neighbor is not harmed and millions of innocent children will be tortured mercilessly. If so, R3 holds that you are rational in intending to harm your neighbor.

  6. Hi Jamie,
    I don’t want to talk you out of it. Do you think that it’s on a par with R1 and R2? That is, do you think that it’s a sort of consistency requirement on our attitudes as opposed to some substantive requirement? (I’m hoping that it’s on a par with R1 and R2.)

  7. I think R3 is false.
    Take a case like Smart’s unpleasant promise: you are dying and I promise I will give your money to your son, whom I later learn would spend it all on cocaine and Republican fundraisers. Still, a promise is a promise and I intend to keep it. I even know I might fail to, being rather akratic, but am conscientious enough to take all the reasonable steps to prevent that.
    *Still* I may prefer unambivalently the possible world in which all my efforts fail and I somehow find myself doing something else–giving the money to the needy, say. (If someone else intervened and caused this to happen, I’d be glad, and thank them afterwards.)

  8. Since this is being posted right above a discussion of the self-torturer, it would be remiss not to point out that the self-torturer is a potential counterexample to R3. If Quinn’s description is correct, the self-torturer intends to stop at N, even though he prefers continuing to N + 1 over stopping at N (or “the prospect of ST performing the act of moving to the next setting over …”).

  9. Hi Benjamin,
    Do you prefer, other things being equal, the world to be such that you have taken all reasonable steps to keep your promise to its being such that you haven’t taken all reasonable steps to keep your promise? It seems that you must.
    Of course, given your description, I understand that you prefer the world in which you’ve taken all reasonable steps to keep your promise but failed to the world in which you’ve taken all reasonable steps to keep your promise and succeeded. But unless you want the world to be such that you’ve taken all reasonable steps to keep your promise, I don’t see why you intend to take all reasonable steps.

  10. Hi Sergio,
    Isn’t it being assumed that the self-torturer must be worried that there is some chance that if he doesn’t stop at N, he’ll end up going too far and end up in unbearable pain? So there’s a risk to going past N. Perhaps, though, you’re thinking that there’s no greater risk associated with stopping at N+1. If so, I’m inclined to think that it’s irrational for him to intend to stop at N (as opposed to N+1).
    In any case, I’m thinking of objective probabilities. So there is some objective probability that once at N, the self-torturer will stay at N. And there is some objective probability that once at N, he will continue on to setting N+1 and stop there. And there is some objective probability that once at N, he will continue on to setting N+2 and stop there. And so on and so forth. This is the prospect of his going to N. Likewise, there will be a prospect associated with going to N+1. Now suppose that the prospect of going to N is that there is 100% chance of remaining there. And suppose that the prospect of going to N+1 is that there is 100% chance of going to the highest setting: setting 1000. In that case, I think that the self-torturer would clearly prefer the prospect of going to N to the prospect of going to N+1. And so he should intend to go to N. But if he prefers the prospect of going to N+1 to the prospect of going to N, then he should intend to go to N+1. Or so it seems to me. In any case, once you spell out exactly what the prospect of the act and the prospect of its alternatives are, it seems that he should intend to perform an act if and only if he prefers its prospect to that of the alternatives.

  11. What about Kavka’s “Toxin Puzzle”? In such a case, it’s certainly rational to intend to drink the toxin (doing so will win you $1,000,000) but it doesn’t seem irrational to prefer the prospect of not drinking the toxin (given that drinking the toxin will make you ill, and you will still win $1,000,000 if you fail to carry out what you intend). It may be psychologically impossible to intend to drink the toxin while preferring an alternative, but psychological impossibility doesn’t imply irrationality or impermissibility.

  12. Hi Doug,
    I might intend to take all reasonable steps for either of two reasons. First, there might be a formal norm (along the lines of R2) such that if you intend to X you ought ceteris paribus to take (or intend to take) all reasonable steps to ensure that you X. (I don’t know if there is such a norm, but it seems at least as plausible as R3.) Second, there might be a substantive norm such that if you make a promise you ought ceteris paribus to take (or intend to take) all reasonable steps to ensure that you keep it. (This norm strikes me as the more plausible of the two.)
    I don’t see any reason why either of these norms should further require me to actually want it to be the case that I successfully take these steps. Here too, if you prevent me from taking them, and thereby impede me in keeping the yucky promise, I’d be glad. (Of course, if your intervention still left it possible for me to keep the promise, I would still conscientiously intend to take what reasonable steps were left open to me.)

  13. Hi Max,
    “It’s certainly rational to intend to drink the toxin (doing so will win you $1,000,000).”
    I don’t think that this is clear (or certain) at all. Many, including myself, would hold that although it is rational to want to intend to drink the toxin and to intend to do what might cause you to form this intention, it is not rational to intend to drink the toxin, when drinking it will do you no good and will only make you very ill.

  14. I don’t think you need any such assumption. The ST can rationally believe that he is just as likely to stop at N + 1 as he is to stop at N, but this couldn’t suffice to make it the case that it is rational to continue, because the argument will generalize, and would have as a consequence that it is rational to go to the last setting. I have trouble assessing objective probabilities in cases where we are assuming that ST is rational. If ST is rational, and Quinn is right, the OP that he will stop at N is 1. If we now look at the counterfactual possibility that he wouldn’t, then I would guess that insofar as ST is rational and Quinn is right, the OP he’ll stop at N+1 is again 1 (since this is the closest to the original plan). But you don’t even need it to be 1, it could be just arbitrarily close to 1, and either you conclude that it is rational to stop at N or that the only rational place to stop is the last setting.
    At any rate, many solutions to the ST puzzle conclude that ST must choose counterpreferentially, which would be in violation of R3. I think these are the only plausible solutions. In general, other solutions will be committed to there being a most preferred outcome in the series (at least for a rational ST), which rejects the initial setup of the puzzle and, to my mind, arbitrarily restricts what can count as a rational set of preferences. Of course, not everyone agrees with me here. But it is at least not obvious that any plausible way of dealing with the ST puzzle and similar cases will be compatible with R3.

  15. Hi Benjamin,
    “I don’t see any reason why either of these norms should further require me to actually want it to be the case that I successfully take these steps.”
    But you don’t intend to successfully take these steps, do you? You intend only to take these steps. Thus, your case is a bit like my following the directions to fix the sink when I’m not confident, due to my ineptness, that I will succeed in fixing the sink even if I attempt to follow the directions. What I intend to do is to follow the directions (or to try to follow the directions), not to successfully follow the directions. I don’t intend to successfully follow the directions because that doesn’t seem to be up to me.
    For these reasons, your case doesn’t seem like a clear counterexample to me. You build into the case that you know that you might fail to keep your promise. And given the acknowledged potential for failure, it’s a bit strange for you to intend to keep your promise. Perhaps all you can do is intend to take all reasonable steps to ensure that you’ll keep your promise, which is what your own words suggest.
    Can you think of any example where it is clear that you are intending to do X as opposed to intending to try to do X (or intending to take steps toward ensuring that you do X)? Suppose, for instance, that you promised to push a button. Do you think that in this case you intend to push but prefer the prospect of your not-pushing?

  16. Yes! Suppose you are a very dutiful but humane commander of a nuclear submarine. The news comes in: the Russians have attacked Washington and you are to fire the missiles. Orders are orders and you intend to push the button. But knowing the horror that will result, you wish with all your heart that the world be somehow such that you do otherwise.

  17. Hi Sergio,
    Initially, I thought that you were comparing two act sequences:
    S1: Advance each week for N weeks.
    S2: Advance each week for N+1 weeks.
    And I thought that you were claiming that it was rational for the self-torturer to intend to perform S1 even though the prospect of his performing S2 is preferred to the prospect of his performing S1. But my point was that you never told us what the prospects are. If the prospect of his performing S1 is that he will certainly end up with a lot of money with only tolerable, mild discomfort and the prospect of his performing S2 is that he will certainly end up with a lot of money with terrible agony for which he would gladly trade any amount of money to be rid of, then I don’t see why we should accept that he prefers the prospect of S2 to the prospect of S1. Do you? So you must have some other prospect in mind. Tell me, then, what the prospects of performing S1 and performing S2 are.
    So let X and Y be two alternative acts and let PX and PY be their prospects. Could you tell me exactly what X, Y, PX, and PY are? And for something like PX, I’ll need to know each way the world might turn out as a result of X-ing and the probability, given X-ing, that the world will turn out that way.
    You say, “many solutions to the ST puzzle conclude that ST must choose counterpreferentially, which would be in violation of R3.” I think that what they say is that you ought to perform S1 as opposed to S2 even though you prefer the world in which you have advanced each week for N+1 weeks and stopped there to the world in which you have advanced each week for N weeks and stopped there. But that’s not a violation of R3, for the prospect of performing S2 may not be a 100% chance of ending up in the world in which you have advanced each week for N+1 weeks and stopped there. The prospect may be unbearable pain.

  18. Hi Benjamin,
    Could you tell me why you push? Do you care about following orders?
    Consider this case:
    You are playing chess. The news comes in: unless you move your rook diagonally (which is against the rules) the U.S. will nuke Moscow. Rules are rules and you intend to move your rook horizontally, not diagonally. (This is true despite the fact that you only care about following the rules for instrumental reasons.) But knowing the horror that will result, you wish with all your heart that the world be somehow such that you move your rook diagonally and prevent the horror.
    I can’t help but think that you are irrational in this case precisely because you violate R3.
    The only way that I can make sense of your case is by thinking that you somehow care about following orders for non-instrumental reasons and care about them quite a bit such that you prefer the world in which you follow orders and the horror results to the world in which you don’t follow orders and you prevent the horror.

  19. Doug,
    Can we restrict R3 to cases in which you are certain that if you intend to do it, you will do it?
    Because it does seem like there are cases in which you hope you’ll fail through no fault of your own, because then you’ll have fulfilled your obligations but the thing you don’t like still won’t have happened. (Which is what Benjamin’s examples are examples of.)
    There might be other ways to get around these examples, but the restriction is simple, so if it’s good enough for your purposes maybe you should just accept it.

  20. Right: as the dutiful submarine commander, you indeed intend to push the button because you non-instrumentally value or care about your duty (or your relationship to the military, your country, or whatever), in a way the chess player does not similarly value the game. But plausibly, valuing or caring about something only requires treating certain considerations as reasons for certain actions or attitudes regarding it (or being disposed to perform certain actions or hold certain attitudes under relevant conditions).
    It’s therefore a substantive question what the actions and attitudes involved in caring or valuing a given thing are. I think that in the submarine example, valuing your duty (in a way that could make sense of the case) requires treating the fact that you are ordered to push the button under the circumstances as a conclusive reason to do it (and to intend to do it), but does not require treating the fact that you are ordered to do something as a reason to prefer the prospect that you do it. (Much the same is true in the deathbed promise case.) But maybe you disagree.

  21. Hi Jamie,
    I’m worried that the proposed restriction would be quite far reaching, for I suspect that there are very few acts where the agent is certain that if she intends to do it, she will do it.

  22. Hi Benjamin,
    I take your point and see its force now. If I want to resist your point, I’ll need to tackle head on why we should reject the following despite its initial force:
    “In the submarine example, valuing your duty (in a way that could make sense of the case) requires treating the fact that you are ordered to push the button under the circumstances as a conclusive reason to do it (and to intend to do it), but does not require treating the fact that you are ordered to do something as a reason to prefer the prospect that you do it.”
    Thanks.

  23. I’m a first-time commenter here. Hope it’s alright that I butt in.
    Doug,
    I share your intuition. I wonder if the shift from possible worlds talk to “prospect” talk, while helpful in clarifying, might obscure what (I think) is your point. Perhaps I am mistaken, but I think you mean to hold the world of deliberation fixed. That is, the dutiful button-pusher might wish that she never received the order to push, but that doesn’t change the facts that are relevant for rational deliberation. Given that the order has come down, the button-pusher has a choice between pushing and not-pushing. It is hard for me to see how she might push while preferring the prospect that she doesn’t push (after we discount the worlds where the order never came). Of course, the button-pusher might wish that it was possible to follow orders without the foreseen consequences (as Jamie says), but those possible worlds (prospects) don’t seem relevant to rational deliberation. I think you had it right when you say, “…you prefer the world in which you follow orders and the horror results to the world in which you don’t follow orders and you prevent the horror.” It seems that those are the relevant worlds. Or am I still missing the force of Benjamin’s point?

  24. Hi Brandon (and Benjamin, please see this as well),
    First of all, welcome!
    Second, this is very helpful. You have convinced me that I shouldn’t have been so conciliatory to Benjamin. After all, you’re absolutely right that we have to compare worlds in which the order has been given, and so if there are only two possible worlds (the one where the commander fires the nukes and the one where he doesn’t), then we are just comparing these two worlds:
    W1: The submarine commander receives orders to fire the nukes, he intends to obey that order and, thus, intends to push the launch button, he pushes the launch button, the nukes fire, they hit their target and kill everyone in Moscow.
    W2: The submarine commander receives orders to fire the nukes, he does not intend to obey that order and, thus, does not intend to push the launch button, he refrains from pushing the launch button, the nukes don’t fire, no one in Moscow is killed.
    But suppose that there is some chance that his intention to push the launch button might be ineffective, as there is some small chance (say, 1%) that he will be struck by paralysis just before doing so. In that case, there’s another relevantly possible world:
    W3: The submarine commander receives orders to fire the nukes, he intends to obey that order and, thus, intends to push the launch button, he suffers paralysis just before reaching out to push the button, the nukes don’t fire, no one in Moscow is killed.
    In that case, we would have to compare two prospects:
    P(try to obey): 99% chance of W1 and 1% chance of W3.
    P(not try to obey): 100% chance of W2.
    Now, I gather that Benjamin wants to say that the submarine commander ought to prefer W3 to W1 even though he ought to intend to push. But, nevertheless, it seems (to me) that if the commander should intend to push, then he should prefer P(try to obey) to P(not try to obey). And that’s what R3 holds. R3 doesn’t hold that if the commander ought to intend to push, then he must prefer W1 to W3.
    So it seems that the commander should intend to obey, should prefer both W1 and W3 to W2, and should prefer W3 to W1. But that’s not enough for Benjamin’s purposes. He needs to show that the commander should intend to obey (or to try to obey) but should prefer P(not try to obey) to P(try to obey). But that seems false.
    So, in response to Benjamin, I should say that not only does valuing your duty (in a way that could make sense of the case) require treating the fact that you are ordered to push the button under the circumstances as a conclusive reason to do it (and to intend to do it), but it also requires treating the fact that you are ordered to do something as a conclusive reason to prefer the prospect of your trying to obey to the prospect of your not trying to obey.
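    To make the comparison concrete, here is a minimal sketch (the 99%/1% figures and the ordinal ranking of W3 over W1 over W2 are just the assumptions already in play above). It checks the simple dominance fact that every world in P(try to obey)’s support is preferred to every world in P(not try to obey)’s support, so any preference over prospects that respects this sort of dominance will prefer P(try to obey), which is all that R3 demands here:

```python
# Illustrative only: the probabilities and the ordinal ranking are assumptions
# taken from the comment above, not part of R3 itself.

ranking = {"W3": 3, "W1": 2, "W2": 1}      # higher number = more preferred world

p_try_to_obey = {"W1": 0.99, "W3": 0.01}   # P(try to obey)
p_not_try     = {"W2": 1.00}               # P(not try to obey)

def dominates(p, q):
    """True if every world in p's support is strictly preferred to every world in q's support."""
    return all(ranking[w] > ranking[v] for w in p for v in q)

print(dominates(p_try_to_obey, p_not_try))  # True: prefer P(try to obey)
```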

  25. Putting aside the details of the case of the self-torturer, suppose that one allows that there are some situations in which (1) it is rationally permissible to have intransitive preferences, but (2) if one follows one’s intransitive preferences, then one will end up the worse for it. Then one might, like Quinn, see resoluteness, understood as sticking to a plan that requires one to act against one’s preferences, as rationally required. R3 would then be false. Of course, you could deny that there are any situations that satisfy (1) and (2).

  26. Hi Chrisoula,
    What does it mean to “follow one’s intransitive preferences”? That is, what act or sequences of acts does following one’s intransitive preferences entail performing? I’m guessing that it means, in the self-torturer case, that you perform the sequence of acts consisting of your advancing each and every week, ending up at the highest setting and in such unbearable pain that you would gladly return all the money you receive in order to return to the initial setting. But why then think that R3 is false? Clearly, the prospect of performing that sequence is worse than the prospect of performing various alternative sequences, such as the prospect of your performing the sequence of acts consisting of your advancing the first 10 weeks, but then refusing to advance the next 990.
    You and Sergio both seem to think that these cases involving intransitive preferences present a counterexample to R3. But I’m not following. What is the act, X, and its alternative, Y, such that you should intend to perform Y as opposed to X, but ought to prefer the prospect of your X-ing to the prospect of your Y-ing?
    If we let X equal the sequence of acts consisting of advancing each and every week and let Y equal the alternative sequence of advancing only the first ten weeks, then you ought to intend to perform Y as opposed to X, but you ought not prefer the prospect of your X-ing to the prospect of your Y-ing. So no counterexample there. If, alternatively, we again let X equal the sequence of acts consisting of advancing each and every week (that is, all 1000 weeks) but let Y equal the alternative sequence of advancing only the first 999 weeks, then you ought to prefer the prospect of your X-ing to the prospect of your Y-ing, but you ought not to intend to Y as opposed to X. So there’s no counterexample here, either.
    So when you say “R3 would then be false,” what counterexample do you have in mind?

  27. Wait, I’ve lost track of exactly what the principle means.
    Is the idea that preferring that I fail is not the same as preferring that I perform some other action? So, the sub commander intends to Φ and prefers that he not Φ, but there is no alternative Ψ such that he prefers Ψing to Φing?

  28. Hi Jamie,
    The principle says that, for any act X and any subject S, it is irrational for S to intend to X if there is some alternative Y such that S prefers the prospect of S’s Y-ing to the prospect of S’s X-ing. The prospect of S’s X-ing is a probability distribution over the various possible worlds that could be actual if S were to X.
    “So, the sub commander intends to Φ and prefers that he not Φ, but there is no alternative Ψ such that he prefers Ψing to Φing?”
    It’s not about preferring X-ing to not-X-ing or preferring X-ing to Y-ing. It’s about preferring the prospect of S’s X-ing to the prospect of S’s Y-ing. Of course, all the worlds that could be actual if S were to X are worlds in which S X’s.

  29. So a counterexample to the principle would have to take the following form:
    It is rational for S to intend to X yet there is an alternative Y such that it is rational for S to prefer the prospect of S’s Y-ing to the prospect of S’s X-ing.
    And, of course, I’ll want to hear what X and Y are as well as what their prospects are. So I’ll want to have a list of the worlds that could be actual if S were to perform the act in question and I’ll want to know the probability there is that that world would be actual if S were to perform that act.

  30. Hi Doug:
    Here are the things that ST might intend at N. I assume that this is choice under certainty, so the prospect is just the state of affairs in parentheses.
    (1) Stop at N (Pain at level N & $X)
    (2) Continue at N, then stop at N + 1 (Pain at level N + 1 (indistinguishable from N) & $X + $100,000)
    (3) Continue at N and N + 1, stop at N + 2 (Pain at level N + 2 (indistinguishable from N + 1) & $X + $200,000)
    .
    .
    .
    On Quinn’s proposal, it is rational to choose (intend to perform) act (1) even though (2) is preferred to (1). This seems to be a straight violation of (R3). If the reply is that if ST really prefers (2), she should choose (2), then, by parity of reasoning, she should choose (3) as she prefers (3) over (2), and so forth. Given that the preferences are intransitive, for every act she intends to perform, there is an act that she prefers over the one she intended to perform. Your reply to Chrisoula seemed to assume that the choice must be between only two alternatives, but I don’t know why we should restrict the choice set in this manner.
    I take it that “by following one’s intransitive preferences”, Chrisoula means that you choose according to preference at each choice node. One could have the view that ST has intransitive preferences but it is perfectly rational for her to go all the way to the last setting (and also to switch back to the first setting and no money if she is later given the option). Or one could have the view that intransitive preferences are not rational (or not even possible). But I agree with Chrisoula that if you reject both these views, it’ll be hard to endorse (R3).
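    In case a toy model helps, here is a minimal sketch of the structure being described, with assumed illustrative cutoffs standing in for “not too far” and “too far” (they are not part of Quinn’s case). It exhibits the intransitivity and the point that choosing by pairwise preference at each choice node takes ST to the last setting:

```python
# Illustrative only: EARLY and UNBEARABLE are assumed cutoffs, not Quinn's numbers.
# Pairwise, ST prefers stopping at n + 1 to stopping at n (more money, no felt
# difference in pain), yet prefers stopping early to ending at the last setting.

LAST = 1000
EARLY, UNBEARABLE = 10, 900   # assumed: settings from 900 on are not worth any money

def prefers(a, b):
    """Does ST prefer stopping at setting a to stopping at setting b?"""
    if a == b + 1:
        return True                   # adjacent settings: more money, indistinguishable pain
    return a < UNBEARABLE <= b        # otherwise: any tolerable stop beats an unbearable one

# Intransitivity: each single step up is preferred, but the far end is dispreferred.
assert prefers(11, 10) and prefers(12, 11) and prefers(EARLY, LAST)

# "Following one's intransitive preferences": choose by pairwise preference at each node.
setting = 0
while setting < LAST and prefers(setting + 1, setting):
    setting += 1
print(setting)  # 1000 -- ST ends at the last setting, even though he prefers EARLY to LAST
```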

  31. Hi Sergio,
    That’s helpful. So if I understand things, the proposed counterexample (Quinn’s?) is that it is rational for ST to intend to perform (1), yet it is rational for ST to prefer the prospect of his performing (2) to the prospect of his performing (1).
    If so, I would just deny that it is rational for ST to intend to perform (1). After all, there’s no risk (you stipulate) that if he intends to perform (2), he will continue past N+1. So performing (1) seems clearly sub-optimal.
    Why should I accept that it is rational for ST to intend to perform (1) when he prefers {Pain at level N +1, $X + $100,000} to {Pain at level N, $X}?
    Now I realize that my line of reasoning commits me to saying that, for any N, it is irrational for S to intend to stop at N if S can, as you stipulate, effectively intend to go to N+1 and remain there indefinitely. But, given ST’s intransitive preferences, this seems like the right thing to say insofar as we think that intransitive preferences could be rational. Is there some reason why it’s absurd to take this view?
    In any case, the more interesting cases, as far as I’m concerned, are those cases where there is some chance that even if you were to intend now to go to, say, setting N+1 and stop there, you will end up changing your mind and going beyond setting N+1. And that’s where you need to assess which possible settings you might end up at and the probabilities associated with your ending up at each of those settings.

  32. Here’s a probability distribution: the preferer’s credences. Let that be the distribution for all prospects. I don’t see the difference between preferring X-ing to not-X-ing and preferring the prospect of S’s X-ing to the prospect of S’s not-X-ing.

    So I’ll want to have a list of the worlds that could be actual if S were to perform the act in question and I’ll want to know the probability there is that that world would be actual if S were to perform that act.

    Yeah, I don’t like that kind of thing. Lists of worlds, probabilities of individual worlds. Yech.

  33. Hi Doug:
    I guess I don’t understand why we would say that intransitive preferences are rational if we then conclude that an agent who chooses on the basis of such preferences is irrational no matter what she chooses (is this what you are proposing?). If your preferences put you in a position where whatever you do is irrational, then I would say that they are not the preferences of a rational agent. Generally, when people say that some sets of intransitive preferences are rational, they mean that an agent with such a set of preferences could still choose rationally.
    I am also not sure why you think that the probabilistic case is more interesting. Of course, you could reproduce the same structure with probabilities, if the probabilities of moving up too far are never large enough to offset the gains of continuing to the next stage.

  34. Hi Sergio,
    “I guess I don’t understand why would we say that intransitive preferences are rational if we then conclude that an agent that chooses on the basis of such preferences is irrational no matter what she chooses.”
    We might say that they’re rational because we hold that the betterness relation is intransitive and we think that our preferences ought to track the betterness relation.
    Suppose I hold that I ought to prefer more of what’s good to less of what’s good, and suppose that I hold that souls up in heaven are good. Now imagine a situation in which God will make the number of souls in heaven equal to whatever natural number I pick (and make it zero if I pick no number). Further suppose that it’s rational (or permissible) to do X if and only if there is no alternative to X whose outcome I ought to prefer to X’s outcome.
    Now, in this case, no matter what number, N, I pick, there will be an alternative (picking N+1) whose outcome I will prefer to this act’s outcome. So my preferring more good to less good puts me in a position that whatever I do is irrational. But I don’t think that this shows that my preferences are irrational. Do you want to say that it does?

  35. Hi Doug:
    No, I think that if you allow preference sets with no upper bound, you will have other cases in which you need to allow that it would be rational to choose counterpreferentially. At any rate, I find it very intuitive that if ST stops at a point that is not too far in the series she acts rationally (and if she goes to the end she acts irrationally). And if accepting (R3) commits you to denying this, it seems to me a high cost (but, of course, you might have independent reasons to deny the consistency of the ST scenario as I am describing it).

  36. Hi Sergio,
    For some N not too far in the series, you hold that it is rational for ST to stay at N and not advance to N+1. But don’t you also hold that in any one-shot version of the case in which there are no subsequent opportunities to advance but only the choice either to advance from N to N+1 and collect $1,000 or to stay at N and collect nothing, it is not rational to stay at N? But N can be the same in both cases, and the prospects of advancing and not-advancing are the same in each case. So you have to explain the difference between the two cases in terms of something other than the prospects of advancing and not-advancing. That seems counterintuitive to me. It seems that it’s only the prospects that matter in this sort of case.
    Also, supposing that betterness is intransitive and that ST’s preferences track betterness, you have to hold that it is rational to take what one knows is the worse of two alternatives when it seems that all that matters is the goodness of the alternatives. That seems like a cost as well.
    So my view does hold that, for any N, ST is irrational if he refuses to advance to N+1 when he rightly prefers N+1 to N and can advance to N+1 without risking going so far that he prefers N (where he is now) to where he ultimately ends up. And you find this counterintuitive. Fair enough. I guess I don’t find it that counterintuitive and certainly don’t find it as counterintuitive as saying that it is permissible to stay at N when advancing to N+1 is rightly preferred to N and comes at no risk.
    To my mind the only relevant difference between the one-shot case and the original case (not the one where you stipulate that there is no risk of continuing further than one originally intends) is that in the one-shot case advancing is risk free and in the original case advancing is risky. Thus, in the original case, we just look at the risks and rewards and choose the option with the best prospect.

  37. Hi Doug,
    And here I was thinking I’d gotten away with it!
    I’m worried I might also be losing track of the dialectic a little. Let me give a direct response in this comment and a broader (but maybe digressive) one in the next one.
    First, I wonder how much work is being done by the fact that you formulate R3 in the way you do–as just requiring that there be no other action such that you prefer the prospect of doing it over the prospect of doing what you intend–rather than the stronger way Jamie entertained–as requiring that you prefer the prospect of doing what you intend over the prospect of not doing it (even if you do nothing else). Is the point just that the commander’s preference of W3 over W1 doesn’t violate R3 because R3 requires that he prefer the prospect of his doing something other than pushing the button, yet in W3 this is not the case (since, being paralyzed, he does nothing at all)?
    If so, there’s a really simple fix. Modify W3 as follows:
    W3’: The sub commander receives orders to fire the nukes, he intends to obey that order, and, thus, intends to push the launch button, but he momentarily hallucinates and believes the (obviously, red) launch button to be the (blue) button to its immediate left, pushes that button instead, (accidentally) disarms the nukes, and no one in Moscow is killed.
    If the sub commander should prefer W3’ to W1, I take it he should prefer P(push the disarm button) to P(push the launch button), even though he should intend to push the launch button. (He should presumably likewise prefer P(push the disarm button as part of trying to obey) to P(push the launch button as part of trying to obey), even though he should intend to push the launch button as part of trying to obey.) So the counterexample stands.

  38. Now for the broader response. Doug, you write:
    “So it seems that the commander should intend to obey, should prefer both W1 and W3 to W2, and should prefer W3 to W1. But that’s not enough for Benjamin’s purposes. He needs to show that the commander should intend to obey (or to try to obey) but should prefer P(not try to obey) to P(try to obey). But that seems false.”
    If what I say in the previous comment is right, I don’t need to show this. But I want to defend it anyway, because I don’t want the sub commander’s status as a counterexample to R3 to depend on the claim that he should prefer both W1 and W3 to W2. In particular, I think his valuing his duty (in a way that requires him to treat the fact that he is ordered to push the button as a conclusive reason to do it) should be consistent with his preferring W2 to W1–that is, with preferring the prospect of his intentionally disobeying to the prospect of millions dying. Otherwise, valuing anything would require what is presumably a bizarrely self-centered concern with your own actions and attitudes over what would seem to be much more important facts about others. (I feel like some people used to read Kant’s conception of the value of a good will along these lines–maybe they still do?)
    So, the sub commander can prefer W2 to W1 if rationally intending to do something doesn’t require you to prefer the prospect of your intending to do it over the prospect of not having that intention (or intending to do something else). But I’d think this claim is pretty plausible and uncontroversial–right? (It would seem to be one of the things the Toxin Puzzle does illustrate.)
    Maybe it would help if I said more about why I think the sub commander should have these preferences, and more generally why I want to reject R3. I think it makes sense for someone in the sub commander’s position to be ambivalent, analogously to someone in a dilemma. The difference is that while someone in a dilemma has (roughly) conclusive reasons both to perform a certain action and to perform another incompatible one, the sub commander has (relative to his values, at least) conclusive reasons both to perform a certain action and to prefer an incompatible state of affairs. This doesn’t leave him stuck, since there are no reasons for action competing with his reasons to push the button, but it does leave him conflicted. So, if he does happen to break down and disobey orders, I think he might perfectly sensibly feel both very guilty and very relieved–guilty because of the putatively impermissible action, relieved because of the desirable state of affairs. (Conversely, if he obeys orders, he might see himself as doing what he tragically must, or as not being able to do anything else (in a Martin Luther, “here I stand” sense), in a sense compatible with something else being more desirable even under the circumstances.)
    But this is, again, a substantive claim about moral psychology and I could see how someone might want to resist it.

  39. Hi Benjamin,
    There is something that I don’t like about your putative counterexample. When I’m deliberating about whether to push the disarm button or the launch button, I cannot consider the prospect of my hallucinating and pushing the disarm button as a result of my hallucinating. When I’m deliberating about which button to push I have to assume that which button I push is up to me. And then I consider how the world may turn out as a result of my pushing the launch button and how the world may turn out as a result of my pushing the disarm button.
    What this means (and I take this to be Brandon’s point) is that you must hold fixed everything prior to t when evaluating the prospects of performing various alternative actions at t. So we can hold constant that you were given an order to launch the nukes, but we cannot vary whether or not you hallucinated.
    This means, I take it, that when evaluating P(push the disarm button) and P(push the launch button) we cannot vary across possible worlds resulting from pushing a given button whether an order to launch has been given or whether you have been hallucinating.
    So I’m unclear exactly what the prospects are because it’s unclear whether you mean for me to hold fixed everything up to the time of action. And it’s also unclear how I’m supposed to deliberate about what to do when I’m hallucinating.

  40. Hi Benjamin,
    I didn’t quite follow the broader response. You say, “I think [the sub commander’s] valuing his duty (in a way that requires him to treat the fact that he is ordered to push the button as a conclusive reason to do it) should be consistent with his preferring…the prospect of his intentionally disobeying to the prospect of millions dying.” And then you say, “the sub commander can prefer [his intentionally disobeying to the prospect of millions dying] if rationally intending to do something doesn’t require you to prefer the prospect of your intending to do it over the prospect of not having that intention (or intending to do something else).”
    Is the idea that the prospect of intentionally pushing the disarm button is identical to the prospect of intending to push the disarm button? That doesn’t seem true.
    But I think that I just didn’t follow what you were saying.

  41. I think the problem with Benjamin’s example now is that the alternative action is not an action. It includes “he momentarily hallucinates and believes the (obviously, red) launch button to be the (blue) button to its immediate left,” and hallucinating isn’t an action.
    I don’t think this is quite Doug’s problem with the example, but I may be misunderstanding Doug.
    On the broader response: I also don’t exactly see what’s going on. However, unlike Doug, I do think the prospect of intentionally pushing the disarm button is identical to the prospect of intending to push the disarm button. What is the difference supposed to be? (Let the probability distribution for the prospect be the agent’s credence distribution.)

  42. On the putative counterexample: I thought all I needed was the possibility that the sub commander intend to push the launch button, but (consistently with everything prior to the conclusion of his deliberation) unintentionally push the disarm button instead. Is that right?
    If so, I assume it’s indeed possible to intend to do something and unintentionally do something else, though maybe the hallucination example doesn’t illustrate it well. But there are other examples. Maybe the hallucination occurs after he makes his decision–he just redirects his finger. Or maybe he pushes the button reflexively or spasmodically. Or his less dutiful XO cleverly switches the buttons just as he’s about to push the launch one. Or a mad scientist intervenes and causes him to push the disarm button via remote control.

  43. You need an example where the sub commander ought to intend to push the launch button but ought to prefer the prospect of his performing some alternative act to the prospect of his pushing the launch button.

  44. I assume it’s possible to act unintentionally, and so if he unintentionally pushes the disarm button in one of the ways I describe, he would then be performing some alternative act. If so, why oughtn’t he to prefer the prospect of doing one of these things to the prospect of pushing the launch button?

  45. R3 is supposed to be a principle that can provide normative guidance. You can’t go from (1) “I ought to prefer the prospect of my unintentionally pushing the disarm button to the prospect of my pushing the launch button” to (2) “I ought to intend to unintentionally push the disarm button.”
    So I took it to be implicit that we are talking about intentional actions. To make this explicit, let me rephrase R3:
    For any act X and any subject S, it is irrational/impermissible for S to intend to X if there is some alternative Y such that S prefers the prospect of S’s intentionally Y-ing to the prospect of S’s intentionally X-ing. The prospect of S’s X-ing is a probability distribution over the various possible worlds that could be actual if S were to intentionally X.
    FYI: I’m in class most of the day, so I’ll be slower than usual in replying.

  46. Anyway, let me try to restate to the broader response. If it’s right, whether it’s possible for the sub commander to intend to push the launch button but push the disarm button instead (where this is an action) is a moot point.
    For I want to claim that (relative to his values) the sub commander:
    (a) ought to intend to push the launch button, even though he
    (b) ought to prefer the prospect of his pushing the disarm button (intentionally or not) to the prospect of his pushing the launch button, because he
    (c) ought to prefer the world in which he disobeys orders, intends to push the disarm button, does so, and doesn’t nuke Moscow (that is, W2) over the world in which he obeys orders, intends to push the launch button, and nukes Moscow (that is, W1).
    In other words, (b) needn’t be true because the sub commander wants to have his cake and eat it too–i.e. have the right intention without the bad results. It can be true because he finds the bad results so bad or undesirable that he prefers a world in which he has the wrong intentions to a world in which they come to pass. After all, since it would be silly to find a world in which you get to be dutiful preferable to a world in which millions die, valuing duty should not require that you prefer this.
    Nor need it be, if in valuing duty the sub commander is required to treat the fact that he is ordered to push the button as a conclusive reason to push it (and intend to push it), but is not required to treat this as a conclusive reason to prefer either that he push it or that he intend to push it.
    Does that help?

  47. Hi Benjamin,
    That helps me understand where we disagree. I just don’t have the same intuition. To my mind, holding (a)-(b) is just as implausible as holding that I ought to (i) intend to Y, (ii) believe that X-ing is a necessary means to my Y-ing, and (iii) intend to not-X. After all, (a)-(b) entail that the sub commander ought to intend to push the launch button but ought to prefer the world in which he instead intends to push the disarm button and intentionally pushes the disarm button. So he’ll be intending to push the launch button while simultaneously hoping that he changes his mind and ends up intending to push the disarm button and intentionally pushes the disarm button instead. And this seems to me to be just as much an incoherent set of attitudes as (i)-(iii) is.
    So this is helpful in that it shows that to other minds R3 is much more controversial than say R2. Fair enough.

  48. Yep. I don’t want to beat a dead horse, but I meant the last two paragraphs of my original version of this response (http://peasoup.typepad.com/peasoup/2014/03/a-bleg-normative-requirements-regarding-intention.html?cid=6a00d83452b89569e201a5117bcea8970c#comment-6a00d83452b89569e201a5117bcea8970c) to explain why I think the sub commander’s conflicting attitudes are less like someone who violates (say) R2 and more like someone facing a dilemma–the conflict is genuine, so there’s indeed a sort of incoherence, but it’s an understandable and potentially rational one.
    But I think this is definitely the point where we agree to disagree. (And this is helpful for me, too, because now I have a clearer sense of what I would have to accept if I want to reject R3.)

  49. Re the self-torturer, my sense is that you want to
    deny that (1) the self-torturer can rationally intend to stop at n while preferring the prospect of proceeding to n+1;
    but agree that (2) the self-torturer can rationally intend to stop at n while preferring the option of stopping at n+1.
    So the issue seems to be: Given that, for any n, proceeding to n+1 leaves it completely open that the self-torturer can stop at n+1, shouldn’t the self-torturer prefer the prospect of proceeding to n+1 over the prospect of stopping at n (given the self-torturer’s preference for stopping at n+1 over stopping at n)?
    I think your views concerning how to identify and evaluate options in a deterministic world might make you think this question incorporates a false assumption; but if one is not (yet) convinced by those views, one might think the question is fine and the answer is yes. Since your views are controversial, you might expect some resistance against R3 via this route.
