We are pleased to present the next installment of PEA Soup's collaboration with Ethics, in which we host a discussion of one article from an issue of the journal.  The article selected from Volume 121, Issue 3 is Tom Dougherty's "On Whether to Prefer Pain to Pass" (open access here).  We are very grateful that Caspar Hare has agreed to provide the critical precis of Tom's article, and his commentary begins below the fold.

Tom’s paper is superb. I recommend that you go ahead and read it.

As you will see, his topic is future-bias with respect to pain: the sort of attitude that will lead you, for example, to prefer, other things being equal, that you experienced two hours of pain yesterday rather than that you will experience one hour of pain tomorrow.

Conventional wisdom has it that this attitude is not action-guiding in realistic contexts. The characteristic preferences are between states of affairs with different past components (e.g. a state of affairs in which you suffered 2 hours of pain yesterday and a state of affairs in which you suffered no pain yesterday). But, outside of science fiction, we are never in a position to bring about states of affairs with different past components. Outside of science fiction, we cannot change the past.

Tom’s first contribution is to show us that conventional wisdom has it wrong. If you are risk-averse then you will act to protect yourself against the possibility of REALLY BAD things happening. But in some situations in which you are in a position to protect yourself against the possibility of REALLY BAD things happening, what things you consider REALLY BAD will depend on whether you are future-biased. In these situations, if you are risk-averse and future-biased then you will act one way, if you are risk-averse and future-unbiased then you will act another way.

Tom’s second contribution is to show us that sometimes, by acting on future-bias and risk-aversion, you will work to your own acknowledged disadvantage. He imagines a situation with this general form: You have options A and B at t1, options C and D at t2. If you are future-biased and risk-averse then, at t1, you prefer AC to BC, AD to BD (in something closer to English: you would rather take option A, irrespective of what option you will later take). And, if you are future-biased and risk-averse then, at t2, you prefer AC to AD, BC to BD (you would rather that you take option C, irrespective of what option you previously took). But, throughout, you prefer BD to AC (throughout, you would rather that you take B-and-then-D, than that you take A-and-then-C). In this situation, if you take A at t1 and C at t2, then you are acting on your future-bias at t1 and t2, but working to your own acknowledged disadvantage by taking A-and-then-C rather than B-and-then-D.
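The structure is easier to scan in schematic form. Here is a minimal sketch of the pattern (my own rendering, in Python; the labels are just the plan names used above):

    # The pain-pump preference pattern, as described above (illustrative only).
    # A plan pairs the t1 choice ("A" or "B") with the t2 choice ("C" or "D").
    prefers_at_t1 = [("AC", "BC"), ("AD", "BD")]   # at t1: take A, whatever comes later
    prefers_at_t2 = [("AC", "AD"), ("BC", "BD")]   # at t2: take C, whatever came before
    prefers_throughout = [("BD", "AC")]            # yet BD beats AC at every time

    # Acting step by step on the locally dominant options:
    t1_choice = "A"                  # A dominates B at t1
    t2_choice = "C"                  # C dominates D at t2
    print(t1_choice + t2_choice)     # "AC", even though BD was preferred throughout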

What should we make of this? The weak conclusion to draw is just that sometimes, when you lack the power to self-bind (in this case: the power, at t1, to prevent yourself from taking option C at t2), it is undesirable to be future-biased. This is no great news. For any attitude you might have there are situations in which it is undesirable to have that attitude. When your head will explode if you love your mother then it is undesirable for you to love your mother.

Tom tentatively pushes us towards a much stronger conclusion, the conclusion that it is a rational defect in you to be future-biased. The difference between his example and the loving-your-mother example is that in the former the outcome that is undesirable by your own lights comes about as a result of your own free choices, choices that you endorse throughout. You are acting in a disunified way. But rational people never act in a disunified way.

Why do rational people never act in a disunified way? Tom doesn’t really spell this out in detail. The basic reasoning, I take it, is this: In the situation he describes, if you are future-biased and risk-averse then, whatever you do, you are an appropriate subject of rational criticism. If you fail to take option A at t1 then the critic can say “Why didn’t you take option A? You preferred (irrespective of what you would later do) that you take that option.” If you fail to take option C at t2 then the critic can say “Why didn’t you take option C? You preferred (irrespective of what you would later do) that you take that option.” If you take A-and-then-C then the critic can say “Why didn’t you take B-and-then-D? You preferred throughout that you take that option.” But rational people are not appropriate subjects of rational criticism, so in this situation, if you are time-biased and risk-averse then, whatever you do, you are irrational. And rational people are not such that they would be irrational if put in the wrong situation. So, simpliciter, if you are time-biased and risk-averse then you are irrational.

As a way of kicking off the discussion, I will say that I am not entirely persuaded by this reasoning. There are other cases in which people, by acting on faultless preferences (endorsed throughout) in a step-by-step way, work to their own acknowledged disadvantage. Consider Satan’s Apple (due to Arntzenius, Elga and Hawthorne – this is the diachronic version of their case):

Satan cuts his apple into infinitely many slices and offers them to Eve, one by one, over the course of an hour: one at 11am, another at 11:30, another at 11:45, and so on. Eve will make infinitely many decisions, knowing that she has no powers of self-binding, and that no decision she makes will influence any later decision she makes. If, at noon, she has eaten infinitely many slices of apple, then she will Fall. Otherwise she will remain in Eden.

Eve strongly prefers Eden to Earth. And, all-other-things-being-equal-Eden-and-Earthwise, she prefers to eat more apple rather than less apple. What should she do?

It appears as if, whatever Eve does, she will be an appropriate object of rational criticism. If she fails to eat any one slice then the critic can say “Why didn’t you eat that slice? You preferred (irrespective of what you would later do) that you eat it, and you knew that your eating it would have no bearing on whether you ate finitely many or infinitely many slices.” If she eats every slice then the critic can say “Why did you eat infinitely many slices? You preferred that you eat finitely many slices.”

But it strikes me that it would be wrong to conclude that there is something awry with Eve’s preferences: that she is rationally defective in preferring Eden to Earth, and more apple to less apple.

Where does the reasoning to this conclusion go wrong? This is a hard question. Let me put an answer on the table: If Eve takes all the slices then she is NOT an appropriate subject of rational criticism. We are rationally criticisable for the accessible options we take or fail to take. For an option to be accessible to Eve, it must be the case that, at some time, if she were to decide to take the option then she would take the option. But there is no time such that if Eve had decided at that time to take finitely many slices then she would have taken finitely many slices. By hypothesis her later decisions were causally isolated from her earlier decisions.

The same can be said for you in Tom’s case. If you take AC, then we cannot rationally criticize you for failing to take BD, because BD was never an accessible option for you. There is no time such that, if you had decided at that time to take BD then you would have taken BD. By hypothesis your later decision was causally isolated from your earlier decision.

24 Replies to “Ethics Discussions at PEA Soup: Tom Dougherty’s ‘On Whether to Prefer Pain to Pass,’ with commentary by Caspar Hare”

  1. I haven’t had a chance to read Hare’s commentary but will, after running some errands, do so. In the meantime, I thought that I should post the main questions that I had as I read Dougherty’s fascinating article.
    I think that I might be inclined toward the self-binding response (pp. 533-534). I think that it is permissible to take Help Early on Monday only if, on Monday, you can see to it that you won’t thereby be turned into a pain pump (that is, only if, on Monday, you can form an effective intention/resolution to refuse Help Late on Wednesday). Likewise, Jones should trade her pint of vanilla and some money for the pint of strawberry only if she can see to it that in making this trade she won’t thereby be turned into a money pump (that is, only if she can form an effective intention/resolution to refuse one of the future trades that would result in her ending up with the same pint of vanilla and less money). The only thing that Dougherty says against this response is that it implies that “a money pump argument cannot show that intransitive preferences are irrational” and that “denying that money pump arguments show intransitive preferences [to be] irrational would arguably fly in the face of orthodoxy.” But to deny that a money pump argument can show that intransitive preferences are irrational doesn’t mean that we have to accept that they are rational. And that denying some proposition is against orthodoxy seems like no good reason not to deny that proposition.
    So I have two questions: (1) is there any *good* reason to reject the self-binding response? And (2) what is Dougherty’s view on how the permissibility of larger sets of actions relates to the permissibility of smaller sets of actions? My own view (very roughly speaking) is that a non-maximal set of actions, x, is permissible if and only if, in performing x, the agent can see to it that she performs some permissible maximal set of actions, where y is a maximal set of actions if and only if there is no other available set of actions, z, such that performing y involves performing z but not vice versa. I have the suspicion that once we realize that this is the way that non-maximal acts are to be assessed, something like the self-binding response, if not the self-binding response itself, will be unavoidable.

  2. Caspar, thanks so much for the excellent precis and discussion points.
    Tom, this really is a fascinating and important paper. I have a minor quibble and then a naive question, both having to do with your discussion of the possible non-irrationality of time-biased and risk-averse preferences on pp. 534-535. First the quibble: on 535, you say, “This response [about unified practical agency as thwarting possible diachronic non-irrationality] rests on our seeing ourselves as agents who persist over time. It could be that this view is wrong. It may be that different time-slices are really different agents….” But this moves too quickly from how we conceive ourselves to how we might be. These are not contradictories (unless by “this view,” you mean our self-conception, and not the theoretical position you’ve just articulated).
    Here’s the naive question: might it be possible for there to be two different perspectives from which rationality/irrationality assessments are made? One may be the first-person, synchronic deliberative perspective (where I view myself as a current time-slice of some limited duration whose only present aims matter), whereas the other may be a kind of third-person, diachronic perspective of the agent as a whole (which I myself may take up). Perhaps, then, the charge of irrationality is motivated in your cases only from the latter perspective, but not the former.

  3. This was a fun paper to read. Thanks to the editors at Ethics and those in charge of this blog for making it available. I’m admittedly not familiar with the literature on time-bias or risk aversion, so hopefully my comments will be on point.
    As I was reading, I couldn’t help but feel that there was something off about the thought experiment. I think the problem is that we aren’t given any information about the agent’s knowledge of the situation, and this seems relevant to me. Does the agent know in advance of taking the first pill that a second pill will be offered and that s/he will be stricken with amnesia? If s/he does, then it seems like s/he should recognize that it has the potential to turn him/her into a pain pump, and this potential seems like something that the risk averse person would factor into his/her risk calculation. As a result, it’s not clear to me that the risk averse person would choose to take the first pill, and if s/he did, then s/he would be irrational if his/her preference was to avoid risk.
    On the other hand, if the agent doesn’t know the full situation, then I think Caspar’s objection holds. We can’t count the agent as irrational for choosing A over D since s/he was never explicitly faced with that choice.
    One last point: I wonder if there’s not another option to be included at the end of the paper. Could it be that it’s rational to be both risk averse and time-biased but that there must be times when one preference overrides the other?

  4. This is a really nice paper, but I fear that Tom misrepresents the status of the risk aversion hypothesis that he needs. It is simply false that rejecting his example by holding that risk aversion is irrational commits one to the absurd claim that buying fire insurance is irrational. The rationality of buying fire insurance can be explained by the decreasing marginal utility of money. Agents who value money in this way are sometimes called risk averse within the context of standard expected utility theory, but that variety of risk aversion will not lead to pain pumps, nor will an analogous non-linear valuing of pain (decreasing marginal utility of anti-pain). To get his result, Tom needs to appeal to a kind of risk aversion that is extremely controversial. The representation theorem of von Neumann and Morgenstern shows that what must be rejected is the Independence Axiom: if outcome A is preferred to outcome B then, for any outcome C, and for any probability p, getting A with probability p and C with probability 1-p is preferred to getting B with probability p and C with probability 1-p. But many people find the Independence Axiom very compelling.
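    (In symbols, with notation of my own: write X ≻ Y for “X is preferred to Y” and [p, X; 1−p, C] for the lottery that yields X with probability p and C otherwise. The axiom then says: if A ≻ B, then for every C and every p in (0, 1], [p, A; 1−p, C] ≻ [p, B; 1−p, C].)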

  5. I like the example (and the paper) a lot. My sense that there is a serious rationality problem for time-biased agents is not much diminished by the suggested replies. Cross-time irrationality never seems quite as bad, to me, as at-a-time irrationality, but as dynamic rationality problems go, Tom Dougherty’s problem for time bias looks pretty bad.
    I want to think a little more about the cogency of dynamic dutch books in general and this one in particular. For now, just a few quick hits:
    1. Doug, can you explain why the patient is irrational to take the pill (on Monday) if he cannot bind himself (form an effective intention)? He prefers the prospect of taking the pill to the prospect of not taking it. Why is it irrational for him to choose the prospect he prefers?
    2. Nate, the patient can know the whole situation (everything we know); I don’t see how this helps. The example is confusing, but one thing that’s important to remember is that the patient will be offered the second pill (on Wednesday) whether or not she took the first pill. (The example would not work if the Wednesday pill were only offered to patients who took the Monday pill; in that case, as long as the patient knew this, it would be plainly irrational to take the Monday pill.)
    3. Mark, no, it’s just a matter of marginal utility for hours of pain (more intuitively, increasing marginal disutility). The patient does not violate the independence axiom. Suppose the patient doesn’t care at all, at any moment, about past pain, so we can ignore that utility. The utility function for hours of future pain can be this:

    u(h) = –(2^h).

    (To make the numbers easier to handle try letting ‘h’ stand for the duration in half hours instead of hours.)
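    (A quick sketch, assuming exactly this utility function, to make the point concrete; the arithmetic is plain expected utility theory:)

    # u(h) = -(2^h) for h hours of future pain; ordinary expected utility,
    # no independence violation anywhere.
    def u(h):
        return -(2 ** h)

    sure_hour = u(1)                    # -2: one hour of pain for certain
    gamble = 0.5 * u(2) + 0.5 * u(0)    # 0.5*(-4) + 0.5*(-1) = -2.5
    print(sure_hour > gamble)           # True: the sure hour is preferred, i.e.
                                        # risk aversion about pain from the convex
                                        # disutility curve alone.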

  6. Hi Jamie,
    I’m assuming that the patient prefers (and ought to prefer) most of all not to suffer one more minute of pain and to gain nothing for it. If he takes both pills, then he will suffer one more minute of pain and gain nothing for it. Now, my view is that we must determine the rational permissibility of non-maximal sets of actions by determining which available maximal sets of actions are permissible and then employing a distribution principle (the one that holds that permissibility distributes over conjunction) to determine the permissibility of those non-maximal sets of actions. And I also hold (very roughly) the view that the relevantly available maximal sets of actions are those that are securable at the relevant time. So my view (very roughly speaking) is that a non-maximal set of actions, x (e.g., taking the pill on Monday), is rationally permissible if and only if, in performing x, the agent can see to it that she performs some permissible maximal set of actions. There is, I’m assuming, no permissible maximal set of actions in which he takes both pills. He can, however, see to it that he doesn’t take both pills by not taking the one pill on Monday. So unless he can also see to it that he doesn’t take both pills by taking the one pill on Monday while simultaneously intending not to take the next pill on Wednesday, it is rationally impermissible for him to take the pill on Monday.
    Of course, I’m talking about objective rationality. But let’s just assume that the patient has all the relevant information since Dougherty doesn’t say whether he does.
    Does this help explain why I think that the patient is irrational to take the pill (on Monday) if he cannot bind himself (form an effective intention)?
    Bottom line: I think that in determining the deontic statuses of individual actions we must look to the deontic statuses of larger sets of actions that do or do not include them. I definitely think that it is a mistake to agglomerate. Thus, I just want to hear what Dougherty thinks the relationship between the rationality of smaller and larger sets of actions is.

  7. Jamie,
    I agree with everything in your comment, and in particular I take back what I said about Independence violation. However, I still think that it is misleading to compare the kind of risk aversion Tom needs to the risk aversion of someone who buys fire insurance. While buying fire insurance is clearly the smart thing to do, once we factor out such things as the unpleasantness of uncertainty / thrill of gambling and the fact that being in pain might stop you from doing other worthwhile things, I can’t see any reason to prefer an hour of pain for sure to a 50% chance of two hours of pain and a 50% chance of no pain. I think I am perfectly indifferent between the two options. In fact, Tom’s example might be seen as a reason not to be risk averse in this way. But I worry that it is the wrong kind of reason in something like the way that non-depragmatized Dutch book arguments are the wrong kind of reason for being probabilistically coherent.

  8. To Everyone:
    Can anyone explain to me why we should think (as Dougherty reports is the orthodoxy) that the money pump arguments show that intransitive preferences are irrational? Generally speaking, it is widely recognized that the fact that having some attitude would bring about bad consequences is not a good reason for thinking that it is irrational to have that attitude. That’s, as they say, the wrong kind of reason. And that’s all the money pump arguments show, right? Or am I missing something?
    As Caspar points out, “For any attitude you might have there are situations in which it is undesirable to have that attitude. When your head will explode if you love your mother then it is undesirable for you to love your mother.” But surely this doesn’t show that it is irrational for you to love your dear mother.
    So Dougherty dismisses the self-binding response quite quickly, it seems, because the response implies that “a money pump argument cannot show that intransitive preferences are irrational” and that “denying that money pump arguments show intransitive preferences [to be] irrational would arguably fly in the face of orthodoxy.” But, of course, the fact that some proposition is contrary to orthodoxy is not a good reason to dismiss a view, and I don’t even understand what reasons there are to accept the orthodox position in the first place.

  9. Doug,

    “I’m assuming that the patient prefers (and ought to prefer) most of all not to suffer one more minute of pain and to gain nothing for it.”

    Hm. When? The patient’s preferences for complete courses of events change over time. But as far as I can tell, there is no time at which the extra minute scenario is the patient’s least favorite. Is that what you meant?
    Look at the patient’s preferences on Monday. Here are the possibilities. (They are not all available to the patient at that moment, in your sense, but we can return to that later.)
    Take the Monday pill and not the Wednesday pill. (M~W)
    Take the Wednesday pill and not the Monday pill. (~MW)
    Take both pills. (MW)
    Take neither. (~M~W)
    Now, these are prospects, not outcomes, one might say. (For outcomes we also have to include the choices made by ‘nature’, as decision theorists say — that is, another variable for whether the patient gets the early or late treatment.) Does this matter? I will proceed as if it doesn’t, for my purposes. Correct me if I’m wrong about that.
    The scenario in which the patient gets the extra minute of pain for no compensation is (MW). But on Monday, the patient prefers this prospect to (~MW). So, I think, it isn’t true that the patient prefers most of all, on Monday, to avoid the (MW) prospect.
    Now let’s look at what is available. Suppose this is a patient who cannot form the binding intention. Then it seems plausible that on Monday the patient fully expects to take the Wednesday pill. (Right?) This means that (M~W) and (~M~W) are unavailable, in your sense. So only (MW) and (~MW) are available. And of these, on Monday the patient prefers (MW). So it seems rationally compulsory to take the pill.
    Of course, it may be that the time-biased preference pattern is irrational. I only mean that insofar as this patient’s preferences are rationally permissible, it is rationally compulsory to take the pill on Monday.

  10. Mark,
    That’s an interesting point, about the ‘wrong kind of reason’. I haven’t decided yet what I think about the dutch book argument.
    As to risk aversion for hours of pain, I am inclined to think that there is nothing compulsory about one or the other (risk averse, risk neutral) attitude. They both seem permissible, on the face of it.
    Note that other sorts of experiences could work. What’s important is that there be a type for which time bias seems natural and risk aversion is rational. So maybe boredom would work? Hours of watching Andy Warhol movies? (I have to admit that Tom’s placing the two experiences on Tuesday and Thursday made me think of teaching an introductory philosophy class. Shame on me.)

  11. Hi Jamie,
    You ask, “When?” That’s a good question. So I think that we must evaluate maximal sets of actions first in order to evaluate non-maximal sets of actions. And let’s assume (and this is just for the sake of argument) that we should evaluate maximal sets of actions in terms of whether or not they maximally satisfy the agent’s rational preferences. Now, if time-biased preference patterns are rational, then which maximal set of actions the patient will rationally prefer will change over time. So what will maximally satisfy his rational Monday-preferences will not necessarily be what will maximally satisfy his rational preferences over time. Now, as I’ve said, I want to hold that whether he is rationally permitted to take the pill on Monday depends on whether, in doing so, he can see to it that he performs some permissible maximal set of actions. But it seems wrong to me to assess maximal sets of actions only in terms of Monday-preferences (why just in terms of Monday-preferences as opposed to Wednesday-preferences?). Rather, I’m thinking that it is more plausible to suppose that if we’re going to evaluate maximal sets of actions in terms of preferences at all, it should be in terms of rational preferences over time. That is, we should accept something along the following lines: a maximal set of actions is permissible (on our working assumption that they are to be evaluated in terms of rational preference satisfaction) iff performing that set of actions would maximally satisfy his rational preferences over time. And I’m thinking that his taking the pill on Monday in any instance in which he is unable to bind himself against taking the pill on Wednesday won’t be what maximally satisfies his rational preferences over time. Maybe I’m mistaken to think this. But what I’m thinking (and this is all just off the cuff) is that if it were the case that he could maximally fulfill his rational preferences by performing a maximal set of actions that involves his performing MW, then it would be a mistake to characterize MW as an instance in which the patient suffers an extra minute of pain and gains nothing for it. For if he gains the maximal satisfaction of his rational preferences over time, he gains something.

  12. Hmmmm.
    We are starting from very different ideas of what makes something rational. I think that if time bias is rational, then certainly whether it is rational to Φ at t depends on whether Φing satisfies (expectationally speaking) the agent’s preferences at t.
    But I am interested to hear more about your view. Can you explain what you mean by “satisfies his preferences over time”? If someone prefers that p at one time and that not-p at another time, what will determine whether p or not-p satisfies his preferences over time?

  13. Thanks to everyone for such great comments, and doubly thanks to Caspar: first, for his precis, and second, for his invaluable help with writing the paper in the first place. It grew out of a paper for a seminar of his. From the amount of time and effort that Caspar put into meeting with me about it (as he does with students in general), you would never have guessed that he was on a demanding tenure track. This recently ended with him deservedly getting tenure: so congrats to him!
    I’ll reply to everyone individually in turn. To anticipate, I’ve written these out ahead of time, and they already seem far too long, so my apologies that I’ll no doubt fail to respond adequately to some great comments.

  14. @Caspar. I haven’t got anything compelling to say about the Satan’s Apple case, but I think it’s noteworthy that it’s a puzzle about an infinite number of choices. My hunch is that the correct diagnosis of what’s going on will make an essential appeal to this fact. I don’t know what the correct diagnosis is, so I suspect you may not be very impressed with this hunch.
    In any event, your proposal is that “We are rationally criticisable for the accessible options we take or fail to take”, which is a general proposal that covers both finite and infinite cases. On the face of it, this sounds very plausible, but there are two controversial bits. First, what counts as an option? You include both individual decisions and sets of individual decisions. Second, what counts as “accessible”? There is obviously *one sense* in which declining both pills is accessible to you. (There isn’t a gun at your head etc). But the sense that you think is relevant is, roughly,
    option X is accessible to you iff, were you to choose to take X, you would take X.
    This glosses over the fact that you may be responsible for the latter counterfactual failing to hold (as is the case in money pump cases). When you are responsible, then it looks like you are at the very least open to a discussion about your rationality (which you wouldn’t be if a third party was responsible for the counterfactual failing to hold).
    I also note that your proposal will entail that someone with intransitive preferences is rationally criticisable when they are money pumped. (If you’re interested, I can spell out how to show this with a case, but to give you a hint: think about the standard ice cream case, but suppose that Jones starts with a pint of each type of ice cream, and then is offered the usual trades). Of course, this bleeds into my discussion about money pumps with Doug below.

  15. @ Doug. I’m perhaps more sympathetic to the self-binding line than I may have implied, and I hadn’t meant to be dismissive of it. The brevity of my discussion of self-binding reflected how little of worth I had to say about the issue, rather than how seriously I took it.
    Also, I think we’ve been thinking about the “self-binding response” in different ways. I had in mind the response that it is rationally permissible to take both pills, and the pain pump argument fails to show otherwise: it only shows that people who can’t self-bind can be exploited. This is different from your interesting proposal that it is rationally permissible to take the first pill only if you can bind yourself into refusing the second pill. I share Jamie’s worry about this proposal. Suppose you cannot bind yourself into refusing the second (as I assume most of us are unable to). I assume that, on your view, you are then rationally required to refuse the first. But this is strange: you know that whether or not you take the second pill, taking the first pill will better satisfy your current preferences. Why, then, are you rationally required to refuse the first pill?
    Of course, this doesn’t detract from the main thrust of your challenge to the claim that money or pain pump arguments expose irrationality. (NB, it is my impression that it is orthodoxy among decision theorists that money pumps expose irrationality. I’m more than happy to be corrected on this point if I’m wrong, as I don’t have any great sociological insight here, beyond a few conversations with decision theorists.) I haven’t got a good argument one way or the other in response to your challenge, and certainly nothing that counts as an explanation of why money pump arguments expose irrationality. (I wonder if a deeper explanation is even possible?) Still there are some ways of pumping the intuition that these arguments do.
    Consider intransitive, ice-cream-loving Jones. A sympathetic friend says to her: “you do realize that if you trade this initial pint of vanilla ice cream, you are embarking on a series of trades that will leave you with this very pint of ice cream but a dollar worse off?” Jones acknowledges that this will be the outcome. But she points out that this particular trade is to her advantage: it best satisfies her preference for Strawberry over Vanilla. At the second trade, the friend implores her to stop: she’s still on the path to a sure loss! But Jones says that she sees what the overall result will be, but she doesn’t want to stop trading. She likes Chocolate better than Strawberry. They have a similar exchange at the third trade, but Jones makes it, leaving her back with the original pint of vanilla and with one less dollar. The friend says, “can’t you see how foolish you’ve been?” Jones replies, “I’m not foolish in the least: at every point I best satisfied my preferences!” Of course, there is no reason to expect that there is only one cycle of three trades. This cycle could be repeated again and again. Suppose Jones begins as a billionaire with a pint of ice cream. It is predictable that she will end up penniless with that pint as her only earthly possession. Moreover, Jones herself can predict that this will happen, and this trajectory is all of Jones’s doing: she makes a series of free choices. Nevertheless, she goes ahead and willingly ruins herself.
    I get a strong intuition that there is something rationally defective about Jones. As a result, I’m tempted to think money pump arguments that target synchronic preferences work. In turn, I’m tempted to reject responses to the pain pump argument (which targets diachronic preferences) if these responses also apply to this money pump argument. Perhaps you don’t share the original intuition, or at least don’t find it so compelling. Also, I agree that if we’re eventually left with balancing intuitions, then this is a relatively unsatisfying card to play. I’d like to be able to offer you a deeper explanation, but I’ve got nothing.
    As for the general issue of “wrong kinds of reasons” that you and Mark (in a different context) mention: I agree that a vulnerability to losing money is not what *constitutes* Jones’s irrationality. I only claim that it is *indicative of* her irrationality.
    You ask what I think about the “relationship between the rationality of smaller and larger sets of actions.” In light of the intransitive money pump cases, the view I’m inclined towards is that there are sui generis rational constraints on sets of actions that don’t reduce to rational constraints on individual actions. I don’t think that any of the ice cream preferences is irrational in itself, or that acting on any one of them by itself is irrational. But I think a set of intransitive preferences is irrational, and acting on this set is irrational.
    You say that you want to evaluate sets of actions according to someone’s “rational preferences over time”. I’m not entirely sure I have a grip on this (and I suspect Jamie might not either). Using Jamie’s terminology above, could you say for us how you would rank the options MW, ~MW, M~W and ~M~W according to a time-biased and risk-averse person’s rational preferences over time? That might help with our discussion.

  16. @David: you’re quite right about the quibble. I think my confusion may have stemmed from thinking about the position that being an agent-over-time is rationally optional, but if X takes herself to be an agent-over-time, then X faces diachronic rational constraints. I’m starting to think that position is wrong, but I haven’t worked out in detail why it is (if indeed it is).
    That’s a very interesting suggestion that synchronic rationality is linked to the deliberative perspective. I take this to chime with the spirit of Caspar’s suggestion about accessible options. (Only immediately available options are accessible to the deliberative perspective.) I’m sympathetic to the idea that we can evaluate people in different ways, and I need to think more about your point. It makes me think of something that’s a little off-topic, but very interesting. In Judy Thomson’s 1985 trolley problem paper, she notes in passing that it seems wrong for you to kill one person presently to avoid having killed five in the past. (Imagine you are the doctor in the usual transplant case where you can harvest the organs of one person to save five, but you are responsible for the five needing the organs in the first place.) Thomson doesn’t discuss this issue in detail, but she suggests that the explanation of this is in terms of the fact that you have to choose between options that are available to you here-and-now. I suspect that it would be interesting to see what comes of applying your proposal about time and rationality to these cases involving time and ethics.

  17. @ Nate. That’s a really important point, and I’m embarrassed that I wasn’t explicit about it. Yes, I had assumed that the patient knows in advance that all pills will be offered. You say that “it’s not clear to me that the risk averse person would choose to take the first pill”. I’d been hoping to address this in sections V A and B. Do you have a suggestion in mind for why she would refuse the first pill? I hadn’t thought about the issue of one of time-bias and risk aversion overriding the other. It seems to me that these two don’t conflict in the case I mention, since it’s possible to act in a risk averse and time-biased way. Have I misunderstood your suggestion?

  18. @ Mark. I’d been trying to avoid taking a stand on how the risk aversion gets characterized formally (not least because I take these issues to be controversial, as you point out). One formalization that I had in mind was Lara Buchak’s (available on her website). I think Buchak’s risk aversion violates the continuity axiom, and my impression is that this axiom is more controversial than the independence axiom. In any event, I suspect that for any type of risk aversion that entails the thesis “Every Risk Reduction Has Its Price,” the argument goes through. I think that this assumption would hold with at least a lot of (and perhaps all?) diminishing marginal utility views of risk aversion.
    Just to be clear about fire insurance: I think I claimed that if only minimal risk aversion is rationally permissible, then it’s impermissible to pay money for fire insurance. This is a trivial claim given that I defined minimal risk aversion as being unwilling to pay to reduce risk. I take your point that it’s misleading to talk about fire insurance side by side with my discussion of risk aversion, without being clearer about the relationship between the two, and the extra factors you mention. But it seems to me that a great deal of the unpleasantness of uncertainty is derivative from the fact that we’re particularly worried about the uncertainty being resolved unfavorably, i.e., we’re particularly worried about the worst outcome.
    You say you can’t see “any reason to prefer an hour of pain for sure to a 50% chance of two hours of pain and a 50% chance of no pain.” Good for you… I think you’re not even minimally risk averse about pain, and so you’re safe from being turned into a pain pump!

  19. @ Jamie: I agree that a very fruitful place to go next is to think about dutch book strategies etc in general. There’s obviously a much bigger issue about diachronic rationality lurking here, and time-bias is only one small part. I’ve been thinking about this issue since writing the paper, but without much luck. I’m very interested if you have any thoughts to share!

  20. Uh-oh. Jamie’s Hm’s are getting longer and longer. I think that I do have a very different conception of rationality than what’s being assumed here and what’s been assumed by decision theorists generally. In any case, Jamie and Tom have helped me see that I’ve been confused on a number of points. So I’m just going to think about all of this more.

  21. Tom,
    Thanks for the response. Originally, I was thinking that if the agent knows the entire situation, then she would know that taking the first pill will cause her to become a pain pump, and, thus, she would refuse the pill. However, after reading the paper a second time, I think I’m inclined to agree with you. She would only refuse it if her other option was to keep the situation the way it was, but the way the puzzle is set up, this just isn’t an option.
    As for my question about overridingness, I meant that as more of a general question. In the case of the ice cream flavors the agent will presumably figure out what’s going on, and her preference not to be a money pump will override her preference for a different flavor of ice cream. I was just curious to know if it might be possible to apply the same sort of rule to these two preferences but not necessarily to your example. Again, I’m not familiar with the literature on either of these issues, so it may not be a question worth asking.
    Thanks again. This was a really fun article to think about.

  22. Hi Nate,
    That’s a great point about the money pump case. Schick offered an argument along similar lines, saying that someone with intransitive preferences will foresee the money pump, and hence refuse the first deal.
    It’s an interesting response, for sure. I think it works for some money pumps, but not others. Suppose Jones prefers chocolate to strawberry, strawberry to vanilla and vanilla to chocolate. Suppose she has a pint of each. A trader announces she will offer Jones the following series of deals. First, she will offer to give Jones a pint of chocolate in return for her pint of strawberry and 10 cents. Second, she will offer to give Jones a pint of strawberry for her pint of vanilla. Third, she will offer to give Jones a pint of vanilla for her pint of chocolate.
    I think that, surprisingly, no matter how much Jones wants to avoid being a money pump, and however clearly she foresees what will happen, she will accept each deal and end up 10 cents worse off. This is for similar reasons to the ones I give in the text: the decision about each trade is independent of her other decisions, and the dominant option for her at each point is to accept the trade.
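    (A small sketch of this cycle, with Jones’s cyclic preferences and the three deals hard-coded; purely illustrative:)

    # Jones prefers chocolate to strawberry, strawberry to vanilla, vanilla to chocolate.
    prefers = {("chocolate", "strawberry"), ("strawberry", "vanilla"), ("vanilla", "chocolate")}

    holdings = ["chocolate", "strawberry", "vanilla"]   # one pint of each
    cents = 0

    # Each deal: (flavor received, flavor given up, price in cents). Jones accepts
    # whenever the swap is an upgrade by her own lights, as in the story above.
    deals = [("chocolate", "strawberry", 10),
             ("strawberry", "vanilla", 0),
             ("vanilla", "chocolate", 0)]
    for get, give, price in deals:
        if (get, give) in prefers:
            holdings.remove(give)
            holdings.append(get)
            cents -= price

    print(sorted(holdings), cents)   # ['chocolate', 'strawberry', 'vanilla'] -10:
                                     # the same three pints as at the start, ten cents poorer.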
    Of course, on the conception of rationality that Caspar proposes at the end of his commentary, it turns out that Jones does not act irrationally. If Jones ends up with a sure loss of 10 cents, we cannot criticize her for failing to maintain the status quo, since doing so was not an option for her. (At least, that’s how I interpret his proposal… I may have misunderstood Caspar’s intention). Since I think Jones does act irrationally, I don’t accept Caspar’s proposal.

  23. Hi Tom,
    Thanks. I deserve no credit at all for paying attention to work as interesting as this.
    I like the three-pints example directly above. Indeed, according to the proposal I sketched out, Jones is not rationally criticizable for accepting all three deals. The refuse-all-deals option is never accessible to her. There is (supposing that she lacks the ability to self-bind) no time at which, if she were to decide to refuse all the deals, she would refuse all the deals.
    If I were minded to defend the proposal, I would say that we have to be careful when we talk about people ‘acting irrationally’. One way to be irrational is to act in a way that doesn’t make sense, given your preferences. Another way to be irrational is to have preferences that don’t make sense. Say you ‘act irrationally’ when you act in such a manner that you must be irrational in one of these two ways.
    In your example Jones acts irrationally. But it’s not because her actions don’t make sense, given her preferences. At all times she prefers
    Strawberry, Chocolate, Vanilla - 10c
    to
    Strawberry, Chocolate, Chocolate - 10c
    to
    Vanilla, Chocolate, Chocolate - 10c
    to
    Vanilla, Strawberry, Chocolate
    so each deal is an upgrade, in her eyes. It’s because her actions illustrate that her preferences at all times don’t make sense: you can’t successively upgrade to something worse.
    The question is whether the fact that your time-biased person acts-over-time to her own acknowledged disadvantage also illustrates that her preferences do not make sense. She will say that it does not: her preferences, at all times, are coherent and defensible.
    Enough of that. In any case, your paper really brings home that we need a way of sorting all the cases where people act-over-time to their own acknowledged disadvantage (the time-bias case, the intransitive preference case, the Satan’s Apple case, and other cases we have not talked about yet, like the Professor Procrastinator case and cases where you have fuzzy propositional attitudes). I suspect that it is best to focus on developing a principled, general account of what sorts of things get to be rational or irrational (atomic actions? composite actions? preferences? beliefs about what is best? patterns of deliberation? etc.) and why. The proper sorting will, hopefully, fall out of that account.
