Please join us to discuss Preston Greene & Meghan Sullivan’s “Against Time Bias,” published in the most recent issue of Ethics and available open access here. Caspar Hare has kindly contributed a critical précis, below the fold. It should be an exciting discussion!


Meghan and Preston’s paper is very interesting.

Say that you are hedonically near-biased when you would rather, other things being equal, that pain be in your far future than in your near future, and that pleasure be in your near future than in your far future. Say that you are hedonically future-biased when you would rather, other things being equal, that pain be in your past than in your future, and that pleasure be in your future than in your past.

Meghan and Preston make three claims. First, if you are hedonically near-biased then you are irrational. Second, if you are hedonically future-biased, but not hedonically near-biased, then you are irrational. Third, there is a plausible evolutionary explanation of why we are inclined to think otherwise.

I take the crux of the paper to be their argument for the second claim, so I will focus on that here.

In a 2011 paper in Ethics (symposium discussion here) Tom Dougherty argues that hedonic future bias is irrational. He says that if you are hedonically future-biased and risk-averse with respect to pleasure and pain then you are predictably exploitable (to put this carefully: there are situations in which, by acting on your preferences at every moment, you perform a sequence of actions that you at all times disprefer to another sequence of actions that you were in a position to take). Predictable exploitability is a mark of irrationality, says Tom.

Preston and Meghan agree that predictable exploitability is a mark of irrationality, but they worry that Tom has not shown that the problem lies with future-bias. Maybe the problem lies with risk aversion with respect to pleasure and pain, they say. They offer us a much simpler argument.

They start with some examples of behavior that they take to be at least prima facie ‘absurd’. I take these to be the basic kinds of example (I am recasting them a bit, so, Meghan and Preston, please correct me if this was not what you had in mind):

Putting off the Cookie

You like cookies. On Monday I say ‘I can give you just one cookie just one day this week. You choose the day’. You choose Sunday, the last day of the week. Why? You are not hedonically near-biased, so you do not now care about whether you get the cookie tomorrow or in six days’ time. But you know you are hedonically future-biased, so if you now choose any day other than Sunday then you will later regret your choice. You choose to avoid later regret.

Putting off the Cookie Infinitely Many Times

You and I are immortal, and we know it. You like cookies. Each day I come to you and say: ‘I can give you just one (eternally fresh) cookie just one day this afterlife. Do you want it now, or shall I come back tomorrow?’ Each day you put it off… and you never consume that cookie. Why? You are not hedonically near-biased, so, each day, you don’t care about whether you get it that day or much later. But each day you know that you are hedonically future-biased, so if you take the cookie that day you will later regret it. Each day you choose to avoid later regret.

Meager Returns

On Monday I say to you ‘Do you want two cookies on Tuesday or one on Thursday?’ You ask for the one cookie on Thursday. Why? You know you are hedonically future biased, so you know that, if you ask for the two cookies on Tuesday then you will regret it on Wednesday, while if you ask for the one cookie on Thursday then you will never regret it (on Friday you won’t care about the past). You choose to avoid later regret.

Their argument then goes like this (again, I am recasting it, so Meghan and Preston, please correct me if this is not what you had in mind):

P1       It is not consistent with your being rational that you behave in the ways above.

P2       If it is consistent with your being rational that you be future-biased, but not near-biased with respect to pleasure, then it is consistent with your being rational that you behave in the ways above.

C          It is not consistent with your being rational that you be future-biased, but not near-biased with respect to pleasure.

To get the discussion going, let me give my initial reaction to this argument.

I do not see any absurdity or irrationality in your behavior in the first case, Putting off the Cookie. We have stipulated that you aren’t near-biased, so why not put it off until Sunday – saving the best until last?

The second case, Putting off the Cookie Infinitely Many Times, is tricky. Note that (as Meghan and Preston acknowledge) we don’t need future-bias to generate prima facie ‘absurd’ behavior like this. Consider:

Putting off Ever More Cookies Infinitely Many Times

You are immortal, but not, this time, future-biased. You just want more cookies over the course of your life. Each day I come to you and say ‘I can give you just one batch of cookies just one day this afterlife. Do you want today’s batch, or do you want me to come back tomorrow with a batch twice as large?’ Each day you put it off… and you never get any cookies. Why? Because each day you know that, if you take the batch that day then you will later regret it. Each day you choose to avoid later regret.

In this case it is clear that the problem is not with your cookie-preferences (wanting more of them is harmless enough). The problem is with your efforts to avoid regret. Likewise, the problem in Putting off the Cookie Infinitely Many Times is with your efforts to avoid regret.

So Meghan and Preston’s argument really rests on the third type of case.

Why is P2 true of this case? Meghan and Preston appeal to a general principle:

Weak No Regrets:   If an agent has full and accurate information about the effects of the options available to her, then it is rationally permissible for her to avoid options she knows she will regret in favor of actions she knows she will never regret.

But this principle, as stated, does not seem quite right. Consider:

The Bomb

I am on a small island with a large, ticking bomb. If I stick around then I will be vaporized. If I swim to the mainland then I will live on and prosper. I very much want to live on and prosper, and, though the water is cold, I don’t presently care about my soon being cold. But I do know that I tend to have a visceral reaction to being cold. I know that, very briefly, while swimming to the mainland, I will regret not staying on the island.

If I swim to shore then I will briefly regret not sticking around. If I stick around then I will never regret not swimming to shore. But it is not rational for me to stick around.

What principle can Meghan and Preston appeal to instead? The Bomb case suggests something weaker:

Weaker No Regrets: If an agent has full and accurate information about the effects of the options available to her, then it is rationally permissible for her to avoid options she knows she will regret in favor of actions she knows she will never regret, so long as there is nothing else she presently cares about at stake.

In The Bomb there is something else you presently care about at stake – whether you will live on and prosper.

But that doesn’t give Meghan and Preston what they want. In Meager Returns there is something else you presently care about at stake – how many cookies you will get.

So maybe they could go with:

Weaker No Regrets II: If an agent has full and accurate information about the effects of the options available to her, then it is rationally permissible for her to avoid options she knows she will regret in favor of actions she knows she will never regret, so long as she presently regards her later regret as rational.

In The Bomb you don’t presently regard your later, cold-induced regret as rational. But in Meager Returns (so long as you are not akratically future-biased – future biased against your better judgment) you do.

Is this principle right? It is certainly controversial. Evidential decision theorists deny it (in Newcomb cases they know they will rationally regret one-boxing once they see what is inside the boxes). Rationalists about creation ethics deny it (in some non-identity cases we are rationally compelled to create one baby rather than another, even though we know that, once confronted with the baby, we will rationally regret it.)

One way to think about the question is to suppose a connection between rationality and reasons, and frame the question in terms of reasons. Does acknowledging that I will have strong reasons to want something always give me strong reasons to want it now (strong enough to make it rationally permissible for me to ignore strong reasons not to want it now – e.g. that I will get fewer cookies)?

Liz Harman (in “‘I’ll Be Glad I Did It’ Reasoning and the Significance of Future Desires,” Philosophical Perspectives 23 (2009): 177–199) and others have answered no to this question. To push things ahead, Meghan and Preston will need to make a case for answering yes. And to do this they will need to lean heavily on the rational significance of personal identity. Acknowledging that somebody else has strong reasons to want something does not always give me strong reasons to want it. Acknowledging that I will have strong reasons to want something does always give me strong reasons to want it. They will need to explain why this is.


24 Replies to “Ethics Discussions at PEA Soup: Preston Greene & Meghan Sullivan’s ‘Against Time Bias,’ with critical précis by Caspar Hare”

  1. Hi Preston and Meghan (if I may). I very much enjoyed your critique of future bias and defense of temporal neutrality (at least the conditional claim that if you reject near bias you should also reject future bias). In my case, you were preaching to the choir. I think you make a good case for future bias being both unstable and subject to regret in a way that makes it look irrational, and I think you are on to something in identifying the control heuristic as something that explains, without justifying, future bias and, hence, supports an error theory with respect to that bias. I have a few comments/questions.
    (1) When you formulate the temporal biases, both near and future, you restrict their operation to hedonic goods (pleasures) and bads (pains). I wondered what your rationale for this restriction is. On the one hand, one might think that principles of temporal distribution are structural claims about what attitude we should take to the temporal location of goods and bads, whatever these are, and so ought to be orthogonal to substantive issues about the content of the good. On the other hand, as Parfit concedes in his defense of future bias and as I argue in “Prospects for Temporal Neutrality,” the rationality of future bias is less clear in at least some cases involving non-hedonic goods and bads. For instance, I might strongly prefer that I suffer some minor embarrassment tonight than that it be true that I disgraced myself in a big way last night. This second thought already seems to reflect a limitation in the appeal of future bias. Perhaps the rationale for your restriction to hedonic goods is purely dialectical — you want to begin with cases in which future bias looks most plausible. If so, that invites the question whether you want to generalize the defense of neutrality to non-hedonic goods as well.
    (2) Your conclusion that future bias might be irrational because it leads to temporally unstable judgments about what is rational and, hence, to regret is similar to claims that I make in “Prospects” and that Tom Dougherty makes in some forthcoming work, though I think you and he make the case much better than I did. One way to think about why this claim that friends of temporal neutrality share is interesting is in relation to the contrast between constant and hyperbolic discounting. Whereas rational choice theory seems comfortable with constant discounting, some people read Ainslie and others as suggesting that hyperbolic discounting introduces dynamic inconsistency in an agent’s preferences that will fund regret and, hence, is uniquely irrational. But, as I see it, friends of temporal neutrality believe that future bias as such produces preference instability and regret that should appear irrational. If so, instability and regret attend any form of time bias and are not limited to hyperbolic discounting. Isn’t that a feature of your/our view? If so, that seems significant and worth highlighting.
    (3) There seems to be an asymmetry in the biases toward the near and the future — whereas it’s all too easy to act on the near-term bias and so there are many opportunities to avoid it, it seems difficult to act on future bias, inasmuch as that privileges having bad things in your past, which seems beyond one’s control. Insofar as this asymmetry is robust (but see below), it suggests that adopting a temporally neutral perspective involves regulating one’s *actions* in avoiding near-term bias but is more a matter of regulating one’s *attitudes* toward actual and possible life events in the case of avoiding future bias. I’m not sure what hangs on the difference between rationality in action and attitude, but it’s worth thinking about. Of course, both you and Dougherty want to deny a strong version of the asymmetry, according to which we could never act on or against future bias. In your case, the Scheduling Problem and the Meager Returns Problems illustrate this. Here’s another kind of example. If there’s someone I should compensate for sparing me some pain, should I compensate her differently for sparing me the same quantum of pain depending on whether this is something she has already done recently or is going to do in the near future? Presumably not. But even if the strong version of the asymmetry is mistaken, presumably some weaker version is plausible (hence your control heuristic). So I guess the residual question here is what, if anything, of interest is there to learn about rationality by noticing the ways in which different biases are differentially connected with action and attitude. Any thoughts?

  2. Great paper, Meghan & Preston!
    I was wondering what you thought about this response to your argument from a friend of future-bias: by itself, future-bias is rationally permissible, and by itself, aiming to avoid regrets is rationally permissible. But you’re rationally forbidden from putting the two together.
    To independently motivate this thought to some extent, we might think that (i) future-bias is a concern with the future; (ii) regret-avoidance is a concern with the past; and (iii) having some concerns with the future rationally constrains us from having certain concerns with the past. OK, (i)–(iii) don’t entail that you can’t put future-bias and regret-avoidance together, but they could get us in the mood of thinking that there’s something amiss with combining the two (even if each is fine by itself).
    What do you think?

  3. Thanks to Meghan and Preston for a great paper, and Caspar for the precis! I have two main thoughts:
    (1) Like Caspar, I thought reliance on the Weak No Regrets principle to be the main place where defenders of time-bias should resist M&P’s conclusion. As Caspar points out, Evidential Decision Theorists and certain rationalists in creation ethics will deny it. But these are controversial positions, to say the least. I think there are also cases in which it’s pretty clearly rationally impermissible to take the option that you know you won’t regret.
    Suppose you have three options: travel to Argentina (A), travel to Brazil (B), or sit around on the couch (C). You know that if you travel to Argentina, you’ll become the sort of person who slightly prefers travel to Brazil over travel to Argentina but prefers both over sitting on the couch. That is, if you take A, you’ll later slightly prefer B to A, but greatly prefer both A and B over C. If you take B, you’ll later slightly prefer A to B, but greatly prefer both to C. But if you take C, you’ll later be indifferent between A, B, and C – the plush cushions and endless stream of reality TV will lull you into a sort of ‘meh’ attitude. Let’s stipulate that you’ll be much happier if you take A or B than if you take C.
    Here, C is the only option you’re certain you won’t regret, but it also seems rationally impermissible to take it.
    That’s the structure of the case. It’s a counterexample to Weak No Regrets.
    It’s not clear that it’s also a counterexample to the modified principles Weaker No Regrets and Weaker No Regrets II (see Caspar’s formulation of those principles above). For it may be that this is a case in which nothing else is at stake; after all, your levels of happiness differ depending on whether you take A, B, or C, and they’re lowest if you take C. But as Caspar notes, Weaker No Regrets is too weak to get the anti-time-bias result that M&P are after.
    What about Weaker No Regrets II? You might think that here, at least some of your possible future preferences are irrational. For instance, perhaps the indifferent C-preferences are irrational, or perhaps preferring the trip you didn’t take reflects some irrational ‘grass is always greener’ mentality. That may be right in this particular case, but it seems like there will be SOME case with the structural features outlined above in which it doesn’t seem that your possible future preferences are irrational. (This will depend, of course, on how Humean about preferences we are.)
    (2) I also wanted to say something about the claim that near-bias is arbitrary. M&P write that, “near-biased agents make arbitrary distinctions among future experiences” (p. 952). But suppose our agent has read her Parfit and concluded that personal identity over time is not “what matters.” Instead, it’s facts about psychological connectedness and continuity that matter. Two time-slices are said to be psychologically connected to the extent that their mental states are directly connected (they share memories, beliefs, desires, etc, and there’s some causal explanation of why they share such mental states). And time-slices are said to be psychologically continuous iff they can be linked by a chain of intervening psychologically connected time-slices. That is, psychological continuity is the ancestral of psychological connectedness.
    Now, suppose our agent doesn’t care equally about all time-slices with whom she is psychologically continuous. Instead, she cares about time-slices with whom she is psychologically continuous to the extent that they are psychologically connected with her. Roughly, this means she discounts the experiences of psychologically continuous time-slices by how psychologically dissimilar they are to her present time-slice. I contend that this is a reasonable attitude. (After all, if you live long enough, there will be time-slices psychologically continuous with your present time-slice who bear absolutely no resemblance to you.)
    Caring about psychologically continuous time-slices to the extent that they are psychologically connected with you will mean you’re near-biased to some extent. After all, among the time-slices psychologically continuous with you, you’re more psychologically connected to those time-slices that exist in the relatively near future and the relatively near past (since your mental life changes gradually over time).
    Now, this sort of agent doesn’t care about time as such. Rather, she cares about something – psychological connectedness – which is closely correlated with time. But she’ll display much of the same behaviour as someone who is near-biased and cares about time as such, behaviour which M&P think is irrational.
    But our agent will reject Weak No Regrets. After all, she doesn’t care all that much about her far-future selves (since they’re only weakly psychologically connected with her present self), so who cares if they regret her current actions? She doesn’t care all that much about her far-future selves’ preferences for much the same reason she doesn’t care all that much about the preferences of strangers. Thus, she’ll defend her near-bias in the manner suggested by Parfit (Reasons and Persons, p. 187):
    “he may regret that in the past he had his bias towards the near. But this does not show that he must regret having this bias now. A similar claim applies to those who are self-interested. When a self-interested man pays the price imposed on him by the self-interested acts of others, he regrets the fact that these other people are self-interested. He regrets their bias in their own favour. But this does not lead him to regret this bias in himself.”
    Our Parfit-inspired agent doesn’t care equally about all her past and future time-slices for the same reason she doesn’t care equally about all people, namely that they aren’t all psychologically connected to the same degree with her present time-slice. Insofar as it’s rationally permissible not to be agent-neutral, this suggests to me that it’s also rationally permissible not to be time-neutral.

  4. Hi Caspar, David, Tom, Brian … and Everyone;
    Thanks for participating in this symposium, and for all of the interesting comments on the paper! Sorry for the slight delays in response too—we are in Hungary (Meghan) and Singapore (Preston), at the moment. We are optimistic that having people thinking about time bias in so many different time zones will give us some new insight….
    We have some replies to your comments. Preston is going to start off by talking a bit about the motivations for regret avoidance. Preston?

  5. General Motivations for Regret Avoidance
    Thank you for the précis, Caspar, we think it raises some intriguing points.
    Both Caspar and Brian point out interesting limitations in regret avoidance behavior. Let me first talk a little bit about how we’re thinking about the motivations for regret avoidance, before turning to their concerns.
    In much of the practical reasoning and regret literature, regret-avoidance is seen as the result of taking a more sophisticated perspective on one’s preferences through time. The contrast is to a completely myopic reasoner, who focuses exclusively on the preferences she has at a given moment when deciding what to do. By lacking a more sophisticated perspective, such myopic reasoners seem doomed to unfortunate patterns of behavior, such as that described by Bratman in The Second Pilsner. The point being made here, as we take it, is not that unfortunate patterns of behavior necessarily indicate irrationality or that regret-avoidance is an obvious rational requirement. Rather, the point is that myopic reasoners have an irrational perspective on their agency through time: unfortunate patterns of behavior are a symptom of this underlying problem, while regret avoidance is one of the most promising potential cures.
    Why is regret avoidance so promising? It’s not clear how, even in principle, the problem of temporary reversals of preference could be solved without some appeal to regret—that is, without some appeal to your expectation that in the future you will prefer that you had done otherwise now. So in so far as a sophisticated reasoner is interested in handling temporary reversals of preference in a better way than the myopic reasoner, regret avoidance will play a role.
    With this background, we can state our initial motivation for appealing to regret avoidance: any gains in practical reasoning that a sophisticated agent achieves by avoiding regret in classic cases are demolished when we look at the case of a future-biased near-neutral agent. In cases involving this type of agent, we argue, defenders of future bias must appeal to the idea that one is required to ignore future regret. And the problem is not with a specific formulation of the regret-avoidance strategy: we appeal to regret avoidance principles that are weaker than anything else on the market (weak no regrets and the weak meta-preference principle).

  6. Response to Caspar and Brian on Regret
    Nevertheless, there are certain future regret preferences that ought to be ignored: regret based on information loss and irrational regret. (A trickier case is that of expected future regret based on a shift in core values, as in Parfit’s Russian Nobleman. The reason we ignore these cases is that we do not believe that an agent is required to ignore regrets caused by a shift in values. Rather, without overriding ethical considerations in play, it is permissible for the agent to take such a shift into account and it is permissible for the agent not to). The motivation for ignoring irrational and under-informed regret is pretty clear. The agent might rightly regard her future perspective to be degraded in comparison to her current one, and thus not a good guide to anything that should matter to her. We see a pattern in the cases in which it is impermissible for an agent to avoid regret: these are almost always cases in which the agent views her future perspective as degraded. The notable exception is regret caused by future bias. Here, defenders of future bias will (hopefully) agree that it is irrational for the agent to act in these ways (but at the same time continue to endorse future-biased preferences.) But, given the striking pattern, we suggest that a more likely culprit is future-biased preference, rather than regret avoidance.
    Of course, as Brian suggests, if it can be shown that there are other preferences, not generated by time bias and obviously rational, that cause the same sorts of problems for sophisticated agents aiming to avoid regret, then that would lower our confidence in the conclusion.
    As we briefly note in the essay, discounting the far future on the basis of diminished psychological continuity is indeed different from discounting the far future on the basis of temporal distance. However, Brian is right to point out that we cannot so simply set aside psychological-continuity discounting because it has implications for weak no regrets. The problem stems from confusion over what it means for the agent herself to regret her choice at some point in the future, when she believes that her psychological connectedness will diminish over time. We suspect that for proponents of this view, a full theory of regret avoidance will weigh future regret by psychological connectedness. The importance that an agent places on her preferences through time would thus be sensitive to psychological connectedness (and not time bias).

  7. Personal Identity:
    Several commentators brought up issues with personal identity. In the paper, we agree with David that a major component of the philosophical case against near bias relies on the compensation assumption—the assumption that a far-sighted agent makes present sacrifices because she thinks she (rather than someone else) will be eventually compensated for those sacrifices. (p.951) I (Meghan) don’t share Parfit’s skepticism about our ability to numerically persist through change. I didn’t argue for it in this paper, but I think it should be a starting assumption for a theory of rational planning that an agent believes she will numerically persist into the future, undergo changes, and that she ought to plan for those anticipated changes. But the connection between the metaphysics of personal identity and rational planning is complicated, as you all know, and well beyond the scope of this paper. For the present discussion, I think we should just note that there are many cases of near and future bias where the time scales are not long enough to raise plausible worries about identity being destroyed. When I delay getting my physical by a year (rather than scheduling it for next week), it isn’t because I nihilistically think I won’t survive to next year. When I am glad my surgery occurred last week rather than next week, it isn’t because I think that surgery happened to some other poor unfortunate soul. Rational defenses of time biases based on concerns about numerical identity are going to be of pretty limited use. And if you buy into the usual philosophical criticism of near bias, you likely don’t have Parfitian skepticism about personal identity.

  8. Weaker No Regrets II:
    Caspar suggests we replace Weak No Regrets with Weaker No Regrets II. Preston and I are both sympathetic with Weaker No Regrets II. Recall what we say in footnote 21 (p.958): “Perhaps the only examples that have the potential to give weak no regrets trouble are situations in which an agent anticipates experiencing irrational regret. For example, what if an agent is faced with the choice between killing herself or taking a ‘regret pill,’ which causes those who take it to regret doing so? Is the agent rationally required to take the pill? We don’t think this is clear-cut. However, if the agent is so required, then weak no regrets seems to give the wrong result. In light of the possibility of this sort of case, we might consider further weakening weak no regrets to only apply to cases that do not feature anticipated irrationality. In any event, all of the examples we discuss in this essay do not feature anticipated irrationality.”

  9. Newcomb Cases and Creation Ethics:
    Alright, but what are we going to say about how regret avoidance relates to the Newcomb problem and puzzles in creation ethics? Recall that the original Weak No Regrets principle required an agent to have full and accurate information about each state of affairs over which her preferences range. Preston points out that evidential decision theorists only one-box (and then regret their choice) if they do not know how much money is in the opaque box.
    The creation puzzles are also deeply interesting. I (Meghan) am tempted to say it is rationally permissible to create any of the children in the paradigm cases, so long as you anticipate never regretting your choice (and so long as the choice isn’t intrinsically immoral or irrational). That last caveat takes us into territory of more substantive issues in creation ethics, which are definitely beyond the scope of our project here.

  10. Response to Tom
    It’s an interesting argument. If there is a problem, I think it’s with premise 2. When agents aim to avoid regret, they’re best described as having a concern with the future, rather than the past. As we say in our “general motivations” above, we take regret avoidance to be a consequence of an agent taking a less-myopic view on their preference structure through time, and regret avoidance is generated by a future-looking perspective. For example, when Ann refuses the second pilsner, it’s because she can see how her preferences are likely to evolve in the future; she doesn’t care what her preferences were like in the past.
    In order to illustrate what a past-looking perspective on one’s preferences would look like, let me introduce an interesting question posed to me by Hallie Liberto. Consider the reverse of weak no regrets: it is rationally permissible for an agent to avoid options she knows she has at some point preferred that she wouldn’t take, in favor of options that she knows she has never preferred she wouldn’t take. Hallie’s question is: what reason is there to endorse weak no regrets but reject this second principle, especially for fans of temporal neutrality? The difference between the two principles seems to concern exactly what we’re discussing: consistency between one’s choice and one’s future preferences versus consistency between one’s choice and one’s past preferences.
    I have some thoughts on how to answer this question, but maybe it’s best if I hold off going into it here.

  11. Responses to David
    David raises three interesting questions. First, his (1) and (3): how does this debate about time bias relate to non-hedonic values? David wonders why we focus primarily on hedonic time-biases in the paper. Shameless Paper Plug: Preston and I have a manuscript of a new essay tackling some issues in the debate over non-hedonic time biases, which we hope to submit soon! But in brief: we take it to be pretty clear that most of us discount our own future and past pleasures and pains. In the new paper, we offer some arguments for doubting that some supposed examples of non-hedonic temporal discounting really are evidence that we are time-biased in our non-hedonic valuing. So while we think the interesting debate about hedonic time biases concerns their normative status (rather than, say, their psychological prevalence), we think it is an open question whether we have any robust forms of time bias in our non-hedonic valuing. In the new paper, we are working on this open question. Stay tuned…
    Tom Dougherty also has a paper in progress—Shameless Plug for Tom!—addressing this issue David raises in his third point: If there’s someone I should compensate for sparing me some pain, should I compensate her differently for sparing me the same quantum of pain depending on whether this is something she has already done recently or is going to do in the near future? We won’t rehash his arguments here, but we’ll recommend his paper and add that we agree with his conclusion there: when it comes to making tradeoffs involving both hedonic and non-hedonic values, it is far less clear that it is rational to be time-biased.
    Now to David’s (2). David asks what we make of the debate about whether some forms of future discounting are rationally permissible (i.e. constant) and others are not (i.e. hyperbolic). We agree with you that what we call the “philosophical criticism of near bias” judges all forms of distant future discounting as irrational, regardless of what discount rate they apply. According to the philosophical criticism, once an agent has taken the relevant probabilities and utilities into account, she shouldn’t engage in further discounting.

  12. Hi Everyone;
    Because of the time differences, we just dropped a bunch of replies on you at once. We’ll be keeping an eye on the thread and welcome more discussion on any of this.
    -Meghan and Preston

  13. Hi Meghan and Preston,
    I really enjoyed your paper and, for what it’s worth, am inclined to agree that future bias is irrational. At the same time, I am wondering whether a defender of non-absolute future bias can find a principled way to reject premise (2) of your argument. Here’s the reconstruction you give on pp. 964-965.
    (1) It is permissible to avoid certain regret.
    (2) If one is future biased and chooses to avoid certain regret, then one acts in
    ways that lead to the scheduling and meager returns problems.
    (3) It is irrational to act in such ways.
    (4) Therefore, future bias is irrational.
    One strategy for the defender of the rationality of future bias is to “hold that one can aim to avoid regret except in cases in which doing so leads to the schedule or meager returns problems” (p. 965). But you suggest that this option is ad hoc. Now, I am thinking that a defender of non-absolute future bias could find a principled way to allow one to be future biased and act in ways to avoid certain regret, except in cases that lead to meager return problems. They could adopt the following principle.
    Restricted Future Bias (RFB): It is permissible to act in accordance with one’s future-biased preferences except in cases where doing so reduces one’s total well-being.
    This principle does not seem ad hoc to me and could allow the defender of non-absolute future bias to avoid the meager returns problem. It is irrational to wait to have one cookie in the future, rather than three now, because you will be better off if you eat three cookies now (assuming we’ve accounted for the displeasurable experience of regret one will face).
    The scheduling problem is a bit trickier. I don’t see Jack’s actions in the initial Fine Dining case as obviously irrational. The defender of non-absolute future bias might agree that Jack should wait until the last day and not see this as a bullet to bite. I am not sure whether RFB can help in the version where Jack is immortal. If he never eats the meal, then presumably his total well-being will be less than it would be if he ate it at some point. So, perhaps RFB would entail that in this version of the case, Jack should just pick some day to eat the meal. If, however, he really would be better off if he is always anticipating the meal (rather than regretting eating it in the past), then perhaps Jack really should always put off eating it.

  14. Response to Travis:
    Travis proposes that defenders of future bias endorse Restricted Future Bias (RFB): It is permissible to act in accordance with one’s future-biased preferences except in cases where doing so reduces one’s total well-being.
    It is an interesting proposal. Compare RFB with our proposal—complete temporal neutrality. As we mention in the paper, there are two key claims of the philosophical case against near bias. (1) Near bias is in a certain sense arbitrary—near-biased agents arbitrarily discriminate between their future stages. (Rawls and Sidgwick press this too.) And (2) being near-biased can lead to avoidable deductions in well-being. One of our projects is to develop the analogous argument against future bias, in particular by developing (2) and arguing that past discounting can lead to deductions in well-being.
    RFB also answers (2), but not (1). For these reasons, we think it is ad hoc (and insufficient). Now perhaps you think the past is so different from the future that it is *not* arbitrary to distinguish past and future stages of your life. In “Against Time Bias” we mention but don’t really get into the debate about whether metaphysical differences between the past and future give us a good reason for discriminating between them. In a recent (still unpublished) paper, Peter Finochiarro and I (Meghan) take on this tricky question, and conclude that the metaphysical differences shouldn’t matter from a rational standpoint. (More shameless plugging.) But, in brief, I think the viability of RFB turns on whether you think there is also a rational problem with agents who arbitrarily favor certain temporal stages of their life over others.

  15. Thanks to Preston Greene and Meghan Sullivan for their interesting paper, and for giving us the opportunity to think about their question.
    You are going to have a good dinner and you could choose to have it on Monday or on Friday. If you choose Monday, you will regret for the rest of the week that you did not choose Friday. G&S argue that, given this regret, you should choose Friday, and that this may be so even if the dinner available on Friday is worse than the one available on Monday. They claim, however, that it is not the case that you should choose Friday, from which it follows that you should not have this regret. Indeed, they argue the regret is irrational.
    What sort of a state is your regret? Suppose first it is a belief or judgement. Suppose it is the belief that you should have chosen Friday rather than Monday. Whether or not this belief is true, it is not itself a good or a bad feature of your life, so it does not itself make it the case that you should choose Friday. It might be that it should figure as evidence when you make your choice between Monday and Friday. You might think that the fact that, if you choose Monday, you will later believe your choice is wrong constitutes evidence that it actually is wrong. This seems odd, but maybe there is an argument to be developed here. However, G&S do not develop it. They are not concerned with this sort of regret.
    Suppose, second, that your regret is a bad feeling. Let us assume that a bad feeling is a bad feature of our lives, so that a life with fewer bad feelings is a better life, other things being equal. Your life will therefore be better if you choose Friday. So you should choose it. Even if the dinner available on Friday is not as good as the one available on Monday, the badness of your feelings through the week may be enough to outweigh the difference. In that case, too, you should choose Friday.
    Given your feeling of regret, there is nothing irrational about choosing Friday. Is your feeling of regret itself irrational? The fact that you have this feeling makes your life worse than it otherwise would be; you would be better off without it. Similarly, the pain amputees sometimes feel in ‘phantom limbs’ makes their lives worse than they otherwise would be; they would be better off without it. If the feeling of regret or pain in phantom limbs could be got rid of by some sort of therapy, it would be a good idea to get rid of it. But the mere fact that these feelings make your life worse is not good evidence that they are irrational. We would need more argument than that. This argument too is not developed by G&S. They are not concerned with this sort of regret either.
    They take regret to be neither an affective nor a cognitive state but a preference – ‘viz. preferring that one had done otherwise’. Preferring that one had done otherwise could itself be understood as a cognitive state – viz. believing that one should have done otherwise. It could alternatively be understood as a feeling – viz. feeling bad about having not done otherwise. But we are not supposed to understand the preference in either of these ways. The preference is some other state. Perhaps it is a disposition to choose that has a counterfactual content: a disposition to make the opposite choice were that possible.
    When planning your dinner, why should you take any notice of a preference understood this way? G&S evidently accept some sort of a preference-satisfaction account of your good. Preference-satisfaction theories come in different versions. There is a narrow-scope version, something like: if you prefer A to B, then A is better for you than B. But we are dealing with a case where your choice affects which preferences you have, and this narrow-scope version is not enough. Instead G&S accept a wide-scope version that implies something like: other things being equal, having a preference that is not satisfied is worse for you than not having it. Their own formulation, the ‘Meta-preference principle’, does not have an ‘other things being equal’ clause, but I think this is a mistake. It implies that it is permissible to prefer a passionless life where you want little and achieve little to an enterprising life where you want a lot and achieve most, but not quite all, of what you want. My own qualified version will serve their purposes just as well.
    There remains the question of why we should accept this qualified wide-scope preference-satisfaction theory. If the preference – the regret in this case – was associated with a bad feeling, there would be an easy answer. But we have set aside that sort of regret. Some philosophers think that a person’s good actually consists in the satisfaction of her preferences. But it’s not clear how that idea should be applied to the wide-scope version of the preference-satisfaction theory. Suppose a person has all her preferences satisfied, and then she acquires a new preference that is not satisfied. Does this acquisition make her life worse? Or is it neutral, making the life neither better nor worse, whereas acquiring a new preference that is satisfied makes it better? There are different views about this. At any rate G&S need to justify their version of the preference-satisfaction theory to complete their argument.
    The idea that a preference is a disposition to choose opens up a quite different way of arguing that it is irrational. If your dispositions to act change over time, you are in danger of a particular sort of incoherence in your life. You may do something that later you will undo. Moreover, if you know how your dispositions will change, you may do something knowing that later you will undo it. This seems an incoherent way to live. To avoid this sort of incoherence, you may have to take up a strategic attitude towards your future self. You may have to treat your future self as a player in a strategic game with your present self. This too seems a sort of incoherence.
    The disposition that constitutes regret may not actually lead to incoherence of this sort. It has a counterfactual content. It is a disposition to undo your previous choice, but you do not in fact have the opportunity to do that. Tom Dougherty, in the paper of his cited by G&S, tried to create an actual incoherence out of future bias. That seems to me a promising route to take, and it does not depend on any preference-satisfaction theory of good.
    Dougherty assumes in his example that his subject is risk-averse about the length of time she suffers pain. G&S reject his argument on the ground that one should not be risk-averse about the length of time one suffers pain. I do not think they are right about that, but even if they are, the example could be changed. So I still think Dougherty’s approach is promising.

  16. Many thanks to Greene and Sullivan. While I’m out of my league, could we extract the agent from these difficulties with the following: Assume the agent has hedonic functions such that u(t) = u/((|t|+1)*d) for expected and remembered pleasure and pain, where t = number of time periods away from maximum pleasure/pain, d = discount coefficient, u = utils of eating the cookie. Also assume that the agent cannot feel < 1 util. Second, assume the agent has an identity-persistence probability function 1/((t*i)+1).
    Now, offer agent a cookie, a pleasure, they value at 100 utils that they may consume any day, t, 0 - 6 (Mon - Sun). Imagine their discount coefficients for expected-remembered pleasure are (1,2). If they consume cookie on day 4 they receive on days 0 - 3 the following utils from expected-pleasure: 20, 25, 33, 50; on day 4 they eat cookie for 100 utils; on day 5 they receive 25 utils and so forth.
    If the agent knew with certainty that their identity would persist until at least day 56, they should then consume the cookie on Sun, day 6, as this has the largest integral of utils * persistence-probability (looks like future-bias). However, if they were certain to persist until day 66, they are indifferent to eating the cookie on any day, 6 -16 (does not look like strict future-bias). This is thus mini-immortality, for if the agent knows they are immortal (persistence function = 1), they are indifferent as to which day they eat the cookie (not strict future-bias) as the value of the integral is always the same as long as it is not so soon that they lose expected-pleasure utils (true immortals will wait until at least day 100 to eat the cookie) (again, looks like future-bias).
    However, if i = 0.1 (the agent strongly lacks identity-persistence certainty), for example, then the value of the integral is maximized by eating the cookie on day 4 over day 6. Of course, persistence-probability does not need to be a smooth curve with respect to time (in the way hedonic functions likely are), as imagine that Two-Face visits you on day 6 with certainty and you die with 50% probability, when do you eat the cookie?
    These two functions may help see why real agents are rationally hedonically near-biased (their identity persistence function, discount coefficients, and their inability to feel <1 util) and why they might be hedonically future-biased under strict constraints ((j,k)-pleasure (preferring cookies on day 6 over day 4 depending on "i"; immortals waiting until at least day 100)). It seems, prima facie, that one could easily add an (m,n)-pain/regret function to this integral, thus, eat cookie on day = max(integral: [(pleasure-function minus pain/regret-function) * identity-persistence function]). Thus, perhaps the parameters of our functions (i, j,k, m,n) forbid absurd conclusions and show why strict future-bias is plausible but too strong.
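
A minimal Python sketch of the hedonic part of the model in the comment above may help make the arithmetic concrete. The function names, the 57-day horizon, and treating sub-1-util values as nothing felt are illustrative assumptions based on the comment, not anything from the paper; the identity-persistence weighting is set aside.

```python
# Sketch of the commenter's hedonic function u(t) = u / ((|t| + 1) * d),
# with anticipation discounted by d_expect and memory by d_remember.

def hedonic_value(u, t, d):
    """Pleasure felt t periods before or after the event."""
    v = u / ((abs(t) + 1) * d)
    return v if v >= 1 else 0.0  # the agent "cannot feel < 1 util"

def total_utils(consume_day, horizon, u=100, d_expect=1, d_remember=2):
    """Total felt utils over days 0..horizon-1 if the cookie is eaten on consume_day."""
    total = 0.0
    for day in range(horizon):
        if day < consume_day:        # anticipation
            total += hedonic_value(u, consume_day - day, d_expect)
        elif day == consume_day:     # eating the cookie
            total += u
        else:                        # memory
            total += hedonic_value(u, day - consume_day, d_remember)
    return total

# Days 0-3 before a day-4 cookie yield 20, 25, 33.3, and 50 utils of anticipation,
# matching the figures in the comment; with persistence guaranteed through day 56,
# waiting until day 6 gives the largest total.
for day in range(7):
    print(day, round(total_utils(day, horizon=57), 1))
```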

  17. Yes, my example of temporally unstable pricing of invariant goods (e.g. pricing how much I owe in compensation to someone for sparing me an invariant quantum of pain depending on whether the pain is just past or just prospective) is structurally similar to a series of excellent examples Tom develops in a very interesting forthcoming paper against future bias. Since it’s now been referenced three times, perhaps Tom could provide the publication details of his paper, which I imagine readers of Preston’s and Meghan’s paper would be interested in.

  18. Thanks for the responses, Meghan and Preston. A few thoughts:
    (1) Just a quick note on Meghan’s response to personal identity worries:
    I take the Parfitian idea not to be that we cannot/do not numerically persist through time, but rather that personal identity over time isn’t “what matters” for things like morality and rationality. You can hold that personal identity over time isn’t what matters without being skeptical about the coherence of the notion of personal identity over time.
    Moreover, to get a case where someone rationally acts in a near-biased sort of way, we needn’t hold that this Parfitian thesis is true, but only that it could be rationally believed. Someone who rationally believes that identity isn’t what matters, and instead discounts future experiences by that future time-slice’s degree of psychological connectedness, will act in the ways M&P deem irrational, but this person’s discounting isn’t arbitrary in the way that Rawls, Sidgwick, and M&P suggest.
    (2) Meghan says that they’re conceiving of rational agents as sophisticated choosers rather than myopic ones. While I think that’s the right way to think about practical rationality, I don’t think sophisticated choice motivates any sort of No Regrets principle. The idea behind sophisticated choice, I take it, is that you’re aiming to satisfy your present preferences, but you don’t (myopically) assume that your future selves will share your present preferences or act rationally. So in figuring out how best to satisfy your present preferences, you take into account the ways in which your future selves might scuttle your plans. This sort of sophisticated chooser doesn’t necessarily care about satisfying her future preferences; she just cares about how the fact that her later selves will have those preferences matters for how best to attempt to satisfy her present preferences.
    (3) I think it’s worth pressing the point about agent-neutrality a bit further. As M&P note, just as time-neutrality could be motivated by arbitrariness considerations, so could agent-neutrality. I would add that the diachronic incoherence considerations that M&P appeal to in support of time-neutrality can also be used to support agent-neutrality. The cases of diachronic incoherence are essentially collective action problems, with your various time-slices as the members of the collective (they are like Prisoner’s Dilemmas, with your later time-slices as the other prisoners). In the temporal case, your various time-slices fall prey to a collective action problem in virtue of having different interests (if you’re not time-neutral), and M&P argue that this is irrational. But in the case of groups, different people can fall prey to a collective action problem in virtue of having different interests (if they’re not agent-neutral). Why isn’t this likewise irrational?
    M&P appeal to David’s notion of compensation as explaining why we should go in for time-neutrality but not agent-neutrality. But I’m not convinced that it does the work needed. Suppose we have someone who’s time-biased, and hence doesn’t care equally about all her past and future selves. This person doesn’t want to sacrifice a moderate pleasure now for the sake of more pleasure later. The philosophers accuse her of being irrational. She responds that she doesn’t care as much about her future selves as about her present self (and she notes that likewise, she doesn’t care about strangers as much as herself). Then the philosophers say to her, “But if you make this small sacrifice, then YOU are the one who gets compensated later for your present sacrifice.” Our agent could quite rightly be unmoved: “Sure, it’s me (i.e. my future self) who gets compensated for my present sacrifice, but as I’ve explained, I don’t care as much about my future selves as I do about my present self. The fact that my future self is (part of) ME doesn’t change that.” So I think more needs to be said to motivate the claim that it’s a requirement of rationality that you care equally about all of your past and future selves, but not that you care equally about all other agents.
    Thoughts?

  19. Response to Broome and Brink
    David pointed out in his first post, section 3, that near bias seems to have a direct effect on our actions, while future bias seems to only have a direct effect on our attitudes. I agree that this asymmetry exists, and I think it is caused by the stark asymmetry in the effects our actions have on the future vs the past. Nevertheless, what Meghan and I point out is that future bias can indeed have an effect on our actions, by changing attitudes that plug into principles of practical reasoning that directly affect action. The principle that Tom uses in his excellent “Whether to Prefer Pain to Pass” concerns risk, while the principle we appeal to in our essay concerns regret. In each instance—risk aversion and regret aversion—we see future bias modifying attitudes that plug into these principles in a way that directly affects an agent’s actions and causes practical problems.
    So I must insist that the difference between Tom’s project and ours is not as great as John Broome makes it out to be in his otherwise quite illuminating post (much of the post I interpret as a call for Meghan and me, and other writers on time bias, to think harder about what we mean when we say things like “I prefer this future surgery was past” or “I prefer this past surgery was future,” which is a point well taken). Perhaps a principle that licenses risk aversion to pains strikes you as more plausible than our weak no regrets, but what is at stake is simply the plausibility of the principle, and not overall argumentative structure.
    I’d like to say something briefly about why I am slightly suspicious of risk aversion to pain, because this didn’t make the paper. As Meghan and I point out in the paper, we are onboard with the idea that for an agent money might have diminishing marginal utility. Clearly, if we make x big enough, winning $2x might not be twice as good as winning $x. We are, however, much less certain why pain would have diminishing marginal disutility; i.e., why the agent would view 2x units of pain as not twice as bad as x units of pain, when all else is equal. Now, the way that I can best get my head around this is to adopt a laissez-faire perspective on rational preference: “an agent’s preferences can be whatever, whatever problems that might cause, as long as they’re consistent!” This is what we call “the economic view.” I don’t find this view implausible, but in order to reject time bias we must abandon it. And I find it hard to see a principled reason to accept it in this instance but abandon it in the case of time bias.

  20. Response to Brian (2) and (3)
    Since it has a specific and somewhat well-known meaning, I shouldn’t have used the term “sophisticated choice.” I meant to be speaking generally about agents who adopt a perspective that allows them to ignore temporary reversals of preference. I believe Bratman refers to this as moving away from a “narrow instrumental conception” of rationality, though I don’t like that wording.
    I found your description of philosophers who harangue time-biased agents delightful! On a serious note, all we hoped to accomplish by appealing to compensation is to provide a reason to accept temporal neutrality without accepting agent neutrality. As you point out, the compensation argument alone may not preclude one from rejecting both.

  21. Apologies to Joseph D, above, whose comment appeared really late; it had gone to the spam folder for some reason. I’ll keep checking the spam folder, but do send me an email (hpaakkunATsyrDOTedu) if you notice that your comment isn’t appearing after a while.

  22. Some thoughts about the relationships among personal identity, compensation and temporal neutrality. There is no simple narrative on which we get the combination of temporal neutrality and agent bias from personal identity alone. Here are two thoughts about how they might be related.
    (1) Prudence can be represented as the combination of temporal neutrality and agent bias. One rationale for prudence appeals to realism about personal identity and the compensation principle (the claim that compensation is necessary and sufficient for justifying sacrifice). On this rationale, realism about personal identity and the compensation principle are individually necessary and jointly sufficient for prudence. These two claims justify intrapersonal temporal neutrality: assuming the benefit is greater than the cost, intrapersonal sacrifice is compensated, because benefactor and beneficiary are the same. The two claims do not support agent neutrality: interpersonal compensation is not automatic, because benefactor and beneficiary are distinct people. This justification of intrapersonal temporal neutrality assumes only that compensation is sufficient for justifying sacrifice (not that it is necessary). Of course, an agent may not care about compensation, but the rationale appeals to the fact of compensation, rather than the agent’s desires. Of course, this rationale appeals to substantive, and not purely formal, principles of rationality. It implies that temporal bias is one of the many ways in which an agent’s desires might be irrational. But it depends on a substantive claim about compensation that could be contested. Here, the claims about compensation (the sufficiency claim) and realism about personal identity each play a role in rationalizing intrapersonal temporal neutrality.
    (2) One might question the independence of the compensation principle and realism about personal identity, assumed in (1). One might think that the internalization of something like the compensation principle is an ingredient in personal identity. This is a complicated issue with several moving parts, so I will have to be compressed and impressionistic here. Locke, Butler, and Reid thought that special concern for one’s own future presupposes personal identity inasmuch as we seem to have special concern for our own selves, including our future selves. But it is open to those with broadly psychological reductionist sympathies about personal identity to think that in some important respects this has things backward: it is not so much that special concern presupposes personal identity as it is that personal identity presupposes special concern. Jennifer Whiting’s excellent article “Friends and Future Selves” is the best statement of this possibility. On her view, just as special concern for one’s friends is an ingredient in the sort of interpersonal association that is friendship, so too special concern for one’s own future is an ingredient in the sort of intrapersonal unity that constitutes personal identity (in nonbranching cases). We might hijack this idea for present purposes and suggest that the differential attitudes toward the temporal and personal location of benefits and harms that the compensation principle recommends are themselves partially constitutive of the sort of intrapersonal unity that constitutes personal identity. On this view, it is my differential readiness to make present sacrifices for greater future benefits in my own case that establishes the sort of diachronic unity that is constitutive of personal identity. Think about the way in which planning and normal narrative structure within a life seem to presuppose differential concern for future stages in one’s own life. Of course, compensation, like memory, presupposes identity. So we can’t appeal to compensation to explain personal identity without circularity, any more than we can explain personal identity in terms of memory without circularity. But just as the Lockean can appeal to quasi-memory, which presupposes continuity rather than identity, to explain personal identity, so too the psychological reductionist can appeal to quasi-compensation, which presupposes continuity rather than identity, to explain personal identity. If so, there is a sense in which personal identity presupposes (q-)compensation. Of course, local suspension of the compensation principle is possible in which an agent wonders whether she should care about compensation in a particular case or range of cases. But the thought on offer is that global suspension of the compensation principle is either impossible or at least would be destructive of personal identity. But then one might argue that intrapersonal temporal neutrality is recommended by the very principle that is at least partially constitutive of personal identity. Here one tries to ground some demands of temporal neutrality in personal identity but only because personal identity itself involves a commitment to the idea of compensation. This is obviously an ambitious set of claims, but ones worth taking seriously, or so I think.

  23. Thanks Preston and David for the thoughts re: compensation.
    So it looks like the compensation principle has the consequence that it provides a sufficient but not decisive reason to be time-neutral, unless one has decisive reason to care about facts about compensation. If an agent could rationally not care about compensation, then (barring other arguments) she could rationally be time-biased. Right?
    That would mean that in order to have an argument that it’s rationally required (not just permissible) to be time-neutral, we’d need an argument that it’s rationally required that one care about compensation. That seems like a pretty strong claim.
    David’s constitutivist-style defense of the compensation principle in (2) is really interesting. It would suggest that it’s impossible to care not at all about compensation, and that it’s impossible to be maximally time-biased (caring only about your present self and not at all about your future selves), for then the future time-slices in question wouldn’t count as part of you. I think that’s a really interesting sort of position. Just as a bit of navel-gazing, I’ve been defending a view on which personal identity over time plays no fundamental role in the theory of rationality, and that principle also entails that there can be no diachronic norms (like, say, a belief-update rule like Conditionalization). But you might turn the tables and say that conforming with such diachronic norms is partly constitutive of personal identity over time, so it’s impossible to violate those principles too much and still count as the same agent over time. But provided you satisfy the relevant identity-constituting norms enough of the time, you count as a temporally extended agent who is then rationally obliged to satisfy those norms, so that if you satisfy those norms sometimes-but-not-always, then you count as not fully rational. Definitely an interesting position worth pursuing, though I don’t have too much more on it at present. Thanks!
