We are pleased to present the next installment of PEA Soup's collaboration with Ethics, in which we host a discussion of one article from an issue of the journal. The article selected from Volume 121, Issue 3 is Tom Dougherty's "On Whether to Prefer Pain to Pass" (open access here). We are very grateful that Caspar Hare has agreed to provide the critical precis of Tom's article, and his commentary begins below the fold.
Tom’s paper is superb. I recommend that you go ahead and read it.
As you will see, his topic is future-bias with respect to pain – the sort of attitude that will lead you, for example, to prefer, other things being equal, that you experienced two hours of pain yesterday rather than that you will experience one hour of pain tomorrow.
Conventional wisdom has it that this attitude is not action-guiding in realistic contexts. The characteristic preferences are between states of affairs with different past components (e.g. a state of affairs in which you suffered 2 hours of pain yesterday and a state of affairs in which you suffered no pain yesterday). But, outside of science fiction, we are never in a position to bring about states of affairs with different past components. Outside of science fiction, we cannot change the past.
Tom’s first contribution is to show us that conventional wisdom has it wrong. If you are risk-averse then you will act to protect yourself against the possibility of REALLY BAD things happening. But in some situations in which you are in a position to protect yourself against the possibility of REALLY BAD things happening, what things you consider REALLY BAD will depend on whether you are future-biased. In these situations, if you are risk-averse and future-biased then you will act one way, if you are risk-averse and future-unbiased then you will act another way.
Tom’s second contribution is to show us that sometimes, by acting on future-bias and risk-aversion, you will work to your own acknowledged disadvantage. He imagines a situation with this general form: You have options A and B at t1, options C and D at t2. If you are future-biased and risk-averse then, at t1, you prefer AC to BC, AD to BD (in something closer to English: you would rather take option A, irrespective of what option you will later take). And, if you are future-biased and risk-averse then, at t2, you prefer AC to AD, BC to BD (you would rather that you take option C, irrespective of what option you previously took). But, throughout, you prefer BD to AC (throughout, you would rather that you take B-and-then-D, than that you take A-and-then-C). In this situation, if you take A at t1 and C at t2, then you are acting on your future-bias at t1 and t2, but working to your own acknowledged disadvantage by taking A-and-then-C rather than B-and-then-D.
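For readers who like to see the structure laid bare, here is a small sketch in Python. The numeric "utilities" are hypothetical – chosen only to realise the stated pattern of preferences, not taken from Tom's paper – but they show how acting step-by-step on the dominant option at each time yields A-and-then-C even though B-and-then-D is preferred throughout.

```python
# Illustrative sketch of the two-stage structure in Tom's case.
# The numbers below are hypothetical: they are chosen only to satisfy
# the stated preference pattern, not drawn from the paper itself.

# Rankings over complete plans (first element: choice at t1, second: at t2).
prefs_t1 = {("A", "D"): 4, ("B", "D"): 3, ("A", "C"): 2, ("B", "C"): 1}
prefs_t2 = {("B", "C"): 4, ("B", "D"): 3, ("A", "C"): 2, ("A", "D"): 1}

# At t1 the agent prefers A irrespective of the later choice:
assert prefs_t1[("A", "C")] > prefs_t1[("B", "C")]
assert prefs_t1[("A", "D")] > prefs_t1[("B", "D")]

# At t2 the agent prefers C irrespective of the earlier choice:
assert prefs_t2[("A", "C")] > prefs_t2[("A", "D")]
assert prefs_t2[("B", "C")] > prefs_t2[("B", "D")]

# Step-by-step choice: at t1 take the dominant first option; at t2 take
# the dominant second option, given what was already chosen.
first = "A" if all(prefs_t1[("A", y)] > prefs_t1[("B", y)] for y in "CD") else "B"
second = "C" if prefs_t2[(first, "C")] > prefs_t2[(first, "D")] else "D"
plan = (first, second)
print(plan)  # -> ('A', 'C')

# Yet throughout, the agent prefers BD to the plan actually taken:
assert prefs_t1[("B", "D")] > prefs_t1[plan]
assert prefs_t2[("B", "D")] > prefs_t2[plan]
```

The point of the sketch is just that the two dominance patterns and the throughout-preference for BD over AC can all be held consistently at once – no single pairwise comparison is incoherent, yet the stepwise chooser ends up with the plan she disprefers.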
What should we make of this? The weak conclusion to draw is just that sometimes, when you lack the power to self-bind (in this case: the power, at t1, to prevent yourself from taking option C at t2), it is undesirable to be future-biased. This is no great news. For any attitude you might have, there are situations in which it is undesirable to have that attitude. If loving your mother will make your head explode, then it is undesirable for you to love your mother.
Tom tentatively pushes us towards a much stronger conclusion, the conclusion that it is a rational defect in you to be future-biased. The difference between his example and the loving-your-mother example is that in the former the outcome that is undesirable by your own lights comes about as a result of your own free choices, choices that you endorse throughout. You are acting in a disunified way. But rational people never act in a disunified way.
Why do rational people never act in a disunified way? Tom doesn’t really spell this out in detail. The basic reasoning, I take it, is this: In the situation he describes, if you are future-biased and risk-averse then, whatever you do, you are an appropriate subject of rational criticism. If you fail to take option A at t1 then the critic can say “Why didn’t you take option A? You preferred (irrespective of what you would later do) that you take that option.” If you fail to take option C at t2 then the critic can say “Why didn’t you take option C? You preferred (irrespective of what you had previously done) that you take that option.” If you take A-and-then-C then the critic can say “Why didn’t you take B-and-then-D? You preferred throughout that you take that option.” But rational people are not appropriate subjects of rational criticism, so in this situation, if you are future-biased and risk-averse then, whatever you do, you are irrational. And rational people are not such that they would be irrational if put in the wrong situation. So, simpliciter, if you are future-biased and risk-averse then you are irrational.
As a way of kicking off the discussion, I will say that I am not entirely persuaded by this reasoning. There are other cases in which people, by acting on faultless preferences (endorsed throughout) in a step-by-step way, work to their own acknowledged disadvantage. Consider Satan’s Apple (due to Arntzenius, Elga and Hawthorne – this is the diachronic version of their case):
Satan cuts his apple into infinitely many slices and offers them to Eve, one by one, over the course of an hour – one at 11am, another at 11.30, another at 11.45, and so on. Eve will make infinitely many decisions, knowing that she has no powers of self-binding, and that no decision she makes will influence any later decision she makes. If, at noon, she has eaten infinitely many slices of apple, then she will Fall. Otherwise she will remain in Eden.
Eve strongly prefers Eden to Earth. And, all-other-things-being-equal-Eden-and-Earthwise, she prefers to eat more apple rather than less apple. What should she do?
It appears as if, whatever Eve does, she will be an appropriate object of rational criticism. If she fails to eat any one slice then the critic can say “Why didn’t you eat that slice? You preferred (irrespective of what you would later do) that you eat it, and you knew that your eating it would have no bearing on whether you ate finitely many or infinitely many slices.” If she eats every slice then the critic can say “Why did you eat infinitely many slices? You preferred that you eat finitely many slices.”
But it strikes me that it would be wrong to conclude that there is something awry with Eve’s preferences – that she is rationally defective in preferring Eden to Earth, more apple to less apple.
Where does the reasoning to this conclusion go wrong? This is a hard question. Let me put an answer on the table: If Eve takes all the slices then she is NOT an appropriate subject of rational criticism. We are rationally criticisable for the accessible options we take or fail to take. For an option to be accessible to Eve, it must be the case that, at some time, if she were to decide to take the option then she would take the option. But there is no time such that if Eve had decided at that time to take finitely many slices then she would have taken finitely many slices. By hypothesis her later decisions were causally isolated from her earlier decisions.
The same can be said for you in Tom’s case. If you take AC, then we cannot rationally criticize you for failing to take BD, because BD was never an accessible option for you. There is no time such that, if you had decided at that time to take BD then you would have taken BD. By hypothesis your later decision was causally isolated from your earlier decision.