Warren Quinn’s puzzle of the self-torturer is supposed to show that cyclic preferences can be rational, and that, in cases where they are, rationality can require resoluteness so that the agent does not end up with an alternative that is worse than the one with which s/he started. 

As Quinn makes explicit, his concern is with instrumental rationality.  It is thus natural to interpret Quinn’s use of “worse” as “worse, relative to the agent’s preferences.”  But how is “X is worse than Y, relative to the agent’s preferences” to be understood when X and Y are part of a preference cycle?



Here are three possible responses:

1) To say that X is worse than Y, relative to the agent’s preferences, is to say that X is dispreferred to Y.  

2) “Worse than” is transitive, so even if X is dispreferred to Y, “X is worse than Y, relative to the agent’s preferences” does not apply if X and Y are part of a preference cycle.

3) Even if “worse than” is transitive, and X and Y are part of a preference cycle, if X is dispreferred to Y, then “X is worse than Y, relative to the agent’s preferences” can apply in a qualified way.  More precisely, the relation can apply so long as it is interpreted as relative to a subset of options that includes X and Y and over which the agent’s preferences are transitive.  For example, in the case of the self-torturer, if the self-torturer prefers (ending up at setting) 0 over 500, 500 over 1000, and 0 over 1000, then, relative to the option set {0, 500, 1000}, 1000 counts as worse, relative to the self-torturer’s preferences, than 0.  This is so even though, relative to the set {0, 1, 2, …, 1000} (over which the agent’s preferences are not transitive), 1000 does not count as worse, relative to the agent’s preferences, than 0.   
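
To make the third reading concrete, here is a minimal Python sketch. The pairwise preference model (differences of up to 50 notches treated as imperceptible) is my own toy stand-in for the self-torturer’s preferences, not Quinn’s exact description; it is only meant to show how “worse than, relative to the agent’s preferences” can hold relative to one option set and fail relative to another.

```python
from itertools import permutations

def prefers(a, b, threshold=50):
    """Toy pairwise preferences (an assumption, not Quinn's text): differences of
    up to `threshold` notches are imperceptible, so the agent takes the higher
    setting for the extra money; larger differences are noticeable, so the agent
    takes the lower setting."""
    if abs(a - b) <= threshold:
        return a > b   # imperceptible difference: prefer the extra money
    return a < b       # noticeable difference: prefer less pain

def transitive_over(prefers, options):
    """True iff the pairwise preferences restricted to `options` are transitive."""
    return all(prefers(x, z)
               for x, y, z in permutations(options, 3)
               if prefers(x, y) and prefers(y, z))

def worse_relative_to(x, y, prefers, options):
    """Reading (3): X counts as worse than Y relative to `options` iff the
    preferences over `options` are transitive and X is dispreferred to Y."""
    return (x in options and y in options
            and transitive_over(prefers, options)
            and prefers(y, x))

small = {0, 500, 1000}                 # the subset used in the example above
coarse_dial = set(range(0, 1001, 10))  # a coarse sampling of the whole dial

print(worse_relative_to(1000, 0, prefers, small))        # True
print(worse_relative_to(1000, 0, prefers, coarse_dial))  # False: preferences here are not transitive
```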

As far as I can tell, none of these construals fits neatly with everything that Quinn says about “worse than.”  Any suggestions for other plausible ways of understanding “worse, relative to the agent’s preferences”?

15 Replies to “The Self-Torturer and Instrumental Rationality”

  1. Hi Chrisoula,
    Very interesting. I can’t think of anything other than these three. I would have assumed that we should interpret Quinn as accepting (1).
    What does Quinn say that doesn’t fit with this interpretation?

  2. He says that “better than” is transitive, and that it is not true of each setting that it is better than the previous one. I thus assume he thinks “worse than” is transitive, and that it is not true of each setting that it is worse than the next one.

  3. Good question! Here is a possibility (but I haven’t looked back at the Quinn, so I might be completely wrong). Couldn’t we say that Quinn takes the lesson of the ST puzzle to be that you cannot read off that X is worse than Y from the fact that X is dispreferred to Y, even when “worse than” is being read instrumentally (exactly because preferences can be rational but nontransitive, while “worse than” is transitive)? So let us say that ST decides to stop at N before the whole process begins. If this is the case, then, relative to this set of options (I am understanding ‘set of options’ aggregatively here, which I don’t think is possible in your third interpretation), N + 1 is worse than N, even though N is dispreferred to N + 1. Had ST chosen to stop at N + 1, then N would be worse than N + 1. This depends on a “pick and stick to it” solution (which I believe Quinn favours), but I think it could be adapted to other types of solution.

  4. Thanks for this – it is of interest to me because I have been thinking a bit lately about money pumps for intransitive preferences.
    How about this: what determines the agent’s behaviour is a choice function, i.e. a function C that takes any set O of options to (the smallest) subset of O s.t. the agent always chooses an element of C(O) if presented with option set O. To say that X is preferred to Y (X > Y) is just to say that X ≠ Y and C({X, Y}) = {X}. (Given intransitive preferences, it doesn’t follow from X > Y that e.g. Y does not belong to C({X, Y, Z}), though given representability it does.)
    Now why not say that X is worse than Y iff X does not belong to C(O) for any O s.t. Y belongs to O? In other words, the agent does not choose X if Y is an available alternative. This captures a sense in which, e.g., in the preference cycle A > A-$1 > B > C > A, it might be true that A-$1 is worse than A but false that B is worse than A (say). And it is consistent with the agent’s ending up with A-$1 in a money pump, because the agent never faces an option set containing both A and A-$1.
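
If it helps to see this spelled out, here is a minimal sketch in Python. Only the four pairwise choices in the cycle come from the comment above; the remaining entries of the choice function (including C({A, B}) = {B}) are stipulations of my own, chosen purely so that the “A-$1 is worse than A, but B is not” pattern comes out.

```python
# Options in the cycle A > A-$1 > B > C > A.
A, A1, B, C_OPT = "A", "A-$1", "B", "C"

# A toy choice function given as an explicit table over every menu of two or
# more options. The entries marked "from the cycle" are fixed by the comment;
# everything else is stipulated for illustration.
choice = {
    frozenset({A, A1}):           {A},      # from the cycle: A > A-$1
    frozenset({A1, B}):           {A1},     # from the cycle: A-$1 > B
    frozenset({B, C_OPT}):        {B},      # from the cycle: B > C
    frozenset({A, C_OPT}):        {C_OPT},  # from the cycle: C > A
    frozenset({A, B}):            {B},      # stipulated: B gets chosen alongside A
    frozenset({A1, C_OPT}):       {A1},     # stipulated
    frozenset({A, A1, B}):        {B},      # stipulated
    frozenset({A, A1, C_OPT}):    {C_OPT},  # stipulated
    frozenset({A, B, C_OPT}):     {B},      # stipulated
    frozenset({A1, B, C_OPT}):    {B},      # stipulated
    frozenset({A, A1, B, C_OPT}): {B},      # stipulated
}

def preferred(x, y):
    """X > Y iff X != Y and C({X, Y}) = {X}."""
    return x != y and choice[frozenset({x, y})] == {x}

def worse_than(x, y):
    """X is worse than Y iff X is not chosen from any menu to which Y belongs."""
    return all(x not in chosen for menu, chosen in choice.items() if y in menu)

print(preferred(A, A1), preferred(A1, B), preferred(B, C_OPT), preferred(C_OPT, A))  # the cycle: all True
print(worse_than(A1, A))  # True: A-$1 is never chosen when A is available
print(worse_than(B, A))   # False: B is chosen from some menus containing A
print(worse_than(A, A1))  # False: A is chosen from {A, A-$1}
```

The point of the stipulated entries is just that, on this definition, whether B counts as worse than A turns on what gets chosen from larger menus such as {A, B}, not on the pairwise cycle alone.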

  5. Hi Chrisoula,
    Here’s what may be a fourth interpretation or, perhaps, is just a version of (1).
    (4) To say that X is better than Y, relative to the agent’s preferences, is to say that the agent prefers X to Y. To say that X is worse than Y, relative to the agent’s preferences, is to say that the agent disprefers X to Y. To say that X is better than Y, relative to the agent’s preferences and starting point P, is to say that the extent to which the agent prefers X to P is greater than the extent to which the agent prefers Y to P. And to say that X is worse than Y, relative to the agent’s preferences and starting point P, is to say that the extent to which the agent disprefers X to P is greater than the extent to which the agent disprefers Y to P.
    And can’t he say all this while maintaining that both “worse than” and “better than” (the non-relativized notions) are transitive?

  6. Sergio: Thanks for your comment. Quinn does favor a “pick and stick to it” solution, but, early in the article, Quinn claims that 1000 is worse than 0 without supposing that the self-torturer has adopted any plan at all. (I’m not quite sure what the relevant set of options is when you say “relative to this set of options,” so maybe I’m misunderstanding your proposal.)

  7. Arif: Thanks for the suggestion. I see how this works. I’ll have to think about whether this sort of explanation is open to Quinn. I’m not sure it is. Quinn thinks that rationality requires the agent to settle on some stopping point N and stick to this, even though the agent continues to prefer N+1 over N (which is why Quinn thinks rationality requires resoluteness). But if preferences are construed in the way you’re suggesting, and the self-torturer always has to make pairwise choices, Quinn would be suggesting that the self-torturer must do something that is impossible (since, according to the proposed construal of preferences, one cannot possibly opt for N while preferring N+1 when faced with the option set {N, N+1}).

  8. Doug: Thanks for the follow-up comment. One might be able to say that “worse than” is transitive even if “worse than, relative to the agent’s preferences” is not; but if instrumental rationality is “a slave to the agent’s preferences,” as Quinn suggests, then the transitivity of “worse than” would be irrelevant to Quinn’s discussion, which concerns instrumental rationality. My understanding of your suggestion is that it would require us to dismiss Quinn’s comment about the transitivity of “worse than” as true but irrelevant; and to see him as accepting but omitting to mention the relevant claim that “worse than, relative to the agent’s preferences” is intransitive.

  9. Hi Chrisoula:
    I was thinking that Quinn could be proposing something along these lines: once you pick a point, everything above the point is worse than everything below it. Since we know ahead of time that you are not picking 1000, we can say, independently of any picking, that 1000 is worse than 0 (in fact, this registers the fact that it is not rational, given your preferences, to pick a point so late in the game). This might seem like an artificial regimentation, as it commits him to the claim that, if I pick N, then N+1 is also worse than 0; but I’m not sure that this is a problem for Quinn, given that he thinks that you should not get to N+1 anyway.

  10. I forgot to say: “relative to this set of options” was just meant to leave open the possibility that if ST somehow got unhooked and then faced the same scenario, it would be perfectly rational for her to choose to stop at a different setting.

  11. Sergio: Thanks for the clarification. Here’s my main worry: You suggest that the self-torturer will not pick 1000 as a stopping point because it’s not rational, given his preferences, to plan to stop at 1000. But then, even if the fact that 1000 is worse than 0 is related to the fact that the self-torturer will not pick 1000 as a stopping point, isn’t the order of explanation: *the self-torturer will not plan to stop at 1000 because 1000 is worse than 0*, not *1000 is worse than 0 because the self-torturer will not plan to stop at 1000*? We’re then still left with the question: What makes 1000 worse than 0? (And now we have to keep in mind that your suggestion goes along with the idea that “you cannot read off that X is worse than Y from X is dispreferred to Y even when “worse than” is being read instrumentally.”)

  12. Thanks Chrisoula for your helpful reply to me. I’m afraid I don’t know the Quinn paper very well, so excuse my naivety. But I was puzzled by your statement that the self-torturer sometimes faces a choice from {N, N+1}, at least for any N < 999. Isn’t it rather this: after a week at level N, he faces an option set O(N) that (for N < 999) satisfies O(N) = {1 more week at N followed by O(N) again, 1 week at N+1 followed by O(N+1)}. And I don’t see why the fact that N+1 > N (where > means preference) forces any particular choice from O(N) thus specified.
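
Here is a minimal rendering of the recursive option structure described in the comment above, with hypothetical labels of my own; it only illustrates the point that the options at level N are continuations, not the bare outcomes N and N+1.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Menu:
    """O(N): the situation the self-torturer faces after a week at level N."""
    level: int

    def options(self):
        """Below the top of the dial: stay at N and face O(N) again next week,
        or move to N+1 and face O(N+1)."""
        if self.level >= 1000:
            return [("stay at 1000", self)]
        return [
            (f"one more week at {self.level}", Menu(self.level)),
            (f"one week at {self.level + 1}", Menu(self.level + 1)),
        ]

# Preferring the outcome N+1 to the outcome N does not, by itself, settle which
# of these two continuations to take.
print(Menu(7).options())
```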

  13. Arif: Thanks for following up. That’s a good point. I’ve glossed over it and so would need to say more; but here’s a more direct route to my view that Quinn will not be able to adopt your construal of preferences: For Quinn, the solution to the puzzle involves recognizing that rationality sometimes requires resoluteness, where this is understood to involve sticking to a plan even though it requires one to act against one’s preferences. Quinn suggests that, having adopted a plan, the self-torturer will at some point be required to stay put rather than proceed even though he prefers proceeding over staying put. Your proposed construal of preferences casts this as impossible.

  14. Thanks Chrisoula – yes, of course you’re right about that, though now there is a question about what preference is supposed to be. But the definition of ‘worse’ that I proposed doesn’t depend essentially on a behaviouristic construal of preference. Whatever binary preference is, it is a function that takes every two-element option- or outcome-set O to a subset of O that is in some sense (for Quinn, some non-behaviouristic sense) the ‘favoured’ subset of O. Whatever exactly that means, it presumably can be used to define a function C that takes any option- or outcome-set of arbitrary size to its favoured subset. And then we can define ‘X is worse than Y’ to mean that X is not in the favoured subset of any set to which Y belongs. So *if* preference is already understood, then there may be no further problem with ‘worse than’. What you are making me doubt is whether it means anything to say that the self-torturer prefers X to Y.

  15. Arif: Interesting. I see what you mean about your definition of worse not depending essentially on a behavioristic construal of preference. I need to think more about this. Let me know if you have any further thoughts in the meantime. Thanks!
