Suppose that there is both an objective ‘ought’ and a subjective ‘ought’. Which of these two kinds of ‘ought’ figures in the anti-akrasia principle that it is irrational to do something at the same time as believing that one ought not to do it?

There is a simple way of understanding the relation between the objective and the subjective ‘ought’ on which the answer to this question is: both! It is irrational to do something at the same time as believing that one objectively ought not to do it; and it is also irrational to do something at the same time as believing that one subjectively ought not to do it.

(Note: The original version of this post contained a terrible mistake, which was pointed out by Doug Portmore and Jamie Dreier in their comments below. This is an amended version, without the mistake.)

Our concern is with agents who intentionally do A at the same time as holding a belief of the form ‘I ought not to do A’. In fact, for complicated reasons, it may matter exactly how this option A and the available alternatives are individuated. But let us set those issues aside for the time being. Suppose that we have somehow focused on an option A of the right sort.

Suppose that there is some kind of value such that for it to be the case that you objectively ought not to do A is just for there to be some alternative to doing A, B, such that B is better than A in terms of this value. (For our purposes, this value can be anything: it could be subjective utility, modelled by a utility function that measures your subjective preferences; it could be your lifetime level of happiness, or the total amount of happiness in the world as a whole; or it could be some more objective value, such as some kind of objective goodness. For our purposes, this does not matter.)

Now, suppose that there is a probability function that models the degrees of belief that it is ideally rational for you to have. And suppose that for it to be the case that you subjectively ought not to do A is for there to be some alternative to doing A, B, such that B is better than A in terms of the expectation of this value according to this probability function.

Suppose moreover that it is rational for you to do something if and only if doing it maximizes the expectation of this value according to this probability function.
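
To make these suppositions concrete, here is a minimal sketch in Python. The two states, the fifty-fifty probabilities, and the particular numbers are stipulations introduced purely for illustration; nothing in the argument depends on them. The point is only to display the structure: the objective ‘ought’ is fixed by the value of the options in the actual state, while the subjective ‘ought’ and rationality are both fixed by the expectation of that value under the ideally rational probability function.

```python
# A toy model of the suppositions above; all particular numbers are stipulated.

states = ["s1", "s2"]        # the epistemically possible states
options = ["A", "B"]         # the option A and one alternative B

# The probability function modelling ideally rational degrees of belief.
prob = {"s1": 0.5, "s2": 0.5}

# The value of each option in each state (whatever kind of value it is).
value = {("A", "s1"): 10, ("A", "s2"): 0,
         ("B", "s1"): 6,  ("B", "s2"): 6}

def expected_value(option):
    """The expectation of the option's value, according to the probability function."""
    return sum(prob[s] * value[(option, s)] for s in states)

def objectively_ought_not(option, actual_state):
    """True iff some alternative is better than the option in terms of the value itself."""
    return any(value[(alt, actual_state)] > value[(option, actual_state)]
               for alt in options if alt != option)

def subjectively_ought_not(option):
    """True iff some alternative is better in terms of the expectation of the value."""
    return any(expected_value(alt) > expected_value(option)
               for alt in options if alt != option)

def rational_to_do(option):
    """True iff the option maximizes the expectation of the value."""
    return all(expected_value(option) >= expected_value(alt) for alt in options)
```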

Finally, suppose that when the anti-akrasia principle speaks of your “believing” the proposition that you ought not to do A, it means having credence 1 in some proposition of the form ‘B is better than A in terms of [the relevant expectation of] this value’.

I shall now show how we can derive both versions of the anti-akrasia principle from these suppositions.

First, take the case of an agent who believes that she objectively ought not to do A. Given these suppositions, if this belief is rational, then the relevant proposition that B is better than A in terms of this value will have probability 1. It follows that doing A cannot maximize the expectation of this value according to this probability function, and so doing A cannot be rational. So, the objective version of the anti-akrasia principle comes out true on these suppositions.
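
To spell this step out as a worked inequality (simplifying by assuming finitely many states, and writing p(s) for the probability of state s and v_s(X) for the value of option X in state s): if the proposition that B is better than A in terms of this value has probability 1, then every state s with p(s) > 0 is a state in which v_s(B) > v_s(A), and at least one state has positive probability, so

$$EV(B) - EV(A) \;=\; \sum_{s} p(s)\,\bigl(v_s(B) - v_s(A)\bigr) \;>\; 0,$$

which is just to say that doing A does not maximize the expectation of this value.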

Secondly, take the case of an agent who believes that she subjectively ought not to do A. Given our suppositions, if this belief is rational, then the relevant proposition that B is better than A in terms of the expectation (according to this probability function) of the value will have probability 1 (according to this probability function).

Now suppose that this probability function must meet the following condition: it never misinterprets itself, by assigning probability 1 to false propositions about this very probability function (including false propositions about expectations that are defined in terms of this probability function). If the probability function meets this condition, then the proposition that B is better than A in terms of the expectation (according to this probability function) of the relevant value will be true. Again it follows that doing A cannot be rational. So, again, the subjective version of the anti-akrasia principle comes out true on these suppositions as well.
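
Schematically (with the same finite-state simplification, writing p for the probability function and EV for the expectation according to it):

$$p\bigl(EV(B) > EV(A)\bigr) = 1 \;\Longrightarrow\; EV(B) > EV(A) \;\Longrightarrow\; \text{doing } A \text{ does not maximize } EV,$$

where the first step is exactly what the no-misinterpretation condition licenses, and the second follows immediately from the definition of maximizing.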

In short: assume that rationality consists in maximizing expected value – where the expectation in question is defined in terms of a probability function that never misinterprets itself in this way; define the objective ‘ought’ as what maximizes this value; and define the subjective ‘ought’ as what maximizes this sort of expectation of this value. Then both versions of the anti-akrasia principle will be true.

7 Replies to “Objective and subjective akrasia”

  1. “It is irrational to do something at the same time as believing that one objectively ought not to do it.”
    What about the Regan/Jackson/Parfit-style case? To illustrate, take Parfit’s Mine Shafts case, where opening Gate 1 will save 100 if they’re in shaft A and 0 if they’re in shaft B, opening Gate 2 will save 0 if they’re in shaft A and 100 if they’re in shaft B, and opening Gate 3 will save 90 regardless of which shaft they’re in. Now, suppose you believe (rightly) that you objectively ought not to open Gate 3. You objectively ought to open either Gate 1 or Gate 2. But it doesn’t seem irrational to open Gate 3 while believing that you objectively ought not to open Gate 3. (See the expected-value sketch after the replies below.)

  2. Damn, Ralph, I was just working on something like this and I’m certain that anything good I come up with will be scooped by entries in this thread. One line that I was considering, which speaks to Doug’s question, is just to adopt a view on which the objective-ought either (a) does not satisfy an akrasia constraint or (b) is not determined by what is objectively best (i.e., best given the total state of the universe). I think a perfectly natural line to take is (if you like the anti-akrasia constraint) just that the lesson of the Regan case is pretty much that the old-school assumptions about objective-ought are mistaken. Either there’s no useful notion there at all (something like Zimmerman’s line) or there’s a notion there that isn’t what Moore takes it to be.

  3. Just to clarify the previous – Doug’s worry can be met by simply denying that in the relevant three-option cases it would be rational to believe there’s any sense in which this subject ought to do what’s objectively the best. [If we have the old-school view about objective and subjective ought, the subjective ought is something like the thing that is both (a) rationally believed to be the objective-ought thing to do and (b) thus rationally believed to be best in light of the total state of the universe. The subject knows she ought (in some sense) to do the thing that couldn’t be the thing that is best in light of the total state of the universe, so, given the anti-akrasia constraint, we get that the objective-ought thing to do, if there’s such a thing, couldn’t be the thing that could turn out to be the best given the total state of the universe.]

  4. Ralph,
    I think Doug is right about this. You say,

    if this belief [viz., that she ought not to do A] is rational, then the proposition that doing A does not maximize the relevant value will have probability 1. It follows that doing A cannot maximize the expectation of this value according to this probability function, and so doing A cannot be rational.

    But it doesn’t follow. In the kinds of cases Doug mentions, even though the probability that A maximizes the value is zero, A does maximize the expected value. This is a general feature of expectation, not a special feature of value. (There’s a philosopher named ‘Jacob Ross’ who has written about such cases in detail, and a philosopher named ‘Mark Schroeder’ who has a paper-in-progress about them – just on the off chance that you should run into one of them…)

  5. Doug and Jamie —
    You’re completely right. The formulation that I posted yesterday contained a terrible mistake. I have now amended my formulation so that it removes the terrible mistake. (There’s also a philosopher named ‘Ralph Wedgwood’ who has discussed these cases in detail, especially in “Akrasia and Uncertainty”….)
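
Referring back to the Mine Shafts case in the first reply: a quick expected-value check (assuming, as the case is standardly described, a fifty-fifty credence about which shaft the miners are in) shows why opening Gate 3 maximizes expected value even though it is certain that some other gate would be objectively better.

```python
# Expected lives saved for each gate, with credence 0.5 that the miners are in
# shaft A and 0.5 that they are in shaft B (a stipulation of the illustration).
credence = {"shaft A": 0.5, "shaft B": 0.5}

lives_saved = {
    ("Gate 1", "shaft A"): 100, ("Gate 1", "shaft B"): 0,
    ("Gate 2", "shaft A"): 0,   ("Gate 2", "shaft B"): 100,
    ("Gate 3", "shaft A"): 90,  ("Gate 3", "shaft B"): 90,
}

for gate in ("Gate 1", "Gate 2", "Gate 3"):
    ev = sum(credence[shaft] * lives_saved[(gate, shaft)] for shaft in credence)
    print(gate, ev)   # Gate 1 -> 50.0, Gate 2 -> 50.0, Gate 3 -> 90.0
```

Note that no particular alternative gate has probability 1 of being better than Gate 3, which is why the amended formulation above does not count opening Gate 3 as irrational.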
