Classical decision theory is built around a central "representation theorem": so long as an agent's preferences meet certain basic conditions of coherence, we can construct a function that represents the agent's preferences, in the sense that the agent prefers one prospect X over a second prospect Y if and only if the value that this function assigns to X is greater than the value that it assigns to Y. Moreover, this function has a fundamentally expectational structure: the value that it assigns to an uncertain prospect is the weighted sum of the values that it assigns to all the possible outcomes of that prospect, where the value of each possible outcome is weighted by the probability that the prospect will have that outcome.
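In symbols (my gloss on the standard formulation, not a quotation from any particular text): the theorem delivers a value function $V$, a probability function $p$, and an outcome-value function $v$ such that, for all prospects $X$ and $Y$,

$$X \succ Y \;\Longleftrightarrow\; V(X) > V(Y), \qquad V(X) \;=\; \sum_{i} p(o_i \mid X)\, v(o_i),$$

where the $o_i$ are the possible outcomes of $X$.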
So for classical decision theory, everything flows from these basic conditions on coherent preferences. In turn, these coherence conditions are typically defended by means of "Dutch book" arguments, which seek to show that someone whose preferences violate these conditions of coherence would be willing to take out a set of bets that would guarantee a certain loss, no matter what happened.
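To illustrate with a toy case (the figures are my own, purely for illustration): an agent whose credences in a proposition and its negation sum to more than 1 will regard each of two bets as individually fair, yet taking both guarantees a loss.

```python
# A minimal Dutch book sketch, with illustrative numbers.
# The agent's credences in A and not-A sum to 1.2, so she regards
# paying $0.60 for a $1 ticket on A, and $0.60 for a $1 ticket on
# not-A, as a fair price for each bet.

credence_A, credence_not_A = 0.6, 0.6    # incoherent: they sum to 1.2
stake = 1.0                              # each ticket pays $1 if it wins

cost = stake * credence_A + stake * credence_not_A   # total price: $1.20

for A_is_true in (True, False):
    payout = stake   # exactly one ticket pays, whichever way A turns out
    print(f"A={A_is_true}: net = {payout - cost:+.2f}")
# Prints net = -0.20 in both states: a loss no matter what happens.
```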
My problem is, I like the general idea that when we’re not certain what situation we’re in, we should be guided by probabilities. (As Joseph Butler, one of my philosophical heroes, put it, "To us, probability is the very guide of life.") And intuitively, the most rational way of being guided by probabilities in making our choices or decisions is by making choices that have maximal expected value (using probabilities to define the concept of the "expected" value of a function in the normal way). But for various reasons, I can’t accept the classical decision theorist’s explanation of why we should maximize expected value.
Don’t get me wrong — it’s not that I think that being willing to accept a Dutch book is a rational thing to do! My problem is that on all the remotely promising ways of understanding the relevant notion of "preferences", these fundamental conditions of coherence seem either wrong or at least unmotivated.
- Suppose that preferences are thought of as a kind of desires. Then it doesn’t seem to me to be incoherent or irrational in any way to have preferences that are intransitive or symmetric. Such conflicting desires are simply the normal condition of human life. (If your life does not contain such conflicting desires, I would have to doubt whether you belong to the same species as I do — if indeed you belong to a species at all!) Such conflicting desires, it seems to me, are no more irrational than it is irrational to have illusory sensory experiences that conflict with one’s considered beliefs.
- Suppose that preferences are thought of as a kind of disposition to choose. Then someone in a "Buridan’s Ass" situation might have a systematic disposition always to choose to go to the Left, even though really they are perfectly indifferent between Left and Right. So, even though they would always choose a course of action that involves a 100% chance of going Left over a course of action that involves a 100% chance of going Right, this is just an arbitrary way of making a choice in a Buridan’s Ass situation; and so they need not have any preference for a gamble that gives them a 50% chance of going Left over an otherwise exactly similar gamble that gives them a 50% chance of going Right.
- Suppose that preferences are thought of as a kind of evaluative judgment — i.e. as judgments about which prospects are all things considered better than others. The trouble here arises when we ask whether the relevant judgments are judgments about which prospects are objectively better than others, or about which prospects are subjectively better. Either way, problems arise:
- Suppose that preferences are identified with judgments about which prospects are objectively better than others. But the objective goodness of a prospect surely does not depend on what it is rational to expect its outcome to be, but on what its outcome actually is. So if one is rational, and one does not know for certain what the outcome of a prospect will be, one will not form any outright belief about how objectively good that prospect is; in particular, one will not form the outright belief that the objective goodness of that prospect is identical with its expected degree of objective goodness. (Indeed, there are many cases in which it is certain that the objective goodness of a prospect is not identical to its expected degree of objective goodness: a fair-coin gamble that wins or loses $10 has an expected value of 0, but its objective value is certain to be either +10 or −10, never 0.)
- Suppose that preferences are identified with judgments about which prospects are subjectively better than others. But then it seems to me more plausible to identify a prospect’s degree of subjective goodness, not with its expected degree of subjective goodness, but with its expected degree of objective goodness. But then we don’t seem to have any illuminating explanation of why subjective goodness should be defined in this way, rather than in any other way. If it is a prospect’s objective goodness that ultimately matters, then why does a prospect’s expected degree of goodness matter at all? We don’t seem to have made any progress with understanding the fundamental question.
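To put the identification I find more plausible in symbols (my notation): where $g_{\mathrm{obj}}$ measures objective goodness, a prospect $X$'s degree of subjective goodness is

$$g_{\mathrm{subj}}(X) \;=\; \sum_{i} p(o_i \mid X)\, g_{\mathrm{obj}}(o_i),$$

i.e. its expected degree of objective goodness, where the $o_i$ are the possible outcomes of $X$.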
Anyway, that’s the problem that I’ve realized that I need to worry about (I should thank Ned McClennen, who was my commentator at the recent Syracuse conference on Practical Reason, for prodding me to think about it…): I believe that rational choice should maximize the expectation of some sort of value, but I don’t seem to be able to avail myself of the classical decision theorist’s reason for believing this.
So what reason do I have to believe this? I’ll try to post something outlining my rough ideas about what this reason might be at some point in the near future.
A question of clarification about this bit:
“Suppose that preferences are identified with judgments about which prospects are objectively better than others. But the objective goodness of a prospect surely does not depend on what it is rational to expect its outcome to be, but on what its outcome actually is. So if one is rational, and one does not know for certain what the outcome of a prospect will be, one will not form any outright belief about how objectively good that prospect is.”
Initially the first sentence sounds plausible. If I’m not wrong, it’s something like what Smith holds in the ‘Right, Good, and Uncertainty’ paper. He doesn’t talk about preferences as such, but about right action, which is a function of the probabilities one gives to the outcomes and the objective value-as-seen-by-one.
In any case, could you explain a bit more about what the problem is? I’m not sure I see why it follows from this idea that the objective goodness of a prospect would have to be taken to depend on what it is rational to expect the outcome to be. I guess I don’t see why, if one doesn’t know for certain what the outcome of a prospect will be, one couldn’t still form a belief about its objective value. Of course, this would be an estimate, and the degree of belief would be lower. Is the problem then the one mentioned in (2), that this is arbitrary? But Smith seems to be able to motivate this somewhat, as I remember.
Right, I have the same question as Jussi’s.
If I liked the ‘evaluative judgment’ approach, I would put it this way. There is just one relation, better than; preferences are judgments about which prospects stand in that relation; and, if the person is consistent, these judgments satisfy the coherence axioms.
So that guarantees that the person’s preferences will be expectational if they are consistent. There may be various problems with it, but I don’t see which you’re pointing to.
Suppose that preferences are thought of as a kind of desires. Then it doesn’t seem to me to be incoherent or irrational in any way to have preferences that are intransitive or symmetric.
As I’m sure you know, these sorts of preferences can make you into a money-pump. Isn’t that irrational? Suppose you prefer B to A and C to B and A to C, so you violate transitivity. Let e be some very small amount of money. There is a sufficiently small e such that you prefer B minus e to A, C minus e to B, and A minus e to C. You start off with A and, through a series of rational trades (given your preferences), paying e each time to swap what you hold for the item you prefer to it, you wind up with A again, minus 3e! If you continue with these rational trades (i.e., if you don’t have much foresight) you’ll wind up right where you started, except bankrupt. That looks irrational, no?
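To make the arithmetic concrete, here is a minimal sketch of the pump (the numbers are illustrative):

```python
# Cyclic preferences: B over A, C over B, A over C. At each step the
# agent pays a small premium e to swap what she holds for the item
# she prefers to it; one full cycle of "rational" trades follows.

e = 0.01                                  # the premium paid per trade
wealth = 0.0
holding = "A"
preferred_swap = {"A": "B", "B": "C", "C": "A"}

for _ in range(3):                        # one full cycle of trades
    holding = preferred_swap[holding]
    wealth -= e                           # pay e for the preferred swap

print(f"holding {holding}, net {wealth:+.2f}")   # holding A, net -0.03
```

Each trade looks rational in isolation, but the cycle returns the agent to A, 3e poorer, and the process can be repeated indefinitely.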
“Suppose that preferences are thought of as a kind of desires. Then it doesn’t seem to me to be incoherent or irrational in any way to have preferences that are intransitive or symmetric.” Some theorists distinguish two sorts of desire: (1) a sense in which conflicts are not irrational, where you might have a desire to do A and a desire to do B, knowing you cannot do both, so that you have to decide which to do, and (2) an “all things considered” sense, in which conflicts are supposed to be irrational. Do you accept this distinction? If so, someone who identifies preferences with desires is thinking of the second sense. Does your remark here still apply?
I’m not sure what Ralph had in mind, but I think I agree with what he wrote. One idea could be that preferences are much closer to action than desires, and thus preferences cannot be mere desires. After all, we can and do externalise desires, and we are not always motivated by them if we find them alien to us (not backed by second-order desires, evaluations, plans, and so on).
As a result, you don’t get the money-pump going from mere desires: it requires actions, and it is only the more solid preferences that lead to action. From desires one can always stand back. I think this holds even for what one desires most overall.
Ralph, you write,
Suppose that preferences are identified with judgments about which prospects are objectively better than others. But the objective goodness of a prospect surely does not depend on what it is rational to expect its outcome to be, but on what its outcome actually is.
True, the objective goodness of some prospect does not depend on its expected value, but certainly its expected value is evidential for its objective goodness. So there seems nothing mistaken in forming a judgment about the objective value of some prospect P (I take it that by ‘prospect’ you mean some action or option) on the evidential basis of the expected value of P. That is, there is nothing epistemologically amiss in judging that P is objectively better than P′ on the basis of evidence for P’s (and P′’s) objective value derived from P’s (and P′’s) expected value. But you add,
So if one is rational, and one does not know for certain what the outcome of a prospect will be, one will not form any outright belief about how objectively good that prospect is…
That seems mistaken to me. I can be rational and have enough evidence from the expected value of P (and P′) that P is more objectively valuable than P′. Wouldn’t you say?
Thank you so much Jussi, Jamie, Gil, and Mike! As always, PEA Soup has generated a really terrific set of comments! (Unfortunately, I’m travelling around the UK visiting friends this week, and this will slow down my responses to any comments…)
1. Jussi and Jamie (and Mike in his second post) ask why I think that if we identify “preferences” with judgments about which prospects are objectively better than others, these preferences won’t conform to the axioms of decision theory.
The simplest way to make the point is by appealing to a case like Parfit’s “mine shaft” case. This is a case with three options — A, B, and C — in which one knows for certain that either A or B is objectively the best option, and also that whatever situation one is in, C is bound to be suboptimal; however, one also knows that whatever situation one is in, C only falls very slightly short of being optimal — whereas if in fact it is A that is the best option, then B is utterly disastrous, and if B is the best option, then A is utterly disastrous. In this case, one is certain that C is not the objectively best option, while it apparently is the subjectively best option. So it seems clear to me that the objectively-better-than relation does not satisfy the axioms of decision theory.
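Here is the structure with some illustrative numbers (my own, not Parfit's):

```python
# Mine-shaft structure: two equiprobable states. A is best in S1 but
# disastrous in S2; B is the reverse; C falls only slightly short of
# the best option in both states, so C is certainly not objectively best.

p = {"S1": 0.5, "S2": 0.5}
value = {"A": {"S1": 100, "S2": -1000},
         "B": {"S1": -1000, "S2": 100},
         "C": {"S1": 90,    "S2": 90}}

for option, outcomes in value.items():
    ev = sum(p[s] * outcomes[s] for s in p)
    print(option, ev)
# A: -450.0, B: -450.0, C: 90.0 -- C maximizes expected value even
# though one is certain that C is not the objectively best option.
```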
Of course, as I explained in my original post, I do think that the subjectively-better-than relation does satisfy the axioms of decision theory, because I identify the “subjective goodness” of a prospect with its expected degree of objective goodness; but I think that this is a fact that demands further explanation, which we cannot simply read off the axioms of decision theory themselves.
2. Gil asks whether I believe in the distinction between prima facie desires and all-things-considered desires, and whether I am convinced that all-things-considered desires violate the axioms of decision theory quite as frequently as prima facie desires do.
My answer to Gil’s question is this: I’m happy to accept the existence of these “all things considered desires” so long as they’re not supposed to be anything different from both choice dispositions and evaluative judgments. If they are supposed to be different from both choice dispositions and evaluative judgments, then I’m afraid to say that I really haven’t the foggiest idea what they’re supposed to be!
3. Mike (in his first comment) and Jussi (in his second comment) focus on my point that there is nothing irrational about having conflicting desires. In particular, Mike makes the point that if I have conflicting desires, I can be made into a “money pump”. So, how can it be rational to have conflicting desires of this sort?!
In my view, Mike’s point is mistaken, since it neglects the distinction between desires and choice dispositions. Here’s an example. I used to smoke (about 20 cigarettes a day), but I have now given up. As an ex-smoker, there are still many occasions on which I desire to smoke. However, I have beaten this craving — in the sense that I now have a firm habit of resisting the desire to smoke.
So I still have conflicting desires with respect to smoking. But this won’t make me a “money pump”; even if you offer me cigarettes for free, I still won’t smoke them (in spite of my desire to do so!)
Ralph,
I didn’t understand what you meant by “objective”. Now I do.
But my suggestion didn’t use the word “objective”, in any case. It says there is just one relation, better than, which satisfies the axioms, and so is expectational.
Can you say what the problem is with that way of explaining things? (I am not endorsing it, by the way, just saying that if I liked the view that preferences are judgments of which things are better, this is the version I would defend.)
Ralph,
thanks for clearing this up. That’s helpful. I think I had a slightly different view in mind. On this view, preferences would be a function of two mental states: the probabilities the subject gives to the outcomes, and the beliefs the subject has about objective value. But I guess this view was the second one and not the first. The first view would not include the judgment about probabilities in the picture, and it’s right that this must lead to problems.
I do have worries about identifying preferences with judgments about the expected objective value of prospects. Jimmy Lenman has nice examples in his ‘Consequentialism and Cluelessness’ paper. The one that strikes me as convincing is this case:
“Suppose you are in charge of plans for the Normandy landing for the Allied Forces in World War II. You are told by your staff that two very different plans of campaign have been worked out, plan A and plan B, of which you must choose one. You know, of course, that the plan you will choose will have momentous consequences for the soldiers on the ground, the civilian population, for the future course of the war. And you know too, let us suppose, that if you choose plan A, a certain dog, Spot, belonging to a harbourmaster in Cherbourg, will get his leg broken. If you choose plan B, Spot will be just fine. And, let us suppose that you know nothing else.”
In this case the expected objective value of plan B is slightly better, by the degree of the good of Spot’s unbroken leg. Otherwise the expected values of the plans are equal, because we don’t know of any other differences between the outcomes. But it does not seem likely that anyone who recognises this (dog-lovers apart) has a preference for plan B.
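Schematically (my own gloss on the case): if the plans are evidentially on a par in every other respect, then

$$EV(B) - EV(A) \;=\; v(\text{Spot's leg unbroken}) - v(\text{Spot's leg broken}) \;>\; 0,$$

so on the proposal under discussion, anyone who sees this must have a (marginal) preference for plan B.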
So I still have conflicting desires with respect to smoking. But this won’t make me a “money pump”; even if you offer me cigarettes for free, I still won’t smoke them (in spite of my desire to do so!)
I see. So you do think that all-things-considered desires/preferences are irrational unless transitive and asymmetric, right? Seems pretty clear that you’re right regarding prima facie or poorly considered desires.
1. Jamie — Well, as I said, I think that what really ultimately matters is how objectively good one’s actions are. What we really care about is what’s going to happen, not what it is rational to expect to happen. So I am convinced that this relation of being objectively-better-than exists, and that it is not expectational.
Now I have nothing against defining other senses of the word ‘better’ which are expectational. In particular, we can define the notion of a prospect’s degree of “subjective goodness” as the prospect’s expected degree of objective goodness, and this relation will be expectational. But since I think that it is what is objectively better that ultimately matters, we still need to explain why any expectational notion (like “subjective goodness”) has any authority or importance when it comes to making choices or decisions. That is our fundamental question, and I don’t think that we’ll make any progress with it by simply defining “subjective goodness” so that it is expectational.
2. Jussi — I think that you’re misreading Jimmy here. (I say this with some trepidation, since Jimmy is a member of PEA Soup and might be offended by my presuming to speak for him on this point…) But as I recall Jimmy’s paper, his point is that you can’t really make any rational assignment of probabilities about such huge and imponderable questions as which of two possible courses of action will be “better for the world as a whole”. Admittedly, Lenman concedes that there are some propositions to which we can rationally assign probabilities (such as the proposition that Spot’s leg will be injured); but once we start thinking that what ultimately matters in one’s actions is whether those actions are “best for the world as a whole”, then those other propositions will seem utterly trivial in comparison, and the probabilities that one assigns to them cannot be what determine what decision it is reasonable for one to make. I don’t think that Lenman is conceding that the option that involves saving Spot’s leg clearly has greater expected utility.
3. Mike — As I said in response to Gil Harman’s comment, I don’t really know what all-things-considered preferences are if they aren’t either choice dispositions or intentions or value-judgments (and I’ve already explained why I don’t think we can ground an account of rational choice on the claim that preferences in any of those senses must meet the axioms of decision theory).
Ralph,
I probably am. I think he says that Spot makes a tiny difference to the expected value of the choice, but that this cannot affect the reasonableness of the choice given the background.
But I think there is still a problem here. We could have more information. We could know the different possible outcomes and their probabilities. Consider the case where these come out as even for both plans: both plans have the same chances of such-and-such, where such-and-such is always some massive Europe-wide result of deaths and saved lives. If this were the case, the other plan would still have a tiny, tiny advantage in expected value. If preferences are identified with judgments about the expected value of options, then this difference should translate into a marginal preference for the other plan. I’m not sure it would.
Jussi — Well, perhaps my intuitions are different from yours, but my feeling is that if in all other respects the expected goodness of Plan A and the expected goodness of Plan B were perfectly tied, I think I’d spare poor Spot’s leg. I’d be painfully aware that this wouldn’t guarantee that the chosen plan was objectively better than its rival (only that its expected goodness was marginally greater), but it would seem to me quite perverse to sacrifice Spot’s leg for nothing!
Maybe my intuition just shows my immoral bias against dogs! But that’s not quite the point. The proposal is that preferences are identical with judgments about expected degrees of goodness. If this were the case, it would be impossible for me to judge that the Spot option has more expected value and yet not prefer this option. This is not even a mistake I could make, because the two states could not come apart. This sounds like a problem.
I think there is a similar phenomenon in life-saving incidents. So, you might have two people drowning, and you can save only one of them. One of them has a slight headache, and you happen to have painkillers with you. Saving that person would presumably have more expected value, but I’m not sure that I would have a preference for doing so.
Jussi — My position is that there are various different things that the term ‘preference’ could refer to. It could refer to: (1) desires; (2) dispositions to choose; or (3) evaluative judgments about which prospects are better than which (that is, judgments about the prospects’ expected degrees of goodness).
If we understand “preference” in senses (1) and (2), then I completely agree that you could judge that A has greater expected value than B but not prefer A to B. I am simply denying that you could do this if we interpret “preference” in sense (3).
That seems trivially true, of course. Do you think that any one of those is what preferences uniquely are according to ordinary use, or is the ordinary use ambiguous too, like the philosophical use?
Here’s my proposal, for what it is worth. I know you probably don’t like it, because it is part of the ‘reasons project’. Instead of evaluative judgments, couldn’t preferences be normative judgments about what there is reason to choose or bring about? I find this quite natural. Usually what one thinks there is most reason to bring about coincides with what outcome one judges to have most value. But in some cases you might think that there are considerations because of which certain good-makers are not taken to be reasons, and thus one does not think that there is more reason to bring about the trivially better outcome. This would take care of the previous cases: Spot’s leg or headaches are not able to be reasons in those contexts, even when they make the options slightly better.