I'm watching a riveting movie. I have no desire to get up or do anything else. Good thing too, because I'm on a transatlantic flight.
Intuitively, I think I would be better off if I were watching that movie at home and could get up, walk around, or stop the movie and take it up again later, even if I didn't want to do any of those things. Factor out the typical differences in comfort and legroom between economy-class seating and the easy chairs in the average living room–put me in business class if you'd like. It still seems that the mere possibilities I enjoy in the comfort of my home contribute to my well-being.
And it doesn't seem to me that I just want possibilities as a hedge against future contingencies. I have (really) on some occasions paid (around $20) for an aisle seat on a plane so that I could have the mere possibility of getting up without a hassle. And when, on those occasions, I didn't in fact want to get up but cheerfully sat through the entire flight I still deplaned satisfied that it was money well spent.
Mere possibilities don't figure in experience, so hedonists have a quick answer. But if we understand well-being as preference satisfaction, we note that we can be made better off by states that don't figure in experience. So…why not by states that obtain at other possible worlds?
Intuitions wanted!
It’s not just any possibility that makes you better off. What makes you better off are additional reachable possibilities. And that makes it seem as if what you’re really talking about here are additional freedoms. Those do make you better off, especially if you know about them. Just knowing that you could do something is enough to make your situation better. Just believing may or may not make you better off, depending on how much of a subjectivist about ‘better off’ you want to be.
Well, I know this is a bit of a trite comment, but preference theorists can explain why states that obtain at other possible worlds can make you better off – if you prefer to have a certain state obtain at a (nearby) world, and it does, then your preference is satisfied.
Robert, I assumed HEB meant by ‘possibilities’ what you mean by ‘reachable possibilities’ (actually now I’m not sure what you meant — we aren’t going to get into the accessibility relations in Kripke semantics, are we?).
I’m not getting the intuitions, though — at least not strongly. Maybe what’s blocking them is the fact that I can’t just “factor out” the leg room, and when I imagine myself in business class I imagine myself so damned pleased to finally have one of those seats (in my imagination I must have gotten it for free since there is no way in hell I would ever pay for it) that my estimate of my well-being goes through the roof. (I also like aisle seats, a lot, because of the extra few inches of room I capture from the aisle.)
Try this. Suppose you spend four hours in your kitchen, enjoying yourself. Cooking on a Sunday afternoon. And then you learn that for those four hours a force field prevented you from leaving the kitchen. Does it seem like your estimation of how well off you were during those four hours goes down?
I have mixed feelings about this.
Your intuition might find some support in a Razian conception of autonomy. Raz maintains that the availability of a sufficient set of worthwhile options, whether you know about or exercise them, is a prerequisite of autonomy. (You might do without the “sufficient” qualification.) But it’s important to Raz that the options be worthwhile — worthless ones do not serve autonomy in any way. So I think you need to be careful in calling them “mere possibilities”. They’re “mere”, I take it, just in the sense that their normative relevance is not due to their likelihood of being realized; they are not “mere possibilities”, though, in the sense that the possibilities must be ones that are worth having — not just any old possibility will do.
Jamie,
Fuck the Frankfurt counters. By ‘reachable’ I was just thinking what you, given your actual situation, could realistically do or could have done other than what you did do — not what might logically have been. It seems to me that the fact that you (realistically) could have done x, even though you didn’t, is a good for you. Maybe you need to know it too, though I’m unsure of that.
Maybe what John refers to, in Raz, is what I’m thinking.
I wouldn’t be surprised if the belief that we have other options (rather than being “forced” to do X, even if we would have chosen X anyway) makes us happier. This seems to be what H.E.’s example is illustrating. I don’t have any intuition that I’m made worse off by a secret forcefield that will never actually get in my way.
By “mere” possibilities I mean that they have not been, are not, and will not be realized.
What makes them “worthwhile” is a separate issue. May I suggest that what makes them worthwhile for me is their being states of affairs that I desire at nearby possible worlds–ones in which my basic psychology is similar? Worlds at which I want to get out of my seat on the plane are nearby, so the possibility of getting up is worth something to me; worlds at which I want to stick a bean up my nose are remote, so the possibility of doing that isn’t worth anything to me.
My intuitions about cooking in the force field are skewed because worlds at which I enjoy cooking are remote but it’s easy enough to conjure up more intuitively compelling cases of the bird-in-the-gilded cage variety and, for all intuitions are worth, I don’t think that in all of these cases it’s the [true] belief that I’m stuck in that kitchen that makes me worse off–though I think I should be even worse off if I knew I was stuck.
People are talking behind my back–something I don’t want. I’ll never find out and their malicious gossip doesn’t have any significant causal consequences for me. Granted, I’d be even worse off if I found out. But I’d still be better off if people weren’t talking behind my back at all.
I’m stuck in that airplane seat. I’d be better off if I were dead drunk or asleep, so that the belief that I was stuck didn’t further contribute to my misery. But I’d be better off still if I weren’t stuck at all.
This is someone else’s argument – I can’t remember where I saw it:
If mere freedom is good, then it looks like it should sometimes be better to have the choice between two worse options rather than having to take one better one (so long as the difference between the value of the worse and better options is low enough).
So imagine you have a choice between apples, bananas, and chocolate. Further, imagine (plausibly) that you prefer chocolate to bananas to apples. So long as this preference is sufficiently weak, you should sometimes prefer a choice between bananas and apples to the certainty of chocolate. But that doesn’t seem right, so it seems to me.
I’m actually not entirely convinced by this, but it’s interesting. (I’m in a bit of a rush, but hopefully that’s sufficiently clear. It might need some tidying to change facts about preferences to facts about goodness.)
I share Richard’s hunch that in at least some of the examples the bad thing about having no options is that we’re being forced — and if it’s someone else, another agent, doing the forcing, that’s worse.
Sen and Nussbaum have this idea (it’s been a long time since I’ve looked at this) that freedom, which is good, is capability. When you’re stuck in the airplane seat, you have fewer capabilities than when you’re at home. So that’s bad.
This intuition is a bit of a mystery to me. It seems to me you’ve paid a lot of money for a freedom or opportunity, but if you don’t make use of that opportunity, it’s wasted. In the case given, I would say that you’re not better-off, you just paid money for the chance that you might be better-off. If you don’t end up using that chance to wander around, etc., you would have been just as well-off sitting in the middle. You might be more pleased about sitting on the aisle, and your contemplation of your possibilities might give you more pleasure, I guess I could understand that claim. I guess I could also understand the claim that a loss of autonomy is bad for you (as some have been suggesting). But the mere fact that you have the opportunity to do something you never do strikes me as not at all in your interests unless you actually do it.
Just an exegetical point, re: Sen and Nussbaum. Sen and Nussbaum don’t believe that capabilities per se are intrinsically good for you. They believe that capabilities are the proper index of distribution, etc., that they should be the, as it were, “currency of justice”. But they mean this to be in opposition to welfarist proposals. But furthermore the claim seems strange to me even if S&N do make it. Capabilities can be instrumentally good, of course, but if I have the capability to become an artistic genius, and instead sit on my couch eating Cheetos, I’m not living a better life than I would have had I not had the capability. I’m just like any other guy eating Cheetos.
Sorry; I didn’t carefully read your comment spelling out your proposal in greater detail. I’m a bit confused as to what the proposal actually is. Is the idea that:
I, the actual-world version of Dale Dorsey, want x to occur in a nearby possible world, and x occurs in a nearby possible world, hence I’m better-off? (This would be like Simon’s proposal above.) In that case I guess I can see the intuition.
But I thought the proposal was something like this:
I, the actual-world version of Dale Dorsey, have the possibility (or capability) of x-ing, or of making it the case that x. I value x. But x doesn’t occur (I don’t x). The possibility of x, however, makes me better-off. This is the claim I find a bit mysterious.
But then it seems like you were inclined to believe this:
I, the actual-world version of Dale Dorsey, value x, and so does the nearby-possible-world version of Dale Dorsey (Dale*). In Dale*’s world, x occurs (but not in Dale’s world). The fact that x occurs in Dale*’s world makes Dale (in the actual world) better-off. This claim I also find a bit mysterious–as if Dale*’s welfare somehow also affects Dale’s. But I guess this is just asking which view you’re advancing.
I’m not sure that I share this intuition.
For example, I recently purchased a Roku player that allows me to watch streaming movies from Netflix on my TV. I have 12,000 titles to choose from, an awful lot of possibilities there. But what has happened is that I often start to watch one movie, and at the first sign of my mind wandering, I’ll stop and choose another. I have so many to choose from that I tend to become dissatisfied with any movie I am watching, thinking that there must certainly be something better I can choose. When I had fewer choices, I would sit through the parts that did not catch my attention so much, and was thus able to enjoy the movie eventually. The extra freedom here is making me worse off in at least one way.
I’m advancing a view based on the “mysterious” intuition that even if I don’t prefer some state of affairs S, and S never actually obtains, I am better off if my counterpart at a nearby possible world wants and gets S.
And I’m flying the idea that what makes something “of value” to a person is just there being a nearby possible world at which it satisfies his counterpart’s preference. This is a way of responding to the problem of adaptive preference. Also, speculatively, a way of defending a subjectivist account of well-being according to which all value turns out to be preference satisfaction–at some world or other.
As for the Paradox of Choice, the problem is not having 12,000 streaming movies available but knowing that they’re available and having their availability in your face. Satisficing strategies avoid this.
H.E. – do you think it makes a difference whether one actually desires to have “mere possibilities” available? Perhaps you have such desires — in which case, as Simon Keller noted upthread, even ordinary preference-satisfaction accounts can accommodate that. So the test case is to consider someone who doesn’t care about mere possibilities. So long as I actually get everything I want, would you still insist that I’m made worse off by the fact that my counterparts have unsatisfied preferences (i.e. even though I explicitly avow that I don’t care about my merely possible counterparts, so their frustration does not entail any frustration on my part)?
Hi H.E.,
I wonder if mere possibilities might play a role in contributing to well-being in a different way. Suppose I face the choice of acting bravely or cowardly on some occasion, and suppose that how I act will impact my well-being. However I choose, I would think that my action will have value for me in part due to the mere possibility in which I acted in the contrary fashion. (This might be a view attractive to those who think achievement is an important component of well-being. An achievement seems less significant, and perhaps not even an achievement, if there was no possibility of failure.)
Richard, part of the motivation for the kind of account with which I’m playing is an interest in solving the problem of adaptive preference.
When it comes to my preoccupation with getting aisle seats, and my claim that the mere possibilities they provide benefit me, I can say it’s because I actually have a taste for having lots of possibilities. The hard cases are those in which alleged victims of “adaptive preference,” e.g. the poor, ground-down Indian women about whom Nussbaum writes, don’t actually want stuff we think they should want–stuff that we think would make them better off–and aren’t interested in possibilities.
It seems to me that there are 3 ways to go with this: (1) they’re doing fine: no point in expensive programs of social improvement because their actual preferences are satisfied; (2) they’re not doing fine because even if they don’t prefer what we should regard as more favorable conditions there’s an Objective List of good stuff they’re not getting which they should be getting whether they want this stuff or not or (3) they’re not doing fine for some other reason that doesn’t commit us to an Objective List of good stuff.
I’m interested in exploring (3) and propose that the reason why they’re not doing well is because they (or their counterparts) at nearby possible worlds have preferences that they (at the actual world) can’t satisfy. Leaving aside possible world talk: they’re less well off because there are preferences that they “could easily have had” that they can’t satisfy.
The metaphysical issue is a separate one I think (though maybe you can persuade me otherwise). Whether these individuals at nearby worlds are me or my counterparts I think shouldn’t make any difference to whether or not what happens to them benefits me. Suppose I think that I’m a stage rather than a worm and that when I say that I’ll be freezing my butt off in Chicago at the APA next week it cashes out to I’m related in a certain way other than identity to stages in Chicago next week. Well that’s fine but I’m still packing heavy sweaters because I have a serious concern about the well-being of those future stages–whether my concern is self-interested or merely “self”-interested. And likewise what goes for those nearby possible individuals matters to me whether my concern is self-interested or merely “self”-interested.
Steve, I’ve got to think about this one. I guess it never really occurred to me that “achievement” as such, where that entails the nearby possibility of failure, could contribute to well-being. Not saying that it couldn’t because some people clearly like to gamble and value “achievement” over and above the states they achieve–I just need to think about it. Thanks for this example!
Hi H.E. (if I may) –
You mentioned a few times that you think this view might go some distance toward solving the problem of adaptive preferences, but I’m not sure I see how it’s supposed to do that. You mentioned that when the desires of your counterparts (at nearby possible worlds) are fulfilled, this makes you better-off. But I can think of at least two problems:
1. You understand “nearby” possible worlds as “ones in which my basic psychology is similar”. But then you also seemed to indicate that worlds in which you enjoy cooking are remote. OK. But then I should think that worlds in which a given person doesn’t have adaptive preferences are actually quite remote–and hence wouldn’t affect the well-being of the individual in question. For instance, if I’m displaying the “horizon problem” (as Jennifer Hawkins puts it), I lack a sense of my own self-worth, and hence desire my current shabby conditions. But then my counterpart would also lack this sense of self-worth, and any world in which I had a proper sense of self-worth would be remote, and hence irrelevant when it comes to my well-being. Hence it’s not true that “they (or their counterparts) at nearby possible worlds have preferences that they (at the actual world) can’t satisfy.”
2. Adaptive preferences–I take it–are a function of a person’s psychological confrontation with his own possibilities, as it were. Hence adaptive preferences, I should think, aren’t had in a vacuum, but develop because of certain factors of the world around them. So most worlds at which I value x and can’t get it would be a world in which I adapt my preferences away from x. But your claim that a person is doing badly because her counterpart desires x but can’t get it doesn’t seem to include very many possible worlds. Assume that, prior to adaptation, I (and my counterpart) value x. Either my counterpart can get x or he can’t. If he can’t, it’s likely he won’t desire x, as a result of adaptation (and hence won’t reflect badly on my well-being). If he can, it’s likely he’ll desire it and get it, so his desire for x does not go unsatisfied (and hence will reflect well on my well-being). Of course, this is a very rough generalization, and there are surely some possibilities out there that fall outside of these two options. But I’m not sure that there are very many of them, especially given that we’re talking about a preference that is adaptive in the first place. (Hence one would expect that wherever one could get x, they would desire it, and wherever they couldn’t–like the actual world–they wouldn’t, assuming the “basic psychology” is the same.) Of course, it’s always possible I don’t know what the hell I’m talking about when it comes to possible worlds.
Also, I’m still a bit confused about the proposal. In your response to me you write: “even if I don’t prefer some state of affairs S, and S never actually obtains, I am better off if my counterpart at a nearby possible world wants and gets S”. But then above you write: “the reason why they’re not doing well is because they (or their counterparts) at nearby possible worlds have preferences that they (at the actual world) can’t satisfy”. So it sounds like on the one hand you’re measuring my well-being in part by what my counterpart in world W values and achieves in W. But then in the second criterion you suggest that you’re measuring my well-being in part by what my counterpart in world W values and could achieve in the actual world. If it’s the latter possibility, this might go some length to solving my second worry. But I might be reading you incorrectly.
Dale, if I may, you may… I misspoke on possible worlds. People are badly off if, at nearby possible worlds, they want x but can’t get x at the actual world or at any other possible world in the neighborhoods of the nearby worlds at which they want x.
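On one reading, the revised criterion above can be put schematically like this (my notation, not H.E.’s; I’m assuming a “nearness” relation N on worlds glossed as similarity of basic psychology, a preference relation Pref, and an obtaining relation Obt):

```latex
% A hedged formalization of the revised criterion (notation is mine, not H.E.'s).
% @        = the actual world
% N(w)     = the set of worlds "near" w (similar basic psychology)
% Pref_w(S,x) = S (or S's counterpart) prefers state x at world w
% Obt_w(x) = state x obtains at world w
\[
\mathrm{BadlyOff}(S) \iff
  \exists x\, \exists w \in N(@)\;
  \bigl[\, \mathrm{Pref}_{w}(S,x)
  \;\wedge\; \neg \mathrm{Obt}_{@}(x)
  \;\wedge\; \forall w' \in N(w)\; \neg \mathrm{Obt}_{w'}(x) \,\bigr]
\]
```

That is: there is a state x that some nearby counterpart of S wants, and x obtains neither at the actual world nor anywhere in the neighborhood of the world at which the counterpart wants it.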
1. Re adaptive preference, I’m skeptical about the take, by privileged observers (e.g. Nussbaum), that the oppressed poor have deep psychological problems, lack of “self-worth” or whatever. Even in Nussbaum’s stories, it doesn’t take therapy or extensive consciousness-raising to get the poor women whose stories she tells into women’s co-ops and various self-betterment projects. As soon as they’re convinced that improvement is feasible and get just a little “you go girl” they jump at the chance–which is to say that the possible worlds at which they want improved conditions are nearby. There’s nothing “deformed” about their basic psychology: they’re not rich neurotics–they’re rational poor folks, gripped by peasant-fatalism, making the best of a raw deal.
2. Again, empirically, I don’t see any reason to assume that most worlds at which I value x but can’t get it are worlds at which I cease preferring x to options that are feasible. “Adaptive preference” is supposed to be a peculiarity of the oppressed–not the norm for most who typically prefer lots of states that aren’t feasible. I’m not hanging much on preference–not occurrent desire or yearning, not feely frustration if not satisfied, etc. Just would choose if available. So no dilemma. Jeez, most of the states most people prefer most of the time are unattainable.
I’m also assuming that the objects of preference are states rather than bundles of goods though I don’t think that’s necessary to make this point: I prefer keeping my 1996 Nissan Sentra to paying $whatever to have a new, red Miata convertible, but I’d much prefer the Miata without paying $whatever.
H.E. – I didn’t mean my above comment to depend on counterpart theory vs. trans-world identity. My point was just that if I explicitly don’t care about my non-actual “selves”, then it seems odd for anyone else to insist that they affect my well-being nonetheless. This certainly goes against the spirit of subjectivism, even if you abide by the letter of appealing only to preferences (incl. merely possible preferences).
re: adaptive preferences, I don’t see the motivation for saying that these are really preferences at all. Depression and fatalism may suppress our motivation to pursue (or even consciously formulate) our preferences, but that doesn’t mean we no longer have those preferences for a better life. As you say:
“As soon as they’re convinced that improvement is feasible and get just a little “you go girl” they jump at the chance–which is to say that the possible worlds at which they want improved conditions are nearby.”
I would’ve thought that this shows that they really wanted those improved conditions all along. It’s just that they were previously too fearful and fatalistic to dare enunciate such hopes.
One may think that X is a hopeless ideal, and so settle for explicitly desiring Y instead. But this doesn’t change the fact that one prefers X to Y — not just in a nearby possible world, but right here, in this world.
So I don’t see adaptive preference as any kind of problem for existing preference theories.
Richard, that is in fact my line elsewhere on most of Nussbaum’s cases (and the Fox and the Grapes–Fox preferred the grapes all along). However it’s not far-fetched to imagine genuine cases of adaptive preference in which people are so ground down that they’re indifferent between what we should regard as better and worse alternatives. And some do need a “you go girl” though not extensive therapy.
I’m not sure how my proposal violates the spirit of subjectivism, since I assume that the structure of individuals’ preferences across worlds may be peculiar to them, and may be highly idiosyncratic, even if, given that we’re members of the same species, there will be lots of widespread similarities.
This is a way of accounting for the intuition that the items on Nussbaum’s List are important for human flourishing even for people who don’t actually want them. Consider e.g. “a fruitful relationship with animals and the world of nature.” Nussbaum admits that at the UN focus group where the list was contrived, South Asians objected to that item because culturally they didn’t care for this sort of relationship.
I can explain why it should go on: most people are animal-lovers at nearby possible worlds, including members of cultures where animals are regarded as vermin. But this isn’t selling out to an objectivist account because there are surely members of every culture whose aversion to animals is modally deep–they don’t like animals at any nearby world. On the account I’m proposing these people don’t benefit from the capability of having a fruitful relationship with animals.
That’s subjectivist enough for me: it doesn’t assume some deep account of human nature but just that there are lots of similarities as well as individual differences in people’s transworld preference structures so that, for practical purposes of policy, we can come up with a rough and ready account of which capabilities are important. But these capabilities are important because they more-or-less-kind-of reflect the actual and nearby possible preferences of typical people.
BTW thank you, and everyone else who’s contributed. This conversation is very, very useful to me in pushing me and I’m most grateful!