Tenenbaum and Raffman (2012) claim that “most of our projects and ends are vague.” (p.99) But I’m not convinced that any plausibly are. On my own blog, I recently discussed the self-torturer case, and how our interest in avoiding pain is not vague but merely graded. I think similar things can be said of other putative “vague” projects.
T&R’s central example of a vague project is writing a book:
Suppose you are writing a book. The success of your project is vague along many dimensions. What counts as a sufficiently good book is vague, what counts as an acceptable length of time to complete it is vague, and so on. (p.99)
But it strikes me as strange for one’s goal to be to reach some vague level of sufficiency. When I imagine writing a book, my preferences here are graded: each incremental improvement in quality is pro tanto desirable; each reduction in time spent is also pro tanto desirable. These two goals can presumably be traded off against each other: perhaps precisely, or (if they are not perfectly commensurable goods) perhaps not. But this sort of rough incomparability between two goods is, I take it, not the same as either good itself being vague.
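To put the point schematically (this is just an illustrative gloss on my view, not anything T&R say): we might represent the writer’s graded concerns with a value function along the lines of

u(q, t) = f(q) − g(t)

where q is the book’s quality, t is the time spent on it, and f and g are increasing functions. Every increment in q, and every decrement in t, then makes a precise pro tanto difference in value; whatever imprecision remains concerns how f and g trade off against each other, not either good itself being vague.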
I could imagine a cynical person who really doesn’t care to improve the quality of their book above a sufficient level. Perhaps they just want it to be of sufficient quality to earn a promotion, or some other positive social appraisal. But these desired consequences are even more clearly not vague.
Similar things can be said of the standard example of baldness. I trust that nobody (sane) actually has a fundamental desire not to fall under the extension of the English-language predicate ‘bald’. What they more plausibly have is a graded desire that roughly maps onto what is socially recognized as baldness. For example, perhaps they desire not to have their appearance negatively appraised on the basis of hair loss. (Or perhaps even just not to have other people think of them as bald.) But of course there’s nothing vague about that: people either appraise you negatively or they do not. Such appraisals are graded, however: the first noticeable signs of a receding hairline may be expected to elicit a less severe appraisal than a large bald patch. (Or so we might imagine the vain man to assume.)
Elson offers a related example:

You may wish for a restful night’s sleep, but to stay up as late as possible as is consistent with that. Since restful is vague, one minute of sleep apparently couldn’t make the difference between a restful and a nonrestful night, and you ought to stay up for another minute. But foreseeably, if you keep thinking that way, you will stay up all night. (p.474)
As with the book case, this strikes me as simply involving a trade-off between two graded (non-vague) ends. To speak of a “wish for a restful night’s sleep” is surely just a rough shorthand for what is really a graded desire, for a night’s sleep that is more restful rather than less so. Perhaps there are some threshold effects in there, insofar as some lost minutes may have more noticeable effects than others on your state of mind the next day (and you can’t know in advance exactly which minutes these are). But it’s clearly just false to assume that a minute’s less sleep will always make no difference to what it is that you really want here (regardless of whether the term ‘restful’ still applies to your night’s sleep — there’s clearly more to your interest in a restful night’s sleep than just the binary question of whether it was restful or not).
Elson later cites Tuck’s example of “a shepherd who wishes to build a cairn of stones […] to guide him in the hills” (478). And again, while it may be vague whether a certain collection of stones is enough to qualify as a ‘cairn’ or a ‘heap’, it’s hard to make sense of anyone actually caring about this as such. Insofar as the cairn serves some purpose — to “guide him in the hills” — a certain collection of stones will either be sufficient to the task or not (e.g. if so small that it is subsequently overlooked).
This all suggests a general strategy for dissolving apparent vagueness in our projects: Whenever one is inclined to use a vague predicate in describing a project, check whether this is truly the canonical or fundamental description of the desire in question, or just a convenient way of talking about a desire that is really graded in nature, and directed at the real-world phenomenon (taking the form of a spectrum) that underlies the vague predicate. I find it difficult to imagine a case where the latter interpretation is not clearly superior.
One possible exception (which I owe to Helen) involves whimsical desires. A child, about to embark on their first airplane flight, might really want to be inside a cloud. They care fundamentally about the higher-level predicate ‘cloud’ rather than the underlying phenomena. It just seems really cool and awesome to them to be fully inside one of the big fluffy white things in the sky that they’ve so long admired from the ground. But it might turn out to be vague whether they (or their plane) were ever fully inside a cloud.
Such whimsies aside, though, do you think there are any plausible examples of genuinely vague projects? (And if not, why have so many philosophers thought that there were?)
Hi Richard,
I don’t know this literature, but if our desires are at all sensitive to our moral (/normative) views, and if such views sometimes employ vague predicates, then it seems to follow that our desires will sometimes have vague objects. For example, perhaps I want to avoid intentionally causing harm, even though there is some vagueness about what it takes to cause a harm.
Everything you’re saying here sounds exactly right to me.
However, I suspect that our projects are vague in a different sense. If I’m a contractor trying to build a specific structure from a very specific blueprint, then my project is very well-defined. But if I’m an architect trying to design a good building, it may be that the metrics by which my project counts as succeeding or not are based on precise concepts of usability and the like, but which building I will end up deciding on is quite unknown right now, and which criteria are involved, and what their relative importance is, might also be quite unknown. I don’t know if this makes the project vague in the relevant sense, but it does seem essential to a lot of our projects that part of carrying them out is figuring out what precise realization we will try to put forward to fulfill our goals.
Hi Alex, that sounds interesting. Could you flesh out an example of a borderline case of intentionally causing harm, to help me get a better grip on the idea? (Are deontologists generally on board with the idea that some acts are of indeterminate permissibility?)
It may be that deontic goals provide a second class of exception, then. (I’m reminded of this old discussion with Doug Portmore about the case of fairness as rough equality.) Though they at least won’t pose any sort of problem for utilitarians, insofar as utilitarians have independent grounds to deny that they are reasonable goals to have.
*
Kenny – thanks. That sort of open-endedness does seem an important feature of many of our projects, I agree, though as you say it’s quite different from the vagueness that T&R discuss.
I agree with a lot of this, but I think it should be taken further. Suppose that you stopped believing in the existence of hair (on whatever recherché grounds). Would you change your mind about the extent to which the agent’s desires had been satisfied, in any sense we should care about? And if you think the kind of existence question I’m gesturing at is non-substantive, do you think it’s *thereby* not a substantive question to what extent the agent’s desires had been satisfied? And suppose the agent herself stopped believing in hair. Would you suspect that this would change her behaviour?
I’d answer “no” to all of these questions. Desire-content, and so desire-satisfaction, seem to me to be more coarse-grained than that. I can think of basically two general ways to accommodate this intuition: 1) disjunctive desire-sets, or desires with disjunctive content; 2) desires with non-conceptual content.
I can say more if you’re curious. To put it sloganistically, what you want to say about vague stuff, I want to say about everything. Do you share the views about the questions I asked though?
Hi Richard,
I’m interested but struggling to understand this, and feel there are some things being conflated here. What exactly are the objects whose putative vagueness is in question?
For a start, it seems natural to think that, whatever a project is, it is not itself a bit of language, a concept, or an idea. So doesn’t that mean that, on any view on which vagueness is something that only bits of language, concepts, or ideas have, it will just for that reason not be the case that projects are vague? Or is this issue meant to be understood in such a way that it is orthogonal to the debate about whether there is such a thing as worldly vagueness?
A further thing: assuming that the objects whose vagueness is in question aren’t all linguistic or conceptual things, what are they exactly? Things like desires, or interests? I.e. things people have, which are about, or directed to, circumstances? Or circumstances, or actions, or products of action themselves?
When you talk about ‘desired consequences’ or a ‘good itself’ being vague or not, for instance, it seems like the latter understanding is called for. But when you talk about ‘my interest’ or people’s ‘preferences’ or ‘the desire in question’, it seems like the former understanding is called for. But then aren’t there two separate questions here, about two different categories of object?
Hi Richard – I suppose I was thinking that there might be a vague line somewhere between causing a harm and preventing a benefit: I take it that those who defend the relevance of this sort of distinction need not claim that the distinction is perfectly clear-cut in all cases. Similar remarks may apply to various other principles, and I was actually thinking this would include the utilitarian principle, since there may be some vagueness in interpersonal comparisons of utility and in turn vagueness about which exact acts maximise utility. (If there is vagueness in inter*temporal* comparisons of utility, then this will also infect prudential goals.)
Andrew – yes, that seems right to me (assuming the agent still believes in something that plays the qualitative role of hair). A third way to accommodate this would be to posit that (most of) our desires fundamentally involve qualitative concepts, which is something I had in the back of my mind when thinking about the cases in the OP. Insofar as they are fleshed out by reference to something like a kind of phenomenal image of what we care about, it won’t matter what higher-level concepts or descriptions apply. (Or is this just what you were thinking of with non-conceptual content?)
*
Hi Tristan – you can think of the question as whether we have (rational) desires with vague contents.
Maybe an example of Robbie Williams’s fits Alex Gregory’s schema: you want to honor your obligations, and you agreed last night to vote for Mary. But you were a little drunk. Plausibly it’s vague how drunk a person has to be before their would-be promises fail to generate obligations.
(This is RW’s variant of a Jackson-Smith example.)
Miriam Schoenfield has a similar example of a vague moral concept, but it doesn’t lend itself quite as well to participation in a vague project.
Question of clarification: In the baldness case, you write “nobody (sane) actually has a fundamental desire not to fall under the extension of the English-language predicate ‘bald’”, and you seem to be making similar claims in the other cases too. I was wondering what kind of claim we are making here:
1. It is impossible to have such a desire/project. Perhaps this has something to do with the nature of the desire/project states as a psychological kind or the concept of desiring. In this case, I would be interested in hearing more about what explains why it is impossible to have these desires.
2. Hardly anyone, as a matter of fact, has this type of desire/preference. This means that the claim is an empirical claim, and here I’d like to see some kind of empirical evidence.
3. It is irrational, or in some sense silly, to have desires/projects like this. A nice thing about this reading is that the principle of charity would then explain why we are unwilling to ascribe such desires/projects to others. Here I would like to see more of an explanation of what makes vague desires/projects like this irrational. Is there a threat of Dutch books, for example? Can there be no reasons for such plans?
One other comment. Surely our desires are not fully decided Gibbardian hyperplans either. Surely there are vast areas of undecidedness in our plans with respect to which outcomes we prefer. In this case, I wonder whether there are cases where it is neither determinately true nor determinately false that the outcomes of our actions satisfy the plans we had. Perhaps we also use vague terms in our planning to stand for such undecidedness.
Hi Jamie – thanks, that’s an interesting case!
*
Jussi – I meant the claim in sense #3: it’d just be a bizarre thing to care about. (Not sure what further explanation I can offer if one doesn’t share this intuition upon seeing the alternative interpretations of possible desires in this vicinity.)
Re. ‘Hi Tristan – you can think of the question as whether we have (rational) desires with vague contents.’
OK, so in the restful night’s sleep case, when it’s correct to say that you desire a restful night’s sleep, what is really going on is that you bear the attitude of desire to some other proposition which is not vague? Or are you saying that this sort of desire – you say ‘graded desire’ – cannot be thought of as having a propositional content?
I think it’s probably clearest to think of the agent’s desires here as corresponding to a preference ordering over possible worlds. They prefer the more-restful worlds over the less-restful ones, and there isn’t anything vague about that. You can probably translate this into proposition-talk, but I don’t see that much of interest hangs on that.
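To make that slightly more concrete (a gloss I find natural, though nothing in my argument hangs on the details): let r(w) measure the underlying graded phenomenon that ‘restful’ tracks in world w (say, the quality of the night’s sleep and its effects on your functioning the next day). Then the relevant part of the agent’s preference ordering is just:

w1 ≽ w2 if and only if r(w1) ≥ r(w2)

That ordering is perfectly precise, even though the English predicate ‘restful’ remains as vague as ever.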
OK thanks, that’s helpful, and sorry if I was being thick.
P.S. The question of whether or how you could translate that into proposition-talk seems very interesting in its own right to me, but I accept that it might not matter for what you’re up to.
Sorry to keep piling on comments, but I’ve been thinking a bit more about this, and it occurs to me that following up on whether the preference-ordering over possible worlds can be “translated” into a proposition, and then seeing what sort of proposition that is, if there is one, is quite important for understanding the nature of the (apparent?) disagreement between you and those who talk about vague projects and the like.
If it turns out, for instance, that insofar as you can model a desire or project as an attitude to a proposition, that proposition will be vague, then maybe that’s all the pro-vague-projects people mean. And then it’s possible for you to say ‘OK, that might be right, but it’s better in such-and-such ways to model this using a preference-ordering over possible worlds, rather than an attitude to a proposition’.
This could, it seems, help to resolve the disagreement and clarify things substantially.
Or you may wish to take a harder line, but I don’t see any reason to.
Granted: I desire to build a cairn, but, really, what I desire is that the pile of stones would guide me if I were in the hills. Do you think that this counterfactual’s truth is not a vague matter?
More generally: whether a world is compatible with a preference’s *satisfaction* (in other words, whether it is among the worlds that are sufficiently good according to the relevant preference ordering) does often seem to be a vague matter. This is for the same reasons that counterfactuals often seem to be vague: the domain of interest (the “nearest” antecedent-worlds, or the “best” worlds) has vague edges. This is true even granting (as seems correct) that the underlying preference is graded.
A point that arose between Tristan and Richard: graded desire, like graded belief, can’t be represented as propositional (in the standard sense of “propositional”, anyway).
Hi Nate – Even if we accept counterfactual indeterminism, I think the desire’s satisfaction isn’t really vague in the relevant sense (of individual increments making “no difference”), but rather probabilistic. If each additional stone raises the proportion of nearby in-the-hill worlds in which you are successfully guided, then traditional expected value approaches suffice (contra T&R) to give us reasons for each incremental act here.
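In rough outline (the symbols here are purely illustrative): let p(n) be the proportion of nearby in-the-hills worlds in which a pile of n stones successfully guides you, G the value of being guided, and c the cost of placing a stone. Then the nth stone is worth adding just in case

[p(n) − p(n−1)] · G > c

Each increment makes a precise (probabilistic) difference to expected value, so there is a determinate point at which to stop, and no sorites-style reasoning gets a grip.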
Richard (and commenters), many thanks for starting this thoughtful discussion on some of the topics of our paper. I haven’t had time to talk to Diana, so she is not to blame for anything here. Some of the commenters (especially Tristan and Nate) have already said some of the things I wanted to say (in fact, this might be a convoluted version of what Nate explains in a few sentences), but I would like to go into a bit more detail, as this is a great opportunity to clarify some points that might have been unclear in the paper.
I think there might be a misunderstanding here about the structure of our view. In explaining why he doubts there are vague projects, Richard claims: “But it strikes me as strange for one’s goal to be to reach some vague level of sufficiency. When I imagine writing a book, my preferences here are graded: each incremental improvement in quality is pro tanto desirable; each reduction in time spent is also pro tanto desirable”.
Leaving aside a possible optimism about there being a precise way of ordering books from better to worse, the basic thought here is common ground (as it is common ground that the self-torturer prefers less pain over more pain and more money over less money). We did not deny that, all else being equal, we have preferences for better books, less pain, and more money (though a theory of instrumental rationality should allow for agents who are interested only in writing a decent book, or a good enough book, or even just a book, and don’t care about anything beyond that, just as a builder might be interested only in building a house that is good enough, without caring at all whether it is excellent).

But this is not the issue; the issue is what our “all-out” attitudes are (or at least should be), since we want to determine what is rational for the agent to choose or do in a particular situation. Richard speaks of the issue of trade-offs between things that we find pro tanto desirable as if it were a side issue, but this is wrong. The pro tanto desires of the self-torturer do not generate a problem for orthodox theory. But given the self-torturer’s (or the book writer’s) attitudes, there is no way of determining how she should trade off between money and relief from pain by means of a preference ordering; and yet the self-torturer (or the book writer) is perfectly rational, or so we argue. Vague projects, just like preferences (as they figure in decision theory), are supposed to be all-out, not pro tanto, attitudes.

If I have the project (or end) of writing a book, and, due to procrastination, akrasia, etc. (rather than some unforeseen circumstance), I don’t write a book, without ever abandoning the project, then I have (thereby) acted irrationally. But if my book is not as good as it possibly could have been, I have not (thereby) acted irrationally (even if I recognize that this outcome would have been in some respects more desirable); I never undertook the project (or chose the end, or formed the intention) of writing a perfect book. On our view, a theory of instrumental rationality needs not just my preference for writing a (decent) book over not writing a (decent) book; it also needs to take into account that I have a (vague) end of writing a book (in other, solo, work I just talk about the fact that I am (intentionally) writing a book). As I said, if through procrastination, weakness of will, etc., I end up making it impossible for myself to write a decent book (while not giving up my end), I am exhibiting a form of irrationality, whereas it is not (necessarily) true that I exhibited any form of irrationality if I wrote a decent book but could have written a slightly better one (or could have written the same book while spending twenty more seconds playing Pokémon Go). These verdicts about the rationality of the agent cannot be captured by examining only the preferences of the agent. Of course, one could try to argue that you can do it, or that ST’s preferences are not coherent, etc., but this is a different point (in the paper, we argued that these attempts fail).
In other words, we cannot explain what is rationally permissible or impermissible for agents who are writing books, caught in the self-torturer predicament, or building houses by appealing just to their preferences; this is in part because their preferences are not transitive in such cases, at least when we take them at face value. It is worth noting that we’re not the only ones who think that a single set of preferences cannot adequately represent the predicament of the self-torturer, or of agents in situations that exhibit a similar structure. Although the added structure plays a different role in each theory, Gauthier distinguishes between the agent’s vanishing point and proximate preferences, Bratman argues that we need to appeal to the agent’s intentions, and Andreou distinguishes between given preferences and chosen preferences; none of these authors denies that ST (or the book writer) has these pro tanto desires.
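(To illustrate the intransitivity schematically, using the familiar numbers from Quinn’s original case rather than anything specific to our paper: where s(n) is the outcome of stopping at dial setting n with the accompanying payments, the face-value pairwise preferences are

s(n+1) ≻ s(n) for every n, yet s(0) ≻ s(1000)

since each increment of current is imperceptible and brings a further $10,000, while no amount of money compensates for a life of agony. No transitive ordering can respect all of these preferences at once.)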
A couple of words on vague projects or ends: in the paper, we do not define a vague project as a project that is described by means of a vague predicate, but in terms of a certain structure. The structure, roughly, is that there will be actions or outcomes that clearly count as achieving the end (or executing the project), some that clearly do not count as such, and that choosing on the basis of our otherwise unproblematic pairwise preferences will invariably prevent us from achieving the end (due to the cumulative effect of these choices). I think that even Kenny’s case, in which we do have a precise blueprint, would be a “vague project” in our sense, at least if we add a few further assumptions about the agent’s preferences. For even in such cases there would be clear cases of following the blueprint and clear cases of not following the blueprint, but also many in-between cases for which it is not determined in advance whether they count as following the blueprint. Such cases could also generate the structure we describe under the rubric “vague projects”. So I’ll stand by the claim, but it’s important to note that this is not a claim about the agent’s pro tanto desires or preferences, but about the agent’s ends: since nearly all such ends or projects leave much about what counts as realizing them indeterminate, they are nearly all vague ends in our sense.

And Richard’s question is really whether we have vague pro tanto (rational? fitting?) desires, rather than whether there are vague ends in our sense. I do think that many of our pro tanto desires are vague (though, again, this is not the issue in our paper); Jamie and Alex have given some examples, but there are others as well. Despite Richard’s interesting take on the psychology of the hairline-anxious, I think that some people desire simply not to be bald, just as some people are averse to being old (and I still don’t see what is irrational about such desires); I might have the desire to swim in the lake, or to dance (well, dancing, not really…), without caring how well I dance or how fast I swim. And I should say that anyone who has seen me swim or dance (fortunately, a very small number of people) must be keenly aware that “swimming” and “dancing” are vague predicates. But this is really a different issue.
Hi Sergio, thanks for your response. I should clarify that I am skeptical as to whether any of our projects are truly unachievable by means of choosing on the basis of rational pairwise preferences. My linked discussion of the self-torturer case explains why I think the pairwise preferences you ascribe to ST are invariably irrational, for example.
I actually doubt it’s possible to have the sort of structure you describe without relying on vague sufficiency-type desires. I interpreted you as holding that ST merely has a coarse-grained desire to lead “a relatively pain-free life”, for example, because once you instead consider her situation in terms of competing graded pro tanto desires for more money and less pain, it’s provable that later individual increments are net negative in value for ST, whereas you claimed that “in any isolated choice, she must (or at least may) choose to turn the dial”.
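The rough shape of that argument, as I’d reconstruct it here (the notation is mine; see the linked post for details): writing m for background wealth, k for the payment per increment, u for the utility of money, and π(n) for the disutility of pain at dial setting n, the net value of the nth increment is

ΔU(n) = [u(m + nk) − u(m + (n−1)k)] − [π(n) − π(n−1)]

If the marginal utility of money diminishes toward zero while the marginal disutility of pain does not diminish, then ΔU(n) must eventually go negative, so there is a determinate (if perhaps unknown) point past which turning the dial is irrational.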
Your builder example sounds like the cynical book-writer to me. If they only care to do a “good enough” job, to make sense of this I have to ask myself, “good enough for what purpose?” Presumably it’s to achieve some kind of social consequence: good enough to avoid getting sued, or to secure a positive recommendation from the customer, or some such. But there’s nothing vague about any of that.
Well, the force of “provable” here depends on which assumptions you need for the “proof”… I just took a very brief look, and it seems to me that you assume that there is a continuous, linear function from increase in pain (it should really be increase in electric current) to decrease in utility. This begs the question (even leaving aside the assumptions you are making about the value of pleasure and how it adds up). Arntzenius and McCarthy propose an orthodox solution that seems to rely on weaker assumptions (again, I just took a brief look at your proposal, so I might be missing something). We do discuss their solution in the paper and try to argue that it doesn’t work.
I still don’t understand these restrictions on human psychology or rational desire; they seem to me unmotivated. Just to give one possible interpretation of the builder (though, again, I don’t really see the need to elaborate; the builder could just have a basic desire to build a good house, without having further preferences about how good it is): a builder could think that he was paid to build a good house (not a perfect house, or an extremely excellent one), and care for nothing beyond doing what he was paid to do.
I think the restrictions follow quite naturally from thinking about content (e.g. of desires) in terms of possible worlds rather than in terms of words. When thinking about these cases, I ask myself, “What state of the world is this agent aiming to realize?” I imagine presenting the agent with various possible worlds (seen through Kripke’s trans-world telescope, so to speak, or perhaps given in some Chalmersian canonical language together with the cognitive capacities to immediately grasp all that follows from the fundamental facts), and asking them, “To what extent is *this* what you’re after?”
This sort of approach invites us to go beyond “face value” when talking about desires described using natural language. Judging by the other comments up-thread, I’m not being completely idiosyncratic here, but it does appear to be more controversial than I would have expected, so that’s interesting.
re: Arntzenius and McCarthy — Yes, I’m very sympathetic to their approach! I don’t really get the force of your response (or the suggestion that it’s “question-begging” to assume that the disvalue of pain is linear and continuous). You insist that “Surely a person whose stable preferences dictate that she’ll smoke only a few cigarettes and then quit so as not to endanger her life unduly [under the “stochastic hypothesis” that each cigarette has an equal small chance of triggering lung cancer, and the pleasure from smoking each is independent of how many others are smoked] is rational in light of her ends.” This just sounds like an agent who doesn’t understand expected value and irrationally ignores low-probability risks no matter how dire.
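To spell out the arithmetic as I understand the stochastic hypothesis: if each cigarette yields an independent pleasure b and an independent small probability p of triggering lung cancer (disvalue C), then, for small p, the marginal expected value of each cigarette is approximately

b − p·C

which is the same whether it is the first cigarette or the thousandth. So expected-value reasoning recommends an all-or-nothing policy: smoke freely if b > p·C, and not at all otherwise. A stable plan of smoking ‘only a few’ and then quitting can’t be rationalized in these terms, which is why the agent you describe looks to me to be ignoring the risk rather than weighing it.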
I don’t grasp the step from “the underlying object of the desire is a spectrum” to “the desire is not vague.” After all, the underlying object of what is typically taken to be a belief with vague content is often a spectrum. For example, suppose I desire not to be bald, but in fact I am borderline-bald. The belief “I am not bald” is borderline-true or vaguely true or sorta-kinda true. And likewise my desire that I not be bald is borderline-satisfied or vaguely satisfied or sorta-kinda satisfied.
Isn’t that what we’re getting at when we say we have a vague project?
Hi Heath, apologies for the late reply. Note that in order for the relevant practical puzzles to get off the ground, the problematic kind of “vague project” needs to be understood in terms of all-or-nothing satisfaction, or as otherwise conforming to the distinctive structure that Sergio stipulates as his meaning of “vague project”. But so long as you agree with me that each increment makes a difference, and that there’s no problem here for traditional decision theory, then I’m fine with calling the relevant projects “vague” in some (non-problematic) sense. I’m really only concerned with vague-in-the-problematic-sense projects. 🙂