Many philosophers want to use desires to account for
rationality, reasons, well-being, and so on. Few of them use actual desires in
this project. This is because actual desires are often ill-informed. In some
cases, had we known better we would not have desired to do what we did. As a
result, many philosophers, at least since Sidgwick, have used hypothetical,
informed desires in their accounts. I wonder how this part of the desire-based
views should best be formulated.
Here are a few ways in which different people have put the
full-information condition for the relevant desires.
Brandt (1979) uses ‘rational’ “to refer to actions, desires, or moral systems which survive maximal
criticism and correction by facts and logic”.
Williams, as presented by Smith (1994, 156): “[A f]ully rational agent must satisfy the following
three conditions:
(i) the agent must have no false beliefs,
(ii) the agent must have *all relevant true beliefs*,
(iii) the agent must deliberate correctly.”
Smith uses this view about rationality to analyse claims about reasons, value, and desirability.
Railton (1986) uses a similar idealization in his account of the subjective and objective interests of an individual A. We get an idealized
agent, A+, by giving A “unqualified cognitive and imaginative powers, and *full
factual and nomological information* about his physical and psychological
constitution, capacities, circumstances, history, and so on”. By asking what A+
would want A to want in his circumstances, we can find out what is in A’s
objective interest. And what is in A’s objective interest is, according to
Railton, what is non-morally good for him.
This kind of full-information condition seems to be defended by Brandt, Smith, and Railton. It’s been well discussed
earlier by, for instance, Swanton, Hill, Gibbard, and by both fellow
Pea-Soupers David Sobel (‘Full Information Accounts of Well Being’) and Valerie
Tiberius (‘Full Information and Ideal Deliberation’). Anyway, I think I have a
case that raises a slightly different worry, one more concerned with the
information about the situation in which the agent finds herself.
The case comes from a book which begins with George trying on a new suit. He sees a brown spot on his
skin. ‘Cancer’, he thinks. As a result, George immediately wants to kill
himself.
Is George’s desire rational in the light of the full-information condition? Later on, a doctor
tells George that he does not have cancer but only a benign skin condition. George
does not believe this. He thinks that this is just the sort of lie a doctor would tell
you when you have cancer. However, we can imagine that George+ has the true
belief that George doesn’t have cancer. In this case, George+
would not want George to kill himself. This would make George’s current, actual
desire irrational. Even if George had cancer, fully informed George+ would
probably not want George to want to kill himself. The prospects of successful treatment are usually good.
Consider, then, the further true beliefs that George+ would have and which are relevant for his desire to kill himself. His only daughter
is getting married to a man he hates. His son is living with, and having sex with, a
man. George is disgusted by homosexuality. His wife is having an affair with
George’s good friend. And, George’s only meaningful personal future project,
becoming an artist after his imminent retirement, is going to turn out to be
unsatisfying and catastrophic. His life is falling apart but he doesn’t know
this yet.
Now, suppose that George+ has all of this relevant information. It is conceivable that George+ wants George to kill
himself after all in order to save him from all the humiliation and misery that
will follow. If this were the case, then George’s desire to kill himself would
be rational after all.
If we do not want to say this, how can we prevent the further information from making a
difference? It seems difficult to say that the further information is
irrelevant for George’s desire to kill himself. After all, it certainly would
make a difference to what George+ would want.
Valerie Tiberius has suggested that the information we should pay attention to in rationality assessments should
reflect the personal ideals of the agent. I wonder if that would deal with George’s
case in a way that would keep George’s suicidal desire irrational. I’m not sure
how that would work. Even if George does not vividly reflect on the further
information, it still seems to make a difference.
Finally, I hesitate to say this, but it appears that Smith attributes the wrong view to Williams.
Williams actually formulates his view in the following way: ‘For it to be the
case that he actually has such a[n] [internal] reason, however, it seems that
the relevance of the unknown fact to his actions has to be fairly close and
immediate’ (Williams 1980, 103). This suggests that he thought that there can
be relevant information that is not close and immediate enough for assessing
whether the agent has reasons or not. Maybe the information in George’s story
falls into that category. Maybe the relevant information has to be about whatever
facts prompt the assessed, actual desire in the first place – object-focused
information rather than state-focused.
Jussi,
Don’t desire theorists want to restrict the desires that matter to those that are intrinsic (a.k.a. telic)? If so, I’m not sure why you are focusing on George’s desire to kill himself. George’s desire to kill himself doesn’t seem to be an intrinsic desire — surely, he doesn’t desire to kill himself for its own sake. If he did, then the blot on his skin wouldn’t have prompted him to want to kill himself. Presumably, then, he wants to kill himself only as a means to sparing himself the adverse effects of cancer.
I’m not sure they do, but you are right – that might be one way of dealing with the problem by making it disappear. I have always found that distinction problematic. Anyway, I think we would want an account of the rationality of non-telic, instrumental desires too. This would probably give us desire-based accounts of instrumental reasons and value as well.
If a rational non-telic desire is one that gets the agent to do actions that produce the state that is intrinsically desired, then I think the problem reappears. In this sense, George’s non-telic desire may count as rational. It would get George to the state that is his intrinsic end – avoiding the adverse effects of cancer. But there is something odd about saying this, because George’s telic desire for that end is based on false information about his situation. And I don’t want to say that a non-telic desire can inherit rationality from an irrational telic desire. If that’s right, then George’s non-telic desire is irrational, as we would like to say.
But, George+ has another telic desire for George that is informed in the situation. This is the intrinsic desire to avoid the troubles of the life he is actually facing. A non-telic desire to commit suicide would be an instrumentally rational desire for this end too. So the worry is that this makes George’s non-telic desire rational after all, only for a different end.
If that makes sense, then making the telic/non-telic distinction seems to make the problem reappear in a slightly more complicated form.
Jussi – A couple of things. First, I, for one, do defend an ‘actual desires’ account of reasons. And I think Williams did, too.
Second, it’s important to keep track of some pretty big differences between Brandt and Smith. Brandt is a ‘Humean’ about rationality, in the sense that he thinks that the basic things it is rational for a person to do depend on features of her actual psychology. But Smith is an anti-‘Humean’ about reasons – even though he appeals to what X would desire, were X fully rational, in explaining what X has reason to do, he thinks that your basic reasons depend in no way on what you are actually like, psychologically – they are the same for everyone, no matter what they are like.
As a result, I think that dividing views into desire-based accounts, and then splitting desire-based accounts into actual and counterfactual varieties is a totally unhelpful piece of classification. All ‘Humean’ views hold that reasons depend on actual features of an agent’s psychology, and views like Brandt’s merely hold that we need to state counterfactuals in order to capture exactly what feature that is. So in the space of things, Brandt’s view is a lot closer to mine than to Smith’s, superficialities to the side.
Third thing: your problem looks like it is the result of taking views that are supposed to be accounts of what someone has all-things-considered reason to do, and interpreting them as accounts of what someone has at least some reason to do. Brandt’s view, for example, is not a view about reasons, but about rationality. And Smith’s view, though he says it is a view about what there is a reason for you to do, is obviously really a view about what there is most reason for you to do, because it entails that there can never be reasons for you to do conflicting things.
Mark,
thanks, that’s helpful. When I mentioned actual desires, what I had in mind were accounts that rely only on actual desires that are not improved in any way. I’m not sure whether many people except Hobbes, in his account of ‘good’, do that. So, I thought you easily get to hypothetical-desire accounts when you allow change even in some of the actual desires in the light of other actual desires (for the sake of coherence and the like) and of the true beliefs the agent lacks. And I thought basically all desire-based accounts do this – at least all that add information which the agent does not have.
You are right that Smith claims to be an anti-Humean about normative reasons. But he too begins the rationalisation process from the actual psychological make-up of the agent and their actual desires. The hope is that, with full information, in a process that uses imagination and aims at maximal coherence and unity, all fully rational agents converge in some of their advice. If this is right, then the outcome – what reasons you have – does not depend on the actual desires. But in some of his papers he seems to hesitate about whether this convergence really happens (‘Internalism’s Wheel’, if I remember right). So whether there are basic reasons in the anti-Humean way seems to depend at least in part on the actual desires of all of us.
I didn’t really have a particular account of all-things-considered or pro tanto reasons in mind. I’m not sure what hangs on that. I was mainly interested in the rationality status of one particular desire. That seemed to depend on how much true information we add, and adding the full information didn’t seem to give the intuitive status to the desire. Once we know whether the desire is rational or irrational, it’s a further question what we want to use such desires to account for.
I know Dancy criticises Smith for trying to give an account of overall reasons directly. I’ve always wondered whether this is the right way to read Smith. His account of desires is dispositional. So it does seem conceivable that A+ advises A to have standing desires, dispositions, that in particular situations pull in different directions. Such dispositions could be used to give content to pro tanto reasons, if what the agent has overall reason to do in the situation is determined by which of these rational dispositions has the most motivating force. I’m not sure whether this is a line Smith would be inclined towards, but it does seem available.
Jussi – I understood what you meant by desires that are not improved on in any way. Like I said, I’m someone besides Hobbes who thinks that actual desires are necessary and sufficient for the existence of reasons, and as I noted, I think Williams is, too. Counterfactuals come in for Williams when it comes to specifying what makes for a ‘sound deliberative route’ from someone’s actual ‘subjective motivational set’ to actual motivation to do the action – so they affect which action you have reason to do given a given member of your actual psychological state, but you have to already – actually – have that member of your S.
On Smith – when he engages in doubt about the ‘convergence’ of the desires of fully rational agents – which is the hypothesis that is necessary in order for him to get the conclusion that everyone has, at bottom, the same reasons – he doesn’t start thinking that at bottom, people have different reasons. Rather, he starts doubting whether there are any reasons at all. Frankly, this is kind of bizarre, because it still follows from his official account that there would be – as you note. So it’s puzzling whether he really has some other analysis of reasons in mind according to which there would be no reason at all if the convergence hypothesis is false, or how, exactly, to interpret him. There’s an MA student at Tufts – Drew Kukorowski – who wrote a very nice undergraduate honors thesis about precisely this problem for Smith, and how to get him out of it. Or maybe he’ll enlighten us.
And on Dancy on Smith – I agree that there would be ways of tweaking Smith’s account or building on it in order to get an account of pro tanto reasons, and you’re suggesting one of them. But that’s not what he actually said. What he actually said entails that you can only have conflicting reasons if your fully rational self would give conflicting advice, so I infer that it can only plausibly be construed as an account of what you have all-things-considered reason to do.
There are other ways in which his account is incomplete – for example, it appears to analyze existential claims about reasons without including an existential quantifier, or telling us what the reason is for someone to do something, when there is a reason for her to do it. But these are forgivable faults, given that he was trying to keep things simple in order to focus on other questions.
Jussi,
There’s an article that you may or may not have seen (Mark Murphy, “The Simple Desire-Fulfillment Theory”, Noûs, vol. 33, no. 2, pp. 247–272, June 1999). It’s been a while since I read it, but I recall thinking that Murphy makes a pretty interesting case for an actual desire theory.
On the whole suicide question, here’s what I imagine a full info theorist might say. George+ has full information, which is going to include full information about the possible lives that George could, or (at least) very well might, lead. Will G+ desire that George kill himself? My guess is that there are going to be a great many rich and fulfilling lives that George could lead. After all, his circumstances (as you described them) really don’t sound all that bad. G+ may desire that George overcome his hatred and homophobia, ditch (or repair) the bad marriage, and develop better future projects. It would be in George’s interests to do so. So, as the example is set up, I doubt that G+ would want suicide for his non-ideal counterpart.
Now, we could conjure up an example in which some idealized agent, A+, would want suicide for A. But I wonder if this would require that there be absolutely no possible life worth living.
Mark,
good. The way you characterise actual desire views is much weaker than what I had in mind. The views I had in mind hold that your desiring some particular thing is necessary and sufficient for that thing being good, for your having a reason to do whatever gets you the thing, and so on. This is what Hobbes seemed to think, but not many others share this view.
I don’t think Williams accepts this, even if he might think that the set of actual desires is necessary and sufficient for *some reasons*. Which reasons those are is determined by a non-actual, improved set of desires. So it’s not an actual desire account of particular reasons but of reasons in general. I haven’t read your paper yet, so I don’t know whether you accept the strong or only the weak version of the actual desire account.
On Smith, I’m not sure you need conflicting advice for conflicting reasons (or that this is entailed by the view). A+ might advise A to want to keep her promises. This is non-conflicting advice. If A has promised two things and cannot keep both promises, the advice still guides her in conflicting directions. If what A is advised to do is what she has reason to do, then you seem to get conflicting reasons from non-conflicting advice.
Jussi
Interesting post.
It seems to me that we have reason to believe that George is not rational, or at least not competent. One of the major issues in medical ethics is determining whether or not a patient is competent to make decisions that affect the outcome of his or her treatment. Two of the criteria normally used in this assessment (originally developed by Roth and Meisel) are whether or not the patient 1) fully understands the situation he is in AND 2) understands the possible outcomes stemming from the alternative actions available to him. From your example, George does not meet these criteria when he denies the accurate and truthful information provided to him by his doctor. When this occurs, or so the argument goes, the doctor can then take a more paternalistic approach to providing treatment. However, if the criteria for determining competency are met, then the patient has the autonomous right to do as he chooses. I think it is presumed that if the competency criteria are met, then the patient’s desires and rationality overlap.
John,
that’s interesting. I did start to think about this last fall when I was teaching Julian Savulescu’s paper ‘Rational Desires and the Limitation of Life-Sustaining Treatment’ in a bioethics class. His view seems close to the one you describe. He too sets a full-information condition on the rational desires that are required for autonomous decisions. But the point is that, if we want to hold George irrational in this case, we shouldn’t test his desire against the background of full information but against something more limited.
You are right that in the book George is in a confused, depressed state. I’m not sure, though, that not believing doctors is enough to make one incompetent. Reading papers like Joseph Collins’s ‘Should Doctors Tell the Truth?’ makes you wonder whether they often really are telling the truth.
Jussi,
I have heard cases that make me think there is a special problem in this area for full-information views. For example, Gibbard offers the example of a more vivid realization of what goes on in people’s intestines when they eat causing someone to be grossed out about eating in public. I once offered a case in which a fully informed self finds the prospect of becoming an ordinary person analogous to the way we might think of ourselves after a serious brain injury (e.g. better off dead). In each of these cases, I think, we retain the strong intuition that the desires which the information might prompt do not point in a direction the agent has good reason to go.
I don’t yet have this sense in the case you offer. You mention that the agent’s projects are all doomed to failure and his deepest personal commitments are going terribly (although his actual self does not know this). This is the sort of information that seems directly relevant to the agent’s having a reason to kill himself. So maybe I need to hear more about why you think the full-info view (without special bells and whistles) seems to get the wrong intuitive answer here before speaking more to your case.
Jussi – Not to split hairs, but according to Williams, your reasons are determined by what you would be motivated to do under counterfactual circumstances, not by what you would desire to do under counterfactual circumstances. And yes, I defend the view that your reasons are determined by what you now, actually, desire to do.
I don’t see what you’re puzzled about when it comes to Smith – I’m just straightforwardly applying the account. According to Smith, you have a reason to do what your fully rational self would advise you to do, not what your fully rational self would advise you to desire to do. So if your fully rational self would advise you to do one thing but want to do another, then you have reason to do the one and reason to desire to do the other, not reason to do each. Like I said before, that’s not to say that there isn’t some other account in the neighborhood that would say you have a reason to do each; that just isn’t what Smith’s account says.
David,
I quite like Valerie’s way of dealing with Gibbard’s cases. Her paper is really excellent on them. I also agree that the considerations you mention are good candidates for reasons to kill oneself (if there are any).
I’m now fascinated by what Williams says. He writes that ‘the relevance of the unknown fact to his actions has to be fairly close and immediate; otherwise one merely says that A would have reason to phi if he knew the fact’. So if we look at George’s act of killing himself, which he wants to carry out in the situation, then you might think that the other facts, not related to cancer, are not immediate enough for that particular act as George conceives it. He wouldn’t then have reason to do the very act he is considering. As Williams says, he would only have a reason if he knew the facts. That seems to fit Williams’s internalist streak. But, of course, you might take a more externalist line and think that he does have a reason to kill himself.
I’m having more difficulties with the rationality judgments. I had in mind a view according to which a particular desire is rationally criticisable if it is ill-informed – a desire which the agent would not have if she were better informed. That desires are irrational in this way seems plausible. George’s desire seems irrational because it is based on insufficient, false information about his situation. Had he had more true information, he would not have had the desire.
But now the question is, when we criticise desires in this way, how much information should we add? If we added all information (as Brandt seems to do in his account), then George’s desire would be one he would still have. This would make it a rational desire after all. And this is what I find unintuitive. So maybe I find it unintuitive that, when we assess the rationality of desires, we should assess them in the light of what the agent would desire if she knew all the facts.
But it may be that some of the desires that are irrational for the agent are ones that the fully rational version of the agent wants her to have. If we formulate the account of reasons in terms of the latter advice, then the fact that rational desires are not based on full information is not relevant for these views.
Mark,
that seems right. I’m just thinking about what, in what Smith says, commits him to giving an account only of overall reason claims. You are right, I put ‘want’ in the wrong place. What reason I have is what my better version would *want* me to do in a situation. Now, the question is where it follows from his view that Jussi+ would want me to do only one of the options, rather than wanting me to do one option and wanting me to do the other even more. You might think that this follows from the maximal unity and coherence of the fully rational self’s desires. But if I have to choose between breaking one of two promises, it still might be that the ideal self wants, to some degree, that I keep them both.
Jussi,
I am tempted to think we need to sort out our complaints against not fully informed desires. Sometimes we want to say that such desires were poorly shaped even by information that the agent had or should have realized was worth having. Other times we want to say that the desire is in some sense mistaken, but not that the agent whose desire it is was dumb in having that desire. I am tempted to use the irrational label for cases where we want to say that the agent was in some sense dumb and say that the agent’s desire is “contrary to reason” or does not give the agent reasons when, although perhaps the agent was not dumb to want what she wanted, her desire was problematically uninformed by the facts.
It might be fruitful to make a few distinctions.
George does not have reason to kill himself because he has cancer. This is because George does not, in fact, have cancer. If his desire to kill himself is based on this reason, then this desire needs to be reformed in light of true information.
If the question is whether George has all-in reason to kill himself, then we might say “yes.” G+ knows that G’s relationships are going terribly, and that his most important projects will fail. These are reasons for George to want to kill himself. They are there, he “has” them. He just doesn’t know this yet.
But suppose that G+ knows that G will never come to find this out. Suppose that G will think, to the end of his days, that his relationships are great, and that his art is stellar, even though it isn’t. Suppose further that G will feel a lot of pleasure, will have a (false) sense of accomplishment, and so on. If we think these are important, and make life worth living, then G does not have all-in reason to kill himself, and G+ would recommend that G not kill himself (and remain ignorant).
[This hinges on some prior story about whether or not ignorance can, at least sometimes, be bliss. I’m not persuaded that there are no such cases. I think I could be easily persuaded into thinking that George’s life is just fine if he never comes to know of his cheating wife, his gay son, or how crappy his art is. But that’s another matter.]
Consistent with your example, however, suppose that G will come to find out that his relationships are bad according to what G now classifies as good relationships (no cheating on me, no gay coupling, and so on), and that his projects will fail miserably.
We now need to know two things: 1. whether G can revise his conceptions of what makes for a good relationship. If G+ knows that G can revise them (and will), then G does not have a reason to kill himself for this reason, only a reason to revise his conception; and 2. whether G can pursue some other project (and will) which will not be a (total) failure. If so, then G does not have this reason to kill himself either, only a reason to change his projects.
If G+ knows that G cannot change his conception of what makes for a good relationship, and really cannot pursue some other project that he will be successful at, then G does have all-in reason to kill himself for these reasons (and not for the reason that he has cancer, a reason he does not, in fact, have).
David,
I like your proposal. That would be a less-than-full-information account of the rationality of desires and a full-information account of reasons. I wonder what we should then say about the full rationality that many use in their full-information accounts of reasons. If irrationality is a charge of being dumb in some sense, then it looks like you can be fully rational (not criticisable for dumbness) without having all the relevant information that we need in accounting for reasons. Maybe the ideal agent whose desires count in the accounts of reasons is then even beyond rationality.
Jussi, Peter
I have a question. How would one define, or establish criteria for, “less-than-full information”, so as not to affect the outcome of the choices available?
John,
that’s a good question. There is a good case that I think Valerie discusses in her paper. If I remember right, the case was about Lennie, who is happily married and faithful. If Lennie got all the vivid information about having extra-marital affairs, he might no longer want to remain faithful. That information would distort his appreciation of the marriage he is in. Many would have the intuition that this information should not have such an effect on what reasons Lennie has.
Valerie has a nice way of dealing with the case, on which Lennie’s basic values (family ones) should have an effect on what information his rational version should pay attention to. In the same way, maybe in George’s case his thoughts on the bad facts of his life distort his appreciation of other things he values (like the ignorant happiness he could live in). Maybe we could argue that his more informed adviser should therefore not pay attention to those facts.
If I remember right, David Sobel discusses slightly different problems in his paper. But he discusses the idea of an amnesia account, and some mileage could be got out of that here too. The ideal, informed agent considers George’s life separately, first in ignorance of the details and then whilst knowing the facts; whilst he does one, he forgets the other information. The trick is to get a perspective from which you can compare the two, so that you retain enough information to know whether the situations are desirable, but not so much that one part of the information distorts the desirability of the other.
I’ve also been thinking about Williams’s claim that the information must be fairly close and immediate. So, as others have suggested here, maybe when we look at the desire to kill oneself because one thinks one has cancer, the close enough information concerns whether one has cancer or not. The state one’s life is in more generally doesn’t seem immediate enough for that desire. But maybe there is another desire George could have for which that information would be close enough.
Jussi,
It warms my heart to discover that at least one person has actually read my paper on this — it was the first thing I ever published. Thanks for discussing it here! I wanted to clarify that my point in that paper is not to rescue full information theories, but rather to diagnose their failure. I think the problem stems from what’s at issue in the most recent turn in the discussion here, which has to do with defining criteria for “less-than-full information”.
One thing that would help full information theories in general is to be able to rely on a notion of *relevant* information. Brandt defines relevant information in terms of motivational efficacy, but this invites all sorts of counterexamples. It seems to me that any way of understanding what is relevant that actually answers the counterexamples will inevitably employ some norms or other. So, full information theories aren’t going to succeed if they want to reduce normative properties to non-normative ones.
It strikes me that defining “less-than-full information” will have just the same problem.
I’m also curious to know whether you (Jussi) are interested in these questions because you think full information theories are on the right track or whether you just like the puzzle of it. (I always thought they were on the right track about *something* myself!)
A question for Mark (if you’re still reading this thread):
I was confused about something you said about Williams. You say that you think he has the view that actual desires are necessary and sufficient for the existence of reasons and then you say
“Counterfactuals come in for Williams when it comes to specifying what makes for a ‘sound deliberative route’ from someone’s actual ‘subjective motivational set’ to actual motivation to do the action – so they affect which action you have reason to do given a given member of your actual psychological state, but you have to already – actually – have that member of your S.”
Here’s what I’m unsure about. I thought Williams’ view was that I have a reason to X if there is a sound deliberative route from something in my subjective motivational set (S) to a desire to X. If that something in S is not a desire to X, but something else, and I currently have no desire to X (though I will have one after deliberating), how can the actual desire be necessary for my having a reason? Do you think Williams’ view is that we don’t actually have a reason to X until we get to the desire via the sound deliberative route?
Just curious. I haven’t thought about Williams for a while.
Valerie,
I do agree that full-information accounts are on the right track about something; I haven’t fully figured out yet about what. I think they are essential for accounting for internal reasons. They play an important role in the game of giving reasons in everyday life: we try to put the facts in front of others and get them to use the right deliberative route from their desires, so that they come to a certain conclusion about the reasons they have while being motivated to act on those reasons. I think I might want to say that reasons are not exhausted by these reasons, but that is another story.
About the question you pose to Mark about Williams: I think he was talking about actual desires being necessary and sufficient for any reasons whatsoever for the agent. Unless the agent has an S, an actual motivational set, there is no deliberative route to being motivated to do anything and hence no reasons. And if there is a deliberative route to being motivated to do something in the light of full information, then there is a reason (this sufficiency part is more debatable).
But, I think you are right that for a particular reason the view does not imply that you have to have a desire to do that very thing. So, in that sense an actual desire with a particular content is not required. I’m sure Mark will be able to explain this better.
I think Valerie’s question was for Mark S., but for what it’s worth, I think Williams gives two different criteria for an “internal reasons statement” which he thinks to be equivalent. From memory they are (1) a person has an internal reason to X if sound deliberation from his/her current S would lead to a motivation to X. (2) A true internal reason statement about the agent would be false if some actual member of S (from which deliberation would lead to a motive to X) were absent from his/her motivational set. These two are only equivalent if all sound deliberation from a motivational set must depend on what is in the motivational set. Since some rationalists deny that all reasons are hypothetical, I think such rationalists deny this assumption. (Remember he is intentionally vague on what good reasoning is, and also that presupposing that good reasoning is only means-ends would make his argument for the position at least a little question-begging.)
But what this means for Williams is that he thinks we don’t actually have a reason until we have the relevant member of S in S. And it doesn’t really matter whether it got there through sound deliberation so long as it is not based on false information or bad reasoning. A desire that got into S in virtue of getting hit in the head will still ground reasons. Still it is also true that he does think that sound deliberation can add elements to S, though it is hard for me to see that this way of adding elements to S grounds new internal reasons. (Though I can see that it might do so in ways that don’t depend on these new elements grounding reasons in the way other elements of S ground reasons.)
Hi, Valerie.
Williams calls his model ‘sub-Humean’ because an agent’s subjective motivational set, S, contains things other than desires, narrowly understood. So you’re right that he doesn’t require an actual desire. But on the other hand, his view doesn’t require a counterfactual desire, either – it just requires counterfactual motivation.
His view is that the elements of an agent’s actual subjective motivational set – whether desires narrowly understood or otherwise – are necessary and sufficient for reasons. His view is not that reasons depend on desires, but only on counterfactual ones – his view is that reasons may depend on anything that can go into your subjective motivational set, S – and that the way you check to see whether an agent’s S makes them have some reason, is to see whether there is a sound deliberative route from their actual S which, if followed would motivate them to act in that way.
His point is that nothing much about what is important to ‘Humeans’ hangs on whether the psychological state that the reason depends on is a desire, in some narrow sense, or some other state that has motivational features. So a more general ‘Humean’ or ‘sub-Humean’ picture needn’t build that stuff in. I myself have pointed out that we can generalize further: nothing much about what is important to ‘Humeans’ turns on whether the connections between reasons and the psychological states on which they depend are ones that lead through a “sound deliberative route” to action. All of the problematic ‘Humean’ ideas that have made ‘Humean’ views about reasons play a central role in moral philosophy can be resurrected in a much more general picture than Williams’s. So arguments against ‘Humean’ ideas that turn on the specifics of Williams’s formulations, and are so ubiquitous in the literature, are really just arguments against an idiosyncratic view, not general arguments that reasons don’t depend on actual desire-like psychological states.
On the other hand, it is a central part of the problematic aspects of ‘Humean’ ideas about reasons, that they posit a connection between an agent’s reasons and her actual psychology – even if that connection is very indirect, as in a view like Brandt’s. That is why I think it is a mistake, for many purposes, to lump views like Brandt’s with views like Michael Smith’s, even though they are superficially very similar and will face some of the same obstacles.
Jussi and Mark – On my reading, Williams doesn’t think that a sound deliberative route has to involve true beliefs, but only that it *may*. As I read him, he thinks that his account explains both the sense in which the guy with the gin and tonic has a reason to take a sip (because there is a sound deliberative route to this that doesn’t involve correcting his beliefs) and the sense in which he has a reason to set it down without taking a sip (because there is a sound deliberative route to this that involves pointing out to him that it is full of petrol). Personally, I think this part of the picture is crazy, but it looks to me like what he actually thought.
And I’m with Mark – for Williams, sound deliberative routes that add new members to your S can’t ground new reasons, unless concatenation of sound deliberative routes fails to yield a sound deliberative route – which sounds totally implausible. My own Humean view is that reasoning your way to new desires can result in your having new reasons; this is just one example of how Williams’s view is constrained by its idiosyncratic detail.
Thanks, Jussi and Marks. I didn’t intend much to hang on using the word ‘desire’ in my first question, though I can see I should have been more careful about this. What I was wondering (and what I’m still unsure about) is the actual vs. counterfactual contrast.
Mark VR’s first memory of Williams’ view is this: “a person has an internal reason to X if sound deliberation from his/her current S would lead to a motivation to X”.
This sounds good to me as an interpretation of Williams. So, my question: why doesn’t this imply that a person could have a reason to X without having a motivation to X as an actual member of his S (as long as some member of his S would lead to such a motivation via a sound deliberative route)?
Jussi,
I take it your interest in full information accounts does not depend on such an account being fully reductive. I think that’s a good thing.
Valerie,
You wrote:
“ So, my question: why doesn’t this imply that a person could have a reason to X without having a motivation to X as an actual member of his S (as long as some member of his S would lead to such a motivation via a sound deliberative route)?”
That is the way I read Williams, at least if to be a “motive to X” a state has to have X as part of its content (as in ‘I want to X’). I think it is perfectly ordinary talk to say that someone has a motive to kill their uncle, not because they want to kill their uncle but because they want money (and killing their uncle would be a way to get it). In that sense it may be that we would count the member of S that grounds the sound deliberation as a motive to X, in which case the person would have a motive to X in that sense. But I don’t think much turns on that so long as we are clear what we mean to be saying.