Many philosophers think that whenever we are motivated to act, our motivation always involves a belief. E.g. according to the version of the Humean Theory of Motivation that is defended by Michael Smith, your motivation for action always consists of a desire and a "means-end belief" (see The Moral Problem, p. 92).
I think this is false. There are cases in which for no further reason, you simply form an intention to perform a certain basic action right now. Then I think that this intention can motivate you to act, without the need for any "means-end belief" at all.
E.g. suppose that I am walking along by myself, and for no particular reason, I just form the intention to raise my arm right now.
Now, raising my arm right now is a basic action for me (to use Arthur Danto’s famous term). It is something that I can just do. I can simply raise my arm without doing anything else (or at least without intentionally doing anything else) as a means to raising my arm. So surely no "means-end" belief is involved in the process that leads from my intention to raise my arm right now to my raising my arm now.
It would be wrong to say, it seems to me, that I must have the belief ‘This is a way for me to raise my arm’ (where ‘this’ corresponds to one of the special "practical modes of presentation" that are discussed by Jason Stanley and Timothy Williamson in their 2001 JPhil paper "Knowing How"). After all, what difference would it make for me to have this belief, until this belief helped to motivate me to form a further intention — viz. the intention to do this? But then if it was necessary for me to have the means-end belief ‘This is a way for me to raise my arm’ in order to carry out the intention to raise my arm, why isn’t it necessary for me to have a further means-end belief, of the form ‘This₁ is a way for me to do this₂’, in order to carry out the intention to do this₂? But now we seem to be launched on an infinite regress…
So I think that we must recognize that an intention to do a basic action right now can lead directly to action, without needing the cooperation of any "means-end" belief, or indeed any other belief at all.
Moreover, I also don’t see why my original adoption of this intention need have been motivated by any antecedent belief either. I just decide to raise my arm because I feel like it. Perhaps it is true that as a (more or less) rational agent, I wouldn’t have adopted this intention if I had held any beliefs that would have made it rational for me to believe that there was a good reason for me not to raise my arm. That is to say, the way in which I form this intention is sensitive to the absence of beliefs that should have motivated me to do otherwise. But this is not the same as my being motivated by the presence of any beliefs that I actually hold.
So I conclude that beliefs are just not necessary for motivation at all. Intentions can suffice for motivation all by themselves.
Ralph, that’s a very interesting point, and plausible as far as I can see. Let me play the devil’s advocate, however (an advocate for Descartes’s evil demon, or for the mad scientist in the brain-in-the-vat hypothesis, to be more precise). Suppose that all the time while you THOUGHT you were raising your arm, you actually weren’t, because you really don’t have a body. Then it suddenly seems that you were acting on a false belief, and a fortiori on a belief. Moreover, it seems to be a mistaken means-end belief: if it’s true that you are a brain in a vat, then somehow communicating with the mad scientist and getting him to raise your arm would be a means more conducive to your end.
In any case, I wonder why Michael Smith thinks that a motivation for action requires a means-end belief. My (quite uncharitable) guess is this. He uses the locution “having a reason for action” instead of “having a motivation for action”. The locution “having a reason for action” is confusing, because it could mean:
(i) there is a normative reason why A should X, which can be arrived at via deliberation, or
(ii) there is a motivational explanation why A would X.
Disambiguation (i) would necessarily require a theory of practical reasoning to tell us what counts as a deliberative reason for action, but disambiguation (ii) would not necessarily require one, unless practical deliberation preceded and played a part in the causal explanation of A’s having X-ed.
Anyway, my point is that if one holds a theory of practical reasoning according to which all deliberative reasoning about action is means-end reasoning, then one would have to maintain that means-end beliefs figure essentially in arriving at reasons for action. But this is so only when we disambiguate “having a reason for action” in sense (i). In sense (ii), one could act on a sudden impulse, and thus do something foolish and irrational, but still have a reason for action. So, in sense (ii), it is not so obvious to me that a means-end belief needs to figure in a “reason for action”.
Now this explanation of why Smith requires means-end beliefs is uncharitable, because it attributes to him a confusion. If there’s a better explanation, please let me know!
Ralph, what is bad about saying both (i) that you can raise your arm without there being something you need to do as a means to raising your arm and (ii) that some means-end belief is necessary to get the arm-raising process going? For suppose we consider beliefs to be, I don’t know, representational informational states that play “some” causal role in our actions. So, suppose some state encodes information about where the arm is in relation to the arm-activator, information about the absence of nearby arm-raiser assassins, perhaps information about the relevant muscles that need to be tweaked to get the arm going up rather than down… This information is there, in the recesses of our melon, to get our arm-raisings going. Perhaps such states are psychologically necessary.
So is there something in the concept of “motivation” in the Humean Thesis to make these states irrelevant?
Boram — I suppose that you’re right: the best way to argue that motivation always involves beliefs is by arguing that whenever you intentionally do A, you are motivated in part by a belief of the form ‘I can do A’.
I think that the best (most charitable) interpretation of Smith may be that he thinks that this belief in a proposition of the form ‘I can do A’ is a means-end belief.
(This would, I think, be rather more plausible than your interpretation, because the distinction that your interpretation represents Smith as conflating is a distinction that he draws quite explicitly in several places.)
However, I would still object: I don’t think that it is right to describe the belief ‘I can do A’ as a “means-end belief”.
Moreover, it doesn’t seem clear to me that my belief that I can raise my arm really is part of what motivates me to raise my arm. At most, it is a necessary condition of my being motivated to do this. But there are many other such necessary conditions: e.g., I must believe that I have an arm, I must have the ability to think about my arm, and about raising things, and so on. Surely it would be odd to say that these facts about my mental states motivate me to raise my arm.
This brings me to my answer to Christian’s point. The concept of what motivates an action is presumably the concept of what explains the action, in the sense of making the action intelligible. Pointing to the mental states that motivate an action must provide an answer to the question: Why did he do that? (Or: What did he do that for?)
So, suppose that we ask, “What did he do that for?” It does not normally provide an answer to say, “He believed that he could — that’s why”!
In general, certain background beliefs (of which the agent is quite unconscious at the time) may well be necessary conditions without which these motivating mental states would not play this explanatory role. But this seems quite different from the role of a genuine means-end belief (e.g. my belief ‘I have to cross the street to reach the florist’s shop’) in explaining or making it intelligible why one acts as one does.
In my contribution to the “Instrumental Reasons, Instrumental Rationality” symposium at the Joint Session in 2004 I discussed the case of desiring to perform basic actions and I said exactly what Ralph says I should be interpreted as saying “on the best (most charitable) interpretation.” When you act on the desire to just move your finger it seems to me that you must also have and act on the belief that you can move your finger. The reason I gave in support of this had to do with what we expect of people who have the desire to just move their finger but falsely believe that they’re paralysed and so cannot. I agree with Ralph that there are alternative ways one might explain these cases. Is the belief that one can required, or just absence of the belief that one cannot? It is hard to answer questions like these. At the end of the day, it might be that the best that can be said in favour of seeing things the way I do is that we’re thereby able to give a simpler theory of action, simpler in that it treats all cases uniformly.
Ralph,
Let me play devil’s advocate:
Anscombe-style explanations of non-inferential practical self-knowledge often suggest that intending to X or intentionally X-ing entails having some belief (e.g. that one will X).
Setiya, for example, argues in his recent book that (roughly) if you intend to X then you believe that your intention to X will cause you to X.
So in your case, you would believe that your intention to raise your arm will cause you to raise your arm (or perhaps cause your arm to rise). This is compatible, of course, with your failing to have any means-end belief.
Doesn’t it really depend on whether we’re looking at ‘beliefs’ as some kind of internal state or as an informational state others (and yourself) project on to you to make sense of your actions?
If the former, then sure – perhaps beliefs aren’t necessary for motivation – but ‘beliefs as an internal state’ has bigger problems than its failure to play nicely with the Humean theory of motivation.
If the latter, then it would seem there are a number of beliefs we can legitimately attribute to you which charitably make sense of your action: (1) you have an arm, (2) your arm isn’t bound, etc.
I’m curious about this phenomenon. The example Ralph gives is:
“Suppose that I am walking along by myself, and for no particular reason, I just form the intention to raise my arm right now.”
I’m not sure I can suppose this or even conceive it. Could I imagine Ralph walking down Merton Street and suddenly raising his arm out of nowhere when no one is around? Hmm. Hardly. And even if I saw him do that, I would begin to look for reasons – maybe his jacket was uncomfortable, or his muscles were tense, or something. With such tacit beliefs I can interpret his intention to just raise the arm as a way of making himself more comfortable. This is done by providing an instrumental rationale he might not be aware of. But were he decidedly to deny all such considerations (including the one about not trying to make a point), I would be really puzzled and would fail to make sense of him. And if I am at a loss as to how to rationalise his movements, there seems to be no reason to interpret them as actions rather than as something that happened to him. He might claim that he was really acting, but if I cannot make sense of this I might remain doubtful.
I guess this is a vague way of saying that talk about actions seems to be a way of understanding behaviour with a network of propositional attitudes. And it seems to assume a background of instrumentality – of pursuing some ends. Otherwise the making-sense part seems to fail.
Ralph,
I agree with Duncan. If by ‘belief’ you mean ‘psychological state representing the world of which the believer has an experience or awareness when she is in that state’, then, true, it seems no such state is necessary. If asked, for instance, people might well often deny that they believed anything at all when they acted in certain ways. On the other hand, if by ‘belief’ you do not include the clause about having an experience or awareness of being in that state, then it seems less plausible. If you aren’t in any states at all that represent the world around you when you act, it seems hard to explain your action, why you go in the direction you do.
Michael asked: I agree with Ralph that there are alternative ways one might explain these cases. Is the belief that one can required, or just absence of the belief that one cannot?
On a different thought Ralph suggested: So, suppose that we ask, “What did he do that for?” It does not normally provide an answer to say, “He believed that he could — that’s why”!
It seems that we should not count the absence of a belief as part of the explanation for the hand-raising. Citing the absence of a belief (or beliefs) does not make the action intelligible. Here are three reasons to think this. First, explanations bottom out in causal explanations. So a truth of the form ‘e explains f’ entails that there is some causal relation r in virtue of which e explains f. But absences cannot be causes (I realize much more needs to be said about this ‘in virtue of’ relation). Second, if citing absences of belief counted as giving (part of) an explanation for an action, then the explanation of any given action would be implausibly complex. For uncountably many beliefs would need to be absent to explain the action, e.g., I didn’t believe I couldn’t raise my hand, I didn’t believe that if I were to raise my hand I would be shot, eaten, shot and eaten, etc. Third, as Ralph points out, “It does not normally provide an answer to say, ‘He believed that he could — that’s why’!” I suggest that, similarly, it does not normally provide an answer to say, “He didn’t believe that he couldn’t, he didn’t believe that there was some bad consequence of raising his hand, etc.” These explanations seem, to me anyway, just as unnatural.
So I don’t think absences of belief will do the explaining of these seemingly pointless actions; we need to cite a positive belief. If Ralph is right, then there aren’t any. But then maybe it’s worth pushing (and a number of people are pushing) the distinction between beliefs that are merely necessary conditions for motivational states to play their explanatory role and beliefs that are part of the relevant motivational states.
I agree with Ralph about this. Which is an agreeably novel experience, the first time I can recall agreeing with Ralph about anything.
The belief condition is questioned at some length in Mele’s “Effective Reasons and Intrinsically Motivated Actions” in PPR 48, 1988 – recapitulated in M&A, pp. 71ff.
I’m not terribly convinced by efforts to plug the gap with beliefs that I can do whatever it is. We can surely very readily envisage someone A-ing – and A-ing intentionally – who believes that he cannot A. After a nasty event, I fear that you have been killed and that I have been paralysed. Then you appear and, relieved and delighted to see you well, I leap to my feet and rush to embrace you. As I do so, I think: Hey, and I thought I was paralysed. But I was wrong – a second pleasant surprise.
Of course a false belief that I am paralysed might equally well, as Michael observes, be envisaged as resulting in my not being able to A. But surely the crucial and general explanatory condition here is not that I believe I can A but simply the fact that I can A.
Jimmy,
The problem with that case is that in it you leap in order to hug the person, to embrace her, to greet the friend, to express your joy that she is alive, and so on. Thus there are plenty of beliefs about the action being instrumental to your ends. What we need is a case in which you believe that you cannot do something *and* you also lack any instrumental beliefs. Maybe there are such cases, but they are harder to come by.
Thanks Jimmy! I completely agree with what you say here. Thank you as well for pointing out that Al Mele had defended the same view so long before me (obviously one danger of spending so much time in the philosophy blogosphere instead of reading published works is that one ends up repeatedly reinventing the wheel…).
In particular I agree with your response to Michael Smith (whom I am relieved to see that I haven’t misinterpreted…). Michael has highlighted the case of the person who falsely believes that he is paralysed. This is a very interesting case, but it does not show that the belief that one can A is necessary for intentionally A-ing (let alone that it is part of what motivates one to A). At most, his case shows that if one believes that one cannot A, this belief will normally (though not invariably, as Jimmy’s lovely example shows) motivate one not to have the intention to A. Jimmy’s example also seems to show that in the relevant sense of ‘can’, the belief that one cannot A does not in general guarantee the truth of the proposition that one cannot A.
BTW, there are many topics on which Jimmy and I agree. E.g. I agree with the basic point of his paper on “Consequentialism and Cluelessness” (Philosophy and Public Affairs 2000). In general, he and I are united in our opposition to consequentialism (although he draws a broadly contractualist lesson, while I am almost as implacably opposed to contractualism as to consequentialism).
The area where Jimmy and I almost invariably disagree is metaethics; in moral psychology we agree at least some of the time (e.g. I agreed with a lot of what Jimmy said in his old paper “Belief, Desire and Motivation”, APQ 1996).
If I get the chance any time soon (which is admittedly pretty unlikely given the point that we’ve now reached in Oxford’s Michaelmas Term …), I will respond to some of the other comments on this thread (e.g. to Jussi Suikkanen, Robert Johnson, et al.). Thank you all for your really great comments!
The idea that beliefs are not necessary for motivation cannot be correct, simply because for every action one takes, there had to be some sort of basis or reason for why one did it. And if one had a reason for doing something, this reason was based on some sort of foundation that the person held. Take the example of “I am walking along by myself, and for no particular reason, I just form the intention to raise my arm right now.” There had to be some sort of reason for him to have raised his arm. The human mind simply does not work like that. The intention to raise that arm was at the very least backed up by some sort of belief that the arm would actually go up. The arm does not go up on its own; it is instructed to do so by the brain. Your brain relies on senses such as sight and hearing in order to give its instructions to the rest of the body, and these instructions are based on some sort of conclusion that the brain has drawn. For example, if the brain is notified of a ball flying in your direction, it will make you raise your arm because it believes you will be hit by the ball if you do not. The brain will not give an instruction to react to nothing. Anything a person does is backed by some sort of belief, no matter how small and insignificant that belief may be. The brain bases its decisions on some conclusion it has made, based on some sort of belief about what could and will happen if it does not react.
Martin,
Wouldn’t that view imply that there are no unintentional actions? And surely there are actions like that, no? Walking down the corridor not paying attention, I can walk into something. This is an action of mine, but it is not the case that there was a reason for which I did it. It’s true that what I did was intentional under a description, but not under the description of that very action.
I also have worries about the talk of the brain doing things like instructing, drawing conclusions, believing, being notified, and deciding. These are things we do as persons, and I’m rather skeptical whether these descriptions correctly apply to the brain. The worry is that people who talk in this way commit a mereological fallacy.
It does seem to me that the main example – “I decide for no reason to raise my right arm” – is, at best, an example of an odd set of edge cases. Most of what we care about is actions that have, at some level, an “end” involved, and thus, at some level, beliefs about the means that lead us to that end.
The “unintentional action” seems to me to be a red herring – if “walking into something” is an unintended action, surely the “walking” part was intended, with an end of some sort, and a (mistaken) belief that nothing was in my way…
Perhaps, if you are intentionally V-ing, you need not believe that you are V-ing. That is, you might be skeptical about whether you are successfully V-ing. But, it seems to me, you need to believe that it is possible that you are V-ing.
But that too may be too simple.
There may be situations where you need not even believe that it is possible to be V-ing, despite intentionally V-ing. But these aren’t cases of basic action. And so there is something else you are doing – something closer – that you correctly believe it is possible to be doing.
So, if I am doodling for no further reason, I must believe it is possible I am doodling. Otherwise, my doodling is not intentional.