I’ve been reading Krister Bykvist and Jonas Olson’s wonderful paper ‘Expressivism and Moral Certitude’ (HERE). Krister and Jonas argue that ecumenical forms of expressivism are unable to reply to Michael Smith’s objection to non-ecumenical forms of expressivism, according to which expressivists cannot account for moral certitude. I have no quarrels with their arguments, but I fail to see, in the first place, the problem Smith raises for expressivists on which their argument is based. Here’s why.
Smith claims that evaluative judgments can have three distinct features: importance, robustness, and certitude. Krister and Jonas think, like Smith, that expressivists can deal with the first two features of evaluative judgments but not the third.
Importance means here how important we think some obligation is, how strong we think some reasons are, or how good we think certain actions are. So, we think that one ought to save an innocent life rather than keep a promise and that it would be better to do the former. We thus think saving innocent lives is more important than keeping promises. Of course, the expressivist can say that this feature of our evaluative judgments is accounted for by the strengths of the desires we express with our moral utterances (or, alternatively, by the plans we express to choose one option over the other in conflict cases).
Robustness of evaluative judgments means their robustness over time (i.e., diachronic stability). This feature of evaluative judgments is not a problem for the expressivists either. They can point to the fact that we also have other non-cognitive attitudes that are equally robust – our personal loves and concerns, for instance. Whatever explains the robustness of these attitudes can likewise explain the robustness of evaluative judgments. I think that here the expressivist can also correctly explain temporal robustness by higher-order attitudes. We are also concerned about retaining the evaluative attitudes we have – we care about caring about certain things.
The third feature of evaluative judgments is their varying degrees of confidence or certitude. This corresponds to having more or less certain beliefs, i.e., to degrees of belief. What is important is that this feature of evaluative judgments is independent of the previous two. You can be certain of some evaluative judgment without that judgment being about anything very important or being very stable over time. Thus, you cannot use the strength of the expressed attitudes or their temporal stability to explain the certitude of moral attitudes. And Smith, Bykvist, and Olson also argue that higher-order attitudes do not help here. I can be uncertain about whether a vegetarian diet is morally required even if I really strongly want to desire to eat only vegetables.
And thus the expressivist is allegedly not able to account for moral certitude. I’m not sure about this. I think the question of how certain we are about a given evaluative judgment resembles the question of how much we identify with a certain concern. So, the expressivist could just say that moral certitude is identifying oneself with the concern expressed in an evaluative judgment. To be certain of some evaluative judgment would, on this view, be to hold the concern as a central part of oneself. Note that this feature of desires differs from their strength – I often externalise my strong urges. The expressivist can say more about what the identification in question amounts to by following Frankfurt's account of what it is for a desire to be one’s own. The expressivist can also keep the issues separate and use Frankfurt's account of identification merely to account for certitude.
Frankfurt first tried to answer the identification problem by using higher-order attitudes. But, given the criticism by Watson and others, he quickly gave up on that attempt (for reasons quite similar to those that cause problems for the expressivist according to Smith, Krister, and Jonas). His new way of accounting for identifying oneself with some desires is based on the notion of *wholeheartedness*. This seems like a good account in many ways, which makes me wonder why the expressivist cannot use the very same notion of wholeheartedness to account for the different degrees of certitude of evaluative judgments. On this view, to be certain of an evaluative judgment would be to endorse the concern it expresses wholeheartedly, which certainly sounds right.
What does it mean to hold some concern wholeheartedly? According to Frankfurt, wholeheartedness means not having any reservations or conflicts in concerns that would move one to initiate or support attempts to stop having the initial concern or desire. You could add that wholeheartedness is also increased by having a whole network of mutually supporting cares and concerns, some of which are volitional necessities. This would give us degrees of wholeheartedness analogous to degrees of belief, and would offer an account on which the degrees of certitude of evaluative judgments are a matter of how wholeheartedly the relevant concerns are held.
Why can’t the expressivist say that it is the concerns we hold wholeheartedly in this way that are the evaluative judgments we hold with certitude? And that the less certain we are of our evaluative judgment, the less wholeheartedly we hold it? It just seems to me that this solves all the problems raised by Krister, Jonas, and Smith. They are usually right, though, so I must be missing something here.
Jussi,
Just as a first intuition, it seems to me that what I care about wholeheartedly, and what I am certain of, are quite different things. In particular, I am pretty certain of a lot of things, evaluatively speaking, which I would not say are close to my identity. Here’s a thought experiment to probe the issue.
One thing we might have evaluative attitudes about is whether some course of action is a “good idea” in some circumstances. Consider, for example, the question of what to do if you ever find yourself in a Newcomb problem (with two boxes to choose from, and a very good predictor of your choices who has stashed cash in the boxes). I have considered this problem pretty carefully but I do not regard it as particularly important, since it is unlikely to ever happen. I do not know what is a good idea in this circumstance, and this view has been robust over time. And I am not certain what is a good idea.
Consider in contrast David Lewis, who considered this problem more carefully than me. He thinks it is a good idea to take both boxes. He probably doesn’t regard the problem as important, his views are stable, and they are certain.
Now my point is, where should wholeheartedness come in? Neither DL nor I thinks this issue is important—if that’s what wholeheartedness amounts to, we’re the same. Both of us have robust views—if that’s what wholeheartedness amounts to, we’re the same. We differ in certainty—if that’s wholeheartedness, then DL has his view wholeheartedly and I do not. That doesn’t sound right to me, particularly if we tie wholeheartedness to identity. I do not think it was key to DL’s self-conception that he was a two-boxer (but I could be wrong, I suppose). In any case, surely there are a lot of philosophers who follow Lewis but for whom it is not key to their self-conception that they are two-boxers?
As a second intuition, it seems to me that certainty is tied to justification. The claim that expressivists can’t account for evaluative certainty, I suspect, boils down to the claim that expressivist accounts of evaluative justification, or moral epistemology, are handicapped in some way.
Heath,
thanks, that’s helpful. This makes me think that the expressivist should give the account of certitude in terms of wholeheartedness without necessarily tying it to the notion of identity. It could be that the term ‘wholeheartedness’, too, is misleading. What still seems to me to be working in this case is Frankfurt’s substantial definition of wholeheartedness. That seems to capture something about certitude.
What distinguishes Lewis is that his reflection has closed off any potential motivations to revise his plan to two-box or to start reflecting on whether to do so. In this sense, he satisfies the definition of wholeheartedness (which perhaps we should call something else). In any case, wholeheartedness is not the same as importance. One can hold wholehearted views about unimportant issues.
Your case is trickier, as you don’t seem to hold a view about whether you should one-box or two-box. But take someone who has a perhaps somewhat reflected but uncertain view that one should one-box. In this case, the expressivist could account for this uncertainty in terms of the agent still having some cares and concerns that could prompt the agent to reassess his plan to one-box. Or, at least, the agent has not ruled out such cares and concerns.
I think this can be tied to your point about evidence. One is certain when one has evidence for the evaluative judgment. I think the expressivist can account for this by thinking that the agent has a conditional plan, referring to what counts as evidence, not to reopen reflection on the given concern that is expressed by the judgment.
Thanks for the kind words, Jussi! Some comments:
1) I agree with Heath that caring wholeheartedly about something and being certain about something seem to be different things, at least if the former is understood in terms of self-conception; there are many things I am certain of that have little or nothing to do with my self-conception.
2) But I am not sure how to understand the notion of wholeheartedness. It seems that on one natural understanding, to care wholeheartedly about something is to care about it very much, i.e., to have a strong desire. But then wholeheartedness, and so certitude, are understood in terms of strength of desire. And you have already said that importance, too, is to be understood in terms of strength of desire. Since certitude and importance must be allowed to vary independently, this cannot be right.
3) In response to Heath you say this:
“What distinguishes Lewis is that his reflection has closed off any potential motivations to revise his plan to two-box or to start reflecting on whether to do so. In this sense, he satisfies the definition of wholeheartedness (which perhaps we should call something else).”
This makes me suspect that you think of wholeheartedness in terms of robustness (DL’s belief that the two-box plan is the best strategy seems robust). But this cannot be right either for I can be highly certain that p although my belief that p is not robust (I might be easily persuaded). Similarly, I can believe that p with low certitude although my belief that p is robust (I might not be easily persuaded). Or perhaps you mean that wholeheartedness is to be understood in terms of beliefs about robustness. But that view is subject to similar problems. For instance, knowing that I am easily persuaded I can believe that my belief that p is not robust although I am (at the moment) highly certain that p.
Thus wholeheartedness is not strength of desire, neither is it robustness. So what is it?
Thanks Jonas for the reply. I’m puzzled. I thought I gave an explicit definition of wholeheartedness directly from Frankfurt. It was this:
“Wholeheartedness means not having any reservations or conflicts in concerns that would move one to initiate or support attempts to stop having the initial concern or desire”.
I think ‘wholeheartedness’ might be a misleading name for the quality of concerns, plans, desires, and so on in question. So I could omit referring to it. I could instead say that:
Certainty of a concern (which is expressed in an evaluative judgment) consists of not having any reservations or conflicts in concerns that would move one to initiate or support attempts to stop having the initial concern or desire.
This quality is different from both strength of concern and diachronic robustness. There can be weak and fleeting concerns that have this quality.
I also added another aspect of the quality in question, which was that wholeheartedness/certainty is “also increased by having a whole network of mutually supporting cares and concerns some of which are volitional necessities”.
So, I think I have a well-defined proposal for what certainty amounts to. Maybe I should call it just X instead of wholeheartedness, but I still think it’s a good account of certainty.
Jussi,
Robustness, as KB and I understand it, is a matter of how sensitive the belief is to new information, not just diachronically but also counterfactually. Thus my belief that p is less robust than my belief that q if, counterfactually, I would give up Bp but maintain Bq in the light of new information. Now you define your X (‘wholeheartedness’) in terms of what
“would move one to initiate or support attempts to stop having the initial concern or desire.”
That sounds a lot like robustness of the attitude.
No. Different properties. Notice that robustness, as you define it in terms of counterfactuals, is a property of the original concern in question.
The condition you quote from my definition of certitude is not a modal property of the concern whose certitude is in question. It is a feature of the reservations and concerns which one is not supposed to actually have.
This means that the original concern can be more or less robust, diachronically and counterfactually, independently of the modal qualities of the other concerns which one must not have in order to have a concern of which one is certain.
Thus, I can have a very robust concern for, say, the welfare of animals. I’ll have it into the distant future and I would not give it up in any counterfactual situation. Yet this concern can be uncertain under my definition. I can still have other concerns that make me reconsider this concern in different circumstances, even if I am never able, nor want, to get rid of it.
Jussi,
I think the worry is that, for a putatively robust but uncertain concern like your concern for animal welfare, the uncertainty allows you to reconsider the concern, but the robustness means this reconsideration is never going to result in a change of mind. So it feels like you are not really reconsidering. One might say that if you are really reconsidering, then there ought to be some counterfactual situations in which your mind changes.
Heath,
I thought we would naturally think that the moral beliefs we are certain about are usually more robust counterfactually and diachronically, even though these are two distinct properties. This allows for the possibility that reconsideration can lead, perhaps even often, to a change of view (even if this needn’t be the norm). What is important, in terms of having separate properties of concerns (as Smith, Jonas, and Krister correctly insist), is that certain concerns needn’t be robust and vice versa. This is enabled by the idea that there can be concerns that withstand frequent reconsideration.
Interesting. Suppose that Jussi convinces us that there is something extensionally right about the correspondence between Frankfurt’s idea of wholeheartedness and degrees of confidence – both come in degrees, both can be distinguished from strength, both can be distinguished from robustness, and so on. Does he then face the potential objection that the best account of wholeheartedness will elucidate it in terms of degrees of confidence in something? I know such moves can be resisted, but to the extent that wholeheartedness is a slippery notion, and that what Frankfurt and Jussi are going on is merely confidence that there really is some important phenomenon in the neighborhood, we might worry about whether the direction of explanation simply goes in the other direction.
I was also interested by Heath’s intuition that certainty is tied to justification. I’m skeptical about that. You can have a highly justified credal state of .5 confidence in something – for example, that a particular coin flip will come up heads. So any connection between degrees of justification and degrees of confidence will have to be more subtly spelled out than that. The problem seems to be one not about epistemology, but in the philosophy of mind: the expressivist’s account of moral judgment doesn’t have a rich enough structure to correspond to the rich structure of ordinary belief – a rich structure that seems to be shared by moral beliefs just as much as by non-moral beliefs.
If you look at pure expressivist views, for example, I think it’s possible to get a robust picture of where this problem comes from. For example, take the version of expressivist semantics that I call biforcated attitude semantics and develop in Being For. That view doesn’t make room for variation in confidence of either normative beliefs or ordinary descriptive beliefs. But the way of incorporating into the account variation in levels of confidence of ordinary descriptive beliefs that appeals to me most (it leads to several independently nice results) doesn’t generalize to normative beliefs.
So the problem looks like it is squarely in the philosophy of mind. It may, of course, lead to further, downstream, problems in epistemology, as one might expect, if one expects one’s epistemology to account for justification for degrees of confidence, and not just for outright belief.
Mark,
I’m inclined to agree with you. I still like the idea that Frankfurt’s substantial condition as quoted above could specify a feature of concerns that is what we mean by the certitude of evaluative judgments. It would have enough structure for that.
I agree that wholeheartedness is an elusive notion. I might want to say that Frankfurt was wrong about wholeheartedness itself, given Heath’s counterexamples. The condition he gave didn’t capture it. I could think that wholeheartedness actually requires certitude (as specified by Frankfurt’s condition for ‘wholeheartedness’) and something more. So certitude would here be part of the account of wholeheartedness – a necessary but not sufficient condition. I have no idea what the further requirement could be. I think that’s the line I would want to take.
Hi Jussi,
They might shoot this down in their article, but I would be interested to know what they say….
I will use ‘*believe*’ to refer to whatever the expressivist substitutes for an evaluative belief.
Drawing on the exchange between Blackburn and Egan (on expressivism and fundamental moral error), the expressivist might say this:
A is certain that X is wrong iff A *believes* that X is wrong and believes that his *belief* will survive in the face of any improving moves.
This is not the same as believing that the *belief* is robust, or its being robust. Or, as Blackburn puts it, its disposition to persist in the face of improving moves.
On this suggestion, the degree of certitude of the moral *belief* could be translated into the degree of certitude of the belief that the *belief* will persist in the face of improving moves.
Dear Brad,
well, Jonas is going to be able to say more about it, but that sort of view seems to be their main target. They explicitly discuss Jimmy Lenman’s and Mike Ridge’s ecumenical responses to this problem. Lenman develops his view on the basis of Blackburn’s ideas, so his view, if I remember right, is closest to your proposal. The main problem with this idea, according to KB and JO, seems to be that accounting for moral certitude in terms of degrees of normal beliefs seems to get rid of the moral character of the certainty/uncertainty. So, in your case, the expressivist would seem to make sense of the talk about improving moves in part in terms of positive attitudes towards certain moves. You can think of it in terms of an ideal sensitivity – the sort of attitudes which an ideal version of you would have in the end. But then there does not seem to be room left to account for the sort of basic uncertainty about the nature of the ideal version of you or the like. I think the main argument against the sort of views you mention is along those lines. I hope I didn’t get that all wrong (sorry, Jonas).
I tried to avoid that problem by thinking of certainty just in terms of a lack of cares and concerns that would make you begin to reflect again on the given concern whose certainty is under assessment. This account should cover the fundamental positive attitudes towards ideal spectators equally well.
Thanks, Jussi. I look forward to hearing more about this line of response (if they are willing to rehearse the worry). So far, I do not see the problem.
I do not see, that is to say, why “there does not seem to be room left to account for the sort of basic uncertainty about the nature of the ideal version of you or the like.”
Couldn’t uncertainty about what an ideal version of you would be like just be explained by a belief that one’s *beliefs* about what counts as an epistemic improvement might well change with some epistemic improvement?
Maybe I am not getting the worry right..
Brad,
let me try this again in a different way (different from their paper I think). When I say that such-and-such is an epistemic improvement, according to the expressivist, I am (or anyone else is) expressing positive attitudes towards certain kind of changes in attitudes. Now I can be more or less certain in my beliefs about epistemic improvements (i.e., I can hold the previous attitudes on different levels of certitude).
What does this mean? It cannot be a normal belief that this way of changing attitudes is one of those ways towards which I have a positive attitude, because it was the certainty of those very attitudes that was in question and to be accounted for. So there are still some attitudes the certainty of which is not accounted for. Does that make sense? I do recommend looking at the paper as well (sorry, I cannot access Phil Quarterly from home, so I cannot give more details now).
Hi Jussi,
I will check out the article when I have a chance.
You wrote: “Now I can be more or less certain in my beliefs about epistemic improvements (i.e., I can hold the previous attitudes on different levels of certitude)”
Can you give an example?
Consider this:
At time t1 I think that consistency is a strong epistemic virtue – I *believe* it is, necessarily, irrational to hold inconsistent beliefs. So no change to having inconsistent beliefs is an improvement.
But then, before time t2, I read lots of Graham Priest and the Buddhists and come to change my mind: I come to think some inconsistent beliefs can be rationally held. I accordingly change my *beliefs* about what counts as an epistemic improvement. But – here is the important part – the arguments for the change in view do not depend on the claim about contradictions that they aim to establish (i.e., that they can be rationally believed). The Buddhist and Priest arguments are not, that is to say, question-begging.
Now if, at t1, I believe that my view of improvements might change as a result of some such process (I have not read the arguments at that point but a smart friend has said they might be compelling), I might express this belief by saying “I am not absolutely certain about what counts as an improvement, but right now my view is that no change that produces inconsistent beliefs is an improvement”.
Can you use this example to explain what you think the expressivist cannot do?
I think it would be your belief at t1 that greater consistency is an improvement. You are somewhat but not fully certain about this. For the expressivist, this belief too must be a pro-attitude. What does its degree of certitude consist of?
I think, on the view you proposed, it would consist of your ordinary belief that the agent who has gone through all the steps of improving his pro-attitudes would have a positive attitude towards your making your attitudes more consistent. But this seems to get things wrong. It seems that you would be fully convinced of that. And you were somewhat uncertain about whether consistency is good in the first place, but now, in the account of certitude, you are assuming that it fully is.
This isn’t Krister’s and Jonas’s way of putting things but I thought this was the idea.
Hi Jussi,
At t1, I am “somewhat but not fully certain” that coming to have inconsistent beliefs is never an improvement. I have a negative attitude towards changes like that. But I also have reflective beliefs about how stable that attitude will be under improvements.
I believe it might change if I undergo changes that I presently have a good attitude towards (that I presently count as improvements). My friend’s testimony makes me think this is a live possibility. But I am not fully certain that it would change if I went through all kinds of improvements. (I do not change my mind without assessing the arguments.)
That is why I am “somewhat but not fully certain” about what counts as an improvement or not.
Brad,
sorry – I think I’m not communicating very well. I’ll try again, but I cannot guarantee that it will be much better. You write:
“I believe it might change if I undergo changes that I presently have a good attitude towards (I presently count as improvements).”
Take the changes you now count as improvements – the ones towards which you have positive attitudes – whatever they happen to be. How certain are you that these very changes are improvements? Not fully, probably. What is it to be less than fully certain that these are improvements? I.e., what aspect of those positive attitudes towards those changes explains that you are less than fully certain that those changes are improvements?
The claim is that it cannot be a normal belief that, if I had gone through these changes myself, I would have positive attitudes towards just these steps. After all, (i) that seems trivially true, and (ii) what was at issue in my uncertainty was just whether the person who had gone through these steps would be improved in some sense. That doesn’t seem to be captured yet in any way.
Sorry if this isn’t quite clear enough.
This may seem like an odd question, but what is the Realist account of moral certitude? Are we taking it for granted that Realists don’t need to say anything about degrees of belief in moral propositions?
That’s a good question. I remember that a lot of Smith’s original paper deals with this issue rather than with the expressivists’ problems. I wish I had that paper with me now, but I don’t. I’ll check later what he says. But I think his account is based on degrees of ordinary beliefs about what the fully rational advisor would advise. And this ordinary belief would come in degrees just like all other beliefs.
I think the cognitivist must have a story to tell. As Smith thinks, presumably it would be the same story as the story about degrees of belief in non-moral propositions. I take it that one popular story is in terms of subjective probabilities measured by betting behaviour. It is a good question, though, what the relevant bets could be or what could correspond to them. If that isn’t specifiable, then the non-cognitivist might be able to claim that this is a different phenomenon for which we need an alternative account, one that differs from normal degrees of belief.
Yes, I was thinking of the betting story for more ordinary degrees of belief (or anyway, some pragmatic constructive story as in Ramsey, Savage, Jeffrey, or whoever is your favorite). And this doesn’t sound so good for moral propositions.
I’m wondering if expressivists have a “Hard, but not especially my problem” reply.
I wonder if the expressivist is off the hook so easily. I thought the story about betting is the philosopher’s favourite story of how to get good and precise evidence of what particular degree of belief someone has in some proposition. This still leaves it open what it is, psychologically speaking, to believe with the given degree of belief revealed by the betting pattern. I think the cognitivist can in any case extend that psychological story to moral beliefs. She will have to give an alternative story of how to know what degrees of belief people have. This might refer to introspection, behaviour in criticising others (more if you are certain), inclinations to reflect on your view and to look for evidence (less if you are certain), and so on.
I tend to agree with Heath that the concept of certainty (as opposed to conviction, or commitment, or confidence) necessarily involves an element of justification: to regard oneself as certain of something is not just to be deeply or even irreversibly committed to it somehow, but to regard oneself as having a certain sort of reason for being so committed. Or, to put it another way, the notion of ‘endorsement’, which Jussi mentions at one point, is itself deeply counter-expressivist because, again, to endorse something is not just to commit oneself to it; it includes the judgment that one ought to commit oneself to it, and it is this ought judgment that is going to be problematic for the expressivist.
That aside, isn’t the real problem for the expressivist not with certainty, but with uncertainty? That is, the expressivist is unable to make sense of a kind of deep intellectual humility that we all ought to have: as an expressivist, I could not make sense of the thought that I might be deeply and radically wrong in my moral beliefs. I could, of course, allow that there might in the future be some relatively small improvements to my moral views, which count as improvements because they are approved by elements of my current views (and meta-views). But a radical change in my outlook won’t be approved by my current (t1) outlook, so it can’t count as an improvement in that way. And expressivism can’t say that the resulting view (at t2) will count as an improvement if it is endorsed by the agent at (t2), because this is again trivial; that sort of theory would count any resulting view (so long as its various elements, including the meta-views, hang together sufficiently) as an improvement by its own lights. So how do expressivists account for the possibility that one might be radically wrong?
Troy,
it’s not obvious to me why an expressivist would have a problem in accounting for the ideas that moral beliefs are held for a reason or that they are beliefs one ought to have. The expressivist would just understand these judgments as expressions of positive attitudes towards having the given first-order attitudes and having them on a certain condition.
And I think the expressivist has a story to tell about the possibility of radical wrongness. This is familiar to many, but let’s just rehearse it here. To claim that it isn’t possible to be radically wrong would be to think that the wrongness of actions depends on one’s attitudes. This would be to express disapproving attitudes towards the given action only in the possibilities in which one has disapproving attitudes towards it. But the expressivist does not have such attitudes to express. She can say instead that the wrongness of actions does not depend on her attitudes, which is to express disapproving attitudes towards the action even in conditions in which one would not have the attitudes. And to say that the wrongness of actions does not depend on our attitudes is to say that one could be radically wrong – so there is a way of understanding this claim, too, as an expression of attitudes.
So this question of the possibility of radical mistake seems to be slightly different from the question of degrees of belief. Is that right? Could I have a very high degree of belief that there is a table in front of me even while I recognise that I could be a brain in a vat?
Jussi,
I thought of another obstacle to equating certainty and wholeheartedness. To be certain that p is, among other things, to be certain that those who do not hold p are wrong. To be wholeheartedly committed to p does not have this implication–I could be wholeheartedly committed to being a Steelers fan or to eating organic produce, without thinking that those who take contrary attitudes are doing anything objectionable.
Heath,
I don’t think certainty has much to do with thinking that others are wrong. Belief already seems to do that. If I believe that p, then I am committed to thinking that not-p is false and thus that those who believe that not-p are wrong. The expressivist is likely to use the idea of disagreement in attitudes and emotive ascent to explain this feature of evaluative beliefs. So, she might say the belief that x is wrong is a disapproving attitude towards x which disagrees with the attitudes of those who have an approving attitude towards x. She can also say that part of thinking that x is wrong is an additional disapproval of those who do not disapprove of x, which also accounts for why someone who holds that x is wrong is committed to the idea that others are wrong. Given that the expressivist already has this story about disagreement, nothing she says about certitude has to do any work here.
Hi Jussi, Heath’s last point impressed me – and I wonder if it has more going for it than you admit. As he points out, part of what it is to be certain that p is to be certain that others who believe -p are wrong about p. So certainty is (virtually?) always symmetrical with respect to these two different beliefs. It follows that if someone has a wholehearted attitude, on the expressivist account you propose, they should also wholeheartedly disapprove of others who exhibit a contrary attitude. But this seems unlikely – can’t you wholeheartedly hold an attitude, while also being at least somewhat tolerant of others who disagree?
Of course the same sort of objection should arise just when talking about belief – if I believe p, then I believe others are wrong when they believe ~p. So the expressivist has to provide a story about why my personal disapproval should go hand-in-hand with disapproval of other people who disapprove differently. But when we add wholeheartedness to the mix, the objection seems sharper to me, because the empirical claim seems less believable.
Simon,
this is interesting. I wonder if the expressivist has room to wiggle here. I think you can get the objection you have in mind already from Blackburn’s emotional ascent story. On his story, moral attitudes always in part entail disapproval of those who don’t share the attitude. And, it’s not clear to me that this is fatal. You can disapprove of someone’s attitudes whilst still being tolerant in the sense of not wanting to beat them up or anything.
But maybe we can drop that bit of the picture. If I say that x is good I have a positive attitude towards x. I also have to think that those who think that x is not good are wrong. But this means on the basic expressivist view that we are disagreeing in attitude. Nothing yet follows about my attitudes towards the disagreeing people. All that follows so far is disagreement in planning (hurraah x vs. boo x). Now assume that I hold my attitude wholeheartedly. On my account this means that I lack other beliefs and concerns that would leave reservations and conflicts in attitudes such that they would move me to begin reassessing my positive attitude towards x. But it doesn’t follow from this either that I would have a negative attitude towards the person who disagrees with me. And yet my attitude towards x is wholehearted and I wholeheartedly think he is wrong (i.e. disagree in my plans regarding x). Hope that makes some sense.
If Fred disagrees with me morally, the expressivist can say that our disagreement consists in a disagreement in attitudes. But if I *claim* that Fred disagrees with me morally, the expressivist can’t, surely, say that all I am doing is expressing an attitude that disagrees with Fred’s attitudes. For my claim has Fred, or Fred’s attitudes, as part of its content. I must therefore, on the expressivist account, be expressing some attitude *towards Fred, or Fred’s attitudes*, no?
Now when I have an attitude of moral disapproval toward some action, it is perhaps plausible to claim that this is always accompanied by some sort of disapproving attitude toward those who would disagree with me. But since being certain that p also makes one certain that anyone who believes ~p is wrong, and you want to interpret certainty as wholeheartedness, you must claim that being wholehearted in one’s disapproval of some action also makes one wholeheartedly disapprove of anyone who disagrees in attitude.
But it seems implausible (to me, anyway!) to claim that whenever I have a wholehearted attitude of moral disapproval toward some action, it is always accompanied by a wholehearted attitude of disapproval toward those who would disagree with me. Tolerance seems to be at least *some* consideration in favor of not disapproving at all of someone who holds a differing attitude, even if there is nothing at all, from my point of view, to recommend their attitude as one for me. Does this help?
It does help but I’m not sure I still agree.
I would be inclined to say no to the first question. If you say that Philadelphia is north of Boston, I can say that I disagree with you geographically. But it would be natural to understand this claim to be about Boston being north of Philadelphia rather than about you. If that’s right, then the expressivist could say that claims about disagreeing are likewise about the moral issues rather than about the attitudes of the other speaker. This would allow her to say that such claims express attitudes towards the objects disagreed about rather than attitudes towards the other person’s attitudes.
Also nothing in my account entails that being certain about moral judgment entails wholehearted condemnation of those who don’t agree.
Say that you like meat and I think with confidence that eating meat is wrong. On my account, it follows that I have a disapproving attitude towards eating meat such that my other concerns rarely lead me to question this disapproval. This is all it takes for my attitude to be certain. Let’s say that I accept (which I needn’t) that part of this attitude is also some disapproval of you who eat meat. On my account, it doesn’t follow that this latter attitude would need to be wholehearted. In fact, I can have plenty of concerns based on tolerance that make me often question this disapproval. Even if this is right, nothing follows from it about my certainty in still thinking that eating meat is wrong.
Jussi, in your example you say “I disagree with Simon geographically”. You then suggest that this claim is only about geography. But you overlooked an important part of its meaning, namely, the part that says (roughly) that Simon believes something that is geographically false.
Well, let’s accept that and take the corresponding moral claim:
Simon believes something that is morally false.
How should the expressivist think of this?
My thought was that she could understand the claim in terms of two elements:
1. Hurraah for something (whatever the original claims was) [and]
2. A report that Simon boos for that thing.
That’s enough to take into account that my utterance acknowledges a disagreement.
You seem to want to commit the expressivist to the following analysis:
1. Hurraah for something [and]
2. A report that Simon boos for that thing [and]
3. boo Simon’s booing.
It’s true that 3 might hold in Blackburn’s substantial expressivist proposal but I’m not sure mere non-cognitivism commits anyone to that.
I have been out of the loop over the weekend. Some responses to earlier comments:
Brad C,
I think that on one reading your proposal is very similar to Lenman’s. In that case I refer to our discussion of Lenman in the paper, and to Jussi’s earlier comments. On a different reading of what you mean by *belief*, your proposal is more similar to what we call ‘the attitudinal ladder proposal’. A view like this was discussed by Smith and we try to bolster his criticisms of it. Again, I will have to refer to the paper (sorry!).
Jamie,
You ask about cognitivist accounts of moral certitude. I concede that I don’t have a general theory of certitude. I take it that cognitivists can say that certitude for moral beliefs is *whatever certitude is for non-moral beliefs*. Expressivists of course cannot say that. They have to account for certitude in some other way, and they cannot say that certitude is strength of desire-like attitude, robustness, or degrees of belief about robustness. (That is pretty much the problem posed by Smith; I should say that the bulk of our article is devoted to arguing that extant responses–especially Lenman’s and Ridge’s–do not work.) Like I said, I do not have a theory of certitude. You say that representing certitude in terms of betting behaviour “doesn’t sound so good for moral propositions”. Can you explain why? Here is a crude version: S is more certain that p than that not-p if S prefers the lottery (€10 if p, €0 if not-p) to (€10 if not-p, €0 if p).
I am not saying that this gives a plausible representation of certitude. I am only asking why it would be less plausible when p is a moral proposition, e.g., that hedonistic utilitarianism is true.
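That crude version can be made concrete with a short sketch. This is only an illustration of the comparative test as stated (the credence values are hypothetical, and linear expected-value reasoning is assumed):

```python
# A minimal sketch of the crude betting representation of comparative
# certitude: S is more certain that p than that not-p iff S prefers the
# lottery (10 if p, 0 if not-p) to (10 if not-p, 0 if p).
# Illustrative only; 'credence' stands in for a subjective probability.

def expected_value(lottery, credence):
    """Expected value of a lottery (payoff_if_p, payoff_if_not_p)."""
    payoff_if_p, payoff_if_not_p = lottery
    return credence * payoff_if_p + (1 - credence) * payoff_if_not_p

def more_certain_of_p_than_not_p(credence):
    """True iff the agent prefers the bet on p to the bet on not-p."""
    bet_on_p = (10, 0)      # pays 10 if p, 0 if not-p
    bet_on_not_p = (0, 10)  # pays 10 if not-p, 0 if p
    return expected_value(bet_on_p, credence) > expected_value(bet_on_not_p, credence)

# An agent with credence 0.8 in p prefers the bet on p; one with 0.3 does not.
print(more_certain_of_p_than_not_p(0.8))  # True
print(more_certain_of_p_than_not_p(0.3))  # False
```

On this representation the preference flips exactly at credence 0.5, which is what makes it a *comparative* notion of certitude rather than a theory of what certitude is.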
Jonas,
I think the worry about betting and lotteries in the moral context is to do with whether the outcomes are decidable. I might be fully convinced that HU is false but still not be willing to take a bet on it. This is because there really would not be a good way to decide the outcome with whoever doesn’t share my belief. This is not the case if we test our degrees of belief about whether it will rain tomorrow by looking at how I would bet. We can always check the result independently tomorrow.
Jonas,
Well, what betting odds do you give on the proposition that hedonistic utilitarianism is true?
I am prepared to sell you a ticket worth €100 iff hedonistic utilitarianism is true. How much are you willing to pay for it?
Jussi,
If I am fully convinced that p I must think that p is decidable, must I not?
Jamie,
I wouldn’t pay much, but my esteemed colleague Torbjörn Tännsjö would, I guess, pay €100. That is, I am not at all convinced that HU is true, but Torbjörn is. (In other words, the betting odds reflect our subjective probabilities that p is true. But I have a slight worry that I am missing something in your question.)
The objection to the betting procedure had better not be that we can’t operationalize it. Then there would be no account of degrees of confidence for lots of propositions: A dinosaur once laid eggs just where I am standing; there is intelligent life on other planets; life on earth will survive for another billion years; there is a peasant in China thinking about the Beach Boys just now; etc. If the problem with betting is that it can’t be operationalized, we might as well go back to the verification theory of meaning.
Jonas,
Okay, so will you send me €5 if I send you a ticket worth €100 iff hedonistic utilitarianism is true?
Heath,
I’m not sure what you mean by ‘operationalize’. The way I use that word, I would have said that a betting procedure is an operationalization (yow!) of credence.
Suppose we are comfortable with bets on propositions even when nobody knows what evidential situation would count for one side or the other winning the bet. In that case, I don’t see what the problem is for an expressivist: he just says the same thing the realist says. If an expressivist says, “Oh, I’ll just go with Ramsey’s model of credence and I’ll extend it to moral judgments,” what is the objection supposed to be? My first reaction was that I knew of an objection, though it’s cumbersome to spell it out, but now I can’t see an objection that wouldn’t also be an objection to a realist view.
Jamie,
I’ll save my money, thanks. You say:
“I don’t see what the problem is for an expressivist: he just says the same thing the realist says.”
I don’t see how he can (consistently) do that. What is he betting on? HU being true? But for expressivists (the ordinary kind) there are no moral propositions.
Jonas,
“If I am fully convinced that p I must think that p is decidable, must I not?”
Yes, but this doesn’t mean that I think there will be an uncontroversial way of settling the bet. What can I do if the bookie, after taking the bet, says that I lost when it turns out that HU is true? With this worry in mind, I wouldn’t be irrational in not betting even if I was convinced and thought the question decidable, would I?
Jamie,
I think the worry is psychological. Assume that we get non-moral and moral betting patterns from some individual and we think of them as degrees of credence. In the non-moral case we can say something about the mental states that realise that level of credence – it could be something about the strength of the belief-attitude or something like that. The non-cognitivist too has to say something about which of the conative attitudes realises the degree of credence. The Smith argument is that they have nothing to say here. They cannot refer to the strength of desires because that accounts for how good we judge something to be.
Jonas,
Wise man.
For at least one ordinary expressivist, moral sentences have semantic contents (maybe they are sets of ordered pairs). Contents are propositions (aren’t they?). So, for at least one ordinary expressivist, there are moral propositions.
Now, I’m not perfectly happy with the idea of betting on those propositions, but so far I see no objection to betting on them that wouldn’t also be an objection for a realist betting on her favorite moral propositions.
Jamie,
“For at least one ordinary expressivist, moral sentences have semantic contents (maybe they are sets of ordered pairs). Contents are propositions (aren’t they?).”
So I guess.
“So, for at least one ordinary expressivist, there are moral propositions.”
I am not quite sure which ordinary expressivist you have in mind. (Incidentally, I take it that many expressivists–e.g. Hare–hold that moral sentences have semantic contents, but the question now is whether the content is moral.) Might it be Gibbard? In that case I guess the semantic content would be something like that some action is permitted/prohibited by a system of norms, N. (I don’t claim to have his theory in fresh memory!) But the proposition that an action is N-permitted/prohibited is not a moral proposition. Hence betting on it is not betting on a moral proposition. This is in fact close to the criticism KB and I press against Lenman in our paper. The criticism is that Lenman’s account fails to capture moral (as opposed to merely empirical) certitude. But this might not at all address what you had in mind…
Ah, good point about the possibility of a moral sentence having a content that is not itself moral. But no, that’s not what I meant.
The ordinary expressivist I had in mind was Gibbard (thus the example of a set of ordered pairs). As you know, Gibbard’s expressivism applies directly to ‘fundamental normative judgment’ and only indirectly to moral judgment, so I will switch gears to talk about the fundamental normative judgments (I hope that doesn’t make any significant difference to the current issue).
In Gibbard semantics, a sentence gets assigned a set of ordered pairs, ⟨w, h⟩, w a world and h a hyperplan. These are ‘factual-normative contents’. We agree that there is no problem calling them ‘propositions’.
There is indeed a distinction between normative and non-normative propositions, in this model. Purely descriptive propositions will have this property: if a world, w*, occurs in any pair in the set, then for every hyperplan h the set includes ⟨w*, h⟩. On the other hand, if a world occurs paired with some hyperplans but not others, the proposition is (partly) normative.
Does that clear things up?
Not quite. (But maybe that is because I don’t remember Gibbard’s theory well enough.) According to Gibbard (as I recall), for S to believe that HU is true is for S to accept a plan to maximize happiness in every circumstance. It might be possible in principle to spell out such a plan propositionally. But what then is S betting on? That she will actually maximize happiness in every circumstance, it seems. But again, that is not betting on a moral proposition, so it does not capture moral certitude.
Wouldn’t he accept, however, that there is a natural property which is the thing to do? Betting that hu is true would be to bet that hu captures that property. The idea would be that there are two ways to represent this property, a normative and a descriptive one, which correspond to the two kinds of propositions, normative and descriptive. In this case, to bet on a moral proposition would inter alia be to bet on the obtaining of a state of affairs that can also be represented in a non-normative way.
According to Gibbard (as I recall), for S to believe that HU is true is for S to accept a plan to maximize happiness in every circumstance.
Right (not quite, but good enough for our purposes).
But what then is S betting on? That she will actually maximize happiness in every circumstance, it seems.
No, that’s not it. That is a completely different proposition.
Let’s use hu for the hedonistic utilitarian hyperplan that calls for performing the happiness-maximizing option in every circumstance (and I’ll say ‘optimal’ for ‘happiness-maximizing’ hereafter).
HU (in the present simplification) would be the proposition that pairs every world with hu.
The proposition that is the content of “S will maximize happiness in every circumstance” is the proposition that pairs each world in which S maximizes happiness in every circumstance, with every hyperplan.
So, for instance, the first (plan-laden, aka normative) proposition will include a pair consisting of a world in which S performs lots of sub-optimal actions and hu, while the second will not; the second will include a pair containing a world in which S performs all and only the optimal actions and a strictly Kantian hyperplan, while the first will not.
(Darn, I tried to get the name of the hyperplan to be ‘h’ with a superscript ‘u’, but apparently that tag doesn’t work in comments.)
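Since these contents are just sets of pairs, the membership claims above can be checked mechanically. Here is a toy sketch (a made-up miniature model with two worlds and two hyperplans; it captures only the skeleton of the semantics, not Gibbard’s actual theory):

```python
# Toy model of Gibbard-style contents as sets of (world, hyperplan) pairs.
# The world and hyperplan names are invented labels for illustration.

WORLDS = {"w_optimal", "w_suboptimal"}   # S maximizes happiness / S doesn't
HYPERPLANS = {"hu", "kantian"}           # hedonistic-utilitarian / Kantian

# HU (in the simplification above): pairs EVERY world with the hu hyperplan.
HU = {(w, "hu") for w in WORLDS}

# "S maximizes happiness in every circumstance": pairs each world where that
# holds with EVERY hyperplan -- a purely descriptive proposition.
S_MAXIMIZES = {("w_optimal", h) for h in HYPERPLANS}

def is_descriptive(prop):
    """Purely descriptive iff each world occurs with either all hyperplans
    or none, i.e. the hyperplan coordinate makes no difference."""
    for w in WORLDS:
        hs = {h for (v, h) in prop if v == w}
        if hs and hs != HYPERPLANS:
            return False
    return True

# The two membership claims from the comment above:
print(("w_suboptimal", "hu") in HU, ("w_suboptimal", "hu") in S_MAXIMIZES)   # True False
print(("w_optimal", "kantian") in S_MAXIMIZES, ("w_optimal", "kantian") in HU)  # True False

print(is_descriptive(S_MAXIMIZES))  # True
print(is_descriptive(HU))           # False: HU is plan-laden (normative)
```

The point of the sketch is just that the two propositions come apart extensionally, and that plan-ladenness is a structural property of the set, not an extra ingredient.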
OK. But I don’t see what it would be to bet on such a proposition; still less do I see what it would be to win such a bet.
Precisely.
That’s how I feel about the realist story.
The realist story is just this: betting on the truth of HU is paying for a ticket that has a certain (monetary) value iff HU is true. (One might of course worry about what it is for moral propositions to be true in the first place but that wory concerns moral realism proper, rather than betting on moral propositions. If we understand the former we can understand the latter, it seems.)
For the Gibbard-expressivist things seem a lot less straightforward. What is it to bet on a ‘fact-plan world’? What is it to win such a bet? I believe I understand fairly well what a fact-plan world is but I don’t understand betting on one (and winning such a bet). This marks a contrast with the realist story, it seems to me.
Have we reached an impasse?
I guess it is an impasse.
I don’t understand what it is to win the bet, according to a realist. If the answer is, “You win if people ought to do the optimal act in every circumstance”, then that is just the same answer that the expressivist gives. Like you, I think that expressivist answer is unhelpful, but unlike you I think it is unhelpful when the realist utters the same words. (I might put my dissatisfaction like this: I asked for an explanation, and all I got was repetition.)
Just off hand, and by the way, I don’t see any independent issue about what it is to bet on HU. We understand that in just the circumstance that we understand what it is to win, I would think. (I haven’t thought about this a great deal, so maybe I’m missing something.)
I feel like I’m taking up too much of this comment space, but on the other hand I also feel I owe Jussi a response, so just briefly (kind of):
Wouldn’t he accept, however, that there is a natural property which is the thing to do?
Yes, if your “is” is the “is” of constitution. Normative properties, according to Gibbard, are constituted by natural ones.
Betting that hu is true would be to bet that hu captures that property.
Hm, I’m not sure — that is, not sure exactly what that means. We all know and agree on which natural property hu captures. The bet would be about whether this is the property that constitutes being the thing to do. And this is a normative question, of course.
In this case, to bet on a moral proposition would inter alia be to bet on the obtaining of a state of affairs that can also be represented in a non-normative way.
It’s hard for me to understand the bet that way (if I’m understanding your suggestion correctly). Are you thinking that Jack Smart and Tim Scanlon might have made a bet, and it would turn out that they were betting on a natural proposition, but they would disagree about which proposition they were betting on? This is not how I understood the bet.
“I don’t understand what it is to win the bet, according to a realist. If the answer is, “You win if people ought to do the optimal act in every circumstance”, then that is just the same answer that the expressivist gives. Like you, I think that expressivist answer is unhelpful, but unlike you I think it is unhelpful when the realist utters the same words.”
But coming from the mouth of a realist, these words express a moral proposition (no plan); coming from the mouth of an expressivist, these words express a plan. I understand what it is to bet on (the truth of) a proposition but I don’t understand what it is to bet on (the truth of??) a plan.
“(I might put my dissatisfaction like this: I asked for an explanation, and all I got was repetition.)”
I wonder if this is a legitimate request. What could an ‘explanation’ be from the point of view of the realist? (Since I am not inclined to accept realism myself I do not know. I do feel, however, that I have given you yet more repetition so maybe I should stop.)
“Just off hand, and by the way, I don’t see any independent issue about what it is to bet on HU. We understand that in just the circumstance that we understand what it is to win, I would think. (I haven’t thought about this a great deal, so maybe I’m missing something.)”
That might well be right.
Jamie,
that seems right. It doesn’t seem plausible but it might be how we should see things in the framework. I’ve always wondered whether people on Gibbard’s view refer to the same natural property of thing-to-doness or different ones.
Of course naturalist realists like Railton or Jackson would have a story to tell about moral propositions such that it makes sense of betting and figuring out who has won. Anyway, thanks for the discussion and clearly this is an area where more research needs to be done.
But coming from the mouth of a realist, these words express a moral proposition (no plan); coming from the mouth of an expressivist, these words express a plan. I understand what it is to bet on (the truth of) a proposition but I don’t understand what it is to bet on (the truth of??) a plan.
I must have lost the thread. I thought you agreed that the sets of ordered pairs were propositions (here). If you do agree, and you understand what it is to bet on a proposition, then you must know what it is to bet on those expressivist things. (They are not plans, but sets of ordered pairs; admittedly the HU one contains worlds in a pretty trivial way.)
If you don’t agree that the sets of ordered pairs are propositions, then we can backtrack and discuss that part.
Jamie,
I thought I understood but maybe I did not. I’ll try again. As I understand Gibbard, for me to think that HU is true is to ACCEPT a proposition that pairs world descriptions with a plan that I (in all contingencies) maximize happiness. (Right? I don’t have Gibbard’s books with me so I rely on memory and your previous comments.) I agree that this is a proposition (world description + plan). But it seems to me that this is a non-normative proposition (I see however that accepting it is to make a normative as opposed to ‘prosaic’ judgement, according to Gibbard). So I don’t see how betting on this proposition can capture moral certitude. (I feel I am repeating myself here so maybe I am still missing something.) I now think that this is my real concern; my earlier claim that I don’t understand what it is to bet on a plan might have been badly put.
Btw, certitude is, I take it, a matter of degree of belief. Accepting a plan is, I take it, something like forming an intention or a decision. Do you think that intention and decision come in degrees? It seems to me that unlike belief they do not.
Finally, I still don’t understand why you don’t understand the realist story. For the realist, to bet on HU is to bet that there is a (naturalistic or non-naturalistic) fact to the effect that HU is true. You win the bet iff there is such a fact. Like me, you might be inclined to believe that there are no such facts, but that is not to say that we don’t understand what it is to bet on them. You might still protest that this is not an explanation. But in that case I am not sure what kind of explanation you are after.
Jonas,
I agree that this is a proposition (world description + plan). But it seems to me that this is a non-normative proposition (I see however that accepting it is to make a normative as opposed to ‘prosaic’ judgement, according to Gibbard).
What could be at issue when there is a dispute about whether a certain set theoretic object is a ‘normative proposition’?
I can see two questions here. One is a question about how the formal model works: which sets of pairs are used as normative propositions, formally speaking? In Gibbard’s model, a normative proposition just is a plan-laden one. That’s how the model works.
Second, there could be a question about whether a certain formal object gets used to represent or index psychological states that we independently think of as normative attitudes or normative judgments. But your parenthetic addendum makes it clear that this is not your question; you agree that the state indexed by hu (the set of ordered pairs) is a normative judgment.
So, all the questions I can find are answered. What question is left? Gibbard’s hu, the content of HU, is a normative proposition.
Finally, I still don’t understand why you don’t understand the realist story. For the realist, to bet on HU is to bet that there is a (naturalistic or non-naturalistic) fact to the effect that HU is true.
Is that an explanation?
To me it sounds like a restatement, rather than an explanation. I think replacing a proposition, P, with the proposition that there is a fact to the effect that the proposition is true, is a pleonastic paraphrase.
You wonder what kind of explanation I am after. Good question. I don’t have any particular kind in mind, but one example would be some kind of functionalist characterization. That seems like an explanation (I don’t say it’s a true one).
Thanks, Jamie. Just a follow-up on the first question you pose (and answer). I worry that I don’t have a clear grasp of what a ‘plan-laden proposition’ is. (I find talk about plans confusing since ‘plan’, like ‘belief’, can refer either to state or to content.)
I thought that to plan is to take up a certain attitude (a ‘planning attitude’) to a certain proposition. So, for instance, I might take up the planning attitude to a world description and the proposition that I maximize happiness in all the contingencies described. This, I thought, is the propositional content of the plan. But in what sense is this content itself plan-laden? I thought plan-ladenness came in (so to speak) with the attitude of planning taken up towards the proposition.
So my question is not about whether normative judgements (in the ‘state’ sense of ‘judgement’) are plans (in the ‘state’ sense of ‘plan’); my question is about the content of these plans.
Have I missed something fundamental? (I’ll check Gibbard’s books when I get the chance.)
Here is a way that perhaps develops the objection (a cousin of the “wishful thinking” objection). If this is too complicated for the comments we can start another thread.
Start with the idea that credence of degree D in a proposition P can be operationalized (in Jamie’s correct sense) as the plan to purchase a lottery ticket costing up to $D that pays $100 iff P. Call such a plan C(P,D). In order to make this work, we have to impute some more plans to the agent which I will just describe loosely: the desire to have $100, non-diminishing marginal utility for dollars up to 100, non-risk-averseness on monetary bets, and a few other implausible assumptions. But let’s stipulate them as background plans or policies, B.
The idea, then, is that it is practically rational to move from the general plan B, plus credence of degree D in P, to the plan C(P,D). If one changes one’s degree of credence in P from D1 to D2, and one is practically rational, one will abandon the plan C(P,D1) and adopt instead C(P,D2).
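The move from B plus credence D in P to the plan C(P,D) can be put numerically. A minimal illustration under the stipulated background plans B (linear utility in dollars, no risk aversion); the function names are my own:

```python
# Sketch of the operationalization above: credence of degree D in P is the
# plan C(P, D) to buy a ticket paying $100 iff P, at any price up to $100*D.
# Assumes the background plans B: linear utility in dollars, no risk aversion.

def max_ticket_price(credence):
    """Highest price a practically rational agent with this credence pays
    for a ticket worth $100 iff P (the expected value of the ticket)."""
    return 100 * credence

def will_buy(credence, price):
    """The plan C(P, D): buy the ticket iff price is at most $100 * D."""
    return price <= max_ticket_price(credence)

# Revising credence in P from D1 = 0.8 down to D2 = 0.3 means abandoning
# C(P, 0.8) for C(P, 0.3): a $50 ticket that was acceptable is now declined.
print(will_buy(0.8, 50))  # True
print(will_buy(0.3, 50))  # False
```

The puzzle in the surrounding comments is then: if the change from D1 to D2 is itself only a change in plan, what makes the corresponding switch of ticket-buying plans *rational* rather than merely a further change of plan?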
If change of degree of credence is a change in a cognitive state, then we can understand why this is rational in a fairly straightforward way. It is a somewhat sophisticated form of instrumental reasoning, drawing on probability theory, etc. On the other hand, if change of degree of credence is a change in a non-cognitive state, i.e. a change in plan, then it is not so clear why this is rational.
Suppose I consider the arguments for vegetarianism and become more doubtful of the permissibility of eating meat. That is, let P = “It is permissible to eat meat” and my credence in this proposition has changed from a relatively high D1 to a relatively low D2. According to the expressivist I have changed my meat-eating plans. Now also according to the expressivist, who wants to preserve objectivity, whether meat-eating is permissible does not depend on my plans. So now the situation is this. I have a stable background plan B, about my desires for money. I have changed my plans about meat-eating. Why should I change my willingness to purchase lottery tickets that pay iff meat-eating is permissible, when nothing has changed which would affect whether meat-eating is permissible?
Jonas,
Normative judgment is (often? always?) a certain attitude (planning) toward some non-normative proposition, and a different attitude (accepting) toward a normative proposition.
Heath,
Interesting, I don’t have time to think about that properly now.
Let me try to help Jonas in responding to Jamie.
Here’s the realist’s story. The sentence ‘all non-optimal acts are wrong’ expresses a proposition, that is, a set of possible worlds. If you bet that all non-optimal acts are wrong then you win your bet iff the actual world is in this set.
Now, what would Gibbard’s story be? The sentence ‘all non-optimal acts are wrong’ expresses a proposition, that is, a set of world-hyperplan pairs. If you bet that all non-optimal acts are wrong then you win your bet iff … ?
Does the story have a happy ending?
The following comment comes from Justin D’Arms who is having trouble posting here just now.
Interesting thread, thanks Jussi.
As I understand it, your revised proposal is that while wholeheartedness may require more than certitude, certitude “means not having any reservations or conflicts in concerns that would move one to initiate or support attempts to stop having the initial concern or desire”. In your reply to Mark Schroeder you said that you think this specifies a feature of concerns that is what we mean by the certitude of evaluative judgments.
I don’t think it does, yet. It seems to me there are kinds of “reservations” or “conflicts” in attitude that could move a person to try to change his normative convictions without calling into question his certitude about those convictions. Consider this fictional but topical example: Jack is sure that waterboarding is morally permissible under certain conceivable circumstances. Jack is also a truthful guy, so he will admit this, if asked under oath. He knows that he will be asked about this at the hearing, and that when he responds truthfully he will not be confirmed as a new Supreme Court Justice. (He also thinks waterboarding is torture, by the way, and that it is illegal under international law, and he would not condone its use by his government if he were on the court—but he rightly assumes that the Senators will ignore this complexity in his view and that he won’t be confirmed.) Jack wishes that he did not have this moral conviction, because it is not going to make any difference to anything, and he really wants to be on the Supreme Court. His wish moves him to initiate or support attempts to not have the moral conviction about the permissibility of waterboarding. He looks for absolutist arguments against torture, but he doesn’t really expect to be persuaded by them, because he’s sure absolute prohibitions on anything are crazy. He watches videos of Christopher Hitchens having himself waterboarded in hopes that revulsion will induce a non-rational change in his moral convictions. So he’s got a ‘reservation’ about his belief that waterboarding is permissible, in a sense. It’s an attitude that moves him to initiate attempts not to have that belief. But it seems to me that this sort of motivation is distinct from uncertainty about the underlying normative conviction.
If that’s right, we’d need to rule out such cases of motivated attempts to change moral beliefs. But how, without appealing to a sense of “reservations” on which an attitude that motivates you to reconsider p only counts as a “reservation” if it diminishes your degree of confidence in p? Such an appeal, I take it, would leave the expressivist back where he started. (How bad a place that is I am still not sure.)
Thanks, Justin (and David, for posting the comment). Hope you guys are well!
Anyway, I’m not overly worried about the objection. It seems to show that my proposal needs a bit of fine-tuning, but not much. What moves Jack to initiate his process of attempting to give up his permissive attitude toward waterboarding is his desire to be on the Supreme Court. This seems like a version of the wrong kind of reasons objection.
To avoid this problem, I think I could add a simple proviso.
I could say that certitude means not having any reservations or conflicts in concerns that would move one to initiate or support attempts to stop having the initial concern or desire *in circumstances in which whether or not one has the relevant desire-like attitude would not affect the situation one is in*. This would make Jack’s desire to be on the Supreme Court irrelevant to the assessment of his certainty. It only makes him reassess his view in circumstances in which his having the desire-like attitude makes a difference to how he is treated.
Jussi, I agree this is like the WKR problem. But I think that’s a reason for suspecting that your solution is insufficiently general.
“I could say that certitude means not having any reservations or conflicts in concerns that would move one to initiate or support attempts to stop having the initial concern or desire *in circumstances in which whether or not one has the relevant desire-like attitude would not affect the situation one is in*.”
That might handle the example I used, but my thought was that there are many kinds of “reservations” or “conflicts” in attitude that could move a person to try to change his normative convictions without calling into question his certitude about those convictions. And I don’t think they all take the form of strategic concerns about the situation a person is in. For instance, someone might be certain that something is true and wish it were not. Or he might be certain that something is true but think that morally better people would not be certain—they manage to maintain an open mind even in the face of the considerations that he finds decisive. Each of these could motivate him to seek counter-arguments that might change his mind, and the latter could motivate him to seek non-rational change in belief. But that’s different from his not being certain.
I agree with the claim that not all of these are strategic concerns (which many wrong kind of reason problems are – that’s why I used my more general solution to that problem).
About wishes: note that on my definition the relevant reservations and conflicts must come from beliefs or concerns (which I understand as more stable desire-like attitudes). So wishes are ruled out by this condition.
On the other hand, in the moral case, the person does not have a reservation or conflict of concerns that leads her to attempt to revise her view. Her attitudes seem to be a coherent whole (well, except that she might feel more certain than she thinks she should), so nothing seems to be moving her to try to stop having the moral concerns she has.
With a bit of polish, this account seems to lead to pretty intuitive results.
Here’s a clearer way (I think) to push the point I was making above. Let a “bet” on some proposition P at odds D be any offer of the form, “if X pays Y $D, Y pays X $100 iff P.” Then to accept the proposition, “A bet on P at D is offered by Bookie to Bettor,” is to accept the proposition, “If Bettor pays Bookie $D, Bookie will pay Bettor $100 iff P.” Suppose one knows that Bettor has paid Bookie $D; then one is to accept the proposition, “Bookie will pay Bettor $100 iff P.”
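The betting arithmetic behind these examples (both the lottery tickets above and Heath’s bet at odds D) is the standard decision-theoretic one: a bettor with credence c in P values a ticket paying $100 iff P at $100·c, so accepting the bet at price $D is rational iff c ≥ D/100. A minimal sketch, with hypothetical function names (none of this is from the thread itself):

```python
def ticket_value(credence: float, payout: float = 100.0) -> float:
    """Expected value of a ticket that pays `payout` iff P, given one's credence in P."""
    return credence * payout

def accepts_bet(credence: float, price: float, payout: float = 100.0) -> bool:
    """A bettor accepts the bet iff the ticket's expected value is at least its price."""
    return ticket_value(credence, payout) >= price

# Jussi's meat-eating case: credence drops from a high D1 to a low D2,
# so willingness to pay for the ticket drops with it.
assert accepts_bet(credence=0.9, price=70.0)      # high credence: buy at $70
assert not accepts_bet(credence=0.3, price=70.0)  # low credence: refuse at $70
```

This is just the sense in which degrees of credence are supposed to show up in betting behavior; the question in the thread is whether the expressivist can earn the right to this picture for normative propositions.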
Suppose one already has the view that P is true. Then where P is a normative proposition, one is in a “wishful thinking” position according to expressivism; one’s plans are determining one’s predictions about whether Bookie will pay Bettor. But surely whether Bookie will pay Bettor is independent of any plans you have, unless (a) your plans influence the issue, e.g. you plan to force Bookie to pay, or (b) some common factor is influencing both your plans and whether Bookie will pay. I take it that (a) is not of much interest.
The realist takes (b): it is quite likely that the realist’s view on P will influence the realist’s plans, but (according to the realist) the realist’s view of P is influenced by whether P, which also influences whether the Bookie will pay. (I use “influence” here very neutrally.)
It is not clear that (b) is available to the expressivist. If not, and there is no other way for the expressivist to handle the “wishful thinking” problem, then one might conclude that accepting the proposition that a bet has been offered is problematic for expressivists. But if the expressivist cannot unproblematically accept that bets are offered on normative propositions, then she cannot make sense of degrees of credence in them.