It is often thought that one central advantage of expressivism over subjectivism is that expressivism can make sense of moral disagreements. Whereas according to subjectivism people end up talking past one another, expressivism enables speakers to express disagreements in attitude, as Stevenson famously put it. This orthodoxy has recently been challenged in two ways. Subjectivists have devised new ways of making sense of disagreements, and it has turned out that the traditional expressivist accounts of disagreement are more problematic than previously thought. The latter issue has become even more pressing because of the negation problem. The questions of when two people disagree and when one person holds inconsistent attitudes seem to be very much the same question, and so many expressivists have thought that by giving an account of disagreement they can also give an account of inconsistency. In a recent paper entitled “Disagreement” (PPR) and in a corresponding chapter on disagreement in his new book Impassioned Belief, Mike Ridge has tried to develop a new account of disagreement (which he calls “disagreement in prescription”) to address these worries. I want to argue below that this account fails because it commits the conditional fallacy.
Here’s an approximate version of Ridge’s view (the later complications will not affect what I’ll say below):
Two people, A and B, disagree in prescription about D’s Φ-ing in C just in case, in circumstances of honesty, full candor and non-hypocrisy, A would advise Φ-ing in C and B would advise Ψ-ing in C, where Φ-ing and Ψ-ing are incompatible.
Intuitively, this is a pretty attractive view. Think of a case in which Jane thinks that Mary ought to tell a lie in the circumstances she is in, whereas Jill thinks that Mary ought not to lie in her situation. Presumably, Jane would advise Mary to lie whereas Jill would advise her not to do so. This suggests that we could use these dispositions to give incompatible advice to make sense of what the disagreement between Jane and Jill consists in. For what it’s worth, I’d want to run the order of explanation the other way. I would want to say that Jill and Jane would give different advice just because they disagree about what Mary ought to do.
In any case, the reason I am sceptical about Ridge’s view is that it is formulated in terms of a counterfactual conditional, and philosophical theories formulated in this way usually fall prey to my favourite objection – the conditional fallacy. The right-hand side of Ridge’s account first places A and B into idealised hypothetical conditions. The theory then says that A and B disagree if they do certain things in those new circumstances. The problem is that placing A and B in the idealised circumstances changes them, and so what A and B do in the new circumstances will no longer be relevant to whether they disagree in the actual circumstances.
To see the problem consider the following two debates:
Ann: Harry ought to be honest to Larry.
Ben: That’s not true. He ought not to be honest to Larry.
Mark: Kerry should give advice to Pam.
Val: That’s not true. Kerry shouldn’t advise anyone.
Intuitively, Ann and Ben disagree, and so do Mark and Val. The problem is that Ridge’s account can’t make sense of their disagreements. Ann would advise Harry to be honest to Larry. In the real world Ben is against honesty. However, when we place Ben in the idealised world we have to change him so that he too is an honest person. Presumably honest people are for honesty. So in the idealised circumstances Ben too would advise Harry to be honest to Larry. This means that the right-hand side of Ridge’s view is not satisfied, and so Ann and Ben don’t disagree on his view in the actual world.
The same goes for Mark and Val. Mark would advise Kerry to give advice to Pam. In the real world, Val is against giving advice, but in the hypothetical idealised circumstances we have to make her willing to give advice, so we have to make her be for advising others. This means that in the idealised circumstances Val too advises Kerry to give advice. So, again, the disagreement between Mark and Val disappears on Ridge’s account.
Here’s what Ridge says about worries of this sort:
“These definitions are couched in terms of conditionals. One might worry that these should be read as counterfactual conditionals, and then object that in the nearest world in which a given person would offer advice of any kind, his state of mind would be quite different. This, though, is not the intended reading. The idea is rather that we keep the states of mind of A and B fixed and ask, given those states of mind, what each of them would advise D to do, if they had to advise one way or another, and moreover had to do so honestly, candidly, and without hypocrisy of any kind” (Ridge 2014, 187).
I don’t think this response works. Firstly, I am a little worried about being told that I should not read a counterfactual conditional as a counterfactual conditional. If the theory is not based on a counterfactual conditional, then why formulate it in terms of one? The second thing to note is that the fix at the end of the passage just replaces one counterfactual conditional with another. It says that when we place the agents in the idealised conditions, we make no psychological changes (we keep attitudes fixed) but rather change the external circumstances so that they have to give advice honestly. (By the way, the only way I can make sense of this requirement to give advice is that in the idealised circumstances there is now a threat: think of a demon insisting that you give honest advice or they will shoot you.)
The problem is that the conditional fallacy still doesn’t go away. Consider the following case:
Imagine that Freda’s state of mind is such that she is disposed to advise people not to tell lies except in situations in which she has to give advice one way or another and has to do so honestly, candidly and without hypocrisy of any kind. In these situations, she gets so nervous that she advises people to tell lies. Erin, in contrast, is always disposed to advise people to tell lies. Then assume that we are in an ordinary situation in which Freda doesn’t have to advise anyone even if she has an opportunity to do so. In this situation, Freda and Erin have the following discussion:
Freda: Olly should not tell a lie to Polly.
Erin: You’re mistaken. Olly should tell a lie to Polly.
Intuitively, Freda and Erin can sincerely say these things given their attitudes towards lying (Freda is against Olly lying to Polly whereas Erin is for it). And, intuitively, they disagree. However, the revised version of Ridge’s view doesn’t support this intuition. If Freda had to give advice honestly, she would advise everyone to lie because of her nerves (note that Ridge explicitly says that we keep their states of mind fixed – I assume this includes the disposition to get nervous under pressure and to advise people to lie in that case). And so, on Ridge’s view, there’s again no disagreement. So the conditional fallacy is still a problem. This makes me doubt that there’s a way to make sense of disagreements in terms of dispositions to advise in idealised circumstances – the conditional fallacy is too much of a problem.
If you hurry, Jussi, you can get to the Author Meets Critics session on Impassioned Belief, Friday 9AM Pacific Daylight Time.
Part of my talk will be about disagreement, and I’ll make a point related to yours, as a matter of fact.
Aww – I so wish I could just hop on the plane. Could you perhaps set up a skype link :-)? Anyway, have a great session, best wishes for everyone, let me know what the response is, and wish I was there.
Hi Jussi,
I think you over-read the idealization conditions in your first line of attack. We don’t imagine that the people are honest and for honesty in general, I don’t think; we just imagine that they are honest in this one case. And people who are seldom honest and against honesty can still be honest in one case.
Similarly, people who give advice in one case can be against advice and be characteristically reticent about giving advice. So I don’t see how your first two cases can work, given a charitable reading of Ridge’s view.
But maybe I am missing why you think Ridge is committed to the much stronger idealization conditions?
The later objections seem much stronger to me. It seems that by focusing on the advising rather than on thinking advisable, Ridge leaves himself open to worries of the sort you press. Could Ridge modify his account to focus on what someone would honestly think to be advisable?
Hi Brad
I hope you are well and thanks for this point, which is very helpful. I agree that the first cases are a bit artificial. They were meant to be exaggerations for illustrative purposes, but I now realize that I didn’t quite explain the point they were supposed to illustrate. So, here’s the point:
1. Why does Ridge need strong idealisation? He is trying to reduce disagreement to giving incompatible advice. For this reason, he needs extensional equivalence between the two. Without idealisation there is no hope of this. In real life, we don’t always advise solely on the basis of our view about the topic. Our personal interests might be better served, for example, by advising others to do something else, or we might just give advice that fits what we are supposed to say given the views of our society. So, real cases of incompatible advice don’t always reflect genuine disagreements. Idealisation is needed to get rid of distorting features like this so as to get a match between disagreement and advice, and to get rid of all such cases you need to idealise strongly.
2. You are right that the idealisation in terms of making people more honest could be fairly local, and for that reason the cases I described above wouldn’t arise. But all I need for the conditional fallacy and the reduction to fail is one case in which the change in the person’s psychological make-up through the idealisation process affects the relevant state that leads the person to give advice in the hypothetical. In the cases I described this happens in a rough and direct way, but on a functionalist picture this could happen much more indirectly. Also, making the honesty in question local in the idealisation process wouldn’t solve this problem, as the initial disagreement could be about just the act of giving advice in question (“I ought to give honest advice”).
3. I don’t think that thinking something to be advisable in the idealised conditions would fix the problem. You could develop the same kind of conditional fallacy issues there, but here’s a more basic problem. Thinking something advisable is thinking that you ought to advise someone in some way. This is a normative thought and one that people can disagree about. But what is at issue in such disagreements is just what the expressivist theory was supposed to explain in a reductive way.
Hi Jussi,
That makes sense.
Here is another objection to Ridge’s account as you gloss it and I am interested in what you think.
If you think someone ought to do something, then you think they have compelling reason to do it. Now assume there are cases in which people have sufficient but not compelling reasons. In some such cases people could disagree about which sufficiently rational action to advise, but they would not be disagreeing about what the relevant agent ought to do. So disagreements about ought claims cannot be reduced to different propensities to advise (or to think advisable, either).
If Ridge’s account of disagreement about prescription is supposed to ground an explanation of disagreements about ought claims, then this objection seems to apply.
Hey – thanks so much for the attention to my work guys! I want to chime in on this, of course, but need to think about it some more and also see what comes out of the exchange with Jamie. Mainly, though, I’m very distracted with conference stuff and last minute preparation for the session on the book – in short, if you can leave the comment thread on this open for a bit longer than you usually might, I’m hoping to chime in for some good discussion!
Mike
Hi Brad
I think there’s an objection to be made along these lines. I also think that there’s something the expressivist can say in response. Let me phrase the objection slightly differently and then say a bit about what that is.
Ridge is right to define disagreement in prescription in the way that he does: two people disagree in prescription when they would give incompatible advice about what some person is to do if they were in a certain hypothetical situation.
The controversial question is whether we can use disagreement in prescription to explain what is going on in cases in which people utter seemingly conflicting normative claims, where these claims are understood to express desire-like attitudes. As you correctly observe, this should go for all normative disagreements. So, consider the following disagreement:
Iris: That there is dancing is some reason for Ann to go to the party.
Uma: That there is dancing at the party is no reason for Ann to go to the party.
The problem is that Iris and Uma disagree even if they could both advise Ann to go to the party (or not to do so). This is because advice is an overall-level notion whereas their disagreement is about contributory reasons. This objection matches Dancy’s objection to Smith (and Gibbard, if I remember right).
Here’s a response using a thought from Gibbard, Blackburn, Kauppinen, Setiya and others. Ridge could say that Iris would advise Ann to take the fact that there is dancing at the party into account in deliberation in a going-to-the-party-friendly way, whereas Uma would advise Ann not to do so. There are probably problems with this account, but at least it is one way of using advice to make sense of the contributory. I think a more complicated story along these lines could be told to deal with your case (which is slightly hard to translate, as in Ridge’s picture people don’t really disagree about what to advise but just advise different things, which counts as a disagreement in prescription, which in turn is used to make sense of disagreements between people who accept different sentences).
Mike – no worries at all, and sorry about the bad timing. It’s been hard to find time to write this up, and I only got the book last week too. Have a great session at the APA!
Hi Jussi,
Thanks! I think that is interesting, but I am not sure how it would lead up to a response to my objection (of course it still might!).
I guess I should give a case? Here is a go…
Jim is trying to decide between going to law school and grad school in philosophy. He visits me for advice and then, later in the day, you. We agree that there are overall more good than bad features of each option, and agree there is sufficient reason to choose each. We may even agree on what features are good and bad and why. But, when pressed to advise one or the other, I advise law school and you advise grad school. This looks like a case in which we advise differently but do not disagree about what Jim has compelling reason to do. So we don’t disagree about what he ought to do or, maybe, what one could rightly prescribe.
Given what I say, it seems we can agree about what the contributory reasons are (and what he should take into account when reasoning). Anyway, not having read Mike’s book, I am ready to be told this worry is just off base!
Hi Brad
Thanks. The case helps a lot. Assume that we both agree that Jim has equally good reasons to go to grad school and to law school and that it’s not true that he ought to go to either. But I just happen to be disposed to advise him to go to grad school in the ideal condition, whereas you are disposed to advise him to go to law school.
Here’s what I think Ridge would say. In this case, you and I disagree in prescription. This is a technical term he has introduced, and I think it’s fine that we grant it to him. It just seems true that we disagree in this way – we advise different things.
Now, this would only be a problem for Ridge if disagreeing in prescription itself amounted to disagreeing about what Jim ought to do or what Jim has reason to do or the like. In that case, the account would create too many normative disagreements. But this doesn’t need to be the case. Ridge can say that incompatible advice is a necessary but not sufficient condition for having a disagreement about what someone ought to do.
So, take something like Gibbard’s account. On that view, thinking that Jim ought to go to law school means ruling out all hyperplans where you don’t go to law school in Jim’s shoes. Now imagine that we agreed that it’s not the case that Jim ought to go to law school and not the case that he ought to go to grad school. In this case, we are not willing to rule out the hyperplans where you don’t go to law school or the ones where you go to grad school in Jim’s shoes. And yet we can give different advice, as your case illustrates. So Ridge’s account doesn’t create too many ought-disagreements here even if we disagree in prescription.
But now imagine that I think that Jim ought to go to grad school and you think that he ought to go to law school. On Gibbard’s view, I rule out the law school hyperplans and you rule out the grad school hyperplans. The crucial question is: why do we disagree? Why is ruling out different hyperplans a disagreement? After all, if I don’t plan to eat chocolate but you do, we don’t disagree. Ridge’s idea is that we disagree because we would give different advice. And even if Ridge’s view isn’t anything like Gibbard’s, on this story the explanation is that, because I have ruled out all the law school plans, I would advise Jim to go to grad school, whereas because you have ruled out all the grad school plans, you would advise him to go to law school – and this answers the question of in virtue of what we disagree.
Hi Jussi,
Thanks! I read Mike’s article last night and realized he was not using ‘prescription’ the way I was thinking he might – I just got confused by the initial post. Thanks for clearing this up in your last post. Too bad we are not going to the session this morning.
Have fun, Mike!
Thanks again for the discussion of my account of disagreement!
The most important thing, I think, is to get clear about the content of the account, given that it is not intended as a counterfactual. As you point out, I did say this in the book, but I think it is fair to say that I did not provide enough positive guidance about how the conditional is supposed to be read, thus making counterfactual readings of various kinds seem like “the only game in town” – and then the counterexamples are legion, I agree. So let me try to clarify that.
The way I am thinking about it is this. Suppose we are wondering whether A and B disagree at time t about C’s Φ-ing in D. Start with a full description of the psychology of A and B at time t – in particular, all of their propositional attitudes. Put these descriptions into the antecedent. Then add to the antecedent that A and B advise C about Φ-ing in D. Further add that their advice is fully honest, candid and non-hypocritical.
Once the antecedent includes all of these details, if you can derive from the antecedent that A would advise Φ-ing in D and B would advise Ψ-ing in D, where Φ-ing and Ψ-ing are incompatible, then A and B disagree about C’s Φ-ing in D – and the conditional is true. By ‘derive’ I really mean infer on the basis of a valid deductive argument which takes only the information in the antecedents plus any relevant conceptual truths (in particular, conceptual truths about the constraints imposed by full honesty, full candour, etc.) as its premises.
Because this is a conditional whose truth is fixed by what can be derived in this way, rather than by what would be true in the nearest possible world in which the antecedent is true, it isn’t a counterfactual conditional. It is possible that I shouldn’t have used a subjunctive form to convey the theory, since that does naturally invite the counterfactual reading, I guess, my caveat notwithstanding.
Now let me turn to the counter-examples.
In the case of Ben and Ann, the fact that Ben is against honesty doesn’t mean he wouldn’t advise honestly in the idealized case. In fact, I’ve stipulated that he does so. Does this make the antecedent of my conditional inconsistent? No – someone who disapproves of honesty can still advise honestly. He will just disapprove of what he is doing when he does so.
In the case of Mark and Val, I’d make exactly the same move. Being against advice doesn’t entail that you never give it. I suppose there is a more interesting worry about the antecedent being inconsistent in this case. In particular, one might reasonably worry that it is inconsistent because of the ‘non-hypocrisy’ constraint – someone who is against advice but offers it is therefore hypocritical. However, I think there is a useful distinction to be drawn here in terms of whether advice is hypocritical because of its content and whether it is hypocritical for some other reason – e.g. because the person disapproves of offering advice or disapproves of speaking at all for that matter, or whatever. I have in mind the concept of hypocrisy which depends on content. I’d make a similar reply to an analogous version of the worry in the honesty case.
The Freda case is irrelevant because the conditional is not counterfactual. The ‘have to’ is not meant to signal any sort of coercion by another agent – a demon or whatever. It was the ‘have to’ of logical entailment – given everything else stipulated in the antecedent, it has to (logically) be the case that the advice given is incompatible.
Finally, I agree with Jussi’s reply on my behalf to Brad’s objections.
OK, let the follow up objections commence!
Mike
Hi Mike
thanks for these clarifications and responses. As you might guess, I don’t think these come close to addressing the worry. I mainly have two reasons for this:
1. About the counterfactual conditionals. In the account, you still talk about there being a conditional on the right-hand side. In addition, the antecedent as you describe it specifies an idealised situation, which is a counterfactual. Putting a conditional and a counterfactual together gives us a counterfactual conditional. Now, you are completely entitled to say that we should not evaluate this counterfactual conditional with the standard Lewisian semantics for counterfactual conditionals. It is fine to say that the truth of the conditional is determined by what can be a priori derived from the antecedent. I just don’t think this avoids the objection, even if it might bring some other problems with it. I think it will be pretty hard to derive a priori anything from a description of mental states and the fact that the person is to advise something. In fact, I don’t think we can get anywhere without adding information about psychological laws – just the kind of thing which the closest world would bring with it. In any case, this isn’t the way to avoid the problem.
2. So, let’s go back to the examples. I would like to bring up a stronger version of Mark and Val to show why the response doesn’t deal with the problem. Imagine that Val’s most fundamental concern is that no one advises anyone. She feels as strongly about this as we feel about boiling newborn babies or wanton killings. This concern is what Frankfurt and others would call a volitional necessity – as long as she remains the same person, she just can’t get herself to advise anyone on anything. In her case, really being against advice means not giving any advice. Giving advice would mean giving up what is sacred to her.
Now, you stipulate that Val gives advice. How could this be? One option is that the antecedent is inconsistent, in which case any conditional about Val’s advice will be trivially true. The second option is that Val’s psychological makeup isn’t responsible for her advice but rather a miracle is, but in that case I have no idea what she would advise. This would be just random. The last alternative for getting Val to advise is to make her do so by changing her attitudes about advising people. In this case, given how much we have to change her, she’ll have little relevance for what the real Val disagrees with people about.
I do have a deeper diagnosis of what is going on. We start from the question of when two people are in mental states in virtue of which they disagree. I think giving advice is a particularly bad way of making sense of this, because whether you advise someone doesn’t depend on just the mental states we are interested in but rather on your whole psychological make-up, given the standard functionalist and holist picture of the mental and of action. The problems we run into with the kinds of conditionals above just illustrate this.
Hi Jussi:
Well I’ll try harder to at least come close then!
I’m not sure whether your first few sentences in this most recent post are an objection to the form of words I used to formulate my position originally, or to the content of the position as I have now clarified it. It seems like the former, really, so I’ll let that pass unless you say otherwise. That is, it seems like you still think the view is open to counter-example even when interpreted as I intend it, rather than thinking my intended interpretation is somehow incoherent, as opposed to simply being difficult (for me, anyway) in a way that is transparent.
In your (1) you do raise an important and interesting objection, though. You say, “I think it will be pretty hard to derive a priori anything from a description of mental states and the fact that the person is to advise something. In fact, I don’t think we can get anywhere without adding information about psychological laws – just the kind of thing which the closest world would bring with it.”
Perhaps, but I had thought I packed so much into the antecedent that it wouldn’t be so hard. For example, I think it is a priori true that anyone who intends to Φ in C but who advises someone else to refrain from Φ-ing in C is thereby hypocritical. I also think that it is a priori true that anyone whose beliefs and intentions rationally commit them to Φ in C but who intends not Φ-ing in C is thereby hypocritical. These and other a priori theses make me cautiously optimistic that the account as I intend it can generate the right results.
OK, now let me discuss the strengthened Val case. I’m assuming that having an overriding commitment or super-strong desire not to Φ, and even treating not Φ-ing as sacred, is logically consistent with someone’s still Φ-ing in a moment of weakness, which is all I need. This may mean that we implicitly have to make some collateral assumptions simply to make the scenario internally consistent (e.g. that in the idealized advice context the agent has a momentary but overwhelming urge to give advice or some such) but I don’t think that is obviously problematic – we are just adding whatever is entailed by the description of the scenario, and part of the description is that the person gives advice, and advice is an action (a speech-act) and so must be caused in the right way by the relevant bits of the agent’s psychology. I also don’t think this makes the account into a counterfactual conditional theory, though I can see why it might look more like one.
I suppose you could now go for an even more gimmicky counter-example which builds something like ‘desires not to give advice so strongly that no contrary desire could ever overwhelm this desire’ but I am then not clear that this is a coherent description. Desire strength doesn’t seem to have an upper bound, so a desire that could never be overwhelmed might be something like a specific natural number n such that there could be no other natural number larger than n.
Lastly, on your diagnosis: I agree that whether anyone actually gives a piece of advice will depend on a number of factors. I don’t think that obviously undermines my account given that I get to stipulate that advice is given and ask whether that (and other assumptions) entail something.
OK: let’s see if you think I’m coming any closer!
Mike
Hi Mike,
Is this account supposed to apply to all sorts of disagreements? For example, is it supposed to apply in this case:
Jim: Raquel, you are morally obligated to keep that baby.
Raquel: False! I am under no such moral obligation.
Brad: Um, yes?! I have an account of discourse on truth and falsity that may be relevant here too, though – it is chapter 7 of the book.
Hi Mike
Thanks for this, and my apologies for not having been able to respond earlier. Yes – this is much better, and it does come close to alleviating the kind of worries I still have.
Of these two:
1. “For example, I think it is a priori true that anyone who intends to Φ in C but who advises someone else to refrain from Φ-ing in C is thereby hypocritical.”
2. “I also think that it is a priori true that anyone whose beliefs and intentions rationally commit them to Φ in C but who intends not Φ-ing in C is thereby hypocritical.”
It’s true that you’ll need this type of bridge-premise to get the required derivations going for the conditionals. Personally, I don’t find either obviously true (I could think of counterexamples, but that wouldn’t get us anywhere). In particular, I don’t think these are conceptual truths about hypocrisy.
About this:
“I’m assuming that having an overriding commitment or super-strong desire not to Φ, and even treating not Φ-ing as sacred, is logically consistent with someone’s still Φ-ing in a moment of weakness, which is all I need.”
Hmm. I don’t quite follow you here: if I am against boiling newborn babies, I would in a moment of weakness do so? Without anything changing about my mental states? With me still being rational and responsible? I just don’t think this kind of assumption should be an essential part of a theory of disagreement.
“This may mean that we implicitly have to make some collateral assumptions simply to make the scenario internally consistent (e.g. that in the idealized advice context the agent has a momentary but overwhelming urge to give advice or some such) but I don’t think that is obviously problematic – we are just adding whatever is entailed by the description of the scenario, and part of the description is that the person gives advice, and advice is an action (a speech-act) and so must be caused in the right way by the relevant bits of the agent’s psychology.”
Right, but this is exactly what creates the problem. In order to make the antecedent true, we change the agent’s psychology. This means that what she advises will not always be relevant to what the real, unidealised person disagrees about. This just amounts to admitting that your view suffers structurally from the conditional fallacy.
I think it’s becoming clear where the disagreement is. This has been very helpful – so I am extremely thankful. Hopefully, I have a chance to write this up at some point in a more careful form.
Jussi: On your last point: My idea is that we don’t *eliminate* any of the agent’s actual mental states, but we may need (implicitly) to *add* a momentary desire of some kind to explain why they offer advice at all. Because it is a momentary desire, like an immediate urge, to do something at that very moment, I’m assuming it won’t lead to conditional fallacy sorts of problems. In particular, I don’t think a momentary but overwhelmingly strong urge/desire constitutes an agent’s take on the ‘thing to do’ either at the moment of action or in terms of her more general views about what people should do in various other scenarios. The agent might, after all, be deeply alienated from this momentary whim – and indeed this seems especially plausible in the scenario we are considering, in which she has an opposed pro-attitude which is elevated to the level of identity-constituting sacredness! I guess if there is a counter-example arising out of this specific sort of assumption then I’d like to hear more about the details of how it would go. There may well be one, but it isn’t obvious. Perhaps if you do write this up as a full paper or whatever you can send a draft to me and it might have the relevant counter-example and we can take it from there? I agree that this exchange has clarified exactly where and in what way we disagree, which is very helpful.
Oh, also: I grant, of course, that I am committed to certain truths being not only truths (already controversial in some cases), but a priori and conceptual truths, and that these commitments might reasonably be disputed. But I’m willing to defend those – that would be another debate, though.
Jussi: A further thought, just reflecting on the sort of counter-example naturally suggested by my refinement and elaboration of the theory.
Perhaps the right way to think about the case of someone deeply opposed to giving advice might be that they are under the power of some drug which compels them to give some advice or other. Given that they *must* give advice (given this drug) the question is what would be the minimally hypocritical advice about giving advice they could give. I think the answer, ‘Don’t ever give advice if you can help it’ would be less hypocritical than ‘give advice sometimes’. This advice is therefore non-hypocritical in the sense of being the least hypocritical thing they could do, given the overpowering influence of the drug. Hmm – perhaps given the pressure of this sort of case I should switch from ‘non-hypocritical’ to ‘minimally hypocritical’ – which in most cases will amount to the same thing, but not in your test cases. That actually already sounds like a kind of progress, even if you aren’t satisfied with the resulting theory, so thanks, Jussi!
Actually, in the drugged-up scenario, the subject could be fully non-hypocritical by giving the following advice: “Don’t give advice if you have any choice in the matter.”
As a footnote, for those of you who may find this sort of theory undermotivated (especially given the pressure from objections like Jussi’s!), in the rest of the original paper I try to argue that alternative theories of disagreement (in general, and not just ‘in attitude’) face deep and generally unappreciated problems – hence the pressure to somehow do better. If I’m being fully candid, honest and non-hypocritical, I’d add that I am much more confident of the negative part of that article than I am in my positive proposal, much less in the details of that proposal. As usual it is easier to critique than come up with something clearly better. Also relevant for those interested in this sort of debate – Gunnar Bjornsson has developed a theory of disagreement which is in some ways like mine but some ways different – worth checking out. Perhaps he can reap the benefits of my view without facing worries about the conditional fallacy! I need to think more about his view.
Thanks, Jussi, for this interesting thread! I have a question about the entailment that Mike sees as explaining disagreement in prescription. I think banning the use of the subjunctive would help here. Once we do that, if I’m following, we’ve got the following view: To see whether A and B disagree about advising C to phi in D, we compare two inferences. To construct the first, we list the truths about A’s psychology and then add the premise that A honestly, etc. advises C about whether to phi in D. Then we check whether or not “A advises C to phi in D” is a priori entailed by those premises. Suppose that it is. We then assemble similar premises for B and check whether or not “B advises C to phi in D” is a priori entailed from those premises. If it is, then A and B agree in prescription, and if it isn’t, then they disagree.
I have a few questions, if that’s the right picture, but I’d like to make sure that’s the right picture first.
Hi Jan: That is close to right, but for the ‘if it isn’t then they disagree’ bit. Disagreement requires more than mere non-agreement. A and B disagree if A advises doing something in D which is incompatible with what B recommends doing in D.
Also, something that arose in my exchange with Jussi is the following: In certain puzzle cases (e.g. ones involving advice about whether to give advice by someone who deeply disapproves of all advice giving), we may have to make some auxiliary assumptions (e.g. the overwhelming urge or the being in the grip of a drug scenario I sketch above) to make the conjunction of all of the relevant truths as much as intelligible. In such cases I want to insist that we do not *subtract* anything from the agent’s psychology and only add whatever minimum is necessary to make sense of the giving of advice in the relevant way. The fact that I’m using something like ‘the minimum addition’ to make sense of the scenario under these oddball cases can make the view begin to look more like a counterfactual conditional (‘minimal further assumptions’ can look a lot like ‘closest relevant world’, I admit), but I think it should be clear why it isn’t really a counterfactual, but an entailment test with some guidance for how to make sense of cases in which the premises being tested might seem mutually incompatible (e.g. A hates giving advice with an enormous passion, but A gives advice at the same time).
Hope that helps!
Mike
Sorry everyone for being slow with responses, and I am very thankful for this discussion.
Brad,
are you thinking about disagreements about moral permissibility in particular? Is the idea that Raquel would not advise herself to keep the baby if she thinks that this is merely permissible, and so her advice would not be incompatible with Jim’s advice?
Good point. I think Mike would have to say that in this case the advice is for third parties and concerns whether they are to blame or even to stop people from doing something. When Raquel says that keeping the baby is morally permissible, this seems to entail that she would advise other people not to stop her, whereas Jim would advise them to do so, which would create the required disagreement in prescription.
Mike,
yes – seems like we are getting closer. Adding momentary urges and drugs seems to be the way to go. I don’t think there’s a difference between the two, as presumably the drug would create the desire in any case. I just worry that these will bring the conditional fallacy worries back as long as we find a suitable subject matter for the disagreement to be about. In general, ‘conditional fallacy’ is just a name for a method of generating counterexamples to theories with certain features (like your theory, arguably).
With the urges and drugs to give advice in the individual case, we can consider advice the agent gives to herself about giving advice. There I am fully against giving any sort of advice and I am debating with someone whether I should advise myself.
I say: I should not advise myself.
You say: No, you should advise yourself.
In testing the conditional in your account to see whether we disagree, in the antecedent I suddenly get a strong urge to give myself advice about whether or not to give advice to myself, whilst my other mental states remain the same. Now, in order for you to get the disagreement, the non-hypocritical advice would need to be for myself not to advise myself. But, given that I really badly want to advise, it seems like it would be equally non-hypocritical for me to advise myself to advise myself. After all, I really want to advise, given the new strong desire I have. Would it be less hypocritical to advise myself not to advise? I am not sure I know how this could be measured.
Thanks for the hint on Bjornsson’s theory. I think from a neutral perspective in the bigger picture, it seems like a strike against expressivism if you are right about your objections to the previous expressivist treatments of disagreement *and* new alternatives appear equally problematic.
Hi Brad and Jussi: This is something Jamie Dreier pressed me on at the author meets critics session, actually. [I think I didn’t get what you were worried about because you framed the issue in terms of falsity in your original post, Brad – assuming Jussi’s gloss is accurate, that is!]
What I say in the book about this is that such disagreements are understood in terms of the person who thinks the action permissible being poised to advise keeping the relevant action ‘on the table’ when deliberating about what to do in such circumstances – not to rule it out ex ante, whereas the person who thinks it is required is poised to insist that the option be off the table in the relevant sense.
I’m no longer sure I want to say exactly that, though, in part because of the concerns Jamie raised. I’m mulling over some alternative explanations of the disagreement which are compatible with my account of disagreement just now, but I want to think more about which of those alternatives I think is most promising.
On your proposal on my behalf, Jussi: Although I am sympathetic to the idea that duties entail the blameworthiness of those who fail to live up to them (barring excuses), I’m not sure I want to build that into the semantics. Also, this won’t be nearly so plausible for a range of non-moral deontic necessities, in my view, not to mention judgments about what one has most reason to do, or ought to do, but is under no obligation to do. Think, e.g. of supererogation.
Hi Jan
thanks. Mike has been very helpful here and he’s responded to you already. Just for what it’s worth, the view really is formulated in detail only above and less so in the book, so this is all very interesting.
Hi Mike
thanks. I didn’t mean the account really as a full-blown theory but rather more as a placeholder. I take it that the issue is that we use a vast range of normative predicates (ought, right/wrong, good and bad, permissible and required, reasons of many kinds, thick notions…). I take it your view is that whenever there is a disagreement in these terms we can always explain it as a disagreement in advice/prescription. Now, what notions like reasons and permissible and so on mean is that the advice can’t always be about whether to do the action itself. So, your account requires that we can always find something other than doing the action itself with respect to which the people would give conflicting advice. There’s a range of alternatives here, from advising to take things into account in deliberation in different ways, to advice about reactions to the action, to I don’t know what. It is just a question of creativity to find the right conflicting advice. I might worry that this will be ad hoc and create unwanted normative commitments (as your objection to my sketch shows). There might also be something like the normative attitude objection lurking about (does this advice amount to disagreeing about that?).
Jussi:
My thought is that the literally *overwhelming* desire makes one’s behaviour non-voluntary (though still intentional – otherwise it wouldn’t count as advice). That is why the drug (or brainwashing, or whatever) might help make the idea more vivid. And I had thought that the charge of hypocrisy was less potent when one really has no choice about whether one engages in the behaviour. I could add ‘culpable hypocrisy’ if you think that helps, I guess. This is also why I contrasted the desire with one’s normative perspective in terms of the momentary desire being one from which the agent is radically alienated, etc.
I admit that there is still a sense in which anyone who gives the advice ‘never give advice’ is thereby hypocritical, to some extent. But if the agent had to choose between giving that advice and instead giving the advice ‘sometimes give advice’, the former would be less hypocritical because of the alienation, non-voluntariness, etc.
But I then thought that there was a way that the agent could be entirely absolved of hypocrisy, namely if they simply advised ‘Never give advice if you can help it’ – the thought being that the overwhelming desire/drug/brainwashing whatever means the agent can’t help it and so there is zero hypocrisy in play.
Can I make this last move in the case you’ve just offered? I think so. Because the desire is overwhelming, I act on it, but because I am alienated from it, and because my action is not entirely up to me, it is not hypocritical. I’m assuming I can still choose which advice to give about giving advice, but that I must give some advice or other. If I then advise ‘never give advice if you can help it’ then I’m not being hypocritical in the sense of not living up to the standards I’m laying down for others – because when I give advice, I precisely can’t help it. I’m also not hypocritical in the sense of telling others to do something in a circumstance when I fully intend not to do that in their circumstances (a momentary urge is not an intention in the intended sense, whereas my normative perspective is partly constituted by an intention).
I do, of course, do something which conflicts with one of my desires, but it is a desire from which I am deeply alienated, do not endorse, was implanted against my will etc. I don’t think we’d pre-theoretically think of this as any kind of hypocrisy at all. When the unwilling addict who still strongly desires heroin but somehow manages to resist temptation says ‘don’t do drugs’, I don’t think his addictive desire would make us say he is being a hypocrite at all – do you? That is the kind of alienated, momentary desire I have in mind, and why I thought this might help with the conditional fallacy.
On this comment from Jussi: “I think from a neutral perspective in the bigger picture, it seems like a strike against expressivism if you are right about your objections to the previous expressivist treatments of disagreement *and* new alternatives appear equally problematic.”
That is true, but note that at least some of my objections were actually not just to previous *expressivist* treatments of disagreement, but to alternative theories of disagreement *in general* including cognitivism friendly ones which hold that disagreement is simply a matter of having beliefs with incompatible contents. See especially sections 1.2 and 1.3 of the book – though you might not find those objections compelling, of course. Part of the point of that chapter (and the PPR article it is drawn from) was that our concept of disagreement is actually richer than our concept of inconsistency. So if my objections are all compelling, then, from a neutral point of view, *everybody* has a problem, and not just expressivists!
[sections 1.2 and 1.3 of chapter 6, that is!]
Hi Mike,
This might be another red herring, but here goes!
I am wondering about cases in which people think there are reasonable disagreements. In such cases it seems like A and B could disagree about what X is morally obligated to do, but both advise the same thing under your specified conditions.
Here is a sample that might work…
Raquel: Jim, do you think I should keep the baby?
Jim: Well, you know I am a devout Catholic and I think that it is morally wrong to do anything else. But I also think this is a case in which there is room for reasonable disagreement, and I think you have to make up your own mind and follow your conscience.
Raquel: Thanks, Jim. What do you think, Stan?
Stan: Well I disagree with Jim about what you are morally obligated to do here. But I advise the same thing he does, and for the same reason.
Thanks, Mike. I wasn’t sure whether the bit about disagreement was right.
Here’s how I think of testing entailments like this: We check to see whether every world that makes the premises jointly true is a world at which the conclusion is true. Is that how you see things as well? And then the entailment counts as a priori just in case I don’t need any further premises in order to be in a position to see that there is such an entailment?
Hi Guys, Oh, and sorry about not following up on the earlier exchange!
Yeah, the falsity bit should have just been left out, Mike. And thanks, Jussi, for suggesting the nice way of developing my opaque post into a specific point!
Brad: No red herring here, this is an interesting case – and for that matter, once I understood what you were driving at I think what you raised earlier is a very interesting issue too, for that matter!
I think what Jim says in this dialogue is not transparent, though people do say things like this. I’m more than a little tempted to say that there is an element of hypocrisy in saying that something is morally wrong (perhaps seriously wrong, as in this case), and being deeply committed to never having an abortion (easy for someone named Jim, I admit, to live up to, most likely!), but at the same time advising someone to do whatever she thinks (on reflection) is right. And if this is hypocritical, then this isn’t something Jim would say, given my idealizations.
Another way of reading a comment like Jim’s is as a polite way of refusing to give any real advice, though then it won’t be what we have in my idealized scenario, where it is stipulated that the person does give advice about the option on the table.
The comment about reasonable disagreement also suggests a recognition by Jim of his own fallibility, which might mean his credence in ‘abortion is wrong’ is less than 1. I think I have implicitly been working with an idealized conception of belief as all or nothing in discussing examples, and this is partly because my views on fundamental normative uncertainty are themselves in flux right now – in light of an objection to the view I put forward in my ‘Best of Both Worlds…’ paper from Bykvist and Olson in their very nice co-authored piece on that topic. I think it will probably be a little hard to know what to say about cases like this (insofar as they implicate such uncertainty) until I sort out what I think about the nature of normative uncertainty itself on my view. I’ll need to come back to this when I get a clearer view about that stuff.
Here is another gloss on the case: Jim is really offering two pieces of advice, one first-order and one second-order. The first order advice is ‘don’t have an abortion’ but the second-order advice is ‘don’t decide whether to have an abortion based on anyone else’s advice – think for yourself!’. There is a tension between these two pieces of advice, I admit, but then there is a kind of tension in Jim’s speech-act, as I suggested earlier, at least imho. On this reading (which I now think I like the best), Jim does disagree with Stan about the first order question about whether to have an abortion, but they seem to agree about the second-order question.
Jan: That would be enough for a priori entailment, yes, though I actually need something even stronger – conceptual entailment – I don’t want to rule out the synthetic a priori here.
Ok, thanks. I have two questions that are in the neighborhood of Jussi’s. First, consider a case in which among the truths about A’s psychology is the truth that A is a dishonest hypocrite. The premise that A honestly and non-hypocritically advises C about whether to phi in D can’t be consistently added to the relevant set of truths. To get consistency, we’d need to alter the truths about A’s psychology. (So, you’re right that being pro-dishonesty is compatible with being honest; being dishonest isn’t.) There’s a parallel issue in a case in which a truth about A’s psychology is that she doesn’t give advice (or maybe it’s a truth about her psychology that she just doesn’t give advice about whether to phi in D). I don’t see a fix for that, but perhaps there’s one I’m not thinking of.
Second, so long as A doesn’t honestly, etc. advise C, etc., the actual world won’t be among the set of worlds at which the premises are jointly true. So, all of the worlds we’re checking to see whether “A advises C to phi in D” comes out true are counterfactual worlds. Perhaps they aren’t all nearby worlds, so there’s some sense in which, in evaluating the inference, we’re not evaluating an ordinary counterfactual statement in English (at least they’re not on one view of counterfactuals). But we are checking subset relations among two sets of worlds each of which is counterfactual.
I’m not seeing why this would be a problem, though. Presumably, some psychological truths are dispositional. Since we generally can’t identify the underlying bases for those dispositions, counterfactuals will often be our best means to do so (finkish difficulties aside). So, that A would advise phi-ing were she to speak to the issue of whether to phi in C seems like a fact about A’s actual psychology – or, at any rate, it could be, so long as the truths about her psychology are consistent with her giving honest, etc. advice about whether to phi in C. In cases in which they aren’t consistent, it does look to me like we’ll need to alter those truths to get the premises coming out true, in which case we may get an entailment of the right kind, but not one that characterizes A’s actual psychology.
Thanks Janice.
On the first question: I don’t actually agree that being a dishonest hypocrite is inconsistent with offering honest and non-hypocritical advice, at least not in the ordinary language sense of ‘dishonest hypocrite’, because people can act out of character and I’m assuming that ‘dishonest hypocrite’ in ordinary language is a characterization of someone’s character. One way to act out of character is on an overwhelming urge that seems to come out of nowhere; another is if you are drugged; another is if you are brainwashed.
Is it a truth about A’s *psychology* that she doesn’t give advice? That sounds like a fact about her behavior, in which case it isn’t something included in the premise set we use to test for entailment. It can be a fact about her psychology that she has a very, very strong desire not to give advice, but I discussed why that isn’t a problem above (Cf. the stuff on overwhelming urges and there being no upper limit on strength of desire).
On your second question: You are right that we need to look at lots of other worlds to test for entailment, so this is in some ways like a counterfactual conditional. But I think the differences between a standard counterfactual in English (on a broadly Lewisian construal) and the kind of reading I actually have in mind may make it easier to deal with Jussi’s objection.
Your last point is that perhaps I’m OK even if the account is relevantly like a counterfactual conditional one. I’m going to mull that over (it’s late just now, and I am tired…), but I bet Jussi has something to say about why that isn’t right!
Thanks for the dialogue on this, Janice. Very interesting.
Hi Mike,
Nice, great points. I agree that some cases might involve hypocrisy or a tacit refusal to give advice, but I doubt all cases must go that way. I think the normative uncertainty stuff and the epistemic peer stuff is very interesting in this context, and I had never seen that it might shed light on appeals to reasonable disagreement. Thanks!
Here is another option that I might favor – it might be a sort of twist on your two level idea:
The substance of the disagreement here is not about what Raquel should do but about how an ideal observer should react to her action. The idea would be that some normative disagreements are about what merits approval or disapproval and that one can rationally believe (full bore) that someone’s action merits moral disapproval while doubting that the person has most reason to refrain from doing it.
This seems like an appealing option, anyway, once we move away from non-moral cases. The general suggestion is that you could keep the idea that disagreement involves different tendencies to advise, but that you broaden the target of the advice and the type of reaction/attitude being advised. Jim and Stan will advise Raquel the same way, but they would advise an ideal adviser to react to her action in different ways, and that reflects their disagreement about the moral status of her action.
Hi Brad,
Yes – that might be a plausible reading too; I agree. I think that a lot may depend on how we fill out the rest of the story, in terms of speaker’s intentions, dispositions to elaborate, etc. It could be that more than one of our readings is intended in some cases, too, actually. In particular, it seems in some sense consistent to (a) advise (because pressed) against abortion, (b) advise against taking your first-order advice about that, and (c) offer further advice about how an observer should react to various courses of action and/or ways of deliberating.
Thanks Brad,
Mike