One of my favourite objections is the conditional fallacy. It reveals a structural flaw in theories that attempt to give a philosophical account of something in terms of what happens in certain specified counterfactual circumstances. It was first formulated by Robert Shope in his 1978 paper “The Conditional Fallacy in Contemporary Philosophy”. Despite this objection, many still continue to construct counterfactual-based theories. As an illustration, I want to look at deliberative contractualism – a new form of contractualism recently introduced by Nicholas Southwood in his Contractualism and the Foundations of Morality. The book itself is brilliant – it clarifies the distinguishing features, advantages, and problems of different forms of contractualism. I worry, though, that Southwood’s theory, too, commits the conditional fallacy.
Shope introduced the objection against all attempts to analyze or define ‘a statement p by presenting its truth as dependent, in at least some specified situations, upon the truth (falsity) of a subjunctive conditional φ of the form: “If state of affairs a were to occur, then state of affairs b would occur”’.
One version of the conditional fallacy (version 2) consists of overlooking ‘the fact that, in some specified situations, statement p is actually true, but, if a were to occur, then it would be at least a partial cause of something that would make b fail to occur.’
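To make the structure explicit (the rendering is mine, not Shope’s; I use ‘□→’ for the subjunctive conditional), the analyses Shope targets have the form

p iff (a □→ b)

and a version-2 case is one in which p is actually true, and yet a □→ ¬b, because a would be a partial cause of something that prevents b from occurring.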
One of Shope’s examples of this fallacy is Rawls’s definition of a person’s real good in terms of what is the most rational plan of life for her. The notion of a rational life-plan sneaks a subjunctive conditional into the theory: a plan of life is rational, for Rawls, when it would be chosen by the person if she were to have full deliberative rationality.
Now, consider the question of whether it is good for an actual troubled person to go to psychotherapy. Rawls’s account entails that satisfying the antecedent of the counterfactual conditional (‘if this person were fully deliberatively rational, …’) would make it true in the counterfactual scenario that the person already has what psychotherapy would achieve in the actual world. In the counterfactual circumstances, it would be irrational for this person to try to achieve what she already has – so, if she were fully rational, she wouldn’t go to psychotherapy. Unfortunately, this means that it could not be good for the troubled agent to go to psychotherapy in the actual world either, because what is good for her was made to depend on what is included in her life-plan when she is fully idealised.
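Schematically (my gloss, not Shope’s wording): p = ‘going to psychotherapy is good for this person’, which is true in the actual world; a = ‘she is fully deliberatively rational’; b = ‘her most rational life-plan includes going to psychotherapy’. Since a would itself be a partial cause of the very psychological health that psychotherapy aims at, we get a □→ ¬b even though p is true – exactly the version-2 pattern.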
Consider then Southwood’s deliberative contractualism. He claims that the truth of moral statements (here the ‘p’s) depends on what common code we would agree to live by (the ‘b’), if we were perfectly deliberatively rational (the ‘a’ in Shope’s scheme).
Here’s how I think Southwood’s theory works (I’m still new to the theory, so my apologies for any misunderstandings). We start from all of our actual attitudes, which we have in our first-personal standpoints. These attitudes include our desires, preferences, beliefs, normative judgments, and the like. We then go to a resembling counterfactual scenario in which we are idealised only to the point where we fully satisfy the norms of deliberative rationality. This means that in those circumstances (a) any decision to act must be preceded by deliberation with others who are affected by the decision, (b) co-deliberation is a free and open exchange of relevant information, (c) we argue and persuade one another to act in this way or that, while remaining amenable to being persuaded in turn (we present what we take to be considerations for and against options that others are capable of recognising as normatively salient), and (d) we work out and render coherent the content of our beliefs, desires, hopes, fears, and so on, and reorient this content in light of communication and discourse. We then, in the counterfactual idealised circumstances, go through a process of reaching an agreement on what rules we are to live by. Those rules then fix what is right and wrong for us in the actual world too.
The problem here is that idealising us into perfectly deliberatively rational agents transforms the relevant counterfactual world and hence what principles are needed there. In that world, we, by definition, *cannot*, for instance, resolve our disagreements by coercion, threats, bribery, blackmail, and so on (181) – otherwise we just would not be in the fully deliberatively rational world. This means that there would be no point for us to agree in that scenario on rules that prohibit solving disagreements by coercion, threats, bribery, blackmail, and so on. After all, in that world, no one could do any such thing. Such an agreement would be just as useful as an agreement that we all breathe (or, for Rawls’s fully rational person, a plan to go to psychotherapy). Likewise, the idealised contractors would have no use for rules about how to punish people for these kinds of actions, or for duties of reparation for the offenders. There would never be offenders against these rules to whom such further principles would apply.
On Southwood’s view as I understand it, what the idealised co-deliberators would agree to for themselves fixes what is right and wrong for us actual agents. But this would mean that it would not be wrong for us to coerce, make threats, bribe, or blackmail in order to solve our disagreements. There would also be no requirement to punish people who do these things, nor a requirement for the offenders to repair the harm they have done. However, these things are still wrong for us. So, this looks like a typical case of the conditional fallacy as specified by Shope.
Now, there might be a way for Southwood to reformulate the view so as to avoid this problem. One way would be to switch from the example model (the contractors choose for themselves) to the advice model (they choose for us). Sadly, Southwood explicitly rules this option out (fn 34). Maybe going from something like Wright’s basic equations to his provisional equations would help, but I don’t really know how.
Hi Jussi, just before getting to the end of your post (nice post by the way) I was going to suggest that Southwood could switch to the advice model. I then noticed that you already bring this up. Since I do not have the book available to me right now, I’ll just ask this. In that footnote in which Southwood rules out the advice version of his deliberative contractualism, does he give any argument/reason for ruling it out? I’m wondering because, just as you suggest, it does indeed seem like a natural way for him to go precisely for the reason you bring up.
Hi Sven, thanks! The footnote occurs in the context of a discussion of how Southwood’s view can respond to an objection to idealised response theories that seems to apply to contractualism too. In the footnote, Southwood explains how an idealised advisor view might be able to overcome that objection (even if Southwood doubts this).
After this, here is what he says:
“Even if this response works in the case of ideal-observer theory (which I doubt), it does not work in the case of contractualism, for contractors must be exemplars rather than advisers, at pains of conceptual incoherence. One cannot *agree* that someone else lives according to a common code”.
So it seems Southwood thinks that an advisor version of contractualism makes no sense because the advisors could not agree on how we actual people are to live. But maybe the contractualists could get rid of the agreement part and just stick to advice. It’s true – that is one way to go.
I see. Thanks.
I guess another way of doing things — which Southwood might dislike since it would involve two sets of counterfactual conditions — would be to have the contracting parties agree on how to themselves interact with others for the case of being in non-ideal conditions. The basic question would then be: how would these ideally deliberating counterparts of us agree to interact with others in our actual non-ideal conditions (where our actual conditions would from their perspective be counterfactual conditions).
yeah – maybe that would work. On the other hand, I find your basic question very hard to answer. Also, he is after the fundamental right- and wrong-makers, and that would be a strange kind of property to do that sort of fundamental work.
Hi Jussi –
Interesting post. I haven’t read the work you’re discussing, but I’m not sure I understand his objection to the advice model. Is he saying that all agreeing parties must be the actual parties that are to live by the rules?
That puzzles me a little, insofar as no one is actually deliberatively rational. Perhaps he means that the contractors must at least be idealized versions of us? If so, why couldn’t one adopt a Railton-like fix, i.e., “what y would agree to x’s living by, where y is an idealized version of x”? Would y not be an exemplar in this case? (And if not in this case, why on his view?)
Hi Dale,
thanks. I’m very new to this work as well so I hope I’m not completely misunderstanding Southwood.
I think what he is saying is that anyone (be it an actual person or a hypothetical one) can only agree on rules that govern his or her behaviour and not someone else’s. One agrees about what one will do oneself.
Of course we, as actual persons, are not parties to the hypothetical agreement which our idealised selves will make. However, Southwood is not attempting to ground our actual reasons to follow that agreement in any facts about what we have agreed (after all, we didn’t). There’s a separate story according to which those reasons are based on normatively significant relations between us as co-deliberating agents.
The contractors, I assume, are idealised versions of us. That Railton-fix seems like an advice model to me: the idealised self agrees on how the actual self is to live rather than on how she is to live herself. And the quote I gave to Sven from footnote 34 seems to suggest that Southwood thinks this is an incoherent idea. I’m sure he can explain why – I’m not sure I can say more.
Hi Jussi,
Cool post.
(1) Does he discuss trustees when it comes to animals and the handicapped? If he accepts them as a coherent part of the contractualist theory then that seems to be at odds with the claim in the footnote.
(2) Along these lines, perhaps he should adopt a trustee model instead of an advice or exemplar model. Your idealized self is your trustee and you invest him with the power to agree to things on your behalf.
Hi Jussi,
Can Southwood escape the charge of a conditional fallacy by claiming that, while idealized deliberators would not need to agree to rules that prohibit solving disagreements by coercion, threats, bribery, or blackmail, they may agree to rules that prohibit unjustified coercion, threats, bribery, and blackmail in general? For example, bribery may not be usable in this idealized world to resolve any sort of disagreement, because people wouldn’t be persuaded by such efforts. But it may still come in handy for getting something done that isn’t a matter of resolving a disagreement (for example, getting a lucrative contract over competitors who are equally qualified). If rules still need to be in place to guard against bribery in these more general terms, then they would also guard against bribery in instances where disagreements need to be resolved.
Perhaps one can respond that at the heart of any conflict in which we would like to guard against unjustified coercion, threats, etc., there lies a disagreement that needs to be resolved. However, that doesn’t seem right to me. Sometimes conflicts arise out of differences rather than disagreements. My company wants the lucrative contract and so does yours; we both are equally good candidates for the job and recognize that about each other. Were one of the companies to get the contract we would agree that the decision was justified. However, we have a difference of interest in who gets the contract. It seems that bribery can come into play in this scenario to tip the scale in one direction in a way that would be unjust. Therefore there needs to be a rule against such actions even in circumstances where individuals are fully deliberatively rational.
Hi Jussi –
That helps, and I think I agree that this is a problem for the view as you’ve described it.
Hey Jussi,
I think there may be a subtle equivocation in your argument.
The last sentence is ambiguous. It could mean either of the following:
(1) Necessarily, if we were fully deliberatively rational, no one would solve disagreements by coercion, threats, bribery, or blackmail.
(2) If we were fully deliberatively rational, no one would be able to solve disagreements by coercion, threats, bribery, or blackmail.
The preceding argument establishes only (1). But to infer your conclusion (i.e. that rules prohibiting such actions would be pointless), what you need is (2).
I’m not familiar with Shope’s text, but let me add a brief note: the problem described seems very similar to one sometimes posed to versions of virtue ethics on which the right action is the one that a fully virtuous agent would actually do in the situation. Such a theory has trouble giving answers in cases that (intuitively) call for reparative action, e.g. after breaking a promise to a friend for no good reason. A fully virtuous agent wouldn’t break promises in that way and so can’t be in the situation that calls for reparation. Here some virtue ethicists have also turned to the advice model.
Thanks for helpful comments everyone!
Brad,
Southwood has an explicit discussion of the objection to contractualism based on atypical persons (this is from Brad Hooker’s Ideal Code, Real World). Scanlon of course considered the trustee model as a response to this problem. Southwood says that, on his view, ‘atypical persons, since they are clearly affected by the outcome of the hypothetical agreement within the deliberative contractual situation, are quite literally included therein – included, that is, in the broad sense. More precisely they are included as representees.’
Now, I’m not quite sure how to understand this – how animals, for instance, could be parties to the agreement as representees. In any case, this doesn’t sound like the trustee model. Southwood discusses the trustee model in his Philosophy Compass paper on contractualism, but I believe he rejects it there. But, yes, that would be one way to go, quite different from the current position.
Campbell,
you are right – I was being sloppy with the modalities. The most accurate way to make my argument would probably be to drop the modalities altogether when talking about the idealised world. All I need is the prediction that, in the idealised world, the contractors will not agree on principles that forbid solving disagreements by coercion, deception, and the like. As a ground for this prediction, I think it is enough that no one ever does or will solve disagreements by coercion, deception, and the like, and that this is common knowledge to all. This would seem to be enough to ensure that the idea of agreeing on these things never comes up.
Dana,
that’s an interesting proposal. I do have a couple of worries. First, if the contractors agree on really general rules, and the actual-life cases of solving disagreements by coercion, deception, and bribery are supposed to be covered by them, I worry that these rules will have counterexamples in the actual world. Second, as you say, coercion, deception, and bribery without disagreements may be quite rare, so it’s not certain what kind of principles the contractors would set up for them. Finally, it might be that the norms of deliberative rationality rule out overcoming differences (and not merely disagreements) by coercion, deception, and bribery too. In that case, the ideal contractors would not need principles to forbid this either.
Finally, there is a more general structural issue in the background. For your solution to work, there would have to be no activity such that (i) we do it in the actual world, (ii) it is intuitively wrong, (iii) the norms of deliberative rationality rule it out, but (iv) none of the more general principles the deliberators agree on covers it. It seems that such activities are always at least a possibility.
Martin,
that’s exactly right. There is a wonderful discussion of this problem in virtue ethics by Robert Johnson in his paper ‘Virtue and Right’ (Ethics 2003). Johnson does have a footnote on Shope, but he believes that there is more to his objection than merely the conditional fallacy (which seems bad enough to me). In ethics, the classic views in connection with which the fallacy has been discussed are the so-called response-dependence theories (virtue ethics in a way belongs to these views). The classic work on this topic was done by Mark Johnston in the 80s. One of my favourite discussions of the whole topic is the appendix on the Euthyphro contrast in Crispin Wright’s Truth and Objectivity.
In any case, far too many virtue ethicists still continue to formulate their views by using the example model, and the implications for virtue ethics of moving to the advice model have not been fully explored. Finally, the similarities between virtue ethics and deliberative contractualism are interesting. Maybe the latter could be seen as virtue ethics for an extended communal mind.
Interesting post Jussi, and thanks for the reference to Southwood’s book, which sounds like a must-read. As for the conditional fallacy, I believe it poses a general problem for contractarian/contractualist theories that rely on hypothetical agreement and make use of idealized agents that abstract from or augment the features of actual agents. If S+ is the ideal version of the actual agent S, then these theories will generate biconditionals of the form:
S ought to X iff if S were S+ then S/S+ would X.
(Abbreviating the righthand side of the biconditional: S ought to X iff S+ would X.)
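(To connect this with Shope’s scheme as it figures in the post – the mapping is my own gloss – take p = ‘S ought to X’, a = ‘S is, or becomes, S+’, and b = ‘S/S+ would X’; the biconditional then reads: p iff (a □→ b).)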
But so long as S actually isn’t S+, the possibility can’t be ruled out that S’s normative reasons fail to align with S+’s. E.g., if S is rich and knows it, she has no self-interested reason to go along with what S+ behind the veil of ignorance has self-interested reason to do. And if S recognizes rational norms of benevolence as well as of self-interest, then she can have (or at least see) reasons to do things that a purely self-interested or non-tuistic S+ does not have (or cannot see). These seem to be the kinds of cases where hypothetical contract theorists pay insufficient attention to interdependencies in the truth values of the three component statements (p, a, and b) of the biconditional.
There is a sense in which I think hypothetical contract theorists ought to take the conditional fallacy to be trivial, and another sense in which they ought to take it seriously. The conditional fallacy can be generated by a technical problem that could be addressed by modifying or reinterpreting the conditional in question, and this is the sense in which it is trivial. But it can also be generated by substantive issues, namely, that different rational norms could apply to S and to S+. (The Rawlsian and Hobbesian forms of contract theory I alluded to in the last paragraph seem to be faced with the substantive version of the conditional fallacy, and, as far as I can see, Scanlon’s and Southwood’s contractualism with the merely technical version.)
(To elaborate on my distinction between technical and substantive versions of conditional fallacy….)
To some extent I think the conditional fallacy is a merely technical problem that can be solved by modifying, or giving an appropriate interpretation to, the counterfactual in question. Taking the counterfactual:
…if it were the case that S = S+, then S/S+ would X,
we may distinguish two strategies for solving the technical problem. The consequent-modifying strategy interprets the antecedent as the supposition that S becomes or is replaced by S+, and then modifies the consequent to re-align S+’s reasons with S’s. The advice model seems to be following the consequent-modifying strategy. The other, antecedent-interpreting strategy begins by noting that the counterfactual as it stands is ambiguous between different interpretations. To illustrate, take for instance the fabled exchange between Alexander and Parmenion (“I would accept, if I were Alexander”; “So would I, if I were Parmenion”). The point here is that the consequent of Parmenion’s statement would be true only on the supposition that he became Alexander with certain of Parmenion’s features (but not if he became Alexander with Alexander’s great-making features). More generally, the truth of the consequent in such counterfactuals cannot be evaluated without determining which features of which person we are to hold fixed in a disambiguated reading of the antecedent. And it seems to me that what we need to hold fixed are certain features of S and not of S+, namely those features of S in virtue of which rational norms apply to S. After all, we are primarily and practically concerned with what S rationally ought to do, and only derivatively and theoretically concerned with what S+ would do. So, too, we are only derivatively and theoretically concerned with what S+ would advise S to do.
Moving on to the substantive problem: it could be that S+ has certain features in virtue of which rational norms apply to S+ that do not apply to S, or lacks certain features in virtue of which rational norms that apply to S do not apply to S+. Then any advice that S+ gives to S that conforms only to S+’s norms is misguided. So, the advice model could stipulate that S+ give only advice that conforms to S’s norms, norms applying to S in virtue of certain features of S. In either case, it is certain features of S, and the norms applying in virtue of them, that determine what S ought to do; the device of the ideal advisor seems derivative, and neither definitive nor constitutive nor determinative of what S ought to do.
Thanks Boram.
For what it’s worth, I do endorse a form of contractualism that’s close to Scanlon’s. And I don’t think this theory makes use of either hypothetical agreement or idealised agents (funnily enough, I therefore don’t think I count as a contractualist by Southwood’s lights).
Also, I do want to emphasise that Southwood does have an elaborate story about what our actual reasons are to follow the agreement by the fully deliberatively rational agents. These reasons are not based on the reasons of the idealised agents but rather on our actual relationships to other people. So, I’m not quite sure your argument would apply to his view.
I’m interested in why you think the technical problem is trivial. How could we avoid it by modifying the conditionals?
Ah – thanks Boram. I’m not sure I quite follow your proposal, so I need to think about it more – but I do worry about a couple of things. First, it seems to me that the antecedent may not be well formed, or that it is in many cases impossible. Second, there’s a worry that we are now using our prior beliefs about reasons to determine which features of S we are keeping fixed. This makes me worry that the account isn’t really doing much work. I need to think about this, though. Thanks.
Jussi, thanks for your replies. It’s been some time since I’ve thought through the conditional fallacy, so it will take me some time to recall and reorganize my thoughts to answer the worries in your second response!
To comment on your first response: if you maintain that there are normative facts independent of subjective facts about us, which either constitute or provide normative reasons for us to do things or to have certain mental attitudes, then I guess you could endorse a Scanlonian version of contractualism that need not appeal to hypothetical agreement or to idealized agents. Is this your view? Of course, one would then have to deal with the redundancy objection: why use contractarian notions like “rational bargain” or contractualist notions like “reasonable rejection” when one could directly appeal to the reason-constituting or -providing normative facts? I’m guessing that might be the source of Southwood’s reluctance to count your position as contractualist.
I will have to see the details of Southwood’s elaborate story to see if I find it convincing. Gauthier too, it seems to me, has a nuanced and careful discussion of the normative reasons actual agents have to follow the normative results arrived at by the ideal non-tuistic agents in his state-of-nature scenario (in the last chapter of his book). Actual agents can have tuistic reasons to follow the normative results arrived at by the ideal agents that the ideal agents in his construction cannot have, and I grant this. But my objection is that tuistic reasons ought to figure as inputs in the normative results that the ideal agents arrive at, as reasons that the ideal agents themselves are responsive to, for otherwise the ideal agents are not adequately idealized versions of ourselves, and cannot deploy reasons for their normative conclusions that we ourselves would find convincing upon rational reflection.
For what it’s worth, I myself find the Hobbesian or Gauthier’s foundationalist approach of seeking fundamental evaluation (of certain norms in the moral domain in terms of norms in the non-moral rational domain) more appealing than the Rawlsian or Scanlonian methodology (which seems to me more coherentist than foundationalist). If the foundationalist approach can be successful, it will confer genuine understanding, and moreover understanding that can be used to convince amoralists who are rational in the requisite sense(s). Of course that’s a big if, but there is the possibility that we have not fully explored all criteria as to what counts as a “non-morally rational norm” capable of generating further norms in strategic contexts co-extensive with what we would count as cooperative moral norms. Historically speaking, we have explored the norm of rational self-interest, and more lately the formally well-defined norms of rational choice theory (including decision and game theory), and these norms by themselves do not seem capable of generating norms that are co-extensive with moral norms. But Gauthier, for instance, was able to get more (morally normative) mileage out of his rational norms by expanding our conception of what counts as “rational”, namely by supplementing the norms of rational choice with the norms of resolute choice. This procedure seems to me preferable to the Scanlonian appeal to the slippery notion of “reasonableness”.
I’ve only seen chapter-length comparisons of the strengths and weaknesses of (Hobbesian) contractarianism and (Kantian/Rawlsian/Scanlonian) contractualism. It seems Southwood’s book contains extensive discussion of both forms of contract theory, so I look forward to reading it.
Jussi, below I will explain why I think the conditional fallacy in many cases raises a merely technical issue, and a problem that seems trivial or perhaps even perverse. It’s already gone on too long so I will stop at that and not address your other worries (though I plan to keep writing on it while I’ve got the momentum), unless you want to continue. (I may well have overstayed my welcome here already!)
Actual motivation accounts of internalism suffer from extensional problems: they either undergenerate or overgenerate reasons; to use Hubin’s colorful terms for these, the former is the problem of truant reasons, and the latter of counterfeit reasons. Hypothetical motivation accounts are attractive because they promise to overcome these extensional problems by appealing to agents who are idealized versions (S+) of actual ones (S). But because these accounts, in their turn, offer counterfactual analyses or explanations of internal reasons, they are vulnerable to the conditional fallacy, which provides a formula for coming up with specific counterexamples showing that hypothetical motivation accounts are also afflicted with truant or counterfeit reasons. If the counterfactual analysans or explanans of “S has a reason to X” has the form:
If S were to become S+, S would be motivated to X.
Then the formula for generating specific counterexamples might be: precisely because S does not have S+’s ideally rational features, S has reasons to acquire those features that could not motivate S+; and precisely because S+ does not have S’s actually limiting features, S+ would be motivated to do certain things that S has no (good) reason to do. For instance, suppose that S+ is idealized with respect to available information: S+ is fully informed. Then S can have reason to gather more information that could not motivate S+. And where S+ would maximize, it could be too costly for S to do so due to time constraints and the increasing cost of gathering more information, and so S would have better reason to satisfice instead.
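To put this in Shope’s scheme (again, my own gloss): p = ‘S has a reason to X’, a = ‘S becomes S+’, and b = ‘S is motivated to X’. Truant reasons are then version-2 cases – p is true, yet a □→ ¬b, as when S’s reason to gather more information could not motivate the already fully informed S+. Counterfeit reasons are the mirror image – p is false, yet a □→ b, as when S+ would be motivated to maximize where S has better reason to satisfice.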
Now here is why I’m inclined to think the conditional fallacy raises a merely technical problem, insofar as the above formula is concerned. Suppose that a hypothetical motivation account, HM, is successful in solving most of the important extensional problems facing actual motivation accounts. HM tells us that S has impartial reasons to be equally concerned about goods or bads whenever or wherever they occur, across intertemporal and interpersonal dimensions of distribution. HM also tells us that S has personal reasons to be specially concerned about S’s own projects and about the projects of those with whom S has formed intimate relationships. HM also weights and situates personal reasons vis-a-vis impartial reasons in a way that, miraculously, we find upon due reflection satisfying and convincing. And HM does this regardless of S’s actual limitations, while somehow correctly deriving these reasons from internal features of S, and even showing that the standards of correct derivation here used depend on S’s own inferential dispositions. Even then, even on this hypothesis about HM, it seems to me that the above formula will generate counterexamples against HM. But, ex hypothesi, the extensional worries generated by the formula will be trivial ones – ones that we may hope, given the virtues of HM, will be solved by technical innovations in interpreting, modifying, or eliminating the counterfactual element of the analysis.
Here’s another related reason why I think the formula for generating the conditional fallacy raises a merely technical problem. Both the partially informed S and the fully informed S+ (or at least the theorist who constructs S+) in our earlier counterexample share the norm that, other things being equal, more veridical information relevant to our decision-making is better than less. That’s precisely why S+ is idealized in this respect, and why S has reason to gather relevant and veridical information. The formula generates the counterexample in spite of S and S+ sharing the same norm, and that seems perverse. In order to show that the counterfactual analysans/explanans in question overlooks certain interdependencies in the truth values of the component statements of the analysis/explanation, the conditional fallacy advocate overlooks the fact that these interdependencies can be explained by S and S+ having the same norm. When one can accuse an account of a fallacy precisely when the account is successful at its task, it seems to me something must be wrong with the accusation.