I recently completed an independent study with a student interested in Mackie’s error theory, and we spent a good deal of time discussing Mackie’s argument from relativity or disagreement. For those unaware, the late Australian philosopher John Mackie favored an error theory of morality, according to which, although our ordinary moral language presupposes that our moral beliefs can correspond to moral facts, there are no moral facts to which those beliefs correspond. So as Mackie understood it, if Matilda believes that capital punishment is wrong and Nancy believes capital punishment is not wrong, their disagreement is in reality only an apparent disagreement, since there is no fact about the wrongness of capital punishment that would render their disagreement intelligible. Mackie held that this error-theoretic account offers a better explanation of the apparent widespread intra- and intersocietal moral disagreement than the alternative, namely, that either Matilda or Nancy is guilty of irrationality, ignorance, misperception, etc., with respect to the alleged moral facts. Moral discourse is thus akin to fairy discourse: The parties to a moral disagreement are arguing, literally, about nothing, just as those who argue about whether fairies’ wings are translucent or opaque are arguing about nothing.
I’ve often felt that the disagreement argument is an important advance over the sophomoric and soft-headed “anthropological” argument that simply says (a) there’s a lot of moral disagreement in the world, so (b) there are no objective moral facts or truths. Mackie’s point is that deep-seated disagreement is rationally intractable disagreement, resulting not from any failure of rationality by one of the parties but from the parties holding conflicting normative attitudes toward one and the same sort of act or policy. This is what distinguishes moral disagreement from factual or scientific disagreement: In a factual or scientific disagreement, we have a good idea of what sort of evidence would rationally settle the matter, but with fundamental moral disagreements, not only can’t we point to what sort of evidence would rationally settle the matter, there doesn’t seem to be *any* evidence that might settle it.
Now those in the moral realist camp have replies to Mackie. (Many of these can be found in David Brink’s “Moral Realism and the Foundations of Ethics.”) These include:
1. Anthropological evidence suggests a high degree of cross-societal moral agreement.
2. Many moral disagreements are really non-moral disagreements. (Matilda and Nancy in my example above may agree that capital punishment is wrong if it is often imposed on innocent people, but disagree about whether it is often imposed on innocent people.)
3. Apparent moral disagreements are often explicable by societies’ differing material circumstances. Society A and society B might agree that we should be generous to children, but what counts as adequate generosity in wealthy society A might differ greatly from what counts as adequate generosity in poorer society B.
4. The ‘in the same boat’ reply (as I call it): Because our factual or scientific beliefs are underdetermined by the evidence for them, factual or scientific disagreements are sometimes conflicts between opposing paradigms between which rational agreement should not be expected. (Notice this reply is less a vindication of moral facts than the claim that if there are facts at all, there might as well be moral facts.)
5. Individuals who agree at the level of general moral principle often fail to recognize, or have difficulty imagining, the implications of those principles.
I’d be curious to know whether these replies to Mackie’s disagreement argument are convincing, but I’d also like to suggest another direction: The five replies above all try to give morality back its rational credentials by narrowing the scope of moral disagreement or by showing that such disagreement is not rationally intractable and can be explained in terms of the parties’ irrationality, ignorance, etc. But I worry that these kinds of replies are too ambitious. Why not acknowledge that some moral disagreement really is rationally intractable? Here’s what I have in mind: Suppose that there are disagreements like those Mackie envisions, disagreements traceable to conflicts between deeply held normative attitudes where neither party will be rationally moved an inch. Given the peculiar normative aspirations that moral beliefs have (their supposed universalizability, categoricity, etc.) and the fact that moral beliefs have implications for people’s behavior (that some may have to make sacrifices for the greater good, e.g.), might there not be deep-seated psychological mechanisms that ‘protect’ individuals’ moral beliefs from rational dissuasion? In other words, can’t moral realists just say that some moral disagreement is rationally intractable and that such disagreement has a psychological explanation related to the close relationship between morality on the one hand and social order, identity, self-interest, etc., on the other? Obviously, such a position opens up a can of worms about how such disagreements would be rationally resolvable if they could be resolved (i.e., how one of the parties can be mistaken). But the usual realist replies seem utopian, suggesting that all moral disagreement turns out to be rational in the end. Perhaps I’m proposing a more realistic reply for realists!
Is this possibility really open to realists? I’m not sure. The distinction between a “rationally tractable” disagreement and a “rationally intractable” one is built into, for example, Rawls’s distinction between questions relating to the good and questions relating to the right. The former have an intractable character, while we can have some hope of dealing reasonably with the latter (I think we can find this pattern – I mean, the distinction between these two “domains” of questions – in many liberal thinkers). But, of course, Rawls is no realist. This might make us suspicious of whether this reply is really available to realists (it might be available to a cognitivist of sorts, but not to a realist). I think a realist, in the end, will be committed to a more “unitary” (can we say this?) view, like Iris Murdoch’s in “The Sovereignty of Good.”
I would think that something along Michael’s lines is exactly what the bulk of ancient and medieval moral philosophers would say. The idea that virtue is knowledge/reason is not so much an easy way to get virtue as a hard restriction on knowledge to people of good character. Plato would not think it was surprising that people in the cave couldn’t “rationally” (i.e., by discussion, reflection, etc.) be brought to know moral truth; Aristotle would likewise insist that discussants be well brought up, or ethical discussion is useless. Both are certainly realists. And, FWIW, I think Iris Murdoch would be right there with them.
The idea that realism about X implies that any normal human being can come to know about X is a distinctly Enlightenment conviction.
C. Reis and Heath: Tell me more about the relevance of Murdoch’s work with respect to addressing error theory. I read ‘Sovereignty’ nearly ten years ago.
Hi Heath. I’ll never forget our Stevens Point tacos.
You anticipated me rightly: The alternative reply to Mackie that I was trying to develop was indeed inspired by Aristotle’s picture of individuals’ moral beliefs as characterological systems with their own respective internal logics that need not intersect with the moral beliefs of those with different value commitments. As I see it, that sort of picture has the advantage of accepting Mackie’s claim that some moral disagreement between actual individuals will turn out to be de facto rationally intractable, while resisting the conclusion that this shows there are no belief-independent moral facts. Such a reconciliation demands the articulation and defense of a moral epistemology that sees moral knowledge as ‘theory-laden’ (i.e., only those properly habituated latch onto the truly morally salient features of the world, thus making their moral beliefs causally prior to their moral judgments). The obvious task of such an epistemology will be to indicate how a person can “get morality right” even if her being right is not something that can be demonstrated to those who get it wrong. I gather this has been a central task for neo-Aristotelians and ‘sensibility theorists’ (McDowell, etc.) for the past two decades.
It’s easy to see how a realist about x could say that disputes about x are intractable relative to the current state of our knowledge. Thus, for instance, a realist about physics might think that there are certain questions that cannot be answered with our current resources. Obviously enough this doesn’t prevent it from being the case that some people, at this very moment, hold the right (true) theory, while others hold wrong ones. So in this sense there is no problem understanding ‘how a person can “get [x] right” even if her being right is not something that can be demonstrated to those who get it wrong.’
More interestingly, perhaps, the same can be true in an area like mathematics. At a certain stage in the development of mathematics it might be true, not only that mathematicians are not able to settle a certain question, but that they cannot even imagine what sort of procedure would settle it. Disagreements might then seem rationally intractable. Progress in such areas is not just a matter of accumulating evidence or argument, but a matter of seeing what OTHER sort of evidence or argument might do the trick.
So why can’t the same be true in morality? And why should moral realism be in tension with such a position? If ‘realism’ means that we hold moral facts to be independent of what we think about them (which is what most of us think in physics and a lot of us think in mathematics) then such a position seems plausible and attractive.
Having said that – which is another way of saying (I think) that I am very sympathetic to something like Michael’s proposal – I have two reservations. First, in the physics case, I can see how someone can get it right though she can’t prove that she is right; I’m not sure, however, that such a person KNOWS that she is right. A lot of moral realists will be hesitant to deny that we have any moral knowledge. If the analogy with physics implies that this is what we must say, they will deny the analogy. (People like McDowell, I take it, bite this bullet and claim that A can know x even in cases where A does not have a conclusive argument for x. I think McDowell may well be right – but that does suggest that morality and physics are not as analogous as I began by suggesting.)
Second, though I regard Michael’s position as somewhat promising, I am not as convinced as he is of the necessity for it, because I don’t find all of the standard realist responses to Mackie’s disagreement argument to be as unconvincing or ‘utopian’ as he does. In particular, the ‘in the same boat’ argument seems to me very compelling, not only as applied to factual claims (where all disagreements, if pushed to a sufficiently fundamental level, turn out to be rationally intractable) but especially as applied to other sorts of practical claims: the claims of prudence, for instance. It has always seemed odd to me that Mackie presented himself as skeptical about the normative force of morality but not about that of self-interest, when his arguments (from queerness as well as from disagreement) work as forcefully against the latter as they do against the former.
Michael,
I’m wondering whether your psychological explanation can carry the full realist load. I’m not sure I’m fully reading you right, so let me take a stab at it. On your proposal, the realist could say that many purported moral disagreements don’t, in the end, speak to real moral differences, a la the conventional arguments (1)-(5). But, in a modest addition, some moral disagreements might be rationally intractable, perhaps because arguer A has psychological function f, while arguer B has psychological function g, and f and g aren’t getting along. (Is that right? If so…) I think we’d need a story about how to tell which moral disagreements are psychologically rooted and rationally intractable, and which are only surface-level arguments that are resolvable. For if a significant chunk of moral disagreements turns out to be intractable, it seems hard for the realist’s reply to Mackie to gain much, well, traction.
Incidentally, a really interesting body of work in experimental philosophy is getting a lot of attention these days. I’m thinking of work using fMRI scans to track the neurological events that accompany different judgments that subjects make about the best resolutions to moral dilemmas. Presumably, this kind of work might provide some real data about whether there are, in fact, biological causes of moral disagreement, and, in a scary, science-fictiony kind of scenario, whether neurological “therapy” might make seemingly intractable moral disagreement all-of-a-sudden tractable.
Thanks to all for these comments. Allow me a few clarifications.
First to Troy: I tried to suggest my own position as an alternative and tried to *sell* it by contrasting it with the ‘utopian’ realist responses, which are often given short shrift, in my opinion. (Indeed, I’d still be curious to know how effective the standard realist replies are.) So maybe a compromise would be for the realist to say that many moral disagreements are rationally tractable, but some are not because of the sort of psychological explanation I sketched above. Perhaps this turns out to be a matter of actual disagreement vs. disagreement in principle, in that if two individuals were adequately habituated, etc., they in fact would not have any moral disagreements. But since Mackie isn’t interested in hypothetical agreement or disagreement but in ordinary diversity of moral opinion, it struck me as worthwhile to ask whether realists ought not concede that (a) some moral disagreements between otherwise cognitively normal individuals might prove rationally intractable, yet (b) this fact is no skin off the realists’ nose. I think your remarks about McDowell, physics, etc., suggest the attractiveness of a moral epistemology that conceptualizes moral knowledge in a more coherentist way, more akin to our ordinary perceptual knowledge than to physics, with its aspirations to universal laws standing in a deductive structure. And I concur with your comments about disagreement relative to our state of knowledge. If I remember correctly, I think Parfit provides a similar defense of realism, suggesting that fully secular ethical theory has been pursued for only a few centuries at most and so should not be expected to produce agreement this quickly.
Josh: I think you have the general thrust of what I’m suggesting. I’d be curious to learn more about the empirical work you mention. (Aristotle would be aghast! Bypass habituation with pharmacology? Never!) As for which disagreements might prove intractable, I was suggesting, in a very armchair way: (a) those in which one of the parties would have to make a significant self-sacrifice, and (b) those in which one of the parties is so personally invested in her position that to countenance the opposing position would destroy her characterological identity. I’m not suggesting that these would be examples of insincerity, false consciousness, or the like, but that there might be powerful ego-protection mechanisms that keep individuals from even entertaining counterarguments and counterevidence.
So I guess the question becomes: how often are moral disagreements entrenched in ego-protection? I’m not sure what the answer is to this question, but I’m also not sure that many realists would want to hang their projects’ success on its answer. Perhaps, though, it’s a route that needs to be explored further.
As for the fMRI work, I must confess that while I find it interesting, I’m pretty ignorant about it, so I can’t point you to any real sources. Newsweek had a piece on it (July 5) and so did the LA Times (May 2), pointing to work by folks at Princeton and Cal Tech, I think. (I hope those are the right issues, judging by my admittedly vague recollection and the pithy and uninformative little abstracts they give you for free on the Web, but I’m not positive.) Of course, there’s a lot of work in similar areas (psychology and ethics, rather than neurology and ethics) being discussed over at the Experimental Philosophy blog. One of their contributors, Joshua Knobe, has a helpful list of those doing empirically related work in ethics and other areas of philosophy at http://www.princeton.edu/~jknobe/ExperimentalPhilosophy.html. (I think the Josh Greene, et al. article cited there is one of the influences on those popular articles.)
Maybe the position Michael is suggesting is a version of the “non-cognitivist moral realism” which Lillehammer suggests at the end of his review of Shafer-Landau’s book. (Lillehammer’s review can be found at http://ndpr.icaap.org/content/archives/2004/5/lillehammer-russ.html).
Hi Michael. I’ve been away for a bit, so let me also officially welcome you to PEA Soup. Thanks for the thoughtful post, which actually caused me to go back and read my Mackie! Let me first see if I’ve got his argument and the resulting dialectic correct and, if so, offer a friendly alternative to your view that I think has all the things you’re looking for, without the additional complications that you raise about your own view.
If I understand it correctly, here is Mackie’s Argument from Disagreement.
Mackie’s Argument from Disagreement
(1) There is genuine, inter- and intra-societal moral disagreement that is rationally intractable (empirical (or conceptual?) observation)
(2) The best explanations for the existence of genuine, rationally intractable disagreement are either: (i) the disagreement is about whether certain things have a property that, in fact, does not exist; or (ii) the disagreement is about which attitudes toward certain things are apt (Assumption)
(3) The best explanation for 1. (i.e., for genuine, rationally intractable, moral disagreement) is (2ii). ((2ii) better tracks disagreement about moral codes; see Mackie’s Ethics, p. 36)
(4) Therefore, genuine, rationally intractable, moral disagreement is about which attitudes toward certain acts are apt (1, 2, 3)
(5) Therefore, genuine, rationally intractable, moral disagreement is not about whether certain acts have a property that, in fact, does not exist (2, 4)
(6) If genuine, rationally intractable, moral disagreement is not about whether certain acts have a property that, in fact, does not exist, then there are no objective moral properties (Assumption)
(7) Therefore, there are no objective moral properties (5, 6)
(If this is Mackie’s argument, then unlike you, I don’t see it as much of an advance over the more traditional argument from disagreement. Conclusion (5) does not follow from (2) and (4), nor does there seem to be any reason at all to accept (6). So, I don’t like attributing this form of the argument to Mackie, but I’m not sure how else we can get to the conclusion that there are no objective moral properties, which is clearly Mackie’s point in raising this argument in the first place.)
At any rate, as I understand it, most of the traditional responses to the argument reject (1), that moral disagreement is rationally intractable. On the other hand, you want to accept (1) and, instead, reject (2) as a false dilemma and, in doing so, also reject (3). That is, you want to add a third option–(2iii) “the existence of deep-seated psychological mechanisms that ‘protect’ individuals’ moral beliefs from rational dissuasion”–and hold that it, rather than (2ii) is the best explanation for (1).
Here is another friendly alternative. If you want to hold onto moral realism as well as the claim that there is genuine, rationally intractable moral disagreement, why not just reject (5) and (6)? After all, (5) does not follow at all from (2) and (4), nor does there seem to be any reason whatsoever to accept (6). It is certainly possible for there to be two kinds of moral disagreements: disagreement about whether a certain act has a certain property and disagreement about which attitudes toward that act are apt. In the case of rationally intractable disagreement, the disagreement will be disagreement about the latter. Of course, as I say, I may not have Mackie’s argument correct, so, if not, please let me know where I am going wrong.
One thing to observe is that nearly all judgments about philosophical issues – not only moral issues – have the features that Mackie points to as a basis for his anti-realism.
If that’s so and if Mackie’s argument(s) for his conclusion re. morality is(are) sound, then analogous arguments for comparable anti-realisms about all sorts of philosophical judgments are likely sound also. Perhaps Mackie-ishly motivated moral anti-realists *should* be philosophical anti-realists generally.
But the claim that I just made is itself subject to serious, perhaps intractable, disagreement, so – on a Mackian view – it (and its negation) might be false as well!
Dan-
Thanks for the welcome and for the thoughtful reply. Your reconstruction of Mackie’s argument is excellent and therefore most welcome. You’re right that I am adding a third option to premise 2 and thereby rejecting premise 3.
Your alternative seems to embody a couple of different elements: First, you seem to suggest that moral disagreement could amount to disagreement in attitude, in cognition of the pertinent moral facts, or both. Certainly Mackie appears to suppose that the former is not moral disagreement, or if it is, it’s disagreement that is itself rationally irresolvable and hence works against any claim morality might have to objectivity. Granted, Mackie was writing before the advent of quasi-realism, sensibility theory, Kantian constructivism, neo-Aristotelianism, and various other positions that try to win morality its objectivity without resting it on moral facts. In the end, Mackie’s position rests on two closely related assumptions that strike me as dubious:
(1) a strong form of desire internalism, according to which, if moral judgments turn out to be expressions of, or refer to, attitudes, these attitudes are psychological happenings with no truth-aptness or standards of warrant, so there can be no sense in rationally appraising them; and
(2) the only kind of objectivity is metaphysical realism.
Second, I’m not sure why you say that (5) “obviously” does not follow from (2) and (4). I take the argument to be an enthymematic inference to the best explanation. IBEs are not intended (as I understand them) to be deductive inferences, so if you mean simply that the existence of moral facts is logically consistent with the best explanation of moral disagreement being disagreement in attitude, then I agree (5) doesn’t follow. But on the other hand, I find IBE a powerful form of argument in metaphysics, and while I don’t keep up with the literature on the topic, I don’t see any grounds for simply discarding IBE here.
Finally, one more clarification on my view: My own “ego-protecting psychological mechanisms” approach to moral disagreement is consistent with moral realism or with Mackie’s error theory. The original motivation for my proposal, then, is that moral realists might opt for an explanation of moral disagreement that is consistent with moral facts without going in for the more outlandish (in my opinion) thesis that *all* moral disagreement is rationally tractable (under the right conditions).
Hi Michael–
Yes, the latter part of this supposition is what I am suggesting be denied (which amounts, basically, to a denial of premise (6)). Even if we assume that disagreement in attitude is rationally intractable moral disagreement, I don’t see at all how this works against the claim that moral facts are objective in the sense of being metaphysically “real.” Moral facts may very well be objective in this sense, even though there may still be rationally intractable disagreement in attitude. For example, suppose the property of rightness just is the property of maximizing total welfare. This property is certainly objective in the appropriate sense, even though there may be rationally intractable disagreement about which attitude toward this property is apt. So, there can be two types of disagreement (even two types of moral disagreement): disagreement about whether a certain act has a certain property, and disagreement about what attitude toward the act or property is apt. Disagreements about the former are rationally tractable, while disagreements about the latter may be rationally intractable. (So, this proposal fits with your stated original motivation for your position, viz., “that moral realists might opt for an explanation of moral disagreement that is consistent with moral facts without going in for the more outlandish (in my opinion) thesis that *all* moral disagreement is rationally tractable (under the right conditions).”)
Further:
Yes, this is certainly the assumption Mackie is making. But I think it falls victim to a false assumption that most of us in metaethics make, namely, the assumption that a moral judgment is a (logically) simple psychological state, i.e., that it is either a belief or a pro- or con-attitude. It is certainly possible that moral judgments are (logically) complex psychological states consisting of a belief and some kind of pro- or con-attitude. If moral judgments are, in fact, complex psychological states, then it seems to be false that “these attitudes are psychological happenings with no truth-aptness or standards of warrant, so there can be no sense in rationally appraising them.” The judgment that donating to charity is right may consist, in part, of a belief that donating to charity has a certain property, and so at least part of this judgment would be truth-apt and rationally appraisable. Your comments here, along with Josh’s discussion of Shafer-Landau, have inspired me to write up a short post on the importance of recognizing that moral attitudes may be complex attitudes. I’ll have it up shortly.
Also, I agree with your point about the use of IBE in Mackie’s argument. Thanks for pointing that out to me. My main complaint with the argument is simply with premise (6), for the reasons I have just mentioned.
Thanks again for a good post Michael.
Hey Michael,
I am attempting to write a paper on Mackie’s error theory. I know very little about it, and my professor can’t seem to explain it in detail in simpler terms. I really loved how you made it seem simple to understand, but I’m still confused as to which part is Mackie’s linguistic thesis and which is his ontological thesis; I apparently have to distinguish between the two but don’t even know what they really are! I’m also confused about how to understand his relativity argument and his queerness argument. And I’m having a hard time finding a specific and clear objection to his error theory, probably because I find myself lost in the philosophical terms! Any help would be greatly appreciated :).