Even though I’m not a real expert on his work (and his new book is way too expensive), I’m a huge fan of Timothy Williamson. The part of his work I know best is his comments on the anti-realism vs. realism debates of the Dummett and Wright kind. I want to reconstruct his argument from the margin of error for realism (i.e., the anti-luminosity argument) and apply it in the moral realm to argue for moral realism. I then want to ask how we should react to this argument.

     Let us begin by assuming that there is a sorites sequence of a thousand lies. Let us say that the first lie, lie1, is definitely bad – one lies to a friend about whether an electric wire is live in order to laugh at the funny way in which he gets fried. The last lie, lie1000, is definitely at least morally neutral and perhaps even good – one lies to a friend about her outfit in order to make her feel good. In between these lies, there are 998 lies where the circumstances of the lies change ever so slightly, so that the difference in badness between any two consecutive lies is imperceptible in appearance.

     Assume also that there is moral knowledge. Furthermore, assume that we know that lie1 is bad (and that lie1000 is not bad). Assume also, perhaps more controversially, that knowing that lie1 is bad requires that one is ‘reliably right’ about the badness of lies. This reliability, furthermore, requires that in all relevantly similar lying-cases that could easily arise and which ‘one could easily fail to discriminate from the given case, it is true that’ these lies are also bad. Thus, reliable knowers in a certain domain have an outright belief that p in that domain only when it is true that p.

     Let realism be ‘the claim that the alethic status of propositions is determined independently of the intentional attitudes taken by actual or ideal persons towards such propositions or the states they represent’ (Shafer-Landau, ‘Vagueness, Borderline Cases, and Moral Realism’). Moral realism, then, is realism about morals. It seems that if there were moral truths which no one could even in principle know, the alethic status of moral propositions would have to be independent of our intentional attitudes towards them. Thus, unknowable moral truths would be sufficient (but not necessary) for moral realism, whereas it would be necessary (but not sufficient) for moral anti-realism that moral truths be evidentially constrained.

     Now, given the earlier assumption, I know that lie1 is bad. Because the difference between the badness of lie1 and lie2 is imperceptible in appearance, I’m disposed to believe that lie2 is also bad. These two facts, together with the fact that knowledge requires reliability (believing that p only when it is true that p), mean that it must be true that lie2 is bad. Otherwise, I would not be reliable. For a reductio, let us assume that moral anti-realism is true: there are no unknowable moral truths. This, and the fact that it is true that lie2 is bad, imply that I must be in a position to know that lie2 is bad. Assuming that I consider whether lie2 is bad, I will then come to know that it is.

     We can then repeat the argument. I now know that lie2 is bad. Given that it and lie3 do not differ in appearance, I will also believe that lie3 is bad. Given that knowledge of the badness of lie2 requires reliability, it must be true that lie3 is bad. And, again, given anti-realism, I will know that it is. Repeating this argument eventually gets us to my knowing that lie1000 is bad. Thus we end with a contradiction – we assumed from the start that it isn’t bad. We have reached the sorites paradox.
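
     The structure of the regress can be put schematically. Let B(n) say that the nth lie is bad, and K(n) that I am in a position to know that the nth lie is bad (the shorthand is mine, not Williamson’s):

     (1) K(1) [the moral knowledge assumption]
     (2) K(n) → B(n+1) [reliability: knowledge leaves a margin for error]
     (3) B(n) → K(n) [anti-realism: no unknowable moral truths, given that I consider the question]

     From (1) and (2) we get B(2); (3) then gives K(2); iterating this pair of steps 998 more times yields K(1000), contradicting the starting assumption that lie1000 is not bad.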

     Williamson’s own way of stopping this argument is to deny anti-realism. If realism is true, then along the sequence there can be lies of which we cannot know whether they are bad. Thus, I can, for instance, still know that lie345 is bad. I’ll still be disposed to believe that lie346 is bad. But, given realism, I cannot know that this lie is bad. It does not have to be the case that lie347 is bad even if I might be disposed to have some degree of belief that it is (it could be perceptibly different from lie345, which I’m certain is bad). Given that I only have some degree of belief in the truth that lie347 is bad, even if this lie is not bad, this does not take away my reliability as a knower of the fact that lie345 is bad.
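
     In the schematic terms above, the realist simply drops premise (3): B(n) can hold without K(n) holding. K(345) together with (2) still yields B(346), but nothing now licenses the further step to K(346), so the regress halts at the first moral truth in the sequence that cannot be known.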

     Now, I’ve been thinking of the possible reactions we could have to Williamson’s argument. Here are the options I have thought about:

     1. The argument never works because there are global problems with its premises (and thus it fails as an argument for moral realism). One could argue, for instance, that knowing the truth of some proposition does not require reliability in other cases (see, for instance, Sosa on animal knowledge).

     2. The argument does not work here because there are local problems with its premises in the moral case. One could deny that there is moral knowledge and thus that we know that lie1 is bad. One could also deny that one can construct sorites sequences in the moral case. This seems difficult. As Shafer-Landau argues, moral properties ‘are multidimensional, i.e., depend on the satisfaction of a number of distinct constitutive criteria for their instantiation’. These constitutive criteria are based on the non-moral properties of the actions. Given that we can construct sorites sequences for these properties, we should be able to construct sorites sequences for the moral properties as well.

     3. Anti-realism can accommodate unknowable moral truths. Thus, the argument cannot establish moral realism even if it does establish that there are unknowable moral truths. So, assume for example that whether an action is bad depends on whether a fully rational agent who knew all the relevant non-moral truths and had a maximally coherent set of motivations would advise against the action. On this view, whether an action has a moral property depends on the propositional attitudes of an ideal person. This implies moral anti-realism. But, given that we cannot know everything that a person idealised to this degree would advise, we could not know all moral truths even if anti-realism were true.

     4. It seems that we can accept that knowledge requires reliability in other cases, but only in other ‘good’ cases – just as long as the beliefs in the rest of the resembling cases are not faultily mistaken. So, assume some sort of response-dependence view about badness: X is bad iff all normal persons in normal circumstances judge it to be bad. Assume that all normal persons (add your favourite description of her) judge lie1 to be bad. Therefore, it is bad. At some point in the series, there is a first lie about which the opinions of the normal judges diverge. This lie is the first one that is not definitely bad (even if it is not definitely not bad either). Now, as a normal judge, I can be disposed to believe that this lie is bad. But, given that it is not true that it is not bad, I can do so faultlessly. Thus, my status as a reliable knower of the badness of the previous lie is not in question. (A rough schematic of this view follows the list.)

     5. We should take it seriously that there is a case for moral realism on the basis of Williamson’s argument.  
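
     Here is a rough schematic of the response-dependence view behind option 4 (my reconstruction, not a quotation from Wright): ‘x is bad’ is true iff all normal judges judge x bad, false iff all normal judges judge x not bad, and neither true nor false where their judgments diverge. So where the Nth lie is the first on which the judges diverge, ‘the Nth lie is bad’ is neither true nor false; believing it is therefore not believing a falsehood, the belief is faultless, and my reliability about the lie just before it is untouched.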

     Any thoughts about where we should go?

17 Replies to “Williamson and (Moral) Realism”

  1. My initial reaction is that the sorites here is like the sequences of shades between blue and green, or shapes between squarish and roundish. There are points where I don’t know what to say about the situation, but I’m inclined to think that’s because the situation is vague, not because I grasp the concepts (good/bad, blue/green, square/round) imperfectly. I’m not sure which of your responses this fits into.
    It also occurred to me that there are people who think all lies are wrong, so this may not be a great example. But you could run the argument with less controversial examples: giving $1 to a beggar is good; giving him my entire net worth is not good (I need to provide for my family); somewhere in the middle things change over.

  2. What happens if we try to apply the argument to a case where anti-realism is presumably correct (e.g. replace ‘good’ and ‘bad’ with ‘judged +/-vely by ideal rational agents’, or some such)? If the argument is sound then it shouldn’t overgeneralize like this. But does it? (I can’t immediately see where it would fail to carry over…)

  3. Heath,
    I think your idea is close to 4. Accepting metaphysical vagueness is quite controversial, though, and so I tried to do without it in the form of anti-realism I had in mind. You might be right about the example. But I could have picked a better example of the morally not-bad lie. For instance, it seems that Kant is amongst the very few who think that lying to the murderer at the door is bad.
    Richard,
    I think Williamson does want the argument to have a very general scope – pretty much all domains where one can speak of knowledge. So, it is supposed to go even for self-knowledge about one’s own mental states like ‘I’m in pain’ or ‘I feel cold’ (his examples). In making these judgments, there needs to be a similar margin for error that can only be provided by unknowable facts. Thus, I think he would go for realism about the property of being judged positively by ideal rational agents. As a result, whether something is judged positively by ideally rational agents would not be dependent on the attitudes ideally rational agents have towards their own judgments.

  4. Jussi —
    I agree with practically everything that you say here. If Williamson’s anti-luminosity argument works (and I think it does), it works across the board — nothing prevents it from applying to ethics just as much as to everything else.
    So really I think that of the five responses to your argument that you mention, only options (3) and (5) have anything to be said for them. Either way, we will have to dump the “epistemic” forms of anti-realism. (We will also have to disagree with Thomas Nagel when he says that “moral truth cannot outstrip our ability to know it”.)
    By the way, I don’t understand how your response 4 is supposed to work. Given classical logic, every lie is either bad or not bad. (That’s just a generalization of the relevant instances of the Law of Excluded Middle. You might want to tamper with classical logic in some way, but if so you need to tell us how.) So according to the sort of response-dependence that you’re considering, the first lie where the opinions of the normal judges diverge is not a bad lie. Suppose that the last lie that is (definitely) bad is lie412. According to anti-realism, it must be knowable that this lie is bad. But intuitively, this can’t be knowable, since lie412 is so incredibly close to lie413, which is not bad. If you believed that lie412 is bad, your belief would be far too close to error to count as knowledge. So it seems that if lie412 is bad and lie413 is not, then you can’t know that lie412 is bad. So there are some moral truths that cannot be known.

  5. Ralph,
    your saying that you agree worries me *a lot*. I actually defend a sort of anti-realism about wrongness that is based on an evidentially constrained notion of epistemic truth. In addition to Nagel, there are actually a lot of people in the literature defending the view that moral truths must be knowable for their action-guidingness – Jackson, for instance, if I remember correctly.
    Reply 4 is actually based on Crispin Wright’s reply to Williamson. You are right that it rejects classical logic, namely the principle of bivalence – it has to admit truth-value gaps. But I believe there are intuitionistic logics that can deal with such things. I know that some ordinary inference patterns would have to be given up, but I’m not sure how damaging this is.
    In the case you give, what is supposed to save the knowledge that lie412 is bad is the fact that, even though it is not true that lie413 is bad, believing that it is bad is not a mistake or error, because the best opinions diverge.
    Constantine,
    thanks. That’s great news – I had only seen the hardbacks, which were like £45.

  6. Option 3 looks right to me, but “unknowable” ought to be more clearly disambiguated – does it mean “unknowable by any currently existing actual human beings” or “unknowable by any idealized moral agents”? If I have the right handle on this, anti-realism entails that moral truths can be unknowable in the first but not the second sense.

  7. Dale,
    you got it right. It is the latter. However, when we idealise moral agents here, we cannot give them special faculties that normal human beings don’t have. What we can give them is being in the right epistemic position (which no actual human happened to occupy), information that normal humans could gather, time and resources for inquiry, and so on.
    There is a worry with option 3 about how anti-realist a position it would ultimately be. I take it that the motivation for anti-realism is to account for one set of contentious facts in terms of something less contentious. One way the facts the anti-realist uses in such an account can be less contentious is for them to be more easily epistemically accessible. Yet, in option 3, the ‘anti-realist’ is accounting for moral facts in terms of mental facts no one could ever know. At this point, it is questionable whether these facts are any less contentious than the original facts we wanted to give a theory of.

  8. £45?
    What is that, like, 600 US dollars?
    I’m inclined towards option 5. I am worried about in-principle-unknowable facts about what is, e.g., funny. I think I am now leaning towards there being some of those too.

  9. Jussi —
    Wrongness and badness can still be generally action-guiding even if there are a few unknowable cases of wrongness or badness that couldn’t be action-guiding. So long as we can normally know about, or at least have rational expectations of, the available actions’ degree of wrongness, that seems enough to secure the action-guiding character of wrongness.
    I have to say that I just don’t understand how Crispin Wright’s response to Williamson is supposed to work. Suppose that lie412 is bad (all suitably qualified judges agree), and lie413 is not definitely bad (the suitably qualified judges diverge) – indeed, it’s not in any way a mistake to think that lie413 is not bad at all. Then obviously, even if it’s not a mistake to believe that lie413 is bad, this belief couldn’t count as *knowledge* that lie413 is bad. Indeed, this belief wouldn’t fall just slightly short of being knowledge – it falls very far short, almost (if not quite) as far short as believing a proposition that is definitely false. So given how incredibly similar lie412 and lie413 are, surely the belief that lie412 is bad is still far too close to these borderline cases to count as knowledge.

  10. I have misgivings about what I gather is a reliabilist assumption behind Williamson’s argument (which I haven’t read, I’ll admit). Reliabilism makes sense for perception, perhaps, but it’s not clear that it’s plausible for knowledge of all other matters. If we start with an antirealist metaethics according to which the alethic status of moral propositions -is- a function of the intentional attitudes of actual or ideal moral agents, etc., then it’s not clear there’s anything out there for our perceptual apparatus to “bounce off”, so to speak. Thus, I wonder if the reliabilist assumption here is begging the question.
    To carry this a step further, if internalism about epistemic justification for our -moral- beliefs is correct, then when we cross from lie412 to lie413, aren’t we simply passing from one true justified belief to one false justified belief? How does that mean we lack knowledge concerning lie412? This seems a harmless sort of epistemic luck.
    What am I missing here?
    John K. Davis

  11. An observation: if we begin at the other end of the sequence, i.e. the neutral-lie end, we will infer from the (x+1)==x rule that all the lies are neutral. This still sets up a reductio conclusion, of course, but the fact that the status assigned to all the lies comes out diametrically opposite suggests that our rule of inference is likely faulty. I think we should disallow this premise:
    “This reliability, furthermore, requires that in all relevantly similar lying-cases that could easily arise and which ‘one could easily fail to discriminate from the given case, it is true that’ these lies are also bad.”
    We ought to substitute: “Reliability requires that very similar cases are adjudicated to have very similar status.”
    …and where the similarity relation, unlike the equality relation, is not transitive.
    This is a fairly generic reply to any ‘heap’ paradox, of course. What’s actually interesting about a ‘margin of error’ argument doesn’t seem to hinge on being embedded in a sorites sequence, however. The key issue is whether two cases with _starkly different_ actual moral status, could at the same time be _imperceptibly different_ to a qualified judge.
    The (revised) reliability premise says not. And I don’t see any problem with constructing cases with arbitrarily similar perceived moral status. Therefore: actual(!) moral status must come in values separated by no more than the margin of error of an ideally-qualified judge.
    Apparently, the margin of error argument doesn’t tend to show anything about realism or antirealism. Rather, it shows something about the phenomenon that both are addressing.

  12. Pardon my clumsiness: in the penultimate paragraph, please ignore “The (revised) reliability premise says not” and substitute, “An appropriate assumption about knower-reliability would say not.” I forgot to add it to mine 🙂

  13. Dear all,
    sorry for the lack of replies. No internet access on weekends… If anyone’s still following, here are a few thoughts.
    Kris,
    The dollar isn’t quite that weak yet. £45 is a bit over $90, which seems a lot for a book. I share your worry about funniness facts, and there are similar worries in the Williamson/Wright exchange about whether rhubarb is yummy or yucky. I think the need to make room for at least some judgment-dependent facts is behind the motivation to develop something like option 4, which I seem to be failing to explain to Ralph.
    Ralph,
    I’m not sure why the belief in the badness of lie413 falls *very far* short of knowledge (even though it does fall short). It helps me to think that, in these cases and on the view we are discussing, the best opinions are constitutive of the objects’ having the property in question. Thus, we need to be able to describe the normal judges in a conceptually independent way. Such agents have a disposition to react to the presented cases. If they react in a uniform way, then the object has the given property in virtue of this.
    Maybe in the sorites sequences such judgments start to diverge at some point. I don’t see how this calls into question the earlier judgments. Any one individual can be reliable about the earlier ones – she would not have got those wrong. In the cases where the judgments diverge, there isn’t anything to get wrong – unless one makes a performance error. You are probably still not happy with this. I’m sorry.
    John,
    I’m not sure if you are missing anything at all. I wonder if the anti-realist starting point calls reliabilism into question. If anti-realism were true, then the truth-values of moral claims would be determined by the attitudes of either actual or ideal judges. Moral knowledge, as getting moral judgments reliably right, would then consist of getting the judgments about the attitudes right (and possibly knowing that moral truths are dependent on them). The same argument would still get started.
    I guess the idea is that knowing any particular proposition to be true cannot be accidental. But it does seem accidental if one is making mistakes in similar cases.
    Patrick,
    I’m sorry, I’m not sure I follow the argument. Could you try again, a bit more slowly?

  14. In response to Jussi’s comment that, “If anti-realism were true, then the truth-values of moral claims would be determined by the attitudes of either actual or ideal judges. Moral knowledge, as getting moral judgments reliably right, would then consist of getting the judgments about the attitudes right (and possibly knowing that moral truths are dependent of them). The same argument would still get started.”
    Yes, I think this is the path a reliabilist must take here. I wonder how it would get started, though, without turning into a coordination problem where each agent tries to figure out what all other agents are thinking, and all other agents are doing the same. We might look to an ideal agent, for example, but the ideal agent probably wouldn’t look to other ideal agents. There must be something the attitudes are tracking other than other attitudes, so the reliabilist account would have to include some description of what that “something” is.
    This is not to say that a reliabilist couldn’t get around this, just that I’m inadequate to that task, and not sure how it might be done.

  15. Jussi,
    My worry about what you label as response 4 to Williamson’s argument (as applied to moral conditions like wrongness, etc.) is basically quite simple.
    Response 4 doesn’t in any way give up the idea of a sharp boundary: It’s just that the sharp boundary that is recognized by response 4 isn’t the boundary between *being wrong* and *not being wrong* (i.e. the boundary between truth and falsity for judgments of wrongness). Instead, it is the boundary between *being definitely wrong* and *not being definitely wrong* (i.e. the boundary between truth and lack of truth for judgments of wrongness).
    Presumably, any proposition that is known must be true. So no proposition that is in the grey zone between truth and falsity can be strictly speaking known.
    Now Williamson has argued, convincingly as it seems to me, that knowledge requires allowing a margin for *error* (in the sense that if the case is too close to the borderline between truth and falsity, you can’t know that the relevant condition obtains in that case). But the very same considerations seem to show that knowledge requires allowing a margin for *failure to achieve truth* (in the sense that if the case is too close to the borderline between truth and the grey zone between truth and falsity, it is still impossible to know that the condition obtains in this case).
    So this is why it seems to me that Williamson’s argument still works if we postulate a grey zone between truth and falsity, as your response 4 seems to do.
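
    To put the point in rough symbols (my shorthand): the familiar margin-for-error principle says that knowing that C obtains in case x requires that C obtains in every case y indiscriminable from x. The parallel principle says that knowing that C obtains in x requires that C *determinately* obtains in every such y. So if lie412 is determinately wrong while lie413 lies in the grey zone, the parallel principle rules out knowing that lie412 is wrong – exactly the conclusion that postulating the grey zone was meant to avoid.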

  16. Small comment. Ralph wrote:
    “Presumably, any proposition that is known must be true. So no proposition that is in the grey zone between truth and falsity can be strictly speaking known.”
    I wonder if this should be granted.
    Suppose that P entails Q. Suppose it is indeterminate whether Q. It follows, I suppose, that it is not determinately true that P. But it doesn’t follow that it is false that P. P might be indeterminate.
    Likewise, couldn’t one say about these borderline cases that it is indeterminate whether they are known?
