Many philosophers doubt the possibility of unknowable moral truths. E.g. Thomas Nagel said (in The View from Nowhere, p. 139):
I do not believe that the truth about how we should live could extend radically beyond any capacity that we might have to discover it (apart from its dependence on nonevaluative facts that we might be unable to discover).
But in fact, there is a simple argument — unsurprisingly, broadly Williamsonian in inspiration — that shows that there must be unknowable moral
truths.
Note (added on 10 August 2010): Jussi Suikkanen has correctly pointed out that he made what is in all essentials the very same argument on this blog over two years ago. The record also reveals that I read his post at that time (although I somehow now have no recollection of having done so). So it is Jussi, and not I, who should be regarded as the first to have presented this argument in public!
The argument rests on the following assumptions:
- Classical logic applies to moral propositions.
- Our powers to discriminate very similar cases from each other are limited; as a result, knowledge is subject to a “margin for error” principle.
- There is a continuous spectrum of possible cases, which leads smoothly from
cases where acting in a certain way (e.g. killing an innocent person) is not permissible, all the way to cases where acting in that way is permissible.
Here is one example of the kind of spectrum of cases that we need to consider.
All the cases in this spectrum involve a choice between two options:
(i) killing an innocent person, and (ii) refraining from killing the innocent
person. For every case Ci, the following case Ci+1 does not differ from case Ci at all — except that in Ci+1, killing the innocent person does very slightly more good than in Ci (and killing the innocent
person is also the only available means of achieving this good). At one end of the spectrum, we have case C0, in which killing the innocent person does no good at all. At the other end of the spectrum we have case Cn (where n is some huge number), in which killing the innocent person is the only way to save the whole world from imminent destruction.
Almost all moral philosophers, whether they are consequentialists or not, will
say that it is not permissible to kill the innocent person in case C0, but it is permissible (indeed perhaps even obligatory) to kill the innocent person in case Cn. (The only dissenters are extreme absolutists, who say that killing is always impermissible, no matter how terrible the consequences of refraining from killing might be.)
Classical logic includes the law of excluded middle (LEM). LEM tells us that
every case Ci is such that either
it is permissible to kill the innocent person in Ci, or it is not permissible to kill the innocent person in Ci.
So there must be a lowest number j such that it is not permissible to kill the
innocent person in case Cj, but it is permissible to kill the innocent person in case Cj+1. In effect, the difference between case Cj and case Cj+1 marks the threshold between the cases in which the killing is not permissible and the cases in which it is permissible.
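The threshold step can be set out more formally (a sketch; here P(Ci) abbreviates "it is permissible to kill the innocent person in case Ci"):

```latex
% Premises:
%   (1) \neg P(C_0)          (killing does no good at all)
%   (2) P(C_n)               (killing is the only way to save the world)
%   (3) \forall i\,\bigl(P(C_i) \lor \neg P(C_i)\bigr)   (excluded middle)
%
% Since the finite sequence C_0, \dots, C_n begins with a case where
% killing is not permissible and ends with one where it is, there must
% be a least j < n at which the status switches:
\exists j\,\Bigl( \neg P(C_j) \;\land\; P(C_{j+1}) \;\land\;
    \forall i < j\;\neg\bigl(\neg P(C_i) \land P(C_{i+1})\bigr) \Bigr)
```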
But we could obviously never know for certain where this threshold lies. Only
godlike powers of discrimination could enable a moral thinker to know such
things.
Moreover, even though it is permissible to kill the innocent in case Cj+1, Williamson’s “margin for error” principle guarantees that it is impossible for us to know that killing is permissible in this case. This is because case Cj+1 is so close to case Cj (where killing is not permissible) that Cj+1 falls into the “margin for error”. If you believed the true proposition that killing is permissible in Cj+1, your belief would not be “safe”: there is a very nearby case in which you believe almost the same proposition on almost the same basis, but believe something false.
So even though it is permissible to kill the innocent in Cj+1, it is impossible for you to know that it is. (Indeed, if it is reasonable for you to think that either (a) killing is not permissible in Cj+1, or (b) both killing and refraining from killing are permissible in Cj+1, it will probably be safer for you to act as if killing is not permissible in Cj+1 — even though in fact it is permissible.)
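Schematically, the safety reasoning runs as follows (a rough sketch, not Williamson's exact formulation):

```latex
% Margin-for-error schema: knowing that killing is permissible in a
% case requires permissibility in every nearby case one cannot
% discriminate from it. Since C_j is indiscriminable from C_{j+1}:
K\bigl(P(C_{j+1})\bigr) \;\rightarrow\; P(C_j)
% But by the definition of the threshold, \neg P(C_j).
% So, by modus tollens:
\neg K\bigl(P(C_{j+1})\bigr)
% i.e. although killing is permissible in C_{j+1}, this cannot be known.
```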
At all events, it seems quite clear that if the assumptions of this argument are
correct, there must be unknowable moral truths.
This is interesting, but notice that it only works for moral theories that accept the existence of some pre-moral good on the basis of which we can judge moral permissibility. For Kantians, for example, since whether some action’s consequences count as good depends on its moral permissibility, we can’t do the kind of evaluation required to produce the problem. That is, whether the products of killing an innocent person count as good at all is determined by the moral permissibility of that action.
I’m interested in the use of Williamson’s “margin for error” principle. Why does it lead us to claim that we lack knowledge, rather than to a weaker claim of fallibilism?
Finally, I’m not sure you show that there is an unknowable moral truth here. You seem to rely on a principle like the following: in cases where we cannot know for certain that killing an innocent person is permissible, we ought not kill innocent persons. Isn’t this a moral principle, and if it’s a correct one, doesn’t it provide us moral truths even in the particular sorts of hard cases you’re talking about?
A few reactions:
1) If this argument works, so will structurally similar arguments for unknowable truths about what’s enjoyable, boring, disgusting, funny, grammatical, confusing, and obvious (and also for person- and even time-relative versions of all of these, e.g. obvious-to-Ralph-at-t). This strikes me as proving too much!
2) If you choose to bite the bullet on 1, I would be led to wonder what the significance of your claim about unknowable truths of these kinds is. It *looks* like a claim with metaphysically realist implications. But applying it in all these cases suggests it has no such implications. So what difference does it make if true?
3) Relating to Pete Murray’s last point, I doubt the argument works, because I suspect that an action could only be permissible if you can know that it is. So there’s feedback from what we can know to what is in fact true in this domain. It’s therefore not true that “we could obviously never know … where the threshold lies”, and the “margin for error” principle can’t properly be applied here (nor in the other cases I mentioned, or some of them anyway).
First, I’d like to throw my hat in with Simon’s #3.
Barring that, I still think that this is not really a case of an unknowable moral truth in the relevant sense. When it is said (or at least when I say) that the moral truths must be knowable, I mean the basic moral truths: propositions about values, reasons, or whatever the basic normative entities turn out to be, whose truth does not depend on any contingent facts.
So consider the Classical Utilitarian. The CU knows (he thinks) all of the basic moral truths. There’s just one after all: utility is the good. The CU will of course not know where the threshold between Cj and Cj+1 lies (any more than anyone else). But this is only because he can never know precisely at what point killing the innocent will tip the scales of utility, so to speak.
I also throw my hat in with Simon’s #3. Perhaps this reaction suggests a response to Simon’s #2, with the help also of Simon’s #1.
The significance of the argument is that it provides a consideration against holding a view on which there’s no feedback from what we can know to what is true in the domain of morality. That is: if we grant that, on a view with no such feedback, there are some unknowable moral truths; and if we notice that there might be similar arguments for unknowable truths in other domains where the conclusion is also uncomfortable; then we might conclude not that we should accept that there are unknowable moral truths, but rather that we should give up the views that get the argument going.
In line with David’s comment (section 2), we could add that the Nagel quote is compatible with the unknowability of the exact moral threshold truth. For we can have rough knowledge of the threshold. By working from both ends and excluding cases, we narrow the span. In the end we have a rough answer that does not deviate “radically” from the moral truth.
I’ll throw a hat in for Ralph.
Consider this response:
“This is interesting, but notice that it only works for moral theories that accept the existence of some pre-moral good on the basis of which we can judge moral permissibility. For Kantians, for example, since whether some action’s consequences count as good depends on its moral permissibility, we can’t do the kind of evaluation required to produce the problem. That is, whether the products of killing an innocent person count as good at all is determined by the moral permissibility of that action.”
I pick this one because I suspect that it is one of the best responses on offer. Here’s a worry about this style of response. Ralph says that there is at least one unknowable moral truth. Suppose someone rejects this on the grounds that they accept the kind of Kantian view that rejects (i) consequentialism and (ii) pluralist non-consequentialism of the sort that Ross defended, and that (iii) denies that there can be competing imperfect duties where a person is obliged to pick one duty over the other without knowing that this is so. Such a view is clearly false. (Owing to (iii), I suspect Kant himself would agree.) So, someone can’t know that the tenets of such a view are true (owing to the factivity of ‘knows’).
I think we can have cases of conflicting p.f. duties, d1 and d2, where at one end of the spectrum it is clear that d1 defeats d2 and at the other d2 defeats d1. Somewhere in the middle will be cases where d1 defeats d2 (or vice versa) but where the correct belief that it is d1 that defeats d2 is not safe. You just run W’s anti-luminosity argument at this point to establish that there is at least one moral truth we cannot know (e.g., consider a series of increasingly dangerous neighborhoods and the plight of a parent who has to determine whether to let the kids out to play unsupervised or to keep the kids indoors where it is safer). There surely will be points in the middle where one duty defeats another but no one is in a position to knowingly judge that this is so.
This is an interesting argument, Ralph. One response would be to deny assumption 3, on the grounds that permissibility is graded, rather than absolute. But maybe this could be gotten around by being very specific about the moral proposition — perhaps something to do with the exact point at which it is no more permissible than impermissible to X.
Another thing occurred to me when reading Ralph’s post. It reminded me of van Inwagen’s Atlantis case, which forms part of his response to the problem of evil. He contends that there might be no minimum amount of evil necessary to bring about some good, and we certainly couldn’t know what the minimum was, if there were one, due to our inability to discriminate finely enough between very nearby cases. This was back in the 80s, if I’m not mistaken. So it’s Williamsonian in a pre-Williamsonian kind of way!
Several people have suggested that it is permissible to A only if it is knowable that it is permissible to A. Formally, P(A)->K(P(A)). Since K(P(A))->P(A), this implies that P(A) iff K(P(A)). Assuming that if you know that p iff q, then you can know p if you can know q, we can conclude that P(A) iff KK(P(A)).
In short, the suggestion leads to a KK principle for permissibility. KK principles are dodgy, so I don’t know why I’d expect this one to hold here. Just take some Williamson counterexample to KK (Mr. Magoo), and have something of moral relevance turn on it. I suspect P(A)->K(P(A)) isn’t going to work out.
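Spelled out step by step (a sketch, assuming closure of knowledge under known equivalence):

```latex
% 1. P(A) \rightarrow K(P(A))              (the proposed principle)
% 2. K(P(A)) \rightarrow P(A)              (factivity of knowledge)
% 3. P(A) \leftrightarrow K(P(A))          (from 1 and 2)
% 4. K(P(A)) \leftrightarrow K(K(P(A)))    (closure applied to the
%                                           known equivalence 3)
% Chaining 3 and 4:
P(A) \;\leftrightarrow\; K\bigl(K(P(A))\bigr)
% A KK principle for permissibility: an act's being permissible would
% require that one can know that one knows it is permissible.
```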
However, Simon may be right that little of realist consequence hangs on this, since you can make similar arguments with “boring”, “funny”, and “tasty for dogs”.
Pete —
You say that my argument “only works for theories that accept the existence of some pre-moral good”. I disagree. As Clayton explains, any theory according to which there are positive duties to save lives, to alleviate suffering, to develop one’s own talents, etc., will allow for the kind of spectrum of cases that leads smoothly from cases where acting in a certain way (say, ignoring someone’s plea for help) is not permissible to cases where acting in that way is permissible. All that we need is that whatever makes the act permissible or not permissible is something that comes in degrees (e.g. the number of lives saved, the amount of suffering alleviated, etc.).
The best way to understand Williamson’s “margin for error” principle is to read his book Knowledge and Its Limits, chap. 4 and 5. I won’t try to summarize his arguments here!
Simon —
I accept that the same argument can be run for any domain of truths where such a continuous spectrum of cases is possible — which is practically all domains of truths whatsoever. But I don’t think that the argument proves too much. It does indeed show that one kind of anti-realism — the kind that claims that truth is “epistemically constrained”, and cannot outrun knowability — is not true of any of these domains. But it doesn’t show that there aren’t many other kinds of anti-realism that are perfectly sound. I.e. it only refutes one kind of anti-realism, not every kind.
What the argument shows is that there actually can’t be the sort of “feedback” from what is knowable to what is true that you postulate. You can’t just assert that there is such feedback without explaining what is wrong with the argument!
Still, I agree that we would advise the agent not to kill the innocent person in case Cj+1, and so there is one sense in which he “ought” not to kill the innocent in this case. Thus, I think we have to recognize a kind of context-sensitivity in ‘ought’. You can intelligibly ask, “Given that I can’t know what I ought to do, what ought I to do?”! Somehow there must be two kinds of ‘ought’ here — roughly, one is more subjective than the other. So it can still be true that for every kind of ‘ought’, there are cases in which you cannot know what you “ought” in this sense to do.
David —
In my view, the classical utilitarian doesn’t know all basic moral truths because classical utilitarianism (CU) is false! Admittedly, though, if CU were true, then the utilitarian could know all basic moral truths (at least so long as the interpersonal comparison and measurement of pleasures and pains was a purely psychological, non-moral question).
Still, only the tiniest step away from classical utilitarianism will reinstate the argument. E.g. consider Mill’s higher- and lower-quality pleasures. There seems to be a spectrum of possible cases here. E.g. consider a series of cases ranging from pleasures that are definitely of lower quality than (say) carefully reading the New York Times over Sunday brunch to cases that are definitely of higher quality. There will be cases in which it is unknowable whether the pleasure in question is of a higher quality or not.
Ben —
You seem to think that the conclusion of my argument is so incredible that it counts as a reductio ad absurdum of its premises. But you don’t say which of the premises you reject. I have to say that the premises all seem fairly obviously true to me. So I think rejecting the premises of my argument would be far more implausible than accepting my conclusion!
Martin —
I agree completely with what you say. This is why I didn’t explicitly claim that Nagel’s claim was mistaken: Nagel never explicitly claims that there cannot be unknowable moral truths (although an incautious reader might interpret him as doing so).
Clayton —
Thanks! I agree completely with what you say here. Kantians cannot plausibly claim that there are no such cases. The idea that nothing that is important to morality comes in degrees (and so can vary continuously along a range of possible cases) seems utterly implausible to me. Indeed, it seems like a childish insistence that everything in life has to be clearly black or white, with no shades of grey in between….
John —
Strictly, permissibility isn’t graded, although there are in a way degrees of impermissibility — one act can certainly be more gravely or seriously wrong than another. Still, as you say, this point doesn’t undermine my argument.
Nick —
I completely agree with what you say here. KK principles are very dodgy, and there’s no reason to think that morality is going to be an exception!
Ralph,
Interesting example. I’m not sure about this, but I’d still be tempted to think that the moral truths in question are non-basic. The Millian believes that there are higher and lower pleasures. The onus is on him, then, to tell us which properties (and to what degree) increase or decrease the value of any actual pleasure. If he can’t do this, then he hasn’t really offered us a proper moral theory. Of course, even if he does, cases like you offer will still arise. But insofar as they do, they are really just instances in which we are uncertain of the descriptive facts regarding precisely which properties are present and to what extent.
I suspect that you will find this response unconvincing. It probably looks like I’m grasping at straws at best, or perhaps just begging the question, all to save an unfounded wish that right action is always within our reach. So I just want to say something briefly about why I (though hopefully not I alone) am so adamant that all basic moral truths are knowable. The short answer is that to me, an unknowable moral truth is not merely bizarre or unpleasant; it is conceptually incoherent. Here is a (perhaps a bit slapdash) argument:
1. If M is a moral truth, then we ought to guide our actions by M. [This is not meant to be a substantive claim; it is a claim about what it is to be a moral truth.]
2. If a truth is unknowable, we cannot guide our actions by it.
3. Ought implies can.
4. Therefore, if M is an unknowable truth, it is not the case that we ought to guide our actions by M.
5. Therefore, if M is an unknowable truth, M is not a moral truth.
6. Therefore, there are no unknowable moral truths.
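In schematic form (a sketch; read "Moral(M)" as "M is a moral truth", "G(M)" as "we guide our actions by M", "O" as "ought", and the diamond as "can"):

```latex
% 1. Moral(M) \rightarrow O\,G(M)                       (premise 1)
% 2. \neg\Diamond K(M) \rightarrow \neg\Diamond G(M)    (premise 2)
% 3. O\,\phi \rightarrow \Diamond\phi                   (ought implies can)
% 4. \neg\Diamond K(M) \rightarrow \neg O\,G(M)         (from 2 and 3)
% 5. \neg\Diamond K(M) \rightarrow \neg Moral(M)        (from 1 and 4)
% Hence:
\neg\exists M\,\bigl( Moral(M) \land \neg\Diamond K(M) \bigr)
```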
Ralph,
Thanks for your response. You say,
“All that we need is that whatever makes the act permissible or not permissible is something that comes in degrees (e.g. the number of lives saved, the amount of suffering alleviated, etc.).”
My contention is that for someone who endorses a particular brand of Kantianism, which rejects the claim that there is any state of affairs that is good without reference to how it was brought about (I believe this is Kant’s position), then there is nothing that admits of degrees in the way the argument requires.
This isn’t, I think, just a childish insistence that everything must be black and white. Rather, it is the claim that the appropriate way to measure moral permissibility does not involve in the first instance measuring how much good will be brought about, and this is because whether the outcomes of my actions count as good at all depends on the permissibility of the means I use to achieve them, and on the reasons I take that (or those) outcome(s) to be good in the circumstances.
Take Bernard Williams’ famous case of Jim and the Indians. The idea is that no matter if it is 10 or 30 Indians, the mere number of lives that would be lost isn’t by itself a measure of the permissibility of Jim’s killing one. Still, the Kantian can capture the intuition that more lives matter, but not by this kind of direct evaluation of the total goodness of saving so many lives. Rather, the Kantian requires Jim to think about what the Indians might want. If the Indians are communicating to Jim, somehow, to please take the evil captain up on his offer so as to save the lives of as many as possible, then this can count as a good reason for Jim to kill one. If however the Indians wish not to take this way out, perhaps as a symbolic gesture of solidarity, then Jim has good reason not to take the captain up.
But assume that communication is not possible, or that Jim doesn’t or can’t pick up on it. Then Jim should think about what the Indians might want, and the more of them that will be executed, the more reasonable it is to think that the former condition holds (a reasonable person, the idea is, would want Jim to kill one, even if it is them, to save very many). Now, there is certainly a kind of indeterminacy here, but I don’t think it is appropriately captured under the rubric of moral truth. There isn’t an in-principle unknowable moral truth at issue here, even if there also is no clear and obvious solution to the problem. At best, we get a framework within which to think about the problem that doesn’t reduce to simply adding up putative goods.
I ran the same argument here on Pea Soup two years ago:
http://peasoup.typepad.com/peasoup/2008/03/williamson-and.html
I think you commented as well 🙂
Ralph,
I don’t see how your argument is supposed to show in a non-question-begging way that there cannot be feedback from what’s knowable to what’s morally permissible. I said that because I suspect there is such feedback, I doubt your claim that we could “obviously never know” where the threshold of permissibility is, and I doubt that the “margin for error” principle is applicable in this domain. But that’s a rejection of your premise 2. So I’ve told you what’s wrong with your argument – it is, precisely, premise 2!
Nick,
I’m curious about your concern about a KK thesis for moral permissibility in particular. Could you fill out the argument a bit to show why accepting such a limited KK thesis would be problematic? I should clarify that I’m willing to accept (with Nagel!) that it may not be knowable whether a particular act A is permissible because it may not be knowable how the relevant nonevaluative facts stand. I want to defend the knowability of only the purely evaluative facts.
Simon —
OK, so it was unfair of me to suggest that you’ve said nothing about how you’re responding to my argument. But do you reject the “margin for error” principle across the board, or just for ethics? If you reject it across the board, you have all of Williamson’s arguments to deal with. If ethics is so extraordinarily special that it alone is exempt from the “margin for error” principle, what could possibly explain that? (Is it some radical kind of expressivism, perhaps?) I think it is fair for me to ask to hear more!
This is a quick follow up on David’s comment. It’s an interesting argument, but as I’m on the team that is manifestly confident that Ralph is right, I think any anti-Ralph argument is an argument that should be turned on its head.
The argument:
1. If M is a moral truth, then we ought to guide our actions by M. [This is not meant to be a substantive claim; it is a claim about what it is to be a moral truth.]
2. If a truth is unknowable, we cannot guide our actions by it.
3. Ought implies can.
4. Therefore, if M is an unknowable truth, it is not the case that we ought to guide our actions by M.
5. Therefore, if M is an unknowable truth, M is not a moral truth.
6. Therefore, there are no unknowable moral truths.
Here’s a stab at identifying where the argument goes wrong. Consider 1 and 2. We can distinguish between (i) guiding as conforming and (ii) guiding as complying. I think the “conforming” and “complying” talk is taken from Raz’s _Practical Reason and Norms_, but I might be mistaken. Here’s what I take the difference to be (I intend these terms to be technical terms). You _conform_ to a rule that enjoins you to V iff you V. You _comply_ with a rule, reason, or command to V only if you V, and you V *for* that rule, reason, or command.
Let the moral truth, M, be that you are obligated to be in Samarkand. Suppose you cannot know M but M is a moral truth. Let’s suppose you can get yourself to Samarkand. If M is unknowable, you might not be in Samarkand *because you complied with* whatever reasons there were by virtue of which you ought to be in Samarkand, but you can surely conform to them. You can conform to them iff you are in Samarkand. If “guide” is understood as “conform”, I think 2 is false.
Suppose we read “guide” as comply. So understood, 2 is true. So understood, 1 is false. You can be under an obligation to be in Samarkand and so obliged to act in a way that suffices to get you to Samarkand. You cannot, however, be obligated to act from a particular motive (e.g., the motive of duty). Why not? Because of 3. If ought implies can, you can be obligated to act from one motive rather than another only if you can do that. But, that’s not something you can do. Even if you can V, V-ing from some motive rather than another is not something you do. (It is something that happens when you do something, but that’s not the same thing.) So, this cannot be what your obligation is.
I can’t think of a third way of understanding “guide”, so I think the argument against the possibility of unknowable moral truths fails.
I think it’s right to argue that at least *some* moral truths must be knowable because of the action guidance aspect of morality. In fact, I gave another argument for that here on Pea Soup earlier. But, I don’t think it’s plausible to say that all moral truths must be knowable.
Here’s another way to argue for that conclusion. Consider the following two claims:
1) Julius Caesar did something wrong on his 16th Birthday.
2) Julius Caesar did nothing wrong on his 16th Birthday.
Now, intuitively and assuming the law of the excluded middle, one of these claims is true. Whichever one of them is true, that moral truth is an unknowable truth if it happens (as it probably does) to be the case that all evidence is lost about what he did on his 16th Birthday. So, it does seem like there are unknowable moral truths. Or, otherwise, certain moral truths that were moral truths stop being moral truths once we lose the evidence about the past.
Also, many expressivists like Gibbard who think that moral truths must be action-guiding accept that these kinds of past moral truths can be and are action-guiding (well, beliefs about them, more properly). So, I don’t think you get a direct argument from action-guidingness to there being no unknowable moral truths. I do think there’s a slightly more complex argument to be made for the conclusion that some moral truths must be knowable.
Ralph,
Instead of what’s morally permissible, let me introduce the concept of “the things Fred is rationally required to do”, or for short: what’s FRR. I will assume that you accept a structurally similar argument showing that there are unknowable truths about what’s FRR. And I’m pretty certain I would want to deny that conclusion, and deny your premise 2 in that case. So you ask: What’s special about FRR, such that the “margin for error” principle doesn’t apply to it, whereas we can arguably all accept that it does apply to knowledge about, for example, tree heights?
I think the answer has to be that you’re misunderstanding the concept of what’s FRR by presupposing a picture of it as a set of truths that exists prior to Fred’s coming along and discriminating between cases, as for the case of tree heights. Fred’s powers of discrimination between cases are of course still limited in the sense that he is often unsure of what’s FRR – for example: when there are considerations of very different kinds pointing in opposite directions, Fred may be unsure of whether one set of considerations outweighs the other, or whether they are equal. But crucially, Fred is not unsure in these instances because he’s trying to properly describe some fact about what’s FRR that exists prior to and independent of his actual or ideal judgments. What he’s unsure about is, rather, how to solve his practical problem about what to do.
Your argument for unknowable truths depends on the margin for error principle: Fred can’t know that doing p is FRR in a given case unless doing p is FRR in the neighbouring cases that Fred can’t distinguish from it – cases, say, where the considerations in favor weigh just slightly more or slightly less – otherwise Fred’s belief would be too unreliable to constitute knowledge. Then you need to assume that there exists a case right at the margin, where the considerations in favour have _barely enough_ weight to make doing p FRR. From these two assumptions, you can derive that although doing p is FRR in this marginal case, Fred can’t know that it is.
But how can this description of the marginal, unknowable case be a coherent description of any case of what’s FRR? After all, what I *mean* when I say that Fred is rationally required to do p is that Fred’s own reasoning should lead him to the conclusion: do p. And that’s why what’s FRR is different from tree heights. The fact that a tree is 60 inches high does not depend on whether or not Fred ought to believe it. But the fact that doing p is FRR does depend on whether Fred ought to reach the conclusion: “do p” by his own reasoning. So whenever Fred does reason in the right way to the conclusion: “do p”, it’s implausible to deny either that doing p is FRR in this case, or that Fred knows that doing p is FRR in this case. The marginal case could only be a case where, if the considerations in favor of doing p were very marginally weaker, Fred’s reasoning in the right way would produce a different result. But then the margin for error principle can’t be applied: there *are* no neighbouring cases where the truth might outstrip Fred’s ability at least in principle to recognize it. Fred *can*, at least in principle, know just where the threshold lies.
If you agree with me about the case of what’s FRR, then you could only run the Williamsonian argument with respect to moral truths by starting out with the assumption that moral truths are like tree heights rather than like what’s FRR. So I think the argument about unknowable moral truths is question-begging.
Jussi,
Since (1) and (2) are not basic in the sense I discussed, I don’t really think their unknowability is problematic. I agree with you that only some moral truths must be knowable. But it’s a specific class that must be knowable: moral propositions whose truth does not depend on anything contingent (i.e., basic moral truths). It is those truths that are meant (ultimately) to be action-guiding, and thus those that are referred to in Premise 1 of my argument.
Clayton,
This is great. Your reply is just the sort I tend to worry about (this stuff is part of my dissertation) and if you have time to read a bit more and hash it out a little, it would be great. All that is by way of explaining why the following is so absurdly long.
I think it is clear that we have a classic sort of “one man’s modus ponens…” going on. You take it that there can be unknowable moral truths, and thus you read “guide” as “conform” because otherwise there is a conflict with OIC. I, on the other hand, start from a different reading of “guide” (more on this below) and thus conclude there can be no unknowable moral truths. We may, of course, just have a conceptual stalemate. But I’d like to say a bit more before we reach that sad conclusion.
One clarification before I start: One might object to my Premise 1 because one disagrees that the moral ought is overriding (thus it is not always the case that we ought to be guided by morality). If that’s right, then we can just replace each instance of “moral” with “normative.” Since the moral is a subset (though perhaps not a proper subset) of the normative, if all normative truths are knowable, so are all moral truths. So I don’t think this will affect the argument between us in any way.
Here is an expression of the action-guiding nature of morality (similar to my Premise 1):
Premise*: The moral [or normative] facts are facts by which we ought to guide our actions.
If we read “guide” as “conform,” this is the same as saying:
Conformation: The moral facts are facts about what we ought to do.
Conformation is, of course, a conceptual truth. But I take it that the action-guiding nature of morality is stronger than this. It is not merely that we ought to act as morality dictates. Rather, we ought to be guided by morality.
One way of cashing this out is by saying that we ought to do what is right because it is right. This seems to be what you have in mind when you talk about compliance. Here’s how I understand compliance (I hope this is what you meant):
Compliance: When I act, my reason for acting ought to be *that the action I’m taking is my duty* [or “is what morality dictates,” or “is right”].
And I think you’re right: This is obviously absurd. If my Premise 1 is talking about Compliance, then it is clearly false. But (as you suggest), I deny that guidance is about either Conformation or Compliance.
Suppose I am deliberating about how to act. Proper deliberation, I take it, involves consideration of the various reasons for acting in the various ways I might act. Proper deliberation ends with my recognition that some reason (or reasons, if there can be ties) to act in some particular way is decisive. Now suppose I have most reason to go to Samarkand. The mere fact that I go to Samarkand is not enough to ensure that I have deliberated properly. Nor, even, is the fact that I have gone to Samarkand with the right reason as my reason. It must be, beyond this, that my deliberation ended with my intending to go to Samarkand for the right reason because it was the right reason. But this does not mean that the right reason is *that it is the right reason.* Rather, the “because” is about the explanation of my relevant beliefs. If S is a decisive reason for me to go to Samarkand, I act rightly only if this fact is a part of the explanation of why I believe that S is a decisive reason for me to go to Samarkand. This explanatory connection is, I take it, what action-guidance is all about.
With that in mind, I offer another argument:
1. If M is a moral truth, then M explains some possible agent’s belief that M (a belief that factors into the deliberation that leads the agent to act rightly).
2. If a truth T is unknowable, then it does not explain any possible agent’s belief that T.
3. Therefore, if M is a moral truth, M is not unknowable.
4. Therefore, there are no unknowable moral truths.
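The argument’s logical form can be sketched schematically as follows (the predicate letters M, U, and E are my own shorthand, not anything from the discussion above):

```latex
% Schematic sketch of the four-step argument.
% M(p): p is a moral truth
% U(p): p is unknowable
% E(p): p explains some possible agent's belief that p
\begin{align*}
&\text{1. } \forall p\,\bigl(M(p) \rightarrow E(p)\bigr) \\
&\text{2. } \forall p\,\bigl(U(p) \rightarrow \neg E(p)\bigr) \\
&\text{3. } \therefore\ \forall p\,\bigl(M(p) \rightarrow \neg U(p)\bigr)
  && \text{(from 1 and the contrapositive of 2)} \\
&\text{4. } \therefore\ \neg\exists p\,\bigl(M(p) \wedge U(p)\bigr)
\end{align*}
```

So the inference from 1 and 2 to 3 is a simple chain through E: any moral truth explains some possible belief, and nothing unknowable explains any belief.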
There are some sticky issues here in that the agent doesn’t really have to believe M, but only something entailed by M. For instance, the fact that X is of value might explain why I have reason R, and might explain why I believe I have reason R without my believing that X is of value. In that case, I might still be said to be appropriately guided by the moral facts. But I don’t think this issue affects anything discussed here.
The other thing, of course, is that you might just deny Premise 2. It could be that some truths explain our beliefs yet we can (modally) never know that they are true. This sounds odd to me. But if it’s right, then it doesn’t seem worrisome. It would turn out that some moral truths are unknowable, but still guide us through our beliefs in every relevant sense. In any case, I take it that the purported examples of unknowable moral truths throughout this discussion are not truths of this sort.
Hey David,
Quick follow up. I think we agree that this is trivial:
Conformation: The moral facts are facts about what we ought to do.
You want something stronger, but what I can’t tell from your remarks is whether you still endorse the idea that the reasons that settle what to do demand that you do their bidding and do their bidding for _them_.
That’s what I think is problematic. So, here’s a view. A moral reason to V, MR, demands:
(i) that you V;
(ii) that you V for the reason that MR.
Problem for the view. Suppose there are two reasons to V, MR1 and MR2. I don’t think you do anything less than MR1 requires if you V for MR2; you don’t do anything less than MR2 requires if you V for MR1; and I don’t think you “let down” either reason if you V for some non-moral reason.
If that much is right, what’s the motivation for the general picture, which is that right action requires having beliefs that “track” the moral facts?
Clayton,
Let’s take a case: Billy has a kitten in front of him and is considering whether or not to burn it. Let us stipulate that pain is morally bad. Further, let us stipulate that kittens feel pain when burned. It follows (more or less directly, depending on how you understand “morally bad”) that the fact {that the kitten would feel pain if burned} is a reason for Billy to not burn the kitten. Suppose, further, that killing innocents is morally wrong. And let us stipulate that the kitten is innocent and that burning the kitten would kill it. It follows that the fact {that burning the kitten would be killing an innocent} is a reason for Billy to not burn the kitten.
Now, as you suggest, it would be a problem if each of these reasons insisted that Billy not burn the kitten with that reason as his reason. What I don’t think is a problem, though, is to insist that for Billy to have acted rightly (which is, I take it, different from acting merely as morality dictates), it is necessary (though perhaps not sufficient) that his action be guided by some basic moral truth. As argued above, I take this “guidance” to require that the relevant basic moral truth help explain some of the normative beliefs that factor into Billy’s deliberation. And, as before, I think this means that basic moral truth will have to be knowable.
Now this is, I think, compatible with Billy’s acting for any number of reasons. Suppose Billy knows both that killing innocents is wrong and that pain is bad. And suppose that he has this knowledge because each belief is explained, in part, by the fact that belief is about. Whether Billy refrains from burning the kitten with {it would be wrong}, {it would be bad}, {it would be killing an innocent}, or {it would cause pain} as his reason makes no difference. In each case, he acts for that reason because he is guided by the moral truth.
Again, my concern here isn’t that I think morality says, “Do as I say because I say it.” Rather, I understand it to be a conceptual truth that the moral facts are facts that are meant to guide us. And this means that proper deliberation (and right action) requires that the moral facts play a role in that deliberation. The mere fact that we do as those facts dictate does not ensure that this role has been played. The facts must (in my view) explain our beliefs. And this (I take it) requires that they be knowable.
Hey David,
Quick follow up. There’s a nice paper by John Gardner that you might like in his OUP collection (“Justification and Reasons” is the paper’s title, I think). He defends a view in the neighborhood of yours, which (I take it) is that you don’t have to act for every undefeated reason, but you should act for some undefeated reason.
A worry about the view you’ve stated. You seem to grant this much:
(i) The individual reasons that bear on whether to V do not include among their demands that you V/refrain from V-ing for them.
But, you think:
(ii) When there’s a plurality of reasons to refrain from V-ing, there’s a reason to refrain from V-ing from one in this collection.
So, you don’t have to refrain from burning the cat from the reason that it’s painful, the cat is innocent, the cat belongs to your sister, etc…, but from one in that sprawl.
I’m skeptical that there is such a demand if it is not grounded in the demands of any particular reason. I also worry that, intuitively, there is nothing wrong with acting in such a way as to not burn the cat for the reason that you want to go outside for a jog. I take it that the reasons to go jogging are not reasons not to harm cats, but I worry that you might be on the hook for saying that you’ve “let down” some of the reasons if your reasons for acting all have to do with jogging. Similarly, suppose there’s no Zeus but you believe in Zeus and believe he’s a great lover of cats. I see nothing wrong with refraining from burning the cat in the belief that such an action would upset Zeus.
Clayton,
I’m sorry that I’m having a bit of trouble expressing my view; this is the first time I’ve written much of this out.
Your jogging case: If we assume that I actually have a decisive reason to go for a jog, then I don’t see any problem. I deliberate about what to do. I know (say) that I have reason at this moment to do what would be in my best interest so long as it doesn’t involve causing pain or killing an innocent and that jogging would be in my best interest and that jogging doesn’t involve causing pain or killing an innocent. And I know all this (in part) because the relevant normative facts help explain my beliefs. Well, then, when I don’t burn the cat because I go jogging with {jogging would be in my best interest} as my reason, my action is justified.
On my view, proper/right/justified action involves acting as one has most reason to act. This is not a substantive claim like (ii) regarding a “meta-reason” to act from some subset of my reasons. It is a claim about the nature of justified action. (Does it make sense to ask whether we have a *reason* to act with justification?)
But the fact that I acted as I had most reason to act does not guarantee that my action was justified. There being a reason for me to A when I A does not make my A-ing justified any more than its being true that T when I believe that T makes my belief that T justified. Something more is required. What I want to argue is that this something more involves a fact-belief explanatory connection. Since (I claim) the normative (and thus, as a subset, the moral) facts just are those facts that can guide us to justified action, they must be capable of explaining our normative (or moral) beliefs. And if they are capable of explaining our normative beliefs, it would be surprising if they were unknowable.
David,
I think part of my worry with the view is that in the case as I described it, the reason to jog isn’t/needn’t be decisive. It’s just an ordinary reason to jog. Maybe there’s better reason to cycle, keep a promise, write a check, etc… Suppose there’s decisive reason to write a check but you go jog and so leave the cat unharmed. Why would _that_ now call into question the justificatory standing of the action? True, the fact that you didn’t write a check you should have means that the action is unjustified, but the cat seems to be doing no work here.
Also, we can run the same kind of worry using facts that aren’t facts. That was the Zeus point. (I’m an atheist, so I don’t believe in Zeus.) If someone acts in the belief that Zeus requested it and so happens to do what there is decisive reason to do anyway, I guess I don’t see why it’s prima facie wrongful.
Hi Ralph,
I don’t know whether it is mentioned in the comments or elsewhere (on very quick review I did not notice it) but, for what it’s worth, Roy Sorensen has the argument in ‘Unknowable Obligations’ Utilitas (1995), 7:247-271 and Ted Sider has a reply to Sorensen here http://tedsider.org/papers/access.pdf. Maybe you’re aware of this, and maybe your argument is not quite Sorensen’s, but (no surprise) Sorensen does run an indeterminacy argument for unknowable obligations.
Clayton,
Here’s how I look at it: If there’s decisive reason to write a check, but I go for a jog, then I have succeeded in satisfying one decisive reason (the reason to do something other than burn the cat) but failed to satisfy another (the reason to write the check). I still think that I was (at least in some counterfactual sense) guided by the former reason (if I thought jogging might cause pain or kill an innocent, I wouldn’t have done it, a fact which may or may not have entered consciously into my deliberation). But my action was ultimately wrong because I failed to be guided by the reason to write the check. Is there another problem here? I fear I’m missing something. Is it just that calling it “wrongful” rather than simply “unjustified” seems inappropriate? That might be, but I think that’s just because we tend to link “wrongfulness” with other-regarding aspects of morality. But if I really have normative reason to write the check, then going for a jog was a mistake, much as burning the cat would have been.
As to Zeus: I guess what I think in that case would depend on whether I am culpable for having a false belief in Zeus. Suppose I am justified in believing in Zeus. And I am justified in believing that Zeus’ command is the right-maker. Then it seems my action would be justified, because I’m doing the best I can do to be guided by (what I justifiably take to be) the normative facts. But of course I haven’t really been guided by the normative facts at all, because there is no Zeus, and thus Zeus is not the right-maker.
This seems to me parallel to how we talk about epistemic cases (though you’d know better about this than I). The Zeus case is like a normative Gettier case. I might justifiably believe it is 2:30, and it is 2:30, but because the clock is wrong my belief isn’t connected up to the truth about what time it is in the right way, and thus I don’t have knowledge. Analogously, I justifiably do as I actually have most reason to do (don’t burn the cat). But my action isn’t connected up to the true normative facts in the right way (because I cite Zeus), and thus I haven’t acted rightly.
Quick follow-up: Just to clarify, I think that the false normative beliefs (like that Zeus is the right-maker) that play a role in these “normative Gettier cases” have to be rationally resolvable (so on some level one would always be culpable for having such a belief).