Suppose that a reliable Oracle tells us that there’s a non-natural property shared by most but not all of the things we antecedently believed to be bad (and that no other non-natural property more closely tracks our beliefs about badness). What’s a non-naturalist to do? Should we conclude that the Oracle is talking about badness, and revise our normative beliefs accordingly? Or revise our belief in non-naturalism so that no normative revisions are necessary? Presumably it depends upon the details, i.e. on how confident we are in the normative belief that would be revised vs. how confident we are in our non-naturalist metaethics. I’d certainly sooner give up my non-naturalism than believe that torturing animals is okay, for example. But I think there are at least some cases where it’d be reasonable to revise my moral beliefs instead (e.g. if the alternative view struck me as substantively plausible enough to be a serious contender for being the moral truth of the matter, even if it wasn’t what I was antecedently inclined to believe).
In his new paper, ‘A Dilemma for Non-Naturalists’, Matt Bedke argues that this makes me immoral. In this blog post I want to establish two things: (1) that some such moral belief revisions can be perfectly reasonable and innocuous, and (2) that the fundamental structure of the ‘dilemma’ has nothing to do with non-naturalism. It can be generalized to apply regardless of one’s metaethical view.
The key feature of the scenario is that it seems to the agent independently likely (bracketing any concerns they have about the implied moral revisions), but not certain, that the property the Oracle is talking about is the normative property of badness. I call such testimony “ambiguously normative”. Ambiguously normative testimony does not require non-naturalism. Consider the following two scenarios:
(A) The Oracle tells you that one of your concepts is such that its true extension is as follows: [she goes on to list all the things you antecedently believed to be bad, with one exception, as before]. How confident should you be that the concept she’s talking about is your concept of badness? Is there any chance you should revise your moral beliefs on the basis of her testimony?
(B) The Oracle tells you that she flipped a 100-sided die. If it landed on any of 1–99, she uses ‘F’ to mean bad. If it landed on 100, she uses ‘F’ to mean something different. She goes on to tell you that [all those things as listed before] are F. How confident should you be that ‘F’ means bad? Is there any chance you should revise your moral beliefs on the basis of her testimony?
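To make the structure of scenario B explicit, here is a rough Bayesian sketch (the only number fixed by the setup is the 99/100 prior; the rest is schematic). Let H be the hypothesis that ‘F’ means bad, and E the Oracle’s testimony:

\[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}, \qquad P(H) = \tfrac{99}{100}. \]

Here P(E | H) turns on the chance that one of your antecedent badness beliefs is mistaken, while P(E | ¬H) turns on the chance that some other meaning of ‘F’ would yield an extension this closely matching your badness beliefs. How much (if at all) you should revise depends on the ratio of these two likelihoods.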
Clearly, scenarios A and B are puzzles for everyone, not just non-naturalists. That is, there is a general puzzle about how we should respond to ambiguously normative testimony. One can generate specific cases that are only ambiguously normative for non-naturalists, but that doesn’t create a special problem for non-naturalists, unless there’s some reason to think that the correct general solution won’t carry over to that specific case.
Bedke (in section 2.1) argues as follows for the immorality of revising our moral beliefs on the basis of testimony regarding non-natural properties:
The repugnance is similar to the repugnance of thinking that someone’s pain does not matter because they have a particular genetic heritage, or certain color of skin, or because the winds on Mars are swirling in one way rather than another. Just as we morally should not change our normative views about pain based on those considerations, so too we should not change our normative views based on the presence or absence of a non-natural property. […] It makes one’s moral views objectionably hostage to a peculiar metaphysical fortune. One should not have moral views that aim to track the patterns of a non-natural realm, however those patterns turn out.
Of course, we should not think that any non-natural property is a necessary bad-making feature (without which the natural stuff is insufficient to qualify as mattering), the way the cartoon racist thinks of skin color. But non-naturalists don’t think that. We think that the features identified in our axiology — pain and such — are sufficient for badness. Since we think badness is a non-natural property, we thereby also think that pain and such are sufficient for having a non-natural property. But it’s the pain, not the property of badness, that provides the normative explanation of the state’s badness. (Badness is what the pain is qualifying for, not what does the qualifying.)
It’s also important to stress that non-naturalists don’t “aim to track the patterns of a non-natural realm, however those patterns turn out.” Our credences in animal torture being okay, even conditional on whatever non-natural patterns you like, can be as low as you like. We may be certain of some bedrock normative assumptions, and less than certain of our non-naturalism, and hence be willing to give up our non-naturalism in the event that no non-natural pattern is available that matches what we know about badness (say).
There’s a formal sense in which, for as long as we accept non-naturalism, we aim to have moral beliefs that “track the pattern of a non-natural realm.” But this means no more than that (i) we aim to track badness, and (ii) while we accept non-naturalism, we believe badness to be a non-natural property. There is nothing so obviously objectionable about this.
Similar claims about formal aims can be made regardless of one’s metaethical views. There’s a formal sense in which, inasmuch as we believe that ‘F’ denotes badness, we aim to have moral beliefs that track the things the Oracle asserts are F. But this sort of merely formal “tracking” of a mere placeholder for normativity surely cannot be objectionable. We obviously don’t believe (in cartoon-racist-like fashion) that others only matter when and because they have the semantic property of being referred to in this way by the Oracle. To attribute such a pattern of concern to us would reveal a serious confusion about the nature of these merely formal aims.
But that isn’t to say that such formal tracking is entirely toothless (or “maximally fragile”, as Bedke suggests on the non-naturalist’s behalf, towards the end of section 3). Sometimes revising moral beliefs on the basis of ambiguously normative testimony may be called for. Just consider a view that you antecedently reject but nonetheless consider a “live candidate” for taking seriously: prioritarianism, say. If the Oracle told me that there is a non-natural property that closely matches my normative beliefs, but matches prioritarianism even better, I would take that to be grounds for concluding (i) that the Oracle is talking about a normative property, and hence (ii) that prioritarianism is (somewhat surprisingly) the correct normative view. (I suggest a more complex example, involving chickens, survival, and personal identity, in my old post on ‘Intelligible Non-Natural Concerns’ — this also made it into section 1.1 of my paper, ‘Why Care About Non-Natural Reasons?’)
And again, if you aren’t a non-naturalist, you can consider a more neutral form of ambiguously normative testimony, along the lines of scenarios A or B above. Suppose that the Oracle reports something about a superficially non-moral domain (e.g. concept extension), but (i) it seems independently likely that she’s actually referring to badness, just indirectly; and (ii) if so, the resulting ethical claims are substantively plausible, even if not ones you were antecedently inclined to accept. In such a case, it seems to me perfectly reasonable for you to conclude that she really is referring, indirectly, to badness, and hence the resulting ethical claims are correct. Indeed, what basis would you have for thinking otherwise? Do you think it is impossible to refer indirectly to badness? Or are you so certain of all your normative beliefs that you cannot imagine revising any of them on the basis of (seemingly extremely reliable) indirect testimony?
[Cross-posted from philosophyetc.net]
I haven’t read Matt’s paper, so perhaps this is addressed there, but presumably we don’t take the Oracle to be infallible, just reliable (as you mention in the first sentence of this blog post). If so, surely the most plausible response is conservative: you believe that, given your significant agreement, the Oracle *is* talking about a non-natural property, namely badness, and that the disagreement could well be a function of her mere reliability (as opposed to infallibility). Then you can hold onto all of your antecedent beliefs about badness. The plausibility of this position should depend on, inter alia, how reliable you take the Oracle to be and how much disagreement you have.
If the Oracle is taken to be infallible (and not merely reliable), however, I do think revising your first-order beliefs is rationally required. But then it doesn’t seem so immoral.
So what is your assessment of the Oracle’s propensity towards accurate testimony supposed to be?
Hi Kian! Right, I was implicitly assuming infallibility, for simplicity. Otherwise, as you say, there is a third kind of revision available, namely revising downward your confidence in the Oracle’s accuracy. But supposing that the Oracle has a sufficiently robust track record of reliability, we presumably can’t (reasonably) be *certain* that they’re inaccurate now, suggesting that at least some degree of revision in our credences may be called for. (IIRC, Matt thinks it’s immoral to revise one’s first-order credences even slightly in response to this kind of testimony, which again I’d want to resist along the lines suggested in the main post. Glad to hear you share my sense that this needn’t be immoral!)
Great, so let’s suppose the Oracle is reliable. Then I think the kind of revision needed is not necessarily your third option (i.e. downgrading the reliability of the Oracle), but rather to retain your belief in the Oracle’s reliability and hold that some of the points where your beliefs disagree with the Oracle’s testimony are the product of chance (even a highly reliable Oracle will sometimes get things wrong, and I have a *lot* of beliefs about what is morally bad). So, for instance, suppose it is super duper reliable and gets 99% of its testimony right: reliable on any interpretation of ‘reliable’! I still have many more than 100 beliefs about badness. If it is working exactly as normal (which statistically is the most likely outcome, but by no means guaranteed), it would still get about 1 in 100 things wrong. If we only disagree on a couple of things, then this is perfectly possible. Now of course, it could be performing better or worse in these cases than its long-term reliability, but that can be factored in using standard methods as well.
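To put rough numbers on this (treating the testimony as, say, n = 100 independent claims, each with probability r = 0.99 of being correct; the independence assumption is an idealization of mine):

\[ \mathbb{E}[\text{errors}] = n(1 - r) = 1, \qquad P(\text{at least one error}) = 1 - 0.99^{100} \approx 0.63. \]

So at least one point of disagreement is more likely than not, and a couple would be entirely unsurprising, even while taking the Oracle to be exactly as reliable as advertised.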
Hmm, do you think there is any possible way to set up the scenario such that you would be confident (or at least give significant weight to the possibility) that the Oracle is *accurately* reporting on the distribution of non-natural properties despite these not exactly matching up to your antecedent normative beliefs?
I’d been thinking of the Oracle’s testimony as a single “unit”. Suppose that 99% of the time when the Oracle has given similarly detailed testimony, it has been correct in its entirety (not decomposable into atomic claims 1% of which were incorrect).
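To make the contrast explicit (this is my gloss on the two readings): on the single-unit reading the relevant figure is simply

\[ P(\text{entire testimony correct}) = 0.99, \]

rather than 0.99^n across n separable claims. So if the testimony conflicts with one of my antecedent beliefs, I can’t chalk that up to expected per-claim noise: either the testimony is wrong in its entirety (a 1-in-100 chance) or my belief is.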
Okay, so we have a difference here. It seems weird to me to think that the Oracle’s testimony is a single unit. If someone tells me their normative beliefs, they will cover all sorts of topics and principles. I have little reason to think that all of those beliefs are required for the others (for one thing, they may conflict or be in tension!). Nor do I have reason to think that they are all equally likely, let alone a package deal. People have reflected differentially on different normative domains.
Regardless, I think you’ve pinpointed our difference.
Suppose the Oracle just tells you the following: “The non-natural property that most closely matches your beliefs about *badness* is not instantiated by [X].” That seems like one claim.