Sometimes you wonder whether your own philosophical
convictions block your ability to evaluate the arguments of others fairly. In
my case, my anti-relativism is one such conviction. I find it hard to be
sympathetic to arguments for relativism. I try my best to be charitable,
but I know I often fail. So I would now like some help in being charitable
about Sharon Street’s
argument against realism based on evolution. The argument is in her paper
‘Constructivism about Reasons’, which she presented at Shafer-Landau’s
prestigious metaethics workshop (the paper is still available online through PEA Soup). To me, it just seems obviously fallacious:
elsewhere, the same form of argument yields a false conclusion from analogous
premises. But I must be missing something. A small caveat – she says that her
goal in the paper is not to argue against realism (even though she does);
apparently she does that elsewhere, with more success, I hope.

The argument is supposed to be a thought-experiment. It
starts from the basic, actual evolutionary story of how life developed. It then
supposes that one day the first two valuing creatures were born. They were otherwise
identical, but due to a random change in their genes they valued completely different
kinds of things. One of them valued only its own survival and nothing else, while
the other valued only its own destruction and nothing else (I wonder on what
grounds we would attribute such valuations to these creatures, but let that drop). Both of
these beings successfully pursued what they valued: the first survived and left
offspring, while the second destroyed itself and the genes it was carrying.

Street’s anti-realism holds that it was not the case that
the first being recognised or tracked a normative truth while the second failed
to do so. Nor was it the case that the first creature survived because it
made true evaluative judgments; rather, it survived because it tended to do just what
helped its survival. Now, my realist intuition is the opposite – I believe that
the second creature failed to recognise what was valuable for it. To get me out of this intuition, Street asks
me at this point why I think this.
And her answer on my behalf is not that I recognise some true value, but
rather that I think this merely because I am a descendant of the first
being (or one like it, presumably). And this is supposed to establish that
there really are no evaluative standards; rather, evolution only explains the
ways in which we tend to go on making evaluations. Thus, realism must be false.
On this view, the possibility of error is fixed by our own evaluative judgments, not by
independent standards, and it came about only as a result of evolution.

Now, I must be missing something. I feel no pull against
realism at all as a result of the thought-experiment. I can’t see a difference
between it and an analogous case. Start from the same evolutionary story. At
some point, the first two such beings come about as a result of a random mutation. They are
otherwise identical except that they judge distances differently. Where one judges
that the river is narrow enough to jump over, the other thinks it is too wide;
where one thinks that the alligator is far enough away, the other does not. As
a result of these judgments, the second survives and the first does not. My
realist intuition again is that one of these beings gets the
distances right and the other gets them wrong.

But, if we follow Street’s reasoning in the
previous argument, we need to ask why I think this. And the answer must be the
same – I don’t recognise something about true distances, but only think this
because I’m a descendant of the second being, which judged distances in the same
way as I do. And therefore there are no standards of distance, but only
evolutionary explanations of the ways in which we tend to judge distances. But
no. The argument does not establish this. Surely, no matter what the explanation
is for our distance judgments and their reliability, there are distances in
the world, full stop. True premises, false conclusion. So the evolutionary
explanation of our evaluative judgments cannot be an argument against the
existence of evaluative truths. That conclusion must be established in some
other way. But maybe I’m missing something.

14 Replies to “An argument from Evolution against Realism”

  1. One difference is this. Think about the content of specifically moral values. They have a content that is other-directed. I find myself valuing not only my own existence, but also that of others (indeed, it may seem to me that I only value my own existence prudentially or derivatively, so that moral values are entirely other-directed). Now are they really valuable? There is a disanalogy between the survival value of judging distances and the survival value of imputing value. In the first case, my ancestors did well because they judged distances accurately. Their beliefs were useful because they were true. In the second case, however, their beliefs were useful for reasons that had nothing directly to do with their truth values. It isn’t because others really are valuable that believing that they were valuable was adaptive. It was because that belief helped in getting more copies of my ancestor’s genes into the next generation. My disposition to have moral beliefs is therefore quite independent of their truth. And that gives us grounds for scepticism.
    Here I’m just expounding (something like) Richard Joyce’s view.

  2. Jussi,
    I think the difference between the two cases (facility in judging normative truths, and facility in judging distances across rivers) is supposed to be that in the second case, there is a plausible evolutionary story to tell about why a process of natural selection would favour the development of such a facility—a story in which it is essential that the facility is an ability to get things right—i.e. to make accurate judgments. In the first, by contrast, it is alleged that there is no plausible evolutionary story. The explanation of why creatures like us tend to value their own survival is the same as the explanation of why creatures like us tend to value the survival of other such creatures (and thus to feel sympathy, have parental instincts, etc.): both of these tendencies make creatures like us more likely to survive. That the ethical ‘beliefs’ in question are true, on the other hand, adds nothing to the explanation: tending to value one’s own survival tends to increase the likelihood of one’s survival whether or not ‘one should value one’s own survival’ is a normative truth.
    Of course, whether or not it contributes to the explanation, one is tempted to insist that ‘one should value one’s own survival’ is a normative truth, dammit! – and the fact that it does no explanatory work doesn’t imply that it isn’t such a truth. But this is where Street will point out that of course we are tempted to say this; after all, we descended from a long line of creatures who tended to think the same thing. So the anti-realist argument rests on three related claims: (i) you shouldn’t expect to find complex abilities in evolved creatures that don’t somehow contribute to survival; (ii) tendencies to accept what we call ‘ethical’ or otherwise normative ‘truths’ do not contribute to survival; (iii) our tendency to think that these things really are ethical truths can be explained away (such beliefs themselves have survival value).
    Now let me suggest a criticism. The argument only succeeds if (i) and (ii) are true; and I think both are questionable. The ability to understand Kant’s Critiques or higher mathematics, or the ability to compose the perfect three-minute pop song, has no obvious survival value; yet such abilities exist among human beings and need to be explained. Of course, one could argue that they are simply extensions of more mundane abilities that do make contributions to the likelihood of one’s survival. But this case needs to be made out in a convincing way; and at the same time, it needs to be shown that a similar case cannot be made out with respect to morality. And why think this? Suppose that the concept of the good for x, where x is some creature, can be made coherent through some sort of philosophical analysis. (This is debatable, but the issue is independent, I think, of the evolution argument.) If so, then surely the ability to make accurate judgments concerning one’s own good is a useful ability for a creature to have, in terms of survival value. (A creature that can’t distinguish between what will nourish it and what will poison it isn’t long for this world.) The ability to make accurate judgments concerning the goods of creatures like oneself (particularly close family members or neighbors) will also have survival value in social creatures like ourselves. And the ability to grasp more subtle or complex ethical facts (not concerning one’s own good or the good of those close to one) may arise as an extension of these ‘ground-level’ ethical abilities.
    Or will Street claim that her argument is supposed to cast doubt on whether there is such a thing as the concept of one’s good, which a creature might develop an ability to grasp? My intuitions, like Jussi’s, may be too unsympathetic to anti-realism here – somebody else needs to help!

  3. Troy,
    The problem with comparing morality to the ability to read Kant – arguing that neither has any survival value, yet that this doesn’t make us sceptical about the value of Kant – is this: reading Kant, doing maths, and so on, are all uses of what psychologists call system 2 processes: slow, effortful and rational processes that are extremely flexible and creative. And we can tell a story about how system 2 processes develop, for quite different functions, and can then be utilised for new purposes. But morality at base level – our moral intuitions, in particular – looks like a system 1 process. Indeed, there is evidence (which I won’t review, unless you really want to know) that it is a system 1 process that keeps on doing its thing all by itself, without the interference of system 2 rational processes. So it looks far more like something that is directly the product of evolution. And that’s what makes worries about realism plausible (for what it’s worth, I think the way to confront them is by some kind of response-dependence story, but that’s another issue).

  4. Neil,
    Can you say a bit more about the distinction between system 1 and system 2 processes? I’m suspecting that I’m going to dispute the contention that moral thinking is system 1, given the kinds of examples you give of system 2; but I need to hear more about what the terms mean first.
    Troy

  5. Troy,
    A system 1 process is fast, automatic, probably has an evolutionary basis, and doesn’t degrade under cognitive load (i.e., when the subject is given a distractor task). It is usually more or less universal. System 1 processes are capable of inferences. It’s these processes that make people both so bad at statistics and so certain that they’re not bad. They have system 1 processes that tell them it’s more likely that Linda is a feminist bank teller than a bank teller, for instance – even though a conjunction can never be more probable than either of its conjuncts.
    System 2 processes are slow, effortful, use lots of cognitive resources, and therefore are prone to degrade badly under cognitive load. Of course, when people do moral philosophy they’re using system 2. But the evidence is very strong that the basic intuitions are generated by system 1. People pretty much universally judge cases in the following way: harms caused by actions are seen as worse than those caused by omissions, harms caused intentionally are seen as worse than harms that are merely foreseen, and harms caused up close are seen as worse than those caused at a distance. Since people (a) can’t justify these judgments in terms of the principles operative in them and (b) continue to give these judgments under cognitive load (indeed, even those who deny these claims revert to them under cognitive load), it seems that the basic moral intuitions are system 1.

  6. I think Troy answers your question correctly when he says that the disanalogy is that it’s hard to find a compelling evolutionary story about why our moral intuitions would reflect truth. In contrast, finding such a story about our judgement of distances is easy.
    I’ve just finished writing my (MA) dissertation, drawing to some extent on Street’s “Darwinian Dilemma for Realist Theories of Value”. I haven’t read the paper of hers you mention, but it sounds very similar to her Darwinian Dilemma.
    What I found suspect was this: the argument establishes a conclusion about the likelihood that our normative beliefs reliably reflect truth (i.e. there’s no compelling evolutionary story to tell as to why our moral intuitions would reflect truth). But that’s an epistemic issue entirely distinct from the ontological (/semantic) debate over realism/anti-realism.
    Why not accept her argument as supporting the claim that our everyday moral intuitions are likely to be unreliable, but nonetheless maintain the view that there are actual moral truths to be found? (Utilitarians: Rejoice!)

  7. I’m trying to track the reasoning in the thought-experiment. There either are or aren’t moral facts.
    1. Suppose that believing and acting in accordance with what would be moral facts (if there were any) has survival value.
    In this case, whether or not there were moral facts, we would have strong evolutionary reasons to believe that there were. But then what the argument shows depends on one’s antecedent assumptions about whether there are moral facts. If you assume there are, then it is getting the moral facts right that has survival value. If you assume there aren’t, then getting the moral facts right has nothing to do with surviving.
    2. Suppose that believing and acting in accordance with what would be moral facts (if there were any) does not in general have survival value.
    If there are moral requirements that are believed and observed, but have no obvious survival value, then I think the thought-experiment fails. Perhaps the view that I would be especially moral were I to sacrifice more for others is an instance of such a moral view. I can see why I would value others’ sacrificing: such behavior helps my survival. But what about when moral action conflicts with self-interest, as it so often does? I don’t see why I would, on grounds of my own genes’ survival, see great moral value in sacrificing myself for others. But the fact is that everyone does see the moral value in that.

  8. Let me see if I can tease out the relevant disanalogy between the two cases in a way that doesn’t rely on Street’s “intuition of the survivor’s descendants” reasoning.
    In each scenario, you actually have two separate psychological processes in play; in the Alligator Jumper case, these are distance evaluation and the desire not to drown or be eaten by alligators, the latter of which is shared and held constant between individuals. Whereas in the self-destructing monkey case, both individuals presumably have the same beliefs about the relationship of objects in space, their pointiness, and the consequences of coming into high-velocity contact with that pointiness. It’s just that one of them has a different motivational set.
    So in Alligator Jumper, if we hold the motivational set constant (and assume that both individuals want to not die), if Wrong Distance Judger becomes convinced that his factual belief is in error, then Wrong Distance Judger will refrain from the behavior that would otherwise get him killed. Whereas, in Monkey Self Destruct, holding the motivational sets of each constant, if Suicide Monkey was convinced of the factual belief that suicide was not “good for her”, no change in behavior would result — since in order to remain analogous to the other scenario she still desires what is “bad for her”. The intuition here is supposed to be that the Realist is unable to articulate what it would mean to say that one had convinced Suicide Monkey that suicide was “bad for her”, given that the result would remain unchanged.
    This is what I take to be Blackburn’s meaning when he says (paraphrasing from memory) that the teleology of spatial perception is ineluctably spatial, but the teleology of morality is not ineluctably moral. Since shades of non-realism are shades of rejecting or relaxing this or that component of robust objectivity, what’s supposed to be attractive for a non-realist here is the prospect of eliminating anything but an agent’s motivational set from an explanation of their behavior vis-a-vis normative evaluations.

  9. Mike,
    I’m a bit confused by your account; but aren’t you just assuming what many realists will deny, that an agent’s motivations can remain unchanged even as her beliefs about the good change? If Suicide Monkey becomes convinced that suicide is not good for her, then of course her behavior will change! Unless, of course, we are imagining her as suffering from a severe case of akrasia or something like that—but that’s not the sort of case we were initially imagining; and besides, one could run the same problem against realism about ordinary facts, by appealing to a slightly different form of akrasia. If Wrong Distance Judger changes his beliefs but isn’t motivated to act on his new beliefs, then his behavior won’t change; but surely it would be fallacious in this case to argue against the Realist about Physical Distances that she “is unable to articulate what it would mean to say that one had convinced [Wrong Distance Judger] that [the opposite bank was further than he thought], given that the result would remain unchanged.” Or have I misunderstood the argument?
    “Wrong Distance Judger,” by the way, is one of my favorite Moody Blues albums.

  10. I think there is something to Alex’s suggestion that what the evolutionary argument shows is that we may have reason to be skeptical, or at the very least cautious, with respect to at least some moral intuitions, but that what it does not show is that we should be skeptical about all our moral intuitions and judgments—and if this is so, then it really isn’t an argument against moral realism.
    After all, consider the analogy with statistical intuitions. Neil points out that many people’s system-1 derived intuitions support the view that “it’s more likely that Linda is a feminist bank teller than a bank teller.” What’s interesting about this case is that obviously, there is no suggestion here of being an anti-realist about statistical facts, or even a deep skeptic about our ability to arrive at true beliefs about statistical facts. Rather, the account suggests only that in certain particular cases, our untutored intuitive responses are likely to go astray.
    The anti-realist position demands much more than this—not just that some intuitive responses may lead us astray if we’re not careful, but rather—well, there’s no uncontentious way of putting this, but something like: that none of our intuitions in this area reflect reality, though they appear to do so. Imagine concluding that in the statistical case. Then again, imagine concluding it in the moral case. Take my intuitive judgment that intense physical pain is, in most if not all circumstances, an intrinsically bad thing. Frankly, if somebody told me that this intuitive judgment was mistaken, or a mere product of evolution that expressed no truth, well, I don’t think I know what a person could even mean by such a claim.
    Notice, by the way, that the analogy with ‘statistical realism’ casts some doubt on Alex’s suggestion that the evolutionary argument favors the utilitarian. After all, in the statistical case we may be misled in our initial reactions; but when we stop and really think about it (or, when someone who understands statistics better than we do explains it to us) we can see what is wrong with our immediate reaction and at that point no longer feel its pull (though it may come back if our attention wanders). With respect to anti-utilitarian intuitions, on the other hand, I don’t think we observe the same phenomenon—at least not in a manner that takes us all the way to utilitarianism. Perhaps some of our initial intuitions—the idea that morality asks nothing of us other than to respect the negative rights of others, for instance—are indeed undermined by ethical arguments in a way analogous to the statistical case. But as for the intuition that I have reason to be more concerned with my own conduct than that of other agents, or that acts at least tend to be more morally serious than omissions—for my part, at least, I have yet to hear the utilitarian argument which robs those intuitions of their force.

  11. Thanks to everyone for the comments. Many of them are very illuminating. I’m sorry about not being able to take part more in the discussion; I have had a couple of hectic days. I wonder if some of the discussion has gone off on a sidetrack. Many comments bring the notion of morality into the picture. But I take it that morality is not supposed to be the point, but rather the existence of values or reasons – of any kind.
    I’m starting to realise that the crux of the argument is supposed to be whether we need values or distances in explaining the evolutionary chain that leads to my reaction of thinking that one of the beings is mistaken. Presumably distances are not needed. But I cannot see that anything has been said to show that values are not needed either. Street’s own explanation is that the kind of genes the surviving being had led to actions that tend to help survival. But that sounds like a really bad explanation, if it even is one. Explaining survival by a tendency to do actions that lead to survival sounds to me like the classic case of explaining the effects of a sleeping pill by its dormitive powers. Saying that pursuing what’s good for the being made it survive seems much more explanatory.

  12. “the analogy with ‘statistical realism’ casts some doubt on Alex’s suggestion that the evolutionary argument favors the utilitarian.”
    I should just add that I was being somewhat tongue-in-cheek! I think something like Street’s argument might help those of a consequentialist leaning deal with /some/ supposed counter-examples to their theories, but it is unlikely to solve them all.
