We are pleased to present the latest installment of Ethics at PEA Soup, in which we host a discussion of one article from each issue of Ethics. The article selected from Volume 123, issue 1, is Katarzyna de Lazari-Radek and Peter Singer's "The Objectivity of Ethics and the Unity of Practical Reason."  Ethics has kindly provided free access to the article here.  We are also very grateful to Roger Crisp for providing the critical précis, which begins below the fold.  Everyone is invited to join the discussion.


————–

In this bold and interesting paper, Katarzyna de Lazari-Radek and Peter Singer (LRS) claim that a Sidgwickian response to certain attempts to use evolutionary theory to debunk moral judgements can itself resolve Sidgwick’s own ‘dualism of practical reason’. I am persuaded by the argument against debunking, but I believe the Sidgwickian response can do less than LRS suggest.

LRS begin by outlining Sidgwick’s dualism between, on the one hand, an egoistic axiom, according to which each of us ought to aim at her own good, and, on the other, an axiom of universal benevolence, which requires us to aim impartially at the good of all. They then explain Sidgwick’s arguments against the view that, once we understand the origin of our moral intuitions, we will see them as caused by factors outside our control and hence as unreliable. First, an intuition’s being ‘self-evident’ – that is, such that understanding it is sufficient to justify it – is quite consistent with its being caused. Second, we do not even have to show that the causes in question are likely to lead to true judgements, since any such demand would lead us into a regress of justification. Finally, the causal judgements in question are within the domain of science, which does not extend to propositions concerning what we ought to do.

Sidgwick distinguishes this form of general scepticism about moral intuitions from more limited claims about particular ethical beliefs. Using this distinction, LRS first examine the general argument in more detail, and especially that form of it developed by Sharon Street. According to Street, moral realists, once they recognize that our evaluative attitudes have evolved, face an awkward dilemma. On the first horn, they accept that evolutionary forces have no tendency to select beings with objectively true evaluative attitudes, and so must draw the unpalatable conclusion that most of our evaluative judgements are unjustified. On the second, they claim that these forces were likely to select those able to grasp objective moral truths; but this claim goes against the most plausible scientific understanding of evolution, which sees it as heading in the direction of survival rather than truth.

Street suggests that, had we evolved to be more like, say, lions, we would have been readier to accept the killing of others’ offspring than we are. LRS note the echo here of Darwin’s suggestion that, were we like bees, we would think it a duty of a mother to kill her fertile daughter. Sidgwick responded that such arguments do not touch the abstract principle of utilitarianism, which allows for much variation in the rules of common morality. This, LRS plausibly claim, suggests that a modern Sidgwick, more informed than the real Sidgwick about the influence of evolution on morality, might readily impale himself on the first horn of Street’s dilemma, allowing that many of the rules of common-sense morality are not based on objective truth. (It is worth noting that reference to evolution might also enable the modern Sidgwick to avoid the somewhat implausible notion that the utility of common-sense morality suggests that human beings have been ‘unconscious utilitarians’.)

But, Street might object, if the principle of benevolence is objectively true, isn’t our arriving at it without any steer from evolution just a huge coincidence? LRS rightly point out that Sidgwick can offer a plausible explanation of how we understand such principles: we use our reason. And at this point he can embrace the second horn of Street’s dilemma. A rational capacity would advance success in reproduction, and it might do that most effectively in a general, ‘untargeted’ form, which would allow us to enquire into the foundations of mathematics or physics as well as to recognize self-evident moral truths.

LRS then turn from the general to the particular form of the sceptical argument. They cite Sidgwick’s claim that no theory of the origin of our ethical intuitions has been offered that might throw his own abstract principles into doubt by showing them to arise from sources likely to make them false. LRS suggest that this is still the case as far as universal benevolence is concerned, since the kinds of judgement most consistent with reproductive success will recommend helping one’s own children rather than complete strangers. Since LRS are going to use Sidgwick’s arguments about evolution in an attempt to resolve his dualism, it is worth noting that egoism is in as strong a position as universal benevolence in this context to resist debunking evolutionary arguments. What we would expect to evolve would indeed be something like kin altruism, which is neither egoistic nor impartially benevolent.

It is true, of course, that some concern for self might be expected to arise through evolutionary development, and LRS later approvingly cite Folke Tersman’s attempt to use this point to debunk my own defence of a principle of self-interest, according to which each of us has a reason (not necessarily overriding) to advance her own good. So it might be claimed that egoism is just a development of that bias towards the self, a development tainted by its source in non-rational evolutionary processes. I myself am not persuaded by Tersman’s argument, and would want to appeal to some of the very Sidgwickian counters to debunking arguments which LRS state earlier in their paper. Egoism, or the principle of self-interest, is justified by appeal to its self-evidence, and the conclusions of reflection upon it, though that reflection must be fully informed by an impartial grasp of evolutionary development, need not be overturned by that grasp. If it is pointed out to me that the reason I think that 7 + 5 = 12 is that my hunter-gatherer ancestors needed to develop some system for sharing out food at the end of the day, my belief will be unshaken. But note also that if Tersman’s point has any force, it applies equally to the principle of universal benevolence. We would expect evolution to produce some concern for others, and universal benevolence can be seen as an extension of that concern in the impartial direction in just the way that egoism might be taken to be an extension of concern for oneself in the direction of partiality.

Since self-evidence can withstand reflection on the origins of beliefs, even kin altruism can resist debunking. LRS cite – without questioning it – Sidgwick’s somewhat remarkable claim that it is ‘certainly not’ [my italics; LRS paraphrase as ‘not at all’] ‘self-evident that we owe more to our own children than to others whose happiness equally depends on our exertions’. I have little doubt that, were people to reflect properly on this conception of extreme impartiality, the vast majority would reject it. But that of course is not the issue, as LRS point out: ‘This is not to say that the judgment that we have greater obligation to help our own children than to help strangers cannot be justified, but rather that if it is to be justified, it needs a form of justification that does not start from the idea that because we strongly feel that it is right it must be true’. This is hard to deny; but it is a point that applies as much to universal benevolence, and indeed egoism, as it does to kin altruism.

In support of the principle of universal benevolence, LRS claim that it results from ‘a process of careful reflection that leads us to take, as Sidgwick puts it, “the point of view of the universe”’. This idea, they suggest, has been converged on by leading thinkers in various traditions, including the Judaic, Christian, Confucian, Hindu and Buddhist traditions. Nor is there any plausible evolutionary debunking argument against it. There are, then, three elements to establishing that an intuition has the greatest degree of reliability: (1) reflection; (2) agreement among careful thinkers; (3) lack of any debunking argument.

By the point of view of the universe, LRS seem to mean something considerably less rigorous than Sidgwick’s own utilitarian conception of pure impartiality, seeing it in the Judaeo-Christian tradition, for example, as the Golden Rule. And if the egoist is permitted the same degree of latitude, she can claim that her view meets each of these three conditions as effectively as the principle of universal benevolence. The Golden Rule itself seems to imply that it is no less rational to have concern for oneself as for others, and self-love can plausibly be claimed to play an important role in the other three traditions mentioned by LRS, as well, of course, as the ancient western philosophical tradition, where if anything egoism rather than universal benevolence plays the more important role. And there has been no more careful thinker on these matters than Sidgwick himself! Further, as we have seen, the mere fact that some principle is partial is not enough to debunk it, even if – as with kin altruism, and not with egoism – it lines up with evolutionary expectations.

LRS’s conclusion, then, is that, because impartial universal benevolence withstands evolutionary debunking arguments, whereas partial principles such as egoism do not, Sidgwick’s dualism can be resolved in favour of benevolence. I have suggested that egoism, and indeed kin altruism, can withstand reflection as effectively as the principle of universal benevolence. I agree with LRS in rejecting appeal to reflective equilibrium in ethics (something I remember Joseph Raz pointedly describing as ‘unreflective equilibrium’). What is required is just the kind of rational, informed, impartial reflection on ultimate ethical principles advocated, and often (though not always) engaged in, by Sidgwick. LRS are right too that Street’s evolutionary arguments are misdirected against moral realism. Such arguments might often be useful in debunking certain unreflective, spontaneous moral responses, such as a visceral disgust at incest or homosexuality. But, at least as far as current evolutionary theory is concerned, they are largely irrelevant to first-order normative ethics (which philosophers do you know who campaign against incest or homosexuality?), as indeed is the neurological evidence based on fMRI scans used elsewhere by Singer, Greene, and others to support the principle of universal benevolence (put Frances Kamm or Judith Thomson in a scanner, and their brains will – I’m willing to bet – light up in the same way as Singer’s or Greene’s when they’re asked to state their fundamental ethical principles). There are no quick fixes here.

It is somewhat remarkable that, having concluded the Methods in a state of such internal incoherence, Sidgwick appears to have done little to try to resolve it. It was the same with disagreement with others: like LRS, when stating his own first-order view he largely ignored its implications and focused on agreement, though his own discussion of intuitionism demonstrates a clear awareness of the threat to self-evidence posed by disagreement with epistemic peers. What is needed now in normative ethics is a general facing up to the existence of such interpersonal disagreement, and a non-dogmatic and co-operative attempt to make progress towards greater convergence. Indeed this may be an area in which evolutionary theory (along with neuroscience, history, anthropology, psychology …) turns out to have real purchase.

– Roger Crisp

Replies to “Ethics Discussions at PEA Soup: Katarzyna de Lazari-Radek and Peter Singer, ‘The Objectivity of Ethics and the Unity of Practical Reason’”

  1. Thanks for starting up the discussion of this excellent and very interesting article.
    I worry a little bit about the move from the capacity to reason (which includes the capacity to understand abstract principles) to the idea of also having an intuitive capacity to recognize the supposedly self-evident truth of some of these abstract propositions. I find it quite plausible that having the ability to make inferences, to think using abstract categories and principles, and, as Hobbes put it, “conceive of the consequence of the names of all the parts, to the name of the whole; or from the names of the whole and one part, to the name of the other part” might have a general adaptive advantage. But why add the further idea of also intuitively apprehending certain contents of these thoughts as intuitively true, rather than adding the further idea that some of these thoughts (or their contents) elicit like or dislike, pleasure or pain, or other kinds of affective responses? (This, incidentally, is, I think, how Sidgwick’s two main sources of utilitarian inspiration – Bentham and Mill – would have replied to this argument.) I worry, in other words, that de Lazari-Radek and Singer start with a very plausible premise into which they smuggle something that might not as plausibly be included in it. That said, I think, as already noted, that this is a very interesting contribution to this debate.

  2. Interesting article and well done Roger for the response. I just had one question of clarification.
    I wasn’t sure why the defenders of other moral theories could not say the same thing. In exactly the same way as the utilitarian principle of benevolence, Kantians, contractualists, virtue ethicists, Rossian pluralists, rule-consequentialists, and so on all offer an impartial justification for the central moral requirements.
    All these views, too, lead to the dualism of practical reason (well, I’m not sure about virtue ethics). Why can’t the defenders of these views also say that their view results from careful reflection from the point of view of the universe and that there are no debunking arguments to contest them? So, why is it that only utilitarianism can use the argument to solve the dualism of practical reason? What’s special and different about it compared to all other ethical theories in this respect?

  3. Hi Jussi, I wonder if Singer would respond to your question by referring to the empirical literature that he, in an earlier paper (“Ethics and Intuitions,” The Journal of Ethics, October 2005), took to show that those other kinds of basic intuitions are subject to debunking arguments. He probably still thinks that the other main views are open to debunking objections and that because the principle of universal benevolence (according to him) is not, it is better placed to solve the dualism of practical reason.

  4. Hi Sven,
    I cannot see how that could work. Imagine I am a Kantian. I think that my theory certainly has unintuitive consequences (the prohibition on lying to the murderer at the door, and so on). So, instead of justifying my theory on the basis of my basic intuitions (after all, these can be debunked), I give a transcendental argument for my theory on the basis of self-constitution. I take it this argument cannot be debunked on the basis of evolution. I can do the same for all other theories that have counter-intuitive consequences and for which I can give an argument that doesn’t begin from my debunked intuitions. So, I still cannot see how utilitarianism is any different from the other views in this respect.

  5. Hello again Jussi, I see: what you meant was that the underlying rationales behind the other theories are also not open to the debunking arguments. In the case of Kantian theory, for example, the underlying rationale is an argument about how to constitute ourselves as a certain kind of agent (and, presumably, the idea that we are for some reason committed to thinking of ourselves as such agents). And there is no clear debunking evolutionary explanation of why those who are convinced by such arguments are convinced by them. So why – you ask – can we not appeal to those basic arguments in the same way in which LRS (as Roger calls them) appeal to their arguments? If that is your point, I at first misunderstood it.

  6. Thanks to Roger Crisp for an excellent critical précis. Since I share many of Crisp’s worries about LRS’s article and since he has already expressed them better than I could have hoped to, I’ll comment instead on what I take to be a major shift in Peter Singer’s thinking.
    @Peter Singer: Going back as far as the early 1970s and as recently as 2009, you’ve held that although we are morally required to maximize utility, we are not rationally required to maximize utility. For instance, recently you’ve said: “I do not think, however, that to be rational we must take up and act from the utilitarian point of view—what the utilitarian philosopher Henry Sidgwick called ‘the point of view of the universe.’ I agree with Sidgwick that it is rational to care more about your own interests and the interests of those close to you than the interests of others” (Voorhoeve 2009, p. 61). But now you seem to conclude (at least, tentatively) that all normative “reasons for action are impartial” and “that when one of two possible acts would make things go impartially better, that is what we have decisive normative reason to do” (LRS, pp. 28 & 31). Am I right that this represents a major shift in your thinking? And to what extent do you think this shift undermines your earlier defenses against the demandingness objection to utilitarianism? Previously, you seemed to defuse the demandingness objection by pointing out that, unlike those critics who have pressed the demandingness objection, you did not hold that “if morality did demand that we give so much to famine relief [that is, as much as you claimed it demands in ‘Famine, Affluence, and Morality’], then there must be overriding reason to do so” (Singer in Jamieson 1999, pp. 308-309). But now it seems that you do. So what effect do you think that this shift in your thinking will have on your ability to respond to the demandingness objection?

  7. I agree with Jussi Suikkanen that LRS’s argument could work equally well with other first-order normative beliefs. I wonder if, on the other hand, one could not apply the argument to show that the principle of universal benevolence does not stand on such firm ground.
    Say we have an “intuition” that there are some independent normative truths (that withstand evolutionary debunking arguments). It could be asked, about that intuition, whether it
    1. results from “careful reflection leading to a conviction of self-evidence”;
    2. rests on, or yields, “independent agreement of other careful thinkers”; and
    3. might not be explained “as the outcome of an evolutionary or other non-truth-tracking process”.
    As to (1), insofar as it could be justified as a meta-ethical claim, the intuition is likely to be held by some thinkers as a result of “careful reflection”, and some “careful” thinkers will take it as self-evident. Granted, but it is equally plausible that the intuition is maintained only insofar as it is required for purposes of first-order normative discussion (see Parfit’s worries about wasting his life). If no *self-evident* higher-order belief is available to support the first-order normative beliefs, in what sense can the latter be self-evident?
    Then, as to (2), the least we can say is that not every thinker agrees today that there are such independent truths. Many will agree that in some sense normative claims are truth-apt, but far fewer hold that there are independent facts about which they are right (in the realist sense). If the independent agreement of any number (at least as great as 2) of “careful thinkers” is enough for the condition to be met, it’s a very weak condition indeed, and lots of crazy intuitions could meet it. The condition has strength only insofar as the first one holds.
    As to (3), while I don’t have specific suggestions, it can’t be ruled out that an intuition that there are such truths may have been adaptive at some point. Think, for instance, of the available evolutionary explanation of the origins of reasoning (Mercier and Sperber 2011).
    Hence, if the meta-ethical belief that there’s a truth which the principle of universal benevolence might track is itself less justified on those conditions than the principle is alleged to be, LRS may have been too quick to conclude that Sidgwick’s dualism can be resolved so easily, and in the direction they hoped it could.
    Of course, the objection is only grounded on a meta-ethical skepticism that the authors are free to reject on independent grounds. If so, their argument is not as weakly supported as I’ve argued. But they don’t seem to have answered all possible challenges to moral realism, their responses to Street’s dilemma notwithstanding. For meeting evolutionary debunking challenges (supposing they did) is not enough to answer all possible non-realist challenges. Finally, they could answer that most non-realist meta-ethical theories are consistent with the principle of universal benevolence. But what they’re not necessarily consistent with is that it is an objective and independent principle.

  8. Thanks to Katarzyna de Lazari-Radek and Peter Singer for an interesting article, and to Roger Crisp for illuminating commentary. Let B be a belief of ours — such as our belief in the principle of universal benevolence — for which there is no evolutionary explanation. LRS suggest that the lack of an evolutionary explanation of B shows that B cannot be “debunked”. If there is no evolutionary explanation of B, then a fortiori there is no such “debunking” explanation. But there might, of course, still be a non-evolutionary “debunking” explanation of B. After all, there must surely be a causal explanation of B. When B is moral, one might suspect that any such explanation will be “debunking”.
    If memory serves, Street makes just this point. But it is hard to assess absent a clear sense of what a “debunking” (or “non-truth-tracking”) explanation of a belief, B, is supposed to be. The obvious proposal is roughly this. A causal explanation of B is “debunking” just in case it fails to assume that B is (probably) true. If this analysis were correct, then, granted the explanatory impotence of moral hypotheses, any moral belief, B, would admit of a debunking explanation, whether or not it admitted of an evolutionary explanation. But the analysis is doubtful. First, it suggests that a “vindicating” (or “truth-tracking”) explanation of B would be an explanation that assumed that B was true. But, if that were so, then it would be trivial to “vindicate” our reliability about (first-order) logic, since its theorems are entailed by any explanation at all. Second, it suggests that B could be debunked even if it could be shown (to the debunker’s satisfaction) that B could not easily have been false. Suppose that the content of B is a proposition whose falsity is not just impossible, but unintelligible. Suppose, also, that, as a matter of natural law, B could not easily have been different. Then it is very hard to see how B could be irrational to hold, whether or not its (probable) truth was assumed by its causal explanation.
    To sum up: it is not clear whether all of our moral beliefs admit of a debunking explanation because it is not clear whether those that fail to admit of an evolutionary explanation still admit of some other debunking explanation. This is because it is not clear what a debunking explanation is supposed to be.

  9. I am not yet convinced that Prof. Crisp has a sound reply to LRS’s attack. I think there are interesting issues to discuss here and would like to hear what others think and why.
    First, Crisp seems to deny that the mere existence of a plausible evolutionary explanation casts doubt on the reliability of a belief. Specifically, he seems to think it casts no such doubt if awareness of the explanation does not undercut our sense that the belief (or its content) is self-evident. He illustrates with a case involving a simple mathematical belief. It is interesting to note in this regard that LRS talk about higher-order mathematical beliefs as the ones that we can probably vindicate, but my general question here is about what others think of Crisp’s position. Do I have his view right? And, if so, how can we adjudicate the disagreement between Crisp and LRS on the epistemic significance of a plausible evolutionary explanation (in a case where awareness of this does not undercut the apparent self-evidence)?
    Second, Crisp claims that, “if Tersman’s point has any force, it applies equally to the principle of universal benevolence. We would expect evolution to produce some concern for others, and universal benevolence can be seen as an extension of that concern in the impartial direction in just the way that egoism might be taken to be an extension of concern for oneself in the direction of partiality.”
    Here, I think we need to hear more from Crisp. LRS are careful to claim that a rational concern for coherence and generality cannot get us from concern for some others to universal benevolence, while it can get us from selfish concern to a general principle of self-interest. This is the upshot of their siding with Mackie against Hare in section IV. I take it that this is the move that is supposed to block applying Tersman’s point to universal benevolence in the way Crisp suggests, and I would like to hear more about why Crisp thinks this part of their argument fails.
    The sticking point here is that LRS think that the principle of self-interest can be explained as a *rational* extension of evolutionarily produced self-interest, and they argue (contra Hare) that universal benevolence cannot be explained as a rational extension of evolutionarily produced concern for kin or members of one’s own group. Assuming that Crisp shares their pessimism about Hare’s project, I am not sure how he can respond.

  10. LRS’s paper is a fantastic contribution to discussions of evolutionary debunking. Here are a few comments on LRS’s byproduct explanation of our capacity to grasp the (independent) truth of Sidgwick’s axiom.
    In discussions of our cognitive capacities, byproduct claims are often tossed about: “Such-and-such cognitive capacity is a byproduct of our big brains.” Making such a claim is one thing; backing it up is quite another. Byproduct explanations, if we are to take them seriously, should generally be held to rigorous standards of assessment, as rigorous as the standards applied to selection-based explanations, all the more so if the byproduct capacity invoked in the explanation is contentious and the explanandum (in this case, a basic moral belief) is susceptible to alternative explanations (e.g., evolution-based and/or socialization-based explanations).
    LRS’s byproduct claim is this: our capacity to grasp the (independent) truth of Sidgwick’s axiom of universal benevolence emerged as a byproduct of our capacity to reason. This byproduct capacity would apparently detract from fitness, since being disposed to universally benevolent behavior would detract from fitness. LRS would agree, since they cite Dawkins, R. Alexander, Sober/Wilson in support of the claim that strong selection pressures would tend to eliminate universal benevolence. So if our byproduct capacity detracts from fitness, LRS incur the burden of explaining why it persisted over generational time and was not eliminated or significantly altered, despite the selection pressures against it.
    LRS anticipate this objection and argue that our byproduct capacity to grasp basic (independent) moral truths is tied to our capacity to reason in such a way that it could not be eliminated without eliminating our other rational capacities. In other words, once the capacity emerged as a byproduct, we were stuck with it and there was no feasible evolutionary way out.
    LRS write:
    “Either we have a capacity to reason that includes the capacity to do advanced physics and mathematics and to grasp objective moral truths, or we have a much more limited capacity to reason that lacks not only these abilities, but others that confer an overriding evolutionary advantage. If reason is a unity of this kind, having the package would have been more conducive to survival and reproduction.”
    But why suppose with LRS that our rational capacities are structurally tied to one another in this strong sense? What about our capacity to recognize basic (independent) moral truths structurally ties it to our capacities to do advanced mathematics and physics? Absent an explanation, it does not seem to me that LRS have discharged their explanatory burden.
    An additional explanatory burden LRS incur is to explain (or support the existence of) the causal relation between the byproduct capacity to grasp moral truth and the underlying capacity to reason. LRS do not make it clear how and why this byproduct capacity emerged in the first place. As with any byproduct explanation, there must be a close causal connection here, but LRS do not make clear what the nature of the causal connection is. Without some explanation, it is difficult to assign the causal claim credibility.
    Our capacities to do advanced mathematics and physics are obviously byproducts, even if we cannot specify the close causal connection between them and their underlying capacities. This is arguably so because we obviously have these capacities and they obviously arose too late in the evolutionary game to have been the product of selection pressures. But the capacity to grasp basic (independent) moral truths is not a capacity that obviously exists—this makes for an important disanalogy between it and our capacities to do advanced mathematics and physics. Nor is this capacity obviously of recent provenance in our evolutionary development (there is lively debate about the dating and origins of our normative capacity). When an alleged capacity’s existence is contentious, it seems the claim that it is a byproduct (or an adaptation) demands significant empirical and explanatory backing, of the sort requested in my comments.

  11. A response to Roger Crisp.
    We thank Roger Crisp for his helpful introduction to our article, and for starting the debate about it.
    Roger agrees with us that Sidgwick’s principle of universal benevolence is not vulnerable to debunking based on an understanding of our evolutionary origins, but he thinks that this is also true of the principle of egoism. Hence he denies that we have solved the problem posed by Sidgwick’s dualism of practical reason.
    As Roger notes, we proposed three conditions for establishing that an intuition has the highest possible degree of reliability: reflection leading to a conviction of self-evidence, independent agreement of careful thinkers, and the absence of a plausible explanation of the intuition as the outcome of an evolutionary or other non-truth-tracking process. On the basis of these principles we rejected egoism and supported universal benevolence. Roger argues that egoism can meet all of these conditions. We think that it fails the third condition, because we think that evolution provides a plausible explanation of our intuition that we ought to act in our own interests. Roger thinks, however, that what evolution would lead us toward is neither egoism nor universal benevolence, but “something like kin altruism, which is neither egoistic nor impartially benevolent.” The relation between universal benevolence and what evolution would lead us toward is, in Roger’s view, the same as the relation between egoism and what evolution would lead us toward. Roger seems to be saying, if we understand him correctly, that they are roughly equally close to (or equally distant from) a principle based on kin altruism, and therefore either both of them are susceptible to an evolutionary explanation, or neither of them is.
    We do not agree that egoism and universal benevolence are in this sense equidistant from what evolution could lead us towards. Evolution explains altruism towards kin by seeing it as promoting the survival of the genes we carry – whether in our offspring, or in siblings or other kin. We ourselves, however, are not excluded from this evolutionary account – indeed we are central to it. As long as we live, and can continue to reproduce, we can promote the survival of our genes, either by reproducing or by enhancing the survival prospects of our genetic relatives. Hence evolution readily explains why we should care about our own interests, in survival, in reproduction, and in the survival and reproductive prospects of our relatives. But how can evolution explain altruism towards distant and unrelated members of our species? How can it explain altruism that goes even beyond the boundary of our own species, towards sentient members of other species (as Sidgwick insisted was implied by utilitarianism, which is based on the principle of universal benevolence)? We do not see how it can.
    Regarding the condition of agreement by other careful thinkers, Roger suggests that, by referring to widespread support for the Golden Rule as evidence of agreement with the principle of universal benevolence, we are in some way loosening Sidgwick’s principle of pure impartiality. The Golden Rule, he points out, “seems to imply that it is no less rational to have concern for oneself as for others.” But if having concern for oneself as for others means that one is permitted to give one’s own interests equal weight with those of anyone else, then that is also implied by the principle of universal benevolence. It does not require altruism in the sense of giving more weight to the interests of any other individual than one gives to one’s own interests. So we cannot see that the Golden Rule is in any way less purely impartial than the principle of universal benevolence.
    We do not deny, of course, that there is general agreement that it is rational to do what is in one’s own interests. Our point is rather that this agreement merely reflects our evolved nature, and so should not be taken as evidence that it really is rational to act in accordance with one’s own interests, where this involves giving more weight to one’s own interests, merely because they are one’s own, than to the interests of others. We do not, however, deny that most humans are much more strongly motivated to do what is in their own interests than to do what is in the interests of strangers. Our claims about what it is rational to do are claims about normative reasons for action, not motivating reasons.
    Thanks to everyone else who has commented. We will respond to other comments shortly.

  12. @Doug Portmore: You are right, there has been a major shift in my thinking over the past three years. Your comment focuses on the point about the dualism of practical reason that Katarzyna and I make in the article under discussion, but the change in my thinking goes further than that. As you may know, early in my career I was a non-cognitivist, supporting Hare’s universal prescriptivism, but for many years I have had difficulty in reconciling it with the role that I believe reason plays in ethics. (My ambivalence about this is evident in The Expanding Circle). It is only quite recently, however, that I have been prepared to say that I am no longer a non-cognitivist, but an ethical objectivist. (In that, of course, I agree with Sidgwick). My commitment to preference utilitarianism derived from Hare’s universal prescriptivism, so abandoning that opens up other possibilities, and I am now seriously considering whether there are stronger grounds for supporting hedonistic utilitarianism. That is a topic that I will explore in a book that I am currently writing with Katarzyna de Lazari-Radek, in which we draw on Sidgwick to defend utilitarianism. The paper we are here discussing is part of that work, but the book itself is still in progress.
    Parfit’s On What Matters has been a significant influence in all of this, and of particular relevance for the questions you raise in your comment is the distinction that he and others have used between normative and motivating reasons. It is only when this distinction is clearly made that it is possible to defend the view that Katarzyna and I are now defending, that normative reason is impartial. Most people will still, of course, have motivating reasons that are partial, but that is a reflection of the fact that we are not purely rational beings.
    I should also mention Tom Nagel’s review in The New York Review of Books of The Life You Can Save and Peter Singer Under Fire, which concluded by suggesting that deep down I believe that we all have decisive reason to give equal consideration to the interests of all. That may have helped to nudge me over the edge on this question. Nagel’s concluding comments are also relevant to the issue that you raise, of the demandingness of morality. A full answer will have to wait for the book I have mentioned, but in brief, I do now hold that we have overriding normative reason to do as morality requires. I’m not sure how much this changes things. Since most people frequently do what is contrary to the best normative reasons, judging someone to be acting contrary to reason, in this sense, is not the same as judging a person to be irrational in the sense of, for instance, not doing what will best satisfy the preferences that she has, after careful reflection, decided are most important to her.

  13. @Sven Nyholm: You ask how we can be sure that the capacity to recognize abstract moral truths is one and the same as the capacity to make the kind of inferences that would have helped our ancestors to survive. It’s a good question, and we don’t have a ready answer. To answer it properly would require a larger investigation of the nature of reason. So at this stage, we take it as a plausible hypothesis that the two capacities are different aspects of a single faculty of reason. We would welcome other suggestions on this question.
    @Jussi Suikkanen: Our aims in this paper are to defend ethical objectivism and to show that reason is impartial. We are not seeking to show that utilitarianism is the best normative theory. So we agree that other impartial normative theories may survive our evolutionary debunking argument. See this passage from p.27 of our article, which even refers, as you do, to the view that lying is wrong:
    “We have argued that Sidgwick’s axiom of universal benevolence passes this test, but we are not claiming that it is the only principle to do so. Other principles, including deontological principles, might be equally impartial—for instance, the principle that lying is wrong, whether one is lying to strangers or to members of one’s own community. Ethical principles of respect for human rights might also be thought to be impartial in the same way, but to be fully impartial, they would need to be freed from any specific association with members of our species and instead to be reformulated as rights that are possessed by all beings with certain capacities or characteristics.”
    Sven suggested that we might follow the line that one of us took in the earlier “Ethics and Intuitions” article, but that argument only works for theories based on commonsense intuitions, like the intuition that it would be permissible to save five lives at the cost of one life by diverting the trolley down the side-track, but not by pushing the heavy stranger off the bridge.

  14. Hi LRS
    thanks for this clarification. The passage on page 27 is helpful – I’d overlooked its significance. I did find this slightly misleading given the emphasis on the principle of benevolence throughout the paper. Even the conclusion is that people can give up the aim of reflective equilibrium and accept that they have decisive reasons to make things impartially better. But, given your admission, they may just as well do any number of other things just as long as these are impartial.
    Just one additional quick question, if I may. Some ethical theories, such as rule-consequentialism or contractualism, give impartial justification for requirements of partiality. So, the idea is that my requirement to help the nearest and dearest is not justified by moral intuitions but rather by the fact that requirements of this kind follow from principles that are impartially best or not reasonably rejectable.
    What do you make of these theories? If they offer impartial justification for partial demands and are therefore not based on intuitions we can debunk, is your argument neutral about these too? Or, are these cases in which moral theories that are partial on the first-order level but impartial at the higher order level of justification can be debunked too on the basis of evolutionary explanations?

  15. Thanks to all for a very interesting discussion so far.
    @Brad Cokelet: What I wanted to claim was that a claim might withstand reflection on the fact that there is a plausible evolutionary explanation for it. The fact that there is such an explanation certainly needs to be taken into account in reflection. I suppose in a sense such a fact ‘casts doubt’ initially on the claim. But the claim may survive reflection in such a way that any such doubt has been dispelled.
    On the analogy I drew between universal benevolence and egoism as ‘extensions’ from the views provided by evolution. I am aware that LRS deny that analogy, but I fail to understand why siding with Mackie against Hare on universalizability helps them to do that. Mackie’s first stage of universalizability involves denying the moral relevance of indexicals and proper names, and this denial is neutral between universal benevolence and egoism. My thought was this. Evolution might produce a being that cares for certain other beings (its kin). If it provides that being with reason, then the being may conclude that the distinction between kin and strangers is morally irrelevant, and so adopt universal benevolence. Likewise, evolution might produce a being that cares for its own well-being along with that of certain others. If it provides that being with reason, then the being may conclude that the distinction between itself and others is morally relevant, and so adopt egoism. I’m saying not that either of these extensions is reasonable, just that they’re on a par.
    @Matthew Braddock: There’s a long tradition in ethics of analogies between ethical and mathematical truths, according to which they are both e.g. synthetic and a priori. LRS could draw on that tradition to support their claim that it is the same capacity that enables us to grasp those truths. A good deal of what is claimed in this tradition is independent of any assumptions about e.g. the dating of the emergence of the capacity in any particular sphere.
    @LRS: I agree, of course, that evolution can explain why we care about our own interests. What it can’t explain is why (if we are egoists) we don’t care about the interests of anyone else, including our kin. Likewise, it can explain why we care about others. What it can’t explain is why (if we advocate universal benevolence) we care about the interests of strangers.
    The Golden Rule seems to me to differ from Sidgwick’s impartial benevolence in the following way. Sidgwick requires me to attach no more weight to my own interests than the interests of others. The Golden Rule requires me to treat others in the way that I would like to be treated. An egoist who (perhaps like Plato’s Callicles) saw a world of egoists as ideal might meet this requirement. And lots of more everyday moral views could meet it without accepting Sidgwickian impartiality. It is hard to believe that the Golden Rule in the various traditions mentioned by LRS has to be understood as involving a commitment to utilitarian impartiality.

  16. @Roger Crisp: Thanks for the clarification on the first point.
    On the two extensions issue. I think LRS argue as follows. Evolution can simply explain some intuitions, and those are suspect (because they can be so explained). Further (type-two) intuitions are more general and less arbitrary versions of these starting ones. These may be generated by reason, but the fact that they can be interpreted as more general and less arbitrary versions of basic evolutionary ones casts doubt on their reliability; LRS say that “if a starting point can be debunked, it cannot lend support to a more general or less arbitrary version of itself”. Finally, there are intuitions of a third type which cannot be explained as simple products of evolution or as mere generalized versions of those products. These are reliable because they are not evolutionarily tainted; we might say they are products of autonomous reason.
    Given this background, they argue that the general principle of SI is a type-two intuition; evolution makes it intuitive that we have reason to pursue our own good, and then Mackie-style universalization shows that general SI is just a more general and less arbitrary version of that starting point.
    You point out (in response to me) that the intuitive appeal of the principle of general SI might have another explanation – it might be a type-three intuition that results when we use reason to substantively augment, and not just generalize, evolutionarily produced kinship intuitions/emotions. LRS say nothing to rule this possibility out, but I am not seeing you ruling out their debunking explanation either. So perhaps the argument comes down to whether there is empirical evidence that your proposed etiology is more likely than theirs? Or do you think their explanation of general SI as a type-two intuition is suspect because they can’t show (by appeal to Mackie) that the general principle of SI is merely a more general and less arbitrary version of the thought that I have reason to pursue my own good?

  17. @LRS: Can you say a bit more about the claim that the extension from thinking that I have reason to pursue my own good to the principle of general SI is a “modest extension” that is “inherent in the very concept of what it is to have a reason” (25)? I am wondering what concept of having a reason you have in mind and how it supports the claim.

  18. What an excellent discussion of a stimulating paper. I have something to add to the mix that perhaps builds a bit on Crisp’s and Clarke-Doane’s comments.
    I worry about the details of the debunking explanations here, which are certainly crucial. Tersman’s debunking of Crisp’s Self-Interest Principle (SI) is helpful, as it tries to spell some details out. However, I don’t see how it fully gets to the kind of principle that is the focus of LRS’s paper. SI only states: “any agent has a reason to do what makes her life go better….” (p. 23). This is rather weak in that it only provides a reason. Yet the “axiom of rational egoism” says: “each of us ought to aim at her or his own good on the whole” (p. 10). This axiom is presumably a claim about what is rational—something like a judgment about what one ought to do—not just the identification of one reason out there that may well be easily overridden. Reasons are cheap.
    (Note 40 discusses whether Sidgwick’s axiom of Prudence “expresses the idea of egoism.” I’m not sure how this connects up exactly with the problem which started the paper in terms of the axiom of rational egoism. But perhaps there is wiggle room here to be clarified?)
    This seems important because there isn’t much of a profound problem if two axioms simply identify the existence of different reasons. The real clash presumably comes when two principles require incompatible courses of action.
    But then it isn’t clear in the debunking explanation how to get from SI to the stronger axiom. Even if we buy something like Tersman’s debunking explanation of SI, how do we get evolutionary processes strongly influencing anything like the (alleged) belief that “each of us ought to aim at her or his own good on the whole”? We can’t just start from the apparently intuitive SI and then generalize. Perhaps there are some steps that could close the gap, but then the worry may be that the problem with the belief in the axiom is in one of the dubious steps in the cognitive process, so that the problem isn’t an evolutionary one.
    More importantly, this raises a (perhaps more fundamental) worry that the resulting explanation won’t debunk at all. I take the point that “the fact that a cognitive process is involved in the formation of an intuition does not show that the intuition cannot be debunked” (p. 23). The idea seems to be that these cognitive processes are taking as input (potential) “garbage”—namely, “the conviction that we have a reason to act self-interestedly” (as Tersman puts it). Garbage in, garbage out. But I don’t see how this is garbage. Presumably the relevant conviction shouldn’t be the target of debunking in these debates, if by “reason” here we just mean a consideration for or against the act in question. What is meant to be debunked are stronger principles, like the axioms, which don’t just make the modest claim that we have some reason. Such weaker claims can arguably be defended on purely rational grounds in something like the way LRS think universal benevolence can (although this hasn’t been fully fleshed out, as far as I can tell).
    We could arrive at such a claim through something like the following principle, which doesn’t start from self-interest: “If A-ing makes S happy, then this is a consideration that counts in favor of doing A.” At any rate, it doesn’t seem difficult to cook up some principle or line of reasoning here that leads to something like SI but looks a lot like what apparently leads to universal benevolence in that it’s due to something like our reasoning faculties. (Note that I don’t mean to ultimately defend the axiom of egoism; I only suggest it be debunked on non-evolutionary grounds.)
    In short, there seems to be an important difference between establishing the mere existence of reasons and the truth of ought claims. Yet this could lead to some problems in linking the debunking explanations with the axioms that yield Sidgwick’s profoundest problem.

  22. It seems to me that it would be quite easy to produce an evolutionary origin story for impartial benevolence. Policies like this are the most successful, in terms of an individual’s reproductive fitness, in evolutionary social games involving indirect reciprocity.

  23. @Peter Singer: Thanks for confirming this change in your views. And I’m very much looking forward to the book that you two are working on.
    At the end of your response to me, you wonder whether this change in your views matters much. For, as you say, “Since most people frequently do what is contrary to the best normative reasons, judging someone to be acting contrary to reason, in this sense, is not the same as judging a person to be irrational in the sense of, for instance, not doing what will best satisfy the preferences that she has, after careful reflection, decided are most important to her.”
    Well, admittedly, how people will behave won’t change. But you have changed your mind about how people ought, all things considered, to behave. That seems very significant to me. Before, you held that people morally ought to give most of their incomes away to charities that would help distant needy strangers, but you conceded that if some people cared more about the interests of themselves and their loved ones than they did about the interests of distant needy strangers, then giving most of their income away to such charities would not be what they ought to do, all things considered. Moreover, you held that people did not have decisive reason to care just as much about the interests of distant needy strangers as they did about the interests of themselves and their loved ones. But now you would claim, I take it, that people do have decisive normative reason to care just as much about the interests of distant needy strangers as they do about the interests of themselves and their loved ones.
    This means that those who fail to give almost all their income away to such charities (and I take it that includes yourself) are either ignorant of the relevant reason-providing facts or are failing to respond to these facts in a rationally appropriate way. After all, on reflection, you can discern that you ought not to care more about yourself and your loved ones. And your cares/desires are attitudes that are sensitive to your judgments about reasons — at least, insofar as you are rational they are. So aren’t you committed to saying that you are being irrational in not caring as much about distant needy strangers as you do about yourself and your loved ones? For even those who draw a distinction between normative reasons and motivating reasons usually claim that there is something like the following necessary connection between them: if we have decisive normative reason to phi, then if we knew the relevant facts, and were fully substantively rational (having all the desires and other attitudes that we have decisive normative reason to have), we would be moved to phi. Presumably, you’re not ignorant of the relevant facts. So you must be substantively irrational, failing to have the cares/desires that you ought to have.

  24. @Justin Clarke-Doane: after proposing a plausible analysis of what a debunking explanation might be, you then reject this analysis, on the grounds that “it suggests that a “vindicating” (or “truth-tracking”) explanation of B would be an explanation that assumed that B was true.” But we don’t see that that follows. It might be an explanation that allows for the possibility that we believe B because B is true, without assuming that B really is true. If B could be true, and we have no better explanation of why we believe B, then we do not have a debunking explanation of it.
    A counter-argument to this claim could be based on what Parfit refers to as “the causal objection” – that is, the view that all of our beliefs must have a causal explanation, and that reason or normative properties cannot be causes. (See On What Matters, ch. 32) We are not sure what to say about this, so we’ll just say that it is too big a topic to go into here.
    @Roger Crisp: OK, we grant that Sidgwick’s principle of impartial benevolence is more demanding than the Golden Rule. At least on common interpretations of the GR, its practical outcomes are closer to common sense morality than fully impartial utilitarianism.
    @Matthew Braddock: The point you make about the need to show that our rational capacities are strongly tied to one another is similar to the point that Sven Nyholm made, and to which we have now responded, above. Related to this is your demand for “significant empirical and explanatory backing” for our byproduct claim. The empirical backing we have is admittedly very limited (see fn 22 on p.16). We would welcome more, if anyone has any ideas. As for explanatory backing, we suggest on p.26 that our account explains some facts about our understanding of morality but of course there are alternative explanations of these phenomena as well, so probably this backing is not as significant as you would like. To that extent, our suggestion cannot be proven to be right; it remains a hypothesis, but we hope a plausible one.
    @Brad Cokelet: in saying that this modest extension is inherent in the very concept of what it is to have a reason, we meant that if I maintain that I have a reason to do A in circumstances C, then I cannot intelligibly deny that if you are in the same circumstances, you also have a reason to do A. The idea of “reason” is universal in that minimal sense.
    @Josh May: You’re right to point to the difference between claims that someone has a reason to do what is in her own interests, and the principle that one ought to do what is in one’s own interest. But we don’t need to debunk the former claim, because on our view – and Sidgwick’s – everyone has a reason to do what is in the interests of every sentient being (including oneself). This is true of the principle that you mention: “If A-ing makes S happy, then this is a consideration that counts in favor of doing A.” We agree, and do not wish to debunk this principle. So the only principle we need to debunk is the stronger one that the fact that something is in my interests gives me an overriding reason to do it, or at least, a reason that overrides the fact that it is in the interests of a stranger, even when the interests of the stranger are more seriously affected than mine, or there are many strangers whose interests are affected as much as mine, etc.
    @David Duffy: Can you elaborate? We are not sure that we understand what you have in mind.

  25. @Doug Portmore: You ask if it follows from my current position – and my admitted failure to give as much as I ought to give to those in much greater need than I am – that I am being irrational. You define being “fully substantively rational” as “having all the desires and other attitudes that we have decisive normative reason to have.”
    I am not sure if it follows that I am irrational, in this sense. The reason I hesitate to accept that I am irrational is that, given my present nature, and my situation, I am not sure if I could do more good by changing my desires so that I cared as much for strangers as for my own children, etc. Yes, I would then give away more, but I might also cease to be effective at anything I do. (I’ve seen this kind of “burn out” happen to people who dedicated themselves very intensely to a cause for several months, or even a year or two, but could not stay in it for the long haul). And maybe I would also be less effective at persuading other people to live more ethical lives.
    But you may well think that this is special pleading, a weak excuse for not doing what I have most normative reason to do. And it may be. I don’t know.

  26. @Peter Singer: If changing your desires so that you cared as much for strangers as you do for your own children would result in your doing less good, then I would think that you would have decisive reason not to change your desires in this way. And if it’s not possible for you to be motivated to give all but the income you need to keep generating more income to charity without changing your desires in a way that you have decisive reason not to change them, then it seems to me that your giving that much to charity is not an option for you in the sense in which ‘S ought to do X’ implies that ‘Doing X is an option for S’. So perhaps you should not admit that you’re failing to do all that you’re morally required to do with respect to giving to charity.
    In any case, it seems to me that these four are incompatible (where n% is the percentage of your income that you are currently donating to charity):
    (1) you are morally required to give more than n% of your income to charity,
    (2) if you are morally required to give more than n% of your income to charity, then you have decisive/overriding normative reason to do so,
    (3) if you have decisive/overriding normative reason to give more than n% of your income to charity and you do not do so, then your failure to do so is either the result of your ignorance of some relevant fact or the result of substantive irrationality on your part, and
    (4) your failure to give more than n% of your income to charity is neither the result of your ignorance of some relevant fact nor the result of substantive irrationality on your part.
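    To make the incompatibility explicit (the schematic letters here are mine, and I add only the background fact, true by the definition of n, that you do not give more than n%):
    \[
    \begin{aligned}
    &R:\ \text{you are required to give more than } n\% \qquad D:\ \text{you have decisive reason to do so}\\
    &G:\ \text{you give more than } n\% \qquad I:\ \text{ignorance of a relevant fact} \qquad S:\ \text{substantive irrationality}\\
    &(1)\ R \qquad (2)\ R \rightarrow D \qquad (3)\ (D \land \neg G) \rightarrow (I \lor S) \qquad (4)\ \neg I \land \neg S\\
    &\text{Given } \neg G\text{: (1) and (2) yield } D\text{; then (3) yields } I \lor S\text{, contradicting (4).}
    \end{aligned}
    \]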
    I wonder which of these four you deny. Of course, you’ve already been incredibly generous with your time. And so if you don’t have time to respond, I understand. Thanks for your interesting and thoughtful responses.

  27. I would think that the possibility of a debunking explanation is the least of the benevolence principle’s worries. The real issue is that it fails the first two of Sidgwick’s criteria for reliability. The principle is not self-evident and there is nothing close to agreement from other careful thinkers about its truth. After all, this is a principle that forbids us to favor the welfare of our children over the welfare of strangers. The vast majority of human beings would reject that principle as deeply immoral. Yet somehow we’re supposed to regard it as self-evident?
    Now you might say that most human beings reject the principle because of kin selection etc. and maybe that’s right. Maybe principles favoring the welfare of kin over strangers are vulnerable to debunking explanations. But from that it doesn’t follow that the universal benevolence principle gets the exalted status of self-evidence. Yes, some “leading thinkers of distinct traditions have independently reached a similar principle” but many other leading thinkers have not. And contra LRS, I believe very few regard it as “the essence of morality”–and those who do are steeped in culturally specific traditions that emphasize impartiality norms. (Of course, that’s an empirical question.)
    Perhaps this objection goes beyond the scope of what LRS were trying to accomplish in their essay, which (I agree) is an excellent contribution to the important topic of evolutionary debunking explanations in ethics. But I’m surprised that even critics of the paper seem to concede that the impartiality principle can pass the first two conditions in Sidgwick’s test. We can’t just assume that, can we?

  28. Just one point of clarification: I’m not challenging the self-evidence of a principle requiring us to consider the welfare of other people. I’m challenging the self-evidence of a principle that requires us to consider the welfare of other people equally. The latter is the benevolence principle as I understand it.

  29. It’s really pleasing to see this fruitful discussion do so much service to LRS’s remarkable piece. I hope I can contribute something. I’m in sympathy with Roger Crisp’s worry that if an evolutionary debunking argument discredits his Self-Interest Principle (“SI”), then such an argument is equally threatening to Sidgwick’s principle of universal benevolence. LRS argue that the SI is vulnerable to evolutionary debunking, because it can be derived by means of a Mackie-style “first-stage universalization” from a “contaminated starting point”–i.e., a specific psychological disposition that was selected for by evolutionary forces (cf. LRS 24 – 25). (The “contaminated starting point” for SI would presumably be an egoistic concern to promote one’s own interests.) But LRS also claim that the principle of universal benevolence cannot be similarly arrived at through first-stage universalization from an evolved disposition. (A “contaminated starting point” for universal benevolence might be a limited altruistic concern for one’s kith and kin.) Now, Mackie’s first-stage universalization involves recognizing “the irrelevance of numerical differences,” such as indexicals and proper names. LRS suggest that this sort of universalization is “inherent in the very concept of what it is to have a reason” (LRS 25). At the same time, the authors deny that first-stage universalization obliges us to regard differences between the interests of our own kith and kin and the interests of distant strangers as irrelevant (LRS 24).
    It seems to me, however, that first-stage universalization is precisely the move that Sidgwick makes when he tries to establish the self-evidence of universal benevolence. As I read Sidgwick, both the principles of prudence and universal benevolence are apprehended through “the consideration of the similar parts of a Mathematical or Quantitative Whole” (Sidgwick, METHODS OF ETHICS, 381). For instance, (according to Sidgwick) a mere difference in timing doesn’t make one moment in your existence more important than another, similar moment in the whole of your life. Likewise, a mere difference in the identities of individuals does not make one person’s good more important than the similar good of another person. In the latter instance, Sidgwick considers the similar goods of different individuals to be similar parts of a “whole,” where the “whole” in question is “the notion of Universal Good by comparison and integration of the goods of all individual human…existences” (Sidgwick, METHODS, 382).
    I fail to see the difference between Mackie’s first-stage universalization and Sidgwick’s consideration of similar parts of a whole. It seems to me that both turns of thought involve the rejection of numerical differences as irrelevant. Only for Sidgwick, arbitrary numerical differences include differences in the identities of individuals who experience similar increments of well-being. LRS tell us that the principle of benevolence, unlike SI, rests on a “substantive claim” (LRS, 25). I am not sure what they mean by this. And even if Sidgwick’s principle of benevolence were based on a substantive claim, how would that show that it isn’t derived from a contaminated starting point?

  30. Hi, LRS. Thanks for the response. My concern is this.
    According to the obvious analysis, the causal explanation of our belief, B, is debunking if it fails to assume the (probable) truth of B. You deny that our belief in the principle of universal benevolence (PUB) admits of a debunking causal explanation, whether evolutionary or not. Hence, you deny that the causal explanation of our belief in PUB fails to assume the (probable) truth of that belief. But surely you concede that our belief in PUB admits of some causal explanation. Thus, you hold that there is a causal explanation of our belief in PUB, and, moreover, that explanation assumes the truth of our belief in PUB.
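    Schematically (the notation here is mine): write Explains(E, B) for “E causally explains our belief B” and Assumes(E, B) for “E assumes the (probable) truth of B.” Then the analysis and the inference just given are:
    \[
    \begin{aligned}
    &\mathrm{Debunk}(E,B) \;\leftrightarrow\; \mathrm{Explains}(E,B) \land \neg\,\mathrm{Assumes}(E,B)\\
    &\neg\exists E\,\mathrm{Debunk}(E,\mathrm{PUB}),\;\; \exists E\,\mathrm{Explains}(E,\mathrm{PUB}) \;\vdash\; \exists E\,\bigl(\mathrm{Explains}(E,\mathrm{PUB}) \land \mathrm{Assumes}(E,\mathrm{PUB})\bigr)
    \end{aligned}
    \]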
    If you are right, then Harman’s view that moral hypotheses fail to figure into (true) causal explanations is wrong. I saw no direct argument for this surprising conclusion in your piece. The issue is not whether moral facts (properties, states-of-affairs, or whatever you take to be the relata of causation) are causally efficacious. The issue is whether their assumption figures into causal explanations. First-order logical facts are not themselves causally efficacious, but their assumption figures into causal explanations.

  31. @LRS: In the light of the exchange with Josh May, I am now unclear, like Roger Crisp, about how your argument can work.
    That exchange encourages us to distinguish these:
    Egoism: All agents ought to pursue their own good
    Universal Self-Interest: All agents have reason to pursue their own good
    Now you seem to argue as follows:
    (1) The dualism of practical reason can be avoided if we show that the intuitive appeal of Egoism is unreliable.
    (2) The intuitive appeal of a principle is unreliable if we can plausibly explain it as a direct product of evolution or as a more general and less arbitrary version of some such direct product.
    (3) We can explain the intuitive appeal of Egoism as a more general and less arbitrary version of the directly produced intuition that we have reason to promote our own good.
    (4) The dualism of practical reason can be avoided.
    As my responses to Crisp indicate, I am happy to follow you in accepting
    (3′) We can explain the intuitive appeal of Universal SI as a more general and less arbitrary version of the directly produced intuition that we have reason to promote our own good.
    But your argument seems to need (3), which is stronger, and I am not seeing how you establish that yet.
    The Tersman passage you quote starts from the direct intuition that I have some reason to promote my own good. It is hard to see how you can get from that to Egoism, and not just Universal SI, by appeal to Mackie/the concept of a reason you mention.
    I would therefore like to hear how, by appeal to the concept of having a reason, we can show that Egoism, and not just Universal SI, is just a more general and less arbitrary version of the intuition that I have reason to pursue my own good.
    (Or, alternatively, how you would like to strengthen the relevant “starting intuition” beyond what Tersman assumes, and how you would justify the claim that it is something that evolution would likely directly produce.)

  32. “Can you elaborate? We are not sure that we understand what you have in mind.”
    Further to a plausible evolutionary origin for impartial benevolence, I can see two avenues. Jorge Moll has suggested an “extended attachment” model, for example, in
    http://www.ncbi.nlm.nih.gov/pubmed/18400930
    and presents evidence that donations to charity involve the same brain regions (subgenual area) “activated when humans looked at their own babies.”
    http://www.pnas.org/content/103/42/15623.full.pdf
    So that would be a virtuous escalator, though not one that necessitates moral realism.
    The other approach would be through the game-theoretic and evolutionary literature on indirect reciprocity, where in some setups it is in the agent’s interest to demonstrate to others that one’s actions are based on impartial benevolence.
    These setups usually require reputation based on good information, and a net positive payoff for prosocial behaviour that falls off if there are too many defectors. The latter is a property of nature that, one might argue, is tracked by natural selection.
    The paper by Chalub et al (2006) The Evolution of Norms
    http://www.ncbi.nlm.nih.gov/pubmed/16388824
    doesn’t directly address this, but is interesting because it has inheritance, multilevel selection, and a higher level “norm” that gives rise to heterogenous local strategies that tend to be generally cooperative.
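    To make the indirect-reciprocity setup concrete, here is a toy “image scoring” simulation, loosely in the spirit of Nowak and Sigmund’s models. It is my own illustrative sketch, not code from any of the papers above, and every parameter value is arbitrary: agents donate when a recipient’s public reputation meets their threshold, donating is observed and raises the donor’s reputation, and reproduction is payoff-proportional.

    # Toy image-scoring model of indirect reciprocity (illustrative sketch only).
    # Each agent has a threshold k and donates to a recipient whose public image
    # score is >= k; acts of donating (or refusing) are observed and move the
    # donor's own score up (or down).
    import random

    B, C = 1.0, 0.1              # benefit to recipient, cost to donor (B > C)
    N, ROUNDS, GENS = 100, 1000, 50

    def run():
        # k = -5: donate to anyone; k = 0: donate only to non-defectors;
        # k = 6: never donate (scores are capped at 5)
        pop = [random.choice([-5, 0, 6]) for _ in range(N)]
        for _ in range(GENS):
            score = [0] * N      # public reputations, reset each generation
            payoff = [0.0] * N
            for _ in range(ROUNDS):
                donor, recip = random.sample(range(N), 2)
                if score[recip] >= pop[donor]:      # threshold met: donate
                    payoff[donor] -= C
                    payoff[recip] += B
                    score[donor] = min(score[donor] + 1, 5)
                else:                               # refusal is also observed
                    score[donor] = max(score[donor] - 1, -5)
            # payoff-proportional reproduction (payoffs shifted positive)
            low = min(payoff)
            weights = [p - low + 1e-9 for p in payoff]
            pop = random.choices(pop, weights=weights, k=N)
        return {k: pop.count(k) for k in sorted(set(pop))}

    print(run())

    With benefit well above cost and reputations based on good information, the discriminating donors (k = 0) tend to displace the pure defectors, while the payoff to prosociality collapses if defectors become too numerous before reputations can track them – which is the structure described above.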

  33. @LRS in response to Josh May. It’s true that something like the self-interest principle can be derived from the principle of universal benevolence. But then universal benevolence will be the ultimate principle. As I was understanding it, the self-interest principle states an ultimate reason, which might (but need not) come into conflict with universal benevolence. (Of course, like Sidgwick, I think these reasons do conflict. But, unlike him, I think that they can be weighed against one another.)

  34. @Doug Portmore: I’d love to be able to say that I am giving exactly the right % of my income to charity – that if I gave any more, it would be counterproductive in the long run, or that given the kind of person I am, I just can’t give more, in the sense relevant to “ought implies can” – but I don’t really believe either of those claims. So I have to deny your (4) and acknowledge that my failure to give more is the result of substantive irrationality on my part. This sounds odd – a philosopher admitting that he is substantively irrational! – but I think the oddness results from the fact that when we speak about people being rational or irrational, that’s a very broad judgment that doesn’t distinguish between normative and motivational rationality. In fact it doesn’t even distinguish between acting irrationally and holding irrational beliefs. So saying that I am irrational may conjure up the idea that I believe the world will end in 2012, or that I want to spend my time counting the blades of grass on the Princeton campus, etc. But if we rephrase your (4) to say that my failure to give more to charity is the result of my failing to act as a fully rational being, knowing what I know, would act, then I accept that that is the case.
    @other commentators: LR is traveling at present, so “LRS” is not able to reply to the other comments right now, but hopes to be able to do so next week.

  35. @Tamler Sommers: we were focusing on defending the principle of universal benevolence from Street’s evolutionary debunking argument, rather than on defending its self-evidence or the claim that it has the independent agreement of other careful thinkers. We gave (on p.24) only the bare bones of Sidgwick’s argument for the self-evidence of the principle. We agree that more argument would be needed to establish that it really does satisfy the criteria you mention.
    @Andres Luco: Mackie’s first stage of universalization rules out “first person egoism,” i.e. principles like “If I am in trouble, you ought to help me, because that would benefit me, but if you are in trouble, it is not the case that I ought to help you, because that would benefit you, and not me.” Mackie agrees with Hare that the idea of universalizability excludes such principles. (Hare thought that they are ruled out by the logic of the term “ought” but we prefer to say that they are ruled out by the universality inherent in the idea of providing reasons for our actions that could be acceptable to others.) This first stage of universalization does not, however, exclude principles like “When forced to choose between helping one’s own child or someone else’s child, everyone ought to help his or her child.” The principle of universal benevolence does rule that out, unless there are some other relevant factors, such as having greater ability to help our own children, or the importance for both us and our children that we act out of love for them. It is in that sense that we claim that the principle of universal benevolence involves a more substantive moral claim than the first stage of universalization. It involves a claim that cannot be derived from the meaning of “ought” nor from what is inherent in providing reasons for our actions.
    @Justin Clarke-Doane: Thanks for the clarification. If there are moral truths, is it really so implausible to hold that they can play a role in true causal explanations? If you accept that logical facts can play a role in causal explanations, why can’t a causal explanation of our belief in a moral principle (and the actions that follow from that belief) include the fact that its truth is, to beings like us, self-evident?
    @Brad Cokelet, David Duffy, and Roger Crisp: we hope to respond in the next day or two.

  36. Hi, LRS. Thanks for the follow-up.
    The moral case and the logical one are not analogous. The argument that the logical fact that p is assumed by the causal explanation of our belief that p is this. Every logical fact is assumed by every explanation (in that every explanation implies every logical truth). But it is not the case that every moral fact is assumed by every explanation. Thus, when p is an alleged moral fact, a different argument for the claim that p figures into the causal explanation of our belief that p is needed.

  37. Apologies for the delay in posting this comment, but it has been a busy week for both of us.
    @Brad Cokelet and Roger Crisp: The point we made in our response to Josh May is that “All agents have reason to pursue their own interests” isn’t strong enough to set up the dualism, since, as Roger Crisp points out, it can be derived from the principle of universal benevolence. We assume that someone defending a distinctive principle that could give rise to the dualism will be defending either:
    i. All agents have more reason to pursue their own interests than to pursue the interests of strangers
    or
    ii. All agents ought to promote their own interests rather than the interests of strangers
    Admittedly, as Roger points out, it would be enough to defend:
    iii. All agents have an ultimate reason, not derived from any other principle, to pursue their own interests
    We think that these three versions of the self-interest principle can be explained, and hence debunked, by an evolutionary account of how we reach the intuition on which they are grounded, which is the intuition that my own interests give me a reason for acting. Perhaps what we said in response to Andres Luco, above, helps to explain why we think that the universality inherent in the idea of giving a reason is sufficient to get us from that intuition to any of i-iii above.
    @David Duffy: Thank you very much for the interesting references. We agree – and indeed we stated in our article (p.26, fn 45) – that our claim about the lack of an evolutionary explanation for impartial benevolence is empirical, and hence open to scientific assessment and possible refutation. But we don’t see any such refutation in the articles you cite. The paper by Moll et al developing the “extended attachment model” actually contains evidence that points in our direction. See p. 173, where the authors state:
    In this sense, extended attachment would not only drive intragroup altruism and cooperation, but also help demarcate out-group boundaries, enhancing group distinctiveness and promoting aggression towards outgroups—a haunting feature which has permeated human evolution (Bowles & Gintis 2004; Bowles 2006).
    The other papers do not, as far as we can see, bear directly on the claim we are making.
    @Justin Clarke-Doane: No doubt in the sense you explicate, every logical fact is assumed by every explanation, but that isn’t a particularly interesting way in which an explanation can assume a logical truth. We thought you had something else in mind, for example that Mary’s decision to take B may be at least in part explained by the fact that she wanted either A or B, she knew that she could not get A, and the logical truth of the disjunctive syllogism, of which she was aware. It is in that sense, we believe, that, when Jane has to choose between rowing her boat to the north, where X is drowning, or to the south, where Y and Z are drowning, one element in the explanation of her decision to row to the south may be the self-evidence of the principle that it is better to bring about a greater good than a lesser one.
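    (In schematic form, the disjunctive syllogism Mary relies on is simply \( A \lor B,\; \neg A \,\vdash\, B \), where A and B here stand for “Mary obtains A” and “Mary obtains B.”)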

  38. @ LRS. Thanks for the response. I’m not denying that egoism may be debunkable (nor indeed am I accepting that). What I’m saying is that, if it is debunkable in this way, so is universal benevolence. Just as evolution gives me ‘my own interests give me a reason for acting’ (which might ground egoism), so it gives me ‘the interests of others give me a reason for acting’ (which might ground universal benevolence). You might say: ‘But not *any* others’. Indeed. But the egoist might say: ‘But not *just* my own interests’.

  39. @LRS and Peter Singer:
    Thank you for your reply. I can see how Mackie’s first stage of universalization (hereafter “FSU”) would permit principles like Crisp’s Self-Interest Principle (“SI”), as well as the principle that “Everyone ought to prioritize the interests of their own kin over other people’s kin.” I also see how Sidgwick’s principle of universal benevolence would exclude these two principles. And now, thanks to Peter Singer’s reply to me, I recognize that there is a difference between Mackie’s FSU and Sidgwick’s rationale for the principle of universal benevolence. The crucial question, though, is whether this difference establishes that the universal benevolence principle, but not Crisp’s SI, withstands evolutionary debunking. I am not sure about this, because it seems to me that Tersman’s evolutionary debunking argument against SI can also apply to universal benevolence, DESPITE the difference between Mackie’s FSU and Sidgwick’s rationale for universal benevolence.
    Surely there is a common feature in Sidgwick’s argument for universal benevolence (as well as the prudence principle) and Mackie’s FSU—namely, the rejection of irrelevant differences. It seems to me that this common feature makes BOTH principles vulnerable to Tersman’s argument. Let me explain. Sidgwick tells us that the well-being of different individuals constitutes similar parts of a whole. Differences in the identities of who experiences an increment of well-being are not morally relevant. When I look at the passage quoted from Tersman (LRS 23), I fail to see why we can’t rather mechanically change Tersman’s point into a point against universal benevolence–just replace every occurrence of “SI” with “universal benevolence principle.” Indeed, the negation of universal benevolence would suggest that the well-being of one person can be of greater moral significance than a similar increment of well-being in others, just because of who that person is. Wouldn’t we need a complex explanation of this priority in terms of relevant differences between persons? And wouldn’t the universal benevolence principle have the same virtue of simplicity that Tersman attributes to the universal SI principle (cf. LRS 23 – 24)?
    It would seem that we can arrive at the universal benevolence principle by accepting that there is no morally relevant difference between the well-being of people in our “in-group” and the well-being of other people. In that case, are we not just reasoning toward a less arbitrary and more general version of an evolved disposition to be altruistic towards in-groups? If so, isn’t universal benevolence a “reasoned extension” from a “contaminated starting point,” just like the SI?
    I realize that my post here raises basically the same question as Roger Crisp’s above. I am just trying to grapple with the details of LRS’s simultaneous debunking of SI and vindication of universal benevolence, as the authors present them on pp. 23 – 25.

  40. Hi, LRS. Thanks for the response.
    What is the more fine-grained sense of “explain” that you have in mind? It is not a causal sense, since logical facts (states-of-affairs) are not causally efficacious. Nor is it counterfactual. The claim that had the (elementary) logical truths been different, our beliefs would have been correspondingly different seems to involve an unintelligible antecedent. Perhaps you take the relevant sense of “explain” as primitive? If so, then I wonder how you hope to argue (in a non-question-begging way) that moral truths — including the Principle of Universal Benevolence – explain our corresponding moral beliefs.

  41. Hi LRS,
    In case you do want to continue the discussion of this, I am not seeing how the strategy used in your response to Andres Luco can get you from
    (BASE) My interests give me a reason to act
    to
    (i) All agents have more reason to pursue their own interests than to pursue the interests of strangers
    or
    (ii) All agents ought to promote their own interests rather than the interests of strangers
    Your response to Luco focuses on principles that cannot be universalized. As far as I can see, the only way to apply that to the case at hand would be to claim that a principle like the following cannot be universalized:
    (P) Every agent has some reason to promote its own interests but need not have more reason to promote its interests than the interests of strangers.
    I can’t see how this would fail any relevant universalization test. So I am still unclear how you intend to get from BASE to i or ii above.

    Finally, I can see how you could get from
    (STRONG BASE) I have more reason to pursue my own interests than to pursue the interests of strangers
    to i or ii. But I do not see why evolution would generate the intuition that STRONG BASE is true.
