(This is the eleventh of twelve “meetings” of our virtual reading group on Derek
Parfit’s Climbing the Mountain—see here
for further details. Next week, we will discuss the final chapter, Chapter 13,
of the July 22nd version of the manuscript, which can be found here.)
In this chapter, Parfit argues that the Kantian Contractualist Formula (KC) requires
everyone to follow the same principles that the universal acceptance version of
Rule Consequentialism (UARC) requires everyone to follow: that is, those whose
universal acceptance would make things go best in the impartial
reason-involving sense. Such principles are, as Parfit calls them,
“UA-optimific.” Here is the argument, pretty much in Parfit’s own words:
(A) Everyone ought to follow the principles whose universal acceptance everyone could rationally will, or choose.
(B) Anyone could rationally choose
any principles that they would have sufficient reasons to choose.
(C) There are some principles
whose universal acceptance would make things go best.
(D) These are the principles whose universal acceptance everyone would have the strongest impartial reasons to choose.
(E) These impartial reasons would
not be decisively outweighed by any relevant conflicting reasons.
(F) Everyone would have sufficient
reasons to choose that everyone accepts these UA-optimific principles.
(G) There are no other
significantly non-optimific principles whose universal acceptance everyone
would have sufficient reasons to choose.
(H) It is only these UA-optimific
principles whose universal acceptance everyone could rationally choose.
(I) Everyone ought to follow the
principles that are UA-optimific.
The argument is valid. (A) is what KC explicitly requires, and (I) is what UARC explicitly
requires. So, if the argument’s other premises are true, the argument shows
that KC requires us to follow UA-optimific principles, just as UARC does. So
let’s consider the other premises.
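Since the argument's validity is doing real work here, it may help to see the logical skeleton laid bare. Below is a minimal sketch in Lean 4; the predicate names are my own illustrative labels rather than Parfit's, and the sketch captures only the core inference from (A), (B), and (F) to (I). ((G) and (H) do the further work of establishing that it is only the UA-optimific principles that everyone could rationally choose.)

-- A minimal validity sketch (Lean 4). Each premise becomes a hypothesis;
-- the predicate names are illustrative labels, not Parfit's own terms.
theorem kantian_argument {Principle : Type}
    (uaOptimific suffReason couldChoose ought : Principle → Prop)
    -- (A): everyone ought to follow any principles whose universal
    -- acceptance everyone could rationally choose
    (hA : ∀ p, couldChoose p → ought p)
    -- (B): anyone could rationally choose any principles that they would
    -- have sufficient reasons to choose
    (hB : ∀ p, suffReason p → couldChoose p)
    -- (F): everyone would have sufficient reasons to choose that everyone
    -- accepts the UA-optimific principles (itself derived from (C), (D), and (E))
    (hF : ∀ p, uaOptimific p → suffReason p) :
    -- (I): everyone ought to follow the UA-optimific principles
    ∀ p, uaOptimific p → ought p :=
  fun p h => hA p (hB p (hF p h))

The derivation itself is trivial; all the philosophical action lies in whether the premises, and especially (E), are true.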
If everyone knows the relevant facts, then (B) is true. And Parfit says, “we
should suppose that, when making these imagined choices, everyone would know
all the relevant facts.” So (B) is true. (C) is quite plausible, and (D) is
true by definition. (G) is fairly easy to defend—see section 45. This leaves
(E), which Parfit spends the bulk of the chapter defending.
Whether (E) is true depends on what reasons we have not to choose the UA-optimific
principles and whether, on the correct theory about reasons, any of these
reasons would ever decisively oppose the impartial reasons we have to choose
the UA-optimific principles. For reasons of space, I’ll discuss only whether
anyone could ever have decisive self-interested reasons not to choose the
UA-optimific principles. As Parfit notes, if everyone accepted the UA-optimific principles, that would be very bad
for certain people. For instance, suppose that White is stranded on one rock
and five others are stranded on another rock. Parfit can save either White or
the five but not both. If Parfit accepts the pertinent UA-optimific principle (viz.,
the Numbers Principle, which directs us to save the greatest number when
everything else is equal) and acts accordingly, this will be very bad for
White. He’ll die. White would be better off if Parfit accepted and followed the
Nearness Principle, since the Nearness Principle directs Parfit to save the
group nearest to him and White happens to be on the rock nearest to Parfit.
Thus White has weighty self-interested reasons not to will everyone to accept
the Numbers Principle. If premise (E) is true, these reasons mustn't be
decisive. But, as Parfit argued in the first few chapters, we should accept a wide value-based theory
about reasons according to which we often have sufficient reason to do what’s
impartially best. On the most egoistic version of this theory (the version
where we least often have sufficient reason to do what’s impartially best), we
are rationally required to give strong priority to our own self-interest. But
even on the most egoistic version of this theory that is still plausible, one
would have sufficient reason to sacrifice one’s own life if this would save millions of other lives. And since in
this sort of Kantian thought-experiment we are to imagine that White has the
power to choose which principles everyone would accept, White would be saving
millions of other lives by choosing the Numbers Principle over the Nearness
Principle, for there are billions and billions of people who would then accept
this principle now and forever. The cumulative effect of everyone’s accepting the
Numbers Principle over the Nearness Principle is that millions of lives will be
saved over time. Thus even on the most egoistic version of the wide value-based
theory about reasons that’s still plausible, White will have sufficient reason
to choose the Numbers Principle. This is, of course, only one example, but
similar claims can be made about other cases. And thus this, along with his
other arguments, supports (E).
Why does Parfit stop short of saying that, given certain plausible assumptions, KC
entails UARC? My answer would be that someone could accept (I) and not be a Rule
Consequentialist. So even if KC entails (I), (I) is not UARC. UARC is, I would
claim, the view that (I) is a fundamental moral principle, a principle that
isn’t derivative of any more fundamental moral principle. But if this is Parfit’s
thinking, then Parfit shouldn’t say that (A) is KC, which he does.
And shouldn't KC be revised to KC*: “Everyone ought to follow the principles whose
universal acceptance everyone has sufficient reason to will, or choose”? If KC
and KC* differ, then isn’t KC* more plausible? If KC and KC* don’t differ, then
why do we need (B)?
Won't there be instances where we'll know that following the UA-optimific principles
(the principles that both KC and UARC direct us to follow) will entail our
refraining from doing what will make things go both partially (i.e., best for us
and/or those to whom we have close ties) and impartially best? In such cases,
KC and UARC will, nevertheless, direct us to follow the UA-optimific principles
even though we know that this will make things go both partially and impartially
worse. But, if we accept a value-based theory of reasons as Parfit suggests,
won’t we have, in such instances, most reason to act contrary to the
UA-optimific principle? And so isn’t Parfit, then, headed down a path where he’ll
be forced to accept that we sometimes have most reason to act wrongly? That, I
think, would be a bad result.