(This marks
the eleventh of twelve “meetings” of our virtual reading group on Derek
Parfit’s Climbing the Mountain—see here
for further details. Next week, we will discuss the final chapter, Chapter 13,
of the July 22nd version of the manuscript, which can be found here.)

 

In this
chapter, Parfit argues that the Kantian Contractualist Formula (KC) requires
everyone to follow the same principles that the universal acceptance version of
Rule Consequentialism (UARC) requires everyone to follow: that is, those whose
universal acceptance would make things go best in the impartial
reason-involving sense. Such principles are, as Parfit calls them,
“UA-optimific.” Here is the argument, pretty much in Parfit’s own words:

(A) Everyone ought to follow the
principles whose universal acceptance everyone could rationally will, or
choose.

(B) Anyone could rationally choose
any principles that they would have sufficient reasons to choose.

(C) There are some principles
whose universal acceptance would make things go best.

(D) These are the principles whose
universal acceptance everyone would have the strongest impartial reasons to
choose.

(E) These impartial reasons would
not be decisively outweighed by any relevant conflicting reasons.

Therefore

(F) Everyone would have sufficient
reasons to choose that everyone accepts these UA-optimific principles.

(G) There are no other
significantly non-optimific principles whose universal acceptance everyone
would have sufficient reasons to choose.

Therefore

(H) It is only these UA-optimific
principles whose universal acceptance everyone could rationally choose.

Therefore

(I) Everyone ought to follow the
principles that are UA-optimific.

 

The
argument is valid. (A) is what KC explicitly requires, and (I) is what UARC explicitly
requires. So, if the argument’s other premises are true, the argument shows
that KC requires us to follow UA-optimific principles, just as UARC does. So
let’s consider the other premises.
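 

To make the claim of validity vivid, here is one rough way the argument’s skeleton might be regimented. The regimentation and the predicate letters are mine, not Parfit’s: let $U(p)$ say that principle $p$ is UA-optimific, $R(p)$ that everyone could rationally choose that everyone accepts $p$, and $O(p)$ that everyone ought to follow $p$.

\[
\begin{aligned}
&\forall p\,\bigl(R(p)\rightarrow O(p)\bigr) && \text{(A): KC}\\
&\forall p\,\bigl(U(p)\rightarrow R(p)\bigr) && \text{(F) with (B): sufficient reasons make the choice rationally available}\\
&\forall p\,\bigl(R(p)\rightarrow U(p)\bigr) && \text{(H), from (F), (B), and (G)}\\
&\forall p\,\bigl(U(p)\rightarrow O(p)\bigr) && \text{(I): UARC, from the first two lines}
\end{aligned}
\]

This sketch flattens the quantification over persons and the modality of “could rationally will,” but it makes the logical point plain: once (F) and (H) are granted, (I) follows from (A), and (H) additionally ensures that no rival, non-optimific principles generate competing requirements.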

 

If
everyone knows the relevant facts, then (B) is true. And Parfit says, “we
should suppose that, when making these imagined choices, everyone would know
all the relevant facts.” So (B) is true. (C) is quite plausible, and (D) is
true by definition. (G) is fairly easy to defend—see section 45. This leaves
(E), which Parfit spends the bulk of the chapter defending.

 

Whether
(E) is true depends on what reasons we have not to choose the UA-optimific
principles and whether, on the correct theory about reasons, any of these
reasons would ever decisively oppose the impartial reasons we have to choose
the UA-optimific principles. For reasons of space, I’ll discuss only whether
anyone could ever have decisive self-interested reasons not to choose the
UA-optimific principles.

 

As Parfit
notes, if everyone accepted the UA-optimific principles, that would be very bad
for certain people. For instance, suppose that White is stranded on one rock
and five others are stranded on another rock. Parfit can save either White or
the five but not both. If Parfit accepts the pertinent UA-optimific principle (viz.,
the Numbers Principle, which directs us to save the greatest number when
everything else is equal) and acts accordingly, this will be very bad for
White. He’ll die. White would be better off if Parfit accepted and followed the
Nearness Principle, since the Nearness Principle directs Parfit to save the
group nearest to him and White happens to be on the rock nearest to Parfit.
Thus White has weighty self-interested reasons not to will that everyone accept the Numbers Principle. If premise (E) is true, these reasons mustn’t be decisive.

 

As Parfit
argued in the first few chapters, we should accept a wide value-based theory
about reasons according to which we often have sufficient reason to do what’s
impartially best. On the most egoistic version of this theory (the version
where we least often have sufficient reason to do what’s impartially best), we
are rationally required to give strong priority to our own self-interest. But
even on the most egoistic version of this theory that is still plausible, one
would have sufficient reason to sacrifice one’s own life if this would save millions of other lives. And since in
this sort of Kantian thought-experiment we are to imagine that White has the
power to choose which principles everyone would accept, White would be saving
millions of other lives by choosing the Numbers Principle over the Nearness
Principle, for there are billions and billions of people who would then accept
this principle now and forever. The cumulative effect of everyone’s accepting the
Numbers Principle over the Nearness Principle is that millions of lives will be
saved over time. Thus even on the most egoistic version of the wide value-based
theory about reasons that’s still plausible, White will have sufficient reason
to choose the Numbers Principle. This is, of course, only one example, but
similar claims can be made about other cases. And thus this, along with his
other arguments, supports (E).

 

Some Questions:

 

(1) Why
does Parfit stop short of saying that, given certain plausible assumptions, KC
entails UARC? My answer would be that someone could accept (I) and not be a Rule
Consequentialist. So even if KC entails (I), (I) is not UARC. UARC is, I would
claim, the view that (I) is a fundamental moral principle, a principle that
isn’t derivative of any more fundamental moral principle. But if this is Parfit’s
thinking, then Parfit shouldn’t say that (A) is KC, which he does.

 

(2) Why
shouldn’t KC be revised to KC*: “Everyone ought to follow the principles whose
universal acceptance everyone has sufficient reason to will, or choose.” If KC
and KC* differ, then isn’t KC* more plausible? If KC and KC* don’t differ, then
why do we need (B)?

 

(3) Won’t there be instances where we’ll know that following the UA-optimific principles (the principles that both KC and UARC direct us to follow) will require us to refrain from doing what would make things go best both partially (for us and/or for those to whom we have close ties) and impartially? In such cases, KC and UARC will nevertheless direct us to follow the UA-optimific principles even though we know that this will make things go both partially and impartially worse. But if we accept a value-based theory of reasons, as Parfit suggests, won’t we have, in such instances, most reason to act contrary to the UA-optimific principles? And isn’t Parfit, then, headed down a path where he’ll be forced to accept that we sometimes have most reason to act wrongly? That, I think, would be a bad result.

17 Replies to “Parfit’s CtM, Chapter 12: Consequentialism”

  1. This main argument is the place where I think things go wrong in the book. At least I’m not convinced yet. I don’t think (G) is true and (F) seems very doubtful too.
    The problem I have in mind is easy to see in Scanlon’s TV-transmitter case. As everyone knows, in that case, during the World Cup final a TV transmitter falls on Jones. He’s stuck and getting painful electric shocks. We could save him by cutting the transmission, but that would deny the billion or so viewers their excitement and enjoyment.
    Now, I take it that the optimific principles on pretty much any value theory are going to be those that require the transmission to be continued. This is because RC aggregates the small benefits, so that at some point, with enough viewers, the aggregated benefit will outweigh Jones’ suffering. The principles which require cutting the transmission are then non-optimific.
    (G) says that everyone could only rationally accept optimific principles. In Jones’ case, then, the argument requires that it won’t be rational for someone to accept the principles which require saving Jones. I cannot see who that person could be. Everyone else loses just a trivial benefit for the sake of saving Jones. That has to be rational. So, contrary to (G), there are other non-optimific principles everyone could accept.
    Moreover, in the case of (F), I’m not sure Jones has sufficient reason to choose that everyone accepts the optimific principles. I don’t think he has sufficient reason to choose that he accepts the principles that require from him a significant burden for the trivial enjoyment of others.
    So given that (G) and (F) fail, I think the argument to get RC from Kantian contractualism fails. That shouldn’t be a surprise if contractualism and RC, due to structural differences in aggregation, are not even coextensive.

  2. Doug: Nice job. As to your second question, I had the same question, so obviously I don’t have an answer for you.
    As for your third question, presumably you have in mind cases in which adhering to some specific principle would make things partially and impartially worse, i.e., following this particular principle here and now would be worse, both for me and for others, than not following it here and now. But you always have decisive reasons to adhere to the set of principles whose universal acceptance would make things go best. So it turns out that adhering to one principle in one local circumstance that makes things worse won’t defeat your ongoing decisive reason to adhere to all the principles in the favored set. And if it were the case that following that specific principle very often made things worse, then presumably it wouldn’t be part of the set of optimific principles to begin with.

  3. Dave,
    You write,

    you always have decisive reasons to adhere to the set of principles whose universal acceptance would make things go best [in the impartial reason-involving sense].

    Can you explain to me why that’s true?
    If Parfit’s arguments for (E) are sound, then it follows that we always have sufficient reason to choose the UA-optimific principles, but that’s not to say that we always have sufficient (let alone decisive) reason to follow the UA-optimific principles.
    From the fact that I have sufficient reasons to choose the UA-optimific principles, it doesn’t follow that I have sufficient reason to abide by those principles when doing so is contrary to what I have both best partial and impartial reasons to do.

  4. Jussi,
    The argument for (F) is valid, right? So, if you’re going to reject (F), then you’ll also need to reject one of (A)-(E). Which one would you reject?
    Regarding (G), I think that you raise an interesting worry that I wish Parfit had addressed. I had the same worry at some point, although for me it was weighing a life against very many minor headaches. But can’t we just accept a value theory where no number of minor headaches is ever as bad as one death and where no amount of World-Cup-watching pleasure is ever good enough to compensate for the evil of one person suffering hours of painful shocks?

  5. Dave,
    Suppose that the principle “Don’t intentionally kill an innocent person without his or her consent (call this ‘murder’) unless millions of lives (or some comparable good) are at stake” is UA-optimific. Now suppose that I’m in a situation where I can murder one stranger to prevent the murders of my wife and child. I can see that I have sufficient reason on wide value-based theories to choose that everyone accepts this principle. But it seems to me that, on any wide value-based theory, I’ll have decisive reasons not to abide by this principle in this instance. Violating this principle in this instance is better for me, better for those to whom I have close ties, and better impartially speaking.

  6. Doug,
    you’re right. I’d probably say that (E) is not true in the TV case for Jones.
    We could. And I think this is what Parfit is probably going to do. But there are problems. First, that value theory would have to be either non-aggregative or have various cut-off points marking degrees of moral relevance. I’m not sure how consequentialist the former views are, and the latter are certainly highly problematic. The second problem is that if we are allowed to fiddle with the value theory in this way to guarantee that everyone has most reason to accept the optimific principles, then the consequentialism for which we get the Kantian argument becomes empty. Arguments for empty conclusions seem rather uninteresting.

  7. Doug: It seems that your case is a nice illustration of why the possible principle you cite wouldn’t in fact be a UA principle after all, for a principle that didn’t allow exceptions for just such instances wouldn’t be one that would be universally acceptable. Parfit talks about a case similar to this on 243.

  8. Jussi,
    Regarding your first point, why think that a consequentialist can’t countenance lexical priority relations across distinct categories of goods?
    Regarding your second point, it becomes empty only if there is no independent motivation for thinking that the value theory in question is correct.

  9. Dave,
    Please don’t focus on my specific example, for I think that the broader point still stands. Presumably, there will be examples (whether or not the one that I provide is such) where violating a UA-principle will be better both partially and impartially speaking. No? Or do you think that Rule Consequentialism, as Parfit formulates it, just collapses into Act Consequentialism? If you don’t want to accept my supposition about what the UA-principle is, then tell me what you take the UA-principle that pertains to murder to be, and I’ll give you an example where violating it is best partially and impartially.
    Assuming there are such examples, it seems that the agent will, on wide value-based theories, have sufficient reason to choose the principle but not to abide by it. Right?

  10. Doug,
    I’ve always found lexical priority relations unintuitive, especially when I put my consequentialist hat on. Take a choice between one object slightly above the cut-off point and a big number of objects just a tiny bit below the line. Now, as a lexicalist consequentialist I should say that having the one object has more value and is therefore right. But if the difference between the objects in the two categories is so small, this is hard to accept.
    Your second point is right. But I take it that it would be Parfit’s burden to come up with an independent argument for an axiology on which the optimific principles are the only ones everyone can rationally accept. I haven’t seen such an argument. Until that point, I smell emptiness – there’ll be an axiology, whatever it is, on which everyone has decisive reason to accept only the optimific principles that give the intuitively correct moral prescriptions.

  11. Jussi,
    Perhaps, lexical priority isn’t the best choice. My point was only that a consequentialist needn’t accept aggregation. Perhaps, Parfit would prefer to say that the two kinds of value are only imprecisely comparable. This, I think, would save him from your worry about there being some precise cut-off point.
    You’re right about the burden being on Parfit. I agree that he should address this kind of worry explicitly and defend the needed value theory on independent grounds if this is indeed the route that he wishes to take.

  12. One of my points was that it would actually be quite difficult to come up with a principle whose local violation would be best, but I agree that there will be such cases. So in such cases, then, your claim is simply that “the agent will, on wide value-based theories, have sufficient reason to choose the principle but not to abide by it.” But then this goes back to what I said earlier: the argument starts with (A), that “Everyone [including me] ought to follow the principles whose universal acceptance everyone could rationally will, or choose.” Presumably accepting this Kantian formula entails that you already accept that one has decisive reason in any circumstance to adhere to the principles, even if what they demand isn’t optimific in that particular circumstance.

  13. Dave,
    This goes back to the stuff from the first few chapters, and I draw here from Dan’s post on Chapter 4. According to Parfit,

    one of the most practically significant questions in ethics is (1).
    (1) Do we often have most, or decisive, reason to act wrongly?
    For if it turns out that we often have decisive reason to act wrongly, morality would, according to Parfit, lose much of its practical significance. Answering (1), however, requires answering (2) and (3) for any given decision.
    (2) What ought we to do?
    (3) What have we most reason to do?

    Now (A) is an answer to question (2), not (3). Right? And it’s an open question how often, on the correct theory about reasons, the answers to (2) and (3) will overlap. So I think that it’s a mistake to presume, as I take you to be suggesting, that one’s accepting (A) entails that one already accepts that one has decisive reason in any circumstance to adhere to the UA-optimific principles. Whether (A) does or does not entail this depends on our answer to (3). Now Parfit’s answer to (3) is the wide value-based theory. But, on that theory, the answers to (2) and (3) won’t always overlap — they won’t overlap in those instances where the UA-principles direct us to act contrary to what would be both partially and impartially best. Of course, Parfit may argue that, nevertheless, these instances will be sufficiently rare and that it’s enough that they overlap fairly often. But it seems to me a bad result if the answers to (2) and (3) ever come apart. In those instances where they do come apart, I don’t see what practical significance the UA-optimific principles would have (for me then and there). Why should I care, in some particular circumstance, that my X-ing would violate some UA-optimific principle if I have decisive reason to violate that principle?
    Am I misunderstanding how this stuff fits in with the rest of the book, or does this concur with your understanding as well?

  14. OK, this is good (and helpful). I’m seeing how this issue now looms over the book. So then, given the earlier distinction you note, what’s the problem with saying that (a) most of the time (2) and (3) will overlap, and (b) on the rare occasions when they don’t, one can have decisive reason to do what one morally ought not do? And there’s no reason to think that, on those rare occasions when they come apart, morality in general will lose its practical significance. Indeed, why think moral reasons are always morally overriding? (I seem to recall you yourself writing something about this — oooh, snap! Or, more formally, tu quoque, hombre!)

  15. Doug,
    at least you would have to have weighed the reasons you have for following the optimific principles before concluding that you have more reason to do something else. And it might be that there are some residual duties, on the basis of the defeated moral reasons, to follow those principles. In some cases, maybe one ought to regret that one did something other than follow the principles everyone can reasonably accept, even if one had better reasons to act in this way.
    My favourite example of this kind of case comes from Susan Mendus. So, suppose that you have a child who is seriously ill and can only be saved with an operation. There is a waiting line for the kind of operation that is needed. Because the line is quite long, it is uncertain whether your child can have the operation in time. But you work at the hospital, and you could pull strings to get your child to the front of the waiting line. In this case, it seems like morality comes down on the side of pulling strings being forbidden. The other kids too are going to die without the operation and the other parents are going to undergo a similar loss. However, you (I at least) might think that in this case there is more reason to pull the strings.

  16. You ask,

    [W]hat’s the problem with saying that (a) most of the time (2) and (3) will overlap, and (b) on the rare occasions when they don’t, one can have decisive reason to do what one morally ought not do?

    The problem, I think, is that it conflicts with a very deep intuition that many of us have about the nature of morality, namely, that morality is overriding. Let’s call the thesis that morality is overriding “the Overridingness Thesis,” or “OT” for short (here I draw from Sarah Stroud’s article on the topic).
    OT: If S is morally required to do (or to refrain from doing) x, then S has decisive reason to do (or to refrain from doing) x.
    Insofar as there are other moral theories (agent-relative consequentialism, for instance) that are compatible with OT, and Parfit’s favored trifecta theory isn’t, we have a reason to favor these theories over Parfit’s trifecta theory.
    N.B. OT differs from the thesis that I’ve argued against, i.e., the thesis that moral reasons are morally overriding. To say that moral reasons are morally overriding is to say that even the weakest moral reason defeats the strongest non-moral reason in determining the deontic status of an action. You can accept OT while denying that moral reasons are morally overriding. Snap! — right back at ya.

  17. Jussi,
    You write,

    In this case, it seems like morality comes down on the side of pulling strings being forbidden. The other kids too are going to die without the operation and the other parents are going to undergo a similar loss. However, you (I at least) might think that in this case there is more reason to pull the strings.

    Whether you are in fact forbidden to pull strings in this case, as it seems to you, depends on what the correct moral theory says. I happen to think that the correct moral theory will say that your pulling strings in this case is not forbidden. So I agree that you have decisive reason to do so, but I think that a moral theory that is compatible with OT is, other things being equal, more plausible than one that isn’t.
