This marks the second of eleven “meetings” of our virtual reading group
on Derek Parfit’s Climbing the Mountain—see here
for further details. Next week, we will discuss Chapter 4 of the
latest version of the manuscript (the June 7th version), which can be
found here. (Chapter 4 of the latest version is Chapter 3 of the older version we’ve been working with.) Below the fold is a précis of Chapter 2 of the version we’ve been working with, though it also discusses some substantive changes that Parfit has made to that chapter in his latest version. Page citations in this post are to the older version, unless otherwise specified. I join Doug in apologizing for any confusion.





Parfit, Climbing the Mountain, Chapters 2 and 3

Chapter 2 is best understood,
I think, in light of its role in Parfit’s overall project.  Consider the following five questions:

 

(1) Can we ever have most reason to do what is
morally wrong?

(2) What ought we morally to do?

(3) What do we have most reason to do (or desire)?

(4) What makes our actions (and desires)
rational?

(5) What kinds of facts provide reasons?

 

(1)-(5) are listed in what (I
think) Parfit believes is an increasing order of conceptual importance: in
order to answer (1), we must first answer (2) and (3), and (3) is more
fundamental than (2); in order to answer (3), we must first answer (4); and in
order to answer (4), we must first answer (5). The purpose of Chapter 1, which Doug summarized last week, is to answer
(5). The purpose of Chapter 2 is to
answer (3) and (4) and to explain why (1) is perhaps the most practically significant
question to be answered in ethics.  The purpose
of the remainder of the book, beginning with the next chapter, "Possible Consent,"
is to answer (2).

 

Can we ever have most reason to
do what is morally wrong? This is,
perhaps, the most practically significant question in ethics. Most people believe that morality matters,
and for some, morality is supremely
important. But, according to Parfit (in
the latest version of the manuscript), morality matters only if we have reasons
to care about morality and to avoid acting wrongly (latest version, pp. 69-70). (Parfit thinks that we cannot justify this
criterion of importance, though we may use it.) Notice, though, that the answers to (2) and (3), which are both required in order to answer (1), can come apart for any given decision about how to act. If their answers often came apart, morality’s practical significance would thereby be undermined. Therefore, (1) is arguably the most
practically significant question to be answered in ethics. To answer it, we must first make some headway
into answering (4), (3), and (2), in that order.

 

What makes desires and
actions rational? I’ll focus on desires,
though I’m not sure I completely understand Parfit’s answer even about desires. Define ‘normative belief’ as ‘a belief about
which facts give reasons’. (So, we can
have normative beliefs about which facts give us reasons to desire something.) I think Parfit’s answer to (4) is the
following:

 

(Rd) P’s desire D to φ is fully practically rational if and only if:

(i) if D depends on nonnormative beliefs B1, B2, …, Bn, then the truth of B1, B2, …, Bn would give P reasons to have D; and

(ii) if D depends at least in part on normative beliefs Bn1, Bn2, …, Bnm, then (a) Bn1, Bn2, …, Bnm must be rational, and (b) P must respond to Bn1, Bn2, …, Bnm by having the desires that Bn1, Bn2, …, Bnm are about.
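
Since the quantifier structure of (Rd) can be hard to track in prose, here is a rough formalization, offered only as a reading aid: the predicate names are my own shorthand rather than Parfit’s notation, and I abbreviate the normative beliefs Bn1, …, Bnm as N1, …, Nm.

\[
\begin{aligned}
% FPR(P, D): D is a fully practically rational desire of P's
% Dep(D, ...): D depends on the listed beliefs
% Reasons(X, P, D): the truth of X would give P reasons to have D
% Rat(N): the normative belief N is itself rational
% Resp(P, N_1, ..., N_m): P has the desires that N_1, ..., N_m are about
\mathrm{FPR}(P, D) \iff{} & \bigl[\mathrm{Dep}(D, B_1, \ldots, B_n) \rightarrow \mathrm{Reasons}(B_1 \wedge \cdots \wedge B_n,\, P,\, D)\bigr] \\
{}\wedge{} & \Bigl[\mathrm{Dep}(D, N_1, \ldots, N_m) \rightarrow \Bigl(\textstyle\bigwedge_{i=1}^{m} \mathrm{Rat}(N_i)\Bigr) \wedge \mathrm{Resp}(P, N_1, \ldots, N_m)\Bigr]
\end{aligned}
\]

On this reading, clause (i) is the first conjunct and clause (ii) the second; note that a desire resting on no beliefs at all would satisfy (Rd) vacuously, which may or may not be what Parfit intends.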

 

Parfit argues for (Rd)
mainly by appealing to intuitions about various cases that appear to be
counterexamples to other analyses of practical rationality. For example, he argues for (i), in part, by
appealing to intuitions about a person who desires to smoke because she desires
to protect her health and she believes that smoking is the best way of achieving
this aim (pp. 23-24).  Parfit intuits that, although her belief is
irrational, her desire is not, since she wants what, if the belief were true,
she would have strong reasons to want. Parfit argues for (ii) mainly by appealing to intuitions about cases involving three people, Scarlet, Crimson, and Pink (pp. 27-31), that appear to be counterexamples to Scanlon’s analysis that "people are most clearly irrational when they fail to respond to what they themselves acknowledge to be reasons" (p. 29). Though we can’t describe these cases in detail here, the upshot is that Parfit intuits that (ii) is true, and Scanlon’s analysis is false, because: (a) Pink is less than fully practically rational (though not highly irrational), because he fails to prefer what his normative beliefs inform him to prefer; (b) Pink is nonetheless more rational than Crimson and Scarlet, who do prefer what their normative beliefs inform them to prefer (indeed, Parfit thinks, Crimson is extraordinarily irrational, and Scarlet is probably insane); and (c) Crimson and Scarlet are highly irrational because their normative beliefs are highly irrational. Parfit rejects at least five other analyses of rational or irrational desires using the same strategy (pp. 31-33).

 

Having made some headway into
answering (4), Parfit makes some headway into answering (3), "What do we
have most reason to do (or desire)?" Again, I’m not sure I completely understand Parfit’s positive answer,
but the material in this section is some of the most interesting I’ve read in a
long time. I think that Parfit’s answer
is both ambitious and cautious. It is
ambitious in the sense that it is a kind of "dualist" answer; it is
cautious in the sense that he says only that the correct answer is "some
kind" of dualist answer. 

 

He arrives at his answer to
(3) by providing a penetrating discussion and incisive revision of Sidgwick’s
dualist thesis:

 

Sidgwick’s Dualism of Practical Reason: We always
have most reason to do whatever would be impartially best, unless some other
act would be best for ourselves. In such
cases, we would have sufficient reasons to act in either way. If we know the facts, either act would be
rational.

 

Parfit thinks that Sidgwick’s
dualist thesis must be revised in light of some cases (p. 36) to which our
intuitive reactions are that: (i) the wrongness of some acts gives us decisive
reasons not to do them, even though such acts might be best for ourselves; (ii)
the benefits of some acts for those with whom we have close ties might provide
stronger reasons to do those acts than other acts that would benefit us; and (iii)
we can have impartial reasons to care about the well-being of others. The revised dualist answer that Parfit appears
to advocate is "some view of this kind" (p. ):

 

Wide value-based views: When one
possible act would be impartially best, but some other act would be best either
for ourselves or for those to whom we have close ties, we often have sufficient
reasons to act in either way.

 

Parfit concludes his
discussion of question (3) by rejecting a companion thesis that many wide value-based
views also hold, namely: when we are choosing between morally permissible acts,
our reasons to benefit ourselves are always stronger than our reasons to give some equal benefit to some stranger, though the difference in strength is imprecise. Again, in the most recent
version of the manuscript, Parfit consults his intuitions about some Shipwreck
cases (latest version, pp. 62-63) to conclude that this view is "too
simple and too egoistic" (latest version, p. 62). What seems more plausible to Parfit is a companion
view of the following kind:

 

Pure Dualism: when we are choosing between two morally
permissible acts, of which one would be better for ourselves and the other
would be better for one or more strangers, we could rationally either give
greater weight to our own well-being, or give equal weight to everyone’s
well-being. (This view appears in the newest version of the manuscript, and is a more precise version of what he called "Rational Dualism" in the older version.)

 

Parfit thinks this view is
correct, though he sees no need to decisively defend it at this point in the book.

 

Let’s take stock. Recall that, according to Parfit, (1) is perhaps
the most practically significant question we can answer in ethics.

 

(1) Can we ever have most reason to do what is
morally wrong?

 

However, in order to answer
(1), we must first answer questions (2) and (3) for any given decision.

 

(2) What ought we morally to do?

(3) What do we have most reason to do (or desire)?

 

Having made some headway into
answering (3), Parfit will spend the rest of his book, beginning with the next
chapter, "Possible Consent," trying to make more significant headway
into answering question (2).

 

Unlike Doug’s précis last
week, I don’t have a long list of questions to get our discussion going.  I am hoping that the remarks here will be
sufficient for that purpose. I am
curious, though, about three things in particular. First, does anyone have a clearer understanding of Parfit’s answer to Question (4), "What makes our actions (and desires) rational?" Second, since Parfit places such hefty emphasis on his intuitions about various thought experiments, do your intuitive reactions to these cases match Parfit’s? I must confess that my intuitions about these cases matched Parfit’s almost perfectly.

25 Replies to “Chapter 2: Rationality and Morality”

  1. Dan, thanks for the helpful summary. Here’s one point on which my intuition faltered a bit. I haven’t yet looked at the new version, but on p. 41 of the old, when glossing Rational Dualism, Parfit says that “on all plausible versions of [Rational Dualism], we could not rationally give ourselves some minor benefit rather than saving many strangers from death or agony.” Now, of course, I agree that we can’t slightly benefit ourselves at the expense of having many strangers greatly suffer. But that suggests to me that it’s not the case that both of these courses of action are morally permissible. So my question is this: what grounds the distinction that one course of action is rational and the other is not? Since Rational Dualism (or Pure Dualism) stipulates that both are morally permissible, I’m unsure how to make sense of that distinction.
    Or, put differently, I agree that we have reasons to sacrifice some small benefit to end the suffering of many strangers, but I normally think of those as moral reasons. So if they’re not moral reasons, what are they? (Note that this doesn’t seem like a distinction within the permissible, between a supererogatory act and a merely permissible act. Or at least it doesn’t seem that way to me, but then again I’m tempted to say that securing a small benefit for oneself at significant expense to others is wrong, whereas Parfit has stipulated that this is not so.) Maybe I missed something here by not having read the new version.

  2. This is a terrific summary, Dan. You helped to clear some things up for me, and you certainly helped me to see how everything is supposed to fit together. Thanks.
    I have a question about his dualist view. What exactly is it? It’s some kind of wide value-based view, I take it. Are we supposed to add the claim that Pure Dualism makes to the one that all wide value-based views accept to arrive at some specific wide value-based view? If so, what exactly does that view hold?
    It doesn’t hold that we always have sufficient reason to do what’s partially best. It doesn’t hold that we always have sufficient reason to do what’s impartially best. The reason for these two is that, as Parfit formulates things, wide value-based views hold only that we often have sufficient reason to act either way. So what does it hold?
    On another matter: to the three important cases that you mention that lead Parfit to revise Sidgwick’s Dualism of Practical Reason, I would add that sometimes we have decisive reason not to do what’s morally wrong even when doing so would be impartially best. I believe this is one of his main reasons for rejecting Sidgwick’s view.

  3. Josh,
    You write,

    I’m tempted to say that securing a small benefit for oneself at significant expense to others is wrong, whereas Parfit has stipulated that this is not so.

    You seem to think that Pure Dualism stipulates this, but I don’t see how. Pure Dualism doesn’t say anything about what is and isn’t morally permissible, so why can’t he hold that securing a small benefit for oneself at significant expense to others is wrong? Indeed, I know that one of his reasons for using the hedge word “often” in his formulation of wide value-based views is that he doesn’t think that it’s plausible to hold, as Sidgwick does, that it is always rational to do what’s either partially or impartially best. In fact, he thinks that it is not rational to do what is either partially or impartially best when doing either of these things involves doing something that is morally wrong.

  4. Here’s another thought. My intuition is that it is always rational to do what is partially best. So it’s even rational to act so as to secure an extremely small benefit for oneself (or for one’s loved one) at great cost to others. Moreover, it’s even rational to commit murder for one’s own benefit if one has Gyges’ ring and is, therefore, immune to any adverse repercussions. Also, it seems rational to do what I want even if that is something that is not what’s best self-interestedly, partially, or impartially — like watch reality TV. It seems rational for me to watch a trashy reality TV show (if that’s what I want to do) even if I know that there is an alternative (say, watching an educational show on PBS and discussing it with my family) which is better in all three respects. I think that our ordinary notion of what’s rational is extremely permissive. Nevertheless, if we substituted some technical term like ‘reasonable’, meaning what one has sufficient reason to do, for Parfit’s ‘rational’, then I would have all the intuitions that Parfit has.

  5. Hi Doug,
    I agree that Pure Dualism does not stipulate it. Rather, it seems as though Parfit stipulates it, at least in the early version–it’s just after he introduces the Rational Dualist thesis. Actually, after writing that, I just took a quick look at the new version, and that passage appears to be gone now.

  6. Doug,
    You wrote, “It [Parfit’s dualism] doesn’t hold that we always have sufficient reason to do what’s partially best. It doesn’t hold that we always have sufficient reason to do what’s impartially best. The reason for these two is that, as Parfit formulates things, wide value-based views hold only that we often have sufficient reason to act either way. So what does it hold?”
    Why can’t that be all there is to it? Is the worry that there’s an objectionable amount of indeterminacy in the view?

  7. Josh,
    If I often, but not always, have sufficient reason to do what is either partially or impartially best, then I still want to know when I do and when I don’t. It’s important to know this, because I think that if it turns out that we often have sufficient reason to do what’s morally wrong, then this would be enough to undermine morality.
    Parfit is right that if we often had decisive reason to act wrongly, this would undermine morality. But it seems to me that if it turns out that we often have sufficient reason to act wrongly, this would undermine morality as well. Suppose that I always have sufficient reason to do what is self-interestedly best even when doing so would be morally wrong. Let’s suppose that I have sufficient reason to, say, do A even when (1) B is a morally permissible alternative, (2) A is morally wrong, and (3) A and B are tied for first-place in terms of self-interest. It seems, then, that I could rationally ignore morality entirely and focus solely on my self-interest. If I could rationally ignore morality, then I would argue that morality doesn’t matter.

  8. Doug,
    I thought that Dualism was only a thesis that covered morally permissible acts, such that it just focuses on the relation between the partial and the impartial within the set of morally permissible acts. So, given this narrow scope, I guess I’m confused on how it could undermine morality (I think I’m confused partly because I don’t understand how you got from the partial/impartial distinction to the sufficient-reason/morality distinction in your comment.)

  9. “If I could rationally ignore morality, then I would argue that morality doesn’t matter”
    I can’t see it. Suppose I am the fortunate sort of person (alas, I’m not) who identifies his interests with the interests of all impartially considered. I have arrived at a Humean ideal. Hume asks “what theory of morals can serve any useful purpose unless it can show that all of the duties it recommends are also the true interests of each individual” (Enquiry, sect. ix, part ii). I can then ignore morality in terms of motivation, and rely on self-interested motivation. But it is certainly not true that morality doesn’t matter. How does that follow?
    Similarly, suppose I can show that moral choices are at least as rational as any other sorts of choices you might make. That would be one hell of a conclusion. It would show that there is no life–no sequence of choices through one’s life–that is more rational than a sequence of moral choices. It does not remotely follow from this that morality doesn’t matter. On the extreme contrary, it shows that moral choices matter as much as any other choices you could make.

  10. Josh,
    When you refer to “Dualism” what are you referring to?
    In any case, I was talking about Parfit’s view, which, as I understand it, is some version of the wide value-based view. On all wide value-based views, we often have sufficient reason to do what would be partially best. And I was wondering whether that meant that we often have sufficient reason to do what is morally wrong? But then I thought you suggested that it was enough just to know that I often have sufficient reason to do what is either partially or impartially best, and that I didn’t need to know when I did or didn’t, as, for instance, whether I did only when it was morally permissible to do what is partially or impartially best. Perhaps, Pure Dualism does restrict the class of rational acts to the class of morally permissible acts, but that’s not clear to me. That’s why I was asking what exactly is Parfit’s view. And if Pure Dualism does restrict the class of rational acts to the class of morally permissible acts, then why does Parfit think it’s still an open question whether morality conflicts with what it is rational to do? Indeed, he makes a big point of saying that we need to figure out what’s right and wrong before we can answer that question.

  11. Mike,
    You make some good points. Perhaps it is enough that morality matters in the way you suggest. But I would hope that morality matters more, such that it is not only rational to choose to live morally but also irrational to choose to live immorally.

  12. Hi everyone. Thanks for the comments. I have time to respond to just two now. I’ll respond to more comments tomorrow morning.
    Josh writes:

    But that suggests to me that it’s not the case that both of these courses of action are morally permissible. So my question is this: what grounds the distinction that one course of action is rational and the other is not? Since Rational Dualism (or Pure Dualism) stipulates that both are morally permissible, I’m unsure how to make sense of that distinction.

    I agree with you and Doug that nothing in Pure (Rational) Dualism stipulates that both are morally permissible. But I don’t see where, even in the old version of the Chapter, Parfit stipulates that they are. In fact, an insight similar to yours (and Parfit’s)–that not both are morally permissible–allows Pure Dualism to handle both Shipwreck thought experiments in the latest version of the Chapter.
    Josh also wonders, and Doug wonders for the sake of argument, whether Pure Dualism “restricts the class of rational acts to the class of morally permissible acts.” I don’t see that it does. According to Pure Dualism, “When [read: ‘if’] we are choosing between two morally permissible acts …, (then) we could rationally either ….” I’m not seeing how this formulation explicitly or implicitly restricts the class of rational acts to morally permissible acts. I think the following is a structurally similar analogy: ‘if we are choosing between two Valencia oranges, then either is six inches in diameter’. This formulation does not restrict the class of oranges that are six inches in diameter to Valencias.

  13. Sounds like I need to clarify a couple of things.
    (1) The Dualism I am referring to is Parfit’s Pure Dualism. On Dan’s quote from the new version, this states: “when we are choosing between two morally permissible acts, of which one would be better for ourselves and the other would be better for one or more strangers, we could rationally either give greater weight to our own well-being, or give equal weight to everyone’s well-being.”
    As I see it, this is only a claim about how to rationally decide between two morally permissible acts. The relation between Dualism and Wide Value-Based views is unclear to me, but whatever that relation is, I thought we were here talking about Pure Dualism. In your latest comment (to me, not Mike A.), Doug, you seem to have sort of switched to talking about WVB views, so I’m starting to lose track of the dialectic a bit.
    (1a) Doug also says, and Dan follows him in thinking, that I think that “Perhaps, Pure Dualism does restrict the class of rational acts to the class of morally permissible acts…” I don’t think Pure Dualism says anything about the scope of the class of rational acts. It is just talking about rationality within the class of morally permissible acts. So I think we’re talking past each other a bit here. Doug also said, “I thought you suggested that it was enough just to know that I often have sufficient reason to do what is either partially or impartially best, and that I didn’t need to know when I did or didn’t, as, for instance, whether I did only when it was morally permissible to do what is partially or impartially best.” I didn’t mean to suggest that. I just meant to ask why you think Parfit’s view is a problem. I’m guessing you do not think it’s a problem of indeterminacy, but I’m still not clear on why his Dualism is a problem, since that’s only a thesis about rationality within the set of morally permissible acts, and so I don’t understand how it would undermine morality.
    (2) Dan writes, “But I don’t see where, even in the old version of the Chapter, Parfit stipulates that they are [both morally permissible].” On p. 41 of the old version, right after stating Rational Dualism, Parfit glosses it. He holds that Rational Dualists can differ about how much priority to give to oneself versus others, but also that “on all plausible versions of [Rational Dualism], we could not rationally give ourselves some minor benefit rather than saving many strangers from death or agony.”
    Now if Dualism is a thesis about morally permissible options only, then I take it that this gloss only makes sense as a gloss of Rational Dualism if both of those options are permissible. That’s why I read Parfit as stipulating their permissibility at this point.
    (3) I’ve been quoted as saying, “Since Rational Dualism (or Pure Dualism) stipulates that both are morally permissible…” Again, this was a misstatement: I should have said that Parfit, not Rational Dualism, and only in the older draft, seems to stipulate this. See (2).

  14. Josh,
    You write,

    I just meant to ask why you think Parfit’s view is a problem. I’m guessing you do not think it’s a problem of indeterminacy, but I’m still not clear on why his Dualism is a problem.

    I didn’t suggest that his view is problematic. I asked what his view was exactly. Can you tell me what, according to Parfit’s view (which apparently is some specific version of the wide value-based view that involves Pure Dualism in a supplemental capacity), we have sufficient reason to do, decisive reason to do, or decisive reason not to do? I’m not trying to set up some trap for some future objection.
    Parfit can tell me that I often have sufficient reason to do what’s either partially or impartially best, but this isn’t helpful. I want to be able to figure out what, in specific cases, I have sufficient or decisive reason to do. Or does his view not give any guidance of this sort? If it doesn’t, how will we be able to figure out in the end (after determining what is right and wrong) whether we ever have decisive reason to act wrongly?

  15. Doug,
    Right, you never said it was a problem. Let me re-phrase: I’m not sure why Pure Dualism would undermine morality, since it’s only a thesis about rational choices within the set of morally permissible acts, leaving it open whether and how morally permissible acts might be rationally balanced against morally wrong acts.
    I’m not sure how to answer your questions about what Parfit’s view is about what we have sufficient reason to do, all things considered.

  16. Doug writes:

    I have a question about his dualist view. What exactly is it?
    It doesn’t hold that we always have sufficient reason to do what’s partially best. It doesn’t hold that we always have sufficient reason to do what’s impartially best. The reason for these two is that, as Parfit formulates things, wide value-based views hold only that we often have sufficient reason to act either way. So what does it hold?

    Are you looking for an answer of this form: “…we often have sufficient reason to act either way, except when….”? If so, I don’t know how Parfit would respond. My guess is that Parfit would claim that he cannot yet fill in the ‘except when…’ clause, but that he doesn’t need to. The wide value-based analysis, though incomplete, is sufficient for his ultimate purpose of showing, at the least, that we do not often have sufficient reason to act wrongly, and so the importance of morality is not undermined. If that would be his response, I guess we’ll have to wait to see whether he is right about that.
    Doug also writes:

    On another matter: to the three important cases that you mention that lead Parfit to revise Sidgwick’s Dualism of Practical Reason, I would add that sometimes we have decisive reason not to do what’s morally wrong even when doing so would be impartially best. I believe this is one of his main reasons for rejecting Sidgwick’s view.

    You’re absolutely right, Doug. Thanks for adding this.

  17. Doug writes:

    I think that our ordinary notion of what’s rational is extremely permissive. Nevertheless, if we substituted some technical term like ‘reasonable’, meaning what one has sufficient reason to do, for Parfit’s ‘rational’, then I would have all the intuitions that Parfit has.

    I think that is how Parfit is using ‘rational’. We should also keep in mind that Parfit thinks that rationality is gradable. So, let’s take your example of watching reality TV rather than watching some PBS show and discussing it with your family, and let’s use a scale of 0-10, where ‘0’ represents ‘practically insane’ and ‘10’ represents ‘fully practically rational’. If your intuition is that watching reality TV, rather than watching the PBS show and discussing it with your family, is rational to degree 2, then Parfit might agree with you. That is, the difference between you and him might only be in terminology; you say watching reality TV is rational (to degree 2), he might say it’s irrational (because it’s toward the lower end of the rationality scale).

  18. Doug:

    Parfit is right that if we often had decisive reason to act wrongly, this would undermine morality. But it seems to me that if it turns out that we often have sufficient reason to act wrongly, this would undermine morality as well. Suppose that I always have sufficient reason to do what is self-interestedly best even when doing so would be morally wrong. Let’s suppose that I have sufficient reason to, say, do A even when (1) B is a morally permissible alternative, (2) A is morally wrong, and (3) A and B are tied for first-place in terms of self-interest. It seems, then, that I could rationally ignore morality entirely and focus solely on my self-interest. If I could rationally ignore morality, then I would argue that morality doesn’t matter.

    In my post yesterday, I said I had three questions, but I see now that I actually only asked two questions. You have put your finger on what was supposed to be my third question. Parfit often slides back and forth in the Chapter between talking about what we have most reason to do and what we have sufficient reason to do. He says that morality would be undermined if we often had most reason to do what is morally wrong. But wouldn’t morality be undermined if we often had sufficient reason to do what is morally wrong? So, I’m with you on this worry, Doug. I also wonder whether there are other places in the chapter where sliding back and forth between what we have most/sufficient reason to do creates difficulties.

  19. Mike A writes:

    But it is certainly not true that morality doesn’t matter. How does that follow? . . . It does not remotely follow from this that morality doesn’t matter. On the extreme contrary, it shows that moral choices matter as much as any other choices you could make.

    Good points, Mike. I agree with Doug that morality “would not matter” if we often had sufficient reason to do what is wrong, though perhaps not for the reasons Doug (or Parfit) gives. As I see it, the reason most people think that morality matters greatly, and, for some, why morality matters supremely, is morality’s demandingness. We feel compelled to do what is morally right, and we feel compelled to avoid doing what is morally wrong. To me, the reason we feel compelled to avoid doing what is wrong is that we do not believe that we have sufficient reason to do what is wrong, i.e., that we believe that we always have decisive reasons to do what is right. There is no question in my mind that morality’s demandingness on me would dissipate greatly if it turned out that we often had sufficient reason to do what is wrong. I would simply no longer feel that I always had to do what is right.
    Ok, enough for me for today.

  20. Dan,
    You write,

    The wide value-based analysis, though incomplete, is sufficient for his ultimate purpose of showing, at the least, that we do not often have sufficient reason to act wrongly.

    I don’t see this at all. According to wide value-based views, we often have sufficient reason to do what is either partially or impartially best. It could, then, turn out that we often have sufficient reason to act wrongly, as would be the case if acting wrongly turns out to often be what is partially best for us.
    However, for Parfit and Mike A., the crucial issue is not whether we could often have sufficient reason to act wrongly, but rather whether we could often have decisive reason to act wrongly (see p. 69 of the new manuscript). But since as far as I can tell, Parfit doesn’t tell us what we have decisive reason to do, but only what we often have sufficient reason to do, I don’t see how we’re going to be able to answer this question even after we read the next eight chapters and figure out what it is wrong to do.

  21. Dan,
    Note that in your original post the primary question (although not the conceptually primary question), your question (1), is “Can we ever have most reason to do what is morally wrong?” It’s not, “Do we often have sufficient reason to do what is morally wrong?”
    And I’m starting to come around to the idea that Mike A. and Parfit are right, that it is enough to show that there is always sufficient reason to refrain from acting wrongly and that, perhaps, this is the best we can hope for.

  22. Doug, we’re apparently failing to communicate. Just to try to clear up some confusion before we go any further:

    (Dan) writes,

    The wide value-based analysis, though incomplete, is sufficient for his ultimate purpose of showing, at the least, that we do not often have sufficient reason to act wrongly.

    I don’t see this at all. According to wide value-based views, we often have sufficient reason to do what is either partially or impartially best. It could, then, turn out that we often have sufficient reason to act wrongly, as would be the case if acting wrongly turns out to often be what is partially best for us.

    I didn’t write what you say I wrote, or rather, I wrote it only as the one response I could think of on behalf of Parfit. And I’ve agreed with you that it could turn out that we often have sufficient reason to act wrongly. I’ve also agreed with you that, if so, morality would thereby be undermined.

    However, for Parfit and Mike A., the crucial issue is not whether we could often have sufficient reason to act wrongly, but rather whether we could often have decisive reason to act wrongly (see p. 69 of the new manuscript) . . . .
    Note that in your original post the primary question (although not the conceptually primary question), your question (1), is “Can we ever have most reason to do what is morally wrong?” It’s not, “Do we often have sufficient reason to do what is morally wrong?”

    I understand that this is the crucial issue for Parfit and Mike A. That is why I tried to explain to Mike A. why I think morality would be undermined if we often had sufficient reason to do what is wrong.

    And I’m starting to come around to the idea that Mike A. and Parfit are right, that it is enough to show that there is always sufficient reason to refrain from acting wrongly and that, perhaps, this is the best we can hope for.

    Perhaps, then, my response to Mike A. could also serve as my reason for hoping that you don’t actually come around to the idea. If the best we can hope for is that there is always sufficient reason to refrain from acting wrongly, and not that we do not often have sufficient reason for acting wrongly, then morality’s demandingness would lose much of its force for me–which is pretty disappointing.

  23. Dan,
    Sorry, thanks for clearing that up. I realized that you weren’t asserting it yourself, but I did think that you were offering it as a plausible response. But the response seems to be a non-starter.
    And I take your point that the demandingness of morality would lose much of its force. Morality could never then compel us to, say, stop eating meat or give most of our money to charity. Even if I were convinced that I was morally required to do these things, I would, in this case, still have to admit that I have equally sufficient reason to continue acting in my immoral ways of eating yummy burgers and spending lots of money on frivolous electronic gizmos.
    Mike A.:
    Dan is pulling back to the Dark (Light?) Side. What say you in response?

  24. This comment is just for fun. Above, I wrote this:

    If the best we can hope for is that there is always sufficient reason to refrain from acting wrongly, and not that we do not often have sufficient reason for acting wrongly, then morality’s demandingness would lose much of its force for me–which is pretty disappointing.

    Shortly after writing this, I remembered this humorous comment from the preface to Bernie Gert’s Common Morality, a comment which comes across as even more humorous if you can imagine Bernie saying it with his rather, uh, unique delivery:

    My justification is similarly modest…. I do not try to show that it is irrational to act immorally; I show only that it is never irrational to act morally. I am trying to do far less than what philosophers from Plato on have failed to do. Thus, even if I succeed completely in what I am trying to do, people may be disappointed. It is also disappointing that there is no perpetual-motion machine.
