Stephen Darwall has advanced the rational care theory of welfare, a metaethical thesis about the meaning of welfare judgments.  Somewhat informally, here’s the view:

RCTW: to say that some state of affairs x would be good for someone is to say that anyone who cares for that person should desire that x occur for his sake; to say that some state of affairs x would be bad for someone is to say that anyone who cares for that person should desire that x not occur for his sake.

On this view, it’s not that you should desire what’s good for those you care about because it’s good for them; rather, it’s good for them in virtue of the fact that you should desire it.  The ‘should’ here is the ‘should’ of rationality, in the sense of appropriateness.  To say you should desire the thing is to say that it is appropriate for you to do so, that the thing merits, or calls for, your desiring it.

Here’s an argument against RCTW, based on Parfit’s case My Past or Future Operations.

The case, from Reasons and Persons (p. 165):

I am in some hospital, to have some kind of surgery.  Since this is completely safe, and always successful, I have no fears about the effects.  The surgery may be brief, or it may instead take a long time.  Because I have to co-operate with the surgeon, I cannot have anaesthetics.  I have had this surgery once before, and I can remember how painful it is.  Under a new policy, because the operation is so painful, patients are now afterwards made to forget it.  Some drug removes their memories of the last few hours.

I have just woken up.  I cannot remember going to sleep.  I ask my nurse if it has been decided when my operation is to be, and how long it must take.  She says that she knows the facts about both me and another patient, but that she cannot remember which facts apply to whom.  She can tell me only that the following is true.  I may be the patient who had his operation yesterday.  In that case, my operation was the longest ever performed, lasting ten hours.  I may instead be the patient who is to have a short operation later today.  It is either true that I did suffer for ten hours, or true that I shall suffer for one hour.

I ask the nurse to find out which is true.  While she is away, it is clear to me which I prefer to be true.  If I learn that the first is true, I shall be greatly relieved.

Suppose in fact the first is true: Parfit had his operation yesterday and suffered for ten hours.  Consider the state of affairs of Parfit suffering for ten hours yesterday and call it ‘S’.  Clearly,

(1) S was bad for Parfit.

Suppose (but just for simplicity’s sake) that Parfit’s bias towards the future is extreme: that is, it is such that, for past pains that are absolutely certain to have no effect on the future, Parfit now has no desire that they didn’t occur.  He wouldn’t sacrifice any amount of future pleasure, however small, to relieve any amount of past pain, however big, so long as the past pain will have no bad effects.

We all have the bias towards the future, and my view is that our bias is not irrational.  I don’t claim to be able to explain why the bias towards the future is not irrational.  I think we just have to take it as a brute fact about rationality and time.

I think even Parfit’s extreme bias is rational.  If a pain is genuinely behind us and will have no after effects, it no longer demands any negative attitude towards it on the part of the person who underwent it.

Since Parfit’s failure to desire that S didn’t occur is not irrational, it follows that

(2) It is false that Parfit should now desire that S did not occur.

This is because if someone is rationally permitted to fail to F, then it is false that he is rationally obligated to F.  In other words, it is false that he rationally should F.
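To make the step explicit in deontic notation (a sketch; I assume only the standard duality of rational permission P and rational obligation O):

\[ P\neg\varphi \;\leftrightarrow\; \neg O\varphi \]

Letting \(\varphi\) be ‘Parfit now desires that S did not occur’: the rationality of his extreme bias gives \(P\neg\varphi\), and the duality delivers \(\neg O\varphi\), which is just (2).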

Finally, suppose Parfit cares for himself.  That fact, combined with (1) and (2), gives us a case in which some state of affairs is bad for someone even though not everyone who cares for that person should desire its non-occurrence.  So RCTW is false.

72 Replies to “An Argument Against the Rational Care Theory of Welfare”

  1. Great!
    I haven’t read Darwall’s view (you don’t say where he published it). I wonder what the force is of “for his sake”. The obvious worry is that “sake” means “advantage”, “interest”, or “good”. (That’s what the American Heritage Dictionary says.)

  2. In another PEA Soup post, Doug discusses the ‘evil demon’ objection to ‘fitting-attitude’ accounts of value. I wonder if that objection might be adapted to cause trouble for Darwall’s view.
    Suppose an evil demon threatens to inflict unbearable pain on anyone who cares about Jamie if they do not desire that Jamie eats a saucer of mud for his own sake. Then it seems we might say truly, ‘anyone who cares about Jamie should desire that he eats a saucer of mud for his own sake, but it would not be good for Jamie to eat a saucer of mud’. But RCTW seems to imply that this sentence is not only false, but self-contradictory.

  3. Jamie,
    Sorry for not saying where the view’s from. It’s from Darwall’s 2002 book Welfare and Rational Care. A sample chapter is available here. Here’s a relevant passage from this chapter:

    … what it is for something to be good for someone just is for it to be something one should desire for him for his sake, that is, insofar as one cares for him. The relevant sense of ‘should’ again, is its most general normative sense. We might equivalently say that what it is for something to be good for someone is for it to be something that is rational (makes sense, is warranted or justified) to desire for him insofar as one cares about him. This is a rational care theory of welfare. It says that being (part of) someone’s welfare is being something that it would be rational to want for him for his sake.

    I think your concern about ‘for his sake’ is reasonable. (A similar concern arises for ‘care’, which one would think also requires the concept of welfare in its definition). I don’t have my copy of Welfare and Rational Care with me, and I can’t remember what Darwall says about ‘sake’. He might be trying to convey that the desire must be an intrinsic desire for the good thing and not a mere instrumental desire (so, e.g., if I desire that you’re feeling happy, but only because it means you’ll spend more money in my store, my desire for this good thing for you is not for your sake). This is just a guess.

  4. That does seem radically mistaken. Take any case in which a person must experience one of two evils. Whatever happens to him is bad, no doubt, but a rational person should desire the lesser of the two possible evils.
    But this seems too easy. Maybe Darwall does not mean to limit reasonable desires to experiences that are causally (or more restrictedly) possible. I can then desire that you experience neither evil, knowing that in some restricted sense you must experience one of them.

  5. Campbell,
    I’m not sure your case causes trouble for RCTW. Darwall might say that you’re using a different sense of ‘should’ — the prudential ‘should’. So it might be true that we who care about Jamie prudentially should desire that he eats a saucer of mud for his own sake. But it is not true that the state of affairs of Jamie’s eating a saucer of mud itself calls for or warrants a desire by those who care about Jamie.
    You might reply that it does warrant such a desire since, if we don’t so desire, we will be tortured. But I think Darwall means to be calling attention to a notion of “intrinsic warranting”: the state of affairs of Jamie’s eating a saucer of mud itself doesn’t warrant our desiring it. What warrants our desiring it is that otherwise we’ll be tortured (and we have reason not to want to be tortured).
    I think this notion of ‘should’ is the same one used by Brentano when he says we should love the good. The good always calls for our loving it, even if there are weird circumstances in which we, all things considered, shouldn’t love the good.
    Perhaps all this came up in Doug’s old post. Were problems for this line of thought raised?
    I must admit that in defending Darwall here I’m going beyond his text. He doesn’t address this objection, but I think it’s important and definitely needs addressing.

  6. Mike,
    I think you raise another good objection. But perhaps a solution like the one you suggest would work. I would add to it the following. Let’s distinguish between desire and preference. Maybe those who care about you should, at all times, desire to degree 100 a good of value 100 for you. And we should desire to degree 99 a good of value 99. And so on. Until we get to that we should desire to degree -1 an evil of value -1, and we should desire to degree -2 an evil of value -2. Etc., etc. (To desire to a negative degree is the same as to be averse to.) Maybe, as you suggest, we should always have these desires, no matter what is available to you at the time.
    Then when, as in your case, your only possibilities are, say, an evil of value -5 and an evil of value -10, strictly speaking I should (positively) desire neither of them. Nevertheless, I should prefer the -5 evil to the -10 evil. And indeed I do, if I desire the former to degree -5 and the latter to degree -10. For to prefer p to q is to desire p to a greater degree.
    This allows us to say, about your case, that we should not desire either evil for you, but we should prefer the lesser of the two evils.
    I think all this would gel with RCTW.
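    In symbols, the bookkeeping looks like this (a sketch; the degree function d and the identification of degrees with values are my devices, not Darwall’s). Writing v(p) for the value for you of a state of affairs p, and d(p) for the degree to which those who care about you should desire p:

    \[ d(p) = v(p), \qquad \text{one should prefer } p \text{ to } q \;\leftrightarrow\; d(p) > d(q) \]

    So with d(e1) = -5 and d(e2) = -10, neither evil merits positive desire, yet e1 merits preference over e2, as required.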

  7. Chris,
    Let me try to put Campbell’s point differently. Any sort of analysis of an evaluative concept like ‘good’ or ‘good for’ in terms of fitting pro-attitudes is going to run into the Wrong-Kind-of-Reasons problem (the WKR problem). That is, certain kinds of reasons (like the reasons that an evil demon can provide by threatening someone) can make it, in some real sense, rational (or fitting or appropriate) to have the attitude in question toward a given object even when it would be incorrect to predicate the given evaluative concept of that object. Thus, where an evil demon has threatened to punish Smith, who cares about Jamie, if Smith doesn’t desire that Jamie eats a saucer of mud for Jamie’s own sake, it can be rational for Smith, who cares about Jamie, to desire that Jamie eats a saucer of mud for Jamie’s own sake even though that wouldn’t be good for Jamie. The response to this kind of problem is to point out, as you’ve done, that that’s not the right kind of reason for wanting something for someone’s own sake. Perhaps, we could say that the reasons have to be object-given as opposed to state-given reasons. That is, the reasons have to be provided by facts about the object of the attitude (i.e., facts about the state of affairs where Jamie eats the saucer of mud), not facts about the state of having that attitude (i.e., facts about Smith’s desiring this state of affairs, such as that Smith’s being in this state will shield him from punishment from the evil demon).
    But now look at your claim (2) “It is false that Parfit should now desire that S did not occur.” It seems that Darwall might say that the reason that Parfit has to refrain from desiring that S did not occur is not of the right kind. Perhaps, the reason it is rational for Parfit to refrain from having such a desire is a pragmatic one: having such a desire could lead to less future pleasure, for he would, if he had such a desire, be willing to sacrifice some amount of future pleasure, however small, to relieve a certain amount of past pain. But this seems to be a state-given reason, not an object-given reason.
    In any case, the point is that Darwall will need to revise his analysis in light of Campbell’s evil demon objection as follows: to say that some state of affairs x would be good for someone is to say that anyone who cares for that person should, on the basis of only the right kind of reasons, desire that x occur for his sake. And once Darwall specifies what the right kinds of reasons are, he might hope that his revised account will not only avoid Campbell’s evil demon objection but also your Parfitian objection. I guess I’m both optimistic about the prospects of solving the WKR problem and inclined to think that its solution will help Darwall avoid your Parfitian objection. Of course, this is all speculation. But if you really want to deal a decisive blow to the RCTW, I think that you need to show that no plausible solution to the WKR problem will enable Darwall to suitably revise the RCTW such that it avoids your objection.

  8. Douglas, you say,
    “it can be rational for Smith, who cares about Jamie, to desire that Jamie eats a saucer of mud for Jamie’s own sake even though that wouldn’t be good for Jamie”
    Is it true that it wouldn’t be good for Jamie to eat the mud? Certainly, it is not intrinsically good. But, as the situation is described, it sure looks instrumentally good. Jamie’s eating the mud for his own sake (even if not a good thing for his own sake) is good as a means to preventing Smith’s suffering. But then eating the mud is good. And this seems so even if it is not intrinsically good and even if it is not on-balance good (supposing the mud-eating is worse than the demon-pain).

  9. Hey Doug,
    It still seems to me that modifying (or clarifying) RCTW so that it avoids the WKR problem does not enable it to avoid my objection. The WKR problem misses the mark because that objection makes use of prudential rationality, whereas Darwall’s theory makes use of this different sort of rationality (perhaps we can call it “intrinsic fittingness rationality”).
    But when I claim

    (2) It is false that Parfit should now desire that S did not occur,

    I’m not doing what the WKR problem did. I’m not making use of the prudential ‘should’. (Nor am I, incidentally, making use of the all-things-considered ‘should’.) I’m making use of a sense of ‘should’ that corresponds to “intrinsic fittingness rationality.”
    So by (2) I mean:

    (2′) It is false that S itself warrants that Parfit now desire that it didn’t occur.

    In still other words:

    (2”) It is false that Parfit’s now desiring the non-occurrence of S is a fitting attitude for Parfit to take towards S itself.

    You say:

    Perhaps, the reason it is rational for Parfit to refrain from having such a desire is a pragmatic one

    It may or may not also be prudentially rational for Parfit not to desire that S didn’t occur. (If it were, then (2)-interpreted-with-the-prudential-‘should’ would be false.) It may or may not even be “all things considered” rational for him not to have this desire. (If it were, then (2)-interpreted-with-the-all-things-considered-‘should’ would be false.) But this is all compatible with what I’m saying, which is that (2)-interpreted-with-the-intrinsic-fittingness-‘should’ is true. S itself no longer demands Parfit’s aversion.

  10. I suppose what I’m after is that eating the mud is instrumentally good for Jamie to do. And it is. But that is consistent with what Doug is after, namely that it would nonetheless not be good for him.

  11. Hey Chris,
    I don’t like Darwall’s view, but I’m not sure I’m convinced by the example. I think that sort of extreme uncaring about the past is inappropriate, whether it’s my own past or someone else’s. If I tell you I got tortured yesterday, and you respond by saying ‘who cares? That was yesterday. It’s over now,’ I think you would be responding inappropriately. Or think of someone who reacts with indifference to tales of past horrors. Something is wrong with that person. But maybe you could get the example to convince even someone like me, as long as I have *some* bias towards the future. Does RCTW have a clause about *how* good or bad something is for someone?

  12. Chris,
    I accept that if you clarify what you mean by (2) as (2”), then Darwall’s potential solution to the WKR problem (whatever that might be) won’t help him avoid your objection. But I think that in spelling out (2) as (2”), your argument against the RCTW is much less compelling, for it’s not at all obvious (not to me anyway) that “[i]t is false that Parfit’s now desiring the non-occurrence of S is a fitting attitude for Parfit to take towards S itself.” Is it not fitting for me to desire now the non-occurrence of S itself? Why, then, wouldn’t it be fitting for Parfit now to desire the non-occurrence of S itself? The object is the same in both cases. That is, S, itself, is the same in both cases. So what about S, itself, makes it such that it is not fitting for Parfit, but fitting for me, now to desire the non-occurrence of S itself? I can see why Parfit has prudential reasons, which I don’t have, for not now desiring the non-occurrence of S itself, but I don’t see why it is not fitting (as opposed to not prudent or not pragmatic) for Parfit to now desire the non-occurrence of S itself. In conclusion, then, although it does seem obvious that there is some sense in which it is false that Parfit should now desire the non-occurrence of S itself, whether (2”) correctly identifies that sense is far from obvious.
    Mike,
    That’s right: what you say is consistent with my claim that eating the saucer of mud would not be good for Jamie.

  13. Campbell et al,
    A version of Campbell’s worry reminds me of Gibbard’s complaint against Railton’s view of a person’s good. First, alter Cambo’s worry so that the evil demon promises to punish Jamie if the people that care for Jamie do not desire for its own sake that Jamie eat a saucer of mud. Thus it would be out of care for Jamie that we would want him to eat a saucer of mud. Yet eating a saucer of mud is not good for Jamie (although our desiring that he do so is good for Jamie).
    On Railton’s view, what is good for me is determined by what an informed counterpart of me (Dave+) would want me (Ordinary Dave) to want. Gibbard complains (not sure if this made it into print, but lots of people express the worry) that I might not want myself to want things for instrumental reasons. Gibbard’s example: Gibbard+ (a scary smart individual) would not want imprisoned Ordinary Gibbard to want to be free as this would only be frustrating–yet being free would be good for Ordinary Gibbard. Michael Smith talks about cases of this sort as well in The Moral Problem, p. 152-3 and elsewhere.
    Railton, in conversation, suggests that we should focus on cases where the advisor has an intrinsic preference that the advisee have an intrinsic preference. Perhaps this will help Darwall as well.

  14. Hey Ben,
    Very good points, but I think they can be answered. I think there are actually two separate objections here. The first has to do with a self-other asymmetry: even if it’s ok for Parfit not to desire that his own past pains didn’t occur, the rest of us shouldn’t be like this — we should want that they didn’t occur. The second has to do with his extreme bias: maybe it’s ok to be somewhat biased towards the future, but Parfit’s complete disregard for his own past pains is too extreme — they merit at least a little aversion on his part.
    About the self-other asymmetry. I intentionally restricted my claim to whether it is ok for Parfit himself not to care about his own past pains. I agree that it may be inappropriate for you and me to have an “other-regarding” bias towards the future. That is, maybe even though it’s ok for Parfit to prefer ten hours of pain yesterday to one hour of pain tomorrow, it’s not ok for us to prefer in this way. We should hope that Parfit is the patient who will have his operation later today.
    I’m not sure what to think about this alleged asymmetry. Parfit actually has another case that convinces me there probably is such an asymmetry. This is a bit of a digression, but I’ll display the case, just ‘cuz this question is really interesting. It’s from p. 181 of R&P and is called Past or Future Suffering of Those We Love:

    I am an exile from some country, where I have left my widowed mother. Though I am deeply concerned about her, I very seldom get news. I have known for some time that she is fatally ill, and cannot live long. I am now told something new. My mother’s illness has become very painful, in a way that drugs cannot relieve. For the next few months, before she dies, she faces a terrible ordeal. That she will soon die I already knew. But I am deeply distressed to learn of the suffering that she must endure.
    A day later I am told that I had been partly misinformed. The facts were right, but not the timing. My mother did have many months of suffering, but she is now dead.

    Then Parfit asks, “Ought I to be greatly relieved?” I am inclined (as Parfit apparently is too) to say No. Or at least: Parfit should not be as greatly relieved as he is when his own pains become past. So I think I believe in this sort of self-other asymmetry. (But I admit I’m not certain. If I loved Parfit and were with him as the nurse was checking the records, I think I might share his preference that his operation was yesterday.)
    But, ANYWAY, the question doesn’t matter for my point. I can remain neutral on whether there is this asymmetry. All I need is that Parfit can rationally have a bias towards the future about his own pleasures and pains. And you didn’t deny that in your post.
    About the extreme bias. I said in the original post that I assumed that only for simplicity’s sake. Here’s what I meant. If you think the extreme bias is irrational but that a weaker bias is not, I could still, as you predict, run a similar argument. We’d just have to include degrees of badness and desire in RCTW (as it should have anyway). RCTW would imply that these degrees should march in lock step (the amount of goodness in the good is proportional to how intense the desire should be, and likewise for badness). But Parfit’s Past or Future Operations would show that it is rational for these degrees NOT to march in lock step. Once a pain is past, it may demand its subject have some preference against it, but not as strong as the preference a future pain demands its subject have against it. It might be irrational for Parfit to be willing to accept a year of torture in the past in order to get a pleasant sip of beer in the future, but it is not irrational for him to prefer as he does in his Past or Future Operations.

  15. Doug,
    Right, you and Ben were on to the same idea (and you can let me know if what I said to Ben satisfies your worry), but you also make a different point. From the fact that you and I should be averse to Parfit’s past pains, you conclude that he should be too. In a nutshell, I think you are denying that the bias towards the future (even concerning one’s own pleasures and pains) is rational after all. The only justification that Parfit can have for preferring the ten-hour operation yesterday to the one-hour operation later today, you suggest, is pragmatic — that having this preference will somehow lead to him being better off.
    Of course, Parfit’s case is designed exactly to make any such pragmatic justification unavailable. He can’t act on this preference. We can just stipulate that whatever Parfit prefers while the nurse checks the records will have no effect on how good his life is.
    Since pragmatic justifications are not available, your view seems to be that Parfit is silly to prefer ten hours of past suffering to one hour of future suffering.
    But isn’t it just obvious that Parfit’s preference is perfectly reasonable? I admit I can’t prove it. Every time I try to explain why the preference is rational, I just beg the question. Like I might say, “But Parfit still has to undergo the pain if his operation is later today, but the pain is over and done with if his operation was yesterday.” But this adds nothing. It just repeats in different words what needed explaining. Why is it preferable for a pain to be over and done with?
    So all I can say is: reflect again. Isn’t it just obvious that this is preferable? Consider how relieved you are when your dentist appointment is over. Or how bummed you’d be if you realized you only dreamed it, and that your dentist appointment is actually tomorrow. Or consider how pleased you are when you awake to find it’s only 3 a.m. rather than your waking time, as you feared it would be. Isn’t it just obvious that it’s reasonable to be biased in this way?

  16. On reflection, I think that I don’t want to deny that it is rational (i.e., fitting) for Parfit to have a bias towards the future. I would admit that Parfit has a stronger reason (or more reason, on balance) to desire that he will not suffer than he has to desire that he has not suffered. By contrast, it seems that I have just as much reason to desire that Parfit has not suffered as I do to desire that he will not suffer. However, I do want to deny that Parfit has *no* reason to desire that he has not suffered. As you say, S is clearly bad, and as such it seems fitting to wish for the non-occurrence of S. So, I want to deny that Parfit’s extreme bias is rational in the sense of being a fitting or appropriate response to his past suffering. It seems to me that Parfit has just as good a reason to desire that he has not suffered as I do. Thus I think that (2″) is false.
    Now you say that you can run a similar argument even if we deny the extreme bias and accept only a bias. You say,

    we’d just have to include degrees of badness and desire in RCTW (as it should have anyway). RCTW would imply that these degrees should march in lock step (the amount of goodness in the good is proportional to how intense the desire should be, and likewise for badness).

    But doesn’t Darwall want to limit the relevant reasons to agent-neutral reasons? He says on p. 13,

    Moreover, to one caring, considerations of welfare present themselves as agent-neutral, rather than agent-relative. From the perspective of sympathetic concern, what benefits the cared for seems not only good for him; it seems a good thing absolutely (agent-neutrally) that he is benefited in this way.

    So, perhaps, when Darwall says, “what it is for something to be good for someone just is for it to be something one should desire for him for his sake, that is, insofar as one cares for him,” he is referring to what one should desire for agent-neutral reasons. And if we reject the extreme bias and accept the self-other asymmetry, then what we might say is that although Parfit and Doug have the same agent-neutral reason to desire that Parfit not experience future or past pain, Parfit has in addition to this an agent-relative reason (a reason that others don’t have, and a reason that he doesn’t have with regard to his past pain) to desire that he not experience future pain. This additional agent-relative reason is what explains the self-other asymmetry.
    Now I haven’t made up my mind about RCTW and I haven’t read Darwall’s book in a long time, so I’m not sure whether this suggestion that Darwall means to be focusing on only agent-neutral reasons is true to the text, but it does seem worth considering whether it is true to the text or not. That is, we could revise RCTW to say: what it is for something to be good for someone just is for it to be something that there are agent-neutral reasons of the right kind to desire for him for his sake, that is, insofar as one cares for him.

  17. “Why is it preferable for a pain to be over and done with?”
    Here’s one way to answer this. If I learn today that the pain C I thought I had experienced yesterday is one that I will experience tomorrow, then C makes my future much worse than it would have been (had it been over). Here’s how. Compare these two life histories h and h’.
    h.  |---C---------------------> future
            t
    h'. |---------------------C---> future
                              t'
    I thought my life history was as depicted in (h), where C occurred just after t. In that case the pain in C was a very small part of a long future. It did not affect the value of my future much, since I had so much left.
    But the fact is that (h’) is my real history. I now know that C will occur right after t’. But now my future is much shorter. The disvalue of C has a larger impact on the value of my future. There are fewer good things to outweigh C. So I could want C in the past in order to lessen the impact of C on the value of my future.

  18. Doug,
    That’s an interesting suggestion (to include in RCTW a restriction to agent-neutral reasons). My argument depends upon the idea that when certain events become past, the attitudes it is rational to have towards them can change (even though their value does not). But if we restrict our discussion to the agent-neutral reasons to have the attitudes, then, arguably at least, our reasons do not change.
    So I agree that if RCTW were modified in this way, it might avoid my argument.
    But let me add that there is another, related idea in Darwall’s book that I think the bias towards the future would still make trouble for. Darwall takes one of the important and orthodoxy-challenging views in his book to be that welfare’s normativity is agent-neutral rather than agent-relative. That is, contrary to conventional wisdom, if some state of affairs would be good for me, the reason I have to want this state of affairs to occur is the same reason you or anyone who cares about me has. I don’t have any special reason to want what would be good for me. My reason is just that it would be good for someone I care about, which is the same reason anyone else who cares about me has.
    (It may be that this thesis is a consequence of RCTW (Darwall’s version of it, that is). I’m not sure, I’d have to think about it.)
    But I think the rationality of the bias towards the future shows that welfare does provide agent-relative reasons. When an event that’s bad for me becomes a past event, there are reasons I had with respect to it that change. But none of your reasons with respect to that event change (or at least not by as much as mine change). Therefore, I had reasons with respect to the event (an event that is bad for me) that you didn’t have. Therefore, welfare’s normativity is at least partly agent-relative.
    Does that seem right?

  19. Mike,
    I’m not sure I’m getting your case. But I’ll focus on this claim:

    I could want C in the past in order to lessen the impact of C on the value of my future.

    If C were in the future, would that lessen the impact of C on the value of your past? If so, then I don’t think someone who rejects the rationality of the bias towards the future would be moved. He would ask, Why should I prefer to lessen the impact of C on the value of my future rather than on the value of my past?

  20. Chris,
    Yes, that does seem right. I was left unconvinced by that part of Darwall’s book. I’m all for the orthodoxy: that welfare’s normativity is at least partially agent-relative. I’m not sure, though, that the revised version of RCTW that I’m suggesting would commit its proponent to denying this. One might think that the correct analysis of welfare should appeal to only agent-neutral reasons, but that a person’s welfare does provide agent-relative reasons for him or her in addition to the agent-neutral reasons it provides everyone with.

  21. Chris,
    Suppose your sister Phyllis is the patient in Parfit’s story, instead of its being Parfit. You are far away from your sister, on the other side of the earth, and you won’t see her for months.
    Are you hoping that she is the patient who already had the surgery, or the one who will have the surgery later today?
    If it were my sister, I would be hoping that she would have the surgery later today. But one of my colleagues has the strong intuition that we should hope the operation is over.

  22. Chris, I have these two choices. I can let the painful event equal 1/(m+n) of my future (say, immediately after I am born). It cannot be any more in the past than that.
    h  |C----m-----|--n----->
    Or I can let the painful event equal 1/n of my future (this is when the painful event happens later).
    h' |-----m----|C---n--->
    It is proportionately more of my future as the event occurs later and later, hence it is a more significant part of my future as the event occurs later and later.
    Therefore, I’d want painful events (if they must occur sometime) to occur earlier rather than later. It is better (measured as a bad proportion of my future) that C occurs as in h than that it occurs as in h’.
    There is no denying that, I don’t think.
    Obviously, if C is going to occur, it has to be a part of my life. The only question is which part of the future it occurs in. I think you should prefer that it occurs in the early future rather than the late future.
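    In symbols (a sketch; b, my label for C’s fixed disvalue, isn’t in the diagrams): C’s share of the remaining future is b/T, where T is the future’s length. In h, T = m + n; in h’, T = n; and since n < m + n,

    \[ \frac{b}{m+n} \;<\; \frac{b}{n}, \]

    so C looms proportionately larger the later it occurs.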

  23. Jamie,
    Good case. Although my intuitions are less clear about it, I think I’m with you. Suppose I first receive news that Phyllis will have her operation tomorrow and that it will last one hour. Suppose a few minutes later I learn that news was incorrect. I learn that, in fact, she had the operation yesterday and it lasted ten hours. I’m pretty sure that, upon hearing the corrected news, I would NOT be relieved. I’d cringe. So, yeah, I’m with you. I lack the bias towards the future in the Phyllis case.
    But what’s weird is that, if I were in the room with Phyllis and she and I were waiting together for the nurse to return with the news, I’m pretty sure I’d adopt Phyllis’s preferences. Or at least I’d be closer to them than I am in your case. I’d be at least somewhat biased towards the future.
    Psychologists should study why we (or at least some of us) are like this. Philosophers should study whether we’re irrational in being like this.

  24. Mike,
    Ok, thanks, I’m getting the case now and also your claim. But I’m still not convinced that a denier of the rationality of the bias towards the future should be moved by your case. You seem to be presupposing what we’re trying to explain. You say:

    It is better (measured as a bad proportion of my future) that C occurs as in h than that it occurs as in h’.

    True. But it’s also true that it is better (measured as a bad proportion of my past) that C occurs as in h’ than that it occurs as in h. So why should I prefer that C occur as in h? Your reason can’t be that it’s just rational to prefer a better future to a better past. For that’s exactly the bias towards the future, the thing we’re trying to explain.
    Don’t get me wrong — I’m with you that it’s rational to prefer a better future to a better past. I just doubt we can explain why this is rational without presupposing the very thing we’re trying to explain.

  25. Doug,
    You say:

    One might think that the correct analysis of welfare should appeal to only agent-neutral reasons, but that a person’s welfare does provide agent-relative reasons for him or her in addition to the agent-neutral reasons it provides everyone with.

    Yeah, I guess that’s a possibility. Interesting. Kinda weird, but something we should be open to.
    For my part, I’ll say that I doubt welfare can be analyzed at all. And I’d be even more surprised if it could be analyzed in terms of reasons. Isn’t it just obvious by the natural light of reason that I should want a good thing because it’s good, and not that it’s good because I should want it? ; )

  26. Chris,
    I’m curious about something. You say:

    On [Darwall’s] view, it’s not that you should desire what’s good for those you care about because it’s good for them; rather, it’s good for them in virtue of the fact that you should desire it.

    Do you think this is implied by RCTW? That is, if one accepts Darwall’s view about the semantics of ‘good’, must one also accept that reasons are prior to the good?

  27. Campbell,
    Yeah, I think the Euthyphronic claim is implied by RCTW. RCTW is an analysis of welfare; according to it, something is good for someone in virtue of the fact that we who care about him have reason to desire it. Reasons are prior.
    Darwall even acknowledges that the view might seem to some to get the priority relation backwards. He writes (in ch. 1):

    … what it is for something to be good for someone is for it to be something that is rational (makes sense, is warranted or justified) to desire for him insofar as one cares about him. … This might seem to get the relation between care and welfare backward. Surely, it will be said, it is welfare that is the independent variable here and rational care the dependent variable. Concern for someone just is a sensitivity to his good. Unless facts about welfare are fixed independently of concern, how will concern have, as it were, anything to be responsive to?

    Are you thinking that RCTW may not in fact have the Euthyphronic implication?

  28. Chris,
    Apart from the Euthy implication (which it might have), it *seems* to have the implication that if no one cares about person P, then any x is good for P. I say “seems” since you do caution that your formulation is a little rough. But if the RCTW is supposed to look as it seems to look, viz.,
    RCTW: Some state of affairs x would be good for P IFF for all persons P’, P’ cares for P only if P’ should desire that x occur for P’s sake.
    then if no one gives a damn about P, it follows that x is good for him. That consequence is unhappy. You get a similar result if the IFF is changed to IF. You get the Euthyphro result, I think, only if everyone who cares about P *should* desire that something intuitively not so good occur for P’s sake.
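    To make the vacuity explicit (a sketch, reading the conditional as material): if nobody cares about P, then for every P’ the antecedent of

    \[ \mathit{Cares}(P', P) \rightarrow \mathit{ShouldDesire}(P', x) \]

    is false, so the universally quantified analysans comes out vacuously true for every x whatsoever, and the IFF then counts every x as good for P.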
    But it is interesting that the principle requires that *everyone* who cares about P should have the relevant desire. Why not instead an ideally positioned person? Raises interesting worries about everyone who cares about P being justified in believing that x is good for him. Seems like that wouldn’t happen frequently.

  29. RCTW is an analysis of welfare; according to it, something is good for someone in virtue of the fact that we who care about him have reason to desire it.

    What do you mean here by ‘RCTW’? Do you mean the statement in your original post which you labelled ‘RCTW’? I don’t see how that statement says anything about what is in virtue of what.

  30. Chris,
    You ask, “Isn’t it just obvious by the natural light of reason that I should want a good thing because it’s good, and not that it’s good because I should want it?” I don’t think that it’s obvious at all. Maybe, as the buckpasser would have it, “goodness is not a property that itself provides practical reasons…but rather is the purely formal (higher-order) property of having some other properties that provide reasons” (Hooker and Stratton-Lake, “Scanlon versus Moore on Goodness”). Likewise, Darwall might think that to say that x is good for P is to say that x has the purely formal property of having other properties that provide everyone who cares about P with (agent-neutral) reasons to desire x for P for P’s sake.

  31. Mike,
    I think you raise an excellent objection when you point out that, as I have put RCTW, it entails that if there is someone no one cares about, then everything — even a saucer of mud, even endless torture — is good for him. An unhappy consequence indeed.
    I should have made it clear that the conditional in RCTW should be subjunctive rather than material. (My gloss strongly suggested the material.) If it’s the subjunctive, then even if no one actually cares for the poor guy, it’s still true that if they were to care for him, it would be true that they should desire certain things for him (and not torture or a saucer of mud).
    I will say, however, that in a (to me) really cryptic passage, Darwall seems to undercut any kind of “conditional” interpretation, whether subjunctive or material; he says our having the reason is not conditional on our caring after all:

    Insofar as we care for someone, we ought to be guided by his good. So far, these reasons are merely hypothetical. The idea, however, is not that the fact that one cares about someone makes considerations of his good reasons for one. The reasons are not conditional on one’s caring. If that were so, they would be canceled once one ceased to care. They are conditional, rather, on a hypothesis one accepts or is committed to in caring, namely, that the cared for is worth caring for. (p. 8)

    This is especially odd because he apparently says later that everyone is worth caring for:

    … a person has reason to care about his own good because he has reason to care about himself. And he has reason to care for himself because he, like any person, has worth — he matters. (p. 83)

    All this would reduce RCTW to something roughly like the following:
    RCTW’: x is good for S iff everyone should desire x for S.
    And then your worries about an “ideally positioned” person really seem to arise. Could I desire x for S even if I never heard of him? Since I surely could not, I think Darwall would and should say this sense of ‘should’ or ‘ought’ does not imply ‘can’. It is still true that we all have a reason to desire x for S even if we never heard of S. The nature of x demands or calls for our desiring it for S. That doesn’t sound totally unworkable.
    Anyway, none of this affects my argument, since my argument would work against RCTW’ just as well. But your comment, Mike, and many of the other objections made above against RCTW, I think raise important issues Darwall needs to address, or address more clearly.

  32. Campbell,
    I concede that my informal statement above labelled ‘RCTW’ is unclear about what is supposed to be in virtue of what. More formally, I’d put the theory as a definition/conceptual analysis:

    p is good for S =df. for any person x, if x cares for S, then x should desire that p be true for S’s sake

    If we put it like this, would you agree p is good for S in virtue of what it says in the analysans?
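    For definiteness, a regimentation (the symbols are mine, and the conditional is to be read subjunctively, per my reply to Mike above; ‘□→’ is the counterfactual):

    \[ p \text{ is good for } S \;=_{df}\; \forall x \, \big( \mathit{Cares}(x, S) \;\Box\!\!\rightarrow\; \mathit{Should}(\mathit{Desire}_x(p) \text{ for } S\text{’s sake}) \big) \]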

  33. Doug,
    Yeah, I was being a little facetious with all that natural light of reason stuff. I admit that I am not totally certain that I should want a good thing because it’s good. Though I do find it more natural to explain reasons in terms of value rather than the other way around, I have to admit the truth of the matter is far from obvious. Thanks for keeping me honest. ; )

  34. Hmm, I think Darwall really does have a serious problem with that conditional, since he doesn’t think typical hypothetical oughts really have the logical form of a conditional; he thinks they can’t be premises in modus ponens (for the reasons that Broome has more recently made a lot out of). So that’s kind of interesting.
    Chris, in your last comment addressed to Campbell it looks to me like you are confusing (or maybe deliberately conflating) semantics with metaphysics. That a certain formula is defined in terms of another does not mean that (an instantiation of) the first is true in virtue of the second. (For example, a good conceptual analysis of ‘gold’ will not mention its atomic number, even though it’s plausible that things are made of gold in virtue of being made of stuff with atomic number 79.)

  35. Jamie,
    About semantics and metaphysics. I agree that for certain predicates (like ‘is gold’), it might be that what the predicate means will not be the same thing as the property it expresses. (Though even this isn’t obvious for terms like ‘is gold’. The lesson some draw is that ‘is gold’ not only expresses the property being made of stuff with atomic number 79 but also means the same as ‘is made of stuff with atomic number 79’. People who say this would then say you can’t do conceptual analysis on ‘is gold’, since you can’t know its meaning a priori.)
    But, anyway, for other predicates — and I was taking the ‘is good for’ predicate to be one of them — I believe they mean the property they express (I use ‘property’ to cover relations). So stating its conceptual analysis or definition would simultaneously reveal those properties in virtue of which the property expressed by the predicate obtains.
    I’m not sure if this addresses your concern. Please let me know.
    However the right way to formulate it is, I’m fairly confident that, according to Darwall’s view, something is good for someone in virtue of the fact that certain people should have certain desires.
    I don’t know what to say about Darwall on conditionals. I’ll only say that I don’t think what he says in Welfare and Rational Care will help clear it up.

  36. Here’s a suggestion for fixing up RCTW so as to avoid your objection.
    I notice that you say ‘S was bad for Parfit’. This seems the right thing to say. It would be odd to say ‘S is bad for Parfit’ on the day after S occurred. So it seems that whether S is bad for Parfit changes over time, and this suggests that a better formulation of RCTW would relativise wellbeing attributions to times, something like this:
    RCTW: A state of affairs x is good for a person P at a time t IFF for any person P’, if P’ were to care for P at t, then it would be that P’ should desire at t that x occur for the sake of P.
    So formulated, RCTW is consistent with your two claims:

    (1) S was bad for Parfit.
    (2) It is false that Parfit should now desire that S did not occur.

  37. Thanks Chris. I’m not sure what to think about the subjunctive version of that conditional. I’m just not sure how to evaluate it. What is the closest world to ours in which that antecedent is true? The antecedent says,
    a. (P’)(P’ cares about P)
    But surely I don’t go to the closest world in which everyone cares about P. I think I’m supposed to be concerned with everyone who happens to care about P. So presumably I should go to the closest world in which someone does care about P, and if in that world, everyone who does care about P should desire that x occur for P’s sake, then x is good for P. Something like this.
    To manage the problem of everyone being justified in believing that x is good for P, you say,
    “It is still true that we all have a reason to desire x for S even if we never heard of S”.
    But Darwall’s suggestion is that P’ should desire x for P only if,
    “. . . [it] is rational (makes sense, is warranted or justified) to desire [it] for him. . .”
    But then it is not enough that I have *a reason to desire x for P*. I can have reason to desire x for P even if I am not justified in desiring x for P. I might, of course, have better reasons not to desire x for P. And this is where the problem is, I think.
    Consider each person P’ who cares for P in that closest world to ours. Each one of those persons must have a reason to desire x for P, certainly, but also each must have better reasons to desire x for P than not to desire x for P. But this means that no matter what the epistemic situation of those who care for P, it must be true that the reasons they have to desire x for P are better than the reasons they have not to do so. But obviously, there can be some well-meaning but benighted friend of P who has lots of reasons (all of them, as it happens, mistaken) for believing that x is not good for P. This particular person is not justified in desiring x for P. This sort of problem just multiplies as we imagine different epistemic situations our friends might be in. So Darwall might need some device for filtering out epistemic situations that include information that is bad or irrelevant to determining whether x is good for P.

  38. Campbell,
    That’s a really good suggestion. That really does seem to avoid my objection. However, I wonder if it brings with it a new problem.
    Since RCTW is a metaethical theory about the meaning of welfare judgments, presumably we want it to be as neutral as possible as regards substantive normative theses about welfare. Your formulation of RCTW suggests that the “fundamental locution” is ‘x is good for S at t’. This would imply that for any state of affairs that is good for a person, there is some particular time at which it is good for that person. But some philosophers have endorsed certain normative theses about welfare that might be hard to reconcile with this.
    For example, Velleman (and others before him) claim that the “narrative structure” of a life can impact how good it is for the person. So the state of affairs of your life having such-and-such structure could be good for you — it could make your life better than it would have been. But it is not obvious that there is any particular time at which the state of affairs of your life having this structure is good for you.
    Another example is Nagel. One theme of his paper “Death” is that “while [a] subject [of benefit and harm] can be exactly located in a sequence of places and times, the same is not necessarily true of the goods and ills that befall him” (p. 77). He gives some examples.
    I’ve never been that moved to believe in such goods. But I wonder whether our metaethical theory shouldn’t rule them out right off the bat.

  39. Mike,
    You say:

    I’m not sure what to think about the subjunctive version of that conditional. I’m just not sure how to evaluate it. What is the closest world to ours in which that antecedent is true? The antecedent says,
    a. (P’)(P’ cares about P)

    Sorry, I was probably unclear. I mean for the quantifier to be outside the scope of the counterfactual, not part of its antecedent. So the righthand side should say: “For any person, if he were to care about P, then … “. Not: “If everyone were to care about P, then … .”
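    In symbols, the contrast (a sketch, with ‘□→’ for the counterfactual):

    What I mean: \( \forall P' \, \big( \mathit{Cares}(P', P) \;\Box\!\!\rightarrow\; \mathit{ShouldDesire}(P', x) \big) \)

    Not: \( \big( \forall P' \, \mathit{Cares}(P', P) \big) \;\Box\!\!\rightarrow\; \forall P' \, \mathit{ShouldDesire}(P', x) \)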
    But about your bigger complaint. This kinda gets back to what Doug and I were going on about. Darwall doesn’t seem to be sensitive to all of these important issues, so what I’ve tried to do is come up with a way for a theory like his to avoid them. I don’t know if it’s the way he meant.
    Anyway, on the theory I mean, it IS enough that you have a reason to desire x for P. You don’t need to have most reason to desire x for P. I know Darwall says “justified.” And that implies “most reason.” But I don’t think his theory has a chance if that’s what he meant. (The evil-demon cases from above show why.)
    By the way, though I wasn’t really clear about it, throughout all of this, I’ve been wanting to set aside epistemic reasons. I’ve meant to be using ‘has a reason’ in such a way that, if you have great evidence that x would be good for you, but in fact x would be not at all good for you (or anyone), you do not have reason to want or to bring about x. You believe you have such a reason. But, as I mean to be using the term, you in fact don’t.
    Does that, plus all the other restrictions discussed earlier, assuage your worries?

  40. Chris, I don’t mean to put you in a position to assuage worries. My questions are simply from interest, though it probably seems like I enjoy giving you a difficult time. I don’t mean to complain. What you have said has been very helpful.
    Naturally, I can’t resist another question. I’ll be quick about it.
    On the wide-scope location of the quantifier,
    “For any person P’, if P’ were to care about P, then …”
    You are quantifying over people in this world, of course, and this makes things very interesting. For instance, in the closest world in which some of my co-inhabitants care about me, I’m dead. Does that mean it is good for them to throw dirt on me here? I’m kidding (sort of) but you see the idea.
    On the position that it is enough to have *a* reason (sorry I missed the exchange on this), it just seems to me mistaken. I’m probably misunderstanding you. Look, you might have *a* reason to desire that I drink antifreeze. Antifreeze has a sweet taste (or, some of it does, and that’s why so many dogs unfortunately drink it). But you certainly have better reason to desire that I don’t. I’m sure you don’t want to say that drinking the antifreeze is good for me simply because its sweet taste is *a* reason to drink it.

  41. Chris,
    You write:

    It is not obvious that there is any particular time at which the state of affairs of your life having this structure is good for you.

    The time-indexed version of RCTW can say that this state of affairs is good for you at every time, and that seems consistent with saying that there’s no particular time at which it’s good for you. (‘Do you like any particular kind of beer?’ ‘No, I like every kind of beer’.)

  42. Hey Mike and Campbell,
    I’m thinking about both your comments and I definitely want to respond to them, but today happens to be really busy. So please bear with me. I’ll get back to you eventually.
    Thanks, by the way, for all these comments. This is all extremely helpful for me.
    (Boy, how do bloggers with their own personal blogs keep up! This is like a full time job.)

  43. I think that there is some confusion as to the content of Darwall’s rational care analysis of welfare. In his original post on 1-11, Chris Heathwood put the analysis (though granted he acknowledged it was intended as a somewhat informal presentation) as:
    (H) X is good for C iff for all S, if S cares for C, S has reason to desire X for C’s sake
    Now, this formulation, which ties the reasons for desire mentioned in the analysans very closely to the actual attitudes of the caring agents who are the values of S, is simply not Darwall’s view (and this is true whether or not we read the conditional in the analysans as a material conditional or a subjunctive conditional the antecedent of which also speaks of facts about actual attitudes, e.g. ‘if S were to care for C, then S would have reason to desire X for C’s sake’). Darwall’s preferred way of putting his analysis in Welfare and Rational Care [WRC] is:
    (D) X is good for C iff insofar as one cares for C, one has reason to desire X for C’s sake
    Granted, upon simply seeing it and being told nothing else about what it is to mean, the ‘insofar as one cares’ formulation might seem a bit unclear, and (H) might seem to be a natural interpretation of it. However, Darwall explains what he means by the ‘insofar as one cares’ formulation on p.7-8, and it is certainly not the (H) interpretation. In his post on 1-11 at 3:42 PM, Chris Heathwood expressed some confusion about the paragraph on p.8, but I think that this confusion can be dispelled by recalling what Darwall says in the discussion immediately preceding this paragraph. On p.5-6, Darwall considers a depressed person who “sees himself as UNWORTHY of care,” and consequently thinks that “considerations of [his] own welfare give [him] no reasons [for desire and action].” Darwall contends that while the depressive is mistaken about his not being worthy of care, he is in fact correct about a conditional:
    “What the depressive is right about is that IF he weren’t worth caring for, [THEN] considerations of his own good would not be reasons. It’s just that he is wrong in thinking he is unworthy of care. The deep truth that underlies the depressive’s claim is that it is a person’s being WORTHY of concern…that makes considerations of his welfare into reasons” (WRC, 6) [emphasis added on ‘worthy’].
    Darwall is thus not identifying a person’s welfare with the reasons for desire people who happen to care about the person have. The depressive might well think that since he isn’t worth caring for, the people who (he thinks incorrectly) care about him are mistaken in wanting things for him out of that care – but surely the depressive doesn’t ipso facto think that anything is good for him or anything is bad for him. Similarly, Darwall seems clearly to be contending that the considerations of the depressive’s welfare can indeed give him reasons, even though he may well not happen to care about himself (perhaps as a result of his mistaken judgment that he is unworthy of care). With this discussion in mind, Darwall gives the ‘insofar as one cares’ formulation and explains what he means by it on p.7-8:
    “What is for someone’s good or welfare is what one ought to desire and promote insofar as one cares for him.
    In this respect the normative relation between care and welfare has a similar status to that of the familiar principle of instrumental reasoning that underlies hypothetical imperatives, namely, that insofar as one aims at an end, one ought (must) take the “indispensably necessary” means that are in one’s power. Kant plausibly claims that this normative principle is guaranteed to be true by the concepts of ends and means. To adopt an end is to place oneself under a norm of consistency requiring that one either take the necessary means or renounce the end. Similarly, caring for someone involves a normative relation to that person’s welfare. Insofar as one cares for someone, one ought to be guided by the person’s good in one’s desires and actions.
If we take it only this far, however, welfare’s normativity will seem only hypothetical in the same way means/end reasoning is. The consistency constraint that governs means and ends requires only that one either take the necessary means OR give up the end. It neither puts forward a “categorical” normative reason for taking the means that is conditional on having adopted the relevant end, nor puts forward the fact of having adopted that end as a categorical reason for taking the relevant means. From the facts that one has adopted A as an end and that B is a necessary means to A, it does not follow that one ought or has reason to do B. If one had no reason to adopt A (or worse, reason not to do so), then maybe one should not do B, but give up A. The reasons it puts forward are conditional, not on the fact of having a given end, but, as it were, on a normative “hypothesis” that one accepts or is committed to in having the end – namely, that the end is to be, or ought to be, accomplished.”
So Darwall’s explanation of the ‘insofar as one has reason to y, one has reason to z’ locution as it shows up in both the “hypothetical imperative” (i.e. the thesis that “insofar as one has reason to aim at end E, one has reason to adopt the “indispensably necessary” means to E”) and the analysans of his rational care analysis of welfare (i.e. “insofar as one cares for C, one has reason to desire X for C’s sake”) is:
    ISF(y,z): IF one has REASON to y (i.e. if y-ing is warranted, justified, the thing to do / the attitude to have), THEN one has REASON to z (i.e. z-ing is warranted, justified, the thing to do / the attitude to have).
    As Darwall stresses, the antecedent isn’t the claim that one happens to be y-ing (having some attitude, performing some action, etc.) – it’s the claim that y-ing is JUSTIFIED or WARRANTED, which can be true whether or not one happens to be y-ing. This is a conditional alright, but a conditional the antecedent of which is itself a normative claim. So the “hypothetical imperative” gets understood as:
    (HI) IF one has REASON to aim at end E, then one has reason to adopt the “indispensably necessary” means to E.
    And (D) above gets understood as:
    (RCTW) X is good for C iff IF one has REASON to care for C (i.e. caring for C is warranted, justified, the attitude for one to have, C is worth caring for), then one has reason to desire X for C’s sake.
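(Schematically – the notation here is mine, not Darwall’s: writing $R(\varphi)$ for ‘one has reason to $\varphi$’, the translated locution and the resulting analysis are just conditionals between normative claims:

\[
\mathrm{ISF}(y,z):\quad R(y) \rightarrow R(z)
\]
\[
\mathrm{RCTW}:\quad X \text{ is good for } C \ \leftrightarrow\ \big( R(\text{care for } C) \rightarrow R(\text{desire } X \text{ for } C\text{'s sake}) \big)
\]

The point the schema makes vivid is that the antecedent is itself of the form $R(\ldots)$ – a normative claim – and not a report that anyone in fact cares.)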
This is why Darwall claims on p.8 that the reasons for desire involved in his analysis are not conditional on caring – the reasons are not conditional on simply having the attitude of care, but conditional on this attitude’s being warranted. What Darwall says on p.83 – that everyone is worth caring for – is simply a substantive normative thesis consistent with (though in no way entailed by) his rational care analysis of welfare; if true, it would mean that we in fact have the reasons for desire spoken of in the consequent of the analysans of the RCTW for all values of C. (I hope it is clear now how Darwall’s analysis in no way reduces to anything like what Heathwood threatened it would if Darwall didn’t intend the ‘insofar as’ relation in the way suggested by (H)).

  44. Howard,
    I don’t follow your argument. You write:

    So Darwall’s explanation of the ‘insofar as one has reason to y, one has reason to z’ locution as it shows up in … the analysans of his rational care analysis of welfare (i.e. “insofar as one cares for C, one has reason to desire X for C’s sake”) is …

    But the analysans of Darwall’s analysis,

    (1) Insofar as one cares for C, one has reason to desire X for C’s sake,

    does not appear to be an instance of the locution:

    (2) Insofar as one has reason to y, one has reason to z.

    Rather, it seems to be of the form:

    (3) Insofar as one ys, one has reason to z.

  45. Campbell,
Getting back to your earlier interesting suggestion, I can accept that if the state of affairs in question (the one describing the narrative structure of your life) is good for you at every time, then it’s not too much of a stretch to say that there’s no particular time at which it’s good for you. But I still wonder whether believers in the value of such states of affairs should be willing to accept that this state of affairs is good for you at every time. It seems to force them to say that each moment of your life is made better by the fact that your life as a whole has this narrative structure. But that sounds like just the kind of thing they’d want to reject (it also sounds independently implausible). Their view is supposed to be holistic: there are factors that contribute to the value of the whole life that don’t contribute to the value of any of the parts (or something like that). But your suggestion has this holistic feature contributing to the value of the parts.

  46. Chris, you’re right that ‘holists’ about narrative structure and the like want to say that there is no time in your life made better by the contribution of the narrative structure. It’s just that it’s not clear what is at stake in this dispute.
    Suppose I say that all the suffering and sacrifice that Lindsey went through in training to become the world’s greatest SBX champion is now retrospectively not bad at all, because she has succeeded. Suppose David agrees with me about (i) how good her life is in light of her SBX success, (ii) how good it would have been had she turned out to be only very good at snowboarding, (iii) how good it would have been if she had suffered just that much in her early teens but that suffering had not been causally implicated in her later success. And about all judgments of complete quality of Lindsey’s life under various circumstances, let’s say. But David insists that the good contributed by the success does not occur at any time, while I say that the success makes the earlier, suffering moments of her life less bad in retrospect.
    What is the substance of that disagreement?

  47. Campbell,
    Yes, I’m sorry; what I should have said was that Darwall’s explanation of the ‘Insofar as one ys, one has reason to z’ locution was:
    ‘Insofar as one ys, one has reason to z’ = IF one has REASON to y, then one has reason to z.
This is what fits both the hypothetical imperative case and the analysans of the RCTW case, since the hypothetical imperative in ‘insofar as’ language (as Darwall puts it on p.7) is ‘insofar as one aims at an end, one ought (must) take the “indispensably necessary” means that are in one’s power’ (note: it reads ‘insofar as one AIMS at an end’, not ‘insofar as one HAS REASON to aim at an end’, which is parallel to the analysans of the RCTW case in terms of fitting the schema of ‘insofar as one ys, one has reason to z’; sorry about the misprint. Darwall is indeed explaining the locution ‘insofar as one ys, one has reason to z’, not (as I misleadingly put it) ‘insofar as one has reason to y, one has reason to z’). So Darwall’s explanation of both (HI) and RCTW from ‘insofar as’ language into conditionals is:
    (HI) Insofar as one aims at end E, one has reason to adopt the “indispensably necessary” means to E
    =
    IF one has reason to aim at end E, then one has reason to adopt the “indispensably necessary” means to E
(RCTW) X is good for C iff insofar as one cares for C, one has reason to desire X for C’s sake
    =
    X is good for C iff IF one has REASON to care for C (i.e. caring for C is warranted, justified, the attitude for one to have, C is worth caring for), then one has reason to desire X for C’s sake.
    Thanks.

  48. Mike,
    Not at all – your questions and objections have been very helpful to me. I’ll respond to your last post, but it may all be academic given Howard’s last post. (As if there’s any way for this debate not to be academic.)
Your objection having to do with the counterfactuals is funny, but is also a serious objection. (It reminds me a little bit, by the way, of a kind of objection my teacher used to make to virtue-ethical, or “What Would Jesus Do,” theories of right and wrong. If a cop pulls you over and asks you your name, you should do what Jesus would do, namely, say, “My name is Jesus Christ.”)
    Here’s a weaselly (but often made) kind of reply. The similarity relation relevant to the counterfactual in RCTW is one according to which the nearest world in which some person cares about you is never one in which you’re dead. Perhaps, according to this similarity relation, we hold facts about you constant and change only the attitudes (and perhaps a few other features) of the would-be carer.
    I do admit that this reply is hand-wavey and that the objection is a problem for this kind of theory. Many theories that make use of counterfactuals face similar problems.
    About it being enough to have a reason. I think if we put the right qualifiers in the right places (some of which we’ve been discussing) we can avoid your antifreeze objection. One qualification is that the desire in question should be intrinsic. I don’t have a reason to intrinsically want (that is, to want as an end) that you drink antifreeze. But, since I care about you, I do have a reason to intrinsically want that you experience the pleasure you would experience if you were to drink the sweet-tasting antifreeze. So if we take RCTW to be a theory about intrinsic value, then we get the result that your pleasure from the antifreeze, but not your drinking the antifreeze, is intrinsically good for you. (We could also get the result that your feeling miserably ill as a result of drinking the antifreeze is intrinsically bad for you. We’d also probably say (though this wouldn’t follow just from RCTW) that your drinking the antifreeze, despite its having some good consequences, is, all things considered, extrinsically bad for you, since its intrinsically bad effects for you outweigh its intrinsically good effects for you.)

  49. Jamie,
    Why can’t questions about well-being at particular shorter-than-a-whole-life periods of time be substantive disagreements? If one theory entails I had a good day, and another entails I had a bad day, isn’t that a substantive disagreement between the theories? Even if they get the same results about a whole life, we might still prefer one theory over the other depending on what they say about such things.

  50. Howard,
    Thanks for clearing up Darwall’s view for me. As I suggested earlier, I never knew what to make of those passages where he said having the reason is not, after all, conditional upon whether one cares — it’s instead conditional upon whether one has reason to care. I had no idea that this is what he meant by ‘insofar as’. (You must admit it’s quite an idiosyncratic usage.)
    Anyway, what really helps me in your comments is pointing out to me that, although Darwall thinks everyone has reason to care for everyone, this does not mean Darwall would accept

    x is good for S iff everyone has reason to want x for S

    as an analysis of ‘good for’. He would accept it as true, but not as analytic.
    By the way, Howard, you don’t think that my misunderstanding of Darwall’s view affects the argument in my original post, do you? I.e., IF the argument works against the view I was attributing to Darwall, it also works against Darwall’s actual view, no?

  51. Jamie,
    I’ll add one more thing to Ben’s remark (with which I am sympathetic). It’s a question and then a comment. Was your claim that “holists” and “atomists” never disagree about the value of whole lives, and that instead they only ever disagree about the value of parts of lives? I’m not sure if that was your claim, but I don’t think this claim is true. Some people who believe in the value of narrative structure believe the following: there could be two lives that differ in value even though their momentary stages could be put into a 1-1 correspondence that preserves the value of those momentary stages. And this is incompatible with atomism, according to which: if two lives are isomorphic with respect to the values of their momentary stages, then the lives are equal in value.
    But perhaps you weren’t saying something in conflict with this.

  52. Ben, obviously that’s what the substance would be if there were any! But the two views I sketched look to me like notational variants of one another. The difference looks like a bookkeeping difference to me.
    Chris, I’m not seeing your point. I am not, of course, saying that holists and atomists never disagree about the value of whole lives — plainly atomists can disagree with other atomists about the value of whole lives, so what would prevent them from disagreeing with holists? Rather, I am saying that holists and atomists needn’t disagree about the value of whole lives, and when they don’t their disagreement looks very thin and theoretical to me.

  53. Jamie,
    Right, of course (re: your reply to me).
    But about the alleged mere bookkeeping difference. Don’t the following two claims clearly differ in substance?:
    “Last week was really great for me.”
    “Last week was just ok for me.”
    And these are the sorts of facts about which the theories we’re comparing will disagree.
    If we also believe that facts about value generate reasons for acting and having certain attitudes, these claims will also translate into differences in what we ought to do or how we ought to feel. Isn’t that more than mere bookkeeping?

  54. Jamie,
    It would be a mere bookkeeping difference only if the only purpose of a theory of well-being were to give values for whole lives. But that’s precisely what Chris and I are denying. Granted, when people state their theories of well-being, they typically focus on whole lives. But I see no good reason for this.

  55. Well, it would help me if someone could say what difference it makes. For instance, if it made a difference to what someone ought to do, how she ought to feel, who deserved blame or praise for something, that would help. Otherwise, all there is to say is that it makes a difference because it could just be true that between two theories (one holistic, the other atomistic, both agreeing on assessments of whole lives) one gave the correct assignments to goodness-for-a-person-at-a-time and the other gave incorrect assignments. And I suppose it could be, but without more explication than that it’s hard for me to see this difference.
What about commenting on my Lindsey example?

  56. Here’s a quick not-well-thought-out answer. Suppose you think the values of worlds are relevant to the moral statuses of acts. One way to get the value of a world is to first figure out the values of all the lives there, and then add them up (or something). Another way is to start by figuring out the values of all the times there, by looking at people’s momentary well-being, and then add up the values of the times. (Maybe Broome talks about this in one of his books.) If you want to do things the second way, you need to have the facts about goodness-at-a-time.
    Michael Weber has a paper on satisficing where he says (roughly) it can be OK to sacrifice some overall life value for some momentary well-being, or vice versa. Views like that require goodness-at-a-time.
    I don’t particularly care for either of those views. For all I know, facts about momentary well-being might never make a difference to what someone ought to do or feel. Maybe it’s only facts about whole lives that are relevant (though I don’t know of any good argument for this claim). I’m interested in theories of well-being for their own sake, not just for their relevance to other parts of ethics. And I think it’s just part of the theory of well-being to give a correct account of momentary well-being.
    My own view about the Lindsey case is that the value has to be ascribed to the life as a whole. Her success doesn’t make the earlier times intrinsically better. To say that the success makes the earlier times intrinsically better is to diminish the extent to which she can be said to have sacrificed for her goal. But this is a complicated issue and I think it’s getting far from the original topic of the post, which I think had something to do with a book by Darwall. Maybe I’ll post something on it later.

  57. Chris,
Yes, I certainly agree that the ‘insofar as’ locution is (without further explanation) somewhat unclear and does not carry its intended meaning on its face. (After I first encountered it in Welfare and Rational Care [WRC] I had to do a good deal of re-reading to understand what Darwall meant by it. But I should say that after encountering it in the work of others and similarly being puzzled by it, I did think that I found Darwall’s explanation of the locution on p.7 helpful in understanding their claims too. I am thus inclined to think that others may have something of an intuitive grasp of the concept expressed by the ‘insofar as’ locution, and that Darwall’s explanation of it successfully captures what they mean by it as well).
In any event, as to the bearing of your objection on Darwall’s actual view, I agree that what I said by way of clarification of the content of his view does not, all by itself, obviously defuse your objection (we can easily reformulate your original objection as an objection to Darwall’s actual view with a very minor change – instead of the crucial supposition being that Parfit cares for himself, it is that Parfit HAS REASON to care for himself). My previous posts were primarily intended to clear up confusions / worries about what Darwall’s view says in cases in which no one cares about someone / some creature, whether it entailed what you worried it might entail in your post on 1-11 at 3:42 PM, etc. However, I do think that there may be a problem with your objection (even as intended against the view you were previously attributing to Darwall, and certainly as intended against Darwall’s actual view).
    First, let’s see how your objection revised as an objection to Darwall’s actual view looks. In your case let:
    t1 = the time at which Parfit wakes up,
    t0 = a time on the day before t1 that occurs within the period of time that Parfit does not remember at t1 (in the scenario in which Parfit has the 10 hour operation, this is the time at which the operation begins)
t2 = a time after t1, on the same day on which t1 occurs (in the scenario in which Parfit has the 1 hour operation, this is the time at which the operation begins)
    t3 = a time after t2 on the same day on which t2 (& thus t1) occurs (in the scenario in which Parfit has the 1 hour operation, this is the time at which the operation ends; t3 = t2 + 1 hour)
    We get the intuitive judgments that a.) it’s worse for Parfit to suffer for 10 hours from t0 to t0+10 hours than for Parfit to suffer 1 hour from t2 to t3, and b.) that it’s rational for Parfit at t1 to prefer suffering 10 hours from t0 to t0+10 hours to suffering 1 hour from t2 to t3.
Now, Darwall did indeed explicitly give his analysis in WRC as an account of the non-comparative welfare concept of ‘X is good/bad for C’ rather than the comparative notion of ‘X is better/worse for C than Y’, but I think that your suggestions in your posts on 1-9 at 9:37 PM and 1-10 at 10:54 AM (or slight modifications of them to correspond to Darwall’s actual view) are exactly what Darwall’s approach would say about the comparative case (indeed, there are several places in WRC in which Darwall pretty explicitly employs just such an extension to the comparative case – e.g. p.43-45). Here, then, is the RCTW analysis of the comparative welfare concept of ‘X is better for C than Y’:
    (RCTW COMP): X is better for C than Y = insofar as one cares for C, one has reason to prefer X to Y for C’s sake
Now, recalling how to interpret the ‘insofar as’ locution, we can render this as a conditional with a normative claim in the antecedent, as follows:
    (RCTW COMP): X is better for C than Y = IF one has REASON to care for C (i.e. caring for C is warranted, justified, the attitude for one to have, C is worth caring for), then one has reason to prefer X to Y for C’s sake
    (This is the sense in which I think that you are quite correct that “RCTW would imply that these degrees [of goodness and badness] should march in lock step (the amount of goodness in the good is proportional to how intense the desire should be, and likewise for badness)” – on the analysis, what it is for X to be better/worse for C than Y is for one to have reason to prefer/disprefer X to Y for C’s sake if it is the case that one has reason to care for C).
    Now, the crucial stipulation is that Parfit (and in fact Parfit at t1) has reason to care for himself. With this supposition, Darwall’s analysis does entail that:
    c.) Parfit (at t1) has reason to prefer his suffering for 10 hours from t0 to t0+10 hours to his suffering for 1 hour from t2 to t3.
    I take it that your worry is that c.) looks inconsistent with b.). But is it? I grant that c.) might be ambiguous between:
    c.’) Parfit (at t1) has MOST reason to prefer his suffering for 10 hours from t0 to t0+10 hours to his suffering for 1 hour from t2 to t3, and
c.’’) Parfit (at t1) has SOME (but not necessarily most) reason to prefer his suffering for 10 hours from t0 to t0+10 hours to his suffering for 1 hour from t2 to t3.
    So long as the rational preference is the preference one has most reason to have, c.’ would indeed be inconsistent with b.). (Throughout, by the way, I’m assuming a sense of ‘rational’ and ‘reason’ that excludes pragmatic reasons – i.e. the reasons to want someone to eat a saucer of mud that are mentioned in the evil demon cases, or reasons other than those we might call reasons of ‘fittingness’ for attitudes such as desire and care). However, c.’’) would be consistent with b.), since it is almost always rational not to have all kinds of attitudes that one has SOME reason to have. As you suggested in your post on 1-12 at 11:25 AM, the sense of ‘reason’ that belongs in the consequent of the conditional in the analysans of Darwall’s analysis is indeed SOME rather than MOST reason, so we should go with c.’’). (In fact, I would add that this reading of the analysis is strongly suggested if not almost forced by understanding the translation of the ‘insofar as’ locution). So in this case Darwall’s analysis does not entail c.’), but only c.’’), which is consistent with b.). Does this solve the worry?

  58. Campbell,
    Yes; I’m very sorry and thanks so much for noticing. The post should be as is except for the part describing c.), c.’), and c.’’), which should read:
    c.) Parfit (at t1) has reason to prefer his suffering for 1 hour from t2 to t3 to his suffering for 10 hours from t0 to t0+10 hours.
    I take it that your worry is that c.) looks inconsistent with b.). But is it? I grant that c.) might be ambiguous between:
    c.’) Parfit (at t1) has MOST reason to prefer his suffering for 1 hour from t2 to t3 to his suffering for 10 hours from t0 to t0+10 hours.
    c.’’) Parfit (at t1) has SOME (but not necessarily most) reason to prefer his suffering for 1 hour from t2 to t3 to his suffering for 10 hours from t0 to t0+10 hours.
    (I need to be more careful when copying and pasting. Thanks again)

  59. Hi gang,
    Over the past few days I have, several times, started posts I couldn’t finish. They’ve had to do with an issue that has come up in objections by Mike and by Howard. It’s the issue of whether Darwall’s view makes use of the idea of a reason to desire or most reason to desire. As Howard points out, it could be that the success of my argument turns on this.
    I find that, right now at least, I just can’t figure out what Darwall’s view is. In fact, I find that I don’t even fully understand many of the possibilities.
    If g would be good for S, and we have reason to care for S, I don’t know whether:

    (a) we are rationally required to desire g for S’s sake (so that we are irrational if we don’t desire g).
    (b) we are rationally permitted to desire g for S’s sake (and so would not necessarily be irrational if we don’t desire g).
    (c) we have most reason to desire g for S’s sake. (And if this is the case, I don’t know if this means that we are rationally required to desire g for S’s sake. Is it ever rationally permissible to fail to do what one has most reason to do?)
(d) we have a reason to desire g for S’s sake. (If this is the case, and there happen also to be no reasons present not to desire g for S’s sake, and there happen also to be no reasons in favor of any of our alternatives to desiring g for S’s sake, does it follow that we have most reason to desire g for S’s sake?)

    (I am assuming in each case that the relevant kind of rationality and reasons has to do with what Doug was calling ‘object-given reasons’. So this eliminates what we were calling pragmatic or prudential reasons for us to desire g for S’s sake. I also assume we’re talking about intrinsically desiring g for S’s sake.)
    So I’m gonna think about this more when I get the time. Maybe I’ll resurrect the post, though by then I suspect it would be positively irrational for anyone still to care. If anyone has any comments about (a)-(d) above right now, of course I’d love to read them.
    In any event, thanks very very much for everyone’s comments. It has really helped me think more deeply about all this.
    P.S. I now notice that some of the first comments made in response to the original post are no longer there. Does TypePad delete earlier comments when the number of comments exceeds a certain number?

  60. Chris: There shouldn’t be any deleting of comments by TypePad, and I don’t see that any have been (at least all the comments I remember being at the beginning are still there on my screen, starting with Jamie’s “Great!”).
    Terrific post and comments, BTW.

  61. Yes. My ‘Great!’ has always been there. So, you are going insane. This will be very epistemologically interesting for you — head over to Certain Doubts instantly!

  62. Chris,
First, I think that it might be a bit clearer to put things in terms of better and worse for a creature and reasons for preference rather than simply good and bad for a person and reasons for desire. This is because i.) (as you seemed to allude to in your posts on 1-9 at 9:37 PM and 1-10 at 10:54 AM) your initial Parfit case may be put most clearly in these terms (e.g. intuitively it’s worse for Parfit to suffer for 10 hours from t0 to t0+10 hours than for Parfit to suffer 1 hour from t2 to t3, and it’s rational for Parfit at t1 to prefer suffering 10 hours from t0 to t0+10 hours to suffering 1 hour from t2 to t3), and ii.) issues of reasons for desire may be complicated by the fact that we might be able simultaneously to have most reason to have a desire for S and also most reason to have a desire for S’ where S and S’ are (and are known to be) mutually inconsistent – i.e. it might be rational for us to have conflicting desires [but probably not rational for us to have conflicting desires of the same strength (except perhaps in cases of rational indifference, if this is what rational indifference is); we might say that even if it is rational for us to desire both S’ and S, it will have to be rational for us to desire one of these more strongly (again, except perhaps in cases of rational indifference), which strongest rational desire we might call a desire “all things considered” (if having rational desires of equal strength just is rational indifference, then in cases of rational indifference we might do something like say that there is nothing that it’s rational to desire all things considered, or call both desires “rational desires all things considered”, etc. – in any event I don’t think how to treat cases of indifference will matter much here, as it does not seem to be operative in the Parfit case)].
Please let me then re-translate your (a)-(d) options into the language of better and worse for a creature and reasons for preference:
    “If x would be better for S than y, and we have reason to care for S, I don’t know whether:
    (a) we are rationally required to prefer x to y for S’s sake (so that we are irrational if we don’t prefer x to y).
    (b) we are rationally permitted to prefer x to y for S’s sake (and so would not necessarily be irrational if we don’t prefer x to y).
    (c) we have most reason to prefer x to y for S’s sake. (And if this is the case, I don’t know if this means that we are rationally required to prefer x to y for S’s sake. Is it ever rationally permissible to fail to do what one has most reason to do?)
(d) we have a reason to prefer x to y for S’s sake. (If this is the case, and there happen also to be no reasons present not to prefer x to y for S’s sake, and there happen also to be no reasons in favor of any of our alternatives to preferring x to y for S’s sake, does it follow that we have most reason to prefer x to y for S’s sake?)”
[Alternatively, I would propose a clarification of ‘reason to desire G’ by replacement in terms of ‘reason to desire G all things considered’ (where the ‘all things considered’ phrase means what I suggested above)]
    I think that the short answer is that in the important respect Darwall’s view is going to make use of the idea of a reason to have a preference [or “desire all things considered”] and not most reason to have a preference [or “desire all things considered”] – i.e. Darwall’s view is NOT:
    (ND) X is better for S than Y = if one has reason (even MOST reason, or caring for S is rational all things considered) to care for S, then one has MOST reason (or: one is rationally required) to prefer X to Y.
    [Or, if we really want to put things in terms of good and reasons for desire:
    (ND) G is good for S = if one has reason (even MOST reason, or caring for S is rational all things considered) to care for S, then one has MOST reason (or: one is rationally required) to desire G all things considered].
    His view is rather:
    (D) X is better for S than Y = if one has reason (even MOST reason, or caring for S is rational all things considered) to care for S, then one has SOME reason (but not necessarily most reason) to prefer X to Y.
    [Or, if we really want to put things in terms of good and reasons for desire:
    (D) G is good for S = if one has reason (even MOST reason, or caring for S is rational all things considered) to care for S, then one has SOME reason (but not necessarily most reason) to desire G all things considered].
    (Notice, by the way, that I say “SOME reason (but not necessarily most reason) to prefer X to Y,” not “SOME reason (but not necessarily most reason) to prefer X to Y for S’s sake.” I hope to explain what’s going on with the notion of “preferring X to Y [or desiring G] for S’s sake” and make it clear that in an important sense one can have most reason to prefer X to Y for S’s sake but not most reason to prefer X to Y flat out). That is, Darwall’s view does not entail that “If x would be better for S than y, and we have reason to care for S, then we are irrational if we don’t prefer X to Y.”
    Let me try to explain why, which will involve an attempt to explain what according to Darwall in WRC it is for one to desire / prefer something for a creature’s sake (and thus get a better answer to the questions about (a)-(d)).
    First, the broad questions Darwall is seeking to answer in WRC are:
    (Q1) What do judgments about creatures’ welfare mean?, and
    (Q2) What is the relationship between creatures’ welfare and agents’ reasons for attitudes (care, desire) and action?
    Darwall’s proposal is that we understand the concept of welfare in terms of reasons for care and desire rather than the other way around (as previous posts have suggested, I think that Darwall pretty clearly intends his account as a conceptual analysis of welfare into reasons for care and desire – see especially p. 6-7, 8-9, 11-12), and in particular that we need to understand the concept of an agent’s welfare in terms of reasons for desire conditional on reasons for care for the agent as opposed to the agent’s own desires or reasons for desire. One class of Darwall’s main opponents in WRC is thus proponents of views we might call “agent’s own rational preference accounts” (of which I think that what Darwall calls “informed desire accounts” are special cases – they’re what we get by identifying an agent’s welfare with her rational preferences and then giving a full-information account of rational preferences), which maintain that an agent’s welfare is to be conceptually identified with the satisfaction of her own rational desires / preferences (or rational desires restricted in something like the way suggested by Overvold (1980) or considered by Parfit (1984)).
I think that Darwall’s strongest objection to such views is what he calls the “scope problem” (p.25-31, 43-46, 52-53). This is the general problem of “it seem[ing] unacceptably broad to include within a person’s welfare whatever he [rationally] wants,” two closely related instances of which are i.) it seems that agents can have reasons to prefer things quite independently of considerations of their own welfare, can still rationally desire things and be motivated by such rational preferences even when they (perhaps mistakenly) care relatively little if at all about their own welfare, and even if the satisfaction of such preferences for things unrelated to the agent’s welfare ends up contributing to it, the satisfaction of these rational preferences seems quite distinct from the benefit to the agent that results from their satisfaction, and ii.) agent’s own rational preference analyses of the concept of welfare make rational self-sacrifice (e.g. out of altruism, feelings of obligation, or dedication to a cause) a conceptual impossibility. Darwall considers some possible revisions to the agent’s own rational preference approach consisting of restrictions of the kind suggested by Overvold and considered by Parfit, but I think he pretty convincingly argues that these too run into versions of the scope problem.
Thus, one of the main arguments behind Darwall’s view (in argument against the agent’s own rational preference approach) is in fact that what an agent can rationally prefer can diverge considerably from what is good for her (direct claims to this effect are all over the place in WRC, e.g. 44-45: “We sense a gap between options Tarzan rationally prefers and what would most benefit him,” “If what it is for something to be for Sheila’s good is for it to make sense for someone who cares for her to desire it for her sake, then if Sheila’s [rational] ranking and that of someone who cares for her were to diverge in this way, this would explain why even though Sheila rationally prefers O1 to O2, O2 is nonetheless better for her than O1 in the sense of being more for her good or welfare”). Add to this the fact that Darwall explicitly claims that it is at least coherent to think that agents have reason to care for themselves (e.g. 6, 83, as he in fact claims that as a substantive normative matter everyone has reason to care about herself and that “most of us would agree, of course that the depressive and self-loather are mistaken in thinking that considerations of their own welfare give them NO reasons [for desire and action; cf. p.5]” emphasis mine), and we see that his view must be that, while each person (since she has reason to care for herself) has some reason to prefer what is better for her to what is worse for her, she does not necessarily have most reason (or it is not necessarily rational for her) to prefer what is better for her to what is worse for her [alternatively: “while each person (since she has reason to care for herself) has some reason to desire all things considered what is good for her, she does not necessarily have most reason (or it is not necessarily rational all things considered for her) to desire all things considered what is good for her”]. As such, the sense in which he intends his rational care analysis must be (D), rather than (ND).
Note, moreover, that this position just comports with common sense, both in the case of the person whose welfare we are considering (assuming she does have reason to care for herself) and other agents who may have reason to care for her. What is good for a creature (for whom we have reason to care) can make some rational claim on our preferences, but surely it cannot make a CONCLUSIVE rational claim on our preferences. There’s lots of creatures out there (as well as lots of other things that make rational claims on our preferences), and it’s obvious that making one better off can make others worse off (i.e. sometimes state of affairs S is better for creature C1 than S’, while state of affairs S’ is better for creature C2 than S), so if we had to have MOST reason to prefer what makes each creature (or each creature for whom we have reason to care) better off, we would be rationally required to have inconsistent preferences (which I think most would agree would be an absurd consequence).
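(To put the reductio schematically – the notation is mine, not anything in WRC: write $X >_C Y$ for ‘X is better for C than Y’ and $M(X \succ Y)$ for ‘one has most reason to prefer X to Y’. The ‘most reason’ reading of the analysis would commit us to the first line below, and the two-creature case then yields rationally required inconsistent preferences:

\[
\forall C:\ (X >_C Y) \rightarrow M(X \succ Y)
\]
\[
S >_{C_1} S' \ \wedge\ S' >_{C_2} S \ \Rightarrow\ M(S \succ S') \ \wedge\ M(S' \succ S)
\]

Since no one can coherently satisfy both requirements in the conclusion, the ‘most reason’ reading must go.)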
Now, one might (for no reason I can see) try to read Darwall very uncharitably by attributing to him the view (ND), claiming that the crucial claim in his main argument that what an agent can rationally want can diverge considerably from what is good for her is inconsistent with his substantive normative view that all people have reason to care for themselves, and claiming that his view flies in the face of common sense and has the absurd conclusion that we are rationally required to have inconsistent preferences. Let me try to offer some further positive reason as to why this can’t be what’s going on.
One of Darwall’s central theses in WRC is that our reasons to want a creature to be well off stem from or are constituted by our reasons to care about the creature. One of his central tasks is to explain what it is to care for a creature in this way without analyzing it in terms of the concept of welfare itself; in fact I think that Darwall gives us both compelling reasons to reject the view that care in the relevant sense can be analyzed in terms of welfare and a convincing account of what this sense of care is and thus what it would be for us to have reasons to feel it towards a creature. I don’t have the space to elaborate on this much here, but I will make the following brief remarks. First, Darwall notes that care for a creature seems to involve a desire FOR THE SAKE OF the creature, which may differ from other non-instrumental desires for its welfare, in a way that cannot be captured in the propositional content of the desire itself. For instance, were a person to develop a non-instrumental motivation to promote the welfare of another creature due only to classical conditioning or the other creature’s well being “striking her whim or fancy,” I think that we would hesitate to call this motivation an instance of care in the relevant sense – i.e. in which when one cares for a creature one is motivated to promote its welfare for its sake or out of care for it. (See e.g. WRC, 2). (In support of this view I would invite one to consider the fascination people sometimes have with celebrities, and the consequent (perhaps highly irrational) non-instrumental motivations they sometimes develop to promote the celebrities’ welfare. To call these desires or motivations instances of “caring for / about the celebrities” in the sense discussed above seems (at least to me) to be incorrect).
    In WRC, Darwall speaks of “sympathetic concern” or “sympathy,” which he endeavors to show is a psychological natural kind, and describes as a feeling or emotion (see WRC, chapter 3). Some of the crucial features of this feeling or emotion are:
    (S1) it responds to an individual creature’s situation,
    (S2) has that individual creature (or its imagined or actual response to its situation, or the relevance/ significance of its situation or its response to its situation for its life) as its object, and
(S3) it involves certain kinds of motivations for the creature to be situated in certain ways or for her life to go in certain ways (in certain cases these may be distinguished from the motivations involved in closely related but distinct mental states Darwall discusses like “projective” and “proto-sympathetic” empathy by these motivations being components of a kind of distress the object of which is that same individual mentioned in (S1) and (S2) (as opposed to something like the person feeling the emotion herself) – i.e. the motivating components of what Darwall notes Martin Hoffman calls “sympathetic distress” as opposed to “empathetic distress” (WRC 64-68))
With this (very cursory) account of sympathetic concern or sympathy in mind, let us return to the concept of a desire for the sake of, or out of care for, the creature whose welfare it is a motivation to promote. Darwall’s proposal in WRC seems to be this: a desire to promote the welfare of a creature is one for the sake of (or out of care for) the creature when one has the desire out of sympathetic concern for the creature, understood as a feeling or emotion described by conditions (S1)-(S3) where “the individual” to which the conditions refer is the creature in question. What does it mean for A to have a desire to promote the welfare of B out of sympathetic concern for B? Sympathetic concern is an emotion, so this seems to be a special case of the general question as to what it means for someone to have a motivation out of, or as a result of having, an emotion. Emotions seem to be more than mere motivations; they seem to involve qualities (e.g. phenomenal and physiological) that go beyond those of the motivations they are responsible for. But some emotions at least seem to involve MOTIVATIONAL COMPONENTS – i.e. part (but not all) of what it is to feel the emotion is to be motivated in certain ways. In particular, sympathetic concern has a motivational component (see S3) (for other examples, I would suggest that part (but not all) of what it is to be afraid of, worried about, or angry at something is to be motivated, respectively, to avoid, attend to, or behave aggressively towards it). I would therefore propose that Darwall’s account of what it is for person A to have a desire to promote the welfare of creature B for the sake of (or out of care for) B is for A to have a desire to promote B’s being situated in certain ways or her life going in certain ways (which ways in which the creature is situated or her life goes would intuitively answer to the description of (what B takes to be) “her being well-off”), which desire is the motivational component of a feeling of sympathetic concern that A has for B (understood as a feeling or emotion on the part of A described by conditions (S1)-(S3) where “the individual” to which the conditions refer is creature B).
    If this is correct, then I would offer the following account of Darwall’s explanation of the connection between our reasons to care about creatures and our reasons to desire states of affairs in which those creatures’ lives go in particular ways. I think that Darwall is employing a very generalized version of the “hypothetical imperative,” which we might call the “Warrant Composition Principle:”
    Warrant composition principle [WCP]: If x-ing is warranted on the part of A (i.e. A has reason to x), and y-ing is a necessary part of x-ing, then y-ing is warranted on the part of A (i.e. A has reason to y).
    To care about a creature is to feel sympathetic concern towards her, which involves (inter alia) desires that states of affairs obtain in which her life goes in certain ways. Thus, by WCP, if we have reason to care about a creature, then we have reason to desire states of affairs in which her life goes in these ways. (There is of course a separate normative question about what exactly the motivational component of our sympathetic concern should be, which is the normative question involved in welfare assessments – i.e. if sympathetic concern towards a creature is rational, what should this sympathetic concern motivate us to want / prefer for the creature).
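(Again schematically, and again in my own shorthand rather than Darwall’s: write $R_A(\varphi)$ for ‘A has reason to $\varphi$’ and $\mathrm{Part}(y, x)$ for ‘y-ing is a necessary part of x-ing’. Then WCP and its application to care come out as:

\[
\mathrm{WCP}:\quad \big( R_A(x) \wedge \mathrm{Part}(y,x) \big) \rightarrow R_A(y)
\]
\[
R_A(\text{care for } B) \wedge \mathrm{Part}\big(\text{desire } S \text{ for } B\text{'s sake},\ \text{care for } B\big) \ \Rightarrow\ R_A(\text{desire } S \text{ for } B\text{'s sake})
\]

where the second premise of the application is supplied by the claim that the relevant desire is a motivational component of sympathetic concern.)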
It should be clear, however, that even when this sympathetic concern towards a creature is rational, it may be rational to have many other conflicting motivational states – e.g. sympathetic concern towards other creatures, feelings of moral obligation, feelings of outrage, simple desires for certain experiences or ways one’s own life can go, etc. – all of which may rationally move us to desire things other than what sympathetic concern towards the creature rationally moves us to want. Indeed, it may well often be rational for these other rational motivational states to move us more strongly to want things other than what rational sympathetic concern rationally moves us to want, so that all-told preferences for things other than what sympathetic concern rationally moves us to prefer for a particular creature may well often be rational. So given his understanding of the connection between reasons for care (i.e. feeling sympathetic concern) and reasons for desire / preference, Darwall’s view must be (D), not (ND).
Put in terms of preference, we might say that A prefers X to Y for B’s sake just in case A has this preference and this preference (which is a comparative motivational state – something like being more motivated to promote / bring about X than one is to promote / bring about Y or being most motivated to promote / bring about X rather than Y) is (or is the result of) the motivational component (i.e. that mentioned in S3) of a feeling of sympathetic concern that A has for B. But this might seem to be a little rough – it might seem that there is a sense in which I can prefer X to Y for someone’s sake without these being the preferences I have, all told, at the end of the day (e.g. something like “for your sake I prefer you to have all my money, but given my other desires (e.g. my desire to have enough money to eat), I prefer, all told, for you not to have all my money” – though in such cases it might be more felicitous to say “for your sake I WOULD prefer you to have all my money”). In such cases, we might say that the motivational component of the sympathetic concern I feel for you is a preference (or something like “inclines me to a preference,” “tends to cause a preference,” or “exerts causal influence on me to have a preference”) for X over Y, but that due to other motivations I have, I don’t actually have an all-told preference for X over Y. Here, then, are two senses of what it is for A to prefer X to Y for B’s sake:
    (FS1) A prefers X to Y for B’s sake = A prefers X to Y and this preference is (or is the result of) the motivational component (i.e. that mentioned in S3) of a feeling of sympathetic concern that A has for B.
(FS2) A prefers X to Y for B’s sake = A feels sympathetic concern towards B, and the motivational component of the sympathetic concern A feels towards B is a preference (or “inclines A to a preference” / “tends to cause A to have a preference” / “exerts causal influence on A to have a preference”) for X over Y (but A does not necessarily have an all-told preference for X over Y)
I think, then, that in (a)-(d) there is a considerable ambiguity in the phrase ‘prefer x to y for S’s sake’ corresponding to the two senses of preferring X to Y for B’s sake above. If by ‘prefer X to Y for S’s sake’ you mean (FS2), then it is indeed the case that according to Darwall’s view we are rationally required / have most reason to prefer X to Y for S’s sake if X is better for S than Y and we have reason to care for S. But it would not follow from this that we are rationally required to have an all-told preference for X over Y, since according to (FS2) to prefer X to Y for S’s sake is merely for one to feel sympathetic concern towards S the motivational component of which is a preference (or “inclines one to a preference” etc.) for X over Y, and (in light of the fact that we can (indeed rationally) be in other motivational states that push in other directions) one’s all-told preferences as between X and Y may come out differently. If, however, you mean (FS1) (as seems to be suggested by your parenthetical clauses), then it is not the case that according to Darwall’s view we are rationally required / have most reason to prefer X to Y for S’s sake if X is better for S than Y and we have reason to care for S – it is merely the case that according to Darwall’s view one has SOME reason (but not necessarily most reason, and one is not necessarily rationally required) to prefer X to Y for S’s sake (as to prefer X to Y for S’s sake in this (FS1) sense is to have an all-told preference for X over Y (as a result of the sympathetic concern one feels towards S) – which is not required on Darwall’s view, since it is (D) and not (ND)).
But any way you cut it, Darwall’s view is consistent with A having reason to care for B, X being better for B than Y, and A having a rational (all-told) preference for Y over X. This is all Darwall needs to have no problem with your Parfit case, since the intuitions we get there are a.) it’s worse for Parfit to suffer for 10 hours from t0 to t0+10 hours than for Parfit to suffer 1 hour from t2 to t3, and b.) that it’s rational for Parfit at t1 to prefer (in the sense of having an all-told preference for) suffering 10 hours from t0 to t0+10 hours to suffering 1 hour from t2 to t3. This is completely consistent with Parfit’s having reason to care for himself and with what Darwall’s view (D) entails here:
    If one has reason (even MOST reason, or caring for Parfit is rational all things considered) to care for Parfit, then one has SOME reason (but not necessarily most reason) to prefer his suffering 1 hour from t2 to t3 to his suffering 10 hours from t0 to t0+10 hours.

  63. One more way the Parfit case could go:
    In Howard’s previous post, he assumed that the only intuitions we got in the Parfit case were:
    Intuition a.) Parfit’s suffering for 1 hour from t2 to t3 is better for him than his suffering for 10 hours from t0 to t0 + 10 hours, and
Intuition b.) Parfit at t1 has reason to prefer his suffering for 10 hours from t0 to t0+10 hours to his suffering for 1 hour from t2 to t3,
    and argued that these intuitions were consistent with the deliverances of Darwall’s view that, if a.) is true, and Parfit has reason to care about himself, then Parfit has SOME reason (but not necessarily most reason) to prefer his suffering 1 hour from t2 to t3 to his suffering 10 hours from t0 to t0+10 hours.
    However, a.) and b.) might not be the only intuitions one has in the Parfit case. One might in fact get the intuition that:
Intuition c.) (if Parfit at t1 has reason to care for himself, then) Parfit at t1 has reason to prefer his suffering for 10 hours from t0 to t0+10 hours to his suffering for 1 hour from t2 to t3 FOR HIS OWN SAKE (That is, that the rational motivational component of Parfit at t1’s sympathetic concern for himself is a preference (or “inclines him to a preference” / “tends to cause him to have a preference” / “exerts causal influence on him to have a preference”) for his suffering for 10 hours from t0 to t0+10 hours over his suffering for 1 hour from t2 to t3).
On Darwall’s view, intuition c.) would indeed entail that in some sense Parfit’s suffering for 10 hours from t0 to t0+10 hours is better for him than his suffering for 1 hour from t2 to t3, which might seem to contradict intuition a.). However, we would argue that having intuition c.) actually goes hand in glove with an intuition that in some sense it is in fact better for Parfit at t1 to suffer for 10 hours from t0 to t0+10 hours than for him to suffer for 1 hour from t2 to t3 (as some other posts have suggested), just as Darwall’s view would have it, and that this need not contradict intuition a.). We maintain, however, that Darwall’s view can give this result while remaining meta-ethically neutral as between various substantive normative views on welfare over time.
    First, to be fair to Darwall, we think that we need to appreciate the particular kinds of questions his analysis in WRC is intended to answer. As Howard claimed in his previous post, these are the broad questions:
    (Q1) What do judgments about creatures’ welfare mean?, and
(Q2) What is the relationship between creatures’ welfare and agents’ reasons for attitudes (care, desire) and action?
Darwall’s proposal is that we understand the concept of welfare in terms of reasons for care and desire rather than the other way around (as previous posts have suggested, Darwall pretty clearly intends his account as a conceptual analysis of welfare into reasons for care and desire – see especially p. 6-7, 8-9, 11-12), and in particular that we need to understand the concept of an agent’s welfare in terms of reasons for desire conditional on reasons for care for the agent as opposed to the agent’s own desires or reasons for desire. Darwall’s main opponents in WRC are thus people who want to hold that the concept of welfare is explanatorily prior (or in any event, not explanatorily posterior) to those of reasons for care and desire for a creature’s sake, and proponents of what one might call “agent’s own rational preference accounts,” who maintain that an agent’s welfare is to be conceptually identified with the satisfaction of her own rational desires / preferences (or rational desires restricted in something like the way suggested by Overvold (1980) or considered by Parfit (1984)).
In WRC, Darwall does not focus on important issues pertaining to welfare such as those of welfare and its relation to reasons over time (though in the interests of full disclosure he does discuss some such issues in Ch 2, p. 27, 32-35, 42-43) and interpersonal (or more generally intercreatural) comparisons of welfare – but neither do the opposing approaches to the concept of welfare (and its relation to the concepts of reasons for desire, care, and action) at the level of description at which he is arguing against them. Of course, in order to get an exhaustive answer to questions (Q1) and (Q2), one would have to address issues of welfare over time, interpersonal welfare comparisons, and how these relate to our reasons for attitudes and actions, but we would suggest:
1.) We see no reason to think that Darwall’s approach is in any worse shape in terms of its ability to allow for treatments of these issues pertaining to the concept of welfare and its relationship to those of reasons for attitudes and actions. Note that proponents of taking welfare as explanatorily prior to reasons for care and desire will also owe us an account of how welfare is related to reasons for desire at different points in time. For example, for an exhaustive answer to (Q1) and (Q2), the “welfare prior to reasons” approach still needs to explain how it is that considerations of Parfit’s welfare give Parfit at t1 reason (or permission) to prefer the 10 hours of suffering from t0 to t0+10 hours to the 1 hour of suffering from t2 to t3, while (at least before t0 and after t3) considerations of Parfit’s welfare give us (as well as Parfit?) reasons to prefer his suffering the 1 hour from t2 to t3 to his suffering 10 hours from t0 to t0+10 hours, and how the welfare of different agents is related to our reasons for desire, preference, and care (e.g. what is the bearing of X contributing more to A’s well being than Y contributes to B’s well being, or A’s being better off than B in state of affairs S while both are equally well off in S’, to our reasons to prefer A’s having X vs. B’s having Y, or S vs. S’ obtaining?). (Merely taking welfare as explanatorily prior to reasons for care and desire or claiming that it is clear by the natural light of reason (or some other mode of epistemic access) that one should want good things for creatures because they are good for creatures (rather than what’s good for creatures being determined by certain kinds of reasons for care & desire) does not give us an answer to these questions). An agent’s own rational preference account of welfare will similarly need to explain these kinds of things for an exhaustive answer to Q1 and Q2 (in fact, as Howard argues in a paper that attempts to extend Darwall’s general approach to the concept of welfare to the cases of interpersonal comparative welfare concepts, “Rational Care and Interpersonal Welfare Comparisons,” agent’s own rational preference accounts look busted when it comes to interpersonal comparisons (for reasons familiar to economists), and even refinements like one drawn from an interpretation of Arrow (1971) fail due to an interpersonal version of the “scope problem” that Darwall levels against the basic, non-interpersonal comparative version of the approach (p.25-31)).
And 2.) We think that, in WRC, Darwall can (and does) engage in an important debate with proponents of these rival approaches to answering (Q1) and (Q2) without taking up these other issues (i.e. welfare over time, interpersonal welfare comparisons, and how these relate to our reasons for attitudes and actions).
Now, all of this said, we should concede that i.) whether or not we should prefer Darwall’s approach to (Q1) and (Q2) over those of his opponents may in part depend upon the relative success of his approach in handling such issues as those of well-being over time and interpersonal comparisons (for example, Howard thinks this is indeed part of the relevance of his attempt to extend Darwall’s view to the case of interpersonal comparative welfare concepts), and ii.) because Darwall does indeed cast his approach to (Q1) and (Q2) as a conceptual analysis (instead of something fuzzier like a “picture of how it could go”, etc.), he may incur additional explanatory burdens. Our contentions are simply that i.’) we should recognize that in WRC Darwall can go some of the way towards arguing for his approach to (Q1) and (Q2) without engaging with issues like those of welfare over time and interpersonal welfare comparisons, and ii.’) we should not read into Darwall’s analysis as presented in WRC more detail than it would be fair to expect of an analysis whose primary objective is to answer the broad questions (Q1) and (Q2) abstracted from issues like those of welfare over time and interpersonal welfare comparisons.
It may well be that when we press on our concept of a creature’s welfare (simpliciter), we can see that we do have (as suggested by some earlier posts) a concept of the welfare of a creature at a particular time, which must be reconciled with other welfare concepts we have (like a concept of a creature’s welfare “on the whole”, or of a creature’s welfare PERIOD), and that our more coarse-grained talk of ‘a creature’s welfare’ simpliciter is somewhat ambiguous at the more fine-grained level. We think that we should take the target of Darwall’s analysis in WRC to be our more coarse-grained talk of ‘a creature’s welfare’ simpliciter, which we can then precisify or extend into our more fine-grained concepts of ‘welfare at a time’ and ‘welfare of a life on the whole / PERIOD’. We think that Darwall’s view can easily be extended in this way while remaining neutral as to how these concepts relate and which is normatively most important.
    Here, then, would be the extensions. We have Darwall’s general analysis as presented in WRC:
    (D for W simpliciter): X is better for C than Y = insofar as one has reason to care for C, one has reason to prefer X to Y for C’s sake
    [Or to clarify some of the language:
X is better for C than Y = if one has reason to care for (i.e. feel sympathetic concern towards) C, then the rational motivational component of this sympathetic concern that one feels towards C is a preference (or “inclines one to a preference” / “tends to cause one to have a preference” / “exerts causal influence on one to have a preference”) for X over Y (which, by WCP, means that given the truth of the antecedent of this conditional, one has SOME reason to have an all-told preference for X over Y, but not necessarily most reason to have this all-told preference (as one can rationally be in other, stronger, motivational states to the contrary))]
    We can precisify this into an account of the concept of ‘the welfare of a creature at a particular time’ as follows:
    (D for W of C at t): X is better for C at t than Y = insofar as one has reason to care for C at t, one has reason to prefer X to Y for C at t’s sake.
    [Or to clarify the language:
X is better for C at t than Y = if one has reason to care for (i.e. feel sympathetic concern towards) C at t, then the rational motivational component of this sympathetic concern that one feels towards C at t is a preference (or “inclines one to a preference” / “tends to cause one to have a preference” / “exerts causal influence on one to have a preference”) for X over Y (which, by WCP, means that given the truth of the antecedent of this conditional, one has SOME reason to have an all-told preference for X over Y, but not necessarily most reason to have this all-told preference (as one can rationally be in other, stronger, motivational states to the contrary))]
As well as precisify it into an account of the concept of ‘the welfare of a creature on the whole / PERIOD’ as follows:
(D for W of C on the whole / PERIOD): X is better than Y for C on the whole / PERIOD = insofar as one has reason to care for C on the whole / PERIOD, one has reason to prefer X to Y for C on the whole / PERIOD’s sake.
    [Or to clarify the language:
X is better than Y for C on the whole / PERIOD = if one has reason to care for (i.e. feel sympathetic concern towards) C on the whole / PERIOD, then the rational motivational component of this sympathetic concern that one feels towards C on the whole / PERIOD is a preference (or “inclines one to a preference” / “tends to cause one to have a preference” / “exerts causal influence on one to have a preference”) for X over Y (which, by WCP, means that given the truth of the antecedent of this conditional, one has SOME reason to have an all-told preference for X over Y, but not necessarily most reason to have this all-told preference (as one can rationally be in other, stronger, motivational states to the contrary))]
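To make the common shape of these three analyses explicit, here is a minimal schematic rendering (the notation is ours, not Darwall’s): let i index the three cases (C simpliciter, C at t, and C on the whole / PERIOD), let R(a, C_i) say that agent a has reason to care for C_i, and let P(a, X ≻ Y | C_i) say that a has some reason to prefer X to Y for C_i’s sake. Then each analysis instantiates:

\[
X >_{C_i} Y \;=_{df}\; \forall a \, \bigl( R(a, C_i) \rightarrow P(a,\ X \succ Y \mid C_i) \bigr)
\]

(By WCP, the consequent supplies SOME reason for an all-told preference for X over Y, not necessarily most reason.)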
    Recall, now, Howard’s numbering in the Parfit case:
    t1 = the time at which Parfit wakes up,
    t0 = a time on the day before t1 that occurs within the period of time that Parfit does not remember at t1 (in the scenario in which Parfit has the 10 hour operation, this is the time at which the operation begins)
t2 = a time after t1 on the same day on which t1 occurs (in the scenario in which Parfit has the 1 hour operation, this is the time at which the operation begins)
    t3 = a time after t2 on the same day on which t2 (& thus t1) occurs (in the scenario in which Parfit has the 1 hour operation, this is the time at which the operation ends; t3 = t2 + 1 hour)
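Putting the stipulations together, the ordering is (our summary; nothing here goes beyond Howard’s numbering above):

\[
t_0 \;<\; t_0 + 10\ \text{hours} \;<\; t_1 \;<\; t_2 \;<\; t_3 = t_2 + 1\ \text{hour}
\]

where the 10-hour operation (if it occurred) runs from t0 to t0 + 10 hours on the day before Parfit wakes at t1, and the 1-hour operation (if it is to occur) runs from t2 to t3 later that same day.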
    Here, then, are the more fine-grained intuitions we think people will get in the case:
Intuition d.) Before t0 and after t3, it is rational for everyone who has reason to care for Parfit at these times (including Parfit himself at these times) to prefer, for Parfit’s sake, that Parfit suffer for 1 hour from t2 to t3 rather than for 10 hours from t0 to t0 + 10 hours. [i.e. the rational motivational component of the sympathetic concern for Parfit that everyone before t0 and after t3 feels at these times is a preference (or “inclines them to a preference” / “tends to cause them to have a preference” / “exerts causal influence on them to have a preference”) for Parfit’s suffering for 1 hour from t2 to t3 over his suffering for 10 hours from t0 to t0 + 10 hours]
Intuition e.) At t1, it is rational for everyone who has reason to care for Parfit at t1 (including Parfit himself at this time) to prefer, for Parfit’s sake, that Parfit suffer for 10 hours from t0 to t0+10 hours rather than for 1 hour from t2 to t3. [i.e. the rational motivational component of everyone at t1’s sympathetic concern for Parfit at t1 is a preference (or “inclines them to a preference” / “tends to cause them to have a preference” / “exerts causal influence on them to have a preference”) for Parfit’s suffering for 10 hours from t0 to t0+10 hours over his suffering for 1 hour from t2 to t3].
Intuition f.) It is rational for everyone who has reason to care for Parfit on the whole / PERIOD (including Parfit himself) to prefer, for Parfit’s sake, that Parfit suffer for 1 hour from t2 to t3 rather than for 10 hours from t0 to t0 + 10 hours. [i.e. the rational motivational component of everyone’s sympathetic concern for Parfit on the whole / PERIOD is a preference (or “inclines them to a preference” / “tends to cause them to have a preference” / “exerts causal influence on them to have a preference”) for Parfit’s suffering for 1 hour from t2 to t3 over his suffering for 10 hours from t0 to t0 + 10 hours].
Intuition g.) It is better for Parfit at times before t0 and after t3 for him to suffer for 1 hour from t2 to t3 than for 10 hours from t0 to t0+10 hours.
Intuition h.) It is better for Parfit at t1 for him to suffer for 10 hours from t0 to t0+10 hours than for him to suffer for 1 hour from t2 to t3.
    Intuition i.) It is better for Parfit on the whole / PERIOD for him to suffer for 1 hour from t2 to t3 than for 10 hours from t0 to t0+10 hours.
Note, however, that given (D for W of C at t), intuitions g.) and h.) follow from intuitions d.) and e.), and that given (D for W of C on the whole / PERIOD), intuition i.) follows from intuition f.). We would argue that to the extent that people get intuition c.) [i.e. (if Parfit at t1 has reason to care for himself, then) Parfit at t1 has reason to prefer, for his own sake, that he suffer for 10 hours from t0 to t0+10 hours rather than for 1 hour from t2 to t3], they are actually getting intuition e.), which is an intuition about what Parfit at t1 has reason to prefer for the sake of Parfit AT T1 (and not necessarily for the sake of Parfit on the whole / PERIOD), and which entails (via D for W of C at t) intuition h.). Since intuition e.) is NOT an intuition about what Parfit has reason to prefer for the sake of Parfit on the whole / PERIOD, it (and consequently intuition c.), if we are correct that intuition e.) is actually the content of intuition c.)) does NOT entail (via D for W of C on the whole / PERIOD or D for W of C at t) the negation of intuition i.). Similarly, we would argue that to the extent that people get intuition a.) [i.e. that Parfit’s suffering for 1 hour from t2 to t3 is better for him than his suffering for 10 hours from t0 to t0 + 10 hours], they are actually getting intuitions i.) or g.) (which are intuitions about what’s better for Parfit on the whole / PERIOD or better for Parfit at times before t0 or after t3), and these entail only intuitions f.) or d.), NOT the negation of intuition e.). Hence, if we are correct that the intuition a.) that people are getting in the Parfit case is really an instance of intuitions i.) or g.), and the intuition c.) that people are getting is really an instance of intuition e.), then intuition c.) is perfectly consistent with intuition a.).
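The bookkeeping in the preceding paragraph can be compressed as follows (a schematic summary in our notation, with the letters naming the intuitions above):

\[
\begin{aligned}
\text{(D for W of C at } t\text{):}\quad & d \vdash g, \qquad e \vdash h \\
\text{(D for W of C on the whole / PERIOD):}\quad & f \vdash i
\end{aligned}
\]

Since h and i deploy different welfare concepts (betterness for Parfit at t1 vs. betterness for Parfit on the whole / PERIOD), neither analysis licenses inferring the negation of i from e or h; so reading intuition a.) as i.) or g.), and intuition c.) as e.), leaves the whole set {d, e, f, g, h, i} consistent.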
Note also that Darwall’s general meta-ethical view can remain perfectly neutral as to the relative normative import of the concepts of ‘the welfare of a creature at a particular time’ and ‘the welfare of a creature on the whole / PERIOD’, in the sense that he can remain neutral as to whether our rational concern for Parfit at any particular time should take the form of sympathetic concern towards Parfit on the whole / PERIOD or sympathetic concern towards Parfit at that time, and as to which of these kinds of concern gives us weightier reasons for desire / preference / action (as well, indeed, as what the rational motivational components of these different kinds of sympathetic concern would be). (Granted, some of the things Darwall says around pp. 32-35 and 42-43 may suggest that he thinks normative priority should go to the welfare of a creature on the whole, but we would suggest 1.) that the notion of care for a creature at a time need not be care for a particular time-slice of a creature (it could be care for a temporally extended entity, but just from a particular temporal “perspective”), and 2.) that even if Darwall does have a view on the normative priority of the concepts of the welfare of a creature at a point in time and the welfare of a creature’s life on the whole / PERIOD, this should not be taken to be part of his general meta-ethical approach / view in WRC, which can be precisified into accounts of both notions and can remain neutral as to their normative priority in the way we have discussed.)

  64. Hey John and Howard,
    Thanks very much for all that. Obviously, there’s quite a lot there, and it may take me some time to digest (especially given that I’m already feeling the crush of the new semester). But thank you. I’ll try to give it the attention it deserves ASAP.

  65. Chris,
    How about if one revised RCTW to:
RCTW’: The occurrence of x at t would be good for A iff: anyone who cared for A prior to t should, prior to t, desire that x occur at t, for A’s sake.
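To make the quantifier structure of RCTW’ explicit, here is one schematic rendering (the predicate names are ours, for illustration only; this reads the ‘should’ as attaching at each pre-t time at which one cares):

\[
\mathrm{GoodFor}_A(x \text{ at } t) \;\leftrightarrow\; \forall y\, \forall s\!<\!t\, \bigl( \mathrm{Cares}(y, A, s) \rightarrow \mathrm{Should}_s\bigl(y,\ \text{desire that } x \text{ occur at } t \text{ for } A\text{’s sake}\bigr) \bigr)
\]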
    Advantages of this:
    1) This avoids what you objected to about Campbell’s suggestion, because RCTW’ doesn’t assume that the goodness of something is time-indexed (it’s only the occurrence of the event or state of affairs that is time-indexed).
    2) RCTW’ tolerates, but does not require, the view that extreme bias towards the future is rational. (It doesn’t say that after t you should no longer care about x’s having occurred at t; it is just neutral on that.)

  66. Hey Mike,
    That’s an interesting suggestion. And I agree that it does seem to avoid my original argument while also avoiding what I objected to about Campbell’s proposal.
I’ll have to think more about it. For now I’ll just say (and perhaps you’ll agree?) that it seems quite ad hoc. Fitting-attitude accounts of value (like Darwall’s) hope to reduce value to obligation. My argument was supposed to show that we can’t do this for welfare value because, while the welfare value of some event for some person can’t change over time, a person’s obligations with respect to that event can. Your suggestion shows that if we throw in certain temporal bells and whistles, we can “rig up” the view to generate the right results.
    But if this is what we have to do to get the view to give us the right results, is it reasonable to conclude that an event’s being good for a person does not, after all, consist in what certain people should want? That RCTW’ is instead merely a necessarily true biconditional rather than a reductive analysis? That RCTW’ gives us only an accident of value and not its essence?
    I admit that when a theory that is necessarily true if true at all strikes us as ad hoc, it’s hard to know what conclusions to draw. (When it comes to theories that are contingently true if true at all, I think we acknowledge that an ad hoc theory could turn out to be true, but we nevertheless adhere to a methodological principle according to which we suppose that our world is not described by ad hoc theories. [Otherwise, we’d have to take Ptolemaic astronomy more seriously.] Do we adhere to a similar principle for purportedly necessarily true theories?)
Anyway, what do you think? Does RCTW’ strike you as ad hoc? If so, what follows?

  67. Chris,
    I don’t think anything in the neighborhood of RCTW is right, essentially because of the Euthyphro problem.
    However, RCTW’ doesn’t strike me as very ad hoc. We have independent grounds (namely, the very example you cite) for thinking that there is an important temporal asymmetry, that it makes sense to care about what happens in the future in a way (or to a degree) that one need not care about what happened in the past. So when someone proposes that interests can be reduced to facts about rational care, it seems to me natural to think “Well, of course it’s going to have to be rational caring about the future.”
