The poll question was: Assuming that you’re a consequentialist (if you’re not one, then please don’t take this poll), do you believe that the best outcome available to a given agent is always the one that she ought to prefer to all other available alternatives?

And, as of 9:20 a.m. on 9/16/08, the results were as follows:

Total Votes: 47

Yes: 48.9%

No: 42.6%

Not sure: 8.5%

So, now, let me explain why I was interested in this poll. I’m interested in the extent to which consequentialists are committed to the idea that it is never morally wrong for an agent to act so as to bring about the outcome that she ought to prefer (i.e., the outcome that she has decisive reason, all things considered, to prefer) to all other available alternatives. I’m interested because I find it quite implausible to hold that S ought to perform y as opposed to x while at the same time holding that S ought to prefer Wx to Wy (where Wx is the possible world that would be actual if S were to perform x, and likewise for Wy). If S can choose which possible world will be actual (and we’re supposing that she can), and if she ought to want the one in which she performs x (i.e., Wx) to be actual, then how could it be that she ought not to perform x but ought instead to perform y?

But now it seems at least coherent to suppose that S ought to prefer Wx to Wy even though Wy is slightly better than Wx, for, as Scanlon explains, “[t]o claim that something is valuable (or that it is ‘of value’) is to claim that others also have reason to value it, as you do. [Yet w]e can, quite properly, value some things more than others without claiming that they are more valuable” (1998, 95). To illustrate, suppose that S is me, that Wx is the world in which I save my daughter from drowning, and that Wy is the world in which I instead save some stranger’s child from drowning. Assume that both children are drowning and that I can save only one of the two. And assume that, because the stranger’s child is slightly more gifted than my own, it would be slightly better that Wy be actualized. It seems to me right to claim that I ought to prefer Wx to Wy (i.e., to value Wx above Wy) even though Wy is better than Wx. A number of philosophers agree and argue that, for this reason, those who seek to analyze ‘good’ and ‘better’ in terms of reasons for desiring and preferring must be careful to restrict their analyses to agent-neutral reasons for desiring and preferring.

But I’m wondering which of these two claims consequentialists take to be more fundamental: (a) that the right act is the one that brings about the best outcome, or (b) that the right act is the one that brings about the outcome that the agent ought to prefer to the available alternatives? Perhaps they think that they needn’t choose, for perhaps they deny that anyone ever ought to prefer a worse state of affairs to a better one.
Thoughts?


10 Replies to “The First Poll: Results”

  1. I’m not just interested in polling data here. If you have any thoughts about these issues, about whether, for instance, it is plausible to hold that S ought to perform y as opposed to x while at the same time holding that S ought to prefer Wx to Wy (where Wx is the possible world that would be actual if S were to perform x, and likewise for Wy), or about whether one ought ever to prefer a worse world to a better world, I’d love to hear them.

  2. Doug,
    It sounds to me like your target (or one of them) is the distinction between the best decision procedure and the truth-maker. You seem to be saying that if someone followed the best decision procedure, then we should think the answer they get is not merely not blameworthy but correct. But it seems to me that in some cases we should like the decision-procedure vs. truth-maker distinction. Imagine a poker player for whom it is true that the best strategy they could employ to win over the long run is to trust their gut, and who never has good evidence that any particular occasion is not a good one for trusting their gut. So they ought to trust their gut, one wants to say. Still, one wants to say that in some cases trusting their gut will lead them astray: it will not get them the right answer about how to bet. So here it seems plausible to me to say that they, in some sense, ought to want to call the bet (because that is what their generally reliable gut tells them to do) but that doing what they ought to want to do is not the correct answer.

  3. To what I wrote above I could imagine you responding that I switched between the objective and the subjective ought. Your claim, you might say, is that if one ought to want to O, then one ought to O, so long as the ought in each case stays the same.
    So let me try another way in. It seems clearish to me that it could be “for the best” (among the available options) that one want to O yet be “for the best” that one not O. The evil demon will destroy the world if one fails to want to O but will harm many people if one O’s. So this might be thought to support the view that one objectively ought to want to O but objectively ought not O. I wonder if you are okay with the “for the best” formulation and, if so, whether you could say why it seems to you that the ought formulation can’t have the same structure.
    I could imagine one might ask how the agent can be blameworthy for acting on motives they ought to have, or how the agent can reliably do what they ought on this scheme. Are these the sorts of thoughts that tempt you?

  4. David,
    I don’t think that the fact that my wanting to O would have good consequences is a reason for me to want to O. Rather, I think that this fact is only a reason for me to want to want to O and to intend to do what might cause myself to want to O. So, in your example, I would deny that just because an evil demon will destroy the world if I fail to want to O, it follows that I have some reason to want to O, let alone that I objectively ought to want to O.

  5. Doug,
    That seems to me to be a red herring here. You like the principle that it is never morally wrong for an agent to act so as to bring about the outcome that she ought to prefer. Thus, if the agent ought to prefer that she prefer doing X, then by your principle she cannot be wrong in preferring to X. So whether or not the evil demon gives her a reason to want to X directly, he does make her wanting to X something that she ought to do. Intuitively, he makes it good that she wants to X, and this, one way or another, trickles down to her reasons. But he does not make it good that she X, and this is why it seems to me sensible to say that there is something to be said for her wanting to X which cannot be said for her actually X-ing.

  6. David,
    You’re assuming that I take the having of various attitudes, such as believing, intending, and preferring, to be actions in the sense relevant to the application of the principle. I don’t. I think that I ought to prefer that I be happier and that I would be happier if I believed that I had an immortal soul, but I don’t think that I ought to believe that I have an immortal soul. Indeed, I don’t think that I have any reason at all to believe that I have an immortal soul.
    I’m with Scanlon in thinking that reasons for doing x are best thought of as reasons for intending to do x. Thus, believing that p and desiring that p are not actions in the relevant sense, for believing that p and desiring that p don’t entail intending to do anything.
    Perhaps, then, I should formulate my principle in such a way as to avoid such possible confusion. Here it goes:
    DP: It is never the case both that S ought to intend to do y as opposed to x and that S ought to prefer Wx to Wy (where Wx is the possible world that would be actual if S were to perform x, and likewise for Wy).
    From DP and the fact that I ought to prefer that I prefer Wy to Wx, it does not follow that I ought to prefer Wy to Wx.
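    Put semi-formally, and only roughly, DP says: it is not the case both that O(S intends to do y as opposed to x) and O(S prefers Wx to Wy), where ‘O(…)’ is read as ‘it ought to be that …’. The non-entailment just noted is then a matter of scope: O(S prefers that S prefers Wy to Wx) does not entail O(S prefers Wy to Wx), for the ‘ought’ governs only the higher-order preference.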

  7. So you are saying that the wrong kind of reasons to O are really no reasons to O; only reasons of the right kind are real reasons. Then, perhaps, you are thinking that while the evil demon can make it good if I want to O yet not good if I O, still he cannot thereby make me have a reason to O. And if I act in accord with my moral reasons, I must be acting morally acceptably? Does that feel like it is along your lines?
    So suppose I have a reason to want to O and a reason to cause myself to not act on my reason to O. The latter reason could be the more powerful reason, I presume. Why not think that in some possible cases it is morally unacceptable to fail to cause myself to not act on my reason to O?

  8. David,
    I’m confused about a number of things in your latest comment. I thought ‘O’ was supposed to stand for any attitude (believing, desiring, intending, etc.), but what is it “to fail to cause myself to not act on my reason to,” say, believe that p? I don’t even know what it is to cause myself to not act on my reason to believe that p. Can I even act on reasons to believe, intend, or desire? Rationally, when I judge that I have decisive reason to O, I don’t act on these perceived reasons; rather, I respond (involuntarily) to them by O-ing. I don’t respond by acting on my reasons to O (that is, by intending to O); I respond by O-ing.
    Why do you mention moral reasons? I never mentioned moral reasons when stating my view. They just seem to come up out of the blue.
    Are you suggesting that if I answer ‘yes’ to your question “Why not think that in some possible cases it is morally unacceptable to fail to cause myself to not act on my reason to O?”, then I will have to concede that DP is false? If not, what are you driving at? Are you even suggesting that DP is false? If you are, could you perhaps give a concrete example? I’m having a hard time with your abstract examples.
    Sorry I can’t be more helpful, but I genuinely found your comment quite cryptic.
    I can say that I understood the first part of your comment. And, yes, I’m saying that what some philosophers cite as being reasons of the wrong kind to desire that p (when, say, giving a fitting-attitude analysis of value) are not even reasons to desire that p, but only reasons to want to desire that p and to intend to act so as to cause oneself to desire that p. I do, however, still think that there is a genuine wrong-kind-of-reasons problem, at least when it comes to giving a fitting-attitude account of value: it’s what some call the partiality challenge to the fitting-attitude account of value. For example, I have reason to prefer the state of affairs in which my child is saved to the state of affairs in which some stranger’s child is saved, but, from this, it doesn’t follow that the former is better than the latter.

  9. Sorry to be cryptic.
    I wanted “O” to take ordinary things one can have a reason to do, say, go to the store or take a walk, as in Williams’ usage, which I think of as standard in reasons-for-action talk.
    I introduced moral reasons because your conclusion is in terms of what is morally acceptable, and so I was seeking to make all this relevant to morality.

  10. “So suppose I have a reason to want to O and a reason to cause myself to not act on my reason to O. The latter reason could be the more powerful reason, I presume. Why not think that in some possible cases it is morally unacceptable to fail to cause myself to not act on my reason to O?”
    Okay, so now, insofar as I think that I understand what you’re saying, my response to your question is: “Yes, why not? Did I say anything to the contrary?”
    You certainly can have *a* reason to want Wx to obtain and yet have better reason to intend to do y.
    I’m still not seeing how your most recent comments fit in with the dialectic. Are you suggesting that there is some counterexample to DP? Or are you asking why I find DP intuitive? I’m sorry to be slow on the uptake, but I’m still not getting it.
    By the way, I don’t think that what it is morally permissible to do is purely a function of moral reasons. I think that non-moral reasons can affect whether or not you are morally required to do what the balance of moral reasons supports doing.
