It seems to me that there are not only reasons for action, but also reasons for belief and even reasons for having certain feelings and attitudes. But even if you don’t accept that there are non-instrumental normative reasons to want something or to prefer one thing to another, let’s just grant, for the sake of argument, that there are. If we grant this, it seems that we should also grant that a complete moral theory should tell us not only how we morally ought to act but also what we morally ought to feel, desire, prefer, etc. If this is so, we should, I think, distinguish act-consequentialism from its rivals according to how they prioritize the following two questions:

Q1  Morally speaking, which of the available outcomes (i.e., those which an agent can actualize through her action or inaction) should an agent prefer to which others? Or, in other words, which of the available outcomes should an agent rank above which others in terms of the moral desirability of their obtaining?

Q2  Morally speaking, which of the available acts should an agent perform?

Act-consequentialist theories, as we all know, come in two parts: (1) a principle for ranking outcomes and (2) a principle for determining an act’s normative status on the basis of that ranking. Given this structure, act-consequentialism (hereafter, “AC”) is committed to the priority of Q1 over Q2 in that Q1 must be answered before Q2 can be answered. By contrast, non-AC denies that Q1 is prior to Q2 and, thus, holds that Q2 can be answered independently of Q1. So it is the issue of whether or not Q1 is prior to Q2 that distinguishes AC from non-AC.
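
To fix ideas, here is one schematic way of rendering this two-part structure for a simple maximizing version of AC (the notation is mine, not anything AC itself mandates). Let $A$ be the set of acts available to the agent, let $O_a$ be the total outcome of performing act $a$, and let $\succ$ be the moral ranking of outcomes delivered by the answer to Q1. Then part (2) of the theory might read:

$$a \text{ is morally permissible} \iff \neg\,\exists\, a' \in A \text{ such that } O_{a'} \succ O_a$$

The right-hand side cannot be evaluated until the ranking $\succ$ is in hand, and that is precisely the sense in which Q1 must be answered before Q2 can be.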

Because AC takes Q1 to be prior to Q2, it is committed to the compelling idea that it could never be wrong for an agent to perform a given act even though she morally ought to prefer the state of affairs where she performs that act to any of the states of affairs that would obtain if she didn’t perform that act. And it is because non-AC denies that Q1 is prior to Q2 that it allows for the troubling possibility that it could be wrong for an agent to perform a given act even though she morally ought to prefer the state of affairs where she performs that act to any of the states of affairs that would obtain if she didn’t. Note that when we talk about “the state of affairs that would obtain,” this includes everything that is the case should the agent perform the act in question, including the fact that she has performed the act in question and that she did so with certain motives and intentions. Thus what’s so puzzling about non-AC is that it allows for the possibility that an agent morally ought to prefer that she act in a way that she morally ought not to act. This puts the agent in the position of having to desire that she does what she ought not to do. Of course, this wouldn’t be troubling at all if we were talking about what she ought to do from a moral standpoint but what she ought to prefer from a self-interested standpoint. But, here, we’re talking about what she ought to do and what she ought to prefer from the very same standpoint: the moral standpoint.

To make this a bit more concrete, consider a version of the divine command theory that I’ll call “DCT.” DCT holds both (1) that it is morally wrong for P to perform x if and only if God has forbidden P from performing x and (2) that P morally ought to rank O1 above O2 if and only if God prefers O1 to O2. Curiously, DCT allows for the following possibility. Suppose that P must choose to do either x or y. Ox is the outcome that will result from P’s performing x and Oy is the outcome that will result from P’s performing y. Now suppose that God has forbidden P from performing x and yet God prefers Ox to Oy. On DCT, then, P ought to perform y as opposed to x even though she morally ought to prefer Ox to Oy. So DCT implies that P morally shouldn’t act in the way that P morally should prefer that she acts.
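
To make the structure of the divergence explicit, here is a minimal formalization (the shorthand is mine, not part of DCT as stated): write $W(x)$ for “it is morally wrong for P to perform x,” $F(x)$ for “God has forbidden P from performing x,” and $O_1 \succ_P O_2$ for “P morally ought to rank O1 above O2.” DCT’s two clauses then read:

$$W(x) \leftrightarrow F(x) \qquad\qquad (O_1 \succ_P O_2) \leftrightarrow \text{God prefers } O_1 \text{ to } O_2$$

Supposing that God forbids x yet prefers Ox to Oy yields both $W(x)$ and $O_x \succ_P O_y$ at once: P morally ought not to do x, and yet P morally ought to prefer the outcome of her doing x.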

Rule-consequentialist theories are similarly puzzling. Rule-utilitarianism, for instance, holds that an outcome with more aggregate utility ranks above an outcome with less aggregate utility (indeed, it is on the basis of this ranking that we are to determine what the ideal code of rules is), and yet it holds that agents should sometimes refrain from doing what will bring about the outcome with the most aggregate utility. But if an outcome with more aggregate utility is morally preferable to one with less, then why should the agent act so as to bring about an outcome with less aggregate utility, that is, an outcome that is less morally desirable? This is indeed puzzling. And this explains why the move from act-consequentialism to rule-consequentialism has seemed an unattractive solution to the problem of reconciling consequentialism with our commonsense moral intuitions: in order to accommodate our intuitions, rule-consequentialism must give up the very thing that is most compelling about consequentialism, that is, the idea that it is never wrong for an agent to perform an act she morally ought to prefer that she performs.

So far, I’ve been talking about agent-neutral versions of non-AC; both DCT and rule-utilitarianism hold that all agents should always rank the same two states of affairs identically. It’s interesting to note, though, that we find the same oddness that we found with DCT and rule-utilitarianism in agent-relative versions of non-AC. Imagine, for instance, a version of deontological prudence that I’ll call “DP.” DP holds both (1) that, prudentially speaking, P should always prefer a state of affairs where P has more utility to one where P has less utility and (2) that certain types of acts (e.g., charitable acts) are intrinsically imprudent. So P prudentially ought not to donate P’s money to charity even if doing so will maximize P’s utility. Thus, on DP, it is possible that an agent prudentially ought not to perform the act that will bring about the state of affairs that she prudentially ought to prefer to all other available alternatives.

Consider also rule-consequentialist prudence (hereafter, “RCP”), according to which a given agent ought to follow that set of rules that, if internalized by her, would lead her to produce more utility for herself than any other alternative set of rules would. On this view, one ought to follow a rule not to do x even if in this particular instance doing x would maximize her utility. And since RCP holds that an agent prudentially ought to prefer that her utility is maximized, RCP allows for the puzzling possibility that an agent prudentially ought not to perform the act that she prudentially ought to prefer that she performs, for, on RCP, she prudentially ought to prefer the state of affairs where she acts imprudently and maximizes her utility by violating the ideal code to the state of affairs where she complies with the ideal code. By contrast, act-consequentialist prudence (hereafter, “ACP”) never holds that it is imprudent to do what will maximize one’s utility. So ACP never holds that an agent prudentially ought not to perform the act that she prudentially ought to prefer that she performs.

So we’ve seen that, in contrast to consequentialist theories, non-consequentialist theories allow there to be a conflict between how an agent ought to act and how that agent ought to want herself to act. This seems troubling. A moral theory that holds that an agent ought to act a certain way but not want herself to act in that way creates a kind of moral schizophrenia in the agent, where she’ll have good moral reason to regret it if she acts as she morally ought to. Fortunately, consequentialist theories avoid such troubling implications and hold that agents should always act in the way they ought to want themselves to act. And this, I believe, is what has seemed so compelling about act-consequentialist theories.

19 Replies to “What’s So Compelling about Act-Consequentialism and So Puzzling about Non-Act-Consequentialism”

  1. Hey, cool that the blogosphere has put me in touch with old friends (this is Aaron Boyden from Santa Barbara). I have, over time, become increasingly sympathetic to pure act utilitarianism. About the only thing I think can be said for rule utilitarianism is that it’s probably the way the laws should be set up. Government officials just can’t be trusted to act responsibly if their instructions are “maximize utility”; they need much more specific directions if there’s to be any hope that they won’t do more harm than good. Still, even if in some cases it should be illegal for someone to maximize utility, that still seems to me to always be the right thing to do.

  2. No doubt you’ll think this is different, but I’ll ask you to explain why. Act-consequentialists of a certain stripe have frequently claimed that, although the right action is the one that causes the best consequences, people should be motivated at the level of practice to act in accordance with simple, general rules of commonsense morality, or something like that. So do indirect act-consequentialists also suffer from at least a similar kind of moral schizophrenia?

  3. Hey, Doug. Provocative stuff. I’m inclined to say that at least most moral theories could be set up to tether the preference ranking to the action ranking. So rule consequentialism, for example, could say that, morally speaking, an agent ought to rank highest the state of affairs in which the utility-maximizing rules are acted on. (Then RC could hold that utility should be maximized, via such-and-such rules, because it alone has non-moral but morally relevant value that makes it most [non-morally] preferable.) Or deontology might hold that one morally ought to most prefer a state of affairs in which one does one’s moral duty, leaving it open that this might be less than most preferable from the non-moral perspective. These kinds of theories still might be schizophrenic, but not morally schizophrenic, insofar as their moral rankings for both states of affairs and actions are aligned, even if they’re misaligned with the non-moral.

  4. Protagoras: Good to hear from you. Thanks for the comment.
    Kyle: Good question! I probably need to think about indirect act-consequentialism (IAC) some more, but here are my initial thoughts. On AC (direct or indirect), it will never be the case that an agent ought to wish that she had acted wrongly. If the agent follows a rule of thumb and maximizes the good, this will accord with what the agent should have hoped for. Of course, on IAC, sometimes an agent will follow a rule of thumb and this will result in a sub-optimal outcome, which will be contrary to what the agent should have hoped for. So, in such unfortunate situations, the agent ought to wish that she had deviated from the decision procedure. This doesn’t seem problematic to me, though. I often wish I had acted contrary to the best available decision procedure. For instance, I’ll be playing blackjack and I’ll take another card when the decision procedure tells me to, but when, as a result, I bust, I wish that I hadn’t followed the decision procedure. So I don’t see a problem with the results of following a decision procedure coming apart from what we morally ought to prefer to come about; decision procedures are, after all, just meant to be guides. But what does seem problematic is the case where I do the right thing, and the moral theory that tells me that I’ve done the right thing also tells me that I ought to prefer (or wish) that I had done the wrong thing.

  5. Josh: You’re right that a deontological theory, because it denies that Q1 is prior to Q2, can hold that an agent ought to prefer the state of affairs where she performs x to the state of affairs where she performs y just in case she ought to perform x as opposed to y. That is, the deontologist can avoid moral schizophrenia by holding that we can’t give an answer to Q1 until we have an answer to Q2. So I guess my objection only applies to non-act-consequentialist theories that hold that we can answer Q1 without answering Q2. Rule-consequentialism, as typically construed (I’m not sure that I’m happy with your alternative construal), certainly falls in this category. But I think that many deontological theories fall in this category as well. That is, many deontological theories will, for instance, admit that it is morally preferable for there to be fewer murders but wrong to commit one murder to prevent five murders.

  6. There’s something I don’t understand here. You say:

    And it is because non-AC denies that Q1 is prior to Q2 that it allows for the troubling possibility that it could be wrong for an agent to perform a given act even though she morally ought to prefer the state of affairs where she performs that act to any of the states of affairs that would obtain if she didn’t.

    In order to avoid the troubling possibility you describe, one need only hold the following:
    (A) For any act X, if the agent morally ought to prefer the state of affairs in which X is performed to any of the alternatives, then X is not wrong.
    But I don’t see why someone who denies that Q1 is prior to Q2 cannot hold (A). Perhaps this is because I don’t really understand what it means to say that Q1 is prior to Q2 in the sense of “prior” you have in mind.

  7. Campbell: Let me try to clarify. When I say, “Q1 is prior to Q2,” all I mean is that no answer to Q2 can be given until there is an answer to Q1. So, for instance, since AC holds that the normative status of an act is some “value-promoting” function of how its outcome ranks relative to those of its alternatives, it must hold that Q1 is prior to Q2. We need to know how the outcomes rank before we can determine what act we ought to perform. And given this same commitment, AC cannot allow for even the possibility that one ought to do X but ought to desire that one not do X.
    Nevertheless, I take your point and would concede, as I did in response to Josh, that not all non-act-consequentialists deny (A). But, of course, all act-consequentialists must accept (A). And (A) is, I believe, something very compelling.
    So it seems that moral schizophrenia isn’t a problem for all non-ACists but it is certainly a problem for some non-ACists (e.g., rule-consequentialists).
    Now here’s a further thought just off the top of my head about how I might expand the threat to all non-ACists. It seems that the non-ACist can accept (A) only if she takes Q2 to be prior to Q1 — that is, only if she thinks that what an agent morally ought to desire is just that she does what she morally ought to do. But I’m not sure that it’s plausible to suppose that what an agent morally ought to desire can only be specified once we know what she morally ought to do. Can’t we know that one morally ought to prefer the state of affairs where one saves one’s child from drowning to the one where one can’t be bothered without having to know beforehand that one ought to save one’s child from drowning? In any case, isn’t (A) just an empty truism for the non-act-consequentialist who holds that what an agent ought to desire is just that she does what she ought to do? So, perhaps, what’s so compelling about AC is that it’s committed to (A) being more than just an empty truism. Thoughts?
    And I would be interested in what others take to be the compelling idea behind act-consequentialism.

  8. Doug, I’m still not getting it.
    Now, you say:

    It seems that the non-ACist can accept (A) only if she takes Q2 to be prior to Q1 — that is, only if she thinks that what an agent morally ought to desire is just that she does what she morally ought to do.

    But I don’t see how these two clauses — i.e. the one before “that is” and the one after — are equivalent. It seems that, in your view, the statement
    (B) Q2 is prior to Q1
    is equivalent to
    (C) for any act X, the agent morally ought to desire X iff she ought to perform X.
    But I don’t see how that is so. As far as I can tell, (C) is neutral on the question of which of Q1 or Q2, if either, must be answered first.
    For the record, I’m inclined to say that neither Q1 nor Q2 is prior to the other. I’d like to say — in a holistic, Rawlsian reflective-equilibrium kind of way — that the two questions should be answered at the same time.

  9. Campbell: I’m sorry to have misled you. You’re right: (B) and (C) are not equivalent, and (C) is neutral with respect to whether Q1 or Q2 is prior to the other. But what about my substantive point: the non-act-consequentialist can accept (A) only if she takes Q2 to be prior to Q1. Do you reject this point?
    Here’s the argument. Suppose that one accepts
    (A*): necessarily: for any act X, an agent morally ought to prefer the state of affairs that results from her performing X to any of the other states of affairs that would result were she to perform some alternative iff she morally ought to perform X as opposed to any of the other alternatives.
    Such a person would have to accept one or the other of two possible explanations for this necessary coextension:
    E1: The agent morally ought to prefer the state of affairs that results from her performing X to any of the other states of affairs that would result were she to perform some alternative, because she morally ought to perform X as opposed to any of the other alternatives.
    E2: The agent morally ought to perform X as opposed to any of the other alternatives, because she morally ought to prefer the state of affairs that results from her performing X to any of the other states of affairs that would result were she to perform some alternative.
    Now one can accept E1 iff one accepts that Q2 is prior to Q1, and one can accept E2 iff one accepts that Q1 is prior to Q2. Since the non-act-consequentialist denies that Q1 is prior to Q2, she cannot accept E2; so, if she wants to accept (A*), she must accept E1 and, thus, that Q2 is prior to Q1.
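    Schematically, in my own shorthand (the notation is mine, offered just to make the structure of the argument explicit): write Pref(X) for “the agent morally ought to prefer the state of affairs that results from her performing X to those of its alternatives” and Ought(X) for “the agent morally ought to perform X”. Then:
    $$(A^*)\quad \Box\,\forall X\,[\mathit{Pref}(X) \leftrightarrow \mathit{Ought}(X)]$$
    $$\text{E1: } \mathit{Pref}(X) \text{ because } \mathit{Ought}(X) \qquad \text{E2: } \mathit{Ought}(X) \text{ because } \mathit{Pref}(X)$$
    E1 amounts to taking Q2 to be prior to Q1, and E2 to taking Q1 to be prior to Q2; denying Q1’s priority rules out E2, leaving only E1.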
    Postscript: Now that I think about it some more, I suppose you could just say that “the agent morally ought to prefer the state of affairs that results from her performing X to any of the other states of affairs that would result were she to perform some alternative” just means “the agent morally ought to perform X as opposed to any of the other alternatives” and explain the necessary coextension that way.

  10. Doug,
    You present Q1 and Q2 as if they are two different questions. And on most accounts, they would be: Q1 concerns which outcome an agent should prefer, while Q2 concerns which act an agent should perform. But the distinction between preferring and performing is fairly inconsequential: indeed, if the two were clearly distinct then your challenge to the non-consequentialist would lose its force. (After all, what is paradoxical about non-consequentialism, on your account, is its claim that agents sometimes ought to prefer an outcome that is not the one that would be brought about by the action they ought to perform. If preferring and performing were two unrelated aspects of moral life, then the non-consequentialist would say, So what?)
    So the distinction between Q1 and Q2 must lie in the fact that the former concerns outcomes while the latter concerns actions. And on most accounts, these are importantly different. For instance, according to classical utilitarianism the nature of the outcome (so far as it is relevant to morality) is determined by the amount of happiness it contains; but the nature of an action is typically thought of as determined by something else. (Thus an action that is, by nature, a lie, can according to classical utilitarianism bring about a good outcome.)
    But your account departs from more traditional approaches on this point. You write:
    (BC) “Note that when we talk about “the state of affairs that would obtain,” this includes everything that is the case should the agent perform the act in question, including the fact that she has performed the act in question and that she did so with certain motives and intentions.”
    I’ll call this assumption (BC), to indicate that it represents a Broad Conception of what counts as relevant to the determination of the nature of a given state of affairs.
    Now, there is no reason why one cannot define a notion of ‘state of affairs’ that allows anything and everything—including qualities that would normally be considered to be qualities of the action that brings about the state of affairs in question rather than (what would ordinarily be identified as) qualities of the state of affairs itself—to count in determining the nature of a state of affairs. Having done so, though, you collapse the distinction between Q1 and Q2: granted (BC), determining which state of affairs ought to be brought about just is determining which outcome ought to be preferred. As Campbell writes, Q1 and Q2 must be answered together—indeed, they are the very same question.
    In other words, if we are conceiving of states of affairs in terms of (BC) then it is false to claim that non-consequentialist theories will deny “the compelling idea that it could never be wrong for an agent to perform a given act even though she morally ought to prefer the state of affairs where she performs that act to any of the states of affairs that would obtain if she didn’t perform that act.” A non-consequentialist theory might, of course, allow that there is a sense in which an outcome that contained only one murder, say, could be acknowledged to be superior to an outcome containing five; but she would deny that this implied that this outcome should be ranked “above . . . others in terms of the moral desirability of their obtaining.”
    Actually, to be a bit more sophisticated about it, she would say this: considered solely in terms of number of murders, the 1-murder outcome is superior to the 5-murder outcome; but considered in terms of all relevant moral considerations, and from the perspective of the particular agent in question, the 1-murder outcome (in which the one murder is performed by the agent in question) is morally inferior to the 5-murder outcome, and so ought not to be preferred (since the act that brings it about ought not to be performed.) Nothing puzzling there, so far as I can see.

  11. Troy: Thanks for the comment, but I’m not getting why you think that there is no distinction between Q1 and Q2 if we accept (BC). If there’s no distinction here, how is it that the rule-utilitarian — who, being a welfarist, doesn’t think that the fact that a wrong act has been committed is relevant to assessing the moral desirability of a given state of affairs — can hold that the state of affairs where I violate the ideal code and maximize utility is morally preferable to the state of affairs where I abide by the ideal code and fail to maximize utility while also holding that I ought not violate the ideal code? If the two questions are the same, how is it that the rule-utilitarian gives us different answers in this case, telling me that I ought to prefer one state of affairs but act so as to bring about the other? Surely, we can suppose that the rule-utilitarian accepts (BC) but thinks that the amount of welfare is the only thing that matters in assessing the moral desirability of states of affairs, for one can have a broad conception of what an act’s outcome consists in but a narrow view about what factors about an outcome are morally good.
    Now I have already conceded, as I noted previously in my responses to Josh and Campbell, that the non-act-consequentialist can hold that it is never the case that an agent ought to prefer one state of affairs but act so as to bring about another, but only if she holds that Q2 is prior to Q1 such that what state of affairs an agent ought to prefer depends on what the agent ought to do. You seem to admit as much when you say: “she [the non-act-consequentialist] would say…from the perspective of the particular agent in question, the 1-murder outcome (in which the one murder is performed by the agent in question) is morally inferior to the 5-murder outcome, and so ought not to be preferred (since the act that brings it about ought not to be performed. [emphasis added])”
    So I’ve admitted that the non-act-consequentialist can avoid the moral schizophrenia problem by holding that Q2 is prior to Q1. So if I want to show that there is something puzzling about all non-act-consequentialist theories and not just those, such as rule-consequentialism, that don’t take Q2 to be prior to Q1, I’ll need to further argue that it is implausible to suppose that Q2 is prior to Q1. Right now, I mainly have just a hunch that this is implausible and only some very sketchy thoughts/arguments to back it up. In any case, here’s what I’m thinking. It seems to me that the fact that I have a reason to desire the total outcome (consisting in everything that’s the case) that results from my X-ing explains why I have a reason to X, not vice versa, as those who take Q2 to be prior to Q1 would have it. That is, it seems to me that reasons for desire ground reasons for action, not vice versa, for acting for a reason just is acting for the sake of some end, but if that end isn’t something the agent has any reason to desire, then it’s hard to see how that end is really any reason for the agent to so act. To illustrate, consider how odd it sounds to say that the reason I ought to desire, prudentially speaking, the state of affairs where my utility is maximized is that I ought, prudentially speaking, to maximize my utility. No! That’s not right; it’s the other way around. The reason I ought, prudentially speaking, to maximize my utility is that I ought, prudentially speaking, to desire the state of affairs where my utility is maximized.
    So it seems to me that the only sensible way to explain why your agent ought not act so as to bring about the 1-murder-by-her outcome even for the sake of avoiding the 5-murders-by-others outcome is that the agent ought to care more about what she does than about what she allows and thus ought to prefer the 5-murders-by-others outcome to the 1-murder-by-her outcome. Now it won’t do to say that the reason she ought to prefer the 5-murders-by-others outcome to the 1-murder-by-her outcome is that she ought not act so as to bring about the 1-murder-by-her outcome. That would be circular. So we should take Q1 (a question about reasons for desire) to be more primitive than Q2 (a question about reasons for action) and give an agent-relative act-consequentialist account of the constraint against the commission of murder.

  12. Doug,
    Well, as you already know, I myself am at least somewhat sympathetic to the kind of agent-relative consequentialism you suggest in the last paragraph of your previous post. And I think I agree with you, in fact, that reasons for desiring certain states of affairs should be considered to be prior to reasons for action.
    Still, regardless of the attractiveness of such a theory, I am not yet convinced that you’ve shown there to be anything deeply puzzling about non-consequentialism. Nor am I convinced, by the way, that it isn’t at least somewhat misleading to use the term ‘consequentialist’ for the kind of agent-relative theory you propose. (BC), after all, is not an innocuous claim: traditionally, non-consequentialists have been able to make sense of the claim that an action that leads to the (morally) best available consequences might nevertheless be (morally) wrong precisely because they did not count every aspect of such an action as among the consequences of the action. If we do allow for every potentially morally relevant characteristic to be counted as a consequence, then we seem to make consequentialism trivially true: any theory, no matter how it tells us to act, can be considered to be a consequentialist theory because whatever aspect of the act it takes to be relevant to determining its rightness can now be counted as a consequence of the act.
    Thus, I disagree with you that the rule utilitarian can accept (BC). Rule utilitarians want to maintain the traditional understanding of ‘consequence,’ according to which, for instance, the fact that an action that is the keeping of a promise has been performed does not count as a consequence of the act of keeping a promise, while such things as the happiness of those who are affected by this action do count as consequences. This is precisely how they are able to say that the keeping of a promise might be right even though it does not lead to the best available consequences. Of course, I don’t mean that a rule utilitarian literally cannot accept (BC); I only mean that she cannot do so without rendering her theory incoherent. But there is no reason to suppose that the rule utilitarian must accept this confused position, and almost as little reason, I think, to think that most or many actual rule utilitarians do hold confused positions of this sort.
    Maybe I’m just not understanding what ‘moral desirability’ is supposed to mean here. If ‘a is the most morally desirable available outcome’ means ‘the agent in question has more reason to bring a about than to bring about any other available outcome,’ then I understand it; but in this case the rule utilitarian will deny that the outcome in which I break the promise (or commit the one murder to prevent five) is the most morally desirable available outcome. If, on the other hand, ‘a is the most morally desirable available outcome’ means, say, ‘a is the outcome which a morally sympathetic, independent, impartial, uninvolved observer would prefer to see brought about,’ then the rule utilitarian might agree that the promise’s having been broken (or the one murder’s having been committed) is the most morally desirable of those available; but now there is no puzzle, since there is nothing mysterious in the fact that an agent might not have moral reason either to perform or to prefer that he perform the action that leads to the outcome that some other agent—even a morally sympathetic, independent, impartial, uninvolved one—would prefer to see realized. Insofar as we are moral agents at all, we are by definition not independent or uninvolved; nor are we mere observers. (And given your sympathy to the role of agent-relativity in ethics, I assume that you won’t find the idea that an agent’s perspective might matter greatly, not only to what he ought to do but to the very values of the outcomes he might produce, to be either puzzling or implausible.)

  13. Troy,
    You say, “If we do allow for every potentially morally relevant characteristic to be counted as a consequence, then we seem to make consequentialism trivially true.” I think that this is false, but I don’t have anything to say beyond what I’ve already said in my posts on consequentializing, Parts I-III — see http://peasoup.typepad.com/peasoup/2004/06/consequentializ.html.
    In order for the rule-consequentialist to be able to say that the keeping of a promise might be right even though it does not lead to the best available consequences, she needn’t deny that the fact that an action that is the keeping of a promise has been performed counts as a consequence of the act of keeping a promise; she need only deny that this consequence is one that makes the outcome better.
    Regarding the meaning of “most morally desirable,” I don’t want to assume that there must be an outcome that every agent should find the most morally desirable. A theory may yield not one ranking of outcomes, but a different ranking for each agent. So I would want to reject your second proposal. I want to reject your first proposal as well: moral desirability is not about what the agent has reason to do but about what the agent has reason to want.
    So, suppose rule-utilitarianism is true, and suppose I’m in a position where I can either maximize aggregate utility or not. Which total outcome (consisting in the act itself and its effects) do I have reason to prefer or rank higher, morally speaking? The one where I maximize aggregate utility or the one where I don’t?

  14. Doug, some more thoughts …
    Consider these predicates: “is an equilateral triangle” and “is an equiangular triangle”. These are necessarily coextensional. So, we might ask, what explains this? Should we say that an object is an equilateral triangle because it’s an equiangular triangle? Or is it the other way around? Neither, I’m inclined to say.
    Now, consider these two questions:
    Q1 Is x an equilateral triangle?
    Q2 Is x an equiangular triangle?
    And let us ask, which question is prior? Must Q1 be answered before Q2? Or is it the other way around? Again, I’m inclined to say neither.
    Still, I’m reluctant to say that the two predicates mean the same thing. They seem to express quite distinct ideas, one having to do with the sides of a triangle, the other having to do with the angles.
    Maybe the same thing should be said about the predicates “ought to be performed” and “has an outcome that ought to be desired”.

  15. Campbell,
    I’ll concede that, in your example, it is neither the case that Q1 must be answered before Q2 is answered nor the case that Q2 must be answered before Q1. We can answer each independently of the other. So if we were to define equiangular-triangle-consequentialism as the view that takes Q1 to be prior to Q2, we should reject the view, for as you point out, we needn’t answer Q1 prior to being able to answer Q2. We can just answer Q2 by measuring the angles.
    Let’s return now to my Q1 and my Q2. I still think that we can, and should, define consequentialism as the view that holds that Q1 must be answered before Q2 can be answered. So if you think that it is not the case that Q1 must be answered before Q2 can be answered, as seems to be your view, then I would say that you’re not a consequentialist. A consequentialist, as I would define one, must not only hold that what we ought to do is a function of how states of affairs are to be ranked but also that states of affairs can be ranked without appeal to what ought to be done. Now I know that you want to argue that consequentialism is empty because you think that the correct ranking of outcomes is just that ranking that in conjunction with a principle such as “an act is morally permissible iff its outcome is not outranked by that of any alternative action” yields moral verdicts that comport with what ought to be done. Thus you want to appeal to what ought to be done in determining how outcomes are to be ranked. Of course, as you point out, this just makes consequentialism an empty truism. I think, then, that the great merit of my way of defining consequentialism is that it ensures that consequentialism doesn’t come out empty. That’s why we should define consequentialism as I suggest. Wouldn’t you agree that, other things being equal, it is better to define a view that has been taken to be a substantive one such that it comes out substantive rather than trivial? Moreover, my definition is not too far off from how some very important moral philosophers, e.g., Rawls, have defined consequentialism/teleology.

  16. Campbell,
    I realize that your point was to show that I had posed a false dilemma in my response to your earlier comment. You got me there. I’ll need to come up with some other argument or abandon the position. Nevertheless, I think that it might be good to hash out our disagreement about how act-consequentialism is to be defined and whether it is empty.

  17. Doug,
    Let’s say that Extensional Consequentialism (EC) is the view that an act is right iff it maximises the good, and that Priority Consequentialism (PC) is the conjunction of EC and the further claim that the good is prior to the right (where this latter claim is spelled out in the way you suggest above). I say, these views are importantly different, and both are worthy of philosophical attention. But I don’t see much point in squabbling over which view truly deserves the label “consequentialism”. If you want to reserve the name “consequentialism” for PC, and call EC something else, I’m happy to go along with that. (It’s important here that “consequentialism” is a philosophical term of art. If it was commonly used in ordinary language, I might be less inclined to take such a view.)
    As to the issue of triviality, there is a way of making EC more precise such that it’s not trivial. We simply add the condition that the good must be represented by a single ranking of outcomes that does not vary between different agents, or different times etc. Given this condition, EC has substantive implications — for example, that there are no agent-relative norms.
    One final point: I find it very difficult to adjudicate the issue of whether or not PC is correct. It’s hard to say what’s at stake here, and what kind of considerations would settle the issue one way or the other. This difficulty is evident, I think, in your original post. Ostensibly, your aim is to defend PC. But in order to avoid the “troubling possibility” you describe — i.e. that it might be wrong to perform an act whose outcome is best (or morally preferable) — it seems sufficient to accept EC. Thus, it seems that what’s really at issue in your argument is EC, and not PC.

  18. A while back Doug wrote, “I would be interested in what others take to be the compelling idea behind act-consequentialism.” I thought I’d respond. Before I start, though, I just want to say how much I enjoy reading PEA Soup. I also appreciate the respectful and constructive tone of the comments. As a philosopher not making a living as a philosopher, PEA Soup helps keep me from getting too far away from this love of mine. Unlike most of you (I presume, since most of the posts seem to be on work days?), this is something I have to squeeze into the rest of my busy life. With that…
    The short answer is that I think AC stands a better chance than competing theories of providing a systematic, more or less unified account of ethics. Perhaps no such account will ever succeed, but it’s too soon to give up on the effort (hey, philosophers have only been working on it for a few thousand years—why stop now?). I think the objections to AC are more likely to be answerable than are the objections to other systematic moral theories, and the arguments against systematic theories tell us what the challenges are but have not shown that the challenges are unanswerable. I think they actually point in the direction of answers.
    Anti-theory theorists point to intractable moral disagreements and to the complex mix of moral intuitions most of us have that intransigently resist being systematized. At the risk of oversimplifying, anti-theory says: our moral intuitions are not systematic or unified, and there is no way to make sense of them. To this I would answer that our intuitions are based to a significant degree in sociobiology, which, like evolutionary theory, explains things in terms of utility, that is, in terms of consequences. Our moral intuitions tend to have beneficial (or at least not seriously harmful) consequences in the ordinary circumstances of which our lives are almost entirely composed (except that some, like the tendency to tribalist judgments, might be outdated in that regard—how many Iraqi deaths does it take to equal one American death?), or else they would have been naturally expunged. Perhaps these same intuitions are not so well attuned when it comes to judging the extraordinary situations that comprise so many counterexamples to moral theories. Possibly, in these cases it is not just the theory but also the intuitions that need to be questioned. (Isaac Newton’s theories work superbly until we look beyond the humanly ordinary, where counterintuitive relativity and quantum theories do better.) So, 1) it makes sense to think our moral intuitions are as they are because they have tended to promote utility, and 2) despite being deeply seated within us, those intuitions might not serve us well in extraordinary situations and might not give the right answer. [And how could a case be made that it’s not the right answer? By appealing to consequences. Feeling certain is no guarantee against being wrong, as demonstrated in the domain of cold hard facts by the relevant facts themselves (which is sometimes hard to do, but sometimes easy). In the domain of values many seem to think there is never such compelling evidence, but some kinds of utility (values) are inseparably connected to cold, hard facts: people along the Gulf coast died for lack of air, water, food, and medical care. If a moral conviction (“We must follow the rules”) runs contrary to evident utility (people are dying), there is reason to question and reject the conviction.] There are good reasons for thinking anti-theory is not the end of the story. Along the same lines, there are also good reasons for thinking that intuitions can be questioned and that they are more deeply connected to consequences than is normally recognized.
    Consequences are a driving force in most decisions because we want to act in a way that serves our motivating interests. Because consequences are so basic to choice and because they come in such an extreme range of values, any moral theory that excludes consequences as a basis for determining what is morally right faces an impossible challenge, it seems to me. Theories that attempt to incorporate consequences along with other considerations, all treated as morally basic (to accommodate both our consequentialist and nonconsequentialist intuitions), have the apples and oranges problem, which points back toward anti-theory.
    AC says that best overall is best morally. Theories in which what is best morally is not necessarily best overall have to make sense of duties that require us to leave some significant value unrealized, something that actually matters in people’s lives. In return for what?–some value inherent to the act itself apart from any actual effects in the world? I’m not sure that’s even intelligible. I’m not aware of any persuasive answers to this; if there are, I’d be interested in knowing what they are.
    AC treats consequences as the only consideration. It says we must consider all values on the whole and over the long run and do whatever action will have the best outcomes (i.e., maximize value). The problem for AC is that sole reliance on consequences sometimes seems to give the wrong answer, meaning that it runs contrary to strongly held moral intuition. Two ways to try to address this problem occur to me. One is to ask whether AC requires what we think it requires—that is, have we considered all the potential consequences and risks? The other is to put the intuitions themselves under a microscope. Between the two approaches, the gap might at least be narrowed enough to encourage consequentialists in the possibility of eliminating it altogether.
    Let’s use the kill-one-to-save-five example. Is it moral to kill one innocent person to save five other innocents? (What if the one you would kill is among those who would have been murdered anyway?) Is it right to kill one innocent to save a billion? It seems to me that somewhere along the continuum of increasingly awful consequences, it becomes wrong NOT to kill—which would at that point better be called a sacrificial killing than a murder, an act that unfortunately requires the deliberate taking of an innocent life, but which is, nonetheless, altruistically necessary because of all the other lives at stake.
    At what point do such acts become right, according to AC? First, if we add “other things being equal,” then on AC we should kill one to save two, but other things are not equal, and I doubt that we can make a fair judgment on such terms, given that intuitions to the contrary are likely attached to complex unconscious substrata that cannot simply be detached and set aside for the sake of argument, so I don’t want to go there. Second, there can be no precise answer because our ability to predict complex consequences is so limited, especially in the long run. We should be skeptical of our ability to know the future, so in our calculations we should be especially cautious when contemplating actions (like killing or telling a lie) that are outside time-tested moral norms. We may underestimate the ‘downside’ of such actions through lack of experience or being disproportionately influenced by short-term over long-term effects. (The famous “unintended consequences” argument for doing nothing rests upon this caution. Think of ecosystems.) Third, to move forward, what are the risks? A person who murders an innocent to save others will be a changed person for having done it, likely for the worse emotionally and in a way that might also harm others such as spouse and children who have to live with this haunted individual; and the killer might well be jailed. Furthermore, if society were to openly condone such behavior, how many yahoos would then feel compelled to weed out evildoers to promote what they think is best in the long run? Maybe people in general would lose their moral footing from not knowing how to handle the idea that a virtuous person could morally kill an innocent person, even to save five (in contrast with the idea of a sociopath killing five—we know that kind of thing happens on occasion). Risks like these are real, need to be considered, and might explain—solely in terms of consequences—why AC could say killing one to save five would be wrong, but killing one to save some larger number would be right. Policy regarding assisted suicide is a real world example involving similar considerations. I believe that assisted suicide can be right in some situations. I’m told it is not that uncommon (disguised as palliative care), but to formally allow it might lead to extensive abuse (an action can be right, but to publicly proclaim it right might be wrong). So: a careful examination of risks perhaps narrows the gap between AC and intuition. It might also be a start on unpacking intuitions themselves.
    Regarding the demandingness objections, maybe they can be summarized like this: Maximizing AC says we are always morally required to do what is consequentially best, but the ‘always’ and the ‘best’ are both too demanding; there are times when we are morally permitted 1) to take a rest even when we could keep going and 2) to do less than the best.
    Let’s look first at the ‘always.’ Hmm. Isn’t ‘always’ an aspect of ‘best’? Because at every waking moment I am doing something that counts as an action, and if it’s not for the best, then I am not acting morally, according to maximizing AC.
    So let’s look at ‘best.’ Consequences would not be maximized by stopping to consider every act in light of everything else we could be doing. We’d spend hardly any time doing and virtually all our time considering. Fortunately, most acts are components of larger ‘life projects’, and acts in the context of a project derive their baseline merit from the merit of the project. So maximizing AC would say we need to choose the projects we take on with great care in consideration of the odds of success at each given our own individual skills and abilities. Shall I marry or remain single? Shall I become a monk? Support myself economically or depend on others? What career, if any? Commit myself to a cause? Buy and build a home theater? I see no moral justification for deliberately choosing less than the best projects of which we are capable (the goal being to maximize consequences), and once we choose we should act to execute those projects to the best of our ability. This is consistent with maximizing AC, and is not that demanding, it seems to me. My guess is that those reading this do live their lives pretty much that way. As for the rightness of taking a rest when one could keep going (and thus doing less than what is best at that moment), sure, we could “keep going,” but only for so long, and then it would catch up with us in various manifestations of exhaustion. Rest can be excessive, but appropriate rest refreshes our minds and bodies, and that’s good in the long run.
    Ok, I’d better stop.
    [An aside: If any of you have seen ’24’ on TV, you know that Jack Bauer makes about a half dozen decisions every hour where he mortally risks the innocent to keep terrorists from blowing up Chicago and killing millions, or whatever. Some innocent people are maimed and dead because of him, and he himself has become emotionally maimed, but he had to do it to save Chicago (and L.A., etc.). The program illustrates these out-of-the-ordinary situations, where an action toward another person that is normally wrong becomes right (still bad for the victims, but right as the least of the evils), or so it seems to me, in light of extreme consequences.]
    [Another aside. There is an argument one hears for capital punishment: a few innocents are expendable for the greater good. I agree that the general point can have merit, but not that it succeeds regarding capital punishment.]

  19. Hi, Doug,
    Very interesting discussion you’ve launched! I do think that the particular way you’ve set up the problem leads quite naturally to act consequentialism. But Q1, inquiring about which outcomes are morally desirable, is an ill-formed question according to those like Philippa Foot or Judy Thomson who don’t think that the notion of “morally good outcome” or “morally preferable outcome” actually means anything. So, if Q1 is indeed intelligible, what you say sounds correct. But there are worries about the well-formedness of Q1. I share these worries.
