It seems to me that there are not only reasons for action, but also reasons for belief and even reasons for having certain feelings and attitudes. But even if you don’t accept that there are non-instrumental normative reasons to want something or to prefer one thing to another, let’s just grant, for the sake of argument, that there are. If we grant this, it seems that we should also grant that a complete moral theory should tell us not only how we morally ought to act but also what we morally ought to feel, desire, prefer, etc. If this is so, we should, I think, distinguish act-consequentialism from its rivals according to how they prioritize the following two questions:
Q1 Morally speaking, which of the available outcomes (i.e., those which an agent can actualize through her action or inaction) should an agent prefer to which others? Or, in other words, which of the available outcomes should an agent rank above which others in terms of the moral desirability of their obtaining?
Q2 Morally speaking, which of the available acts should an agent perform?
Act-consequentialist theories, as we all know, come in two parts: (1) a principle for ranking outcomes and (2) a principle for determining an act’s normative status on the basis of that ranking. Given this structure, act-consequentialism (hereafter, “AC”) is committed to the priority of Q1 over Q2 in that Q1 must be answered before Q2 can be answered. By contrast, non-AC denies that Q1 is prior to Q2 and, thus, holds that Q2 can be answered independently of Q1. So it is the issue of whether or not Q1 is prior to Q2 that distinguishes AC from non-AC.
Because AC takes Q1 to be prior to Q2, it is committed to the compelling idea that it can never be wrong for an agent to perform a given act if she morally ought to prefer the state of affairs where she performs that act to any of the states of affairs that would obtain if she didn’t perform that act. And it is because non-AC denies that Q1 is prior to Q2 that it allows for the troubling possibility that it could be wrong for an agent to perform a given act even though she morally ought to prefer the state of affairs where she performs that act to any of the states of affairs that would obtain if she didn’t. Note that when we talk about “the state of affairs that would obtain,” this includes everything that would be the case were the agent to perform the act in question, including the fact that she performed that act and that she did so with certain motives and intentions. Thus what’s so puzzling about non-AC is that it allows for the possibility that an agent morally ought to prefer that she act in a way that she morally ought not to act. This puts the agent in the position of having to desire that she do what she ought not to do. Of course, this wouldn’t be troubling at all if we were talking about what she ought to do from a moral standpoint but what she ought to prefer from a self-interested standpoint. But, here, we’re talking about what she ought to do and what she ought to prefer from the very same standpoint: the moral standpoint.
To make this a bit more concrete, consider a version of the divine command theory that I’ll call “DCT.” DCT holds both (1) that it is morally wrong for P to perform x if and only if God has forbidden P from performing x and (2) that P morally ought to rank O1 above O2 if and only if God prefers O1 to O2. Curiously, DCT allows for the following possibility. Suppose that P must choose to do either x or y. Ox is the outcome that will result from P’s performing x, and Oy is the outcome that will result from P’s performing y. Now suppose that God has forbidden P from performing x and yet God prefers Ox to Oy. On DCT, then, P ought to perform y as opposed to x even though she morally ought to prefer Ox to Oy. So DCT implies that P morally shouldn’t act in the way that P morally should prefer that she act.
Rule-consequentialist theories are similarly puzzling. Rule-utilitarianism, for instance, holds that an outcome with more aggregate utility ranks above an outcome with less aggregate utility (indeed, it is on the basis of this ranking that we are to determine what the ideal code of rules is), and yet it holds that agents should sometimes refrain from doing what will bring about the outcome with the most aggregate utility. But if an outcome with more aggregate utility is morally preferable to one with less, then why should the agent act so as to bring about an outcome with less aggregate utility, that is, an outcome that is less morally desirable? This is indeed puzzling. And this explains why the move from act-consequentialism to rule-consequentialism has seemed an unattractive solution to the problem of reconciling consequentialism with our commonsense moral intuitions: in order to accommodate our intuitions, rule-consequentialism must give up the very thing that is most compelling about consequentialism, that is, the idea that it is never wrong for an agent to perform an act that she morally ought to prefer that she performs.
So far, I’ve been talking about agent-neutral versions of non-AC; both DCT and rule-utilitarianism hold that all agents should always rank the same two states of affairs identically. It’s interesting to note, though, that we find the same oddness that we found with DCT and rule-utilitarianism in agent-relative versions of non-AC. Imagine, for instance, a version of deontological prudence that I’ll call “DP.” DP holds both (1) that, prudentially speaking, P should always prefer a state of affairs where P has more utility to one where P has less utility and (2) that certain types of acts (e.g., charitable acts) are intrinsically imprudent. On DP, then, P ought not to donate P’s money to charity even if doing so will maximize P’s utility. Thus, on DP, it is possible that an agent prudentially ought not to perform the act that will bring about the state of affairs that she prudentially ought to prefer to all other available alternatives.
Consider also rule-consequentialist prudence (hereafter, “RCP”), according to which a given agent ought to follow that set of rules that, if internalized by her, would lead her to produce more utility for herself than any other alternative set of rules would. On this view, one ought to follow a rule not to do x even if, in this particular instance, doing x would maximize her utility. And since RCP holds that an agent prudentially ought to prefer that her utility is maximized, RCP allows for the puzzling possibility that an agent prudentially ought not to perform the act that she prudentially ought to prefer that she performs, for, on RCP, she prudentially ought to prefer the state of affairs where she maximizes her utility by violating the ideal code to the state of affairs where she complies with the ideal code. By contrast, act-consequentialist prudence (hereafter, “ACP”) never holds that it is imprudent to do what will maximize one’s utility. So ACP never holds that an agent prudentially ought not to perform the act that she prudentially ought to prefer that she performs.
So we’ve seen that, in contrast to consequentialist theories, non-consequentialist theories allow there to be a conflict between how an agent ought to act and how that agent ought to want herself to act. This seems troubling. A moral theory that holds that an agent ought to act a certain way but not want herself to act in that way creates a kind of moral schizophrenia in the agent, for she’ll have good moral reason to regret it if she acts as she morally ought to. Fortunately, consequentialist theories avoid such troubling implications and hold that agents should always act in the way that they ought to want themselves to act. And this, I believe, is what has seemed so compelling about act-consequentialist theories.