A number of philosophers have argued that there can be actions that are at one and the same time immoral and admirable. These philosophers sometimes take the existence of admirable immorality (AI) to show that there must be nonmoral values that are at least sometimes capable of outweighing or overriding moral values (since such an action would not be considered admirable unless its immorality were being overridden). It seems to me, though, that this may be too quick. There is a different picture of what may be going on in cases of AI that does not support value pluralism, or the overridingness of nonmoral values, at all. In fact, maximizing consequentialists, who reject the idea that the moral requirement to maximize the good can ever be overridden by any sort of nonmoral concern, can nonetheless accept the existence of admirable immorality.
In “Alienation, Consequentialism, and the Demands of Morality,” Peter Railton draws a distinction between subjective and objective consequentialism. The subjective consequentialist thinks an agent ought always to aim at the good directly and explicitly; the objective consequentialist, by contrast, thinks that an agent ought always to behave in the way that actually brings about the most good, whether or not this involves constantly engaging in explicitly consequentialist deliberations. After all, thinking always in consequentialist terms can be inefficient and distracting, and can lead the agent to be alienated from those around her, and even from herself.
Let us suppose, then, that Jenna is a sophisticated consequentialist agent of this sort, and that she lives in a community of such agents. Let us suppose, too, that the community in which Jenna lives shares a set of moral rules and guidelines I will refer to as the moral code—a code that is widely regarded as justified on objective consequentialist grounds. Thus, this code will tell people to behave in the ways an objective consequentialist agent would behave—promoting the good to a large extent, but not pursuing the promotion of the good to the degree where that pursuit becomes counterproductive. Let us assume, too, that in Jenna’s society, as in most societies, the moral code is very closely tied to questions of moral evaluation, and that following the dictates of the code automatically provides one with a moral justification for one’s behavior.
Suppose that Jenna believes herself to be faced with a situation in which following the moral code’s dictates will not maximize the good. Rather, all things considered, the good will here be maximized by betraying a friend, a type of action which the moral code forbids. She knows, of course, that engaging in such behavior is always risky; indeed, one of the primary motivations lying behind sophisticated consequentialism is the fact that moral agents are often incapable of accurately predicting the consequences of their actions, so that consistent adherence to certain reliable rules will usually produce more good in the long run than will reliance on case-by-case judgments. She knows, too, that many sophisticated consequentialists would not judge this situation to be one of those exceptional cases in which violating the ordinarily accepted moral rules is clearly justified. Nevertheless, Jenna decides to take the risk and betrays her friend. And suppose that the consequences are exactly as she predicted them to be, and that the costs (in terms of damage to her relationship, harm to her friend, etc.) are outweighed by the benefits produced.
Any evaluator who shares Jenna’s moral beliefs, who is aware of the facts of the situation and, in particular, is aware that things worked out precisely as Jenna predicted and intended, is likely to regard Jenna’s action as an instance of admirable immorality. Her action was admirable, first, because it was intended to bring about the best outcome and, second, because it did in fact succeed in doing so. (Had Jenna failed, it is much less likely that anyone would regard her as admirable; she would be much more likely to be seen as reckless and dangerous, not to mention untrustworthy.) However, the action can also be held to be immoral, for three reasons. First, the action did in fact fail to accord with the moral rules promulgated in Jenna’s society – moral rules which, moreover, are accepted by Jenna herself, and supported by the consequentialism to which she is committed. (Recall that we specified that most of her fellow consequentialists would not, prior to her action, have judged her contravening of the rules to be justified, and that she herself was aware of this.) Second, from the perspective of the betrayed friend it may be hard to appreciate the consequentialist justification for the betrayal in this particular case; thus, to the extent that we empathize with his position, we will find our admiration for Jenna’s action somewhat less than unalloyed. Finally, Jenna’s action was, as we have said, a significant risk: if the projected benefits had not materialized, or if the costs had turned out to be greater than anticipated, Jenna’s action would have been regarded by everyone (including, presumably, Jenna herself) as a clear case of wrongdoing, no matter how admirable the intentions with which it was performed. There are good reasons, indeed good consequentialist reasons, to encourage agents not to take such risks. And had Jenna decided to play it safe by following the moral code, it would have been quite impossible to criticize her on moral grounds, despite the fact that by doing so she would have been giving up (what she perceived to be) an opportunity to bring about a significant amount of moral good.
So what we seem to have, interestingly, is an action that is both immoral and admirable—and indeed, admirable in explicitly moral (and explicitly consequentialist) terms. And while I have described the situation in terms of consequentialism, that is not necessary; the account can be generalized to cover many sorts of moral theory. We should, I think, expect cases of admirable immorality of this sort to be present, or at any rate possible, in any situation that possesses the following features:
(1) There is a moral code: that is, a socially accepted set of instructions for bringing about outcomes that are justifiable in terms of considerations that have moral weight.
(2) Actions that go against the dictates of the moral code are considered immoral.
(3) Due to the complexity of the moral universe, and to pragmatic limitations on the nature and complexity of moral codes, the moral code occasionally dictates action that fails to bring about the best possible outcome, as judged from the perspective of those considerations that have moral weight.
Thus, the existence of admirable immorality does not in itself provide evidence for any conclusion regarding the strength, or even the existence, of nonmoral considerations.
A very interesting post, Troy. However, I think your argument won’t convince everyone who believes in immoral but praiseworthy actions. This is because your explanation captures only one set of immoral but praiseworthy actions, namely those which, prior to acting, seemed immoral but which, after the fact, turned out to be what morality demands. I think defenders of the view you critique want to make a stronger claim, namely that there are actions which are in fact morally wrong (not just appear to be wrong) which are nonetheless still praiseworthy.
David,
I do acknowledge that some people think not every case of admirable immorality is of the sort I describe; it would take a further argument to show them to be wrong. However, I disagree with your characterization of the kind of AI that I have described. I didn’t intend to describe actions which merely appear to be immoral but in fact turn out to be “what morality demands”; I meant to describe actions that really are immoral, yet whose outcomes are impossible to condemn in moral terms. Thus the focus on consequentialism: given the emphasis consequentialists place on outcomes, there is considerable pressure on them to regard an action that is performed with the intention of bringing about a good outcome, and which does so, as admirable. Yet the whole point of sophisticated consequentialism à la Railton is, I take it, that just as moral praise can attach to some non-maximizing actions (an agent’s decision to spend money to visit his wife rather than donating it to charity, as in Railton’s famous ‘Juan and Linda’ case), moral condemnation can attach to some maximizing actions.
Such actions really are immoral, not merely apparently so. This is primarily because they involve significant risk. (What if things hadn’t turned out as planned? Jenna’s friend would have been harmed, and with no compensating promotion of the good.) Part of the thought of sophisticated consequentialism is that we would all be better off if agents never took such risks, even granted that such risks would occasionally pay off. Part of what a moral code is supposed to protect us from, on this view, is the sort of bad outcome that the majority of these gamblers will bring about. (If a gamble is on average worth making, the code won’t forbid it; rather, it will be built into the code as an exception clause.) Given this, it is only natural to see agents who go against the code, and who subject others to serious risks by doing so, as acting without justification in a way that we do not want to encourage. An agent who does what the moral code dictates is acting morally rightly, and has a moral justification for her action whether or not the outcome turns out to be the morally best one available. Conversely, an agent who knows what the moral code requires in a given case, and who acts against it, knowing that by doing so she is giving up any claim to the automatic justification provided by the moral code, and knowing that in doing so she is putting others at risk, is acting immorally. Or so I claim.
Troy,
I doubt that your example, associated with maximizing consequentialism, is actually theory-neutral or can be generalized as you claim. You suggest that we’ll have admirable immorality of this sort in situations meeting these three conditions:
“(1) There is a moral code: that is, a socially accepted set of instructions for bringing about outcomes that are justifiable in terms of considerations that have moral weight.
(2) Actions that go against the dictates of the moral code are considered immoral.
(3) Due to the complexity of the moral universe, and to pragmatic limitations on the nature and complexity of moral codes, the moral code occasionally dictates action that fails to bring about the best possible outcome, as judged from the perspective of those considerations that have moral weight.”
So imagine Deontia, a land ruled by a simple rule-based deontological moral code, in which there are, say, three absolute moral prohibitions that all recognize. Such a conception of the moral universe is decidedly non-complex. The only way in which Deontia’s moral code could allow admirable immorality is if the sorts of features that make Jenna’s action count as admirable immorality in a maximizing consequentialist world could also present themselves to an agent in Deontia. But that can’t happen, since (for instance) the idea of an agent faced with the difficulty of ascertaining the consequences of her actions, and thereby undertaking a morally “risky” action, won’t make sense in Deontia. There’s no requirement to consider consequences in this moral code, and this code lacks the dilemmatic “complexity” or “concern for pragmatic considerations” that enables admirable immorality. Presuming the rules have content simple enough for all agents to grasp, anyone who violates one of the three principles has acted wrongly, and this can never be admirable. Since there is no subjective/objective gap here, as in your consequentialist example, the conditions under which your kind of admirable immorality could arise will not be met.
I wouldn’t want to defend a moral code like Deontia’s, for it would have to face worries about possible conflicts among its three principles. But supposing that such a moral code is logically possible, it would not tolerate admirable immorality of the kind you describe. I take this to show that whether admirable immorality exists is not a question that can be divorced from the substance of moral codes. Explaining how admirable immorality is possible might be a consideration in deciding among rival moral theories, but I doubt that we can give a theory-neutral answer to how admirable immorality could occur.
Michael,
I completely agree; indeed, it was never my intention to give an account of AI that could accommodate every moral theory. (The account can be generalized, perhaps, but not universalized.) Let us restrict ourselves, then, to plausible moral theories. (The moral theory accepted in Deontia is clearly not plausible, given that it gives no moral importance whatsoever to the consequences of our actions.) I think any plausible moral theory will allow for AI; and perhaps the claim that my account of AI can be generalized to cover all plausible theories can be defended.
Troy, I’m worried that your discussion conflates two distinctions, both of which have been marked, in the consequentialism literature, by the terminology of “objective” and “subjective”.
On the one hand, there’s Railton’s distinction, according to which, as you say, subjective consequentialism is the view that agents ought to aim directly, in their deliberations, at the best consequences, whereas objective consequentialism allows that agents may deliberate in some other way (e.g. by following some set of rules). As it’s sometimes put, subjective consequentialism, in Railton’s sense, advocates adopting the consequentialist dictum — i.e. “maximise good consequences” — as a “decision procedure”, whereas objective consequentialism advocates this only as a “criterion of rightness”.
On the other hand, there’s the distinction between actual and expected goodness of consequences. Subjective consequentialism is sometimes said to be the view that an agent ought to perform the action whose consequences have the greatest expected goodness (by the lights of her “subjective” probabilities), while objective consequentialism is said to be the view that an agent ought to perform the action that actually has the best consequences. To illustrate, suppose that Raymer gambles his family’s entire life savings playing roulette at a casino, and by extraordinary luck he happens to win. The expected consequences of Raymer’s choice were very bad (we may suppose); hence, subjective consequentialism would say that he acted wrongly. But the actual consequences of his choice turned out to be very good; hence, objective consequentialism would say that he did not act wrongly.
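To make the contrast concrete, here is a toy expected-value calculation; the figures are invented purely for illustration, since all the example requires is that the expected consequences be very bad and the actual ones very good. Suppose goodness is measured in dollars, Raymer stakes the family’s \$100,000 savings on a single number, a winning bet pays 35 to 1, and the chance of winning is 1/38 (an American wheel). The expected value of betting is then

$$
\mathbb{E}[\text{bet}] = \tfrac{1}{38}(35 \times \$100{,}000) + \tfrac{37}{38}(-\$100{,}000) \approx -\$5{,}263,
$$

which is well below the \$0 expected value of keeping the money, so subjective consequentialism (in this second sense) says Raymer acted wrongly; but since the ball happened to land on his number, the actual outcome was a gain of \$3,500,000, and so objective consequentialism delivers no verdict of wrongdoing.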
Now, it’s not clear to me which of these two distinctions is playing a role in your purported example of admirable immorality; there appear to be elements of both in your discussion.
In response to Michael’s comment, it may be worth pointing out that questions of “morally risky actions” may arise even on a non-consequentialist theory. Take Michael’s example of “Deontia”, and let one of the three absolute moral rules adopted there be a prohibition on killing innocent people. Now, consider the following situation (borrowed from Frank Jackson and Michael Smith). One day while walking near some ski slopes, Delia, a resident of Deontia, notices a skier high up on the mountain and realises that, unless the skier is stopped, he’ll cause an avalanche killing five people further down the mountain. But the only way Delia can stop the skier is by shooting him (with the gun she happens to be carrying), thereby killing him. Delia believes with high confidence, but not with certainty, that the skier is maliciously intending to cause an avalanche, and so she’s uncertain whether or not the skier is innocent. Thus, by the lights of Deontia’s moral code, Delia’s choosing to shoot the skier is “morally risky”, since there is some risk that the skier will turn out to be innocent, in which case her shooting him would violate the absolute prohibition.
Campbell,
You’re right that I didn’t distinguish between the two distinctions you mention. In my defense, after having mulled it over for a little more than a day I can’t see how the failure to distinguish the distinctions leads to any harm. But I’m willing to entertain suggestions to the contrary.
Probably elements of both distinctions will be present in any plausible version of sophisticated consequentialism; it depends on one’s view as to how the good is best maximized. All such versions appeal to the idea of what we can call an Ideal Consequentialist Agent, or ICA. Typically the theory says that a praiseworthy action is one that an ICA would have performed. The consequentialist can also maintain, as Railton does, that there is a more ‘objective’ sense of ‘right’ that is not tied to this: the ‘right’ action in this sense is the one that leads to the morally best outcome. Since we can’t always know which action is the right one (I want to win the lottery and donate the money to OXFAM – which number should I pick?), failure to do the ‘right’ thing is not always immoral, and indeed it is sometimes right (that is, praiseworthy) to do what is not ‘right.’
Now, a consequentialist could reject every element of the sophisticated consequentialist story except for the idea that moral rightness is determined by expected goodness rather than actual goodness. In this case an agent should always try to do what is best; but an agent who legitimately tries, and does what can reasonably be expected to lead to the best outcome, has acted rightly whether or not the outcome actually is the best. (Consequentialist theories that do not even go this far, and insist that one must actually bring about the best outcome—i.e. buy the winning ticket—in order to act rightly are surely not plausible.) Railton-type consequentialists will go further: on at least some occasions a consequentialist agent need neither achieve, nor aim at, the best outcome in order to act rightly. There are many different positions along this continuum, and I’m not going to argue for any one in particular (for one thing, I’m no consequentialist!); but as I say, I’ll happily admit to not having bothered to draw the distinction, as I don’t see that that omission leads to any harm or confusion.