The book abstract:
Moral theories can play both a theoretical and a practical role. As theories, they provide accounts of which features make actions right or wrong. In practice, they provide standards by which we guide our choices. Regrettably, limits on human knowledge often prevent people from using traditional moral theories to make decisions. Decision makers labor under false beliefs, or they are ignorant or uncertain about the circumstances and consequences of their possible actions. An agent so hampered cannot successfully use her chosen moral theory as a decision-guide. This book examines three major strategies for addressing this “epistemic problem” in morality. One strategy argues that the epistemic limitations of agents are defects in them but not in the moral theories, which are only required to play the theoretical role. A second strategy holds that the main or sole point of morality is to play the practical role, so that any theory incapable of guiding decisions must be rejected in favor of a more usable theory. The third strategy claims the correct theory can play both the theoretical and practical roles through a two-tier structure. The top tier plays the theoretical role, while the lower tier provides a coordinated set of user-friendly decision-guides that offer practical guidance. Agents use the theoretical account indirectly to guide their choices by directly utilizing the supplementary decision-guides. Making Morality Work argues that the first two strategies should be rejected, and develops an innovative version of the third strategy.
From the review:
Many of today’s “hot topics” in value theory concern how or whether our assessments of a person’s behaviour ought to be sensitive to her shortcomings and limitations. In ethics, we have the debates about moral uncertainty, subjective and objective reasons, and blameworthiness for moral ignorance; in epistemology, there’s luminosity, “operationalized epistemology”, and higher-order evidence. Decades before this spate of work, back when many philosophers were treating such concerns as afterthoughts, Holly Smith was laying bare with painstaking precision the vitality and the difficulty of questions about culpable ignorance and “deciding how to decide”. She has returned to such issues in recent years, and her long-awaited first book Making Morality Work is the culmination of these efforts.
The book considers the merits of three “responses” to two putative “impediments” to the exercise of our ability to guide our actions by morality. The first impediment is error: We often have difficulty acting in accordance with our moral beliefs because we often have false beliefs about the way the world is, non-morally speaking. The second is uncertainty: We will have difficulty, to say the least, guiding our actions by our moral views when we are uncertain about the nonmoral facts to which these views assign moral relevance.
Smith calls the three possible responses to these impediments “Austere”, “Pragmatic”, and “Hybrid”. These responses differ in how or whether they tailor moral theory to agents’ cognitive limitations. The Austere theorist would not tailor it at all. A rock weighs 30 kg, say, whether or not we believe it does, or have evidence that it does; similarly, the Austerist would say, an action is right or wrong regardless of our beliefs, or the evidence we possess, or what-have-you. The Pragmatist (in Smith’s sense) would tailor the entirety of her moral theory to the agent’s limitations.
Moral theory is supposed to be useful, after all — and more specifically, is supposed to help us guide our actions; a theory that doesn’t play this role is defective as a moral theory. The Hybrid theorist tries to get the best of both worlds, through a moral framework consisting of both “theoretical” and “practical” levels. The theoretical level gives us an explanation of actions’ rightness or wrongness that may be independent of action-guidance considerations. The practical level provides a guide to action for agents who want to steer their behaviour ultimately by the lights of the theoretical one, but who find they cannot do so directly.
Smith’s position is, I guess you could say, a “meta-hybrid”. She adopts the Hybrid approach as a way to deal with uncertainty, and the Austere one as a response to error. Her reasoning for the latter goes like this: We can be wrong about anything, including the beliefs or evidence or probabilities that the Pragmatic approach and the practical part of the Hybrid one designate as morally significant. So there is really no way to ensure that benighted agents will always be in a position to act in accordance with the moral views they accept — that they will find these views usable in what Smith calls the “extended” sense. The best we can do is to help the agent to guide her behaviour in the “core” sense — i.e. to derive an action-initiating prescription from her moral theory. But the Austere approach can provide that. A moral theory that says, e.g., “If an action has F, you should do it” can provide core guidance to an agent who believes that an action she’s contemplating has F, whether that belief is true or not. Given that, we should favour the Austere approach because it at least does not water down its prescriptions with agent-accommodating elements. It does not sacrifice what Smith calls “deontic merit” as the other two approaches seem to do.
But a theory that says “If an action has F, you should do it” will not help an agent who is consciously uncertain, rather than simply wrong, about whether some action has F. Here we would need…well, something else. But what? Maybe an action-guiding element that adverts to the probability of the action’s having F would help, or one that counsels us to maximize expected Fness? Maybe we should employ a rule that advises us to do an action with another feature, G, which often co-occurs with F, and is typically easier for us to discover? Maybe Aleister Crowley’s clean-and-simple “Do what thou wilt” deserves a second look?
Smith’s answer: It’s all of the above. Whereas previous Hybrid approaches have supplemented a theoretical account of right and wrong — e.g. “You should maximize utility” — with a single rule designed for cases of uncertainty — e.g. “You should maximize expected utility” — Smith argues persuasively that this will generally be inadequate for action-guiding purposes. What we need, first and foremost, is a “multiple-rule hybrid” view, consisting of a theoretical account, plus a hierarchy of norms crafted with an eye towards guidance. The norms at the top of the hierarchy will more closely approximate the verdicts of the theoretical account, but will be usable by fewer agents, than those lower down. Additionally, Smith argues, we need rules for agents who are uncertain about which rules best approximate the pure, theoretical ones, rules for those who are uncertain about those rules, and so on. […]

I’d encourage anyone with even the faintest interest in these topics to read this book, for its many argumentative highlights repay careful attention. […] Smith offers a very interesting argument against any Pragmatic view that incorporates non-consequentialist elements. Her claim is that these views cannot be squared with a general prima facie duty to inform oneself — to gather evidence, to do the calculations, whatever — prior to action. For consider a Pragmatic view on which my deontological duties depend on my beliefs regarding certain non-moral facts. On such a view, updating these beliefs based on new information does not put me in a better position to apprehend duties that existed antecedently; rather, it creates (and destroys) duties. But on any plausible deontological view, while there is value in doing things that conduce to my fulfillment of my existing duties, there is often no value in doing things that bring about new duties that I may then fulfill. […]
But there are some places where the edifice could have been stronger or more fully built-up.
First, Smith might have done more to address the worry that Hybrid views, especially “multiple-rule” ones like hers, introduce the possibility of an unacceptable conflict between levels. For in my experience, at least, many Austerists and Pragmatists are quick to claim it as a virtue that their approaches do not generate such a conflict. Inter-level conflict is most glaring in Regan/Jackson/mineshaft/etc. cases. These are imagined situations in which the agent faces several options all of which stand roughly the same chance of being, objectively, the right thing to do, but might also be disastrous — and then at least one option that is certainly not the objectively right thing to do, but comes very, very close. This option would seem to be subjectively right — right in the sense that’s relevant to action-guidance under uncertainty — and hence recommended by a Hybrid-type theory; but remember, it is certainly objectively wrong. How can the Hybrid theorist claim to offer a unified prescription for action here?
A good, hard question. Smith addresses it by saying that positive prescriptions (“Do X!”) should take precedence over negative prescriptions (“Don’t do X!”) in the case of a conflict. She suggests that this is because the former are capable of guiding you to do something, whereas the latter can only inhibit you — guide you away from doing it. But this seems to be, at most, a reason why positive recommendations would be more precise, and in that respect more useful, guides than negative ones. I can’t see why it would tell in favour of the former overriding the latter when they conflict. To be upfront: I do think Smith’s conclusion here is correct, and that it admits of a satisfactory explanation. I just think Smith’s own explanation isn’t it.
Second, for a book that goes to such great lengths to ensure morality’s action-guidingness, Making Morality Work does little to persuade us that the guiding role is all that important. Smith surveys four main rationales for the “usability demand”. The first is that usability for the guidance of action is required by the very concept of morality. The second is likewise “conceptual” — that it’s part of the very concept of morality that it’s “available to everyone”, which it can’t be unless it’s usable in certain ways. The third and fourth rationales are what she calls “goal-oriented”: Morality can promote social welfare (e.g. by promoting cooperation) only to the extent that its canons are usable; and, finally, people can engage in the best pattern of actions in the long run only if they are able to guide their actions by moral rules.
None of these rationales strike me as getting quite to the heart of the demand that morality (or at least, one part or level of a comprehensive moral code) be action-guiding. And indeed, Smith — to her credit — goes out of her way in various places to register doubts about them.
My own take is that guidance matters because trying matters, and the concept of guidance is bound up with this action-theoretic notion of a try. I can sensibly think that an action might be the right thing to do, in the objective sense, even if I am not certain that that’s the case, and as such, cannot guide my doing it by the thought that it’s the case. However, as I’ve argued elsewhere, I can’t think that one action might be a better try or attempt than another at doing, now, what objective normativity favours, in cases where I am consciously uncertain about whether it’s a better try. To think that some action might be a better try than another in the relevant sense, I’ve got to think, straight up, that it is a better try — such that I could guide my performance of that action by that thought.
Were I to accept a moral framework that denied the truth of any moral views sufficient, in the present instance, to guide my actions, then I’d be committed to denying that any action I could perform now would count as a better try than any other at doing what objective normativity favours. But it would be implausible to deny that in most cases. Typically, there are not only better and worse things to do in the objective sense, but also actions that are better and worse specifically as tries at doing what is better in the objective sense.