Welcome to our Ethics review forum on Holly Smith’s Making Morality Work (OUP 2018), reviewed by Andrew Sepielli.

The book abstract:

Moral theories can play both a theoretical and a practical role. As theories, they provide accounts of which features make actions right or wrong. In practice, they provide standards by which we guide our choices. Regrettably, limits on human knowledge often prevent people from using traditional moral theories to make decisions. Decision makers labor under false beliefs, or they are ignorant or uncertain about the circumstances and consequences of their possible actions. An agent so hampered cannot successfully use her chosen moral theory as a decision-guide. This book examines three major strategies for addressing this “epistemic problem” in morality. One strategy argues that the epistemic limitations of agents are defects in them but not in the moral theories, which are only required to play the theoretical role. A second strategy holds that the main or sole point of morality is to play the practical role, so that any theory incapable of guiding decisions must be rejected in favor of a more usable theory. The third strategy claims the correct theory can play both the theoretical and practical role through a two-tier structure. The top tier plays the theoretical role, while the lower tier provides a coordinated set of user-friendly decision-guides to provide practical guidance. Agents use the theoretical account indirectly to guide their choices by directly utilizing the supplementary decision-guides. Making Morality Work argues that the first two strategies should be rejected, and develops an innovative version of the third strategy.

From the review:

Many of today’s “hot topics” in value theory concern how or whether our assessments of a person’s behaviour ought to be sensitive to her shortcomings and limitations. In ethics, we have the debates about moral uncertainty, subjective and objective reasons, and blameworthiness for moral ignorance; in epistemology, there’s luminosity, “operationalized epistemology”, and higher-order evidence. Decades before this spate of work, back when many philosophers were treating such concerns as afterthoughts, Holly Smith was laying bare with painstaking precision the vitality and the difficulty of questions about culpable ignorance and “deciding how to decide”. She has returned to such issues in recent years, and her long-awaited first book Making Morality Work is the culmination of these efforts.

The book considers the merits of three “responses” to two putative “impediments” to the exercise of our ability to guide our actions by morality. The first impediment is error: We often have difficulty acting in accordance with our moral beliefs because we often have false beliefs about the way the world is, non-morally speaking. The second is uncertainty: We will have difficulty, to say the least, guiding our actions by our moral views when we are uncertain about the nonmoral facts to which these views assign moral relevance.

Smith calls the three possible responses to these impediments “Austere”, “Pragmatic”, and “Hybrid”. These responses differ in how or whether they tailor moral theory to agents’ cognitive limitations. The Austere theorist would not tailor it at all. A rock weighs 30 kg, say, whether or not we believe it does, or have evidence that it does; similarly, the Austerist would say, an action is right or wrong regardless of our beliefs, or the evidence we possess, or what-have-you. The Pragmatist (in Smith’s sense) would tailor the entirety of her moral theory to the agent’s limitations.

Moral theory is supposed to be useful, after all — and more specifically, is supposed to help us guide our actions; a theory that doesn’t play this role is defective as a moral theory. The Hybrid theorist tries to get the best of both worlds, through a moral framework consisting of both “theoretical” and “practical” levels. The theoretical level gives us an explanation of actions’ rightness or wrongness that may be independent of action-guidance considerations. The practical level provides a guide to action for agents who want to steer their behaviour ultimately by the lights of the theoretical one, but who find they cannot do so directly.

Smith’s position is, I guess you could say, a “meta-hybrid”. She adopts the Hybrid approach as a way to deal with uncertainty, and the Austere one as a response to error. Her reasoning for the latter goes like this: We can be wrong about anything, including the beliefs or evidence or probabilities that the Pragmatic approach and the practical part of the Hybrid one designate as morally significant. So there is really no way to ensure that benighted agents will always be in a position to act in accordance with the moral views they accept — that they will find these views usable in what Smith calls the “extended” sense. The best we can do is to help the agent to guide her behaviour in the “core” sense — i.e. to derive an action-initiating prescription from her moral theory. But the Austere approach can provide that. A moral theory that says, e.g., “If an action has F, you should do it” can provide core guidance to an agent who believes that an action she’s contemplating has F, whether that belief is true or not. Given that, we should favour the Austere approach because it at least does not water down its prescriptions with agent-accommodating elements. It does not sacrifice what Smith calls “deontic merit” as the other two approaches seem to do.

But a theory that says “If an action has F, you should do it” will not help an agent who is consciously uncertain, rather than simply wrong, about whether some action has F. Here we would need…well, something else. But what? Maybe an action-guiding element that adverts to the probability of the action’s having F would help, or one that counsels us to maximize expected Fness? Maybe we should employ a rule that advises us to do an action with another feature, G, which often co-occurs with F, and is typically easier for us to discover? Maybe Aleister Crowley’s clean-and-simple “Do what thou wilt” deserves a second look?

Smith’s answer: It’s all of the above. Whereas previous Hybrid approaches have supplemented a theoretical account of right and wrong — e.g. “You should maximize utility” — with a single rule designed for cases of uncertainty — e.g. “You should maximize expected utility” — Smith argues persuasively that this will generally be inadequate for action-guiding purposes. What we need, first and foremost, is a “multiple-rule hybrid” view, consisting of a theoretical account, plus a hierarchy of norms crafted with an eye towards guidance. The norms at the top of the hierarchy will more closely approximate the verdicts of the theoretical account, but will be usable by fewer agents, than those lower down. Additionally, Smith argues, we need rules for agents who are uncertain about which rules best approximate the pure, theoretical ones, rules for those who are uncertain about those rules, and so on.
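
To make the shape of this hierarchy vivid, here is a minimal sketch in code of the cascade it suggests: the agent works down an ordered list of decision-guides, from those that best approximate the theoretical account to cruder but more widely usable ones, and acts on the first guide she can actually apply. The particular guides, names, and data structure are invented for illustration only; they are not drawn from the book.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DecisionGuide:
    name: str                          # e.g. "maximize expected utility"
    usable_by: Callable[[dict], bool]  # can this agent apply the guide?
    recommend: Callable[[dict], str]   # the act the guide prescribes

def guide_choice(agent_state: dict, hierarchy: list) -> Optional[str]:
    """Work down the hierarchy, from guides that best approximate the
    theoretical account to cruder but more widely usable ones, and act
    on the first guide the agent can actually apply."""
    for guide in hierarchy:
        if guide.usable_by(agent_state):
            return guide.recommend(agent_state)
    return None  # no guide usable: the agent lacks even core guidance

# A purely illustrative hierarchy, ordered from most to least faithful:
hierarchy = [
    DecisionGuide("do the objectively best act",
                  lambda s: "best_act" in s,
                  lambda s: s["best_act"]),
    DecisionGuide("maximize expected value",
                  lambda s: "expected_values" in s,
                  lambda s: max(s["expected_values"], key=s["expected_values"].get)),
    DecisionGuide("pick the act whose worst case is least bad",
                  lambda s: "worst_cases" in s,
                  lambda s: max(s["worst_cases"], key=s["worst_cases"].get)),
]

# An agent who can only estimate expected values falls through to the second guide:
print(guide_choice({"expected_values": {"A": -50, "B": -30, "C": -50}}, hierarchy))  # B
```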

[…] I’d encourage anyone with even the faintest interest in these topics to read this book, for its many argumentative highlights repay careful attention. […] Smith offers a very interesting argument against any Pragmatic view that incorporates non-consequentialist elements. Her claim is that these views cannot be squared with a general prima facie duty to inform oneself — to gather evidence, to do the calculations, whatever — prior to action. For consider a Pragmatic view on which my deontological duties depend on my beliefs regarding certain non-moral facts. On such a view, updating these beliefs based on new information does not put me in a better position to apprehend duties that existed antecedently; rather, it creates (and destroys) duties. But on any plausible deontological view, while there is value in doing things that conduce to my fulfillment of my existing duties, there is often no value in doing things that bring about new duties that I may then fulfill. […]

But there are some places where the edifice could have been stronger or more fully built-up.

First, Smith might have done more to address the worry that Hybrid views, especially “multiple-rule” ones like hers, introduce the possibility of an unacceptable conflict between levels. For in my experience, at least, many Austerists and Pragmatists are quick to claim it as a virtue that their approaches do not generate such a conflict. Inter-level conflict is most glaring in Regan/Jackson/mineshaft/etc. cases. These are imagined situations in which the agent faces several options, each of which stands roughly the same chance of being, objectively, the right thing to do, but also some chance of being disastrous — and then at least one option that is certainly not the objectively right thing to do, but comes very, very close. This option would seem to be subjectively right — right in the sense that’s relevant to action-guidance under uncertainty — and hence recommended by a Hybrid-type theory; but remember, it is certainly objectively wrong. How can the Hybrid theorist claim to offer a unified prescription for action here?

A good, hard question. Smith addresses it by saying that positive prescriptions (“Do X!”) should take precedence over negative prescriptions (“Don’t do X!”) in the case of a conflict. She suggests that this is because the former are capable of guiding you to do something, whereas the latter can only inhibit you, guide you not to do it. But this seems to be, at most, a reason why positive recommendations would be more precise, and in that respect, more useful guides than negative ones. I can’t see why it would tell in favour of the former overriding the latter when they conflict. To be upfront: I do think Smith’s conclusion here is correct, and that it admits of a satisfactory explanation. I just think Smith’s own explanation isn’t it.

Second, for a book that goes to such great lengths to ensure morality’s action-guidingness, Making Morality Work does little to persuade us that the guiding role is all that important. Smith surveys four main rationales for the “usability demand”. The first is that usability for the guidance of action is required by the very concept of morality. The second is likewise “conceptual” — that it’s part of the very concept of morality that it’s “available to everyone”, which it can’t be unless it’s usable in certain ways. The third and fourth rationales are what she calls “goal-oriented”: Morality can promote social welfare (e.g. by promoting cooperation) only to the extent that its canons are usable; and, finally, people can engage in the best pattern of actions in the long run only if they are able to guide their actions by moral rules.

None of these rationales strike me as getting quite to the heart of the demand that morality (or at least, one part or level of a comprehensive moral code) be action-guiding. And indeed, Smith — to her credit — goes out of her way in various places to register doubts about them.

My own take is that guidance matters because trying matters, and the concept of guidance is bound up with this action-theoretic notion of a try. I can sensibly think that an action might be the right thing to do, in the objective sense, even if I am not certain that that’s the case, and as such, cannot guide my doing it by the thought that it’s the case. However, as I’ve argued elsewhere, I can’t think that one action might be a better try or attempt than another at doing, now, what objective normativity favours, in cases where I am consciously uncertain about whether it’s a better try. To think that some action might be a better try than another in the relevant sense, I’ve got to think, straight up, that it is a better try — such that I could guide my performance of that action by that thought.

Were I to accept a moral framework that denied the truth of any moral views sufficient, in the present instance, to use to guide my actions, then I’d be committed to denying that any action I could perform now would count as a better try than any other at doing what objective normativity favours. But it would be implausible to deny that in most cases. Typically, there are not only better and worse things to do in the objective sense, but also actions that are better and worse specifically as tries at doing what is better in the objective sense.

17 Replies to “Ethics Review Forum: Smith’s ‘Making Morality Work’, reviewed by Sepielli”

  1. Many thanks to Daniel Wodak for organizing this discussion, and to Andrew Sepielli for his generous and insightful review.
    I’m pleased that Andrew finds so much of value in Making Morality Work. As someone who has spilled a lot of ink trying to make morality more action-guiding, he’s deeply acquainted with this endeavor, and his opinion that the book “succeeds wonderfully” at “developing the kind of theoretical apparatus the use of which gives us flawed and kludgy mortals our best shot at living up to our ideals” is high praise indeed. His interest in Chapters 6 and 7 on the non-ideal Pragmatic strategies is especially welcome, as those chapters attempt to sort out (in a manner investigated by no one else) how someone who wants to make morality work, but is willing to accept imperfect usability, might trade off usability against fidelity to the canons of ideal morality. Still, Andrew identifies two issues on which he believes the book could have been improved. I’ll address both.
    The first arises because views such as mine introduce the possibility of unacceptable conflict between distinct levels of moral recommendations. In the famous Dr. Jill case, for example, Jill can prescribe drug B, which would provide a moderate cure for the patient’s skin complaint; or prescribe drug A or drug C, each of which has a fifty per cent chance of completely curing the patient, but also a fifty per cent chance of killing him; or provide no treatment at all, which would leave him to suffer for life from the skin complaint. In such a case my Hybrid view implies that giving the patient drug A (which, unbeknownst to Jill, would actually completely cure the patient) is objectively obligatory, and giving him the moderate cure B is objectively wrong. It also implies that giving the patient drug B is subjectively obligatory. How is Jill to choose, given these conflicting prescriptions? In the book I address this by arguing that in such cases of conflict, positive prescriptions (“Do X!”) take precedence over negative prescriptions (“Don’t do X!”). My stated rationale is that for effective guidance, an agent needs either a prescription that some act must be done, or at least a prescription that several acts are permissible. Merely telling the agent not to perform some act doesn’t tell her enough to count as guidance. Telling Jill not to prescribe drug B (since it is certain to be objectively wrong) doesn’t sufficiently narrow down her choices, since drugs A and C may also be objectively wrong, and no treatment at all is certain to be objectively wrong. She is left without any guidance about what to do. If you have a choice between four roads, and someone tells you “Don’t take the road on the left,” they’ve not provided enough help to enable you to choose which road to take. You must choose one, and without any advice on which road to choose, you can’t implement your aim to get to a certain destination.
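    To see the arithmetic behind these conflicting verdicts, here is a minimal sketch of how a decision-guide of the “maximize expected value” sort singles out drug B. The utility figures are invented purely for illustration; only their rough ordering matters.

```python
# Hypothetical utilities for the patient's possible outcomes (illustrative only)
COMPLETE_CURE, MODERATE_CURE, LIFELONG_COMPLAINT, DEATH = 100, 60, 0, -1000

# Jill's credences over outcomes for each of her options
options = {
    "drug A":       {COMPLETE_CURE: 0.5, DEATH: 0.5},
    "drug B":       {MODERATE_CURE: 1.0},
    "drug C":       {COMPLETE_CURE: 0.5, DEATH: 0.5},
    "no treatment": {LIFELONG_COMPLAINT: 1.0},
}

expected = {act: sum(value * prob for value, prob in dist.items())
            for act, dist in options.items()}
print(expected)
# {'drug A': -450.0, 'drug B': 60.0, 'drug C': -450.0, 'no treatment': 0.0}
```

    Any assignment on which death is vastly worse than an untreated skin complaint yields the same ranking, which is why drug B comes out subjectively obligatory even though it is certain to be objectively wrong.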
    Andrew thinks my rule that positive prescriptions take precedence over negative prescriptions is a mere matter of positive prescriptions being more precise and therefore more useful, and he finds this wrong-headed. He leaves it unclear what he means by “precision.” However, I can find no reading on which this is a matter of mere precision. “Don’t prescribe drug B” is just as precise as “Prescribe drug B”: both evaluate a single option. Nor is it generally relevant what proportion of options are evaluated. Being told that three out of five of your options are permissible is far more useful than being told that three out of five are wrong, even though the same percentage of options are evaluated. It’s a matter of being told what is best (or permissible) to do when you must do something rather than nothing. Only this will be useful guidance.
    The second area for improvement that Andrew identifies is the discussion of why the usability demand for morality is compelling. I describe four traditional rationales for the demand, and argue that most of them are quite limited in their usefulness in this debate. The major surviving (new) rationale is that the core usability of a moral principle which is adopted by an agent ensures that the agent has an important form of autonomy insofar as she can guide her choices by her values. Andrew proposes his own rationale, which he believes gets closer to the heart of the matter. It involves using the notion of a “try” to explicate subjective rightness. His proposal, simplifying a bit, is that “What I subjectively ought to do” =df. “What would count as my best try, or attempt, or ‘shot’ at doing what objective normativity favours.”(Sepielli, “How Moral Uncertaintism Can be Both True and Interesting,” forthcoming in Mark Timmons, ed., Oxford Studies in Normative Ethics, Vol. V (Oxford University Press, 2018), p. 106). A good try at doing A would appear to be an attempt that has a good chance of resulting in your doing A. Your best try at doing A would be the one that has the highest probability of your doing A. The natural thought is that Andrew is saying the subjectively obligatory action is the one that has the best chance at succeeding in doing what’s objectively obligatory. However, he rightly dismisses this idea, since doing what has the best chance at doing what’s objectively obligatory would, in cases such as Dr. Jill’s dilemma, issue highly counterintuitive prescriptions. Instead, Andrew clarifies that the subjectively obligatory action is the one that is the agent’s best try at “doing what objective normativity favours” (Sepielli, “Remarks on Holly Smith’s Making Morality Work,” Rutgers Conference on Holly Smith’s Making Morality Work, October 18, 2019, p. 3). He claims this will involve taking into account the possible outcomes of each possible try, weighted by the probabilities of those outcomes. I have no beef with saying that subjective rightness sometimes involves taking into account the possible outcomes of each possible try, weighted by the probabilities of those outcomes. But I do have a beef with describing this as “doing what objective normativity favours.” Objective normativity pays no attention to the probabilities of the outcomes of your actions. It only pays attention to the outcomes your actions would actually have. Any recommendation that turns on probabilities can’t be accurately described as “doing what objective normativity favours.” So characterizing the subjectively obligatory as “doing what objective normativity favours” is simply mistaken.
    Andrew argues that using the concept of a “best try” to elucidate subjective rightness helps justify the “usability demand.” He claims that it’s built into the concept of the part of normativity concerned with evaluations of actions qua “tries” that this part must be action-guiding, and that characterizing the subjectively right action as one’s “best try” ensures that a recommendation to perform that action is in fact action-guiding. His thought seems to be this. He claims that I cannot coherently think that some action of mine, say X, might be the best try at doing A, without actually being sure that X is the best try (ibid., p. 5). And feeling sure that X is the best try means that I am in a psychological state such that I can use the idea of X as the best try to guide a decision to do X (ibid.). Thus to identify something as the best try is to put myself in a position to make a decision.
    I have a hard time seeing how there can be no uncertainty about whether some act is one’s best try, given Andrew’s account of what a “best try” is. According to him, a judgement that X-ing is my best try at doing A involves taking into account all the objective normative verdicts—about reasons, “ought,” wrongness, and so on—about X-ing in each epistemically possible world in which I perform X, weighted by their epistemic probabilities, and comparing these facts with parallel facts about all my potential alternatives (ibid., p. 3). Clearly there is lots of room here for uncertainty about which alternative is best in terms of these factors. So I could well conclude that X might be a better try than Y, without feeling certain that either X or Y is my best try. Thus I cannot jump on board Andrew’s claim that invoking “best tries” to explain subjective rightness guarantees that an agent who considers which “try” is best has found a component of a comprehensive moral theory that is usable for making decisions.
    How does this bear on why normativity must be usable for guidance? Andrew states that the part of normativity that concerns trying must be capable of providing guidance. But even supposing this is true, why must there be a part of normativity that concerns trying? Someone—like the Austere theorist—might simply reject the claim that normativity must include any such component. Then Andrew, unless he can tell us more, has also not answered the question of what justifies the search for a way to fulfill the usability demand.

  2. Hi Holly,
    Thanks for all that and for a fantastic book! I have lots of thoughts and questions, but will limit myself to picking up a thread of your answer to Andrew. Like Andrew I think that the notion of trying might be useful in defending a pragmatic view here. In brief, it seems that the rationale for usability can be construed as a rationale for a sense of rightness closely tied to praiseworthiness (and of wrongness to blameworthiness). Here is a fairly lengthy quote suggesting you are sympathetic to that idea:
    “Of course if an agent were punished or blamed for failing to do what is right because he lacks knowledge, we would all feel that this was unfair (at least if we assume he could not have discovered what was right even if he had tried). It is wrong to visit unpleasant consequences or criticism on someone for doing something which he did not know to be wrong (or could not have known to be wrong). But the moral code evaluating a mistaken action as wrong need not have this upshot. Most moral codes recognize that ignorance of one’s duty, arising from a non-culpable mistake of fact, is an excuse for failing to carry out that duty. People are neither punished nor thought to be morally blameworthy for wrongful actions that arise from such mistakes. Hence there is no need to change the character of the duties constraining human beings in order to protect them from this form of injustice. The apparatus of excusing conditions, which includes excusing impermissible acts resulting from ignorance about the world, is a fully effective way of precluding such injustice. We do not need the strong conception of a successful moral life, which insists that morally successful agents neither do wrong nor are ever blameworthy, to avoid this kind of injustice. Agents can avoid blameworthiness even though they sometimes do wrong.” (p. 198).

    That makes sense of why factual errors, so long as they are non-culpable, leave ‘usability’ intact – it is because they leave blamelessness intact. This lends support to the pragmatic view – in particular, the subjectivized version of it. A subjectivized code achieves usability by framing the instruction in terms of something that is always accessible to an agent, her beliefs, for example. The thought behind that is that we are not blameworthy when we act on our beliefs. BUT, you object, we can be mistaken about our beliefs! Ok, but first, why not think that that sort of mistake is just a factual mistake (it isn’t always innocent, clearly, since any mistake can be culpable, but this sort of mistake can be innocent). The question, then, is whether an agent who is non-culpably mistaken about her beliefs has received the right sort of action guidance, and whether she escapes blameworthiness. It seems that she has received action guidance. She wants to regulate her actions in accordance with her moral view, and she can. She can act on the beliefs she believes she has. Does she escape blameworthiness? Yes, she does. She has made a mistake, but it is a non-culpable mistake. We may have reason to think that mistakes about one’s own mental states are culpable more often than mistakes about the world, but we have no reason to think that non-culpable mistakes about one’s own mental states should be treated differently to non-culpable mistakes about the world.

    In fact I think that it is better to formulate the view in terms of trying (for reasons I will not go into here). Holly, you object that one might be mistaken about whether one is trying, or might not know how to try. I think the complaint about not knowing HOW to try is less worrying – I think we should use a very thin notion of trying, so that taking steps towards your goal, including thinking about what steps to take, counts as trying. But I agree that we cannot always know that some effort or other is our ‘best try’ (as Andrew puts it). I think we can answer that as we answer the belief issue – there can be non-culpable factual errors that do not impede usability. Then we would have as much core usability as the hybrid view has.

  3. Hi Holly,

    I just want to probe a little more on the justification for the rule that positive prescriptions take precedence over negative prescriptions. I think I can understand the rationale you’ve sketched here (“It’s a matter of being told what is best (or permissible) to do when you must do something rather than nothing. Only this will be useful guidance.”) if I’m thinking about a decision theory. But it seems to be an open question whether this is the best way to think about morality. Deontological views that take constraints very seriously and allow for dilemmas may well view matters differently.

    This might not be a deep issue at all. The way you’ve (very briefly!) sketched the rationale here seems to presuppose that there’s at least one permissible action in any context. It’d still be true on views that reject this presupposition that *if* there’s a permissible action then learning this would suffice to learn that it didn’t violate any constraints, etc. That may well suffice for the rationale you have in mind. But I think it’d help me to better understand the rationale for the rule to see how it fares if we think about morality as primarily including a set of negative prescriptions rather than positive injunctions.

  4. Holly, thanks for the very astute reply, and Daniel, thanks for setting up this exchange.

    I wanted to let the discussion play out before offering a comprehensive reply-to-the-reply-to-the-review-of-the-book.

    But just briefly: Holly writes:

    ***
    “I have no beef with saying that subjective rightness sometimes involves taking into account the possible outcomes of each possible try, weighted by the probabilities of those outcomes. But I do have a beef with describing this as “doing what objective normativity favours.” Objective normativity pays no attention to the probabilities of the outcomes of your actions. It only pays attention to the outcomes your actions would actually have. Any recommendation that turns on probabilities can’t be accurately described as “doing what objective normativity favours.” So characterizing the subjectively obligatory as “doing what objective normativity favours” is simply mistaken.”
    ***

    But my suggestion is rather that (basically — see below) the subjectively right act is the act that would constitute the *best try* at doing what objective normativity favours, not the act that is, itself, what objective normativity favours. But I fail to see how the quality of an action *qua try* at doing X cannot depend on probabilities in the relevant way just because X, or the degree to which one has done X, does not “pay attention” to the probabilities. For example, I might say that throwing my daughter a pool party for her birthday would count as my best attempt, best try, best shot at throwing her a party that will make her happy. We might say this partly on the grounds that, e.g., we think that it has a very good chance of making her very happy, a decent chance of making her somewhat happy, and only a small chance of making her unhappy. This all seems right, notwithstanding the fact that the degree to which she, or any other person, is happy, typically doesn’t depend on probabilities in the general way that the quality of an action *qua try* depends on probabilities.

    Re: uncertainty about tries — I worry that laying out a complete view here will suck the air out of the discussion, but I may do it later once things die down on their own. Briefly, though, when I talk about a “best try” in this review, I mean “best try” in what I call its primary sense. That is the only sense of “best try” the correct application of which requires the agent’s certainty. There are other senses. For Holly: I tried to suggest a bit of this more complicated picture in my comments at that Rutgers workshop on your book. For others: the apparatus I lay out in my 2014 Nous paper, “What to Do When You Don’t Know What to Do When You Don’t Know What to Do…” may give you a feel for how I think about these different senses of try. (There are affinities between the primary sense of “best try” and the notions of “perspectival” rationality and epistemic-probability-relative normativity I employ in that paper.)

    Re: the Austere theorist rejecting the idea that the true complete theory of normativity includes evaluations of some actions as better tries (in the primary sense) than others — Well, again, I think that would be implausible, and while I have reasons for thinking it’s implausible, I’d kinda be more interested in hearing some Austere theorist (or anyone) explain why it *is* plausible before writing more.

  5. Thanks for the very interesting discussion! Sounds like a great book — apologies for commenting without having read it yet.

    Like Andrew, I wasn’t sure about the idea that positive prescriptions beat negatives. Andrew put the point in terms of precision, but Holly had doubts about that. She said “Don’t give B” is precise, in the sense that it’s just about a single act — giving drug B.

    I think Andrew meant something different by “precise.” I think he was talking about specific options, which can’t be done in relevantly different ways. (“Not giving B” seems equivalent to “giving A or C” — a non-specific option.)

    More to the point: when we change specificity, positive doesn’t always guide better than negative. If a negative prescription forbids a non-specific option (“Don’t give A or C!”), it might give guidance. Meanwhile, a positive demand for a non-specific option (“Give either A or C!”) might fail to give helpful guidance.

    What counts as guidance? My impression from Holly’s comment: I’m “guided” if I can know of some *specific* option that it’s permissible.

    If that’s the right interpretation, then clearly “You must give either A or C” isn’t a good enough guide; I don’t know about A and C specifically. Meanwhile, “It’s wrong to give either A or C” would be excellent guidance, assuming that there’s always a permissible option, and that B is my only specific option left.

    (More precisely, the moral assumption here is that wrong(p) -> permissible(~p), which is equivalent to obligatory(~p) -> permissible(~p), aka the D axiom.)

  6. Many thanks to Ellie Mason, Daniel Wodak, Andrew Sepielli, and Daniel Munoz for their helpful comments.

    I will try to lump together responses that raise related issues, and here start by replying to the two Daniels, who both focus on my suggested “rule” that for guidance purposes the positive prescriptions of the subjective component of a hybrid theory take precedence over any conflicting negative prescriptions of the objective component of the hybrid theory. It may be worth remarking that (whatever one thinks of my “rule”) we can’t escape the conclusion that in cases such as that of Dr. Jill, the positive prescription that giving the patient treatment B ought subjectively to be done takes precedence for guidance purposes over the negative prescription that giving the patient treatment B ought objectively not to be done. Jill is seeking guidance on what action to perform, given that she must do something (where not treating her patient of course counts as “doing something”). Her subjective theory gives her an answer that she can apply and use: do B. Her objective theory gives an answer of which she is unaware, and therefore cannot apply and use: do A. It also tells her not to do B. She is aware of, and can apply, this latter answer. But doing so still leaves her unable to choose what to do, even though she has other options, namely A, C, and D (no treatment), because she doesn’t know which of these is obligatory or even permissible. To make her decision she needs one or more options singled out as permissible or obligatory. The subjective theory provides this; the objective theory does not. Of course, which kind of evaluation takes precedence in the context of guidance may be very different from which kind of evaluation takes precedence in other contexts. But my proposal only applies to guidance contexts.

    Daniel Wodak raises the issue that many deontological theories emphasize negative prescriptions (“constraints”), and that some allow for prohibition dilemmas, in which every option is wrong. He wonders how the rationale for my rule fares for such theories. First, I did assume, as he suggests, that the theories at issue are not ones that countenance dilemmas. My assumption was that every theory under discussion avoids such dilemmas. Of course, one of the standard objections to theories that accept such dilemmas is that they cannot provide an agent any guidance for how to act in a situation in which every act is wrong, even if the agent is fully apprised of the facts. If Jill’s situation were a dilemma, then there would be no objectively permissible act for her. But let’s set such theories aside.

    What about non-dilemmatic deontological theories that emphasize constraints, or negative prescriptions, rather than positive injunctions? We might imagine such a Theory D that included (a) a set of negative constraints (“Don’t do such-and-such”); (b) a permission to perform any act that doesn’t violate any constraint (or involves the minimum possible violation of constraints); and (c) a “derivative” obligation to perform any act that is the only option not violating the constraints (or involves the minimum possible violation of constraints). It’s possible to describe a situation parallel to the original Dr. Jill case involving such a theory, for example a code of professional ethics for doctors. Suppose in this theory constraint C1 (the most stringent) says “Don’t kill your patient”; constraint C2 (the least stringent) says “Don’t give your patient a less effective treatment than other available treatments”; and constraint C3 (of intermediate stringency) says “Don’t fail to treat your patient if an effective treatment is available.” Our new Dr. Sal has a patient Tom with a minor but not trivial skin complaint. Dr. Sal can give Tom drug A, which will cure him completely; drug B, which will relieve his complaint but not cure him completely; drug C, which will kill him; or not give him any treatment at all. Giving him A doesn’t violate any constraints; giving him B violates constraint C2; giving him C violates constraint C1; and not giving him any treatment violates constraint C3. According to Theory D, giving Tom drug B, giving him drug C, and not treating him all violate some constraint; each is objectively wrong. Giving him drug A, the only option that doesn’t violate any constraint, is then objectively obligatory. But Dr. Sal doesn’t know all this; she believes (or her evidence indicates) that there’s a 50% chance A will cure him and a 50% chance it will kill him; she also believes there’s a 50% chance C will cure him and a 50% chance it will kill him; and she believes giving him B will violate constraint C2 while not treating him will violate constraint C3. Subjectively, what should she do? Suppose we assign violating C1 a negative deontic value of -100, assign violating C2 a negative value of -30, assign violating constraint C3 a negative deontic value of -40, and assign violating no constraint a deontic value of 0. It then turns out that giving Tom drug B has the highest expected value (-30 versus -50 for drugs A and C, and -40 for no treatment). So Sal subjectively ought to give Tom drug B, even though she knows doing so will violate constraint C2 and so be objectively wrong, since there is some other option that violates no constraint. This shows that non-dilemmatic deontic theories emphasizing constraints rather than injunctions can also be subject to a conflict between what’s objectively wrong and what’s subjectively obligatory. In the context of such theories, for guidance we once again must give usable positive subjective prescriptions precedence over negative objective prohibitions.
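    For readers who want to check the arithmetic, here is a short sketch that simply encodes the deontic values and credences stipulated above:

```python
# Deontic values stipulated in the example
V = {"C1": -100, "C2": -30, "C3": -40, "none": 0}

# Dr. Sal's credences about which constraint each option would violate
options = {
    "drug A":       {"none": 0.5, "C1": 0.5},  # 50% complete cure, 50% kill
    "drug B":       {"C2": 1.0},               # certainly a less effective treatment
    "drug C":       {"none": 0.5, "C1": 0.5},
    "no treatment": {"C3": 1.0},
}

expected = {act: sum(V[c] * p for c, p in dist.items())
            for act, dist in options.items()}
print(expected)
# {'drug A': -50.0, 'drug B': -30.0, 'drug C': -50.0, 'no treatment': -40.0}
# Drug B maximizes expected deontic value, so it is subjectively obligatory,
# even though (given drug A's actual effect) it is objectively wrong.
```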

    Of course there are deontological theories that absolutely prohibit certain types of acts, whatever the agent’s other options may be. Such a theory might, to the agent’s knowledge, absolutely prohibit option X. But for all she knows it might also absolutely prohibit some of her other options as well. How such absolutist theories offer guidance when agents are uncertain about what kinds of actions they might be performing is a matter of much debate. Some theorists hold there is no reasonable way for absolutist theories to do so. But I did not try to deal with this issue in the book.

    Daniel Munoz also raises questions about whether positive prescriptions beat negative prescriptions (again, I only claim this for contexts of guidance). Daniel argues that a negative prescription (“Don’t give A or C!”) might give guidance, whereas a positive demand (“Give either A or C!”) might fail to give helpful guidance. Clearly “Don’t give A or C!” can give helpful guidance in a case (such as he describes) where there is only one other option, giving B, and the theory implies that any action that is not forbidden is permissible. In these circumstances, the theory guides the agent via an implicit positive prescription to give B. Daniel also says that the positive prescription “You must give either A or C” isn’t a good enough guide. I’m not fully sure what he has in mind here. If “You must give either A or C” should be understood as “Either you must give A or you must give C,” then indeed this isn’t good enough guidance, since it leaves it indeterminate whether you ought to give A or ought to give C, when you can’t do both. This rightly points out that not every positive prescription succeeds in providing guidance. On the other hand, if “You must give either A or C” means “Either A or C is permissible,” then for me this is good enough guidance, even though it doesn’t plump for either A or C over the other. Both A and C are permissible, and the agent is told she is free to do either. She may choose among them. Not all moral guidance must isolate a single act as the one that must be performed.

  7. Thanks for the reply, Holly. This helps me understand where you’re coming from!

    You said you weren’t sure what I meant by “you must give A or C.” I just meant: “you must [give A or C],” where [give A or C] is a non-specific option.

    This is different from the first interpretation you suggested, where the disjunction has wide scope. “You must [give A or C]” is logically weaker than “You must [give A] or you must [give C].” (Only the first is true, if it’s indifferent whether you give A or C.)

    The other interpretation — “Both A and C are permissible” — is different, too. It’s weaker than “You must give A or C” given two assumptions:

    (1) free choice for obligation: Must(A v B) -> Must(A) & Must(B)
    (2) D axiom: Must(A) -> May(A)

    But I certainly don’t want to assume free choice, and anyway, it’s clear in the example that you can’t have both Must(A) and Must(C). Only one drug is objectively best!

    It sounds like we agree about the main point, however, which is that a positive prescription might not be action guiding when it’s about a non-specific option (or when it’s a disjunction of prescriptions about specific options, as on your first interpretation). Not too mind-blowing, but it was my 2c!

  8. Thanks, Daniel (Munoz!), for the reply. I’m not too fond of referring to [give A or C] as a non-specific option; it seems to me this may rely on a questionable theory about action. But this is a side issue. I completely agree with you that a positive prescription might not be action-guiding, either (as you say) when it’s about a non-specific option, or when it’s a disjunction of prescriptions about specific options (my case). I hadn’t realized this until you pointed it out, so kudos on the insight!

  9. Hi, Ellie.
    Thanks for your comment, which brings in interesting new issues. You’re certainly right that there are close ties between the usability of a moral code and the blameworthiness or praiseworthiness of an agent who hopes to use it in making a decision. You’re also right that a pragmatic view consisting solely of subjectivized principles (ones that prescribe an act in virtue of the agent’s beliefs about that act) might be able to achieve what I call “core usability.” A principle has core usability for an agent if (roughly) it is true that if the agent wanted to derive a prescription from it, she would do so. A principle may have core usability for an agent even though the prescription she would derive is mistaken, given the actual facts, since she may have erroneous beliefs about the facts. As you say, agents often have false beliefs about their own beliefs, so if they derive a prescription from a subjectivized principle, it may not be the correct prescription. Still, the principle has core usability for them, and if their beliefs about their beliefs are non-culpable, then they may be blameless.
    I’m slightly concerned that your comment focuses entirely on agents who have false beliefs. What stands in the way of agents’ applying principles is often their uncertainty about the world, not their false beliefs about the world. If a moral code consists of principles none of which provide guidance for an agent who is uncertain—for example, uncertain about her own beliefs—then that code is not usable, even in the core sense, by agents afflicted with this kind of epistemic impediment. I suspect, though, that your emphasis on false beliefs in your comment is a bit of an accident, since you are surely aware that agents can be held up by uncertainty. The question for a fully subjectivized pragmatic code would be how it would incorporate both (a) principles phrased in terms of the agent’s firm beliefs about her beliefs and (b) principles phrased in terms of the agent’s uncertainty about her beliefs. One might suspect that the best structure for such a code would be a hybrid one, with the principles phrased in terms of firm beliefs at the top level, and the principles phrased in terms of uncertainty at the lower level. Since in Chapter 5 I lodge a different, in my view fatal, objection to subjectivized codes (namely that they can’t support duties to acquire more information), I’ll leave further investigation of how to structure such codes to you!
    You also plump here and elsewhere for formulating moral recommendations in terms of “trying.” In the book I rejected the idea that a hybrid theory could succeed if its only decision guide was “Try to do what’s objectively right.” Partly my rejection was based on the fact that the Dr. Jill case shows that sometimes an agent’s best bet is not to try to do what’s objectively right, since that would rule out Dr. Jill’s giving her patient drug B (which she knows to be objectively wrong). But partly my rejection was based on the point that sometimes an uncertain agent is uncertain, or doesn’t know, how to try to accomplish her goal. You argue against this latter point by adopting a “very thin” notion of trying, according to which even just thinking about what steps to take counts as trying. I think this might be true for certain kinds of goals, for example trying to develop a mathematical proof. But for other types of goals we wouldn’t ordinarily say that, for example, someone “tries to save a drowning victim” when all she does is mentally review possible steps towards saving the victim but doesn’t take any concrete action towards that goal (perhaps because she can’t think of anything she could do that would be helpful). Of course you’re free to adopt a thin concept of “trying,” but the concept is so far from our normal concept that it strikes me as somewhat misleading to say an agent’s subjective obligation is just to try. And of course this doesn’t tell us what mental steps it would be appropriate for her to canvass. Perhaps I’ll be more persuaded when I read “Ways to be Blameworthy: Rightness, Wrongness, and Responsibility.”

  10. Thanks, Andrew, both for your illuminating original review and for this reply to my response. I gather you’ll have more to say in future, but for now we can focus on the issues you raise in your current comment.

    The first issue has to do with the link between a person’s “best try,” the features of an action that objective normativity favours, and probabilities. I think there may be some ambiguity in the manner you’ve chosen to describe the relation among these factors, according to which the subjectively right act constitutes the agent’s “best try” at doing what objective normativity favours. This led me to think you were lumping probabilities into the features objective normativity favours. But it looks as though we both agree that the subjectively right act (at least often) is one that, in the agent’s view, has a good probability of having features that are objectively right-making (such as making your daughter happy), and a low probability of having features that are objectively wrong-making (such as making your daughter unhappy)—or at least is no worse in these regards than any of the agent’s alternatives. These objective right-making and wrong-making features don’t usually include probabilities per se. Another small problem here is that to describe the subjectively right act as the agent’s best try at doing what objective normativity favours unfortunately invites the reader to mistakenly understand this view as the view that the best try is the act most likely to be objectively right, a view that you rightly reject. It would be helpful to have a more perspicuous statement of your view that doesn’t lend itself to my misunderstanding, or to this misinterpretation.

    In the book I discuss some decision guides (for choosing the subjectively right act) that don’t mention the objectively right- or wrong-making features, or even mention probabilities. Some common rules of thumb, such as “Don’t text while driving,” don’t explicitly mention the features of such an act that could make it objectively right or wrong. And some decision-theoretic guides, such as “Perform the act whose worst outcome would be the least bad” don’t mention probabilities. So it’s complex to characterize the relation between the features picked out by principles of subjective rightness and those picked out by principles of objective rightness. But this can be a discussion for another day.

    In an initial foray at my contention that one could easily be uncertain which potential action is one’s best try, you say in your response that here you’re only talking about one’s “best try” in its “primary sense.” In your comment on my book at the Rutgers workshop, you say “I regard the primary sense of both “Subjective ‘ought'” (and related subjective normative notions) and “(best) try” as the ones that play an immediate role in the individual’s guidance of her own action. Sometimes we also use these notions in other roles — in giving advice, when theorizing about counterfactual or counterevidential situations, in reasoning to conclusions about “subjective ‘ought'” and “(best) try” in the primary senses.” You seem to be clarifying that an agent’s “best try” might be a concept used by the agent in guiding her choice, or might be a concept used by others in giving advice to the agent, etc. Certainly assessment of what it would be wise for an epistemically limited agent to do can be carried out by the agent herself, or by others in advising her, or as part of counterfactual scenarios. But I don’t see how this addresses the fact that an agent aiming to guide her choice may be not just mistaken but also uncertain about the factors that render her action her best try. In the Rutgers comment, you say, speaking as an agent, “I can be wrong about the probabilities of the various ways things could be in terms of objective normativity or the things on which it supervenes, and in how to put them together.” You identify these as the factors that contribute to whether some act is the agent’s best try. It remains a mystery to me how an agent can be wrong about these factors, but not be uncertain about them. I’m hoping you can cast more light on your perspective on this issue.

    Your final remark here is that you hope some friend of the Austere theory will explain why it is plausible to reject the idea that a true complete theory of normativity includes evaluations of some actions as better tries (or subjectively better) than others. I have a grip on the Austere theorist’s point of view, and understand why it can be more attractive than the Pragmatic theorist’s position that we ought to dumb down the true account of normativity to make normativity usable. But I stand with you in hoping such a theorist can explain why rejecting the availability of supplementary decision guides—that is, rejecting the Hybrid theory—is plausible.

  11. Hi Holly – thanks for that reply.
    Yes, I do think we should focus on uncertainty as well as false beliefs. Again I would fall back on the thin notion of trying here. If an agent is uncertain about what would count as trying, she has to do her best to figure out what counts as trying, and that is trying. So, to take an example – let’s say Jane is super confused about her own motivation with respect to her friend who is in trouble – she knows she has a whole mix of envy and resentment and a bit of schadenfreude, but she also genuinely (she hopes) believes she ought to do her best to help her friend. Her first instinct is to stage an intervention, and she goes some way down that road. At some point, though, she begins to suspect that that is a horribly passive-aggressive and not very useful thing to do. Now she is just uncertain. What should she do? I do think she can try, even if she is uncertain about what her best try would be – she can think carefully about what her best try would be.

    You worry that this might work for some cases but not others. You say: “But for other types of goals we wouldn’t ordinarily say that, for example, someone “tries to save a drowning victim” when all she does is mentally review possible steps towards saving the victim but doesn’t take any concrete action towards that goal (perhaps because she can’t think of anything she could do that would be helpful).”

    I agree that if she couldn’t think of anything to do it would be unnatural to say she tried to save him. But it would be ok to say ‘she did what she could’, or ‘she didn’t do anything wrong’, or ‘she failed to save him, but it wasn’t her fault’, or even… ‘she failed, but not through lack of trying’. Ok, maybe the last is a stretch; my point is that if there is nothing to do – nothing that thinking really hard revealed – it is true that she tried. And she is therefore not blameworthy.

    On your other reason for rejecting trying – yes, I agree that the agent should not try to do what is objectively right – as you say, that will get us into a Regan/Jackson style example. A sensible account of subjective obligation should not say that the obligation is to try to do what is obligatory by some other standard. Rather, the thing an agent ought to try to do is to do well by the various values at stake. And that, of course, makes good sense of the idea that there must be a connection between the auxiliary principles and the primary code. On my view, we shouldn’t see the primary code as a set of prescriptions, but as a set of claims about what matters, about what would be best, and only sometimes, about what must or must not be done. The moral theory might tell us that well-being matters, for example. So from that we can conclude that it is good for people to be pain free where possible, but that pain is sometimes a necessary side effect of something that is worth it. And from that we can conclude that we should prescribe the safe drug in this case.

  12. I wonder if you might provide a brief gloss of what “objective normativity” means? Is it a normative theory applied to a different set of beliefs and information, or is it referring to a moral order of the universe? Or is it merely an explanatory tool?

  13. Hi, J. Bogart. If you’re John Bogart, it’s great to hear from you, twice in one month!

    On to your question. The discussion in this blog (partly) concerns the contrast between objective and subjective moral principles. This contrast is not meant to be metaphysically interesting—here “an objective moral principle” isn’t necessarily one that is part of the moral order of the universe, or anything like that. The contrast doesn’t focus on whether the status of the correct moral principle is objective or subjective. Instead, it focuses on the features that, according to the moral principle under discussion, are right-making or wrong-making. If these features are objective features of the non-normative world, then the principle is said to be objective. If instead these features are subjective features, for example the epistemic states of the agent, then the principle is said to be subjective. So a principle that says an act is obligatory iff it would produce the greatest happiness (an objective feature of the world caused by the act) is an objective moral principle. A principle that says an act is obligatory iff its agent believes that it would produce the greatest happiness is a subjective principle.

    So the present contrast has to do with the content of the moral principle, not with its status. For example, a relativist might hold that a correct moral principle is one held by his own culture. But he might also hold that the content of one such correct moral principle says that an act is obligatory iff it would produce the greatest happiness. This principle would be objective in terms of content, even though this principle is not viewed as part of the furniture of the universe.

    However, this doesn’t mean that it is easy to draw a precise contrast between principles that are objective versus subjective in terms of their contents. The problem is made worse by the fact that according to some apparently objectivist principles, some of the features that affect the moral character of an act may have to do with the agent’s beliefs or other mental states. So, for example, many deontologists advocate a principle saying that it is wrong to lie. To tell a lie is (roughly speaking) to make an assertion that you believe to be false and that you intend to mislead your audience. So several features affect the act’s moral character: the fact that it is an assertion, the fact that the agent believes the assertion to be false, and the fact that the agent intends with the assertion to mislead. Even though the agent’s beliefs and intentions play a role in determining whether the act is wrong or not, still this seems to be an objectivist principle. To accommodate this kind of case, my view is that, at least within a single theory, the best way to draw the contrast is to stipulate that an objective principle holds an act to be (say) obligatory if it has feature F, while a subjective principle holds an act to be (say) obligatory if the agent believes it to have feature F. F itself might have to do with the agent’s beliefs or other mental states. Andrew Sepielli has argued for much the same view. Theorists vary in their accounts of what epistemic states are the appropriate ones for subjective principles: some hold that it is the agent’s beliefs, some hold it is the beliefs the agent would have if she considered all the evidence available to her, some hold it is the beliefs it would be reasonable for the agent to have, and so forth.

    I hope this is helpful. Note, however, that in itself it doesn’t help us draw the line between a whole theory that is objective versus a whole theory that is subjective. That would take further work.

  14. Thanks, Ellie, for your November 26 response to my November 24 reply.

    In case your first comment created any misunderstanding, it’s good that you’ve clarified that we must focus on agents’ uncertainty as well as their false beliefs. In such cases, too, you want to appeal to your “thin” sense of trying as what an agent should do. Here you say that when an agent is uncertain, at least she ought to do her best to figure out what counts as trying to do her best in the situation. However, you agree she might think the matter through, and not be able to identify anything she can do that would be helpful. In the case of the agent who wants to save a drowning victim but can’t think of anything helpful to do (the agent can’t swim, no boats or life-savers are available, she doesn’t have a cell phone to call for assistance, etc.) you agree it would be unnatural to say the agent “tried to save him.” But then you claim that “if there is nothing to do—nothing that thinking really hard revealed—it is true that she tried.” And, you conclude, she is therefore not blameworthy. But we don’t have to stretch trying to such extreme and unnatural lengths to support our conclusion that the agent isn’t blameworthy. All we have to do is note that she made a very serious attempt to identify something helpful she could do, found nothing, and therefore isn’t blameworthy for failing to save the victim’s life. She had a strong investigatory duty to identify her best option in this situation, a duty that she carried out. It turns out that she had no objective or subjective duty to save the victim, because she had no means of doing so, and she reasonably came to believe this. She escapes blameworthiness for not saving the victim for this reason, not because she tried to save him but failed in the attempt. (Of course if the agent ran around trying, for example, to find something to throw to the victim, we would all say she tried to save him. But in the unusual case we’re imagining, she stayed rooted to the spot and merely thought about her options.)

    In your last paragraph you sketch your view that what an epistemically limited agent ought to try to do is “do well by the various values at stake.” In Dr. Jill’s case, the values are avoiding death for her patient and avoiding pain for her patient. Jill has to balance these against each other, since (given her information) she must accept some pain for her patient in order to avoid the risk of killing him by prescribing the lethal drug. You’re hopeful that we can get to that result from a primary code that consists mostly of a set of claims about what matters (such as well-being). At the Rutgers conference I argued that it’s hard to see how in such a framework we can derive plausible subjective prescriptions unless the primary code includes the comparative worth of the things that matter, and includes prescriptions for what actions we are to take concerning these things. Should we merely honor them? Maximize them? Maximize their average value? Appreciate them from an aesthetic point of view? Once these features are added to your primary code, it appears very similar to the objective principles of the hybrid code I argue for. So I’m inclined to think our approaches to the problem of prescriptions for epistemically limited agents are fairly similar after all.
