Brian Weatherson has posted a new paper in which he argues against "moral hedging" — roughly, refraining from A-ing on the grounds that there's a non-zero probability that A-ing is wrong and a zero probability that not A-ing is wrong. I'd like to explain why I think his central argument fails, and hear what y'all have to say both about that argument and about the issue in general. 

The argument is that one cannot hedge without exhibiting unseemly motivations in so doing, and so one ought not to hedge. Specifically, Weatherson says, one cannot hedge without thereby being motivated to avoid wrongdoing as such. He asks us to imagine a person who has some credence that eating meat is wrong, and so refrains from eating meat. The content of her ultimate motivation cannot be to refrain from subsidizing the killing of cows, since she does not (fully) believe that this is wrong; rather, it must be to refrain from doing what's morally wrong (whatever that happens to be).

Now, it seems to me that Weatherson's argument also applies (and that he really intends it to apply [see the end of this post]) to all efforts to take the probabilities of moral propositions into account, not just to hedging. For example, there are some cases in which my credence is non-zero both that A-ing is wrong and that not A-ing is wrong. There's no "safe" action here, as the Casuists would say, and so I can't "hedge" exactly, but I can still behave in such a way as to take into account the probabilities as I see them.
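
To make that concrete with a toy example (the numbers, and the expected-disvalue rule itself, are mine, purely for illustration; nothing in the argument turns on this particular rule): suppose my credence that A-ing is wrong is 0.3, and that if it is wrong, it is a wrong of magnitude 10; while my credence that not A-ing is wrong is 0.1, with a magnitude of 5. Then the expected moral disvalue of A-ing is 0.3 × 10 = 3, while that of not A-ing is 0.1 × 5 = 0.5. Neither option is "safe", but an agent who takes the probabilities into account as she sees them would still refrain from A-ing, since 3 > 0.5.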

In support of the claim that this motivation is unseemly, Weatherson approvingly quotes the "fetishism" argument that Michael Smith offers against motivational externalism in The Moral Problem:

"Good people care non-derivatively about honesty, the weal and woe of their children and friends, and well-being of their fellows, people getting what they deserve, justice, equality, and the like, not just one thing: doing what they believe to be right, where this is read de dicto and not de re. Indeed, common sense tells us that being so motivated is a fetish or moral vice, not the one and only moral virtue". (Smith, 1994, p. 75). 

I hope it's not out of place to say that I disagree with Weatherson at every step. To wit:

1) I don't think the hedger must be motivated to avoid wrongdoing as such;

2) I don't think there would be anything wrong with the hedger's motivations even if she were occasionally so motivated; and

3) I don't think hedging would be (subjectively) wrong or immoral even if one could not hedge without sometimes exhibiting such a motivation.

 

On 1): There are at least two ways to be a hedger without having any motivations to avoid wrongdoing as such:

First, one's uncertainty about whether, e.g., eating meat is wrong could be due to uncertainty about whether P, where P is a moral proposition that can serve as the content of an apparently less objectionable motivation. (To be fair, Weatherson concedes something similar to this point.) For example, you might be uncertain whether in eating beef you'd be treating the cow with proper or due respect, or wronging the cow, or wrongfully killing the cow. And so you might refrain from eating meat out of the ultimate motivation to avoid wronging the cow, or wrongfully killing it, or treating it with less respect than it's due. Now, Weatherson never explains why he thinks the motivation to avoid wrongdoing is objectionable. So maybe he'd think these motivations are similarly objectionable. But my (very shaky) intuition is that the impropriety of so-called "fetishism" is attributable to its impersonal character. The fetishist seems bad because he seems not to care ultimately about anyone; he cares only about doing what's right. His motivation is weird for the same reason that the exclusive motivation to maximize utility is weird. But it is absurd to suggest that someone whose ultimate motivations are treating X with proper respect and avoiding wrongfully A-ing X doesn't really care about X.

Second, the hedger may be motivated to avoid C-ing, where "C-ing" is a non-normative description of the act-type designated as wrong by the moral views in which she has credence. Note that it's plausible that one may be ultimately motivated to avoid what is wrong according to the views one fully believes (understood de re). That follows from the kind of motivational internalism that, e.g., Smith is defending in The Moral Problem. So why does Weatherson not admit the possibility that one may be motivated to avoid (and, again, read this de re) what is wrong according to a moral view in which one has a credence of less than 1 (to a degree roughly, though not exactly, proportionate to the credence)? For example, if I have some credence that killing a cow is wrong, why can't I thereby be underivatively motivated to some degree to avoid killing the cow? I admit that I'm not sure that this suggestion is right. But I don't see why it's wrong, and Weatherson doesn't give us any reasons to think that it's wrong. (Also, FWIW, much of what Smith says in his influential paper "Evaluation, Uncertainty, and Motivation" suggests that he'd support the suggestion I'm making here.)

On 2): It's important to see that one can side with Smith and against Weatherson. Smith's target is a form of "strong externalism" on which the very general connection between moral judgment and motivation is explained by positing a very general motivation to do right/not to wrong as such, of which motivations like those Smith valorizes are derivative. Smith's beef doesn't seem to be (and certainly doesn't need to be) with ever having a de dicto moral motivation; it's with having this as one's single ultimate motivation, in lieu of being ultimately motivated by these other things.

One might go along with Smith's criticism of strong externalism, while allowing that, at least in certain cases of moral uncertainty, it's fine to be motivated not to do wrong/to do right/etc. if indeed such motivation is necessary to take account of the probabilities of the relevant moral propositions. 

Indeed, I think Weatherson should see this more nuanced position as attractive. To see why, recall the combination of positions I attributed to him at the end of (1). He seems to think that while I can be motivated to do (de re) what I believe is right, I can't be motivated to do (de re) what I have some (low?) credence is right. Again, I think this is mistaken, but let's assume arguendo that it's correct. Then it would seem that there's an important difference between acting under full belief about morality and acting under moral uncertainty. In the former case, a de dicto motivation to be moral is indeed "one thought too many" in the sense that it's not necessary for guided action. In the latter case, assuming Weatherson is right, it's not "one thought too many". It's precisely the thought the agent needs to have to be responsive to the probabilities, as she sees them, of the relevant moral propositions. But if I need to be so motivated to hedge or otherwise be responsive to moral uncertainty, then it doesn't seem right to call such a motivation unseemly unless we have some other ground for thinking that hedging, etc. are wrong. Weatherson gives no other ground. This is all the more persuasive if we imagine the agent as being ultimately motivated to do (de re) what's right in all of the "easy" cases — e.g. motivated to save her wife because she's her wife, motivated not to kick a dog because it'd hurt the dog, and so on.

It's also worth noting that even if the moral hedger must occasionally be motivated to avoid doing wrong, she needn't be so motivated in every case in which she's uncertain. We are familiar with the idea that the utilitarian may aim to do things other than maximize utility; he may follow rules of thumb instead, and be motivated to maximize utility only on those occasions when he performs the mental act of checking on his rules to make sure they are utility-conducive. We may say something analogous here: A moral hedger may act on rules of thumb — e.g. Alex Guerrero's "don't know, don't kill", or maybe "don't kill any large vertebrates" — almost all of the time, and be motivated to do what's right only on those occasions when she performs the mental act of checking whether a given rule accords with her view about what it's subjectively right to do under moral uncertainty.

Finally (3): Let's just assume that hedgers must be motivated to avoid wrongdoing as such, and that this is an unseemly motivation. Can we conclude from this that we (subjectively) shouldn't hedge? Hardly. To see why not, start by considering another case of unseemly motivation: I give a billion dollars to Greenpeace for the sole reason of upsetting an oil baron uncle whom I hate. There seem to me to be two sensible evaluations of my behavior: (1) I did the right thing, in the sense that I shouldn't have not donated the money. (And I mean, talk about fetishism — imagine not trying to save the planet because you can't (now) do that from a pure motive! What could be a worse motive for such an omission?) And (2) I did the wrong thing despite contributing to the planet's salvation, because I could in some relevant sense have donated the money with a different motive, but didn't.

The parallel to the first response in the hedger's case is that hedging may be right even if it's done with an unseemly motive. But this is of course not what Weatherson wants. As for the second response: If Weatherson is right that hedging requires a de dicto motivation to avoid wrongdoing, then this response does not have a parallel in the case of the hedger. For there is no relevant sense in which she could have hedged with a different motive. But I have a hard time believing that an action may be wrong purely by dint of the motive with which it's done when it's impossible (inconceivable? knowable a priori to be impossible? — what does Weatherson want to say here?) for someone to do that action without that motive. Now, again, I think Weatherson's wrong that one can't hedge without that motive — but then his argument fails on other grounds.

That's all I want to say about Weatherson's explicit argument. There's a sorta neat analogy he draws between moral and prudential motivation that merits discussion, but I'm still thinking of what I want to say about that. (I don't think it impacts any of the criticisms I've offered.) I want to conclude this post, though, by taking note of a gestural remark in Weatherson's conclusion that, I believe, indicates the deeper roots of his beef with the "moral uncertainty" project.

Weatherson writes: "Morality should be the guide of life, not probabilities of moral propositions."

I'd want to say: "Yeah, this is true if you mean that there's a sense in which everyone should fully believe (all and) only the moral truths, and that those who believe them should guide themselves by those beliefs. But if you mean that when we, here, now are uncertain about morality, we should guide ourselves by the moral truth and not by the probabilities we assign to moral propositions, then you are saying something that either is false or at least presupposes something false. "Should" implies "can", and you're suggesting that I guide myself in a way that I (and you, and everyone else) simply cannot. For morally uncertain agents like us, probabilities must guide our actions, and the only question worth asking is 'How?'"

I think that Weatherson and I (and the rest of the "moral uncertainty" people) must have a very deep, tough-to-articulate disagreement about guidance or "should" or something; otherwise we would not each write as we do. I don't know exactly what the disagreement is, and at any rate I don't want to prolong this post by trying to suss it out. Perhaps that's best left for the comments section.

 

19 Replies to “Moral Uncertainty and Motivation”

  1. I need to say something longer about the first two points, but two quick things about a couple of points at the end.
    I agree that the argument that we shouldn’t hedge is weak. What I really care about is the argument that we are not compelled to hedge.
    A woman is (contra my advice) moved by the moral uncertainty argument and for that reason doesn’t get an abortion. (That’s the case I really care about of course, not the vegetarianism case.) Does she thereby do something wrong? Well, perhaps it would be wrong for her to have a baby in her circumstances, but those arguments are tricky, and I would say most of the time it is not wrong. She acts from an imperfect motivation, but that’s barely a moral misdemeanour. (Agreeing a bit I think with what you say earlier, it might be a serious moral failing to always act from the de dicto desire to do what’s right, but barely a flaw to do so once.) So no, she doesn’t do anything wrong, or at least hardly anything wrong worth mentioning. (Unless it is wrong to have the baby – but we’re setting that aside.)
    Here’s the only case I want to insist on. A woman who has no good response to the hedging argument available to her nevertheless has the abortion, because she just isn’t moved by these probabilistic considerations. I want to say that’s perfectly fine. The existence of a theoretical response to the argument is enough to justify her action, even if she hasn’t worked through that response herself.
    Short version: It wouldn’t necessarily be wrong to hedge in any given case. But doing so reveals (I think – though I need to respond to the earlier points) a character flaw, and that’s why it’s fine to not hedge.
    I’m in much bigger disagreement with the point at the end about guidance. I have no idea what the probability is that eating meat is wrong. I’m even more in the dark about that than I am about whether eating meat is wrong. And I have no idea how to act on that probability if it should be taken into account. The various theories around here get very complex, and often very unintuitive, very quickly.
    Nobody in this debate has anything like guidance to offer in the sense you’re looking for. “Do the right thing” isn’t exactly operationalisable advice, but it is simple and intuitive. The alternatives are not operationalisable either, are to my mind no easier to follow than the simple rule, and are not simple and intuitive. So they look like complications introduced without any practical payoff.

  2. Hi Andrew,
    Not having read the paper, I am going by your summary, but I am wondering what you think about this rather flat-footed response. Imagine that Jane has some credence that having an abortion would be wrong, and so refrains from having one. If Weatherson is right, then the content of her ultimate motivation cannot be to refrain from having an abortion, since she does not (fully) believe that this is wrong. According to your summary Weatherson would claim, rather, that her motivation “must be to refrain from doing what’s morally wrong (whatever that happens to be).” But what about describing her motivation in terms of risk avoidance? Perhaps Jane refrains from having the abortion because she wants to avoid the risk of wrongfully killing another person. I assume I am missing something, but why isn’t this an option that avoids the fetishism worry?

  3. Here is an example that comes to mind. Imagine that I could really use my son’s help when I am old and grey – his help would make my life much better, but is not a dire need or anything. But his help will cost him a good deal (say, downgrading his job). We both start assuming he has a filial obligation to move home and help me. But I happen to read something thought-provoking about such obligations and develop uncertainty about whether this is really right. I share the material with my son and he is also unsure.
    Later he says he has decided to move home because he doesn’t want to risk wronging me or being a bad son to me. This seems like an admirable motive with normative content and even one that is aptly aimed at me, so to speak. I have trouble seeing the sense in which it involves one thought too many.

  4. So far I’m most convinced by Andrew’s reply to (3).
    Brian’s examples (in these comments) are not persuasive to me, because they seem to derive their force from our (his and mine, anyway) moral views and not from general features of hedging. If instead of the second woman who wants to get an abortion (and does, despite her judgment that there is some chance that it is morally wrong), we think about someone who wants to do something we ourselves are quite sure is immoral, my intuition goes the other way.
    For instance, suppose Huck decides just to turn Jim in to the authorities, because although he does think there’s a pretty good chance that Jim has a moral right to freedom, he’s just unmoved by the probabilistic considerations, while he is motivated by the thought that it’s important to respect one’s society’s property norms. I take it nobody is going to say, “That’s fine”, even if we think it might be a character flaw for Huck to be motivated by the thought that there is a chance Jim has a moral right to freedom.

  5. If Weatherson is primarily arguing against the imposition of moral hedging as a moral rule itself, I agree with him. I fail to see how any general rule about moral hedging could be useful. I do not see how a theoretical choice between X, which we are certain is morally permissible, and Y, of which we are uncertain about its permissibility, can have any practical application. If X and Y are mutually exclusive in fact (e.g. abortion or no abortion), and we are certain of X’s moral permissibility, this logically means that we are certain that Y is not morally required. If the decision is purely a moral decision, and we are certain of X’s moral permissibility, this must mean that we are morally certain that Y is not permissible; for, in a circumstance of mutual exclusivity, we could not possibly have moral certainty that both are permissible. If both might be morally permissible to varying degrees, then we must question our certainty of the permissibility of X, since weighed against Y, it may be wrong to choose X in some cases.
    The easy answer to this is that no decision is made on purely moral grounds; as such, both options may be morally permissible, such that we must turn to other considerations to make the decision. But if that is the case, then moral certainty about one of the options cannot possibly serve as a firm guide, since there must be other non-moral factors which assist in our decision. This being the case, we could not possibly posit a rule where it is “wrong” or “irrational” to take the option which is more morally uncertain.
    Of course, we must as a practical matter weigh the apparent moral value of two competing options. But if we feel moral certainty about one option and only moral probability about a mutually exclusive option, the appropriate course of action is likely not to discard the latter, but to re-evaluate the certainty we have about the former. Otherwise it seems we are abdicating our responsibility to make a reasoned moral decision, which I think may have been a concern Weatherson was driving at.

  6. Thanks to everyone for replying. My thoughts:
    –Brian Weatherson–
    I had not intended to concede that, if Smith’s argument is right, then de dicto motivation to avoid wrongdoing in cases of moral uncertainty is ever-so-slightly unseemly. My thought was more that, if Smith is right, then de dicto motivation to avoid wrongdoing is of course unseemly *in some cases*, but that (granting other premises of your argument) cases of moral uncertainty are not among them.
    Re: the “should”/”permissible” distinction on which you’re leaning — This is very helpful. But I still object. You claim that “hedging reveals…a character flaw, and that’s why it’s fine to not hedge”. Of course, I continue to think that it doesn’t reveal a character flaw, but we’ll hash that out later. I have two other objections:
    First, in my original post, I wrote: “…I have a hard time believing that an action may be wrong purely by dint of the motive with which it’s done when it’s impossible for someone to do that action without that motive.” I continue to find this persuasive, and I see no reason why it’d not be *at least* as persuasive (see the next objection) if we replaced “may be wrong” with “may be rendered permissible (when it’d not be otherwise)” and replaced talk of its motive with talk of its complement’s motive. That is to say, I think that if not hedging is permissible, it must be so for other reasons; the motive that you say must accompany hedging doesn’t seem to me to do any work here.
    Second: I think this clarification actually makes your position less tenable. You seem to be drawing on the distinction between a consideration that tends to make it the case that A-ing is permissible, and a consideration that tends to make it the case that not-A-ing is wrong. And in clarifying your position as you have, you seem to be suggesting that the fact that not-A-ing is necessarily accompanied by a bad motive is a consideration of the former sort. Insofar as I think motive has either of these effects, though, it seems more plausible to me that it has the latter one. I.e. a bad motive can taint an otherwise permissible act, but it can’t *remove* any taint that may have attached to that act’s complement so as to make it permissible.
    As for the “guidance” point at the end: Perhaps I can make my position more plausible by noting two features of the probabilities mentioned in what I regard as the relevant probability-relative norms. (E.g. “If there’s such-and-such a probability, do X”.) First, I think they are neither subjective probabilities (i.e. credences) nor objective probabilities, but rather epistemic probabilities, by which I simply mean probabilities mentioned in statements (in inner or outer speech) that express credal states. (Think of the sort of view Seth Yalcin defends.) Second, I think that among such statements are “There’s a decent chance that…”, “More than likely…”, and so on. I don’t know about everyone else, but I certainly find myself saying to myself, “More than likely consequentialism’s true” and “There’s a decent chance that killing a gorilla is as bad as killing [fill in the blank]”. In saying these things, it’s not as though I’m uncertain about the objective probabilities, and I’m being as precise as possible given that uncertainty. No, rather, I’m giving exhaustive expression to my credal state at the time. I don’t see why it’d be at all difficult to guide my behaviour by norms that mention these epistemic probabilities.
    –Brad Cokelet at 2:46 —
    The motivation you ascribe to Jane differs from the motivation to avoid wrongdoing as such in two ways: (1) It is, as you say, a “risk avoidance” motivation, and (2) The risk involved is a risk of wrongfully killing another person, not of wrongdoing.
    I suggest the possibility of motivations to avoid wrongfully killing in my original post and suggest that these are unfetishistic, so I concur with that strategy. As for the “risk” part — I’m not so sure. For one thing, it seems to me that IF the motivation to avoid A-ing is unseemly, the motivation to avoid risking A-ing is unseemly. Secondly: The question is whether Jane could plausibly be construed as having an *ultimate* motivation to avoid risking wrongfully killing. But in the case as described, this does not seem plausible. Rather, I would think that she is motivated to avoid risking wrongfully killing in virtue of a more fundamental motivation to avoid wrongfully killing.
    –Brad Cokelet at 4:24 —
    Completely agree. Like the example.
    –Jamie Dreier–
    Yes, I agree about where the force of Brian’s example comes from.
    –Matt Pierce–
    As I say in my post, Weatherson’s argument seems to target responses to moral uncertainty other than simple hedging. Like you, I don’t find simple hedging plausible, but I do find some other responses plausible.
    As for your *reasons* for objecting to hedging: while a thorough response would take us into questions about how permissibility is related to the weights of reasons, and to the weights of moral reasons specifically, my initial response is that X’s permissibility and Y’s permissibility are compatible, and so certainty in the former does not commit one to a credence of zero in the latter. You raise the idea of “degrees of permissibility”, about which I’d like to hear more, but then you seem to undercut this idea by demanding that we “must question our certainty of the permissibility of X”.
    You then say, basically, that we should not make decisions on moral grounds alone — that we ought to look to non-moral grounds as well. I agree with this. In fact, this is why in my own work I focus more on normative uncertainty generally. It’s worth mentioning, though, that on some prominent accounts of moral permissibility — Doug Portmore’s springs to mind — deontic statuses like “morally permissible” depend on non-moral as well as moral reasons.

  7. Another thought: the risk view does better in some cases, including the abortion one, because it allows for motivation by second-personal reasons.
    Say Jane is unsure whether early fetuses are persons with rights. She is unsure whether they can be wronged. Even with that uncertainty, she can refrain from aborting in order to avoid the risk of wronging her fetus.
    Alternatively, she can refrain in order to avoid wronging persons (in general).
    The former motive is aimed/directed at the fetus in a way that the latter is not, and that helps mitigate the fetishism worry, I think. Even if the second motive is not unseemly, the former one is better.

  8. My last comment should probably start like this:
    Another thought: the risk view does better in some cases, including the abortion one, because it allows for motivation by patient-directed demands or obligations.

  9. Andrew,
    My point about “degrees of permissibility” was simply that the extent to which option X is permissible is dependent upon the value placed upon those options which will logically be excluded by X in a given case. So I do not find it helpful to say that we have certainty about the moral permissibility of X unless we are prepared to also say that none of the foreseeable consequences of X raise any moral problems for us. My view is that moral certainty of any kind, even with regard to what is permissible rather than required, is incoherent (I do not suggest that we disagree on this point).
    I realize I am veering somewhat away from the issue of motivation which is central to this debate. This may be because I view the problem of motivation to largely disappear if the moral agent directs his decision-making to the desirability of likely consequences of his action. So long as motive is tied to producing consequences that are viewed as morally valuable, rather than to doing what is “right” (whatever that may turn out to be), I agree that a probabilistic analysis does not of itself reveal any character flaw. But the integrity of that probabilistic analysis requires that it be done by weighing the merits of the likely consequences of choosing X versus Y in practice (what I think you are referring to as “epistemic probability”), rather than the relative strength of X versus Y as abstract moral statements. The results of the first type of analysis will no doubt inform the second, but the second is not an adequate proxy for the first. Refraining from an abortion may well be animated by a desire to avoid the moral risk of wrongful killing; but this is only a satisfactory moral justification if one has decided that the risk of not aborting – say, the risk that one may be wrongfully neglecting the primacy of one’s own physical well-being – is outweighed by the risk of aborting; and this weighing will have regard to a host of moral and non-moral factors. So-called hedging is only problematic if it neglects this analysis in favour of a default rule to adopt what seems intuitively to be the least morally risky premise.
    My familiarity with these issues is no doubt much more superficial than yours and that of others on this thread, and I have much more reading and thinking to do. But as I work through it I think my animating concern is this: while moral risk mitigation is not objectionable per se, avoidance of one moral risk will almost always involve acceptance of a different moral risk, and so the risk avoidance model does not provide a sufficient moral calculus in any given case. I think here I am drawing somewhat on Dworkin’s distinction between moral uncertainty and moral indeterminacy, and the belief that hedging based on abstract moral statements tends toward the latter end of the spectrum; I’d be interested in your thoughts on how his analysis might relate to this issue.

  10. Hi
    Sorry to be so late on this – very interesting discussion. I just wanted to give a quick response of the number 1 type.
    The uncertainty in question is uncertainty about whether something is wrong overall. Ross predicted that this type of uncertainty is widespread. We’ve got certainty about what makes acts wrong to a degree, but we are rarely certain about where the balance of wrongness lies, given that how wrong a consideration makes an act in an individual case varies in different contexts. So, consider a person who thinks that there are various things that make eating meat wrong to a degree (causes pain to the animal, leads to global warming and so on) and various considerations that make eating meat right to a degree (the animal is not a person, it’s tasty and so on). If this person is uncertain about where the strengths of these considerations lie, she might conclude that eating meat might be wrong.
    Assume then that internalism is true about wrong- and right-making judgments: you can sincerely think that something makes an act right/wrong to a degree only if you are motivated to pursue/avoid that thing (this makes my response a variation of your response). This would mean that the person who thinks that eating meat might be wrong would have to care de re about the things that make that act right and wrong even if they are uncertain about the overall judgment. If this person then decides not to eat meat on the grounds that it might be wrong, it is guaranteed that she will have appropriate motivations.

  11. –Brad–
    Sorry, but I think I might be missing something. As I wrote originally — yes, I think the “second-personal moral motivation” route is promising. What I’m not understanding is the relationship between this and risk. I mean, yes, I think a morally uncertain person can be motivated to avoid risking wronging someone; but I also think she can be motivated to avoid wronging someone. I don’t see how the motivation’s having second-personal moral content in any way depends on its having “risk” content.
    –Matt–
    I basically agree with you that a simple hedging rule will usually be inapplicable. You also suggest that people may be motivated to produce morally valuable consequences, rather than to do what’s right. Two comments: (1) People can be so motivated only to the extent that they are consequentialists; (2) Brian W. can speak for himself, but my suspicion is that he’d find the motive you attribute to be unseemly or “fetishistic”, too.
    –Jussi–
    Yes, this seems like a promising thing to say about many cases of moral uncertainty. Thanks.

  12. In response to Matt — I should have said “people can be so motivated only to the extent that they think consequences matter”. Obviously, you may have a low credence in consequentialism, but a very high credence that consequences matter.

  13. Hi Andrew,
    I think I might just have been confused about how much you were going to grant to Weatherson for the sake of argument. If Jane is deeply uncertain about whether her fetus is a person, can she refrain from aborting in order to avoid wronging her fetus? I took it that Weatherson argues no, and I was thinking you could grant that and run the risk route.
    But I take it (now) that you are simply rejecting the claim that if she is deeply uncertain she cannot refrain from aborting in order to avoid wronging her fetus (rather than just refraining from wronging persons in general)?
    For my part I can at least feel the pull of the contrary idea…maybe you could try to explain that intuition away by looking at implications (and talking about cancellation/pragmatics stuff, like Schroeder does in Slaves of the Passions)? I am assuming you would at least agree to this:
    “X refrains from phi-ing in order to avoid wronging Y”
    implies but does not entail
    “X believes that by phi-ing he would wrong Y”
    Just a random idea!

  14. This sounds pretty close to Moore paradoxical to me:
    “I refrained from phi-ing in order to avoid violating Y’s right to life, but I am not convinced that Y has a right to life.”

  15. I didn’t think I was disagreeing with what Jamie said above, and I’m worried that the talk about the appropriateness of hedging is distracting here.
    Here’s the question I want to take sides on. Consider any action X that would be permissible for a morally omniscient agent. Could it be impermissible for an agent with less moral knowledge to X, solely on grounds that they should be hedging against moral risk?
    I say no, moral uncertainty is not the kind of thing that makes permissible actions impermissible.
    In the paper I take stands on other things too, especially about motivation, but it’s that conclusion that I really want to defend, not any general claim about hedging.
    On Jamie’s Huck/Jim case, I think turning in runaway slaves is wrong no matter what motivation, or probabilistic views, one has. I don’t mean to take any view inconsistent with that. (And I don’t think I have, but I’m never the best judge of what commitments I’ve made.)

  16. Andrew, I don’t think going to epistemic probabilities really helps here. I mean, I literally have no idea what the epistemic probability is that meat eating is wrong. I don’t think I have any firmer idea about that than I do about whether meat eating is wrong.
    And even if I did, I really have very little idea about what the connection is between epistemic probability of action and action-guidance. I suspect that something like your expected moral value approach is best, but it’s a hard question, and there are tricky technical questions about whether your approach will work.
    On the other hand, at least on my view, I know what the connection is supposed to be between the moral claims and action: “Don’t do wrong things” is the only rule!
    So on my view there is one relevant thing we’re uncertain about. On yours, there are two relevant things we’re uncertain about: what the probabilities are, and what to do about them. I think my view provides *more* guidance, not less.
    All this is to say, the big difference between me and the rest of the moral uncertainty crowd (at the end of the original post) is that I think the regress arguments against conciliatory views about disagreement work, and presumably everyone else doesn’t. If you think those regress arguments work, they really undermine the attractiveness of any kind of principle connecting probabilities of normative claims to action.

  17. Aha, I see.
    Okay, I agree, then, moral uncertainty doesn’t make permissible actions impermissible.
    Can it make otherwise impermissible acts permissible? I guess Gideon Rosen thinks so. But it’s hard to tell, often and particularly in Gideon’s examples, whether the actions become permissible or excusable. I’m inclined to think the ‘ultimate motivations’ involved are going to affect excusability and its kin, and not permissibility.

  18. –Brad–
    I find your “Moore paradoxicality” claim at 11:27 plausible, but that’s because expressions like “I want to A S’s X” imply that S has an X. As for your example at 10:56, which includes no such construction — no, I don’t agree that “X refrains…” implies “X believes…”.
    When Pharrell Williams sings, in the latest Daft Punk single, “I’m up all night to get lucky”, do you think that implies an especially high credence that by being up all night he will, in fact, get lucky? (And if so, is he mis-speaking in calling it “getting lucky”?) I’m sure Pharrell would have such a credence, but still…
    And just generally, I find it very psychologically implausible that, in the typical cases, one may be motivated to avoid risking A-ing where that is not due to a further motivation to avoid A-ing.
    –Brian–
    1) I think Jamie was simply saying that he didn’t think the example in your first comment provided much intuitive support for your claim about the effect of unseemly motivations on permissibility.
    2) You put your central question like this: “Could it be impermissible for an agent with less moral knowledge to X, solely on grounds that they should be hedging against moral risk?”
    I would have thought the central question is whether there’s some sense in which the agent *should hedge*, not whether its being the case that she should hedge renders not-hedging *impermissible*. I don’t need to disagree with your “permissibility” claim to say everything I want to say about normative uncertainty, and indeed I express sympathy in a few places for what you say about (im)permissibility specifically.
    3) Re: epistemic probabilities — I understand that you’re unwilling to express your doxastic attitude toward, e.g., McMahan’s theory of jus in bello by saying “There’s a .53 probability that it’s true.” I am, too. But what I want to say is that:
    (a) *I* certainly have attitudes towards such theories that are expressible by claims like “It’s more probable than Walzer’s theory of the same” or “It’s more than likely true” or “There’s a decent chance it’s true”, or even “It’s a live possibility that it’s true”.
    and that:
    (b) There are rules that specify what to do under conditions like the aforementioned (e.g. when view X is more probable than view Y but there’s a decent chance view Y is true, etc.).
    I take it you either want to deny that you have the attitudes mentioned in (a), or deny that one ought in any sense to follow any rule of the type mentioned in (b). If it’s the former, then I’d say you’re misattributing attitudes to yourself. For on the view of epistemic probability I favor, these attitudes are simply imprecise — maybe *very* imprecise — credences, and I’m inclined to think you have at least such credences in every proposition you’ve entertained but are not sure of. If it’s the latter, then I don’t yet see a compelling argument that we oughtn’t to follow such rules, although I can see that you think something like the “regress” argument against conciliationism is going to do some work here. On that point:
    4) I look forward to hearing more about this. I *think* that the apparatus I develop in my 2013 Nous paper “What to Do When You Don’t Know What to Do When You Don’t Know What to Do…” (note the ellipsis!) addresses your concerns; I’d be interested to see whether you think it addresses them adequately. (Sorry for the self-plug.)

  19. Ha! Love it. I’m not convinced your example sheds much light on the moral cases that presumably matter, but who knows. I suppose one would need to go pester a linguist about it.
