Let us define objective consequentialism and subjective consequentialism, respectively, as follows:

OC: S’s performing x is morally permissible if and only if, and because, there is no alternative that would produce more value than x would.

SC: S’s performing x is morally permissible if and only if, and because, there is no alternative that would produce more expected value than x would.

Philosophers, such as Frank Jackson, have argued that we should reject OC and accept SC instead. In doing so, they use the following sort of example, which I borrow, with some minor revisions, from Derek Parfit:

Mine Shafts: A hundred miners are trapped underground in one of two mine shafts. Floodwaters are rising, and the miners are in danger of drowning. Sally can close one of three floodgates. Let ‘W(G1, SA)’ designate the possible world in which Sally closes Gate 1 and the miners are in Shaft A, and likewise for the other possible worlds. Depending both on which floodgate she closes and on which shaft the miners are in, the results will be one of the following six possibilities:

W(G1, SA) -> Sally saves 100 lives

W(G1, SB) -> Sally saves no lives

W(G2, SA) -> Sally saves no lives

W(G2, SB) -> Sally saves 100 lives

W(G3, SA) -> Sally saves 90 lives

W(G3, SB) -> Sally saves 90 lives

Sally doesn’t know which shaft the miners are in, but she knows all the above as well as the fact that, given her evidence, there is a 50% subjective chance that the miners are in Shaft A as well as a 50% subjective chance that the miners are in Shaft B. As a matter of fact, though, the miners are in Shaft A.
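
To make the expected values explicit, here is a minimal sketch (my illustration; it is not part of Parfit’s example) that computes the expected number of lives saved by each option, given Sally’s 50/50 credence over the two shafts:

    # Expected lives saved, given a 0.5 credence for each shaft
    outcomes = {
        'Gate 1': {'Shaft A': 100, 'Shaft B': 0},
        'Gate 2': {'Shaft A': 0, 'Shaft B': 100},
        'Gate 3': {'Shaft A': 90, 'Shaft B': 90},
    }
    credence = {'Shaft A': 0.5, 'Shaft B': 0.5}
    for gate, lives in outcomes.items():
        expected = sum(credence[shaft] * n for shaft, n in lives.items())
        print(gate, expected)  # prints 50.0, 50.0, and 90.0, respectively

Closing Gate 3 thus maximizes expected lives saved (90 versus 50 for either of the other gates), even though it is guaranteed, whichever shaft the miners are in, not to maximize actual lives saved.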

Sally doesn’t know whether it’s closing Gate 1 or closing Gate 2 that will maximize value, but she knows that there is no chance that closing Gate 3 will maximize value. And, according to OC, it is wrong to fail to maximize value. So, according to OC, Sally should close Gate 1, since the miners are, in fact, in Shaft A. But this seems both impractical and counterintuitive. Given her ignorance of whether it’s Gate 1 or Gate 2 that will maximize value, it seems that she should close Gate 3. Closing either Gate 1 or Gate 2 is just too risky. It carries the subjective risk of saving no one, whereas if she closes Gate 3, she can be certain that she’ll save ninety miners. Consider also that we would not blame her if she were to close Gate 3, but that we would blame her if she were to close Gate 1. Thus, OC fails to provide the basis either for making decisions about what to do or for making assessments of blame/praise. Jackson argues, accordingly, that we should reject OC and accept SC instead.

But it seems that SC is subject to an analogous sort of objection, only the objection involves normative uncertainty as opposed to non-normative uncertainty. Below, I’ll give an example to show this, but first I’ll need to explain my assumptions.

One thing that I assume is that although impermissibility doesn’t come in degrees, the moral badness of impermissible acts does. That is, some impermissible acts are morally worse than others. For instance, on SC, any act that fails to maximize expected value is impermissible. But it’s plausible to suppose that some acts that fail to maximize expected value are worse than others, as some fall further from the mark than others do. Likewise, on Kantianism (K), any act that violates the categorical imperative is impermissible. But it’s plausible to suppose that some violations of the categorical imperative are morally worse than others. For instance, it seems that violating the categorical imperative by murdering someone who is in the prime of her life is much worse than violating the categorical imperative by telling someone a nearly harmless lie.

Let’s also assume that we can measure the degree to which acts are morally good/bad in terms of their deontic values, where the deontic value of a supererogatory act is positive, the deontic value of an impermissible act is negative, and the deontic value of a non-supererogatory permissible act is zero. Let ‘DV(x)’ stand for the deontic value of an act, x. Now, here’s the example:

Normative Uncertainty: Sandra must choose among the following three options: a1, a2, and A3. Both a1 and a2 are act-tokens, and A3 is an act-type—viz., that of doing something other than either a1 or a2. If she does a1, she will maximize expected value but violate the categorical imperative. If she does a2, she will abide by the categorical imperative but fail to maximize expected value. And if she does A3, she will both fail to maximize expected value and violate the categorical imperative. Let ‘W(a1, SC)’ designate the possible world in which Sandra performs a1 and SC is true, and likewise for the other possible worlds. Depending both on which moral theory is true and on which act she performs, the deontic value of her action will be one of the following six possibilities:

W(a1, SC) -> DV(a1) = 0

W(a1, K) -> DV(a1) = −100

W(a2, SC) -> DV(a2) = −100

W(a2, K) -> DV(a2) = 0

W(A3, SC) -> DV(A3) = −10

W(A3, K) -> DV(A3) = −10

Sandra doesn’t know whether SC or Kantianism is true, but she knows all the above as well as the fact that, given her evidence, there is a 50% subjective chance that SC is true as well as a 50% subjective chance that Kantianism is true. As a matter of fact, though, SC is true (or, at least, it gives the correct account of what an all-knowing being should do).
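
The analogous calculation for Sandra (again my illustration, using the stipulated deontic values and her 50/50 credence over the two theories) gives the expected deontic value of each option:

    # Expected deontic value, given a 0.5 credence for each theory
    deontic_value = {
        'a1': {'SC': 0, 'K': -100},
        'a2': {'SC': -100, 'K': 0},
        'A3': {'SC': -10, 'K': -10},
    }
    credence = {'SC': 0.5, 'K': 0.5}
    for act, dv in deontic_value.items():
        expected = sum(credence[theory] * v for theory, v in dv.items())
        print(act, expected)  # prints -50.0, -50.0, and -10.0, respectively

So A3 uniquely maximizes expected deontic value (−10 versus −50), even though, whichever theory is true, A3 is certain to be impermissible.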

Sandra doesn’t know whether it’s performing a1 or a2 that is morally permissible, but she knows that there is no chance that performing A3 is morally permissible. And, according to SC, it is wrong to fail to maximize expected value. So, according to SC, Sandra should perform a1, since, by stipulation, a1 is the act that maximizes expected value. But this seems both impractical and counterintuitive. Given her ignorance of whether it’s SC or Kantianism that is true, it seems that she should perform A3. Performing either a1 or a2 is just too risky. It carries the subjective risk of performing an act with a deontic value of −100, whereas if she performs A3, she can be certain that the deontic value of her act will be no worse than −10. Consider also that we would not blame her if she were to perform A3, but that we would blame her if she were to perform a1. Thus, SC fails to provide the basis either for making decisions about what to do or for making assessments of blame/praise.

So if we should reject OC on the basis of examples such as Mine Shafts, then we should reject SC on the basis of examples such as Normative Uncertainty.

49 Replies to “Consequentialism and Uncertainty”

  1. Doug, I’m a little confused about the terminology, but I think I get the basic idea.
    What is it that seems impractical and counterintuitive? That Sandra should perform a1, or that SC says that Sandra should perform a1? I have to say that I don’t find either of these counterintuitive (except in a very superficial way).
    You may be right that we would not blame Sandra for performing A3 and would blame her for performing a1 (I think this is hard to decide when the example is so schematic). But this is because (I figure) the example is set up in such a way that Sandra is reasonably confused about what she ought to do. It’s understandable that she is confused, so we won’t blame her for doing what she oughtn’t.

  2. Jackson, I take it, would argue that OC is impractical in that it could not be used as “a guide to action, for a guide to action must in some appropriate sense be present to the agent’s mind.” And I take it that Jackson would claim it is counter-intuitive to think that Sally ought (or is required) to close Gate 1 in Mine Shafts, which is what OC entails.
    Now, my point is that although there may be some subjective notion of rightness that is closely tied to blameworthiness and that OC fails to capture (as Mine Shafts demonstrates), whatever this subjective notion is, it is also a notion that SC fails to capture, as Normative Uncertainty demonstrates.
    So this just goes along with my earlier post in which I argued that SC doesn’t give us the correct account of either subjective or objective rightness. That is, SC doesn’t give us the correct account of what we objectively ought to do. Nor does it give us the correct account of what we ought to do in the sense of ought that guides our first-personal practical deliberations or informs our assessments of blame and praise.

  3. Jamie,
    Sorry. It looks like I may not have answered your question. It can seem counter-intuitive to think that Sandra should (in the sense of ‘should’ that’s most closely tied both to our first-personal practical deliberations and to our assessments of blame) perform a1.

  4. “whatever this subjective notion is, it is also a notion that SC fails to capture”
    This conclusion seems too strong to me. In the first case, we suppose that Sally has 100% confidence in her theory of the good and what we are uncertain about is the correct theory of right action given our assumption about the good. In the second case, everything is up for grabs: if Sandra still had the same confidence in her theory of the good, K wouldn’t be something she was uncertain about. Since OC itself wouldn’t seem to fare any better in the second case, but SC does fare better in the first, SC still seems to dominate OC; at least in a hypothetical sense, it seems to capture the subjective notion better.
    I wonder if both cases would be satisfied by a kind of bare subjective decision (SD) theory, which would tell you to do what maximizes expected deontic value. You might think, however, that certain normative theories will have an objective decision (OD) theory built in, which would tell you to do what that normative theory prescribes, if it is the correct normative theory. OC would seem to have that built in: given that OC is correct, you should only do what gives the best consequences, regardless of your uncertainties.
    By contrast, it’s not clear to me that SC is not just a special case of SD: if you are certain deontic value is cashed out in terms of goods, you should maximize those; if not, then you should maximize over the possibilities. If you think about SC as a special case of a parent principle, you see why it doesn’t work as well in the normative uncertainty case: a condition on the application of SC wasn’t being satisfied.

  5. Mike,
    Here’s my argument against SC-subj, which is what we get when we disambiguate the notion of rightness and interpret SC to be providing an account of subjective rightness:
    (1) According to SC-subj, an act is subjectively wrong if and only if it fails to maximize expected value. (By definition)
    (2) If SC-subj is true, then it is subjectively wrong for Sandra to perform A3 in Normative Uncertainty. (From (1))
    (3) It is not the case that it is subjectively wrong for Sandra to perform A3 in Normative Uncertainty. (Assumption)
    (4) Therefore, SC-subj is not true.
    As I understand SC-subj, it either does or doesn’t give the correct account of the necessary and sufficient conditions for subjective rightness. And my argument above is meant to show that it doesn’t. So I don’t know what it means to say that SC[-subj] “fares better” than OC[-subj]. Aren’t they both false? Can one false theory fare better than another?
    Note that what I say above is compatible with OC[-obj] being the correct account of objective rightness. It’s also compatible with the following view being the correct account of subjective rightness:
    S’s performing x is (subjectively) morally permissible if and only if, and because, there is no alternative that would produce more expected deontic value than x would.
    I take no stand on these issues. My point is only to argue that if we reject OC as the correct account of what we ought to do in the sense of ‘ought’ that guides our first-personal practical deliberations and informs our assessments of blame and praise because of its counter-intuitive implications in Mine Shafts, then we should likewise reject SC as the correct account of what we ought to do in the sense of ‘ought’ that guides our first-personal practical deliberations and informs our assessments of blame and praise because of its counter-intuitive implications in Normative Uncertainty.

  6. Doug,
    You should check M. Smith’s paper ‘Moore on the Right, the Good, and Uncertainty’ from the Horgan and Timmons Metaethics after Moore volume. Smith makes just this argument against Jackson if I remember this right.

  7. Come to think of it, Smith’s argument is about evaluative uncertainty rather than deontic uncertainty, but I believe that the arguments are intertranslatable.

  8. “according to SC, Sandra should perform a1.” – Douglas Portmore
    Shouldn’t Sally pick action 3 (a3) according to subjective consequentialism (SC)?
    If Sally is contemplating her situation of normative uncertainty, and applies SC to that problem, she would have to pick action 3 (a3) for the same reasons she would pick Gate 3 (G3), and it seems that if she applied the categorical imperative (K) to her situation of normative uncertainty, her choice in action would be indeterminate. The only time Sally would pick action 1 (a1) when faced with a situation of normative uncertainty is if she is applying objective consequentialism (OC) to that particular problem.
    The strangeness of your counter, for me, is that it requires an assumption of OC on the part of Sally when weighing the deontic value of SC versus K, whereas it would seem more consistent for Sally to apply the same normative commitment to solving the problem of normative uncertainty that she is testing for success.

  9. James,
    In Normative Uncertainty, it’s stipulated that, if Sandra performs a1, she will maximize expected value. SC holds that Sandra should do what will maximize expected value. It follows, then, that, according to SC, Sandra should perform a1. [Note that, in the example as originally posted, I had inadvertently written ‘utility’ in place of ‘value’.]
    Sandra’s performing A3 would maximize expected deontic value, not expected value. The act that maximizes expected deontic value need not be the act that maximizes expected (non-deontic) value. Suppose, for instance, that what’s of value is utility and that Sandra’s performing a1 will maximize expected utility but violate the categorical imperative.

  10. James,
    Also note that you seem to be getting your Sallys and Sandras mixed up.
    And why do you think my example (i.e., Normative Uncertainty) “requires an assumption of OC on the part of Sally when weighing the deontic value of SC versus K”? I don’t see how it does.

  11. I understand that, but deontic value itself is a measure of the consequences of Sally’s possible ethical commitment. So, when facing an ethical choice between differing ethical methods, Sally must select a means of weighing those consequences – of weighing deontic value. In the case of Normative Uncertainty (i.e., “which method for determining the right course of action is the correct one?”), different results will occur depending on the presupposed method of weighing consequences.
    This creates a paradox. How can one be both normatively uncertain and still apply a subjective norm to resolve normative uncertainty?
    (maybe that IS your point against SC?)

  12. James,
    I think that one can be uncertain about the objective norms and still use a subjective norm to guide one’s practical deliberations. So Sandra is uncertain about whether it is SC or Kantianism that gives the correct account of how someone who doesn’t suffer from any relevant normative or non-normative uncertainty should act. She could, though, still use a subjective norm to determine that she subjectively ought not to perform a1 and that she should instead perform A3. I’m assuming, then, that the correct subjective norm is one that implies that S subjectively ought not to take a substantial risk (such as a 50% risk) of doing something that’s terribly morally bad (that is, something that has a deontic value of −100 or worse) if S could instead ensure that she doesn’t do anything that’s very morally bad at all (that is, ensure that she does something with a deontic value no worse than −10).
    Does this make sense and seem reasonable?

  13. It may seem weird to think that SC could be the correct account of how someone who doesn’t suffer from any relevant normative or non-normative uncertainty should act, but just assume that the laws of physics are probabilistic as opposed to deterministic and that not knowing the outcome of some act that has no deterministic consequence doesn’t count as suffering from any relevant non-normative uncertainty.

  14. It can seem counter-intuitive to think that Sandra should (in the sense of ‘should’ that’s most closely tied both to our first-personal practical deliberations and to our assessments of blame) perform a1.

    Not to me.
    Well, let me be more precise. SC isn’t entirely intuitive in all cases; I assume that’s agreed by all parties. To the extent that SC is intuitive, just to that extent it is quite intuitive (to me) that Sandra should perform a1. Since she is unsure about this, it’s understandable, reasonable, etc., that she might in the end do A3 instead, but she should do a1.
    You are talking about the moral ‘should’, ‘ought’, and so on, right? You don’t switch over to what Sandra ought rationally to do, or anything like that?

  15. Jamie,
    Fair enough. We may just have different intuitions. But let’s make sure. Do I have the following right? You accept that there are two senses of ‘ought’ and that one is objective in that it doesn’t depend on the agent’s epistemic position and the other is subjective in that it does depend on the agent’s epistemic position. And you accept that the subjective sense is the one that guides our first-person practical deliberations and informs our assessments of blame and praise. But you deny that it is counter-intuitive to think that Sandra ought (in this subjective sense of ‘ought’) to perform a1. Is that right?
    In any case, assume that I’m talking about the moral ‘should’, ‘ought’, and so on.

  16. Thanks to Jussi, I just finished reading Michael Smith’s “Moore on the Right, the Good, and Uncertainty.” I think that Jussi is right that I’m just making the same argument against Jackson that Smith gives in that paper, only Smith does a far more brilliant job of it. Of course, Smith talks about “expected value-as-I-see-things” whereas I talk about “expected deontic value (relative to my expectations),” but the two notions seem roughly, if not wholly, inter-translatable.
    Perhaps the point on which we might disagree is the notion of ‘right’. Whereas Jackson thinks that the notion of ‘right’ that is central to ethical theory is the subjective notion, Smith thinks that it’s the objective notion, and Smith wants to distinguish sharply between the concept of right action and the concept of what we can legitimately hold agents responsible for doing given their epistemic positions. I, by contrast, tend to think that ‘right’ is ambiguous between an objective sense and a subjective sense and that the subjective sense of ‘right’ cannot be sharply distinguished from what we can legitimately hold agents responsible for doing given their epistemic positions. Like Smith, though, I think that the objective notion is more central to ethical theory. The subjective notion, on the other hand, is more central to giving an account of decision-making, the sort of account that is, as Smith puts it, “formalized — or anyway partially formalized — in decision theory.”
    In any case, I highly recommend Smith’s article to those interested in this topic.

  17. I wouldn’t put it that way.
    I don’t think “ought” has distinct ‘objective’ and ‘subjective’ senses. Rather, it has a parameter, which takes information sets as values. To put it another way, what one ought to do is relative to information (in Jackson’s model, a credence function).
    What’s odd about the “normative uncertainty” case is that it seems weird to think of the correct normative theory as part of the information base. This seems most odd when the “ought” is moral.

  18. What’s odd about the “normative uncertainty” case is that it seems weird to think of the correct normative theory as part of the information base.
    What’s so odd about it? The correct (objective) normative theory is just a theory about what degrees of moral goodness are attributable to acts with various objective features (e.g., their actual consequences). And just as we can be uncertain about what objective features a certain act has, we can be uncertain about what degrees of moral goodness are attributable to those features. To paraphrase Michael Smith, just as we can be uncertain about which acts are a means to bringing about our ends, we can be uncertain about which ends are the ones that we morally ought to have. So one way to look at it is that a moral theory is just a theory about which ends we morally ought to have. And since we can be just as uncertain about which is the correct moral theory (that is, which are the ends that we morally ought to have) as we are about which acts constitute effective means to those ends, it would be odd to treat the two asymmetrically and include one but not the other in the relevant credence function (or information base or however you want to put it).

  19. So I think I have sort of a different take on what’s deficient about OC, but I just want to get clear on something first:
    Doug, I suspect your/Jackson’s case against OC relies on imputing to it a commitment that it doesn’t have. Applying OC delivers this result: there’s a .5 chance that closing 1 is (objectively) permissible, a .5 chance that closing 2 is (objectively) permissible, and no chance that closing 3 is (objectively) permissible. I take that to completely exhaust the normative verdicts we get from applying OC. But later, in arguing against OC, you say:
    “So, according to OC, Sally should close Gate 1.”
    I’d like to know what you mean by “should” here. Do you mean “(objectively) permissible”? If so, then no, this is not what OC says. Applying OC to this case gives us only the result detailed above — .5 chance closing 1 is permissible, etc. That’s it. Do you mean “(subjectively) permissible”? Again, this is not what OC says; the theory does not even speak to subjective permissibility.
    If I may suggest a way of interpreting it: by “you should close gate 1”, you mean that it is only by closing gate 1 (or 2, I guess) that you are responding properly to OC in this situation, such that you could properly be said to “guide your actions” by OC. An improper response to a cognized norm (coming to believe that I ought not to A, and then A-ing, is the starkest example) simply doesn’t count as being guided by it. Anyway, that’s sorta what I think. Let me know if that doesn’t make sense.
    Anyway, given that reading of “you should close gate 1”, I simply deny that you should close gate 1, even if OC is true. I think that, in this case, the property of being the proper response to OC isn’t had by ANY of the three actions, nor is the property of being the improper response had by any of them. OC is, for that reason, useless in guiding your actions. To put things loosely, once we let uncertainty into the picture, norms phrased in terms of permissibility/’ought’/etc. and not some gradable feature like reason strength are almost always useless for guiding action — as useless as a norm like, say, “There are reasons of some strength or other to do A” is when we act under certainty.
    Anyway, I have more to say about this — just wanted to get your response to what I’ve written so far.

  20. In summary: on the correct view about action guidance, one is not guided by OC to do ANY of the three actions. That’s why OC is useless by itself in many cases of uncertainty, and that’s what’s wrong with OC. The thought that one would be guided by OC to close either 1 or 2 rather than 3 is the result of applying a mistaken view about action guidance.

  21. Andrew,
    Applying OC delivers this result: there’s a .5 chance that closing 1 is (objectively) permissible, a .5 chance that closing 2 is (objectively) permissible, and no chance that closing 3 is (objectively) permissible. I take that to completely exhaust the normative verdicts we get from applying OC.
    Are you keeping in mind that I explicitly said: “As a matter of fact, though, the miners are in Shaft A”? I take it that this fact, in conjunction with the various other stipulations that I made about the case, entails that, according to OC, Sally is required to close Gate 1. And when I said ‘Sally should close Gate 1’, I should have said ‘Sally is required to close Gate 1’. And, as I see it, OC is ambiguous as to whether it is talking about what Sally is objectively required or subjectively required to do.

  22. Andrew,
    I agree that OC is useless as a guide to our practical decisions. Thus, OC is clearly not the correct account of subjective rightness. It may, however, be the correct account of objective rightness.

  23. Doug — Sorry; I read that a bit hastily. Maybe I’m not quite tracking what’s going on. Here are different criticisms of OC:
    1) It delivers the wrong result re: what it’s objectively permissible to do.
    -This is clearly not what you’re saying.
    2) (Considering it now as a theory of objective permissibility): An agent guiding his actions by it will do the subjectively wrong thing.
    -I thought maybe this is what you were urging. This is what I was denying.
    3) It delivers the wrong result re: what it’s subjectively permissible to do.
    -This seems false, because it’s not a theory of subjective permissibility in the first place. (After all, it takes as its input actual value produced, rather than beliefs or credences.)
    4) You cannot guide your actions by OC in the situation described.
    -We both seem to accept this. But I accept it because I think the notion of action guidance is a normative one; to be guided by a norm is at the least to respond properly to one’s cognition of it (plus one’s beliefs re: the features that are relevant according to that norm). And I think that in the case imagined, none of closing gate 1, gate 2, or gate 3 counts as either a proper or improper response to Sally’s beliefs. But I DON’T think this is true when we act under certainty. If I’m sure A produces more value than B, I will use OC as a guide in choosing A. A is the uniquely proper response. Nor do I think it’s true when we apply other objective norms to the present case. For example, I do think there is a uniquely proper response — closing gate 3 — to the beliefs you describe Sally as having, plus her cognition of a norm like “The strength of reasons to do an action is a positive linear function of the value of the outcome produced by that action.” Sally can use this norm to guide her action of closing gate 3, even though it takes as its input the value ACTUALLY produced.
    -Relatedly, I don’t understand Jackson’s reason for saying one can’t guide one’s actions by something like OC — this “present to the mind” stuff. The actual value produced by my action is no more present to my mind when I act under utter certainty than when I act under uncertainty, and yet there’s no problem whatsoever guiding one’s actions by something like OC in cases of utter certainty.
    -So that’s my reason for pushing criticism 4) — unclear as it may be — and my understanding of Jackson’s reason for pushing it, which strikes me as missing the point. Do you have a different reason for pushing 4?
    -Also, is there another criticism of OC that you endorse that I’ve left off the list?
    Thanks for indulging me,
    -Andrew

  24. The correct (objective) normative theory is just a theory about what degrees of moral goodness are attributable to acts with various objective features (e.g., their actual consequences).

    While I agree that this is true, I think it’s very misleading. The degrees of moral goodness attributable to acts with various objective consequences already build in facts about which acts ought to be done given various information (i.e., for various values of the information parameter in ‘ought’).

    And just as we can be uncertain about what objective features a certain act has, we can be uncertain about what degrees of moral goodness are attributable to those features.

    We can be uncertain, but not, I think, “just as” we are uncertain about what the consequences will be. Exactly why or how the kinds of uncertainty are different is a hard theoretic question; that they are different is not theoretic at all.

    And since we can be just as uncertain about which is the correct moral theory (that is, which are the ends that we morally ought to have) as we are about which acts constitute effective means to those ends, it would be odd to treat the two asymmetrically and include one but not the other in the relevant credence function (or information base or however you want to put it).

    I think it’s exactly the opposite. Since the kinds of uncertainty are so different, it would be odd to treat them the same.
    Suppose someone points out that there are different theories of how to treat uncertainty. So you become uncertain about which way is right. Do you think it’s perfectly natural to proceed as if this kind of uncertainty were just the same as uncertainty about whether it will rain tomorrow? Do you think we should average the credences that the two different theories tell us to have? Maybe the average should be weighted by the credences we ought to have in each theory. (In case it’s not clear, I think this is entirely the wrong way to proceed, although I have no particular theory about the right way to proceed in the face of uncertainty about the correct theory of how one ought to proceed.)

  25. Doug,
    What’s interesting is that the subjective norm you adopt to tell you what you ought to do in Normative Uncertainty could itself be a part of a similar case. Suppose Normative Uncertainty is as you describe, but Sandra is also uncertain about whether the right next-level norm will have her minimize the chance of doing something grossly impermissible, or rather tell her to maximize the chance of doing the thing that is most permissible, or something else. Then it seems like she will need a further norm to answer this question, one she could also be uncertain about, and so on, ad infinitum. If this is so, then the only way to answer the Normative Uncertainty case is on the condition that we have no higher-level uncertainty. I see no reason why a similar move would not be possible for SC, although it seems a little trickier. But I suspect someone like Jackson would say that if you reason yourself to 100% credence in a theory of right action, SC will be the theory you should come up with. Likewise, given that you think Sandra should choose A3 in the NU case, you presumably think that if we reason ourselves to 100% credence at that level, we would come up with something like a principle that tells us to minimize impermissibility.

  26. Doug,
    Oh, ok – much clearer now. I think.
    The conclusion still doesn’t follow for me though. If Sandra is unsure of which deontic values are the right ones, SC suggests that she pick a3 – because acting in an impermissible manner is a consequence, and a3 advertises the best expected return given Sandra’s uncertainty. The fact that a1 happens to be the correct gate (objectively), or that a1 is what would be chosen if Sandra wasn’t uncertain about SC vs. K, doesn’t matter for the same reason that it doesn’t matter in the shafts case. That is, because of her uncertainty.
    That makes sense, doesn’t it? Maybe. Yeah, I think it does. On a certain level, it seems that when Sandra selects a3 she is rejecting SC, but that isn’t the case. The whole point of the shafts case is to show that consequences look different from behind a wall of uncertainty.
    If, for example, we’re using SC to judge Sandra’s behavior, we might judge her harshly unless we know that she was uncertain about which ethical theory to commit to. Armed with that knowledge, and our own commitment to SC, we would judge a3 the appropriate response. Would we not?
    Granted, knowledge of uncertainty seems like a difficult thing to acquire, but there you have it.

  27. “Consider also that we would not blame her if she were to perform A3, but that we would blame her if she were to perform a1.”
    Doug, can you say more about this claim? Who are “we”? Is the idea that no reasonable person would blame her? In Mine Shafts, it seems to be implicit that Sally’s ignorance about which shaft the miners are in is not due to any failure on her part. We might well blame her for closing gate 3 if she should have known where the miners were but didn’t. In Normative Uncertainty, it does not seem entirely obvious to me that Sandra is not responsible for her own failure to know what theory of morality is true. (We might need to know more about Sandra to judge this.) And even if she is not, it does not seem obvious to me that failure to know this means that she should not be blamed for failing to do a1. It might mean that some negative reaction to her would be unjustified, but perhaps this is a judgment about her character. Can’t we blame people for actions that they performed in obedience to a moral view that they hold but that we reject, while simultaneously admiring their commitment to acting on the moral principles to which they (wrongly, in our view) subscribe?

  28. Dear all,
    Thanks for all the wonderful comments. I respond to each of you individually below.
    Andrew,
    I agreed that OC is not helpful in guiding our first-person practical decisions (at least, not in the sorts of cases described above), but I never said that this was a criticism of OC. Whether it is or not depends, I think, on whether we consider providing a guide to our decision-making to be something that an adequate moral theory must do. Jackson does. I’m not so sure. As to whether Jackson has missed the point, I’ll need to think more about what you said.
    Jamie,
    I should back away from the claim that it would be odd to treat the two asymmetrically. You’re correct that the two are different and so it’s possible that this difference warrants their being treated differently. What seems odd to me, though, is to include the one but not the other in our account of subjective rightness. I’m assuming that there’s a very tight connection between doing something subjectively wrong and being blameworthy. And I’m assuming that not only can one’s non-normative uncertainty affect whether one is blameworthy, but so too can one’s normative uncertainty affect whether one is blameworthy. The fact that normative uncertainty can affect whether one is blameworthy is supposed to be illustrated by Normative Uncertainty. My intuition is that, given Sandra’s normative uncertainty, Sandra would be blameworthy for performing either a1 or a2, but not necessarily for performing A3. So if both normative and non-normative uncertainty can affect whether we are blameworthy and if subjective rightness is so closely tied to blameworthiness, it would be odd to include one but not the other in our account of subjective rightness.
    Mike,
    I’m not following you. You say, “But I suspect someone like Jackson would say that if you reason yourself to 100% credence in a theory of right action, SC will be the theory you should come up with.”
    SC provides necessary and sufficient conditions, and it is implicitly quantified over all acts and all agents. So it needs to give the correct answers in all cases. Now, it implies that Sandra should perform a1 in Normative Uncertainty, but this is, I believe, the incorrect answer. Therefore, SC is false. Or so I argue.
    Are you denying any of this? Are you saying that if you reason yourself to 100% credence in OC, SC will be the correct theory of subjective rightness (where SC is quantified over all acts and all agents)? Are you saying if you reason yourself to 100% credence in Kantianism, SC will be the correct theory of subjective rightness? Perhaps, I’m not understanding what you mean by “the theory you should come up with.”
    James,
    You write, “The conclusion still doesn’t follow for me though.”
    Are you referring to the conclusion in this argument, which I gave in response to your initial comment?
    (P1) Sandra will maximize expected value if and only if she performs a1. (By the stipulations of the case)
    (P2) SC holds that, for all S, S should do whatever will maximize expected value. (By definition)
    Therefore, (C) SC holds that Sandra should perform a1.
    Are you saying that (C) doesn’t follow from the conjunction of (P1) and (P2)?
    You also write:
    If Sandra is unsure of which deontic values are the right ones, SC suggests that she pick a3 – because acting in an impermissible manner is a consequence, and a3 advertises the best expected return given Sandra’s uncertainty.
    A3 offers the best expected return in terms of deontic value. But it doesn’t offer the best expected return in terms of value.
    According to SC, S’s performing x is morally permissible if and only if, and because, there is no alternative that would produce more expected value than x would. Do you accept that SC so defined is distinct from what we might call deontic consequentialism?
    DC: S’s performing x is (subjectively) morally permissible if and only if, and because, there is no alternative that would produce more expected (objective) deontic value than x would.
    I admit that DC implies that Sandra should perform A3, but I thought the claim that you were making was that SC implies that Sandra should perform a1.
    Dale,
    That’s right. I’m assuming that just as Sally’s credences, in Mine Shafts, regarding the possible locations of the miners are not due to any failure on her part, Sandra’s credences, in Normative Uncertainty, regarding the possibly correct moral theories are not due to any failure on her part. This is, I admit, controversial. But the idea that reasonable people can have different credences about different moral theories seems very plausible to me. My credence with regard to hedonistic act-utilitarianism has changed over the years, but it hasn’t changed as a result of my exercising various rational capacities that I failed to exercise in the past or of my exercising them in a more responsible fashion than I had in the past. The change, it seems to me, has resulted from my exposure to new evidence: e.g., exposure to new arguments for and against hedonistic act-utilitarianism. These were arguments that I hadn’t thought about before, not because I didn’t make a faithful effort to entertain all possible arguments for and against the view, but because I’m simply not smart enough to come up with those arguments myself. This, I claim, is not my fault. And I shouldn’t be blamed for my cognitive limitations.
    So the ‘we’ refers to those of us who are familiar with all the relevant details of the case. And the idea is that no reasonable person familiar with the pertinent details could rightfully blame Sandra for performing A3 given the stipulations of the case and the implicit assumption that Sandra’s credences, in Normative Uncertainty, regarding the possibly correct moral theories are not due to any failure on her part.
    Now I take it that you may want to reject this implicit assumption. You might argue that Sandra could not possibly come to have a 50% credence in Kantianism (or SC) without having made some unreasonable mistake in the exercise of her rational capacities and that this is a mistake for which she can be faulted. But if you do, I would want to hear an argument for this position.
    You also ask,
    Can’t we blame people for actions that they performed in obedience to a moral view that they hold but that we reject, while simultaneously admiring their commitment to acting on the moral principles to which they (wrongly, in our view) subscribe?
    Yes. But we can rightfully blame such people for their actions only if either their credence in their moral view is unreasonable or their actions are unreasonable given the reasonable credences that they have.

  29. Doug-
    Thanks, that is helpful. It seems to me that the arguments are disanalogous in one way that might turn out to be consequential. The argument in Mine Shafts turns on the implicit assumption that there can be no blameless wrongdoing; Sally is blameless for what she did, therefore what she did was not wrong. Since OC plus the facts entails that it was wrong, we must reject OC. On the other hand, in Normative Uncertainty you seem to be assuming implicitly that blameless wrongdoing is possible: SC is true, SC plus the information available to Sandra entails that for her to do A3 is wrong, yet she is blameless for doing A3.

  30. Doug-
    Or was what I described as a disanalogy between the arguments really just another way of stating your original point? Normative Uncertainty shows that blameless wrongdoing is possible, so Mine Shafts has no force against OC?

  31. Dale,
    I disagree that there is this disanalogy. In Normative Uncertainty, SC and Kantianism are meant to be theories of objective rightness (theories about what it would be right for an all-knowing being to do). So, in both cases, I’m assuming that it is possible that there can be blameless objective wrongdoing, but that, except in those instances in which certain non-epistemic excusing conditions are present, there can be no blameless subjective wrongdoing.

  32. Doug,
    Thanks for your patience btw (I’m a first year philosophy student, so this is all still somewhat cumbersome for me).
    Confusion over Conclusions
    The conclusion that isn’t working for me is that Normative Uncertainty suggests we should reject SC. As you’ve guessed, I’m not seeing deontic consequences as significantly distinct from subjective consequences, any more than if Sally’s shafts were occupied by women instead of men.
    “I admit that DC implies that Sandra should perform A3, but I thought the claim that you were making was that SC implies that Sandra should perform a1.” – Doug
    No no. I think SC and DC both imply that Sandra should perform a3 – given that Sandra is uncertain about her ethical options.
    If Sandra applies SC to Normative Uncertainty, she gets the same result (a3) as if she applies DC to Normative Uncertainty. Maybe I should be more clear about what I mean by “applies”. I’m drawing a distinction between, let’s call it meta-SC, and the SC that Sandra is weighing against K, where meta-SC is operating on the same level as DC. I guess I’m thinking of them as functions. Something like this:
    SC(Shafts) -> a3
    DC(Normative Uncertainty) -> a3
    meta-SC(Normative Uncertainty) -> a3
    I think Mike is saying something similar.
    I would draw the conclusion that SC produces different results depending on where the wall of uncertainty is, but isn’t that the point that shafts is trying to make?
    On the other hand, I’m tempted to say I agree with you because the normative uncertainty case does suggest that SC (rather than meta-SC) is insufficient. That’s why it’s been coming across as a paradox for me. Normative uncertainty seems to both reject and confirm SC.

  33. James,
    I take it that you’re having a hard time seeing how SC and DC are distinct. Clearly, they’re distinct if and only if ‘value’ and ‘deontic value’ are distinct concepts. So let me try to explain why those two concepts are distinct. One difference is that their bearers are different. The bearer of value is a state of affairs. A state of affairs can have more or less value. The bearer of deontic value is an act (not a state of affairs). Some permissible acts are morally better than others. And some impermissible acts are morally worse than others. Let’s consider an example. Let’s suppose that hedonism is true. In that case, one state of affairs is better than another if and only if it contains more aggregate pleasure. And let’s assume that SC is true. In that case, one act is morally worse (has less deontic value) than another if and only if the amount of expected goodness it produces is further from the optimal amount than the amount the other produces is. For instance, if the optimal amount of expected goodness that S can produce is 100 units and x produces 80 units and y produces 60 units, then y is morally worse than x. Whereas the deontic value of y is −40, the deontic value of x is only −20. Nevertheless, it could be that the state of affairs in which S performs y is better than the state of affairs in which S performs x, because, say, y produces more aggregate pleasure than x.
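
    (Here is a minimal sketch of the calculation just described; the code is my illustration, assuming, with the example, that an act’s deontic value is its shortfall from the optimal amount of expected goodness:)

        # Deontic value as shortfall from the optimal expected goodness
        OPTIMAL = 100  # the optimal amount of expected goodness S can produce

        def deontic_value(expected_goodness):
            return expected_goodness - OPTIMAL

        print(deontic_value(80))  # x: -20
        print(deontic_value(60))  # y: -40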

  34. Doug,
    The distinction between bearers helps a lot. Thanks. That’s more or less what I thought. I guess I just don’t buy the distinction. If Sandra is considering which moral theory to pursue, the deontic value *is* the state of affairs she is deliberating on. In essence, when considering deontic value, Sandra is acting as a consequentialist. The “action” she is deliberating on is “which moral theory should I adopt”, and the state of affairs is the deontic value of each possible course of action.
    I need to mull it all over for a while (and I think I’ve made you run around too much already). Thanks for the post and your many comments though – my head is still swimming.

  35. Doug,
    Apologies, I ran a few things together rather quickly. Let me just spell out my main point:
    In Normative Uncertainty it looks like you have in mind a higher-level norm (in this case a norm about deontic values) that will give us the intuitively right answer. In your response to James you mention deontic consequentialism (DC); we could take this as the norm that prescribes the intuitively right answer in NU. I also take it that someone could be uncertain about whether DC is the correct norm to act on when one is uncertain about a particular set of theories of rightness. Sandra might think there is some chance that DC is correct, and some chance that, for example, deontic perfectionism (DP) is correct:
    DP: S’s performing x is (subjectively) morally permissible if and only if, and because, there is no alternative that could produce more (objective) deontic value than x would.
    DP is a view that recommends taking the action that potentially leads to the most deontic value, and so would recommend choosing either a1 or a2 (but not A3). So, Sandra’s revised decision list would be:
    W(a1, SC, DC) -> DV(a1) = 0; EV=-50
    W(a1, SC, DP) -> DV(a1) = 0; PV=+10
    W(a1, K, DC) -> DV(a1) = −100; EV=-50
    W(a1, K, DP) -> DV(a1) = −100; PV=+10
    W(a2, SC, DC) -> DV(a2) = −100; EV=-50
    W(a2, SC, DP) -> DV(a2) = −100; PV=+10
    W(a2, K, DC) -> DV(a2) = 0; EV=-50
    W(a2, K, DP) -> DV(a2) = 0; PV=+10
    W(A3, SC, DC) -> DV(A3) = −10; EV=-10
    W(A3, SC, DP) -> DV(A3) = −10; PV=0
    W(A3, K, DC) -> DV(A3) = −10; EV=-10
    W(A3, K, DP) -> DV(A3) = −10; PV=0
    where EV = expected deontic value and PV = perfectionist value. Suppose Sandra thinks it is much more likely that DP is true than that DC is true. DC will still recommend A3, but DP will recommend a1 and a2, and given her much higher credence in DP, if you wanted to make an expected value calculation across EV and PV, you can imagine there being greater expected combined value at this level for both a1 and a2 (but not A3).
    So, if you think Sandra remains blameless in being uncertain about these higher-level norms, you will need an even higher norm to mediate her decision here, and if you find it intuitive to appeal to an expected-value-maximizing norm on this level (as you did with the lower level), this will tell us that DC gets it wrong. And if DC gets it wrong in this case, why should we be following it in Normative Uncertainty?
    Does this make sense so far (forgive the quick and dirty presentation)? In fact, it seems that any norm that is a function from particular values (e.g. deontic values) to actions will fall prey to an uncertainty argument, since one could be uncertain between that norm and another norm that uses different values. I think this points to the form of a theory of subjective rightness that we might all be happy with, but I’ll leave it at that for now.
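
    (To make the last step concrete, here is a minimal sketch; the .9/.1 credences are illustrative numbers of my own, since Mike says only that DP is much more likely than DC:)

        # Combined expected value across the higher-level norms DC and DP
        cred = {'DC': 0.1, 'DP': 0.9}  # hypothetical credences
        values = {  # EV under DC and PV under DP, from Mike's decision list
            'a1': {'DC': -50, 'DP': 10},
            'a2': {'DC': -50, 'DP': 10},
            'A3': {'DC': -10, 'DP': 0},
        }
        for act, v in values.items():
            combined = sum(cred[norm] * v[norm] for norm in cred)
            print(act, combined)  # a1: 4.0, a2: 4.0, A3: -1.0

    On these numbers, a1 and a2 each have greater expected combined value than A3, which is the result Mike describes.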

  36. I think Mike is on the right track here.
    First of all I want to clarify just what we mean when we talk about SC.
    “SC: S’s performing x is morally permissible if and only if, and because, there is no alternative that would produce more expected value than x would.”
    I’m going to assume two things. Firstly, that the “value” referred to is “consequentialist value” of some kind. This seems to be implicit in the discussion you have had until now.
    Secondly, the “expected value” is the value that the agent expects to obtain.
    I think the key move you make is switching to “deontic value” when assessing situations in Normative Uncertainty (NU).
    The natural response to make to your situation is the following:
    “Suppose SC is true. Then the right thing to do is to maximise the expected value. Sandra expects the most value to result from A3, therefore Sandra should do A3. This agrees with our intuitions, so there is no problem”. I think this is the kind of argument that James is appealing to.
    The problem is that when you assessed the different situations in (NU), you did so with respect to their deontic value. This is quite natural: we are contrasting SC with Kantianism, and we need to capture the values that both of them bring to the table. Actions that produce good consequences can straightforwardly be given high deontic values under SC.
    However, SC itself is insensitive to deontic value (this seems to be required for your premiss P1, Doug), and hence fails to follow our intuitions, which suggest that we should give some weight to the Kantian course of action. But this then reduces the whole argument to a claim that SC dismisses deontic value, whereas we intuitively do not think that should be done. That is, the problem lies with the consequentialist nature of SC, rather than its subjective qualities. (Interestingly, Kantianism fares just as badly as SC if your argument goes through.)
    However, SC was introduced in the context of consequentialism, so perhaps we should not be surprised that SC does not play nicely with non-consequentialist intuitions!

  37. Mike,
    I don’t think that uncertainty about the correct account of what agents subjectively ought to do (that is, uncertainty about what the correct account of subjective rightness is) affects what one subjectively ought to do, because the correct account of what one subjectively ought to do just specifies what one ought to do given whatever uncertainty one has. So if DC is true, then one subjectively ought to maximize expected deontic value even if one is uncertain about whether it is DC or DP that gives the correct account of what agents subjectively ought to do.
    By the way, I do not claim or deny that DC is the correct account of what agents subjectively ought to do, although I do claim that it gets the right answer in Normative Uncertainty. My point is not to offer a positive account of subjective rightness, but only to argue that SC is no better than OC as an account of subjective rightness.

  38. Hi, Doug.
    Interesting case. I’m mostly with Jamie, but want to introduce a couple of distinctions that may clarify a point that I think he is trying to make (or, at any rate, that I want to make).
    On the view that I like, ‘ought’s have a parameter that *can*, and does, take a body of information as a value. (Looks like I’m with Jamie here, which is news to me. Yay!) In these cases, the proposition itself is relative to a body of information. On the view I defend, these correspond to the so-called ‘subjective’ “ought”s.
    But that same parameter can also take circumstances as a value; the resulting proposition is relative to circumstances, and these correspond to the so-called ‘objective’ “ought”s. (From what Jamie says above, I take it he doesn’t think this.) On my view, these two uses of “ought” aren’t in competition with one another; there’s no need to pick.
    Also, on the view I like, there is a separate parameter that, when “ought”s are used evaluatively, takes a standard of some kind as a value. We can wonder how a particular standard gets selected as a value at a context, but the important point is that this itself is not information-sensitive, at least not in the sense of reflecting our uncertainty as to which standard is correct. Though there may be cases where information is relevant for determining which standard is selected, uncertainty with respect to which standard is the ‘correct’ one is NOT built into the proposition itself.
    Does this mean that uncertainty with respect to which standard is correct never arises? Or that uncertainty about which standard is correct doesn’t give rise to its own puzzles? No, it doesn’t. I think the way to represent this is as uncertainty as to which “ought” statements are true. That’s Sandra’s problem. She’s uncertain as to whether SC or Kantianism is true, so she’s uncertain as to which ‘ought’ proposition is true. And that looks to me like a straightforward case of uncertainty about what to believe. If that’s right, then we should keep distinct three different ways we might assess what Sandra does: 1) Does she do as she ought, given the true moral theory and given her information? 2) Does she do as she ought, given the true moral theory and given the circumstances? and 3) Does she believe as she ought, given her information about which moral theory is likely to be correct? It looks to me, Doug, like it’s the last assessment that is at issue in your puzzle, and that’s why I think Jamie is suggesting that the ought-evaluation raised by your puzzle isn’t a moral-ought evaluation.

  39. Janice (Dowell),
    Suppose that Sandra performs a1. I would claim that she is blameworthy for doing so regardless of what her other beliefs are (note that the example presupposes that she has certain beliefs/credences, viz., the ones that she is stipulated to have). Do you deny that she would be blameworthy for performing a1?
    Now I don’t think that this assessment of blameworthiness is your assessment number 3. And I would think her blameworthy for performing a1 regardless of what is, or what I believe to be, the true moral theory. So I don’t think that this is your assessment number 1 or your assessment number 2.
    It seems to me that agents can be blameworthy for failing to act in certain ways given that it was reasonable to expect them to act in those ways given their relative credences with respect to various normative facts — facts about what other facts provide objective reasons to perform acts.

  40. Janice (if I may),
    I just want to get a better feel for your view. So:
    1) When you say that the practical ought is relative to an evaluative standard, do you mean the TRUE evaluative standard? If it doesn’t have to be that, then why can’t it be relative to the evaluative standards an agent believes in, or that the information suggests is true?
    2) Why, if “ought” can be relative to information about the way the world is non-normatively, can’t it be relative to information (or for that matter, an agent’s credences) about the way the world is normatively? I mean, one motivation for thinking there’s a normative-information-relative “ought” is that we seem to USE that “ought” when we ask ourselves what we ought to do in cases of normative uncertainty — whether to support abortion rights, whether to interfere with someone’s autonomy for his own benefit, etc.

  41. Hi, Doug,
    We can and do blame agents for different sorts of failure, not all of which involve moral evaluations. I can blame someone for doing something stupid, because they acted on a belief all of their evidence spoke against. That’s not a moral evaluation of their act. There’s no ‘tight connection’ between that sort of blameworthiness and subjective *rightness*. If you want to insist that the attitude that I have to someone who acts stupidly isn’t blame, then I don’t have the intuition that Sandra is to blame if she performs a1 rather than an act of type A3, on the assumption of the truth of SC.
    In so far as I have an intuition that Sandra is to be faulted for performing a1 or a2, it’s on the grounds that she’s acted contrary to her evidence (or credences). So, if Sandra believes SC, despite her evidence, and so performs a1, I *may* say that what she’s done is unreasonable because she doesn’t believe as she ought, given her evidence, and so she doesn’t act as she ought, given her evidence. Or, if she herself doesn’t believe her evidence supports SC, but does what she knows to be the SC-supported action anyway, I may say that she doesn’t do as she ought, given her beliefs. But neither of those ‘ought’s is moral; it’s the ‘ought’ of rational action. The standard against which I measure the degree of ideality her action possesses is the standard of practical rationality.
    So, I still want to keep the three different ‘ought’s above distinct and add an information-sensitive ‘ought’ of practical rationality.
    Andrew,
    #2: Yes, the relevant information can be normative information. ‘Ought’ propositions are cheap. There are at least as many possible ones as there are possible combinations of values for the two parameters. What I am challenging is the assumption that the resulting propositions are always distinctively moral ‘ought’s. It may be that some of them are. I’m not sure. But not all of them are, and what I’m not now seeing is that “Sandra ought to perform an act of Type A3” in the above scenario is both true and clearly moral.
    So, I think there can be normative-information-relative ‘ought’s, but I haven’t yet found one that looks like a clear case of a moral ‘ought’. Your two examples look like ‘ought’s of practical rationality: e.g. ‘given my evidence and given my ends (e.g. to not do something horribly morally wrong), what government policy ought I support?’ That’s not a moral use of ‘ought’.
    #1: So, yes, non-true standards can be parameter values. Context determines which standard is selected, and non-true standards can get selected. So, if I believe SC and SC is false, my use of “ought” *may* (but needn’t) take SC as a value for the standard parameter.

  42. …Or, if she herself doesn’t believe her evidence supports SC, but does what she knows to be the SC-supported action anyway, I may say that she doesn’t do as she ought, given her beliefs. But neither of those ‘ought’s is moral; each is the ‘ought’ of rational action.
    So you think that when Sandra performs a1 (the act that SC supports) despite thinking that there’s only a 50% chance that SC is correct, her action is irrational but not immoral. I don’t have that intuition. I think that her action is immoral (and, perhaps, also irrational). If I were adversely affected by her action, I think that it would be appropriate for me to resent her for performing a1. And I think that she would appropriately feel guilty for having performed a1. I don’t think that resentment and guilt are appropriate in the case in which an agent acts morally but irrationally. So do you deny that resentment on my part and guilt on Sandra’s part are appropriate? Or do you think that resentment and guilt are appropriate when agents act irrationally but not immorally?

  43. I think it is not surprising that, in some cases, we may resent people for doing things that were stupid but not immoral, if those things turn out badly for us. Nor would it be surprising for us to feel guilty for performing such an act, were it to turn out badly for someone else. (Maybe a case of accidentally hurting someone’s feelings by doing something that is morally permissible for you to do, but not that important to you, would be an example.) If you have the further intuition that it is appropriate for Sandra to feel moral guilt for performing a1 in the case in which it is stipulated that SC is the true moral theory, and that we are warranted in feeling moral resentment when she does so under the same stipulation, then your intuitions are not the same as mine.

  44. Okay, your intuitions are not the same as mine. I have the intuition that it is appropriate for Sandra to feel moral guilt for performing a1 in Normative Uncertainty. But let me point out that when I stipulated that, in Normative Uncertainty, we are to assume that SC is the “true moral theory,” I was careful to state that it is true in the sense that it gives the correct account of what an all-knowing being should do. The idea that a being that doesn’t know all the relevant facts could be morally blameworthy for doing what an all-knowing being would be morally required to do is not, I think, surprising.

  45. Doug,
    The abstract nature of the example (Normative Uncertainty) makes brute intuitions about it very unreliable. The outcomes are all numerical values, and utilitarianism is stipulated to be the true ‘objective’ theory, and then we’re asked for intuitions about right action. This seems like a bad method.
    Suppose Huck is not certain what the correct moral theory is, but he’s pretty sure it’s what his aunt taught him. If his aunt is right, then, of course, aiding a fugitive slave is a very grave wrong. He has a glimmer of an idea that maybe helping Jim escape is actually the objectively right thing to do, but upon reflection he thinks that’s unlikely. Even so, he decides to help Jim escape.
    Do you actually have the intuition, about this case, that Huck acts wrongly? He’s blameworthy, you resent his actions on behalf of Jim’s owner?
    Here are some of my intuitions.
    I think it’s very unsurprising if Huck feels guilty. I admire him for feeling guilty. I think what he’s done is not guilt-worthy or blame-worthy.

  46. Jamie,
    I don’t think that the Huck case is a good case to test our intuitions on this matter. In this case, you say that “Huck is not certain what the correct moral theory is, but he’s pretty sure it’s what his aunt taught him.” I think that, in this case, there’s probably a big difference between how likely Huck thinks it is that it’s wrong to aid a fugitive slave and what, given his evidence, is the subjective likelihood that it’s wrong to aid a fugitive slave. This seems to be a case where Huck is responding to evidence at an emotional level that he doesn’t yet recognize as being evidence at the cognitive level. Thus, his cognitive assessments of what his evidence supports seem to be way off the mark.
    So I don’t have the intuition that he is blameworthy, but that’s because I think that, given his evidence for there being decisive moral reasons to aid Jim (reasons which he was clearly capable of recognizing and responding to, even if not in a reflective manner), this is a case where, relative to his evidence, the expected deontic value of his aiding Jim is much greater than the expected deontic value of his refusing to aid Jim. And, of course, I don’t “resent his actions on behalf of Jim’s owner.” I think that resentment (as opposed to indignation) would be appropriate only where I had been harmed by an agent’s subjectively wrong actions. In any case, I don’t feel indignant either, because I don’t think that what Huck did was subjectively wrong. It seems to me that subjective wrongness depends on what the evidence is — at least, it depends on what evidence the agent is capable of recognizing and responding to.
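    To put this in terms of my DV notation, with numbers that are purely illustrative stipulations on my part (nothing in your case fixes them): suppose that Huck’s evidence warrants a credence of .9 that there are decisive moral reasons to aid Jim and a credence of .1 that his aunt’s morality is correct; suppose, too, that refusing to aid is a grave wrong (DV = -100) on the first hypothesis, that aiding is a grave wrong (DV = -100) on the second, and that the permissible option has, in each case, a DV of 0. Writing ‘EDV’ for expected deontic value relative to the agent’s evidence:
    EDV(aiding Jim) = (.9)(0) + (.1)(-100) = -10
    EDV(refusing to aid) = (.9)(-100) + (.1)(0) = -90
    So, relative to his evidence, aiding comes out far better, whatever Huck’s own cognitive assessment of that evidence happens to be.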
    But, fair enough, I admit that my example is pretty abstract, although I’m not sure why that makes our intuitions about it unreliable. I just have the intuition that in cases where the agent’s evidence supports great uncertainty, the agent shouldn’t take the chance of doing a very grave wrong if she can instead perform a relatively modest wrong without risking the commission of a very grave wrong. I could fill in the example with details. But I actually suspect that adding details might make the intuitions less reliable, because then we will start focusing on many inessential features of the detailed case.
    Do you have any reason for thinking that my intuitions in my abstract case are unreliable?

  47. Hm. Could you give an example of evidence for a moral theory? I thought testimonial evidence would be evidence, and it’s not clear to me what else is, in this case.
    Here are some reasons for thinking that intuitions (yours, anyone’s) are unreliable here. First, you have stipulated that utilitarianism is the true theory. I doubt that we can have good intuitions that are conditioned on normatively implausible suppositions. (This is related to imaginative resistance.) Second, related, your examples involve some worlds in which K is true and some in which SC is true. You call these ‘possible worlds’, but it’s pretty dubious that a theory like K or SC could be contingent, so I suspect the worlds are impossible worlds. Counterpossible reasoning is sometimes useful, but it’s not a good idea to rely on intuitions about counterpossible conditionals.
    Third, the assumption that deontic value comes in degrees is very substantial, and the further (unstated) assumption that the deontic value of a prospect is its expected deontic value is more so, and the idea that there are measures that are comparable across different theories is even more so. Each of these steps builds more into the background, and I’m skeptical that our intuitions take account of what is stipulated to be true. (Just speaking for my own intuitions, I think I can manage to get an intuitive grasp on the first step, but I have no clear idea of what I’m assuming in the second and third steps.)
    But also, in general, I just don’t trust intuitions about such abstractly characterized examples. Suppose you could create more value in the world by doing something under a maxim that you couldn’t at the same time will as a universal law of nature — would that be intuitively the right thing to do? Yeah, huh.
    I think lots of people share my distrust, but I’m not going to give any theoretical defense of it.

  48. I’m joining this late, but reviewing the comments so far, I think I most agree with Mike’s view that a generalized Subjective Decision theory (SD) would advocate A3 in Normative Uncertainty, as well as A3 in Mine Shafts, SC being a special case of SD. Doug, you responded to this not by discussing SD but by discussing something called “SC-subj”; I couldn’t tell whether by that you intended to be describing something like SD or something else, so it doesn’t appear to me that this point has been responded to.
    I also like James Allen’s suggestion that the value/deontic value distinction is untenable. Doug responded to this by saying that “The bearer of value is a state of affairs. A state of affairs can have more or less value. The bearer of deontic value is an act (not a state of affairs).” But states of affairs can be described as those in which some act has been done, or as resulting from some act. This is true even if all future causal results of, say, act A and act B are identical, for we can speak of a state of affairs across a range of time, starting with the performance of A and B, and so the “state of affairs” resulting from each is different, even though all later time-slice “states” of the universe are identical. So unless the bare term “value” is restricted in some way so that it is not, in fact, value-all-things-considered, it must *include* deontic value. Then we may ask whether deontic value must also mean value-all-things-considered. If not, then it is something less significant than value, and we can ignore it, since “value” includes it and other considerations as well; if it does, the two are identical. Or if “value” is restricted and “deontic” value is value-all-things-considered, then again, SD (generalized SC) tells us to pick A3.
    I quite agree that the relevant kind of subjective expectation of value is the one fixed by one’s evidence (not by one’s beliefs), as Doug argues in the Huck Finn case. I simply add that the evidence for SC is, I think, fairly massive, when this is properly defined. Hence, in any but very convoluted cases, SD immediately leads to SC, which is why SC is a very interesting theory, even though in the strictest sense it is not universally correct, as I suggested several months ago in Doug’s earlier post attacking SC.
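    For concreteness, here is one natural way of spelling out the SD I have in mind. This is my own gloss, and it assumes (what Jamie questions above) both that deontic value comes in degrees and that it is comparable across theories:
    SD: S’s performing x is morally permissible if and only if, and because, there is no alternative that would produce more expected deontic value than x would, where the expected deontic value of x is the sum, over moral theories T, of Cr(T) x DV-T(x), Cr(T) being the credence in T warranted by S’s evidence and DV-T(x) being the deontic value of x according to T.
    If Cr(SC) = 1, maximizing expected deontic value just amounts to doing what SC deems best, so SD’s verdicts collapse into SC’s; that is the sense in which SC is a special case of SD. And where, as in Normative Uncertainty, the credences are split between SC and K, the calculation favors an act of Type A3.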
