Dear Soupers,

Do you enjoy puzzles? Yeah? Well then, let me share one with you.  John Basl (Northeastern University) and I have had some fruitful conversations about it; and we have some views about how to address it (and some views about how not to); but in the spirit of collective inquiry and intellectual theft let me take this opportunity to solicit your initial responses.

The puzzle might be construed in terms of either rationality or theoretical justification, but it is roughly as follows:

 Why are we permitted to revise our moral/normative/evaluative beliefs in light of non-moral beliefs but not vice versa?

Indeed, while it’s clear we are often guilty of sub-consciously shaping the facts to fit our evaluative commitments (e.g. the powerful correlations between political ideology, “climate skepticism,” 9/11 conspiracy theories, and beliefs about the president’s religion and birthplace), we all disavow this as a proper way to form our non-moral beliefs. As obvious as this may seem, the puzzle is how best to explain why this is so and then to sort out what the implications may be for meta-ethics, moral epistemology, and even epistemology more generally…


A related illustration and way of framing the issue might help get your intellectual juices flowing:

While there is significant doubt about the possibility of deriving a normative conclusion from purely descriptive premises, deriving an is from an ought seems relatively straightforward:

 

Ought-To-Is

A. Act utilitarianism is true.

B. It is always morally wrong to kill an innocent child.

Therefore,

C. It never maximizes utility to kill an innocent child.

 

Though C is logically entailed by A and B, it also seems clear that one would be making some serious mistake were one to infer C from A and B. Imagine a sheltered scientist or an isolated hermit who decides to give up direct empirical investigation and instead turns to learning about the empirical world via a study of morality. For example, he might appeal to powerful intuitive responses to killing innocent children, and then appeal to what he takes to be unique and compelling theoretical virtues of act utilitarianism, and thus acquire some justification for both A and B. So does he thereby have some (defeasible) justification for C? If not, why not? What exactly is wrong with our scientist/hermit’s research program?

So here are some specific questions you might use to frame your response.

1. Can we agree these inferences are problematic?

2. If so, to what extent and why?

a. Is there a general problem with moral to non-moral inferences?

b. Is it a unique problem with respect to the moral?

 

 I’m excited to hear what you have to say, and please say hello and introduce yourself to John when he chimes in during the exchange.

-Best,

Christian (and John) 

 

40 Replies to “From Ought to Is”

  1. In Ought-to-Is, I think that the only reason that it may seem that there must be some serious mistake in believing C on the basis of A and B is that it may seem that the only way one could be justified in believing B is on the basis of A and C. But I doubt that that’s true. Perhaps, you would be justified in believing B (and A) on the basis of testimonial evidence. And, in that case, I don’t see that there would be any mistake in believing (or inferring) C on the basis of (or from) A and B.

  2. I think the case, insofar as it is tugging at intuitions, might be playing on an ambiguity in premise B. In context, the claim that “It is always wrong to kill an innocent child” might be read in two ways. First, it might be read as containing an implicit prediction – the “always” here is a temporal always. Interpreted this way, it means something like, “in our world, the calculations will never work out such that it is not wrong to kill an innocent child.” On this reading, the conclusion is just a restatement of premise B, and the justificatory question is whether the prediction is a good one.
    However, your discussion of our sheltered scientist’s motives suggests to me a second reading: the “always” here is an indication of necessity. Thus, the claim reads as something like, “it is necessarily (or, perhaps, categorically) wrong to kill an innocent child.” Note that on this reading, premises A and B are inconsistent with each other, because Act Utilitarianism rejects such claims of necessity. It would be surprising if an epistemological theory called an inference from inconsistent premises justified, wouldn’t it?
    So, to wrap up, on the reading of B that makes a claim of justification plausible, the conclusion is already implicit in the premise, which contains an empirical, and not only a moral claim. On the second reading – I think the more natural one – of premise B, C follows from inconsistent premises, and so is not justified (everything follows from a contradiction, after all). What do you think?

  3. I also think the glitch is in B in your example. If we were actually confident of B, then the inference would go through, and that’s fine. But (I think this amounts to Doug’s point) B isn’t just a normative proposition, it’s also a non-normative one, and most of the ways we’d establish it would go through C.
    That being said, we might think of ways to get at it otherwise. For example, suppose my best understanding of the balance of reasons in reflective equilibrium leads to A. And suppose that, as a Bayesian, I’ve updated my beliefs about B from my epistemic peers. (And for some bizarre reason my epistemic peers don’t have beliefs about A or C.) Then why not say that I have a justified belief in C too? It doesn’t seem all that counterintuitive.

  4. So I forgot to answer your questions.
    Regarding question 1, I think that we can agree that the scientist/hermit’s research program is faulty, for he is drawing empirical conclusions without having any (or, at least, adequate) empirical evidence for those conclusions. But whether this shows that these inferences are always problematic depends on whether there is any way for one to know B except on the basis of both some a priori knowledge about which empirical properties are the fundamental wrong-making ones and some empirical knowledge about which acts in our actual world have these empirical properties. And my thought was that perhaps there is: that is, perhaps we can acquire such knowledge by way of testimony. Now I don’t know the literature on moral testimony. So perhaps there is something necessarily problematic about moral testimony. But if there isn’t or if there is some other way to know B independent of knowing both A and C, then I would think that we should not agree with you that there is something necessarily problematic about such inferences.

  5. Neil, thanks for the pointer to that paper; I’ll look at it.
    Jamie, thanks for the pointer to the WT problem. I’m unfamiliar with it, but perhaps Christian can take a look.
    Doug, Thanks for that. I wonder what you think of the following two cases:
    Case 1: I’m told by Reliable Scholar, a moral philosopher and social scientist, that both A and B are true. Reliable Scholar has come to have evidence for A via non-empirical means, but has come to believe B by investigating a huge number of cases of punished innocents. On the basis of what I’m told, I infer C.
    Case 2: I’m told by Reliable Philosopher, a talented philosophical hermit, that both A and B are true. Reliable Philosopher has come to believe both A and B independent of any empirical investigation. On the basis of what I’m told I infer C.
    I’m not sure what to think about coming to have any of the relevant beliefs on the basis of testimony, but I definitely feel more comfortable with my inference to C in case 1 than in case 2. But that’s because somewhere in the chain of evidence, I have evidence directly for the conclusion; Reliable Scholar’s evidence for B is his evidence for C. In case 2, the kind of case Christian and I are worried about, I’m not sure it is okay for me to infer C because I’m not sure that Reliable Philosopher is in a position to make the relevant inference on the basis of his position. What do you think?
    Pete, Thanks. I like that worry. With respect to the second interpretation, I’ll say that I’m not really convinced that in coming to believe Utilitarianism, I’m coming to believe the contingency of C. I’ve got a feeling that the solution might be related to that (still arguing about this with Christian), but certainly there is nothing in Utilitarianism that makes holding A and B logically inconsistent. At least, it doesn’t seem so to me. Christian can chime in with some additional examples later that don’t depend on Utilitarianism.
    Paul, I think your first response is a bit like Pete’s. I’m not convinced that B is non-normative unless one adopts the view that Utilitarianism comes with commitments about how we investigate other moral claims. I’m not sure that it does, but even if it did, you can get the inference to go through without being a full-blown utilitarian: you must only come to believe that utility maximization is sufficient for rightness.
    As far as the second response, maybe putting it this way seems more counter-intuitive:
    I’m a philosophical hermit and wish to increase (or decrease) my confidence in empirical beliefs, and I don’t have access to the relevant empirical evidence. I recognize that Pr(C|A & B) > Pr(C) (as one member of the class of ought to is relationships that might be useful to my project). I don’t currently have any reason to believe A or B, but I decide that, in order to come to have some evidence about C, I will investigate A & B (non-empirically).
    Setting up that kind of research program strikes me as perverse. That’s so even if I think I’ll only come to have a little evidence for C. If you aren’t bothered by that, I’m not really sure how to respond to your claim that it isn’t counter-intuitive. (Quick arithmetic below.)
    Thanks all!
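    Here’s the arithmetic, as a minimal sketch (assuming only the probability axioms plus the stipulation from the original post that A & B entails C):
    Pr(C) ≥ Pr(A & B) ≥ Pr(A) + Pr(B) − 1.
    So if my non-empirical investigation somehow drove Pr(A) and Pr(B) up to 0.9 each, I’d be committed to Pr(C) ≥ 0.8. That is why even modest evidence for the premises bleeds through to the empirical conclusion.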

  6. I like Doug’s and Pete’s suggestions for what’s (typically) going wrong in the Ought-to-Is case.
    To dissolve the broader “puzzle”, I’d raise questions about the assumption that “we [are] permitted to revise our moral/normative/evaluative beliefs in light of non-moral beliefs”. I’d want to distinguish three classes of beliefs:
    (1) Pure normative beliefs, e.g. that some moral theory is true, or conditional claims about what reasons would arise in such-and-such fully described states of affairs.
    (2) Pure descriptive beliefs.
    (3) Mixed beliefs, e.g. that we’re in a situation which gives rise to such-and-such reasons.
    My claim: (1) and (2) are generally autonomous in both directions (with the exception of testimony, etc., but that’s a special case). So there’s no mysterious asymmetry to explain. The puzzle arises from mistaking mixed beliefs for (pure) normative beliefs, when really it should be no surprise that we might revise (the descriptive component of) our mixed beliefs in light of new descriptive information.
    Does that seem right? Or were you thinking that it’s generally fine to revise even one’s pure normative beliefs in light of one’s descriptive beliefs?

  7. Christian — This is a neat puzzle! I guess this is how I’m thinking of things: Either we give (A) and (B) the CUSTOMARY PHILOSOPHICAL READING (CPR), or the WEIRD CONTINGENT READING (WCR).
    On the CPR, (a) is “Necessarily, if an act maximizes utility, it is not wrong”, and (b) is “Necessarily, if an act is an instance of killing an innocent child, it is wrong”. The modality here, it seems to me, is epistemic — although really, all I care about is that it’s the kind of modality to which conceivability is a particularly strong guide.
    Now, assuming the CPR of (a) and (b), we know that one of them must be false. That is because I can clearly and distinctly conceive of a killing of an innocent child that maximizes utility. And this ain’t a zombie case, either, folks; it is unassailable that what I’m conceiving of is a child, and utility, etc., and while I’m not conceiving of them fully (I hadn’t thought about the child’s eye color until now), I’m conceiving of them fully enough that I am justifiably certain that no matter how much more fully I conceive of them, my ability to conceive of the case is preserved.
    But of course, if either (a) or (b) is false, then the argument is unsound. (That’s not a problem with the INFERENCE, exactly.)
    On the other hand, suppose we give (a) and (b) the WCR: (a) is “Yo, guess what, turns out that no utility-maximizing act is wrong. I mean, what’re the odds, right?” and (b) is “Dude, you won’t believe it; it ALSO turns out that that every single instance of killing an innocent child is wrong. MIND = BLOWN.” (I’m using the vernacular to indicate contingency.)
    But look, if the CPR of either (a) or (b) is false, then the only way you could possibly know both (a) and (b) on the WCR is if you (or your testifier, or her testifier, blah, blah) knew C already. So basically, what Doug said. (Schematic version below.)
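    Schematically, as a minimal regimentation (Kill, MaxU, and Wrong are my stand-in predicates for killing an innocent child, maximizing utility, and being wrong; □/◇ mark whatever modality conceivability is a guide to):
    (a) □∀x (MaxU(x) → ¬Wrong(x))
    (b) □∀x (Kill(x) → Wrong(x))
    Together these entail □¬∃x (Kill(x) & MaxU(x)), which contradicts the conceivability datum ◇∃x (Kill(x) & MaxU(x)). So at least one of (a) and (b) has to go.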

  8. Hi John,
    Assuming that B is not a fundamental moral claim (and if we’re assuming A, it can’t be a fundamental moral claim given that act-utilitarianism is a claim about what the fundamental wrong-making features of actions are), then B is a derivative moral claim, one that follows from the conjunction of some fundamental moral claim about what the fundamental wrong-making properties of actions are (such as claim A) and some empirical claim (such as claim C) about which acts in the actual world have those properties. And if all that’s right, then the justification for B has to proceed through some knowledge of the empirical world, whether that comes directly from observing that C is the case or indirectly via the testimony of someone who has empirical knowledge such as the Reliable Scholar. The Reliable Philosopher can’t then be justified in believing B given his lack of the relevant empirical knowledge. And insofar as I know this, I wouldn’t be justified in accepting B on the basis of his testimony. So I agree that there is a difference between the two cases, but how does this show that the inference in Ought-to-Is is problematic, as opposed to showing only that accepting B on the basis of the testimony of someone who, like Reliable Philosopher, lacks the relevant empirical knowledge is problematic?

  9. I’m not sure that there aren’t permissible “ought” to “is” cases. Duties towards friends are maybe a likely case. There are more contentious and grotesque ones involving distribution of talents and the like. But why is:
    The evidence points towards my friend being a cheat.
    I shouldn’t believe of my friend that they’re a cheat.
    So, my friend isn’t a cheat.
    a bad “inference” (note, not a bad [deductive] argument…rather a bad transition between thoughts)? I’m not sure I have a strong view here, but it’s not obvious to me what to think.
    Also, see David Enoch’s “Wouldn’t it be nice if p, therefore p (for a moral p)” for related discussion on a similar issue.

  10. Jack, I’m sympathetic with that kind of inference (Christian, I think, less so). I think an adequate solution to the puzzle can explain why the puzzling inference is a bad one, but your case isn’t. So, I’m glad to see someone else thinks those inferences are okay.
    Andrew, Christian and I have been talking over a very similar solution to the one you give on the CPR. I think we want to frame it a little differently, but your suggestion is helpful. Thanks!
    Richard, for my part, I think that it does require some form of mixed belief to revise our moral beliefs in light of our descriptive ones (i.e., you can’t do so in terms of purely descriptive beliefs or premises), but I’m not yet seeing how that applies to the puzzle. How are you seeing our premises falling under your categories such that it makes the inference to C okay? I’m still trying to wrap my head around it and I’m a little slow this morning.
    Doug, do you think any moral epistemology permits me to increase my confidence in both A and B at the same time? You seem to be denying that (but maybe you only mean you can’t come to be fully justified in A while also coming to be justified in B). If so, that’s one solution to the puzzle. It just seems open to me that one could endorse a moral epistemology on which one could come to have reasons in favor of both A and B. Even if I recognize Utilitarianism as a fundamental moral principle, might I not allow that I can have evidence for B of the same sort? Maybe I’m missing something. Let me keep thinking about it. Meanwhile, when Christian is up, maybe he’ll post an example you’ll find truly puzzling.

  11. What about the following inference:
    1. There are never conflicts between our moral duties.
    2. There is a moral duty not to lie.
    3. There is a moral duty not to let others die.
    4. So, we are never in circumstances where if we lie it will prevent a death.
    I think something like Andrew’s move works here, but not Doug’s. Yes?

  12. Basically I want to second everything Doug just said, which seems to me to hit the nail on the head. But I wanted to call your attention to Morton White’s classic discussion of these sorts of inferences, which might be relevant, in “What Is and What Ought to Be Done”. (White defends them, essentially by extending a sort of Quinean holism to include moral and other sorts of evaluative propositions in with the empirical ones. So basically he would deny, I think, Doug’s claim that “the justification for B has to proceed through some knowledge of the empirical world, whether that comes directly from observing that C is the case or indirectly via the testimony of someone who has empirical knowledge such as the Reliable Scholar.”)
    Another paper that might be relevant is Matt Brown’s “Values in Science beyond Underdetermination and Inductive Risk,” Philosophy of Science, 2013.

  13. I am inclined to agree with Pete that, in the original example, there is a problem with B; but I want to make a broader point about the genus that this kind of inference might fall under.
    First, here’s another shot at an example of moral→empirical inference:
    I conduct a series of experiments to test whether people are happier under monarchies or democracies. My evidence turns out to favor monarchies.
    Then I reflect philosophically for a while, read some of the great works of political and moral theory, and discover that freedom and self-rule are necessary for human happiness.
    I conclude that what I took for happiness experimentally couldn’t have been true happiness, and must have been merely an appearance thereof.
    So long as I can come up with some kind of explanation as to why the subjects of monarchies might have seemed happy when they were not, this could be a legitimate inference.
    Second, my suggestion is that the one difficulty the original example may have been drawing on would also be captured by the following bit of reasoning:
    P1. The book I have just written is correct.
    P2. A book is correct iff all its claims are true.
    __
    P3. My book does not contain even one tiny false claim.
    There is no issue here of empirical vs. moral claims (P2 is a definition); the problem is one I would call ‘inferring your way out of an objection’ (i.e., a form of begging the question). If someone raises some claim as a way of objecting to a thesis, you are not allowed to infer that that claim is false on the basis of the truth of your thesis. I think, in the case of A, B & C, we are importing a set of intuitions about how arguments against utilitarianism work, namely, that we often object to utilitarianism by pointing out that doing immoral things can maximize utility. Or, in John’s second example, we are importing our familiarity with how people typically object to the claim that there are no duty-conflicts. Only with this argumentative context as a backdrop can we say the argument is a bad one.

  14. This is a very interesting post, and I find much to agree with in previous comments. I’d just like to make a small point. I don’t think there is anything wrong with inferring C from A and B if one knows A and knows B (A and B would obviously need to be true for one to know A and know B). And I don’t trust our intuitions about your A, B, and C, because I don’t think anyone is in a position to know A and B. Would people have the same intuitions in cases where one really thinks the subject knows some A’ and B’?

  15. Hi everyone,
    Thanks for the wonderful (and overwhelming) response. Let me start with the latest and then begin again at the top. So first, Daniel, I agree, and I’ve found that interesting: closure of KNOWLEDGE here seems no more dubious than anywhere else. So in diagnosing the puzzle we better keep that in mind; some explanations may prove too much.

  16. Doug,
    I agree that what’s puzzling probably has less to do with the types of claims themselves than with our background beliefs about how one comes to have evidence for them. It still seems perverse to me in the case of testimony (for both)–but allow me to just say that here.
    Here’s a wrinkle I think we’ve been neglecting.
    Yes, it seems one cannot have evidence for B under the assumption of A without evidence for C. However, our “scientist” may not believe A–he/she may not be reasoning under that assumption. Rather, he/she first thinks about act utilitarianism’s theoretical virtues and concludes “rather plausible,” then considers her powerful intuitions about child killing and says the same about B. Now we can ask: in isolation, does this provide any grounds at all to believe C? And it seems to me the answer is, “No”…even though her premises may enjoy some justification.
    However, the point that the “moral scientist” does not have the right kind of evidence for B (were she to ACCEPT A) is rather useful. If justification/evidence is supposed to regulate belief formation, we might plausibly invoke a principle that says something like “don’t treat justification for premises as at all transferable to a conclusion when the justification for at least one premise would be lost if we accepted the others.”

  17. Jamie and Neil,
    Thanks for the leads! An argument for anti-realism, eh? I’ve got a sense there’s something important to learn from all this, but I doubt it’s that.

  18. Pete,
    I’m not so sure the “necessity” reading of B is, strictly speaking, inconsistent with A. But I’m not sure it’s not, either…so, fair point. As for the temporal reading–it amounts to that (putative) empirical claim only if we are working under the assumption that A is true. But again, we may not be (see my first comment to Doug). Nevertheless, the point (again) might rear its head: as the evidence/justification for A “piles up,” so too it seems our non-empirical grounds for B cease to be grounds. We can, accordingly, appeal to a non-ad-hoc principle that bars evidence transfer in cases like these.
    I wonder whether invoking a principle like this, however, is going to have radical and constraining implications for what kinds of things we can take to be evidence for a moral/normative position. I haven’t fully thought this part through yet.

  19. Paul,
    Nice points. And as I’ve been pressing, it might be the fact that full credence in A WOULD undermine your credence in B that allows us to block transfer in partial cases.

  20. Hi Richard, and hello to all those in Bowling Green!
    I assume your “pure” and “mixed” are a matter of the content of the claims and not how we came to believe them. Pure claims do not entail the actual instantiation of any moral property (though they WOULD when conjoined with a full non-normative description of the world), and mixed claims do. Yes?
    I think a final view may very well appeal to this distinction. But even with the distinction front and center there’s still a lot to puzzle about. First, while the pure moral might be “autonomous” with respect to the non-moral, its being so may not be all that surprising given the likely necessity of the “pure moral” and the fact that no deductive relations hold from P-Descriptive to P-Moral. But those grounds don’t hold in the reverse case. Moreover, why should we think the relevant independence/autonomy entails the other? And even if it did, we would still need a more satisfying explanation for the first “autonomy,” and only then would we see if the explanation entails mutual “autonomy.” So dividing up the ground in this way is exactly what we need to do, but it is not the end of the story, I think.
    In the next post I’ll give an example of a “mixed” inference that seems dubious too, which might show the explanations mentioned in earlier posts are wanting.
    Though, I should note, some of them (Jack’s example, for example) complicate matters.

  21. My thanks to Dale Dorsey for the mention and for bringing this thread to my attention.
    It seems to me that a common theme in the responses to John’s & Christian’s puzzle is a form of epistemic foundationalism that just won’t do. So, Doug Portmore says the following:

    And if all that’s right, then the justification for B has to proceed through some knowledge of the empirical world, whether that comes directly from observing that C is the case or indirectly via the testimony of someone who has empirical knowledge such as the Reliable Scholar. The Reliable Philosopher can’t then be justified in believing B given his lack of the relevant empirical knowledge.

    But that only works if we assume that the degree of belief or degree of justification has to be established in at least quasi-foundational form. This is, as Dale points out, one way Morton White starts his argument for extending Quine’s holism. Quine’s brief response to White is totally unsatisfying, but Dale made a very interesting go at justifying Quine’s keeping science and values apart in “A Coherence Theory of Truth in Ethics”: http://people.ku.edu/~ddorsey/coherence.pdf
    I hold a pretty extreme view on these matters, I will admit, but it seems clear to me that most philosophers of science who have really considered the matter reject the strict separation of science and values under discussion here. In other words, empirical / factual / scientific beliefs can (contra Alex Barber) be influenced by (if not, exactly, “refuted” by) moral, practical, or political values. See my summary in the PhilPapers “Science and Values” bibliography: http://philpapers.org/browse/science-and-values

  22. Andrew, and all those who point to tensions between A and B.
    I hear you, and I think Pete’s worry is reflected in yours a bit too…but though our “scientist” can clearly and distinctly imagine a child killing being util-optimal, I don’t yet clearly see why that’s problematic. It seems like the relevant sort of necessity is reflected in the kind of reasoning the “scientist” uses to justify each premise. And so, why isn’t his/her response: “though I could conceive it, I now have some evidence it is in fact impossible (in the relevant sense)”? I’ll have to think more about it.
    But…maybe we’ve been barking up the wrong tree all day. After all, suppose, plausibly, that ought entails can. Here we can construct a one-premise deduction that seems equally problematic but (on its face) without the “bad-making” explanations appealed to above.
    It seems perverse, again, to marshal, as evidence that you could do something, the moral claim that you ought to do it. And yet surely we can have justification for believing one “ought” without yet knowing one “can.”

  23. Dale,
    Thanks for the leads! And yes, I like Doug’s explanation in that case too…but it’s not clear it generalizes to the “intuitive” cases (see John’s last example, and my and Paul’s earlier point that one need not be working under the assumption that A).

  24. Agnes,
    That’s a really interesting example. And I certainly think revising one’s original investigatory assumptions and rejecting the earlier conclusions might be in order. But where, precisely, is a moral to non-moral inference happening here? Why isn’t what needs to be revised a conceptual understanding–a characterization of happiness–rather than a non-moral conclusion in light of a substantive moral premise?
    As for the second point, you’ve offered a nice “debunking” explanation for why there is a strong appearance of a problem in these cases, though no general problem really exists. It’s a great thing to keep in mind. But I think there are resources for a vindicating explanation as well. We shall see.

  25. Matt,
    Thanks for joining us. Feel free to set us straight. Although, let me provocatively suggest that inquiry here–thinking about the putative “puzzle”–may not just presuppose some important epistemic asymmetries, but reveal them as well. Or maybe one person’s Modus Ponens really is just another’s…

  26. Thanks for your reply, Christian. You said, “I agree and I’ve found that interesting, closure of KNOWLEDGE here seems no more dubious than anywhere else. So in diagnosing the puzzle we better keep that in mind”. I agree, but there is a reasonable perspective from which this seems to solve the problem altogether. If one thinks that knowledge is the right standard, and one shouldn’t be in the business of forming beliefs on the basis of reasoning from unknown propositions, then it follows that we have a diagnosis of what is going wrong in the bad cases. When I think about what goes wrong in political debates this diagnosis seems right, e.g. a libertarian might come to form false contingent non-ethical beliefs about the effects of possible economic policies reasoning from (or under the influence of) a libertarian moral principle in conjunction with common moral judgments about particular cases (e.g. that nobody in the US ought to live below the poverty line). What’s gone wrong? They might know the common moral proposition (I sometimes think that’s plausible), but they certainly don’t know the libertarian moral principle (even if it’s true, which I seriously doubt, nobody now *knows* such a principle). What’s wrong with just stopping here in our diagnosis (assuming the knowledge norm is correct, anyway)?

  27. Daniel,
    An elegant solution, but isn’t it really just a beautifully built bomb? Scientific reasoning seemingly relies on inference from the provisional and unknown; indeed, probably most theoretical reasoning does. And while we know that that sort of reasoning is fallible, we don’t want to say it’s generally an illegitimate way to proceed. And yet that’s precisely what we will want to say about the kind of moral inferences discussed above and in your example–not merely that they’re insecure or not always reliable.

  28. No, it isn’t a bomb (at least not for the reasons you adduce), because the claim is one about what we rationally ought to believe (in a fully fledged fashion), not what inferences we ought to draw when engaging in theoretical hypothesis building and testing, etc. But doesn’t the fact that I am admitting that there are a host of other legitimate ways of drawing inferences undermine my claim that I have offered a solution to the problem? I don’t think so. The claim in my first post was that when we consider your example (and others like it), it only seems like an inference that shouldn’t be drawn when we imagine it being used to form beliefs (in cases where knowledge of the premisses is lacking). If, alternatively, I consider the argument in a purely hypothetical mode, I find myself thinking there is absolutely nothing wrong with it, since it is clearly a valid argument (or could easily be made valid).

  29. …okay, I only discussed two extremes there. What if I kinda think A is true, but I’m not sure, and I kinda think B is true, but I’m not sure? Is it counterintuitive to think I am thereby licensed to start kinda thinking C is true but not be sure it’s true? I don’t know about other people, but I don’t have a firm intuition in such a case. Maybe I’m only kinda licensed? (The bound below suggests as much.)
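    (Using the lower bound sketched above, Pr(C) ≥ Pr(A) + Pr(B) − 1 whenever A & B entails C: credences of 0.7 in each of A and B guarantee only Pr(C) ≥ 0.4. So kinda believing the premises guarantees, at best, weakly kinda believing the conclusion.)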

  30. Hi Christian,
    When you talk about “intuitions about child killing,” what do you have in mind? Suppose you know what children are (young, not fully developed, human beings) and know what killing is (causing the death of) and know what ‘innocent’ means. Do you think that anyone just has the brute intuition that causing the death of an innocent and young, not fully developed, human being is wrong? Don’t you have to know things such as that human beings typically have valuable lives, that children often have living parents who would be grief-stricken by the loss of their children, and that killing children typically makes them worse off than they would have been had they not been killed? So it seems to me that our intuition is that it is wrong to make a being worse off and that this conjoined with some empirical facts that we know about killing children leads us to accept the view that killing children is typically wrong. Now, if you knew these relevant empirical facts about killing innocent children, knew that killing children is typically wrong, and were justified in believing that act-utilitarianism is more likely true than false (which is doubtful given that this goes well beyond just saying that it is plausible relative to the philosophical alternatives on offer), then I would think that you then have some evidence for the claim that killing an innocent child typically doesn’t maximize utility. Perhaps, you wouldn’t be justified in accepting C, but that’s because B is clearly false. It wouldn’t be wrong to kill an innocent child in the world where both children typically have miserable lives and killing them allows their souls to ascend to heaven where they will then have a blissful existence.

  31. From what many have been saying, it appears there’s something wrong with the inference but it’s not the “ought to is” character. Either the grounds of the premises undermine each other (i.e. one should not believe both A and B). Or the grounds of B involve C, and so, though the argument is fine, the inference is suspect.
    These problems shouldn’t deter us. I think Christian and John are especially concerned with the problem of inferring an “is” from an “ought.” Maybe Christian’s original case isn’t the best version of this.
    It’s easy to construct an “ought to is” free from these problems.
    (1) I ought to do X if X maximizes utility.
    (2) I ought to do X only if X proceeds from a universalizable maxim.
    (3) Therefore, X maximizes utility only if X proceeds from a universalizable maxim.
    [Side note: Someone mentioned to me that Kant may have made an inference like this. Is that true? Also, I’m a bit uncertain about the oughtiness of (2), but we’ll let that slide for now.]
    We might think that this inference is bad on the grounds that there are more direct ways of coming to the conclusion. It’s like using a microscope to read a book. Or like using a geometrical argument to come to the belief that this object in front of me is a teacup. Maybe this is the problem with inferring an “is” from an “ought.”
    Can anyone think of an “ought” to “is” inference where this diagnosis won’t apply i.e. where reasoning through our obligations is the most direct, or only way to decide some empirical matter?

  32. Daniel,
    Yeah, it seems pretty counter-intuitive to me. And, importantly, given A and B are supposed to be necessary, it looks like your evidence will suggest the relevant (contingent and non-moral) C is necessary too! So there’s an added wrinkle/concern John and I have been exploring.

  33. Jacob and Doug,
    Nice example. I had some things to say in response to Doug’s pressure on the earlier example, but maybe this one is more powerful and, as Jacob suggests, immune to the same charges…what do you guys think?
    Boy, I sure hope we can do better than “Yeah, that’s a perfectly good way to form your beliefs, the ‘problem’ is that there are more efficient ways.” And how, after all, do you know this (moral “science”) method is worse? Maybe addressing THAT question will give us the resources to say something more satisfying.
    Maybe this will help: in these cases, we want whatever our diagnosis is to square with, or predict, that the person should say to themselves, “hey, better revise A or B, or both, rather than move on to C.” So imagine trying to address the scientist tempted to infer the relevant C…how can we make a compelling case for revision…a case that generalizes (as the original example may also have special problems, as Doug especially has pressed).

  34. Let me vaguely lay some of my (and John’s) cards out on the table. It seems one thing we can say, in the problematic cases, is that the person’s inference ultimately relies on actual-world-insensitive claims (would-be necessary truths)–what Richard calls Pure Moral claims–to draw conclusions about the actual world. This gives the conclusions the wrong modality and is, by the “scientist’s” own lights (if they agree they are would-be necessary truths), illegitimate for actual-world inference.

  35. In other words, if you’ve got sets of moral claims that are supposed to hold across worlds–be actual world insensitive–then you know something is wrong when they entail some claim about the actual world.
    And while there are cases where the A’s and B’s are NOT treated as insensitive (as when the act utilitarian asserts B), we here have a different problem: the justification for the premise is parasitic on an independent justification for the conclusion.

  36. Hi Christian,
    What’s problematic about Jacob’s inference:
    (1) I ought to do X if X maximizes utility.
    (2) I ought to do X only if X proceeds from a universalizable maxim.
    (3) Therefore, X maximizes utility only if X proceeds from a universalizable maxim.
    Would you think that the inference below is likewise problematic? Why or why not? And if you think that the inference below is not problematic, then what explains why it is not problematic but the one above is? They both link two properties via their relationship with another sort of property.
    (I) Doug has the sensation of yellowness when looking at X under normal light if X is yellow.
    (II) Doug has the sensation of yellowness when looking at X under normal light only if X reflects light that has a wavelength of about 570 nm.
    (III) Therefore, X is yellow only if X reflects light that has a wavelength of about 570 nm.
    Also, is it your view that (1) and (2) are actual world insensitive, whereas (3) is actual world sensitive? If so, could you say in what way (3) is actual world sensitive and (1) and (2) aren’t? Wouldn’t all three hold across all possible worlds if they are indeed all true?

  37. (3) is actual world sensitive because it’s contingent whether the maximizers are also the ones that conform to the CI.
    BUT…you’re right…it (3) would be actual world insensitive given the truth of (1) and (2) (when held as “basic”/“pure”/“non-derivative” moral principles).
    My point is that the speaker him or herself is committed to denying that (3) is actual world insensitive; that’s why the inference is illegitimate (and, upon reflection, even by their own lights).
    Your question about the color argument is way too hard for me…as it’s hard to know what we’re allowed to hold fixed–what set of worlds is relevant. You could go blind, after all. But though there may be some modality where (I) and (II) are necessary (in the sense that their truth does not depend on the way the world actually is)…it’s surely not the same way as in (1) and (2). It seems obvious to me that if you “change things around,” i.e., the actual and contingent facts, (I) and (II) can and will change truth values. Not so with (1) and (2) (when they are not ‘derived’ from more fundamental principles plus non-moral contingents).
