Advocates of moral dilemmas claim that there are possible cases in which no action open to an agent is morally permissible.  If we translated this into Gibbard-speak, it would come out roughly, “Sometimes there’s nothing it is okay to do.”  But such a claim cannot express a plan or system of plans for action; in every situation, you wind up doing something.  So the moral dilemmatists’ claim is on Gibbard’s view something like analytically false (he calls it “inconsistent”), and anyone who made such a claim would be deeply confused.  However, advocates of moral dilemmas seem to understand what they’re saying; they don’t seem to suffer from fundamental confusion, even if they are wrong.  How should we explain this?

  1. Moral dilemmas are possible, and Gibbard’s semantics can’t account for them, so Gibbard’s semantics are wrong.
  2. Gibbard’s semantics shows that moral dilemmas are impossible, and those who have argued otherwise are in fact deeply confused.
  3. Gibbard’s “okay to do” and the moral dilemmatists’ “morally permissible” are predicates that don’t have much to do with one another; this is a pseudo-problem due to bad translation.
  4. The two predicates are in fact quite similar, but they are theoretical terms defined by their roles in substantive moral theories.  What we learn is that Gibbard’s semantics is not theory-neutral but carries consequences for an adequate first-order moral theory.

The first two solutions don’t strike me as very plausible.  The third and fourth have more merit. 

In favor of the third solution, one might point out that advocates of moral dilemmas often appeal to the rightness of remorseful feelings no matter your choice of action in a dilemmatic situation.  An expressivist might then construe their “…is (not) morally permissible” as expressing, not commitment to a plan of action, but commitment to norms for feeling remorse.  On this view, the dilemmatists are construing moral theory as the study of what to feel remorseful for, not the study of what to do. 

If this seems like an implausible construal of the dilemmatists’ views on moral theory, we have the fourth option to fall back on.  If the fourth option is correct, then expressivist metaethics is not as purely “meta” as it is usually advertised.  Thoughts?  Other suggestions?

39 Replies to “Moral Dilemmas and Gibbard’s Expressivism”

  1. There are very good arguments for the possibility of moral dilemmas that do not rely (as they used to) on intuitive responses to detailed examples. Pointing to alleged dilemmatic situations was never the most promising approach. Now there are any number of deontic logics that accommodate moral dilemmas. In fact the validity of a deontic “theorem” varies with the moral theory you adopt. You can still generate a contradiction in standard deontic logic (SDL), but that’s not a big deal. The deontic theorems needed to generate a contradiction are more dubious than the claim that moral dilemmas are possible. There are also further arguments for moral dilemmas that appeal to “moral residue” claims, or claims that no matter what you do it still seems apt to feel remorse or guilt.
    Greenspan argued this way, as I recall, as did Bernard Williams and Ruth B. Marcus.
    But the best case that moral dilemmas are possible (even utilitarian moral dilemmas) is made by models containing infinite sequences of improving worlds and a suitably defined principle of obligation. Michael Slote describes one informally (Beyond Optimizing). Slote’s example is actually due to Timothy Williamson. The assumptions are so simple that it is very difficult to see how those models fail to describe perfectly possible situations.

  2. I’m not sure that I see the problem that moral dilemmas pose for Gibbard’s semantics. The dilemmas are usually presented as cases where I think that ‘I ought to do A’ and ‘I ought to do B’ and know that I cannot do both. In Gibbard’s semantics I’m thus expressing a plan to do A and a plan to do B while knowing that I cannot do both. It’s not clear why I cannot have incompatible plans.
    You can also put this in negative terms. I think that I ought not to do A (A’ing would be wrong) and that I ought not to do B (B’ing would be wrong), and I know that I must do either A or B. In Gibbardian terms, I have adopted a plan not to do A and a plan not to do B with the knowledge that A and B are the only available options. Both of these seem okay as long as you don’t think that it is impossible to knowingly hold incompatible plans.
    Wouldn’t the utterance ‘sometimes there’s nothing it is okay to do’ express a lack of a plan for those situations? Why would that be a problem?

  3. Hi Heath. Jussi beat me to the punch on both counts, which is good, since he no doubt expressed it better than I would have:

    The dilemmas are usually presented as cases where I think that ‘I ought to do A’ and ‘I ought to do B’ and know that I cannot do both. In Gibbard’s semantics I’m thus expressing a plan to do A and a plan to do B while knowing that I cannot do both.

    Wouldn’t the utterance ‘sometimes there’s nothing it is okay to do’ express a lack of a plan for those situations?

  4. The dilemmas are usually presented as cases where I think that ‘I ought to do A’ and ‘I ought to do B’ and know that I cannot do both. In Gibbard’s semantics I’m thus expressing a plan to do A and a plan to do B while knowing that I cannot do both.

    I don’t think this can be right. Genuine dilemmas are cases in which you ought to A and you ought to B, and in some sense, you cannot do A & B. So it is not merely a matter of thinking or believing that you ought to A and thinking or believing that you ought to B. Perhaps there is something inconsistent in expressing a plan to (A & B) (since the goal, (A & B), is by hypothesis impossible, though each conjunct is possible). But you can reasonably deny that obligation is closed in this way. You can reasonably deny, that is, that OA & OB entails O(A & B). That principle is invalid for most forms of utilitarianism anyway.
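For reference, the derivation that makes dilemmas formally troublesome can be sketched in standard deontic notation; the labels are the usual names for the two auxiliary principles, and denying agglomeration, as suggested above, is exactly what blocks the derivation:

```latex
\begin{align*}
&1.\; OA && \text{(dilemma)}\\
&2.\; OB && \text{(dilemma)}\\
&3.\; \neg\Diamond(A \wedge B) && \text{(dilemma)}\\
&4.\; O(A \wedge B) && \text{(1, 2; agglomeration: } OA \wedge OB \rightarrow O(A \wedge B)\text{)}\\
&5.\; \Diamond(A \wedge B) && \text{(4; ought implies can: } O\varphi \rightarrow \Diamond\varphi\text{)}\\
&6.\; \bot && \text{(3, 5)}
\end{align*}
```

Rejecting the agglomeration step leaves 1–3 jointly satisfiable, which is why that principle’s validity matters so much to the dilemmatist.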

  5. Mike,
    we are thinking about whether there are moral claims that Gibbard’s semantic account cannot account for. I thought Heath’s claim was that if someone believes in moral dilemmas and asserts that there are such things, then they would be making claims that would be incomprehensible in Gibbard’s theory. Now, I’m wondering what such a claim would be. If the dilemmatist makes the claims you give – ‘You ought to A’, ‘You ought to B’, and ‘You cannot A&B’ – then none of these claims is one that Gibbard would have any problem with. The first two are expressions of plans and the third a modal claim. So, if that is all the dilemmatist needs to claim, then that normative view is not a problem for Gibbardian metaethics. Thus, if what you say is right, there are even fewer problems for Gibbard. Gibbard’s theory, after all, is a semantics for moral claims and not a normative theory about what you ought to do.

  6. Jussi,
    That’s fine. I just don’t see why the proposition ‘I ought to A’ is translated (given anything Gibbard says) as ‘I think I ought to do A’. I didn’t see that suggested in the post. But I might have missed it, and I might be misunderstanding why you’re saying it. I think I can see how Gibbard might say that my assertion “I ought to do (A&B)” is incoherent. That assertion amounts to saying “I am planning on doing (A&B)”. But since I know that (A & B) is impossible, I cannot intend to do (A&B). And presumably if I cannot intend to do the impossible, I cannot plan to do the impossible. Of course I suggested above that in a dilemmatic situation you need not ever plan on doing (A & B).

  7. It’s not translated. The starting point is that the ethicist who accepts moral dilemmas *thinks* and *claims* ‘I ought to A’ and ‘I ought to B’. Now, the question is, can Gibbard’s metaethical position make sense of what the ethicist is thinking and claiming? I don’t think Gibbard would have a problem with incoherent plans at all. After all, we make incoherent moral claims a lot. So, I think a Gibbardian should use incoherent plans to make sense of the dilemmatist’s claims and thoughts.
    I think you are right that it would be better to interpret the assertion ‘I ought to A and B’ as expressing two plans, Plan(to A) and Plan(to B), rather than Plan(to A and B). Sometimes it still worries me whether one can even knowingly intend to A and intend to B when one knows that this is impossible. Can I intend to go home at 5 and to see a film at 5 when I know that these would happen at the same time and I cannot be in two places simultaneously? What would count as succeeding in having such intentions? I’m not sure, though, if even this is any problem for a Gibbardian account. It may be that our assertions of moral dilemmas express a lack of plans for the tragic situations.

  8. I take Gibbard to say that anyone who thinks they are in a moral dilemma is irrational (no doubt it is psychologically possible) and the dilemmatists to say that they are not necessarily irrational, because they might be right.

    An important requirement of consistency for a plan is that it must not rule out every alternative open on an occasion. A plan that did that–even a partial plan–would preclude offering any guidance on what to do on that occasion. (THTL p. 56)

    The point of having a plan is to guide action; a plan which is expressed as a dilemma cannot guide action. Plans for a single situation cannot (rationally) be incompatible because then there’s no guidance about what to do. Having no plan would not be expressed as a dilemma; it would be expressed, “I don’t know what is okay to do.”
    But those who believe in moral dilemmas are quite frank that considerations of moral im/permissibility cannot guide action in dilemmatic situations. Personally, I don’t believe there are any dilemmas; I’m with Gibbard on this one. But I’m struck by the fact that Gibbard and the dilemmatists seem to have very different ideas about the point of, and hence constraints on, moral theories.

  9. I’m still puzzled about what the problem is supposed to be. You write that ‘I don’t believe there are any dilemmas; I’m with Gibbard on this one.’ Where does Gibbard say that there are no moral dilemmas, or that there cannot be any? Why can’t he admit that there are? For him that would be to admit that there are situations for which we fail to make good action-guiding plans. Maybe life in some situations is that difficult. What would be the pressure for him to resist?
    He could thus agree with the defender of dilemmas that for those situations no practical guidance is forthcoming. I’m not sure that he would even have to say that accepting a moral dilemma is irrational (does he say this somewhere?). I cannot see why failing to have a plan for a situation is irrational as such, even when this failure consists of two incompatible plans.

  10. The starting point is that the ethicist who accepts moral dilemmas *thinks* and *claims* ‘I ought to A’ and ‘I ought to B’. Now, the question is, can Gibbard’s metaethical position make sense of what the ethicist is thinking and claiming?
    I don’t see that at all. The starting point is the set of facts describing the dilemmatic situation. The question is whether those propositions purporting to describe a possible situation are consistent. Gibbard seems to think that they are not.
    Heath, you quote Gibbard,
    A plan that did that–even a partial plan–would preclude offering any guidance on what to do on that occasion. (THTL p. 56)

    So I suppose the argument goes this way:
    1. Every consistent plan offers guidance.
    2. Dilemmatic plans offer no guidance.
    :. Dilemmatic plans are inconsistent plans.
    If that’s right then, if ~Poss(A & B), then the proposition O(A & B) is inconsistent. I’ve no doubt that there are moral dilemmas, so of course I think they’re consistent. In any case I don’t find this sort of argument at all cogent. For just one problem, why believe that dilemmatic plans offer no guidance? Isn’t the problem–assuming it is a problem–that they offer too much guidance?

  11. Mike,
    You write that ‘The starting is the set of facts describing the dilemmatic situation. The question is whether those propositions purporting to describe a possible situation are consistent. Gibbard seems to think that they are not.’
    The starting point just cannot be a set of facts in this case. You cannot start an argument against a *non-cognitivist* from the premise that there are *these moral facts*. The whole point of the expressivist project is to come to understand moral talk and moral-fact talk by starting from the attitudes we express by making these claims. The same goes for *propositions describing the situation*. Saying that just is to beg the question against the expressivist, who doesn’t think there are descriptive moral propositions at all, coherent or incoherent.

  12. I guess I’m with Jussi. Why can’t Gibbard just say that dilemma cases are cases in which it makes sense to do p and makes sense to do not-p? I accept a norm that requires p, and I accept a different norm that forbids p. I need a meta-norm, a norm that tells me which to favor, but there is none. Gibbard would only be stuck if he held that it always makes sense to do some one thing. But why think he holds that?

  13. Gibbard (at least in WCAF) needs to rule out dilemmas to get the logical apparatus to work right. And he needs the logical apparatus to deal with the Geach/Frege/Searle problem of giving an account of what moral claims mean when embedded, one which makes clear how they function together with yet further claims to logically imply yet further things. I forget all of the details and my Gibbard is not at home. But I recall he explicitly recognizes that he has ruled out such dilemmas and cites Marcus and van Fraassen as people who would disagree with him. If I recall correctly, the problem arose because he represents the contents of claims via sets of pairs of worlds and “consistent” sets of norms. And his gloss on consistency requires that there always be a permissible action in any given situation. There may be a way for him to get out of this (and Mark Schroeder has a nice book manuscript in which he develops a whole approach that is in the spirit of Gibbard and which does seem to avoid the problem) and still get what he needs out of the apparatus. But as he constructs it, moral dilemmas are ruled out by the content/meanings of moral judgements plus logic.
    That said, I’m no longer as impressed by the objection that Gibbard rules out dilemmas as a matter of “logic” as I once was. (I once made the objection myself in print.) We know from thinking about the paradox of analysis that competent speakers of a language can deny claims that follow logically from the best analysis of the content of the claims they make. So Gibbard could well reply to the objection that while it is true that there can be no moral dilemmas given the meanings of our words ‘right’ and ‘wrong’, a competent speaker of the language could be ignorant of that precisely because s/he did not yet accept that correct analysis.

  14. Oh, that stuff, ok. So that does seem to be the thing for Gibbard to say. The critical issue, it seems to me, is preventing the conclusion that a speaker is confused just for thinking he’s in a dilemma, and your reply does that.

  15. The same goes for *propositions describing the situation*. Saying that just is to beg the question against the expressivist who doesn’t think there are descriptive moral propositions at all, coherent or incoherent.
    Who said “descriptive moral propositions”? My point is just this. If there is a logic here governing moral talk–and the assumption seems to be that there is one–then the logic determines the logical relations among whatever gets expressed by the sentences describing the dilemmatic situation. I can’t see how that could be mistaken. I’m not begging any questions against expressivism, since I’m not saying anything about what gets expressed (if anything). In fact this begs no questions against any interpretation of moral sentences on which what they express has a logic.

  16. There’s a great paper on Gibbard’s moral semantics by Jamie Dreier in Noûs 1999 called ‘Transforming Expressivism’. Dreier starts by explaining how Gibbard applies truth-conditional, possible world semantics to ethical claims. The content of basic factual claims is in that kind of semantics the set of possible worlds in which the sentence is true. Given this content we can then explain the meaning of complex sentences and the validity of logical inferences.
    In the case of ethics, Gibbard replaces the factual worlds with ‘factual-normative worlds’, ‘fact-prac worlds’, or plan-laden worlds. These worlds are pairs of naturalistic descriptions and complete plans of action for them. It is true that by definition or by logic the plans (or maximal contingency plans or hyperplans, as Gibbard calls them) that in part constitute these worlds must be consistent, i.e., there must be only one thing to do in each situation. However, the content of our moral claims is supposed to be a *set* of such consistently planned worlds.
    This semantic machinery is supposed to account for our simple moral claims and use the machinery of the ordinary possible world semantics to account for the complex claims and inferences and their validity. So far so good. Now, it still beats me why this semantic machinery cannot be used to account for expressions of dilemmas even if any hyperplan in any one possible planned world cannot be inconsistent. An inconsistent claim would just express a set of planned worlds where the individual, consistent plans of different worlds are mutually inconsistent. Or, different claims like ‘I ought to A’ and ‘I ought to B’ would express consistent sets of planned worlds that are mutually incompatible.
    I still cannot see where Gibbard rules out moral dilemmas on logic alone. On page 59 he says that the problem with inconsistent judgments is practical – they fail to give us a plan to follow in our lives. This is the aim of our normative thinking, after all. But isn’t the point of moral dilemmas just that there is no guidance to be provided and no satisfactory plans to be formed? I don’t think moral dilemmas are even mentioned in THTL.
    Mike,
    If there is a difference between propositions describing the situation and descriptive propositions I would like to hear what it is.

  17. A proposition describing a situation need not be a descriptive proposition or, better, a purely descriptive proposition. I took you to mean the latter. Maybe you didn’t mean that. But then you say,
    This semantic machinery is supposed to account for our simple moral claims and use the machinery of the ordinary possible world semantics to account for the complex claims and inferences and their validity. So far so good.
    On your understanding of Gibbard’s view, does he use the word ‘validity’ the way it is normally used? Does he understand it in terms of truth-preservation among sets of propositions, in this particular case, among sets of moral claims? Is an inference rule, on his usage, what we typically mean by ‘inference rule’. If he means something else by these terms, I’m very curious to know what it is.

  18. This is how Dreier explains the way possible-world propositions ordinarily account for the logical validity that some inferences have: ‘When some premises strictly imply a certain conclusion, that is to say that the intersection of the sets expressed by the premises is a subset of the set expressed by the conclusion’. The difference between naturalistic and moral claims lies in what type of worlds constitute the premises and the conclusion – purely naturalistic worlds or plan-laden worlds.
    Thus the idea is the same as truth-preservation, but it should, I guess, rather be understood as plan-preservation. If the content of the premises is a set of hyper-planned worlds that are compatible with the plan I’m expressing with the sentences of the premises, then the sentence in the conclusion is an expression of a plan (i.e., a set of hyperplans) to which I must be committed given the planned worlds of the premises. I take it that inference rules mean here what everyone means by inference rules.
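Dreier’s gloss can be written compactly. With \llbracket P \rrbracket for the set of worlds (purely naturalistic in the descriptive case, plan-laden in the normative case) expressed by a sentence P, validity is just set inclusion:

```latex
P_1, \ldots, P_n \models C
\quad \text{iff} \quad
\llbracket P_1 \rrbracket \cap \cdots \cap \llbracket P_n \rrbracket \subseteq \llbracket C \rrbracket
```

The definition is the same in both cases; only the kind of world that populates the sets differs.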

  19. Now, it still beats me why this semantic machinery cannot be used to account for expressions of dilemmas even if any hyperplan in any one possible planned world cannot be inconsistent. An inconsistent claim would just express a set of planned worlds where the individual, consistent plans of different worlds are mutually inconsistent
    As I said, I’m not sure how ‘validity’ and ‘inference rule’ or (for that matter) ‘consistency’ are being used here. But there is one way in which he could argue that an inconsistency arises. Suppose that you ought to do A, or OA, iff there is some relevant world w in which you plan to do A. One way to ensure consistency is to hold that an agent can fulfill all of his obligations in a single world. So if OA and OB, then you plan to do A in some w and you plan to do B in some w’. But there must also be some single world in which you plan to do (A & B). This is what the closure principle (OA & OB) only if O(A & B) is supposed to ensure. I guess that could be what he has in mind to generate the inconsistency.

  20. Jussi,
    He could, I think, still understand validity in terms of truth-preservation, since, I take it, plans are said to hold, or to be true, at worlds. So take the intersection of worlds at which S plans to do A and S plans to do B and you have the set of worlds at which (S plans to do A & S plans to do B). What about consistency? Is his claim that it is logically inconsistent to plan to do (A & B) when it is not possible to do (A & B)? Does he give a semantics for claims like ‘S plans to do A’?

  21. Mike,
    I don’t think any expressivist would want to commit to anything like ‘You ought to do A, or OA, iff there is some relevant world w in which you plan to do A’. That would be close to giving up expressivism. Similar problems follow from the talk of truth-preservation and the truth of plans at worlds. I guess the better way to understand validity is to start, as Gibbard does, from Blackburn’s idea that accepting complex statements is ‘tying oneself to trees’. What one then commits oneself to in the normative case is not the truth of the claim in the conclusion but rather the planning attitude expressible by the conclusion.
    About consistency: I think he uses the term ‘ruling out’. So a plan is inconsistent when one part of the plan rules out what another part of the plan would guide towards. The plans of each planned world must be consistent in this way. But two hyperplans can be inconsistent with each other when following the plan of one world rules out following the plan of the other. Of course, this is close to saying that one of the plans makes conforming to the other impossible.
    I’m not sure if he gives a semantics for statements about plans, but he does talk about what kind of psychological attitudes planning states are supposed to be. The project is to give a semantics for normative claims in terms of these attitudes. I’m not sure that requires a semantics for propositional attitude ascriptions.

  22. Jussi,
    You write:
    Now, it still beats me why this semantic machinery cannot be used to account for expressions of dilemmas even if any hyperplan in any one possible planned world cannot be inconsistent. An inconsistent claim would just express a set of planned worlds where the individual, consistent plans of different worlds are mutually inconsistent.
    I’m going to speak of norm-world pairs rather than fact-prac worlds because I think it is easier to say the relevant things – but the formal apparatus is essentially the same, as is the basic idea. The idea is that any claim is inconsistent either with some way the world might be descriptively, or with some practical commitment, and we can use what is ruled out to represent the contents of those claims. The worlds ruled out represent what the claim represents descriptively; the norms (or plans) ruled out represent the practical commitments. So, for example, to say that Bush will do what is wrong if it will gain him political advantage rules out those norm-world pairs in which some action is disapproved of by the norm member of the pair and yet the world is one in which that action is politically advantageous and yet not done by Bush. (A purely moral claim rules out all the pairs which couple sets of norms inconsistent with that claim with any world. And a purely descriptive claim rules out all the pairs composed of worlds in which that claim fails to hold coupled with any set of norms whatsoever.)
    According to Gibbard’s rules, consistent norms always allow a permissible action – see p 88 of WCAF. (Or using his later apparatus, consistent plans always allow for some action as the one to do in any situation.)
    An inconsistent claim is one that cannot be represented by a set of world/norm pairs (or equivalently, one which rules out every consistent world/norm pair). Thus any conjunctive claim whose conjuncts jointly rule out all the “consistent” world norm pairs is inconsistent. A dilemma can be thought of as a situation in which doing A and not doing A are both forbidden. So a dilemma can be represented by a conjunctive claim, the conjuncts of which together rule out all the possible “consistent” combinations of world norm pairs.
    So basically, I think the answer here is that it is not enough for a consistent set of norms to be satisfiable in some possible world. Rather, as I understand Gibbard’s intentions, consistent norms allow for a permissible option in any situation. (Or to use the fact/prac talk, consistent maximal contingency plans will tell you what is to be done in any situation. And the plans that compose the fact prac worlds are maximal.)
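The picture just described can be compressed into a rough schematic sketch (the notation here is mine, not Gibbard’s):

```latex
% Content of a claim S: the norm-world pairs (with consistent norms) it does not rule out.
\llbracket S \rrbracket \;=\; \{\, \langle n, w \rangle : n \text{ consistent and } S \text{ does not rule out } \langle n, w \rangle \,\}

% Gibbard's constraint: consistent norms leave a permissible option in every situation.
n \text{ consistent} \;\Rightarrow\; \forall c\, \exists a\; (n \text{ permits } a \text{ in } c)

% A dilemma claim for c forbids every alternative in c, so it rules out
% every pair whose norm member is consistent:
\llbracket\, \text{``nothing is permissible in } c \text{''} \,\rrbracket \;=\; \emptyset
```

On these definitions the dilemma claim has empty content, which is just what it is for a claim to be inconsistent on this apparatus.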
    There is an issue about what happens if you relax these requirements on consistent norms/consistent maximal plans. I suspect that it would make it hard to make the rest of the logic work out right so that the inferences we do want to follow as a matter of logic in fact do follow.
    I hope I haven’t made any mistakes here. It’s been a while since I’ve worked through the details of Gibbard and my mind is not naturally suited to logic.

  23. Mark,
    I completely agree with all of what you write and tried, crudely, to say the same thing before. The point, though, is that Heath’s original criticism of Gibbard was that moral dilemmas are ruled out by Gibbard on logical grounds, because what is put forward in moral dilemmas cannot be made sense of with Gibbard’s semantics. My argument was that it can. You write that:
    “So a dilemma can be represented by a conjunctive claim, the conjuncts of which together rule out all the possible “consistent” combinations of world norm pairs.”
    this is an answer to Heath’s question of how to understand moral dilemmas in Gibbardian terms. This retains the normative neutrality of the expressivist metaethics. Moral dilemmas express acceptance of world-norm pairs that rule out one another. And, clearly, the psychological background for this view is plausible – we do accept inconsistent norms and need a way of expressing them.
    So, nothing in the logic of Gibbard’s semantics rules out moral dilemmas. On practical grounds, Gibbard might think that inconsistent norms or plans are deficient in some way in failing to guide our actions. After all, finding such guidance is supposed to be the aim of our normative talk and thought. But isn’t the state of not finding guidance our predicament anyway if we accept unsolvable moral dilemmas?

  24. Let me raise the same issue as others have been discussing, in a different way. In THTL, Gibbard employs a device he calls the ‘argument from hyperstates’ in order to give ‘transcendental’ arguments for strong supervenience and for the thesis that moral terms pick out natural properties. Absent major complications, these arguments appear to commit him to denying that there are any situations in which someone ought to do each of two incompatible things.
    The way that the arguments from hyperstates work is to show that some thought is shared by all consistent, fully decided thinkers – hyperplanners. If every consistent, fully decided thinker shares this view, then intuitively there is no way that you can come to a completely decided view without accepting it, so everyone is committed to it. In fact, on Gibbard’s view, everyone already has these thoughts (unless they have inconsistent thoughts, in which case everything goes out the window). For on Gibbard’s view, your total state of mind is represented by the set of hyperplanners with whom you are consistent. Whatever they think, you do. So if all hyperplanners think P, then ipso facto all the ones you are consistent with do. So you do, too.
    How does Gibbard argue that moral terms pick out natural properties? Well, he argues that for any given hyperplanner, and any given situation, that hyperplanner plans to do one and only one thing in that situation. So she plans to do A1 in C1, say. And the same thing for every other situation. When you make the complete list of these, A1-in-C1-or-A2-in-C2-or-A3-in-C3-or… and so on, for a given hyperplanner, you have specified a natural property that that hyperplanner thinks is necessarily coextensive with being the thing to do. So that hyperplanner thinks that is the natural property picked out by ‘the thing to do’. And hence, that hyperplanner thinks there is some natural property picked out by ‘the thing to do’ (namely, that one). Since that works for every hyperplanner, every hyperplanner thinks ‘the thing to do’ picks out a natural property, and hence you do, too.
    Now that argument only works if there is exactly one thing that each hyperplanner thinks is the thing to do in each situation. So each hyperplanner must apparently think, for each situation, that there is only one thing to do in it. So by the argument from hyperstates, everyone is committed to that thesis, too. In fact, you already think it, unless you are inconsistent (in which case everything goes out the window, anyway). So that’s a transcendental argument from hyperstates for the nonexistence of moral dilemmas.
    Now this argument might have gone wrong when I took the step from every hyperplanner planning to do exactly one thing, to every hyperplanner thinking that there was exactly one thing to do. Maybe Hera only plans to do A1 in C1, but nevertheless thinks there are 2 distinct things to do in C1. In that case, she would have to, say, plan to do A1 in C1 and plan to do A2 in C1, but believe that A1 and A2 are distinct. She couldn’t believe of herself that she is a hyperplanner, then – and so she would have to believe of herself that she is not – but Gibbard never says that hyperplanners have to have true beliefs. Still, things start to get very weird for the formal framework, if hyperplanners can have false views about which courses of action are distinct from one another. So I think the point holds – very likely, the argument from hyperstates shows that there are no moral dilemmas.
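Schematically, the disjunctive property in the argument above can be displayed as follows (notation mine): a hyperplanner who plans A_i in each situation C_i thereby treats

```latex
x \text{ is the thing to do}
\;\leftrightarrow\;
\bigvee_i \bigl( \text{the situation is } C_i \,\wedge\, x = A_i \bigr)
```

as a necessary coextension, and the right-hand side, however disjunctive, specifies a natural property – provided each C_i is paired with exactly one A_i, which is just the no-dilemmas assumption at issue.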
    Mark van Roojen is also right about why Gibbard originally went in for this view, in order to explain inconsistency and thus try to solve the Frege-Geach problem. For those who haven’t looked closely at his paper, his footnote about the topic of this discussion thread is one of my favorites. But this is just to point out that there are convergent features of Gibbard’s view which appear to require the same conclusion.

  25. Mark vR, you write,
    to say that Bush will do what is wrong if it will gain him political advantage rules out those norm-world pairs in which some action is disapproved of by the norm member of the pair and yet the world is one in which that action is politically advantageous and yet not done by Bush. (A purely moral claim rules out all the pairs which couple sets of norms inconsistent with that claim with any world
    So, to evaluate moral claims we use these points, I guess, that are norm/world pairs, where N is just a set of norms and w is a world. Now under what semantic conditions is it correct or appropriate to make the claim ‘I ought to do A’? Is it just in case the claim is (in some sense) compatible with the norms in N (or compatible with the norms across some relevant set of worlds), or what? I am trying to track this in a way analogous to specifying truth-conditions for normative claims.

  26. I should also add that in the book manuscript that Mark mentioned, the way I show how to do expressivist logic is neutral not just on the question of whether moral dilemmas are possible, but on the question of whether they are semantically ruled out as impossible. There are ways of implementing my proposal for how expressivists should give the semantics for ‘wrong’ that yield either result.

  27. Hi, Mike.
    I think you should treat the ‘semantic correctness condition’ of uttering a sentence, in an expressivist semantics, as the condition that the speaker is in the mental state expressed by the sentence. In fact, as I’ve argued both in ‘Expression for Expressivists’ and in the book manuscript Mark vR mentioned, I think for a variety of reasons that that is how we should understand what expressivists _mean_ by ‘express’.
    What you don’t get, on a standard expressivist semantics, is something that corresponds to semantically-given truth conditions. What you get instead are assertability conditions – rules that say things like: ‘assert me only if you think grass is green’ or, ‘assert me only if you plan not to murder’ or, ‘assert me only if you are in a state of mind that is consistent only with hyperplanners who are not consistent with someone who plans to murder in C’ (this last is the state of mind expressed, for Gibbard, by ‘murdering is not the thing to do in C’). On an expressivist account, the compositional rules for complex sentences operate on these rules, yielding a rule of the same kind for the complex sentence.

  28. ‘assert me only if you are in a state of mind that is consistent only with hyperplanners who are not consistent with someone who plans to murder in C’
    That’s very helpful. One further question. My state of mind is “consistent with a hyperplanner . . . etc.” if my state of mind is, I think you say, expressed by “murdering is not the thing to do in C”. Does that state of mind correspond to the belief that murdering is not the thing to do in C? Anyway, under those conditions, is it assertable that “I ought not to murder in C”?

    Mike – that state of mind is supposed to be equivalent to thinking that murdering is not the thing to do in C (be careful with the word ‘belief’). In fact, it is all that Gibbard tells us about what it is to think that murder is not the thing to do in C. And being in that state is what makes ‘I ought not to murder in C’ assertable. Note that Gibbard doesn’t talk about ‘assertability’ – that’s my gloss.

  30. Mike,
    You write:
    Now under what semantic conditions is it correct or appropriate to make the claim ‘I ought to do A’? Is it just in case the claim is (in some sense) compatible with the norms in N (or compatible with the norms across some relevant set of worlds), or what?
    Mark S’s answers probably already make this clear. But to answer this in different words: the state of mind expressed by ‘I ought to A’ is represented by the set of those world/norm pairs in which the world represents A as having a property that the norms paired with that world say only required actions have. Or conversely, it rules out those norm/world pairs in which A is not represented by the world as having any property which the corresponding norms assign only to required actions. (Remember that norms are rules for classifying actions into required, merely permissible, and forbidden categories in virtue of their naturalistic features. Similarly, as Mark S’s discussion above highlights, hyperplans are formulated in naturalistic terms, and the analogous view holds there.)
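    If it helps to see the apparatus in miniature, here is a toy sketch as I understand it. The worlds, norms, and property names are all invented for illustration; this is my own construction, not Gibbard’s (or Mark vR’s) formalization:

```python
# Toy sketch of the world/norm-pair apparatus. A world fixes the
# naturalistic properties of each action; a norm classifies actions, by
# their properties, as 'required', 'permissible', or 'forbidden'.

worlds = {
    'w1': {'A': {'keeps_promise'}, 'B': set()},
    'w2': {'A': set(),             'B': {'keeps_promise'}},
}

def norm_n1(properties):
    # n1: promise-keeping acts are required; everything else permissible
    return 'required' if 'keeps_promise' in properties else 'permissible'

def norm_n2(properties):
    # n2: nothing is required
    return 'permissible'

norms = {'n1': norm_n1, 'n2': norm_n2}

def content(action):
    """The state of mind expressed by 'I ought to do <action>': the set
    of <world, norm> pairs in which the norm classes the action, given
    its properties in that world, as required."""
    return {(w, n) for w in worlds for n in norms
            if norms[n](worlds[w][action]) == 'required'}

# 'I ought to do A' keeps exactly the pairs where A comes out required:
assert content('A') == {('w1', 'n1')}
```

    All the other pairs – where A lacks any required-making property by the lights of the paired norm – are the ones the state of mind rules out.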
    Jussi,
    I agree that the theory does not say that people cannot have a belief in moral dilemmas. But it does entail that there can be no moral dilemmas. Just as the theory does not say that people cannot have inconsistent beliefs, though it also says that inconsistent beliefs are inconsistent.

  31. Mark,
    Sorry, I’m being a bit thick. But which part of the theory implies that there cannot be moral dilemmas? It can’t be the part according to which the considered planned worlds are constituted by consistent plans for them. It would be really odd if expressivism, as a view in moral semantics, had that normative consequence. I can see how the complicated argument Mark S gives might imply that, but I wonder about its premise that we already are consistent hyperplanners.

  32. Jussi,
    I agree that the constraint that hyperplans/norms be “consistent” is controversial. But the very same features of the apparatus that rule out accepting arguments with intuitively contradictory conclusions also rule out such hyperplans.
    The way I put it earlier – that there can be no dilemmas – is perhaps not the right way to put it, since that sounds like a descriptive claim. What I should have said is that the state of mind expressed by sentences which assert that a person does the wrong thing no matter what they do has the very same sort of incoherence (according to the apparatus) as the state of mind expressed by ‘If A is wrong, B is wrong; A is wrong and B is not wrong’.
    The big picture is that Gibbard uses incoherence between the mental states that represent various claims to generate a logic for moral claims that tracks the intuitively correct story about their logical relations. This very same sort of incoherence is displayed by the mental states of someone who believes that no matter what one does one does something wrong.

  33. Jussi –
    My argument didn’t turn on the assumption that we already are hyperplanners. It was exactly the same kind of argument that Gibbard gives repeatedly in THTL. So Gibbard is uncontroversially committed to thinking that it is a good argument (minus the step I admitted was slightly controversial). And it works, if you grant Gibbard’s explicit views (none of which is that we are all hyperplanners).
    Think about it this way: it’s slightly misleading to say that Gibbard’s approach generalizes on possible world semantics, but what it does do is incorporate all of the unintuitive results of possible world semantics. On his view, if you believe p and p entails q (even if you don’t know this), then you believe q. What the argument from hyperstates seeks to show is that some view is entailed by any view at all. That’s why, if it works, it shows that we all believe it. Similarly, because of the possible worlds framework, Gibbard’s explanation of the validity of arguments effectively works by showing that people already accept the conclusions of any valid argument whose premises they accept. You might not think these are intuitive results – but that doesn’t show Gibbard isn’t committed to them. It may instead be a motivation to try to do better. Jamie Dreier’s ‘Transforming Expressivism’ paper is an intriguing idea in this direction, and the suggestion I have for how to do expressivist semantics is another account that builds in the right kind of structure, so that we’re not in a framework with the foibles of equating beliefs with sets of possible worlds.

  34. Mark, in your view, is Gibbard’s commitment to closure under entailment explicit — that is, does he come out explicitly somewhere defending this — or do you think it’s just that his overall theory in fact commits him to it?

  35. Thanks, Marks – that’s very helpful of both of you. I do have another question, though, maybe more for Mark vR. Do you think that cognitivists are any better off in that respect? Some people put moral dilemmas this way: it is true that you ought, overall, to A, and it is true that you ought, overall, not to A. If both of these conjuncts express beliefs, then don’t you have as contradictory beliefs as you can have under any failure of logical reasoning? Of course there are dialetheists who are not worried by this. But why should the non-cognitivists be more worried about whether or not we have contradictory planning states?
    Mark S,
    I can see the worry now. I wonder if Gibbard could make use of the ‘tying yourself to a tree’ metaphor again. Couldn’t he try to say that in accepting the premises you commit yourself to the conclusion given by a certain set of planned worlds, but that this commitment is different from de facto believing the conclusion, i.e., from having adopted the plan which the conclusion expresses?

  36. Hi, Rob.
    For Gibbard, to think that A is to be in the state of mind expressed by ‘A’, which is to be such that your total state of mind disagrees with any hyperplanner who is not in a certain set, SA (intuitively, who does not think that A – i.e., who thinks that ~A). And similarly for B, and set SB.
    But then guess what? Suppose that ‘A’ entails ‘B’. Then every hyperplanner who thinks that A must think that B, so SA is included in SB – equivalently, every hyperplanner who doesn’t think B (who is outside SB) also doesn’t think A (is outside SA). So anyone whose total state of mind disagrees with every hyperplanner outside SA also disagrees with every hyperplanner outside SB – i.e., anyone who thinks A thinks B. Similarly, necessarily equivalent thoughts turn out to be the same thought, and so on, for each usual consequence of the possible worlds framework.
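    For what it’s worth, the set-theoretic step can be checked in a toy model (my own sketch, with made-up sets SA and SB; nothing here is Gibbard’s own formalism):

```python
# Toy model of the closure-under-entailment step. Hyperplanners are the
# maximally opinionated points 0..7; a claim is modeled by the set of
# hyperplanners who think it; a total state of mind is modeled by the
# set of hyperplanners it does NOT disagree with.
from itertools import combinations

HYPERPLANNERS = set(range(8))

def thinks(state, claim):
    # A state thinks a claim iff it disagrees with every hyperplanner
    # outside the claim's set, i.e. the state is contained in that set.
    return state <= claim

def entails(p, q):
    # 'A' entails 'B' iff every hyperplanner who thinks A thinks B.
    return p <= q

S_A = {0, 1, 2}          # hyperplanners who think A
S_B = {0, 1, 2, 3, 4}    # hyperplanners who think B: so A entails B
assert entails(S_A, S_B)

# Every total state of mind that thinks A automatically thinks B:
all_states = [set(c) for r in range(9) for c in combinations(HYPERPLANNERS, r)]
assert all(thinks(s, S_B) for s in all_states if thinks(s, S_A))
```

    Since S_A is contained in S_B, any state contained in S_A is contained in S_B – which is just the closure result restated in sets.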
    So this is just a totally straightforward consequence of the framework. Like many straightforward consequences of his framework, he doesn’t discuss it explicitly – he focuses on discussing what is intuitive about his view, not on what its costs are.
    You might think that it is just a model, and to be improved on. Kaplan used sets of possible worlds as a model for propositions when introducing characters, but only in order to avoid unnecessary complications. Similarly, you might think that Gibbard’s appeal to possible worlds is just a model, to be further refined – for example, along the lines of Dreier’s suggestion in ‘Transforming Expressivism’, which is about exactly this issue. Or you might think that Gibbard simply means to pass the buck for solving these sorts of problems to Stalnaker and Lewis. He’s not explicit, either way.
    Jussi – the ‘tying to a tree’ metaphor has not, I think, ever been very helpful for anyone. But its point is to explain how someone can be committed to a disjunction without being committed to either disjunct. Things that are accepted by every hyperplanner your state of mind does not disagree with are like things that are part of both disjuncts. You might believe ‘(A&B)v(A&C)’ without believing ‘A&B’ or believing ‘A&C’, but not without being committed to ‘A’.
    Also, to answer your other question: the whole point of deontic logics that allow for moral dilemmas is precisely that they _don’t_ make it turn out that there is any contradiction in belief in thinking that there can be moral dilemmas. They deny that ‘ought not’ entails ‘not ought’, and there is a contradiction only between the latter and ‘ought’.

  37. Thanks, Mark. If that’s true, though, I wonder whether things are as bad for the expressivist after all. Here’s what the other Mark wrote:
    ‘What I should have said is that the state of mind expressed by sentences which assert that a person does the wrong thing no matter what they do has the very same sort of incoherence (according to the apparatus) as the state of mind expressed by ‘If A is wrong, B is wrong; A is wrong and B is not wrong’.’
    Now, in the light of what you say, I wonder if this is true. Say that the expressivist distinguishes between ought(not-A) and not-ought(A), and accepts the claim that the former does not imply the latter. The former expresses a plan to do something other than A, whereas the latter expresses a lack of a plan to do A; and the former plan does not imply the latter lack of a plan. (I know that negation is tricky for the expressivist and that there are many complications to this, as Dreier would show.)
    Now the dilemma Ought(not-A) & Ought(A) would express a plan to do something other than A and a plan to do A. This means that, although there won’t be a single planned world where both plans are carried out, there is no contradictory attitude of both planning to do something and planning not to do it. Thus, it looks like the expressivist too can avoid strictly contradictory attitudes.
    So there seems to be a difference, even for the expressivist, between this case and the failure to follow modus ponens, where the premises tie you to a plan to do something and the conclusion to not doing it. I’m probably missing something again, though.

  38. Jussi –
    This discussion thread was not about whether expressivists can allow for the existence of moral dilemmas; it was about whether _Gibbard’s_ view does. Gibbard’s view does not, in fact, allow for the existence of moral dilemmas.
    Can another view do so? The answer is not trivial, because it is not trivial that expressivism can be made to work, and hence not trivial that expressivists can earn the right to say anything that cognitivists can say. This nontriviality is not mitigated by Simon Blackburn’s insistence on describing the ambitions of expressivism as if they were accomplishments.
    But in fact, the answer does turn out to be ‘Yes’. As I noted before, I show how to do so in chapters 3-5 of _Being For_, the book manuscript that Mark vR mentioned, a draft of which is on my web page. My approach also solves a wide range of other problems for Gibbard’s view and for Dreier’s amendment of it. It is only a draft, and some important material, particularly in later chapters, is in the process of changing, but I stand by the material in those chapters. I think they are the only promising way of developing an expressivist view.

  39. Jussi, you wrote:
    I do have another question, though, maybe more for Mark vR. Do you think that cognitivists are any better off in that respect? Some people put moral dilemmas this way: it is true that you ought, overall, to A, and it is true that you ought, overall, not to A. If both of these conjuncts express beliefs, then don’t you have as contradictory beliefs as you can have under any failure of logical reasoning? Of course there are dialetheists who are not worried by this. But why should the non-cognitivists be more worried about whether or not we have contradictory planning states?
    Mark S’s comments may already have answered this, but since you’re asking me directly, I’ll give an answer too.
    Intuitively, ‘Doing X is wrong’ is contradicted by ‘Doing X is not wrong’. Every theory should say that. It is not as obvious that every theory should find a contradiction between ‘Doing X is wrong’ and ‘Doing not-X is wrong’.
    In order to explain how the former pair is contradictory, consistently with a non-cognitivist semantics for moral terms, Gibbard introduces the world/norm pair apparatus and suggests that the state of mind involved in accepting a claim is the state of “ruling out” certain sets of world/norm pairs. He then explains the logical inconsistency of the sentences as a function of a certain sort of inconsistency in the states of mind they express. And states of mind have this sort of inconsistency if, between them, they rule out all of the world/norm pairs.
    Given all of that (and the constraints he puts on “consistent” maximal norms), the sort of inconsistency there is in accepting that doing A is wrong and that doing not-A is wrong is the very same sort of inconsistency as is used to explain why ‘Doing A is wrong’ and ‘Doing A is not wrong’ are inconsistent. So the explanation he gives of the former sort of inconsistency commits him to finding the latter inconsistent as well. Cognitivists don’t need to give this sort of explanation, because on their view ‘wrong’ functions like any other predicate, and ‘X is F’ and ‘X is not F’ are inconsistent as a matter of ordinary predicate logic.
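    The “rule out all of the pairs” test can be pictured with a minimal sketch (the worlds and norms are invented; this is a schematic illustration of the idea, not Gibbard’s own apparatus):

```python
# Each state of mind is modeled by the set of world/norm pairs it rules
# out; states are jointly inconsistent, in the Gibbard-style sense, when
# between them they rule out ALL of the pairs.

pairs = {(w, n) for w in ('w1', 'w2') for n in ('n1', 'n2')}

# Suppose norm n1 forbids doing A in both worlds, and n2 in neither.
forbids_A = {(w, 'n1') for w in ('w1', 'w2')}

ruled_out_by_wrong = pairs - forbids_A      # 'Doing A is wrong'
ruled_out_by_not_wrong = forbids_A          # 'Doing A is not wrong'

def jointly_inconsistent(*states):
    # True when the states together rule out every world/norm pair.
    return set().union(*states) == pairs

assert jointly_inconsistent(ruled_out_by_wrong, ruled_out_by_not_wrong)
assert not jointly_inconsistent(ruled_out_by_wrong)
```

    Accepting both claims leaves no world/norm pair standing, which is exactly the sort of inconsistency the apparatus uses to explain logical contradiction.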
    As I think I wrote upthread, this problem (if it is a problem) is generated by the particular apparatus Gibbard uses, and there might be other non-cognitivist treatments of the issue – even treatments in the general spirit of Gibbard’s proposals here – that avoid the problem. In fact, one reason I mentioned Mark S’s manuscript was that the view Mark describes there is broadly in the spirit of Gibbard’s project and yet does not have to rule out moral dilemmas.
