As a first pass, we may think of Consequentialist moral theories as those that specify the right in terms of the good.  But these terms occlude some important structure that can be brought out by further analysis.  In particular, I take it that to say what's good is to say what we have reason to desire, whereas to ask about what's right is to ask about what we have reason to do.

I'm interested in how our understanding of different variants of consequentialism may be advanced by reformulating them in terms of reasons.  I think we obtain two especially illuminating results if we discipline our normative theorizing in this way.  First, we find that Global Consequentialism (GC) is arguably just a terminological variant that fails to go beyond Act Consequentialism (AC) in any substantive respect.  Second, we gain some insights into the structure of Rule Consequentialism.

(1) Here's the argument for deflating Global Consequentialism.  Consequentialists begin with an axiology which specifies all the evaluative facts, i.e. what outcomes are good or desirable.  The substantive work for a consequentialist theory is to use these evaluative facts to ground some further normative claims.  AC does this work: it uses the basic consequentialist axiology to develop new claims about what reasons for action we have.  The challenge for GC is to clarify what further claims it makes beyond this.

The GCist may suggest that his theory allows us to make new 'ought' claims about any kind of thing, not just acts.  Let's take eye colours as a random example.  Just as we ought to perform the act (of those available) that would lead to the best outcome, so — the GCist might claim — we can now say that we ought to possess the eye colour (of those available) that would lead to the best outcome.  But we may question whether the GCist is really making a new claim here, or whether he is just repeating an old (evaluative) claim using new words.

The crucial point of disanalogy is that while we can act for reasons (which allows us to make substantive claims about how we ought to act that are not just disguised evaluative claims), we cannot possess eye colours for (normative) reasons.  Having blue eyes is not the exercise of a rational capacity, the way that acting is.  So there doesn't seem to be anything else for a consequentialist to say here, beyond the fact already implied by our axiology, that having a certain eye colour may be desirable, or the fact implied by AC, that we thereby have reason to bring this about if we can.

That's the gist of my argument.  I develop it in more detail in a short paper viewable online here. (Any feedback would be much appreciated!)

(2) Towards the end of the paper, I discuss the following puzzle regarding Rule Consequentialism (RC).  RC claims that we ought to act in accordance with the best rules, even if so acting is not itself best.  This seems difficult to make sense of if the axiology exhaustively specifies our reasons for desire, for RC would then seem to imply that we ought to hope that we act differently from how we ought to act.  Such a disconnect between rational preference and rational action does not seem especially coherent.  Yet Rule Consequentialism is surely a coherent (if mistaken) view.  What has gone wrong?

Most naturally, when RC prohibits the so-called "best act" in this way, the prohibited act is not really desirable all things considered, but only antecedently desirable, i.e. desirable before we consider the distinctive reasons for desire that derive from an act's deontic status as morally right or wrong.  In this sense, the Rule Consequentialist's initial axiology is inconclusive or incomplete.  It accounts for only some of our reasons for desire: agent-neutral welfarist reasons, perhaps.  But these reasons for desire are not decisive.  Let's unpack how this might work.

Rule Consequentialists first identify the rules that are best in terms of impartial welfare (or what's antecedently desirable), and then specify that we have decisive reasons to act in accordance with these rules.  Finally, they might add, we have overriding reasons to prefer that we so act.  This way, a prohibited act may be "best" according to the antecedent (agent-neutral welfarist) reasons for desire, and yet be bad (undesirable) all things considered.  This avoids the incoherence mentioned above.  But it also brings out how convoluted the view is.  It is recognizably Consequentialist in the sense that it takes (some) reasons for desire as fundamental, and subsequently derives an account of reasons for action.  But then it goes back and "fills in" further reasons for desire — trumping the original axiology — to make sure that they fit the account of right action.  In this sense it exhibits a deontological streak: reasons for action are at least partly prior to reasons for desire.  In other words, the initial axiology includes only some values (the 'pre-moral', agent-neutral welfarist ones), and what's right serves to determine the remaining ('post-moral', all things considered) good.
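Schematically, and only as a rough sketch (the symbols $v_0$, $o(\cdot)$, and $R^*$ are illustrative shorthand introduced just here, not anyone's official notation): let $v_0$ measure antecedent (agent-neutral welfarist) desirability, and let $o(\cdot)$ map a code of rules or an act to its outcome.  The structure just described is then:

\[ R^* = \arg\max_R \, v_0(o(R)) \qquad \text{(select the antecedently best code of rules)} \]
\[ \text{an act } a \text{ is right} \iff a \text{ accords with } R^* \]
\[ a \text{ is desirable all things considered} \iff a \text{ is right, even where } v_0(o(a)) < v_0(o(a')) \text{ for some available } a' \]

The first line is the antecedent axiology doing its consequentialist work; the last line is the 'filled-in' reasons for desire trumping it.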

Does that sound right?

19 Replies to “Analyzing Act, Rule, and Global Consequentialism”

  1. Richard,
    that’s interesting. This is actually something that Brad Hooker is sensitive to, even if, if I remember this right, the discussion in Ideal Code, Real World is not in terms of reasons. But here’s the crux of the position off the top of my head.
    The idea is that first we evaluate the outcomes of rules in terms of their value. I think here well-being (and fairness) are the good things. What is of value is assessed here on the basis of trying to achieve a reflective equilibrium. I don’t think reasons are mentioned at this point, but he could say that there would be most agent-neutral reason to bring the ideal outcome about (perhaps for an impartial spectator).
    The next step is what one has reason to do when one could bring about more well-being by violating the rules. At this point, Brad says that well-being is not the fundamental point of following the rules but rather being able to justify one’s actions to others. So, this must mean that there are justification-based moral reasons that outweigh the well-being based, agent-neutral reasons. These could be reasons that come into existence only after the selection process, for two reasons. First, these reasons are agent-relative reasons and plausibly the rules must be agent-neutral. Second, you might think that justification must be based on some sort of commonly available rules, so you have to select the rules before you get reasons based on justification.
    So, anyway, I think Brad’s theory does have a structure like you suggest. On the other hand, it doesn’t seem immediately objectionable.

  2. “while we can act for reasons (which allows us to make substantive claims about how we ought to act that are not just disguised evaluative claims), we cannot possess eye colours for (normative) reasons”
    Are you here relying on the premise that all normative reasons have to be able to feature as motivating reasons? This is supposedly subject to counter-examples in the form of surprise parties and so on.

  3. Hi Richard,
    Very interesting post; a few thoughts:
    1) Your argument for deflating GC shows, I think, that GCists must structure their view in a way that only generates ought-claims (in addition to those generated by standard AC) that agents can, in some relevant way, respond to for reasons. If GC is to be spelled out in a way that doesn’t just amount to AC plus disguised restatements of axiological claims, then it must not imply that you ought to have such-and-such color eyes. The scope of “global” in GC must be limited to reflect this constraint.
    2) One possible motivation for moving to something like GC consists in two key thoughts. The first is that while it’s true that acts are appropriately assessed by consequentialist standards, there are also things besides acts that can be appropriately assessed by the same standards. Two candidates that immediately come to mind are rules and dispositions. The second thought is that when the assessment of an agent’s choice in act terms conflicts with the assessment of that same choice in some other dimension (rule, disposition, etc.), none of the assessments necessarily takes priority over the others and determines what the agent ought, all things considered, to do (this thought can perhaps be motivated by thinking about Railton’s case of Juan’s long-distance marriage – as an ACist, Railton must say that it’s impermissible for Juan to visit his wife rather than donating the airfare to Oxfam, despite the fact that he can give a kind of consequentialist justification for his doing so, namely that it reflects a disposition the lack of which would make him a worse agent, in consequentialist terms, overall; GCists can, it seems, avoid this conclusion).
    3) I don’t find these thoughts very helpful, and so I don’t think there’s much of a case for GC here. First, the thought that no particular assessment has priority, and that therefore there may be no single fact of the matter about what an agent ought to do, just strikes me as unacceptable (I wonder if a GC view could avoid this implication). It seems to me that in the absence of appeals to the sort of considerations that Hooker appeals to for prioritizing assessments in rule terms (which GCists also want to avoid), the standard case for prioritizing assessments in act terms applies – the rule based assessment says that, in general, following such-and-such set of rules will generate the most good, but in the cases in which this is not true, the given axiology provides no reason to stick to the rules and compelling reason to break them. With regard to dispositions, the claim that one ought to have such-and-such a disposition seems to either be a disguised evaluative claim (i.e. that it would be better if one had it) or an act-assessment or set of act-assessments (i.e. one ought to act, in a particular instance or over a period of time, so as to develop it). Either way, there is nothing here that supports GC over AC.
    4) A very rough stab at an alternative motivation for something like GC: perhaps cases involving collective or institutional action, discussed by Parfit and others, in which what it would be best for the group or institution as a whole to do would not result from each individual involved doing what it would be best for him/her to do are best understood by appeal to something like GC (of course if what I say above is right then the view can’t include some of what standard GC includes, but it could still include distinct and sometimes conflicting consequentialist assessments none of which necessarily take priority over the others). On such a view, we can assess what it would be best for the group/institution to do, and assess what it would be best for each individual involved to do, and when these come apart there is no priority relation among the distinct assessments. Again, I doubt that this thought can provide a compelling basis for a GC-like view, and I’m inclined to think that consequentialists should attempt to understand the cases in AC terms.

  4. Hi Richard,
    Very interesting post. I raise the same objection against RC regarding its disconnect between rational preference and rational action in my “Consequentializing Moral Theories.” I hadn’t thought of your suggested reply though. But I wonder whether RC really is rule-consequentialist if it selects the rules on the basis of an incomplete axiology. After all, would we think that a theory is act-consequentialist if it held both (1) that an act is permissible iff it maximizes pleasure and (2) that pleasure is not the only good? Such a theory would imply that it is sometimes wrong to bring about what, by its own lights, is the best outcome. Such an implication is, as I see it, antithetical to act-consequentialism. And if that’s right, then I think that we should also question whether RC, as you conceive of it, is really a version of rule-consequentialism after all. It would be, as Frances Howard-Snyder puts it, a rubber duck — in the sense that a rubber duck is not a kind of duck at all. Of course, I realize that this is part of your point: that RC has a strong deontological streak. But would you go further and deny that it is truly rule-consequentialist?
    Also, you note that the idea that we ought to hope that we act differently from how we ought to act is not “especially coherent.” I agree. In Chapter 3 of the book that I’m working on, I appeal to the same idea in defending the teleological conception of reasons. It’s not easy, though, to spell out what exactly the incoherence amounts to. Do you have any further thoughts? I’ve been surprised to find that some readers are willing to accept this as perfectly coherent.

  5. Richard,
    As you know, I disagree with your conclusions, and I hope I can helpfully point out where the disagreement starts:
    I take it that to say what’s good is to say what we have reason to desire, whereas to ask about what’s right is to ask about what we have reason to do.
    I think this goes wrong on two counts.
    Firstly, I don’t see how defining ‘good’ in this way does justice to what most consequentialists mean by the term. If a demon will cause unending torment unless we all desire mild pain, then we have reason to desire mild pain and by your definition mild pain is good, even though intuitively it is just that the desiring of pain has become instrumentally good, not that the pain itself has become intrinsically good.
    Secondly, if you define ‘right’ as what we have reason to do, then strong versions of global consequentialism (those which apply the term ‘right’ to all evaluands) are obviously false and you don’t even need to continue on to your explanation. This just appears to beg the question against global consequentialists.

  6. Quick follow-up to Toby’s comment: it doesn’t seem enough to define what’s right (if we mean what’s morally right) as being what we have reason to do. We can have non-moral reasons to do something, but no moral reasons to do it. So, to make these reasons turn out moral we either, it seems, have to add that they have certain special contents or that these are also reasons to feel guilty if we don’t do these things and reasons for others to resent us if we don’t, etc. Did you by “right” mean just something very general like what we ought all things considered to do or something like that? If so, different claims may apply than if you have in mind specifically moral reasons. (Sorry if this was already cleared up above and I missed it when I quickly skimmed the discussion so far.)

  7. Lots of helpful comments here! (It’ll take me a couple of goes to get through them all.)
    Jussi – Hooker seemed non-committal the last time I asked him about how to formulate his view in terms of reasons for desire. But you’re quite right that the central role he gives to “being able to justify one’s actions to others” would seem to naturally support a reading along these lines. My worries about the convoluted structure stem from my having a more straightforwardly consequentialist conception of the “fundamental point” of acting. But insofar as rule consequentialism really stems from more deontological motivations, this “in house” objection will lose its force. He can just bite the bullet and admit that the view won’t appeal to traditional consequentialists.
    Doug – I agree that there is an important sense in which RC is not fully consequentialist. But then there is the following sense in which it partly is: all reasons for action ultimately derive from some class of reasons for desire. In this sense ‘the right’ derives from ‘the good’. (In my above linked post I even suggest a way for RCists to modify the fitting attitudes analysis so that this restricted class of reasons for desire is taken to be exhaustive of ‘the good’. This verbal variation makes no substantive difference, of course, but it does serve to highlight the sense in which the view is recognizably consequentialist.) So if we think (as I do) that the difference between consequentialism and non-consequentialism concerns the relative priority of reasons for action and desire (i.e. the right and the good) then what we learn from RC is just that there are in-between cases.
    On the incoherence worry: is this a matter of instrumental rationality? We may at least expect there to be principles of means-ends coherence that an agent violates when they act in ways that they wish they wouldn’t (or fail to act as they wish they would). Though that may not be the heart of it: the central incoherence here seems more direct and immediate than that…
    Alex – a weaker claim would suffice. Even if there are some unfollowable reasons for action (and I’m not saying there are), we may think that the fact that there are reasons for action at all depends on the fact that at least some of the reasons in this class are followable. Acting generally involves the exercise of rational capacities, and that’s why it’s the sort of thing for which there can be normative reasons: considerations that properly get a grip on those rational capacities. None of this seems to carry over to eye colours and the like. If someone wants to propose that there are such things as “reasons for eye colours” (that aren’t just disguised claims about reasons for desiring or acting to bring about certain outcomes involving eye colours) then I would need to hear more about what they meant by this!

  8. Sven – For my purposes it suffices that questions of rightness are somehow settled by the facts about what we have reason to do. While I’m most interested in the ‘all things considered ought’ myself, I’d expect that any more restricted notion of ‘morally right action’ will also be analyzable in terms of (moral and non-moral) reasons for action. I don’t mean to provide that analysis here: you can fill in the details as you please. I’m only worried if you don’t think that any analysis in these terms will work.
    Toby – good to hear from you again! Second things first: I only meant to claim that right action is analyzable in terms of reasons for action. I actually think that we can apply ‘right’ to other reasons-responsive states, e.g. beliefs. But note that talk of right or correct belief has implications for what we have (objective) reason to believe. So I’m certainly open to talking about other kinds of things as ‘right’ or correct. You simply need to clarify what kind of normative claim you mean to make in doing so. That’s the challenge.
    On the wrong kinds of reasons: I don’t think the demon scenario makes pain desirable. What it gives us reason (or makes it fitting/correct) to desire is instead the following: that we desire pain. But to have reason to desire that we desire pain is not thereby to have reason to desire pain. (Alternatively, if you insist that there are such ‘state-given’ reasons in addition to the ‘object-given’ ones that I countenance, simply read my talk of “reasons” as more specifically referencing ‘object-given’ or fitting reasons. See also my previous discussion of Fittingness and Fortunateness.)
    Brian – interesting thoughts. I largely agree with your assessments. One thing to note: on standard formulations of GC, there is a definite fact about what the agent ought to do, namely whatever would be best. It’s just that there are also (possibly competing) obligations concerning how one ought to be. Personally, I’m not sure what’s gained by calling the latter ‘obligations’. Better to just stick with evaluative talk in describing such situations, it seems to me.
    On your second point: it’s an interesting question whether we can assess rules and dispositions for their rational ‘fittingness’, and not just their ‘fortunateness’. This does seem plausible in a sense: we can talk of ‘rational dispositions’, and describe circumstances in which it would be more fortunate to instead have irrational or ‘unfitting’ dispositions. But these normative assessments seem importantly derivative. A rational disposition is, I take it, just one that disposes you to act better, or something along those lines. But dispositions may be fortunate for reasons other than their manifestation in action (think Mutually Assured Destruction): hence fittingness and fortunateness may come apart. But that’s just to say that a direct account of (normative) fittingness in terms of (evaluative) fortunateness, as offered by GC, fails in these cases.

  9. Richard,
    Thanks for the clarification regarding your claim about rightness as ‘reason to do’. In that case, our dispute really is whether rightness is best understood in terms of reasons, and if so, whether the appropriate sense of reasons is one in which we can have reasons to have a certain eye colour, reasons to have a certain character, reasons to love etc. I personally wouldn’t understand rightness in terms of reasons, but even if I did, I think that people talk about many more types of reasons than you allow for. GC lets us take these at face value, whereas you have to describe them as disguised reason claims of a certain restricted set of types. Maybe that is the best way to go, but it doesn’t strike me as a strong argument when GC more accurately reflects the surface language, is a simpler theory than AC and is more expressive in important ways (such as its assessment of character, principles, decision procedures, institutions etc).
    Regarding goodness as reason to desire, I’m glad to see that you don’t think pain would be good in the world I describe, but I can’t see how you can avoid it… If someone will kill me unless I sit down, then I have reason to sit down, right? So it seems to me that if someone will kill me unless I desire pain, then I have reason to desire pain. I don’t understand how (or why) you treat these cases differently. Or are you saying that in the first case I just have reason to desire to sit down?

  10. Toby – your “reason to sit down” is, I take it, a reason for action, where the action in question is that of sitting down. You likewise have reasons for acting in the demon case: in particular, you have reason to act so as to bring it about that you desire pain. But that is not a ‘reason for desire’. Desiring pain is straightforwardly irrational. It’s just that in this case it is rational for you to bring it about that you acquire this irrational state.
    This is just like the standard story about so-called “practical reasons for belief”. Really they aren’t reasons for belief. They’re just reasons for desiring or acting to bring it about that you acquire a (perhaps irrational) belief. Or think of the toxin puzzle. You can’t rationally intend to drink the toxin. But it might be rational to manipulate yourself into acquiring such an irrational intention, if you are able to do so. That is just to say that you have certain reasons for action — for acting so as to bring about another mental state. It doesn’t follow that the latter mental state is itself rational or directly supported by reasons.
    If you find all this ‘reasons’ talk obscure, you can rephrase all of my claims in terms of objective rationality. That might make them more intuitive.
    P.S. How is GC “more expressive in… its assessment of character, principles, decision procedures, institutions etc”? All you need to assess such things is an axiology, which again any consequentialist (including AC) has. You can make the same evaluations using different words — using “right” to mean “best”, or whatever. But to be “more expressive” in any interesting sense you need to be making some substantive new claims, right? Otherwise I could claim to increase the expressivity of my theory by translating it into French.

  11. Richard,
    I am very confused by your explanation of why you treat the reasons for acting case so differently to the reasons for desiring case. One thing is that you changed the terminology from ‘reason to desire’ and ‘reason to do’ into ‘reason for desire’ and ‘reason for action’, which is a different part of speech and seems to confuse the issue (the same goes for the introduction of the term ‘rational’). My puzzle is why having a benefit from acting gives a reason to act (not a mere reason to desire that we act) whereas having a benefit from desiring doesn’t give a reason to desire, but just a reason to desire to desire. This is certainly an odd consequence of your theory and is where it comes apart from GC (and common sense, I think).
    I understand that one might be able to couch everything in terms of voluntary action and thus focus on the voluntary transitions between states rather than states. However, I don’t know why you would want a theory to do this, when it adds various complications. Of course, this is probably a big topic in the discussion of reasons (which I’m not familiar with) and I’m not certain that a long comment discussion between us is the best way to work it out. My main point is that I don’t think you have deflated GC. I think you have just shown that:
    * If you understand rightness and goodness in terms of a conception of reasons in which there is a strong asymmetry between reasons to act and other types of reasons, then GC is deflated.
    I am pretty happy to assent to this conditional, but one would need to do a lot of arguing for its premises.
    On the matter of GC being more expressive, I think this comes down to your understanding of an axiology. I understand it as a function from states of the world (including the entire future and maybe the past) to some kind of numbers, such that we can talk about different outcomes being better or worse than each other and maybe about degrees of betterness. To determine the goodness of an entire world, we often break it into the intrinsic values of some of its parts (such as happy people), but it is not always fully separable (for example, distributional effects might get in the way). I am not aware of anyone who disagrees with this conception of an axiology.
    Axiology in this sense has no intrinsic connection to consequentialism — it is a study of the goodness of states of the world, but doesn’t imply that we should assess (the instrumental value of) actions in terms of the state of the world which is the outcome of that act, or that we should assess (the instrumental value of) motives in terms of the state of the world which is the outcome of having that motive. To do so is to add something other than an axiology. AC is often taken as an axiology plus a connection to rightness via outcomes of acts such as:
    (1) an act is right iff it leads to the best outcome
    If so, then AC does not yet assess the instrumental value of motives. If you also add something like:
    (2) an X is best iff it leads to the best outcome
    then we can also evaluate motives, rules, dispositions, etc., and have a form of GC which has rightness for acts only and betterness for all evaluands (we could call this semi-normative GC). You may think that all people who accept AC should accept (2) as well. I think this too, but I recognise that it is a further step, and one that can dramatically change how one views consequentialism in its relationship to virtue ethics and deontology. I also think that Mill, Bentham and Sidgwick accepted (2), so I know it is not new. However, in my dissertation I go to a lot of effort to show that it is quite difficult to spell out (2) in a way that works, but that it can be done and that what follows from it is very important to understanding consequentialism. If any pea soupers are interested, I can send them a copy (just search for my address on Google and email me).
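    In symbols, as a rough sketch (the function names $v$ and $o(\cdot)$ are shorthand I am introducing just here): an axiology in my sense is a function $v$ from world-states to numbers, with $w_1$ better than $w_2$ iff $v(w_1) > v(w_2)$. Writing $o(X)$ for the state of the world that results from evaluand $X$, the two claims become:
    \[ (1)\quad \text{an act } a \text{ is right} \iff v(o(a)) \ge v(o(a')) \text{ for every available act } a' \]
    \[ (2)\quad \text{an } X \text{ is best} \iff v(o(X)) \ge v(o(X')) \text{ for every available } X' \]
    So (2) applies the same outcome-based test to arbitrary evaluands, but issues verdicts of betterness rather than rightness.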

  12. Hi Toby, I wouldn’t exactly say there’s a “strong asymmetry” (except in a derivative sense) between reasons to act and other types of reasons. In either case, I take it that a reason to X is just the kind of thing that (all else equal) makes X fitting / warranted / objectively rational (i.e. in light of all the facts). The difference is not between the nature of the reasons, but between the nature of the things that they are reasons for.
    In particular, while for internal attitudes there is a distinction to draw between the object of the attitude and the state of possessing it (e.g. one can believe or desire that X, without X being the case), in the case of action this distinction collapses. This explains why gaps between ‘fittingness’ (object-given reasons) and ‘fortunateness’ (state-based evaluations) arise for mental states and not for actions.
    But again, if you remain confused by all this ‘reasons’ talk, forget it and just think about objective rationality or attitudinal ‘correctness’ instead. I trust that you wouldn’t have been so tempted to ask ‘why having a benefit from acting can rationalize action, whereas having a benefit from desiring p doesn’t rationalize desiring p, but merely desiring to desire p.’ I take this to be an obvious datum, not an “odd consequence”!
    So here are the rest of my claims thus translated:
    The scope of normative theorizing may be set by the following job description: to specify the formal ‘aim’ or standards of correctness for those attitudes and processes that may be more or less rational (e.g. belief, desire, action, intention, maybe various emotions, etc.). In this sense the scope of normative theorizing is given by moral psychology, since it’s the job of moral psychology to tell us just which elements of an agent’s psychology are ‘rationally evaluable’ (or reasons-responsive) in this way. I don’t mean to settle the details here, but I trust that certain things, e.g. eye colours, are definitely not on the list.
    Of course, we can evaluate these other things as more or less ‘good’/fortunate. But that is just to assess whether they are fitting objects of desire, such that it would be objectively rational to desire them. (It is this connection to rationally correct agency that gives such evaluations their normative significance: what would it matter to call something ‘good’ if that didn’t have any implications for how we ought to react to it?)
    Now, our axiology tells us what it’s correct to desire [modulo the complications I’ve described for Rule Consequentialists and deontologists]. (As you know, I’ve already argued that ‘outcome’ or world-evaluations entail local evaluations, so I don’t see your (2) as a “further step”. It’s like claiming that once we’ve determined the value of being a bachelor, it’s a “further step” to specify the value of being an unmarried man. There aren’t multiple options on the table here.)
    So all that remains for a complete normative theory is to specify the standards of correctness [i.e. fittingness, not just fortunateness] for other rationally evaluable processes or attitudes. AC does this for actions: it effectively claims that acts are objectively rational (or correct, or fitting) just in case, and because, they are also most fortunate.
    GC apparently wants to make similar claims as AC makes about acts, but to make such claims about everything. But once we understand what claim AC is making about acts, it makes no sense to extend it in this way. For some things, like eye colours, there’s just nothing more that could coherently be said. Once we’ve settled how fortunate they are, there’s no further question of fittingness. In other cases, like belief, GC can (formally) make a further claim, but it is substantively absurd. Fortunate beliefs (or emotions, etc.) are not thereby rationally fitting or correct.
    In sum:
    (I) AC, in virtue of having an axiology, has all the resources it needs to evaluate anything as more or less fortunate (fitting to desire). It makes a single further (normative, non-evaluative) claim, about which acts are fitting.
    (II) There’s no other further (normative, non-evaluative) claim for GC to make beyond this. Most evaluands (e.g. eye colours) are not apt for norms of correctness, and the remaining few that are (e.g. beliefs) are not plausibly governed by consequentialist norms (though of course it may sometimes be desirable to possess incorrect beliefs).
    It might help clarify the dispute if you could answer the following questions (corresponding to the two claims above; the latter being more important):
    (i) Do you deny that the world-evaluations offered by an axiology entail local evaluations of particular things as more or less fortunate? Or do you accept this and merely hold that this entailment doesn’t prevent the local evaluations from being “further claims” in some interesting sense?
    (ii) Do you think that GC makes further (non-evaluative) claims? If so, what are they claims about? Do they have implications for the rational ‘correctness’ of our responses to the world (and if so which ones?), or do you think that one can make normative claims that have no such implications for us?

  13. Richard,
    (i) Do you deny that the world-evaluations offered by an axiology entail local evaluations of particular things as more or less fortunate? Or do you accept this and merely hold that this entailment doesn’t prevent the local evaluations from being “further claims” in some interesting sense?
    I’m not sure which. However, I think that there are many people who might accept an axiology in the world-evaluating sense but would stop there. These people are non-consequentialists, or at least people who are not direct consequentialists (Brad Hooker takes this view for instance). They might agree that one act would lead to a better outcome, but not that it is the better act, or that a motive would lead to the better outcome, but not that it is the better motive. In the case of Hooker, he doesn’t think a moral theory assesses motives one way or the other.
    I’m really quite unsure that we have a substantive disagreement on this topic though, as you agree with the claims made by what I call semi-normative consequentialism; you just think that the evaluative parts are trivial (and thus that Hooker et al are trivially mistaken). This is something to take up with Hooker et al, not with global consequentialists, who are in fact the only people who explicitly agree with you on this point!
    (ii) Do you think that GC makes further (non-evaluative) claims? If so, what are they claims about? Do they have implications for the rational ‘correctness’ of our responses to the world (and if so which ones?), or do you think that one can make normative claims that have no such implications for us?
    The additional normative claims (rightness claims applied to all evaluands) are made by what I call normative GC (as opposed to semi-normative). Personally I’m pretty untroubled as to whether to accept normative or semi-normative GC. That said, I’m quite partial to scalar consequentialism anyway and thus also scalar GC (which is just the axiology plus the direct evaluations of all evaluands). I thus don’t know exactly what rightness claims are about, or whether they apply to acts or other evaluands, as I’m not very interested in them (which makes me not as good a sparring partner for you on this point as I could be!).
    I trust that you wouldn’t have been so tempted to ask ‘why having a benefit from acting can rationalize action, whereas having a benefit from desiring p doesn’t rationalize desiring p, but merely desiring to desire p.’ I take this to be an obvious datum, not an “odd consequence”!
    Actually, I would have asked this too. Boring though it may sound, I have no idea why you think this and I don’t find it obvious at all.

  14. Ok, thanks, that’s helpful. Insofar as my main concern here is (II), and you aren’t committed to disputing that, we may not disagree much in the end. But I’ll just say something quick about your other remarks.
    Firstly, I don’t think that “Hooker et al are trivially mistaken”, because I don’t interpret them as denying that world desirability entails local desirability. Rather, I take it that their ‘axiology’ is incomplete, and only indicates prima facie desirability, as explained in the second part of my original post. At the end of the day, they shouldn’t think that the world where you act wrongly is ultimately most desirable (or ‘best’) at all.
    (Of course, it’s possible that they wouldn’t accept the interpretation I’m offering of their view, in which case they might end up affirming the claim that now strikes me as ‘trivially mistaken’. That’d be interesting — I’d really need to hear what alternative story they offer to make their view coherent. But you’re right that that’s something for me to take up with them, not you.)
    Finally, concerning your puzzlement about how “having a benefit from desiring p doesn’t rationalize desiring p, but merely desiring to desire p” — didn’t the explanation in the second paragraph of my previous comment help?
    And to see that the claim is intuitively right, just consider a few cases. Isn’t it obvious that the inducement of the evil demon doesn’t make pain desirable, but rather only makes the state of your desiring pain desirable? It’s even more obvious in the case of belief: you can’t rationally believe that pain is good (say) just because it would be good for you to acquire this belief. (Right?) Notice that you can’t bring yourself to believe this, for example; and I’d suggest that the explanation of this inability is that you’re a (largely) rational agent, whose beliefs are responsive to the reasons for belief — of which there are none in this case. A parallel story explains our inability to non-instrumentally desire pain, or to intend to drink Kavka’s toxin, even when it would be fortunate to bring about these irrational states. You’ll have a hard time explaining all this if you don’t go along with my claims about the ‘object-based’ rational norms for these various attitudes.
    If my claims about these cases still don’t seem intuitive to you, what do you say about Kavka’s toxin puzzle? (Do you think that a rational agent — e.g. you — could directly form the intention to drink the toxin, or do you deny that this failure indicates anything about the irrationality of such an intention? Or do you accept all this for belief and intention, but just not for desire, for some reason?)

  15. OK, good, it looks like this conversation has been useful. So the final thread remaining is the ‘goodness as reason to desire’ thread, and I think I see some light in that tunnel.
    Isn’t it obvious that the inducement of the evil demon doesn’t make pain desirable, but rather only makes the state of your desiring pain desirable?
    I might be willing to grant that the demon’s threat doesn’t make pain desirable but I’m not sure where the term ‘desirable’ came from. I would deny that when it is good to desire X, X is thereby desirable (the same goes for when we have reason to desire X). In general I think that ‘desirable’ is a pretty flexible word which can mean several different things in different contexts and is best avoided.
    I would say similar things with ‘rational’. I’m not sure that ‘rational belief’ means the same thing as ‘belief that you have reason to possess’. Indeed, it seems that you are committed to some kind of substantive connection between reasons and rationality that I would probably deny.
    Perhaps it would be useful to say a bit on the topic of rationality. I think that ‘rational’ has two types of meaning within the type of theory of rationality that I support (i.e. Bayesian belief formation + decision theory). It is applied to belief formation in a sense that ignores consequences and is purely epistemic; then it is applied to outcomes of the reasoning process in a way that takes into account consequences (well, they don’t have to be consequences, but let’s just say that they are). Now these outcomes of the reasoning process are often called ‘actions’, but are perhaps better called ‘choices’ as they can be broader than what we commonly consider as actions. For example, Jeffrey in The Logic of Decision represents them as arbitrary propositions. If so, then it is possible to have beliefs as the output of decision theoretic reasoning (more accurately: the proposition that you hold the belief). If so, then there are two types of the label ‘rational’ that can be applied to the holding of beliefs – the Bayesian belief-formation one and the decision-theoretic one. (I’m not sure if Jeffrey notices this consequence of his theory.)
    In the case you give, they conflict, meaning that I can’t simply answer your questions. Ultimately, I think that it is not Bayesian-belief-formation-rational to believe what you don’t have evidence for, but it might be decision-theoretically rational to do so if the outcome is expected to be good (assuming it is a belief you can voluntarily form).
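    To put toy numbers on the conflict (the figures are purely illustrative): suppose the evidence gives
    \[ \Pr(p \mid E) = 0.1, \]
    so in the belief-formation sense it is not rational to believe $p$; but suppose the demon rewards holding the belief, so that
    \[ EU(\text{hold the belief that } p) = 10 > EU(\text{refrain}) = 0, \]
    making it rational in the decision-theoretic sense. The two senses then issue opposite verdicts on the very same state of belief.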
    You seem to hold a roughly analogous position for reasons, but whenever the term would be overloaded (outcome-related practical reasons versus epistemic reasons or the like) you only seem to consider the special type of reasons and not the more general ones. I consider both, so I think there is a sense of ‘reason’ in which I have reason to believe the truth, another sense in which I have reason to believe what the evidence points to (both of these are epistemic), a third sense in which I have reason to believe what leads to the best outcome, and a fourth sense in which I have reason to believe what leads to the expectably best outcome (both the latter types are consequentialist). I don’t think the latter types go away when there is a former type present. I just think that the question ‘what do you have reason to believe’ becomes ambiguous and misleading since there are multiple answers on different senses of ‘reason’. If forced to choose, I would ultimately go with the consequentialist ones in every case, as it is more important to lead to more value than to hold a correct (or justified) belief. In practical decision making I *am* forced to choose since the accurate belief may be different to the utility-producing one, but in conversation, I’m happy to recognise that there is something I’d call a reason which is epistemic and not consequentialist.
    I hope this helps!

  16. I’m not sure where the term ‘desirable’ came from.
    Since you use ‘reason’ in different ways from me, I need to find a term that will help you to latch on to the concept that I have in mind. “Desirability” looks to be it, since you agree that something that is fortunate to desire (e.g. pain) is not thereby desirable. If you have this notion of desirability, then you can derive a sense of rational ‘fittingness’, since ‘desirable’ is synonymous with ‘fitting to desire’. You can also derive my sense of ‘reasons’: just stipulate that reasons for desire are facts about an object that make it desirable (or fitting to desire). There are no such features of pain: it is not desirable in any respect. That’s all that I mean when I say that there are no reasons to desire pain.
    If you don’t like ‘desirability’ talk, here’s my last-ditch attempt to communicate the concept I have in mind. There is a form of normative appraisal, let’s call it ‘fittingness’, which applies to belief, desire, emotions, actions, etc. Here are some substantive (but uncontroversial) claims about ‘fittingness’ that might help you to get a grip on the concept: it is fitting to believe what’s true. It is fitting to fear what’s dangerous. It is fitting to desire (and choose) what’s good: pleasure, understanding, love, etc.
    Any normative claim is, or corresponds to, some fitting-attitude claim. For example, to claim in the evil demon case that desiring pain is good is equivalent to claiming that it’s fitting to desire that you desire pain. Pain itself is still bad (undesirable) though, as reflected in the fact that it is not fitting to desire pain.
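    In bare schematic form (the notation is merely shorthand for the nesting of attitudes):
    \[ \mathrm{good}(X) \iff \mathrm{fitting}(\mathrm{desire}\ X) \]
    \[ \text{so: } \mathrm{good}(\mathrm{desire\ pain}) \iff \mathrm{fitting}(\mathrm{desire}\ (\mathrm{desire\ pain})), \quad \text{while} \quad \neg\,\mathrm{fitting}(\mathrm{desire\ pain}) \]
    The demon’s threat bears on the first of these claims, not the second.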
    This is a perfectly neutral framework, so you should be able to translate all of your substantive views into this terminology without problems. Moreover, this framework provides some additional structure. We can more clearly distinguish (i) believing p as a result of theoretical deliberation on the question whether p is true, from (ii) the choice or action to bring it about that you believe that p, as a result of practical deliberation about the value of possessing such a belief. In the first case what’s rationally fitting is the belief itself; in the second case what’s rationally fitting is acting upon yourself to bring about a state of belief. (Note the different kinds of rational capacities that are engaged in either case. Note especially that the latter case does not exercise your capacities for rational belief-formation. They are bypassed entirely when you act to acquire a state of belief. It’s no different in kind from acting upon yourself in any other way, e.g. to give yourself a papercut. Obviously something very different is going on in the first case.)
    In other words, when you say that I “only seem to consider the special type of reasons and not the more general ones”, I respond that I am considering all the same reasons, but simply being more careful and systematic in how I classify them. Nothing is gained by calling the practical reasons ‘reasons to believe’, when we could more clearly illuminate their nature by recognizing them as reasons to act to bring it about that you acquire a belief. After all, note that [as a rational agent] you can’t respond to these reasons by directly forming the belief. “The demon will reward me if I believe that the earth is flat, therefore the earth is flat” is not a rational belief-forming process. “The demon will reward me if I believe that the earth is flat, therefore I should act upon myself – take this pill – to induce such belief” is much better reasoning. What this reveals is that the reason in question is a reason for action, not belief per se.
    I should add that I’m also happy to talk about evidence-relative reasons and rationality in appropriate contexts: hence my earlier clarification that I’m here talking about what’s “objectively rational (i.e. in light of all the facts)”.
    Finally, you write: “If forced to choose, I would ultimately go with the consequentialist ones in every case, as it is more important to lead to more value than to hold a correct (or justified) belief.”
    A virtue of my framework is that this result is trivially correct. “Choice” is between actions, and you ought to do whatever you have most reason (or is most “fitting”) to do. One such action might be to swallow a pill that causes you to acquire a false and unreasonable belief in p. Even if p is not fit to believe (i.e. you have no reason to believe p), that’s a completely independent question from whether you have reason to act so as to bring about this belief. That’s a question about fitting action, or reasons for action. So you see, once we draw the distinctions I’m wanting to draw, these questions become much easier to answer. It’s not as though there are two competing kinds of reasons here: ‘practical’ ones that count in favour of acquiring the belief, and ‘epistemic’ ones that count against this very same thing. ‘Acquiring the belief’ is an action, which the epistemic reasons don’t speak to (or count against) in the slightest.
    So I guess I don’t see any barriers to your playing along with the framework I’ve set out (once you understand it — is it getting any clearer?). At worst it may be a bit more verbose. But at best it (i) provides us with the theoretical resources to draw more careful distinctions, and (ii) explains why rational agents can’t form beneficial beliefs at will (or beat Kavka’s toxin puzzle, etc.).

  17. Richard,
    Thanks for the detailed explanation. I think you have succeeded in giving me a better idea of where you are coming from, but I’m afraid that it is not somewhere I’d like to be. In short, I think that the concept of fittingness has a lot of hidden complexity (and possibly vagueness) and is more complex than ‘good’, which it is being used to define (and which, along with ‘proposition’, is all that I require). I think that the conceptual framework that I have is simpler and more elegant than yours, though I’d probably have difficulty convincing you of that and you probably think the converse. Thanks for explaining it though, and hopefully this comment thread will be a useful place for us to point others who have a similar clash of conceptual frameworks for consequentialism.

  18. Richard,
    I’ve written something recently in which I argue that RC is probably only viable if we reject the metaethical claim that “to ask about what’s right is to ask about what we have reason to do.” Since I don’t find that claim terribly compelling, though, I’m not yet convinced that a version of RC isn’t viable. An action is right, let’s say, if it is not wrong. There is a way of understanding what it is for an action to be wrong that can be found in Mill (on some readings), among other places, according to which an action is wrong if it is appropriate for the agent to experience guilt for it.
    For the sake of argument, let’s suppose that AC is a correct account of what we have reason to do. Someone who subscribes to this account might, if she reads Hare, be convinced that she has reason to instill certain dispositions to experience guilt in herself, namely dispositions to feel guilt upon violations of certain rules, even knowing that sometimes these dispositions will cause her not to do what she has the most reason to do.
    Given that we are presupposing AC as the correct account of what we have reason to do, there seems to be a clear sense in which it is appropriate for this agent to experience guilt upon violating the rules in question. On the Millian account of what it is for an action to be wrong, this seems to entail that it would be wrong for her to do actions that violate those rules and right for her to do actions that don’t. And this even though it will probably sometimes be the case that it would not be right for her to do what she has most reason to do.
    The version of RC that this reasoning leads to is very different from, e.g., Hooker’s in some important respects. And there are a lot of issues here that I certainly haven’t thought completely through yet. But I think it is interesting, and want to spend some more time thinking about it—unless you convince me not to bother! 🙂

  19. Hi Dale, that sounds okay to me. Here I was really only intending to talk about fundamental normative theories. It sounds like you have in mind a subject that is normative only in a derivative sense. Maybe it’s an indirect way to talk about the underivatively normative claims of AC. (AC will tell us what dispositions we should now acquire. And our axiology will tell us what dispositions, though too costly to acquire, would have been best to already possess. Presumably you are just talking indirectly about one or other of these normative claims.)
    Mind you, I’d hesitate to say that the appropriateness of acquiring a disposition to feel guilt in circumstance C entails the appropriateness of feeling guilt in C. It might instead be a case of Parfitian “rational irrationality”. I discuss this more in my old post, ‘Are Sophisticated [i.e. two-level] Consequentialists Irrational?’
