I’m starting to warm up to the objectivist form of
act-consequentialism (partly because I think it lacks content) which Doug
defended in the previous post. One worry people have is that this kind of
view severs the connection between what is right/wrong and how ordinary, good
people deliberate and advise one another. This argument has recently been made
forcefully by Uri Leibowitz in his ‘Moral Advice and Moral Theory’ paper (Phil
Studies). So, I want to explain this objection first, and then show why
act-consequentialism (and many other monistic views) does not actually suffer
from this problem.

Let’s start from basic act-consequentialism: an act is right
iff it maximises utility. Leibowitz first claims that ethical theories such as
AC serve two roles. First, they either tell us what rightness/wrongness is, or
what makes actions right. Second, and more controversially, they are supposed
to guide judgment and action. So, we should be able to derive some general
moral advice from ethical theories. The relevant advice provided by the theory
should be something which a normal agent can use in deliberation, and it should
also be ‘helpful’ (the descriptions used in the advice should be substantial
enough). A defender of AC might contest this second role of ethical theories,
but I’m going to accept it at least for the sake of the argument.

Leibowitz’s argument begins from the claim that the
following is good moral advice:

(RC) Perform action A only if after reflecting on and
deliberating about the normative status of A, you do not believe that A is
morally wrong.

Of course, this might not be good advice for all agents.
But, as Leibowitz says, we can restrict attention to rational, sensitive, and
well-informed agents (RSI-agents). If they follow (RC), that is, think before
they act, then arguably it is likelier that they will do the right thing
(though there is a threat of begging the question here against AC).

Leibowitz’s argument against AC is that AC cannot explain
why this would be the case. He lists ordinary first-order considerations which
RSI-agents would be likely to use in their deliberation. These include things
such as how the autonomy of others will be affected, whether others will be
harmed, whether one will be untruthful, and, perhaps surprisingly, whether bad
consequences will be brought about. The claim then is that there is no
connection between these factors and utility-maximisation. If thinking about
these considerations turned out to lead to utility-maximisation, this would at
best be a cosmic coincidence. So, (RC) could not be good advice if AC were
right. The distance between the criterion of rightness and the considerations
used in deliberation would be too great.

On the basis of this Leibowitz concludes: “To the best of my
knowledge, no one has yet offered any reason to think that, in fact, the
factors that RSI-agents consider when they reflect on and deliberate about the
normative status of actions are reliable indicators of the exemplification of
the property of utility-maximisation. Moreover, I doubt that we have any
evidence, not to mention overwhelming evidence, for the co-instantiation of
certain properties of actions that RCI-agents typically consider and the
property of utility maximisation. As a result, proponents of (AU) are poorly situated
in order to explain how it could be that the factors that agents consider when
they reflect on and deliberate about the normative status of an action are
morally relevant features of that action”.

I’m now going to do something which allegedly no one has
done before. Very few contemporary consequentialists talk about ‘utility’
maximisation. Rather, they usually talk about ‘value’ maximisation. There is a
simple reason for this. Most of them are value pluralists (whereas ‘utility’ is
often associated only with pleasure, happiness, or well-being). They think that
many different things can be good and bad. In fact, I think that as many things
can be good as it is fitting for an impartial spectator to value.

The modern consequentialists will then claim that (on the
axiology they have provided) autonomy is good (and undermining it is bad), that
harm is bad, that truthfulness is good and untruthfulness bad, and that bad
consequences are, of course, well, bad. Now, if we think that these things are
good and bad, then it is no surprise that reflecting on these things is
likely to make one act rightly, given that our theory says that it is right to
bring about as much goodness as possible. The only way to reliably do this
is to consider the many different good-making and bad-making considerations,
just as ordinary agents usually do. So, as far as I can see, ordinary AC with a
pluralistic value-theory is in a perfect position to explain why (RC) is good
moral advice and why the considerations which ordinary agents think about are
morally relevant factors. The view offers a conceptual connection to bridge the
gap which Leibowitz thought he had identified.

This goes for other theories too. Contractualists can say
that these considerations match with the reasons there are to reject different
principles, Kantians can claim that these are the sorts of considerations that
make certain maxims not consistently universalisable, and so on.

12 Replies to “Objective Consequentialism and Deliberation”

  1. Hi Jussi,
    Interesting post. But I have to admit, I am having a hard time seeing why we should be moved by Leibowitz’s argument in the first place. He says that RC is good moral advice, and in some practical sense, I guess that is true. I might say something like that to a friend or whatever, perhaps because I think it is not my place to tell him what he should in fact do. So I count on his being a generally good person, and assume that a reflective action is likely to be better than a non-reflective one. But it precisely IS the role of a moral theory to tell an agent what to do, which makes me think that the kind of advice we are looking for may look quite different.
    The agent requesting advice might be an RSI-agent, but that doesn’t mean she doesn’t believe false things. If one of those false things concerns the nature of right action, then RC is not good advice for her. If it is a fact that objective consequentialism doesn’t provide RC as advice, this would seem like a point in that theory’s favor, as it is able to explain why the correct advice in some situations is for an agent to go against her beliefs and do something else entirely.

  2. Travis,
    me too. Anyway, a couple of points. First, about this:
    “But it precisely IS the role of a moral theory to tell an agent what to do, which makes me think that the kind of advice we are looking for may look quite different.”
    This would be a stronger constraint on moral theories than Leibowitz sets up. It would rule out certain indirect forms of consequentialism, and so it’s not clear whether we can or should require this from ethical theories. Leibowitz only sets a weaker constraint: ethical theories should be able to explain why usually intuitively good moral advice is good moral advice. My point was that he fails to show that monistic theories cannot do so.
    Also, I don’t think it’s a problem for him that RC isn’t good moral advice always. He in fact is explicit about this. All he claims is that AC cannot explain why RC is good moral advice in ordinary cases in which it is good moral advice. I like the point though about AU being able to explain why it isn’t good advice in those non-normal cases. That really supports my response too.

  3. An analogy:
    I once owned a mechanical puzzle called a Rubik’s Clock. It is a device with eighteen clock faces that have only an hour hand, four adjustment wheels, and four switches, connected in mysterious ways. When one of the wheels is turned, and depending on the settings of the switches, the hour hands on some but not all of the clocks turn with the wheel. The aim of the game is to set all the clocks to twelve. (Let’s also define an “act” in this context as a setting of all the switches to any particular combination, followed by a turn of any single wheel by any amount less than 360 degrees.)
    The Rubik’s Clock is not all that difficult, and a reasonably intelligent person can soon learn to solve the puzzle from any arbitrary starting point. But now suppose a counterfactual Rubik’s Clock that has an inconceivably large number of clock faces, and an inconceivably complex set of interconnections between the switches, wheels and clock faces. Then, while you were busy setting one (or a few) nearby clock faces to twelve with an act, you would also be unsetting (and also – but only by pure coincidence – setting) vastly larger numbers of clocks elsewhere. That is, you couldn’t possibly predict the overall impact of your act.
    We could say that the “Rubik’s-right” act here is the one that sets the most possible clocks to 12. A “Rubik’s-wrong” act would be any act that is not Rubik’s-right.
    Now, consider whether the following is good advice to a reflective, well-informed player:
    (Rubik’s RC) Perform move A only if after reflecting on and deliberating about the normative status of A, you do not believe that A is Rubik’s-wrong.
    We have no reason to believe that (Rubik’s RC) is good advice in the case of the counterfactual Rubik’s clock. Because it is hopeless to try to assess the overall consequences of a given move, you might just as well act randomly as follow any piece of advice. And this, I think, is essentially analogous to the point that Lenman and Leibowitz press against consequentialists.
    Jussi, the analogy to your argument is to claim that it isn’t just the twelve o’clock position that is valuable. It is equally Rubik’s-good, you suggest, to set all the clocks to either-six-or-twelve, and (perhaps) it is also easier to set a given clock in the vicinity to six than it is to set it to twelve. Maybe so. But this does nothing to refute the main point, which is that (Rubik’s-RC) cannot be useful advice, given the unfathomable complexity of the consequences of the choices we are faced with.

  4. Wow Simon. That has to be the most elaborate analogy in philosophy I’ve ever seen used. Well done! Hope to see you today too.
    I do like Lenman’s argument, but Leibowitz isn’t making that argument at all (or if he was, then his argument wouldn’t be very original). Nothing in his argument seems to turn on unknowable consequences very much further down the line in the same way as Jimmy’s does. Well, at one point, he does say that the act-consequentialist cannot use an empirical inductive argument to show that thinking about the first-order considerations is likely to lead to acting rightly.
    Furthermore, a sophisticated act-consequentialist can set a constraint on which future consequences will be taken into account in creating the axiological ordering of the options. So, no act-consequentialist has to think that we are dealing with anything like the counterfactual Rubik’s rather than the actual Rubik’s you started with. And, even if it couldn’t, act-consequentialism would still be compatible with the idea that we should think about the first-order good-making considerations of the consequences we are aware of before we act. The first-order considerations, as good-making considerations, would on this view be morally relevant factors. They would affect the overall value of the consequences, and so it would be obvious that one should think about them. This seems to be all that Leibowitz wants the consequentialist to explain.

  5. Hi Jussi,
    1) Maybe Leibowitz underemphasizes the way in which his argument rests on Lenman’s, but he does in fact cite Lenman. The original content he adds to it, as I read him, is a strengthening of Lenman’s response to the objection that consequentialism provides only a criterion of goodness and not a decision procedure. If (RC) is in fact good moral advice, then even the consequentialist who thinks AC is just a criterion of goodness needs to be able to explain this fact.
    2)

    a sophisticated act-consequentialist can set a constraint on which future consequences will be taken into account in creating the axiological ordering of the options. So, no act-consequentialist has to think that we are dealing with anything like the counterfactual Rubik’s

    But the argument was directed against your AC, not some “sophisticated” consequentialism that says something else. Still, let me invite you to modify your consequentialist principle in the way you prefer, while maintaining a plausible version of objective AC. Then we can continue the argument!
    3)

    And, even if it couldn’t, act-consequentialism would still be compatible with the idea that we should think about the first-order good-making considerations of the consequences we are aware of before we act. The first-order considerations, as good-making considerations, would on this view be morally relevant factors. They would affect the overall value of the consequences, and so it would be obvious that one should think about them

    I take it that you are referring to an unreformed version of objective act-consequentialism here, which bears the analogy to my counterfactual inconceivably large Rubik’s clock. In which case, you seem to have entirely missed the point of my analogy. It’s not obvious that we should set as many as we can of the clocks that we are aware of to twelve, because doing so is almost certainly going to turn out to be no better and no worse than acting randomly. It is, rather, perfectly obvious that no advice is good advice in this situation.

  6. Simon,
    about these points:
    2) No, I don’t need to reformulate AC in any way. It’s still the case that X is right iff it has the best consequences of the available options, just as AC has always claimed. All one needs to say is that the considerations that make the consequences good-relative-to-the-agent-at-the-time of action (a la Smith) include only considerations in the immediate future rather than all considerations whatsoever. In this way, the same consideration (for instance, people dying as a result of one’s actions) may be a bad-making feature if it is in the immediate future but not so if it takes place in the distant future (this is enough to deal with the Lenman cases).
    The main point is that, whatever deontic classification of actions the opponent of consequentialism (including a particularist) accepts, there will be a version of consequentialism that matches that classification, given an appropriate axiology.
    Of course, you might object that deaths must be just as bad no matter how long in the distant future they happen if this is in the causal chain from the act. But, if you say that, then you need to explain why it isn’t wrong to bring about these deaths. That’s a challenge which anyone must face.
    3) The previous response also explains why the consequentialist qua consequentialist is not committed to the inconceivably large Rubik’s clock. Also, the analogy breaks down here. In the clock case, whether one clock is at 12 is causally connected to what states the other clocks can be in. Putting certain clocks at 12 rules out being able to set others to 12. This is a matter of the explicit design of the mechanism.
    True, there are some ethical cases in which this works the same way. In these cases, one can only avoid harming a person by harming someone else. But it does not seem that such cases are prevalent, or at least I’d want some evidence that they are. In ethics, this is a matter of accident rather than design.
    Finally, there is a sense in which the considerations that fall within what the agent is able to know of the consequences of the different actions are good-making even if we take into account all the future consequences. They still contribute to the grand total in those cases. In this sense, they are still morally relevant come what may.
    In contrast, in the Rubik’s clock case, when the inconceivably large Rubik’s clock is not solved, there is no sense in which the clocks that are at 12 contribute to the solution. The analogy just breaks down here.

    Also, note that Leibowitz makes the argument against all monist ethical theories. This would work only if the kinds of considerations ordinary people think about were only accidentally related to what reasons there are to reject principles, which maxims can be consistently universalised, and which virtuous character-traits might lead to acting in the given way. None of these theories face any problems with the unknowable future consequences of actions. So, if the argument turns on the Lenman considerations, then it fails against the other theories.

  8. Jussi – your last response made me go back and look at Leibowitz and notice a rather important point: Leibowitz directs his argument only against act utilitarianism and “many [other] monist theories” (357). He explicitly argues that we should prefer pluralist theories (of a kind with the consequentialism you advocate), as well as particularist ones, on the grounds that they are immune to the objection. So I don’t think you actually disagree with him at all (except for perhaps a semantic disagreement about how to use the word “utility”, which you seem sometimes to use to mean nothing more than “value”).
    I’m not sure I disagree with you either, once you express your willingness to abandon agent- and temporal-neutrality. (I should perhaps have taken your parenthetical remark more seriously, when you said the consequentialist view you advocate “lacks content”!) If, on the other hand, you don’t make these concessions, I think the disanalogies to the Rubik’s clock you raise are either irrelevant or inessential. But I won’t press the point further here.

  9. Thanks for this post, Jussi, which certainly gave me a lot to think about. Here are a few thoughts I had while reading your post and the related comments.
    Regarding your presentation of my argument: I believe that we can identify two distinct roles of moral theorizing: (a) explaining the rightness/wrongness of actions; and (b) guiding action/judgment. (I don’t think this is controversial, is it?) I use the term “moral theory” for any account that purports to explain the rightness/wrongness of actions and the term “moral advice” for any account that purports to guide judgment or action. I argue that although these two roles are distinct they are related in an important way: if we find a piece of good moral advice S we should be able to appeal to our moral theory in order to explain how it is that S is good moral advice.
    However, I do not claim that “ethical theories such as AC serve two roles.” To my mind Lenman’s argument shows that consequentialist theories offer no guidance at all. Moreover, I do not think that the fact (if it is a fact) that consequentialist theories offer no guidance is in itself a good objection to theories like AC. Proponents of AC may claim that AC is (merely) a criterion of moral rightness and that AC is not meant to provide action-guidance at all. So, they may argue, it is no objection to AC that it fails to guide action. Perhaps this is what you had in mind when you wrote “A defender of AC might contest this second role of ethical theories.”
    My proposal is to begin with a piece of moral advice which, I believe, is innocuous and to see whether there are moral theories that are better situated to explain how it is that this advice is good moral advice. The advice I have in mind (as you point out) is this:
    (RD) Perform action A only if after reflecting on and deliberating about the normative status of A, you do not believe that A is morally wrong.
    It seems to me that it is difficult to deny that (RD) is (in some situations) good moral advice for RSI-agents, but I don’t have much to say in support of this claim apart from asking the reader to consider what it would take for one to fail to follow this advice (i.e., to perform an action without deliberating about its normative status, or to perform an action that one believes is wrong).
    Travis, in comments, suggests that an RSI-agent may sometimes act wrongly if she follows (RD). I think Travis is right. For example, consider Mark Twain’s portrayal of Huckleberry Finn. Huck helps his slave friend Jim to escape from his owner even though Huck falsely believes that by assisting Jim he is acting wrongly. According to the story, then, had Huck followed (RD) he would not have helped Jim to escape and as a result he would have acted wrongly.
    This example illustrates that there are situations in which agents are less likely to act rightly if they follow (RD) than if they fail to follow (RD). Examples of this sort are, to my mind, extremely interesting but I will not say more about them right now because for the purpose of my argument all I need to show is that we can identify a sufficiently large group of people for whom, and situations in which, (RD) is good moral advice. If we can identify such a group, we will need to explain the fact that (RD) is good moral advice for these people in these situations. And if particularist theories and pluralist theories are better situated to explain this fact than monist theories, then we have some reason to favor the former theories over the latter.
    Consider, for instance, the following scenario: you have just been appointed general manager of a large hospital. You decide to gather your employees for a moral pep-talk and you deliver a passionate speech on the importance of morality in the hospital setting. You conclude your talk by offering the following moral advice: “Before you perform any action stop and think about the act you are about to perform (unless it’s an emergency) and if you think it is wrong, don’t do it!” It seems to me that this is good moral advice to give to healthcare practitioners; I would like to be treated in a hospital in which healthcare practitioners follow this advice rather than a hospital in which they fail to follow this advice. Wouldn’t you?
    Now the main argument of my paper can be stated as follows:
    1. (RD) is good moral advice (with some qualifications, as noted above).
    2. In order to explain (1), we must explain how it could be that the factors that agents consider when they reflect on and deliberate about the normative status of an action are morally relevant features of that action (either intrinsically or extrinsically).
    3. Pluralist theories and particularist theories are in a better position than monist theories to explain how it could be that the factors that agents consider when they reflect on and deliberate about the normative status of an action are morally relevant features of that action.
    4. Therefore, we have some reason to favor pluralist theories and particularist theories over monist theories.
    The rationale for premise (3) is this: according to monist theories there is only one intrinsically morally relevant feature. So in order to explain how it could be that the plurality of factors that agents consider when they reflect on and deliberate about the normative status of an action are morally relevant features of that action monists must show that all these features reliably track the one property that the monist theory they defend identifies as the only intrinsically morally relevant property. Perhaps this can be done. But at the moment it seems to me that this is an explanatory burden that rests on the shoulders of monists that has not yet been met.
    Regarding the consequentialist theory you mention in your post: As Simon pointed out, it is not clear that the theory you have in mind is a monist theory, because it seems that according to your theory there are several intrinsically morally relevant features (autonomy, truthfulness, etc.). If so, my argument is not meant to target your (pluralist) theory. Alternatively, you may have in mind a theory similar to the one I mention in fn. 26. As I acknowledge in my paper, monist theories of this kind may get around my argument. Is this the kind of view you had in mind?
    Thanks again for your post, Jussi. And thanks, Simon and Travis, for your thoughtful comments.
    -Uri

  10. Thanks Uri.
    One question for Uri and Simon. Can you give an example of a monist theory that would be vulnerable to the argument? Has anyone ever defended a view of this sort?
    I started to think about Mill. Very roughly, a Millian utilitarianism would say that an act is right iff it brings about the greatest balance of pleasure over pain. So, that seems like a monist view. But, if you look at Mill more closely, he of course is a pluralist about pleasures and pains (especially given that he counts external things as pleasures). So, for him autonomy would be a pleasure, harm would be a displeasure, and so on. Thus, in thinking about what is right, on the Millian view we would have to think about the many different pleasures that result from our actions, and these will be just the first-order considerations mentioned in the paper.
    So, Mill’s view won’t be monist in the required sense. Has anyone ever been a monist then?

  11. Jussi,
    As I use the term in my paper, a monist theory is a theory according to which there is only one intrinsically morally relevant property. A monist theory can take the following form: (T) an act A is morally right iff A exemplifies property P. Simple hedonic act utilitarianism (SHAU) is, I believe, a monist theory. According to this theory there is only one morally relevant property–the property of utility-maximization. Any act that exemplifies this property is right and any act that fails to exemplify this property is wrong. Other features may be extrinsically morally relevant. For example, that an act has the property of being just is not intrinsically morally relevant according to (SHAU); it is relevant (if it is relevant) only because of its relation to the property of utility-maximization. Similarly, that an action has the property of producing pleasure for me is not intrinsically relevant; it is relevant only because of its relation to the property of utility-maximization. However, this latter property is conceptually related to the property of utility-maximization while the former property is not. So if a proponent of (SHAU) thinks that we should consider whether an act is just when we deliberate, she will have to explain how the property of being just relates to the property of utility-maximization.
    Now you proposed the following consequentialist theory: (MV) an act A is morally right iff A maximizes value. According to (MV) there is one intrinsically morally relevant property–the property of value-maximization. Any action that exemplifies this property is right and any act that fails to exemplify it is wrong. For one who is a pluralist about value there will be a plurality of things that are conceptually linked to the property of value-maximization. So for each and every one of these things, a proponent of (MV) will be able to explain how it relates to the single IMR-property. I think this is right. I now see that the view you proposed is not a pluralist theory (as I use the term in my paper) but rather a view that is similar in structure to the view I mention in fn. 26. I think you may be right that views of this kind are more common than I had thought. I will have to think about this some more. This is helpful. Thanks!

  12. I don’t think that’s quite it yet. Note that the answer in the second paragraph is based on the distinction between goodness and good-making properties. The idea is that there is a conceptual connection between the two properties. This is why anyone who thinks of goodness will need to be thinking about the good-making properties.
    Yet, this goes for all the properties P you mention in the first paragraph. There too we can distinguish the property P and the P-making properties. And whoever thinks about P in trying to comply with the theory will need to think about the P-making properties. Here too we have the same kind of conceptual connection. If acting justly is one of the things we find pleasure in (or in the thought of anyone doing the just thing), then it is justice that we have to think about. This follows conceptually in just the same way as with goodness. I guess the point is that on the monistic theories there is always a conceptual connection between what is right and that in virtue of which acts have the property P. As long as this connection obtains, people will need to think of the right kind of considerations.
