Dale Jamieson and Robert Elliot (hereafter J&E) have recently formulated a new version of consequentialism, which they call ‘progressive consequentialism’ (Philosophical Perspectives, 2009). They believe that this view is motivated by the same considerations as the so-called satisficing views whilst not suffering from the same problems. I want to claim that, as the problems which J&E themselves explore show, progressive consequentialism just is a form of satisficing, and thus it suffers from the very same problems. And, if J&E believe that progressive consequentialism can deal with these problems, then so can the satisficers.


J&E begin from the familiar thought that the standard forms of consequentialism are too demanding. Some people have suggested that satisficing consequentialism can avoid this objection. If one is required only to bring about ‘good enough’ consequences, then perhaps we do not face ‘unreasonable’ moral demands.

J&E claim that the central problem of satisficing is arbitrariness. What would be good enough? According to J&E, any attempt to establish the required baseline would be ad hoc. Should we bring about at least 20%, 25%, or 30% of the best consequences? Nothing about satisficing consequentialism as such seems to motivate any one answer over the others.

J&E suggest that progressive consequentialism can avoid the demandingness objection without falling into the arbitrariness trap. Progressive consequentialism states that right action ‘improves the world’. Like satisficing, this view thus gives up the maximising idea of traditional consequentialism. One is not required to bring about the best outcome. Rather, one is only required to improve things, i.e., to make them better than the ‘status quo’.

This makes me think that progressive consequentialism is just a form of satisficing. It too provides a baseline and then claims that morality requires one to do something that gets above that baseline. ‘We have to make the world [some amount] better than we found it’, J&E write.

So, if progressive consequentialism is a form of satisficing, one expects it to inherit the problems of satisficing. And indeed this is the case. Firstly, arbitrariness creeps back in. As J&E recognise, progressive consequentialism itself does not offer just one baseline. There are many different baselines of the ‘world as we find it’ to which we could compare the value of the consequences of our actions.

We have at least two major choices (and their variants). First, in an ‘It’s a Wonderful Life’, James Stewart kind of thought experiment, we could compare the consequences of our acts to a world in which we did not exist at all. Call this the impersonal option. Second, we could compare the consequences of our actions to a world in which we existed but did nothing in the relevant situations. Call this the personal option.

Is there a principled way of deciding which one of these is the right baseline? The only method J&E use is to consider which one gets the results that fit our moral intuitions. If this is a non-arbitrary way of picking the baseline, then there’s no reason why the satisficers should not be able to use this same method to avoid the arbitrariness problem. In fact, there could be a contextualist satisficing view according to which the good enough varies across contexts so as to get the right normative outcomes. This would be just as ad hoc, or non-ad hoc, as any form of progressive consequentialism.

The second problem is that none of the baselines seems to work, as J&E seem to recognise. The personal option requires being able to draw the distinction between acts and omissions, which is notoriously problematic (and not consequentialist in spirit). Also, consider a case in which twelve people are drowning in a lake and one person has a slight headache. You could try to save as many people as you could, go in the other direction to give a painkiller to the person with the headache, or do nothing (stand still, perhaps, though even that sounds like an action). Well, giving the painkiller would make things go better than doing nothing, so it should be permissible according to the progressives.

J&E think they can deal with this problem by what they call ‘an efficiency requirement’. This says that ‘if a person is willing to allocate a specific degree of effort to improve the world, then we might reasonably require that she use that degree of effort to produce the best result possible’. The thought is that individuals have a natural disposition to act in certain ways. This inclination is what originally fixes the degree of effort the agent is willing to allocate in the efficiency requirement.

Now, if this requirement worked for progressivists, it would also save satisficing theories from the familiar objections about harming and about preventing goods from being delivered above the cut-off line. But it doesn’t. Saving even one person might require more effort than giving a painkiller to someone. If my natural inclination is only to help others a little bit by doing trivial acts but not to do more, the efficiency requirement could not explain why I might be required to save someone’s life in this case.

What about the impersonal James Stewart baseline? Well, if you hadn’t been conceived, your parents could instead have had children who would have solved the problem of nuclear fusion. In this case, anything you did would be wrong unless you made things even better. Or, imagine that your parents would have had children whose evil deeds would have surpassed those of the worst dictators. In this case, it would be OK for you to do almost as many bad things, just as long as your natural inclinations were not towards any good. Improving things would be easy for you in this case but very bad for others.

So, it seems that progressive consequentialism makes irrelevant considerations about the counterfactuals in which you don’t exist relevant to the rightness of your actions (as well as natural inclinations to act badly or well). I cannot see any non-arbitrary way of picking the counterfactuals so as to avoid these problems. This seems like the same percentage game as the one facing the satisficers.
14 Replies to “Progressive Consequentialism”

  1. Hi Jussi,
    I like your analysis. I think that you’re exactly right.
    P.S. I know that there’s not much point to my posting this comment, but since I always wanted someone just to say “you’re right,” I thought that you might appreciate such a comment as well.

  2. Hi Doug,
    aww – thanks. How nice of you! I’m going to start doing this too. Maybe I remember this wrong, but I recall seeing your name in the acknowledgements of the paper. I’m sure you’ve said something similar about their view.
    Reminds me of a conference where a friend of mine gave a paper and there were no questions, just a very long awkward silence. He felt really bad about that. Only later, after the talk, did it become clear that everyone had simply agreed. I wish someone had thought to say sooner, ‘yep, that sounds right’.
    Not that there probably isn’t an answer by Jamieson and Elliot to what I say above.

  3. Yes. I did comment on an early draft of their paper. I don’t remember the details, but I believe that my comments were focused mostly on the general plausibility (or implausibility) of progressive consequentialism. I wasn’t so much concerned to discuss whether it fares better than satisficing consequentialism, for I think that dual-ranking versions of act-consequentialism can avoid the demandingness objection while also meeting the arbitrariness objection.

  4. Wow! Assuming that agreeing is transitive (maybe it isn’t – could x agree with y, y with z, and x not with z?), I believe that this is the first time anyone has agreed with me. Thanks guys!

  5. Here’s an idea. Suppose you bracket the problem of finding the alternatives at a time. Let S = {a0, . . ., an} be the set of alternatives open to the agent at t, where a0 would yield the best outcome, a1 would yield a slightly less good outcome, and so on. Instead of looking for a baseline, locate a topline. The topline is the best I could do and in general (though not always; sometimes it will merely be less bad than every alternative) will have some positive value n. The rest is an empirical matter. Certainly the property of being demanding reflects the difficulty of conformity with the principle over time. Ask whether it is too difficult for agents, in general, to perform actions between the best, producing n, and half of the best, producing n/2. If that’s in general too hard, then consider between n and n/3. We can then come to some reasonable conclusion about when a principle is appropriately demanding. Suppose we come to some such conclusion. Then a reasonably demanding utilitarian principle will require agents to perform actions in the interval between the best, n, and, say, n/3.

  6. Mike,
    there’s nothing wrong with that idea as such. In fact, I don’t think it’s any better or worse than either traditional satisficing or progressive consequentialism. Indeed, it seems like all three views are logically equivalent: whatever version you take of one of the views, there will be co-extensive versions of the other two. If the right actions are in the range between n and n/3, there will be some counterfactuals on which the same acts improve things, and also a view on which you have to create more value than x utils that permits the same acts. So, it seems like these are just notational variants of each other.
    Also, the methodology seems to be the same. We have some intuitions about which acts are permissible and not too demanding. We then use those intuitions to fix the right counterfactuals, the appropriate range from the top, or the right baseline. The traditional objection to this is that it is arbitrary or ad hoc. I’m not sure that’s quite the right word, but at least the resulting views seem to have little explanatory potential left. But, then again, providing explanations might not be what ethical theories are for.

  7. Jussi, I ask mostly as a curiosity (not having read J&E’s paper), but what would their likely reaction be to recharacterizing their view as a form of mono-deontic intuitionism or deontology? In other words, the view could be seen as claiming that there is a single duty: beneficence, in W.D. Ross’ parlance. I realize they’re putting this forth as an alleged advance of consequentialism, but given your criticisms, the view seems more plausible as a species of non-consequentialism.

  8. The traditional objection to this is that it is arbitrary or ad hoc. I’m not sure that’s quite the right word but at least the resulting views seem to have little explanatory potential left.
    Doesn’t the empirical evidence undermine the ad hoc objection? We’re not merely selecting an interval arbitrarily; we’re selecting one that we have confirmed does not demand too much or too little. How is that ad hoc?

  9. In fact, I don’t think that it’s any better or worse than either traditional satisficing or progressive consequentialism. In fact, it seems like all three views are logically equivalent as whatever version you take of one of the views there will be co-extensive versions of the other two views
    Several of your objections appeal to the difficulty of finding a baseline. I agree with this. But finding a topline is not hard. So this view has the advantage (or, so it seems) of avoiding all of the baseline location worries. The logical equivalence claim seems a bit strong, but who knows.

  10. Michael,
    I’m not sure about that. At least in Ross, the duty of beneficence is a maximizing one. If there are no other relevant moral claims present, you ought to help others as much as you can. There’s no slack in this respect. But, even if you could phrase J&E as saying that there’s only one duty – the duty of beneficence – this duty is special in that it’s only a duty to make things better, not best. Of course this is deontological in the sense that there are moral options: you are permitted to bring about sub-optimal outcomes. Then again, satisficing is a deontic view too in this sense. So, you are right that there are tricky questions here about which views we should call consequentialist and which deontological (this has never seemed a very interesting question anyway, but that may just be me).
    Mike A,
    I’m not sure how that is an empirical process. Of course, you need empirical evidence about what the actions are, what they demand, and what their consequences are. But thinking about whether an act demands too much or too little is a priori. And it is something all these views take advantage of. And the main point is that we do this first, and only afterwards do we build our theory to fit the intuitions. Some people think that this should go the other way around: we first build our moral theory on some solid, fundamental foundation, and then our theory gives us a vantage point from which to assess and justify our intuitions. Clearly, this isn’t possible if our theory is fixed by our moral intuitions about demandingness.
    Also, I don’t see what help the topline is. Homing in from that to the appropriate range of permissible actions seems to be the very same process as looking for the baseline. The top is up there on both views.

  11. I’m not sure how that is an empirical process. Of course, you need empirical evidence of what the actions are, what they demand, and what the consequences are. But, to think about whether an act demands too much or too little is a priori.
    This is a pretty clear point of disagreement. I don’t know how one could determine a priori whether a theory is too demanding. Certainly, you’d have to have some information about moral agents, their psychological, cognitive, and physical capacities, and so on. All of that is known empirically, I think we can agree. There are no doubt worlds W in which requiring that moral agents always do the best is not demanding at all; the capacities of agents in those worlds might make it a breeze to do so. And there are worlds W’ in which it is extraordinarily burdensome to always perform the best action. I don’t think we could know where our world stands without some observation. And I don’t think I could know a priori, for every world in the range from W to W’, that everyone in every world in that range ought to do the best, or that not everyone need do so.
    Also, I don’t see what help the topline is. Homing in from that to the appropriate range of permissible actions seems to be the very same process as looking for the baseline. The top is up there on both views.
    Yes, the topline is available for every view. Your objections to those views focus on their failure to locate a non-arbitrary baseline. I’m urging that they don’t need to locate a baseline; they can instead locate a non-arbitrary required interval of actions from the topline. It is non-arbitrary since there is (in principle, anyway) an empirical justification for the interval. It is selected on the basis of how demanding it is on actual moral agents, given their psychological, cognitive, and physical capacities.

  12. Mike,
    I’m afraid I’m still not following you. Maybe it would be helpful to distinguish between two notions of demandingness which I worry you are conflating.
    First, let us say that there is a descriptive, empirical notion of how much ‘effort’ an act demands in given circumstances (though there are worries about how to measure this – maybe in terms of how many personal projects the agent will need to set aside). So, we need to know whether statements such as the following are true:
    Act A1 in C1 demands x amount of effort.
    Act A2 in C2 demands y amount of effort.
    Act A3 in C3 demands z amount of effort.
    To know whether these are true, you are right, we need to know a lot about agents – their psychological, cognitive, and physical abilities – how the world is in circumstances C1, C2, and C3, what kind of consequences the actions have in them, and so on. But all of this will be a posteriori.
    However, there is also another, normative notion of demandingness that we use to determine whether morality could require x, y, or z amount of effort from agents in certain circumstances for the sake of some good consequences. Here we ask whether an agent *is obliged* to do A to bring about outcome O even if that would require x amount of effort from her. So we are in the realm of obligations and oughts here. One way to think about this is that we can form the following kind of conditionals from the previous claims:
    If act A1 in C1 demands x amount of effort, then the agent is obliged to do A1 because morality can require x amount of effort for bringing about O1.
    If act A2 in C2 demands y amount of effort, then the agent is no longer under an obligation to do A2 because morality cannot require y amount of effort for the sake of bringing about O2.
    Now, what I don’t see is how these conditionals could be empirically tested. We can test empirically whether the antecedent is true, but that doesn’t tell us whether the whole conditional is true. Neither does knowing what world we are in tell us anything about these conditionals. Of course, once we know which of these conditionals is true and which world we live in, we can tell what morality requires of us in our circumstances. However, to get to that point, the crucial piece of information – the conditionals – is a priori.
    Let me know if you think there is an empirical way of knowing which of these kinds of conditionals is true. If there is, you have discovered an empirical way to test ought claims. This would be a major coup in moral philosophy.
    As long as we need to know the conditionals a priori, even on your view locating the interval is based on normative intuitions. This is because it is the conditionals that do the work of telling us what the right interval from the top is for the correct ethical theory. Just having the antecedents of the conditionals won’t be enough for this. I’m not saying that this process of starting from the a priori conditionals makes your view arbitrary. It just makes it less explanatory.
