Satisficing Consequentialism aims to capture the intuitive idea that we're not morally obligated to do the best we possibly can; we merely need to do "good enough" (though of course it remains better to do better!). Ben Bradley, in 'Against Satisficing Consequentialism', argues convincingly against forms of the view which introduce the baseline as some utility level n that we need to meet. Such views absurdly condone the act of gratuitously preventing boosts to utility over the baseline n. But I think there is a better form that satisficing consequentialism can take. Rather than employing a baseline utility level, a better way to "satisfice" is to introduce a level of maximum demanded effort, below which one straightforwardly maximizes utility. That is:
(Effort-based Satisficing Consequentialism) An act is permissible iff it produces no less utility than any alternative action the agent could perform with up to X effort.
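To make this a little more precise, here is one natural regimentation (the notation is mine, and it brackets the context-sensitivity discussed below). Writing U(a) for the utility that act a would produce and E(a) for the effort it would require of the agent:

$$a \text{ is permissible} \iff U(a) \ge U(a') \text{ for every alternative } a' \text{ such that } E(a') \le X$$

Note that a itself may require more than X effort: exceeding the ceiling is never required, but nothing in the criterion renders it impermissible.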
Different theories of this form may be reached by fleshing out the effort ceiling, X, in different ways. It might be context-sensitive, e.g. to ensure (1) that it's never permissible to do just a little good when a huge amount of good could be achieved by an only slightly more effortful action; (2) that vicious people can't get away with doing little just because it would take a lot more effort for them to show the slightest concern for others; or (3) that your current effort ceiling takes into account your past actions, etc. I'll remain neutral on all those options for now.
To preempt one possible misreading, I should stress that this theory doesn't require (or even necessarily permit) you to "try hard" to achieve moral ends. That would be fetishistic. If you can achieve better results with less effort, then you're required to do just that! It merely places a ceiling on how much effort morality can demand from you. Within that constraint, the requirement is still just to do as much good as possible.
Some other features of the view worth flagging:
* Unlike traditional (utility-baseline) satisficing accounts, it never condones going out of your way to make things worse. Such action is rendered impermissible by the fact that there are better outcomes that you could just as easily — indeed, more easily — bring about (i.e. by doing nothing).
* It respects the insight that the "demandingness" of maximizing consequentialism cannot consist in its imposing excessive material demands on us, since the material burden on us is less than the material burden that non-consequentialism imposes on the impoverished (to remain without adequate aid). Instead, if there is an issue of "demandingness" at all, it must concern the psychological difficulty of acting rightly.
* It builds on the idea that there's no metaphysical basis for a normatively significant doing/allowing distinction. The only morally plausible candidate in the vicinity, it seems to me, is effortful willing.
* It provides a natural account of supererogation as going beyond the effort ceiling to achieve even better results. (As others noted in class, traditional utility-baseline forms of satisficing consequentialism have trouble avoiding the absurd result that lazing back in your chair might qualify as "going above and beyond the call of duty", if you have inferior alternative options that nonetheless exceed the utility baseline.)
So, all in all, this strikes me as by far the most promising form of satisficing consequentialism. Can anyone think of any obvious objections? How would you best flesh out the details (of how X gets fixed for any given situation)?
P.S. My follow-up post looks at why we might be led to a view in this vicinity, over (or as a supplement to) straightforward scalar consequentialism, and explores one possible way of fleshing out the mysterious "X".
Hi Richard,
I’m not sure how to cash out effort, but I wonder what you’d say about a case like this.
There’s a trolley headed towards 5 people. You could:
(a) Do nothing and 5 die.
(b) Flip a switch and a trolley with two people aboard rolls into the way. 2 die.
(c) Pitch yourself over a railing onto the tracks below. You alone die because your body stops the trolley.
(d) Knock your enemy off of the bridge, knowing that you’ll fall over as well. Your body and your enemy’s stop the trolley, and 2 die.
I don’t know what effort comes to, but I don’t see anything that rules out the following stipulations: both (b) and (c) involve a degree of effort greater than X, but (a) and (d) involve a degree of effort less than X. Even with these details stipulated, I don’t think we’d want to say that (d) is permitted and (c) is not. Again, that seems to depend upon how effort is cashed out. Suppose it’s cashed out in terms of psychological difficulty or physical exertion. On this account of effort, I think I can stipulate what I have, but our intuitions wouldn’t favor the claim that (d) is permitted and (c) is not.
I worry that the view will get counter-intuitive results because it is formulated in terms of effort as opposed to self-sacrifice. Suppose that, in the given context, X = 100. Suppose that my top two acts in terms of utility production are (1) A1, which produces 50 units of utility for me and 1,000 units of utility for others and requires 100 units of effort from me, and (2) A2, which produces 51 units of utility for me and 1,001 units of utility for others and requires 101 units of effort from me. And assume that all other alternatives either produce less utility than A1 or require more effort than A1. It seems, then, (although I may not be understanding it completely) that EBSC implies that it’s permissible to perform A1.
This seems counter-intuitive to me. After all, isn’t it absurd to condone, as EBSC seems to, making a self-sacrifice so as to produce less overall utility (as well as less utility for others)? One should not be permitted to fail to produce more utility for others when one can do so while at the same time benefiting oneself.
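To make the worry concrete, here is a toy rendering of the case (a sketch on my reading of EBSC as stated; the numbers are just the ones stipulated above):

```python
# A minimal sketch of the EBSC test, on my reading: an act is permissible
# iff it produces no less utility than any alternative whose effort is
# within the ceiling X. All other alternatives are omitted, since they're
# stipulated to produce less utility or take more effort than A1.

X = 100  # the stipulated effort ceiling for this context

acts = {
    "A1": {"utility": 50 + 1000, "effort": 100},  # 50 for me, 1000 for others
    "A2": {"utility": 51 + 1001, "effort": 101},  # 51 for me, 1001 for others
}

def permissible(name):
    """EBSC: no alternative within the effort ceiling produces more utility."""
    best_within_ceiling = max(
        a["utility"] for a in acts.values() if a["effort"] <= X
    )
    return acts[name]["utility"] >= best_within_ceiling

for name in acts:
    print(name, "permissible:", permissible(name))
# A1 permissible: True (nothing within the ceiling beats its 1050 utils)
# A2 permissible: True (supererogatory, but never required)
```

Nothing in this test ever makes A2 required, even though A2 is better both for me and for others; that is the result I find counter-intuitive.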
I agree with Doug that there is a problem with focusing on effort rather than overall cost to the agent. That could easily be corrected, though.
I’d like to hear more about how context sensitivity solves familiar problems with threshold views whilst retaining the basic idea that there is a morally significant threshold of effort or cost.
One problem arises due to the focus on individual actions.
Suppose that the cost threshold is 100.
Compare:
1) D can v1 at cost 100 to provide 1000 utils and v2 at cost 100 to provide 1000 more utils. No actions below cost 100 will produce more utils in either case.
2) D2 can v1 at cost 1 to provide 1000 utils and v2 at cost 101 to provide 1000 more utils. No actions below cost 100 will produce more utils in either case.
It seems implausible that D is required to v1 and v2 in 1) whilst D2 is not required to do both v1 and v2 in 2). The more general problem concerns treating actions individually. What reason is there against simply combining the costs of v1 and v2, rather than treating them individually and then treating one as context for the other? Treating actions individually also requires some account of the individuation of actions, which seems difficult to provide in a non-arbitrary manner.
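A toy tally makes the asymmetry vivid (a sketch under my per-act reading of the threshold; the costs and benefits are those from the two cases above):

```python
# Per-act reading of the cost threshold: each act is required just in case
# its cost falls within the threshold (each act here being, as stipulated,
# the best option available at or below its cost).

THRESHOLD = 100

cases = {
    "D":  {"v1": 100, "v2": 100},  # each act provides 1000 utils
    "D2": {"v1": 1,   "v2": 101},  # each act provides 1000 utils
}

for agent, costs in cases.items():
    required = [act for act, cost in costs.items() if cost <= THRESHOLD]
    total = sum(costs[act] for act in required)
    print(agent, "required:", required, "at total cost:", total)
# D required: ['v1', 'v2'] at total cost: 200
# D2 required: ['v1'] at total cost: 1
```

So D must bear a total cost of 200, whilst D2, who could provide the very same benefits for a total cost of 102, need only bear a cost of 1.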
Another problem is small costs above the threshold that produce great benefits.
1) D can v at cost 100 to provide 101 utils. There is no alternative action that produces more utils.
2) D2 can v2 at cost 101 to save one million lives. The only action below cost 100 produces 101 utils.
It is implausible that D is required to v but D2 is not required to v2.
Overall, I find it difficult to believe that the transition of costs from a) 99 to 100 is powerfully different from the transition of costs from b) 100 to 101. The threshold would retain significance if bearing the extra cost in a) is required even if the extra utility achieved is minimal, whilst bearing the extra cost in b) is not required even if the extra utility achieved is significant. What reason is there to believe that there is a shift at some cost threshold that is like this (whether we focus on individual actions or whole lives)? Doesn’t it seem more plausible to believe that there is a smooth curve, where for each increase in the costs that a person bears, more utility must be achieved to render the action required?
In other words, context sensitivity of the kinds you hint at seems to do a great deal of work in rendering the view plausible. But given these difficulties, we might just conclude that it is difficult to believe that the effort (or cost) threshold has moral significance at all.
In response to Victor’s point (“Why believe there is a morally significant threshold and not a smooth curve?”), I would turn the discussion to reasons. It is my view that satisficing and sufficiency views are best understood as picking out a “shift”: a discontinuity in the rate of change of the marginal weight of our reasons to benefit someone. Understood in this way, we can more clearly assess whether a “shift” is warranted or whether a smooth curve best models the weight of our reasons.
I think that non-comparative, non-instrumental and satiable reasons can explain why a shift is preferred to a smooth curve. To give a simple, and not wholly accurate, example, consider my reasons to secure more cash. If one of my reasons to secure more cash is that I require “enough” money for a bus ticket, once I have enough for a bus ticket my overall reasons to secure more cash shift. Reasons to give me more cash are thus more easily outweighed by considerations of effort, or by reasons to give cash to others for more important pursuits. The plausibility of the shift, as opposed to the curve, rests on the existence of similar, though distinct, reasons that are non-instrumental, non-comparative and satiable. Examples of such reasons are more evident in the literature on political philosophy than in that on consequentialism, but Rawls, Scanlon, and more recently Martin O’Neill, have argued that our reasons to avoid stigma, avoid deprivation and remove objectionable differences in power are of this kind. My worry is that, in terms of “effort”, no particular reason that would support a shift springs to mind, but maybe someone else can think of one.
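To put the contrast in rough formal terms (this gloss is mine, not anything from that literature): let w(y) be the marginal weight of our reasons to give a person a further unit of benefit when she already has y. The smooth-curve view says that w changes continuously; the “shift” view says that at some sufficiency level s,

$$\lim_{y \to s^-} w'(y) \neq \lim_{y \to s^+} w'(y)$$

(or, on a stronger version, w itself drops discontinuously at s, as in the bus ticket case). Framed this way, the question for the effort view is whether any non-instrumental, non-comparative and satiable reason attaches to effort so as to generate such a point s.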
I’m worried about this implication of Effort-based Satisficing Consequentialism:
“If you can achieve better results with less effort, then you’re required to do just that!”
It seemed like one motivation for satisficing consequentialism was to provide the amount of moral freedom we intuitively have. But the Effort-based view seems to take away that freedom in many cases: namely, those in which the least effort-requiring act has the optimific consequences. The Effort-based view still requires us, in these cases, to act in this optimific way.
To illustrate this: intuitively, if I had stayed in my bed today, this would have required the least amount of effort from me. It’s also fairly intuitive that by coming to work I’ve created slightly less goodness in the world. It’s very cold and the buses were not running and I’ll be marking essays all day, so I’m not very happy. In addition, I’m not really creating much utility for others either; in fact, I’m making them do things they would rather not do. So, all in all, the option of staying in bed would have required less effort and it would have produced slightly more utility than what I’m doing now. According to the Effort-based view, I would have been required to stay in bed, and by coming to work I’m doing something wrong. I find this slightly unintuitive.
There probably are fixes to this issue, if we make permissibility a more complicated function of the value of the consequences and of effort than the proposal that’s on the table.
Thanks for the comments, all!
Clayton – your option (c) is permissible (indeed, supererogatory) on the proposed account, as it produces no less (indeed, more) utility than any action below the effort ceiling. (To clarify: it isn’t impermissible for an agent to choose to exceed the effort ceiling to obtain more optimal results. It merely isn’t obligatory.)
Doug – Sure, it would be absurd to condone going out of your way to harm both oneself and others, but EBSC does not condone that. It does condone failure to act in ways that help both oneself and others, but that seems to me a feature not a bug. We all do this all the time — relax with some trashy entertainment when we could instead have worked on some meaningful project (e.g. a new philosophy paper) that would be better for both ourselves and others. As I see it, a major motivation for satisficing is to accommodate such everyday akrasia. Such behaviour is sub-optimal, perhaps even silly of us in a way, but it would seem overly strict to call it impermissible.
Jussi – as noted above, I wouldn’t want to accommodate the moral freedom to gratuitously go out of your way to make things worse. That implication of standard SC views just seems crazily undesirable to me. So I guess I’m not so sympathetic to that particular motivation for SC. I’m more moved by certain ‘demandingness’ worries (on the particular psychological interpretation mentioned in the main post). If you did want to secure a broader range of permissible actions, then you’d probably be better off sticking with a standard version of SC, since this version is designed precisely to avoid that implication.
Victor – right, those are the kinds of problems we can avoid by an appropriately context-sensitive account of how the ceiling X gets fixed in any given situation. Do the complications suggest that effort is not so morally significant? I’m not sure. They might just show that its significance is complicated and context-sensitive! Seriously though, I’m not yet totally sold on satisficing, myself. I mainly just think that views of this form are an improvement over traditional ways of formulating SC. (Do you disagree with this comparative claim?) Whether this view is non-comparatively adequate will depend, I think, on whether the kind of story I develop in my follow-up post (deriving the effort ceiling from a ‘quality of will’ theory of blameworthiness) works out. And that’s very much an open question!
Hi Richard,
I see. That’s very interesting. I’ll need to think about that.
Okay, I didn’t think for too long. That’s probably a bad idea. But here’s what I’m thinking at the moment.
When we’re talking about the work involved in writing a philosophy paper, I’m with you. This sort of effort is quite taxing and one doesn’t need to work on writing a philosophy paper at any particular time to get it written and to reap the benefits from having it written. But suppose we are talking about a case where all I would need to do in order to get some extra benefit for myself and for others is push a button. And suppose that I wouldn’t mind pushing the button. Indeed, suppose that I would enjoy pushing the button. Pushing the button still involves effort. So do you think that it’s permissible for me to refrain from pressing a button (that I would enjoy pressing) when I and others would benefit from my doing so?
I’m assuming that effort is a matter of expenditure of energy. But there are plenty of expenditures of energy that I don’t take to be self-sacrificing at all (e.g., scratching an itch). It’s not just that I think that the sacrifice involved in expending the effort is outweighed by the benefit; it’s that I don’t see that the expenditure involves any sacrifice whatsoever, not even a short-term sacrifice that is outweighed overall. Other expenditures of effort are self-sacrificing, at least in the short term. Such is the case with writing a philosophy paper. So the problem I see with EBSC is that it seems to condone failures to act in ways that help both oneself and others even in cases where one can help both oneself and others at no cost at all (and not just at no net cost) to oneself.
Perhaps, you’ll deny that effort can be entirely costless. But then I want to hear more about how you understand effort such that it necessarily involves some cost to expend.
Hi Richard,
Thanks! I’ll stay in bed on Monday…
Hi Doug, I was thinking of ‘effort’ as a matter of expending willpower or self-control (thus closely related to the phenomenon psychologists discuss under the banner of ‘ego-depletion’). Perhaps there’s some sense in which scratching itches consumes energy, but the way I’m thinking about it, such automatic actions typically don’t require any effort at all. (You don’t need to force yourself to do it. If anything, you might need to expend willpower to resist the urge!)
Given this understanding of effort, do you find it more plausible that there’s always some cost involved? It might not be a significant welfare cost to the agent, so it might well be imprudent for them to refrain from acting. But it is at least understandable, as there is some barrier or obstacle — a kind of psychological inertia — that they would need to overcome.
Depending on the details of the account, a very slight effort-cost might never be enough to excuse suboptimality. It depends on the details of the effort ceiling. (We might want to excuse only “moderate” levels of laziness / lack of self-discipline, where what counts as moderate or extreme laziness depends on various contextual factors.)
Hi Richard,
I’m not sure that captures the ordinary sense of ‘effort’. Suppose that I have to push down on a button quite hard to depress it. But suppose that I quite enjoy pressing down hard on buttons. I find it to be a pleasant experience. In that case, I don’t have to exercise any willpower or self-control to make myself depress the button, but it’s odd to say that it doesn’t involve any effort. But, in any case, you can stipulatively define ‘effort’ so that it means ‘expending willpower or self-control’. And, perhaps, that’s the best way to avoid my putative counterexamples to EBSC.
But, if you do go this route, then I worry that you might get other counter-intuitive results. Suppose that Jane doesn’t have to exert any willpower or self-control to get herself to build homes every Saturday for Habitat for Humanity. She just really likes building homes for needy people in the same way that I really like to press down hard on buttons. Does that mean that her building homes for Habitat for Humanity every Saturday doesn’t require her to exert any effort (in your sense of ‘effort’) and does it further follow on EBSC that she is obligated to build homes every Saturday for Habitat for Humanity?
Also, some people (psychopaths and pedophiles) have to exert a tremendous amount of willpower to avoid harming people. Me, not so much. I find it easy to refrain from harming people. It doesn’t require any willpower or self-control for me to refrain from raping, torturing, and murdering others. Does this mean that, on EBSC, a psychopath and I can show the same concern for others but have different obligations with respect to refraining from harming others? That is, do psychopaths have a less stringent duty to refrain from harming others than I do, because refraining from harming others requires much less effort (in your sense) on my part than on theirs?
Perhaps, you just bite the bullet on this sort of case, but you can get similar sorts of examples involving aid. Perhaps, Mother Teresa was just naturally inclined to want to help lepers. But for me it would take a huge act of willpower. Is my obligation to help lepers less stringent than Mother Teresa’s?
Yeah, I think the view is going to have some implications along those lines. We might avoid the psychopath problem by requiring that the effort ceiling be at least at the level that would be required for the agent in question to act with a modicum of good will towards others. (That might be a much higher level for psychopaths than for saints.)
But I think it’s quite natural for a proponent of this view to think that ordinary people are more easily excused from helping lepers than is a Mother Teresa who is naturally inclined towards acting so well. Perhaps this doesn’t entirely fit with common intuitions, but it does seem to fit well with the specific “demandingness” concerns that motivated me to explore this theory. So I think I’m happy enough to embrace this implication.
(Importantly, Mother Teresa may still count as more virtuous than us, even if she ends up acting “impermissibly” more often than we do. As I suggested to “Unknown” back on my blog, this view naturally goes along with a deflated understanding of the importance of acting permissibly. It may be better to act impermissibly by high standards than permissibly by low standards, after all.)
Hi Richard, thanks for that.
I still don’t understand, after a quick read of your follow-up post, what you mean by context sensitivity, and whether we could really regard the view as a satisficing view once this is properly specified. I thought that the idea of satisficing is that ‘enough’ plays a fundamental rather than merely derivative role in the theory – i.e. ‘enough’ isn’t just the outcome of a proportionality judgement, for example, where greater effort is required to secure better outcomes on a proportional basis. I wasn’t sure that your account was distinctively satisficing, as I understand it. It looks like any non-maximizing view might count as satisficing. (I should say, though, that I may just not understand what ‘satisficing’ is supposed to mean – perhaps you could spell that out.)
On the relationship between effort and virtue, wouldn’t it be more plausible to control for the increased effort required by vices? In your response to Doug you talk of excuses. This is the better way to think here – perhaps psychopaths have excuses because of the effort required to do well. But we don’t normally think that ‘D had an excuse for not v-ing’ implies ‘D lacked a duty to v’. We normally think, rather, that D had an excuse for failing to do his duty.
Hi Victor, yeah I merely mean this view to be ‘satisficing’ in the weak sense that it’s neither scalar nor maximizing.
Personally, I don’t think that I’d want to excuse psychopaths and others from meeting at least some minimal absolute standards of concern for others, no matter the effort involved. But I grant it’s an option worth considering (especially if, as you suggest, we understand the excuses as merely mitigating blameworthiness rather than moderating their moral obligations).
Hi Richard,
Since none of your other commentators have raised either of these two points, I fear that they may just be somewhat silly, so I apologize in advance if this is so. Nevertheless, I hope you will satisfy my curiosity.
1. I see nothing internal to consequentialism that motivates the need to carve out a permission for agents to perform less than the best available act. Therefore, the entire SC enterprise seems entirely ad hoc, and the need to resort to it an excellent reason – all other things being equal – to prefer competing normative theories that do not require us to untangle the complications (e.g. what is the level of X) arising therefrom.
2. Arguably, it is in the very nature of any plausible moral theory that it demand great, perhaps even the ultimate, sacrifice under appropriate circumstances. From the psychological perspective, such a sacrifice may involve supreme effort (above any X you might specify). Consider the following case. A microbiologist is kidnapped by terrorists and given the following choice: “cooperate with us in constructing a certain toxin or die in a very bad way.”
The scientist determines, based on her best information, that if she cooperates a million people will die, and that if she refuses, she will be tortured and then killed. Refusing to cooperate may take heroic effort, but it seems to me that she is morally obligated to take the bullet (although we might have some sympathy if she doesn’t – but this is a different thing from determining her moral obligation). Yet if I understand your theory, she would be justified in cooperating. This seems highly counter-intuitive to me.
Hi Mark, the general structure of the theory doesn’t mandate any particular answer to the case you describe. It may be that the correct way to fix ‘X’ in that situation is to set it at whatever level of effort (however high) is required to save the million lives.
On the more general issue of “why satisficing?”, I can think of two main proposals (though again, I’m not sure how far I really endorse this myself):
(1) Reflective equilibrium: You might recognize (for whatever reasons) that some form of consequentialism must be correct, and yet have a methodological preference for a version of it that captures more of our (permissive) intuitions about cases.
(2) For the more theory-driven (like myself), my follow-up post proposes a more principled reason for thinking that there is a significant normative status — whether you want to call it ‘permissibility’ or not — that is non-maximizing and closely linked to blamelessness. SC can then be understood as an account of that normative property.
(You’re very welcome to comment further there, on whether or not you find my case to be persuasive!)
Hi Richard, Thanks for the explanations and the invitation.