Some philosophers – let’s call them “teleologists” – believe that there is an intimate connection between deontic terms like ‘required’, ‘ought’, and ‘permissible’, on the one hand, and evaluative terms like ‘better’ and ‘best’, on the other.
Teleologists face a problem with the intuitive idea of supererogation. This is the idea that sometimes we are not morally required to do the morally best thing, but may permissibly take options (e.g. to pursue our own personal projects, or to safeguard our own interests) that are morally suboptimal. As Sam Scheffler would say, we sometimes have an agent-centered prerogative to act in morally suboptimal ways.
In this post, I shall argue that two attempts at solving this problem – a simple threshold view, and a dual-ranking view – face serious intuitive difficulties. The best solution, I shall suggest, is not a dual-ranking view, but a triple-ranking view.
Teleologists think that the acts available to an agent can be ranked, from morally worse to morally better. (In my view, this need not be a ranking of the acts’ [total] “outcomes” or “consequences”, in the consequentialists’ style: it may fundamentally be a ranking of the acts themselves.)
Intuitively, it seems clear that supererogatory acts of heroic or saintly self-sacrifice are morally better than the acts that are just barely morally permissible. So, according to one simple view, there is just a threshold on the moral ranking of acts. What is morally required of the agent is that she should do something that is at least as good as the threshold. In effect, this is a kind of satisficing view: what is required of the agent is just that she must do something that is good enough.
However, this approach seems intuitively wrong. Suppose that I make a big sacrifice of my own interests to save some people’s lives, but I save these people in a way that doesn’t save their sight – they all end up becoming blind – even though at no greater cost to myself I could have saved their sight as well as their lives. At least so long as these people’s losing their sight wasn’t part of my intention in acting, this act seems to be morally better than my just doing the bare minimum required – i.e., just doing nothing and allowing all those people to die. So the act is above the threshold. But it seems morally impermissible: in this case, I am morally required, if I save these people, to save them in a way that saves their sight as well as their lives.
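To make the structure of the problem vivid, here is a minimal sketch – my own toy encoding, not anything in the original argument – with made-up moral scores (only their ordering matters) and the bare minimum stipulated to sit exactly at the threshold.

```python
# A toy encoding of the simple threshold ("satisficing") view, applied to the
# rescue case just described. The scores and the threshold are stipulations
# for illustration only.

moral_value = {
    "do nothing":            0.0,   # the bare minimum
    "save lives, not sight": 7.0,
    "save lives and sight":  9.0,
}
THRESHOLD = 0.0   # "good enough" = at least as good as the bare minimum

def permissible_threshold(act):
    """Permissible iff the act is at least as morally good as the threshold."""
    return moral_value[act] >= THRESHOLD

# Every act clears the threshold, so the sight-destroying rescue wrongly
# comes out as permissible: the intuitive problem described above.
print({act: permissible_threshold(act) for act in moral_value})
```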
Reflections on this case may seem to motivate a “dual-ranking” view of the sort that has recently been developed by Douglas Portmore.
One key insight behind the “dual ranking” view is that the notion of what is morally required is not the same as the notion of what one ought to do, all things considered. I ought, all things considered, to buy a new pair of shoes, but I am not morally required to do so. There seem to be two main differences between these notions.
- The reasons that make it the case that I ought to do something can include non-moral reasons, but the reasons that make it the case that I am morally required to do something must primarily be moral reasons.
- If I am morally required to do something, then – unless I have an adequate excuse – I can be appropriately blamed by other people for failing to do it; but there are some acts that I ought to have performed (like buying a new pair of shoes) that no one else is entitled to blame me for.
What I ought to do, all things considered, is determined by how much all-things-considered reason I have in favour of each of the available acts. In this way, these acts can be ranked in terms of how much reason, all things considered, I have for doing them. So, we can make sense of two different rankings of acts: a ranking of acts in terms of how much moral reason I have for these acts, and a ranking in terms of how much reason I have for these acts all things considered. According to the dual-ranking view, an option is morally impermissible if and only if it is inferior to an alternative on both rankings.
This dual-ranking view handles the case that we have just discussed. In that case, the option of saving-no-one is not inferior to any alternative on the ranking in terms of reasons all-things-considered (and so this option counts as permissible), but the option of saving-these-people’s-lives-but-not-their-sight is inferior to the option of saving-both-their-lives-and-their-sight on both rankings (and so counts as impermissible).
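Here is the same case run through a toy version of the dual-ranking test – again my own illustrative encoding, with made-up ordinal scores, and reading the test as requiring one and the same alternative to be superior on both rankings.

```python
# Two rankings over the same three acts; only the orderings matter.
acts = ["do nothing", "save lives, not sight", "save lives and sight"]

moral = {"do nothing": 0, "save lives, not sight": 7, "save lives and sight": 9}
# All-things-considered: the big personal sacrifice means "do nothing" is not
# inferior to either rescue, while the pointlessly worse rescue is inferior to
# the better one (stipulated to mirror the case in the text).
atc   = {"do nothing": 5, "save lives, not sight": 4, "save lives and sight": 5}

def impermissible_dual(act):
    """Impermissible iff one and the same alternative beats it on both rankings."""
    return any(moral[alt] > moral[act] and atc[alt] > atc[act]
               for alt in acts if alt != act)

for act in acts:
    print(act, "->", "impermissible" if impermissible_dual(act) else "permissible")
# Only "save lives, not sight" comes out impermissible, matching the verdicts above.
```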
Nonetheless, this view has problems with other cases. Suppose that I face four options: (a) I could sacrifice my own life and thereby save 5,000 people’s lives; (b) I could do nothing and save no one; (c) I could save 1,000 people at a cost to myself of $2,000; or (d) I could save 1,000 people at a cost to myself of $5,000. In the moral ranking, let us assume, (a) is morally best, and (b) is morally worst, while (c) and (d) lie in between (a) and (b). In the ranking in terms of all-things-considered reasons, let us suppose that (a), (b) and (c) are none of them inferior to any alternative, while (d) – in which I pointlessly impose an additional cost of $3,000 on myself – is inferior to (c).
In this case, the dual ranking view implies that option (d) is morally impermissible: it is inferior to option (a) in the moral ranking, and inferior to (c) in the ranking in terms of reasons all-things-considered.
But it is surely much too demanding to say that (d) is morally impermissible. Suppose that I help other people in a way that is not heroically virtuous, but goes significantly beyond the bare minimum that is morally required. Then, even if I do this in a way that imposes some unnecessary costs on myself, my action is surely not morally impermissible. No one else would be entitled to blame me for making the mistake of imposing these unnecessary costs on myself, given that I am also clearly going above and beyond the call of duty in helping people.
To solve this problem, we need a different approach. Roughly, I propose that what we need is a triple-ranking view. According to this view, there are three rankings: the ranking in terms of moral reasons, the ranking in terms of all-things-considered reasons, and the ranking in terms of non-moral reasons (such as reasons of self-interest and the like).
On this triple-ranking view, an act is morally impermissible only if (i) it is inferior to an alternative on the all-things-considered ranking, and (ii) that fact is explained by the moral reasons against the act, and not by the non-moral reasons against it.
In effect, to be morally impermissible an act must not just be inferior to an alternative act in both the moral ranking and the all-things-considered ranking; it must also be inferior to an alternative act in the all-things-considered ranking precisely because it is inferior to an alternative in the moral ranking, and not merely because it is inferior to an alternative in the purely non-moral ranking.
In the example that I gave, the reason why saving 1,000 people at the unnecessarily high cost of $5,000 to myself is suboptimal in the all-things-considered ranking is not that it is morally suboptimal; the reason is that the act is suboptimal in prudential / self-interested terms. This is why the fact that it is suboptimal in both rankings is not enough to make the act morally impermissible.
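To fix ideas, here is a toy encoding of the triple-ranking test applied to the four-option case, with the three rankings stipulated as ‘strictly better than’ pairs. ‘Explained by the moral reasons, and not by the non-moral reasons’ is given one deliberately crude reading here – the all-things-considered-superior alternative is also morally superior and is not superior in the purely non-moral ranking – and that reading is an illustrative assumption of mine, not part of the proposal itself.

```python
# Options: a = die and save 5,000; b = do nothing;
#          c = save 1,000 at $2,000; d = save 1,000 at $5,000.
options = ["a", "b", "c", "d"]

# (x, y) means "x is strictly better than y" on that ranking.
moral_better    = {("a", "b"), ("a", "c"), ("a", "d"), ("c", "b"), ("d", "b")}
atc_better      = {("c", "d")}   # the only all-things-considered inferiority stipulated
nonmoral_better = {("b", "a"), ("c", "a"), ("d", "a"),
                   ("b", "c"), ("b", "d"), ("c", "d")}   # prudential ranking (stipulated)

def impermissible_triple(x):
    """Impermissible iff some alternative is ATC-better than x and that
    inferiority is, on this crude reading, down to the moral reasons."""
    for y in options:
        if (y, x) in atc_better:
            explained_morally = (y, x) in moral_better and (y, x) not in nonmoral_better
            if explained_morally:
                return True
    return False

for x in options:
    print(x, "->", "impermissible" if impermissible_triple(x) else "permissible")
# Every option, including d, comes out permissible: d's inferiority to c is
# traceable to the prudential ranking, not the moral one.
```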
Some readers might think it is too complicated to appeal to so many different rankings. But each of these rankings just reflects a different value or family of values. The idea that there are many different rankings is just the familiar idea that there are plural and conflicting values. This familiar idea seems to be necessary to make sense of our intuitive judgments of moral permissibility.
Interesting. I think I’m skeptical about this proposal for exactly the same reasons that I’ve been skeptical about the dual-ranking view. The problem seems to be that it makes actions with trivial consequences too easily impermissible, and so the view does not generate enough moral freedom. So, imagine that you could have either pizza or Chinese for dinner after a departmental colloquium, and the only difference would be that pizza would give everyone a tiny bit more pleasure. In this case, it seems to me that Chinese is inferior to an alternative on the all-things-considered ranking, and that fact is explained by the moral reasons against the act (I’m assuming that everyone’s happiness is a moral reason for us to act) and not by the non-moral reasons against it. But it doesn’t seem like going for Chinese is impermissible in this case.
Now, in Doug’s book, he has a response to this which is to distinguish between requiring and enticing reasons and perhaps you might have the view that all genuine moral reasons are requiring reasons. However, I worry that distinguishing between requiring reasons and enticing reasons in this context assumes the distinction between what is permissible and what is impermissible rather than explains it in terms of value. I should also say that classifying reasons under the categories of moral and non-moral reasons is pretty difficult.
A bit of self-advertisement: I tried to give an account of permissions for the teleologists in my recent Utilitas paper, “Consequentialist Options”. Instead of different evaluative rankings, my proposal was based on the idea of the value of having a choice between many permissible options. Doing things in this way does not require many different evaluative rankings.
I recall being persuaded by my colleague Ben Bradley that consequentialist satisficing would not work. Here is the paper, http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=439659, but I do not recall the arguments. I hope to get back up to speed on what he said and then see whether Ralph’s view would solve the problems Ben pointed out.
Hi Ralph,
On my dual-ranking view, an option is morally impermissible if and only if there exists an alternative to which it is inferior on both the moral ranking and the all-things-considered ranking. So to establish that this view implies that option (d) is impermissible, it’s not enough to show that option (d) “is inferior to option (a) in the moral ranking, and inferior to (c) in the ranking in terms of reasons all-things-considered.” You need to show that option (d) is inferior to a single alternative option on both rankings. Now, although option (d) is inferior to (c) on the all-things-considered ranking, it is not inferior to (c) on both rankings. Indeed, it’s tied with (c) on the moral ranking. So you must be assuming that (d) is inferior to option (a) not only on the moral ranking but also on the all-things-considered ranking. That is, you must be assuming that I have more reason, all things considered, to sacrifice my own life to save 5,000 lives than to sacrifice $5,000 to save 1,000 lives. This is not obvious to me. Do you have any argument for why we should accept this assumption? And doesn’t your argument rest on this assumption? Moreover, if I were convinced that this assumption were true, then shouldn’t I think that option (d) is unreasonably selfish given that option (a) is available? That is, insofar as I find it intuitive to think that I have more reason, all things considered, to sacrifice my own life to save 5,000 lives than to sacrifice $5,000 to save 1,000 lives, shouldn’t I also find it intuitive to think that (d) is impermissible?
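To make the structure of this point explicit, here is a toy encoding of the dual-ranking test – purely illustrative: the rankings are stipulated from Ralph’s example, and the queried assumption is exposed as a toggle.

```python
options = ["a", "b", "c", "d"]
# (x, y) means "x is strictly better than y" on that ranking.
moral_better = {("a", "b"), ("a", "c"), ("a", "d"), ("c", "b"), ("d", "b")}

def impermissible_dual(x, atc_better):
    """Impermissible iff one and the same alternative beats x on both rankings."""
    return any((y, x) in moral_better and (y, x) in atc_better
               for y in options if y != x)

atc_as_stipulated   = {("c", "d")}               # Ralph's original stipulation
atc_with_assumption = {("c", "d"), ("a", "d")}   # plus: (a) ATC-better than (d)

print(impermissible_dual("d", atc_as_stipulated))    # False
print(impermissible_dual("d", atc_with_assumption))  # True
```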
Thanks, Douglas Portmore!
I’m sorry for the slip in my argument. However, I believe that I can still make my point if I change the example slightly.
Your response to my original example relied on the suggestion that the all-things-considered ranking may be a partial ranking. You suggest that although option (d) is inferior to option (c) on the all-things-considered ranking, (d) is not inferior to (a) on this ranking. But then options (c) and (a) cannot be exactly equally good, since then (d) would have to be inferior to (a). Since neither (a) nor (c) is better than the other in the all-things-considered ranking, it must be that (a) and (c) are simply unranked in relation to each other.
But there will be cases where it’s not plausible to say this. Here is the recipe for constructing cases of this sort.
Suppose that (a) and (c) are extremely similar, and so seem to involve all of the same kinds of reasons and values. Then we should be able to find cases where although (a) is morally better than (c), the non-moral advantages of (c) exactly counterbalance the moral advantages of (a), so that they are exactly equally good in the all-things-considered ranking.
Then if I do an act (d), which is as similar to (c) as possible, except that it involves imposing some unnecessary costs on myself, it will count as impermissible, according to this toy version of the dual-ranking view.
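Here is one concrete instance of the recipe, with made-up scores and with the all-things-considered score taken, purely for illustration, to be the sum of the moral and the non-moral scores (so that the exact counterbalancing can be represented).

```python
#                   moral, non-moral
a = (10.0, 0.0)   # morally better
c = ( 6.0, 4.0)   # non-moral advantages exactly counterbalance a's moral advantages
d = ( 6.0, 3.0)   # just like c, except for an unnecessary cost to myself

def atc(x):
    """All-things-considered score, here simply the sum of the two components."""
    return x[0] + x[1]

# The single alternative a beats d on both rankings, so d counts as impermissible:
print(a[0] > d[0] and atc(a) > atc(d))   # True
# ...whereas c is safe: a beats c morally but not all things considered.
print(a[0] > c[0] and atc(a) > atc(c))   # False
```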
Thanks David Sobel!
Please let me know what you think about whether Ben’s arguments undermine anything that I have said.
Thanks Jussi!
I’m willing to say that it is (slightly) morally impermissible to go for Chinese food in your case.
However, when an act is only very slightly wrong, we will find it hard to distinguish the case from cases where the act is not wrong at all. So it will seem a bit doubtful to us whether the act is wrong or impermissible at all, I agree.
Still, I don’t think that we should treat this doubtfulness as a strong reason for denying that the act is impermissible at all. (My point here is modelled on what my colleague Mark Schroeder says about how difficult it is for us to distinguish cases involving very weak reasons from cases involving no reasons at all.)
Hi Ralph
I find that strategy much less appealing and much more revisionary here than in the case of reasons (and even there I still don’t accept Mark’s view). One reason for this is that I find it attractive to think that there is a robust connection between rightness and wrongness of actions and the appropriateness of reactive attitudes. So, because there is no appropriate blame or criticism here, I find it attractive to think that there is no sense in which trivially suboptimal actions are wrong. In fact, the only reason to think that they are wrong is a prior commitment to a theory that entails that they are. However, I don’t quite see why I should accept such a theory given that there is a teleological theory of impermissibility that has no such consequence and which can deal with the other cases in a satisfactory way (or so I have tried to argue).
Jussi —
I agree with your thought that “there is a robust connection between rightness and wrongness of actions and the appropriateness of reactive attitudes”. But the reactive attitudes themselves come in degrees: we resent some acts more than others, and we blame some acts more than others, and so on.
On the face of it, there is no smallest degree of blame that we are capable of: however mildly and gently you blame one act, it is possible to blame another act even more mildly and gently. Eventually, as these degrees of blaming become smaller and smaller, we will lose the ability to distinguish between very mild degrees of blaming and no blaming at all. So, I do think that extremely mild reactive attitudes are appropriate in your Chinese food case — perhaps so mild that it would be wrong to express these reactive attitudes verbally in any way.
More generally, it seems to me that cases of this sort, involving very slightly impermissible acts and very mild levels of appropriate blame, seem bound to exist. So I don’t see why you think it is so implausible and “revisionary” to interpret your Chinese food case as being of this kind.
It’s intriguing that many folks seem to assume that the problem of “permissible suboptimality” has to find a fix on the value side.
Aquinas, to take a historical example, distinguished two ways of being “morally required”: one by universal commandments of natural law, and one by non-universal counsels of perfect virtue. Supererogatory acts exhibit a mode of perfection that some but not all are required to achieve.
Aquinas’s own account would not get much traction today, but it does at least suggest a type of solution that the literature has not explored, namely that “morally required” might be systematically ambiguous.
As someone who suspects that the distinction between moral and non-moral reasons will never be cashed out in a non-question-begging fashion, I think this approach could be valuable.
Thanks, Michael!
The philosophers who think that we have to explain “permissible suboptimality” in terms of values are the ones whom I called the “teleologists” – i.e. precisely the ones whom I was discussing in my post. There are plenty of non-teleologist philosophers, and they would have to find another way to account for this phenomenon.
I’m not sure that I understand Aquinas’ view, as you interpret it, but I confess that I don’t like the sound of “non-universal counsels of perfect virtue”. It suggests a kind of moral elitism, according to which some agents are subject to more stringent requirements than others (as if the fact that these agents are required to do more than other agents were explained by their having a superior status of some kind…).
In general, I agree with you that the language of ‘requirements’ is multiply context-sensitive. In this context, however, I was using the term to express the concept of what is necessary for avoiding moral wrongdoing – where at least in the absence of an excuse, moral wrongdoing merits a certain familiar kind of blame or negative reactive attitude. (So, even if the super-virtuous person says, “I have to save those people”, she doesn’t mean that she is in my sense morally required to save them – that is, that it is necessary for her to save them if she is to avoid moral wrongdoing. What she means is something like that it is necessary for her to save them if she is to act in a way that she would regard as tolerable or acceptable in the circumstances.)
I also agree that it is a non-trivial task to explain the distinction between moral and non-moral reasons. I have tried to say something about this in my paper “The Weight of Moral Reasons”, in Oxford Studies in Normative Ethics (2013).
Ralph, it seems likely that the moral ranking and the non-moral ranking together determine the all-things-considered ranking. If that’s true, then we don’t really need a triple ranking view.
Do you think the moral ranking and the non-moral ranking together determine the all-things-considered ranking?
Thanks, Jamie!
Strictly speaking, we probably need more than rankings to determine how much reason all-things-considered there is for each of the available acts: we probably need scales that measure how big the difference is between the available acts in terms of the relevant values.
But yes, I do think that once we have the scales measuring the available acts in terms of the relevant values — including both moral and non-moral values — this will determine the all-things-considered ranking. So, strictly speaking, as you rightly figured out, calling it a “triple ranking” view is not perfectly precise.
Thanks Ralph. I agree that there is mild wrongness and that blame also comes in degrees, including in a mild form. However, the suggested view predicts that, in every case where there is an alternative that is a tiny bit better all things considered, and where this is explained by there being a tiny bit more moral reason to do that alternative, there is some wrongness. And it seems that this is probably the case with everything we do – there is probably always an alternative like that which is just a little bit superior to what we do.
However, common sense morality seems to give us much more leeway. It seems to recognise that we usually face a wide range of completely morally permissible options – we do nothing wrong unless we start harming people, breaking our promises, killing, failing to help people when we can easily do so, and so on. Whether we play chess tonight, go dancing, watch TV, visit friends, or work on a philosophy paper, we are not doing anything wrong.
Now, I do grant that a version of Mark’s proposal explains these intuitions away on pragmatic grounds. What I am wondering is why we should accept a theory that has this unintuitive consequence. In the case of reasons, Mark has the story about Ronnie and Bradley and giving a unifying explanation of why one of them has a reason to dance and the other doesn’t. This leads him to a naturalist reduction of reasons to certain facts that increase the probability of actual desires being satisfied, which then leads to the view that pretty much everything is a reason – just a very weak one. He claims that this is a bullet worth biting because it is the only way to explain Ronnie’s and Bradley’s reasons. So, I would just like to know what plays the role of that argument here. Why should we accept a view about impermissibility that has the same unintuitive consequence? If two teleological views can deal with all the same other cases, but one of them has this consequence, is that not a reason to prefer the other view?
Well, rank uncertain prospects. Then you can extract a cardinal (interval) scale.
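For what it’s worth, here is a minimal sketch of the kind of calibration being suggested – a standard von Neumann–Morgenstern-style construction, with the agent’s indifference probabilities simply made up as inputs.

```python
# Utilities on a 0-1 interval scale: u(x) is the probability p at which the
# agent is indifferent between x for sure and a gamble that gives the best
# option with probability p and the worst option otherwise.

def cardinal_scale(best, worst, indifference_prob):
    scale = {best: 1.0, worst: 0.0}
    scale.update(indifference_prob)   # each option's indifference p becomes its utility
    return scale

# Hypothetical input: indifferent between the sight-less rescue for sure and
# an 85% chance of the full rescue (otherwise doing nothing).
print(cardinal_scale("save lives and sight", "do nothing",
                     {"save lives, not sight": 0.85}))
```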
Ralph, could you say a bit more about the notion of explanation you have in mind? The worry I have is that you could have a case where something’s occupying a non-maximal rank in the ordering of all-things-considered reasons depends on both the moral and the non-moral reasons. Here’s a case:
There are moral reasons not to eat either shellfish or fish; if pressed, I’d say that there’s more moral reason not to eat fish, but the difference is slight. But I like fish and I kind of like shellfish, and I f)(&)(#@&ing hate vegetables. I’ve got the option to eat clams, perch, or veggies. On balance, I have most reason to eat shellfish, say (this seems pretty plausible). It’s plausible that it’s morally impermissible to eat perch here, but the explanation of the fact that it’s non-optimal seems to rely both on the moral facts (shellfish is morally better to eat than fish) and the non-moral facts (I prefer fish to shellfish and both to veggies). My worry, to be precise, is that we can’t explain why fish is lower than shellfish in the all-things-considered ranking just by using the moral reasons, since, if the non-moral facts were different (say I REALLY liked fish and kind of thought shellfish were okay), it wouldn’t be. We need both the moral reasons and the distribution of non-moral reasons to get the right verdict.
Does that seem right or have I misunderstood you? I would think a way to finesse this is to let moral reasons be the difference maker in the explanation, but I’m not sure if that screws up other bits of your theory.