Here’s another entry on the paradox of deontology (the first was here). Let a deontological restriction be what Jeffrey Brand-Ballard has appropriately called a nonminimizing restriction: put intuitively, these are duties that forbid performing some act even when doing so would minimize the overall number of times that forbidden act is performed. E.g., Agent 1 must not kill Victim 1, even if killing Victim 1 would prevent Agents 2-6 from killing Victims 2-6. As I’ll recap in a moment, such restrictions are said to be paradoxical. Two relatively recent articles in Ethics have made this kind of claim: if deontologists can provide a rationale for restrictions, that’s enough to solve the paradox. I don’t believe that’s true.
First consider the two claims. Brand-Ballard (“Contractualism and Deontic Restrictions,” Ethics 114 (2004): 269-300) aims to examine whether contractualism can provide a rationale for restrictions (he thinks it can’t, but put that aside here). To provide such a rationale, he says, contractualism needs to show that (in the above case, which is different from the one he considers) Victim 1 can reasonably reject a rule permitting minimizing violations of the duty not to kill and that no one else (in particular, Victims 2-6) has at least equal reason to reject the opposite rule. “Otherwise, the paradox of nonminimizing restrictions recurs.” (p. 282)
Similarly, Paul Hurley writes: “All that is necessary…is to sketch even the outlines of an alternative [i.e., non-consequentialist] rationale that supports agent-centered restrictions, and the pressure to abandon agent-centered restrictions is dissipated.” (“Agent-Centered Restrictions: Clearing the Air of Paradox,” Ethics 108 (1997): 120-146, p. 131).
So both Hurley and Brand-Ballard seem to see the issue this way: the paradox of restrictions arises, according to the paradox’s proponents, only insofar as deontology or contractualism cannot provide a rationale for restrictions. In particular, restrictions violate “maximizing rationality,” which holds that when something is bad, as duty violations are, we should minimize occurrences of that thing. On this view, it is the pull of maximizing rationality’s claim, that more restriction violations is worse than fewer violations, that prevents deontology/contractualism from providing a rationale for restrictions. Correlatively, solving the paradox comes down to providing a rationale for restrictions.
Again, I think this is a misrepresentation of the problem. First some caveats. (1) The putative inability of deontology to provide a rationale for restrictions is a (putatively) real problem. So Hurley and Brand-Ballard are right to focus on it. (2) I like Hurley’s answer to the rationale problem (long story short: the impartial rationality employed by, e.g., Kantian legislators can provide a rationale where maximizing rationality can’t). (3) I like Brand-Ballard’s discussion of all the straw-manning that has gone on in this debate, and I like his argument that contractualism has a difficult time providing a rationale for restrictions.
All that said, though, I don’t think it’s correct to say that solving the paradox is a matter of providing a rationale for restrictions, and I think that Scheffler made the relevant point in his reply to Foot’s “Utilitarianism and the Virtues” (in Scheffler (ed.), Consequentialism and its Critics, pp. 224-242). For Foot was arguing there that virtue ethics might be able to provide a rationale for restrictions. But even granting, for the sake of argument, that it has this ability, a further problem remains. As Scheffler points out, it only means that “human practical reason may be at war with itself” (Scheffler, “Agent-Centered Restrictions, Rationality, and the Virtues,” p. 259). That is, even granting that virtue ethics or deontology or contractualism can provide a rationale for restrictions, we still have the problem that maximizing rationality gives us a rationale to not have restrictions, namely, that more duty violations is worse than fewer duty violations. And, while you can say all you want about rationales for restrictions, that principle of maximizing rationality also seems pretty plausible. So, in short, I think that the dialectical principle endorsed by Hurley and Brand-Ballard, that solving the paradox of deontology amounts to providing a rationale for restrictions, neglects Scheffler’s important point here: even if restrictions have a rationale, they still seem inconsistent with that truism of maximizing rationality (hence, ‘paradox’ rather than ‘falsehood’).
So, again, the rationale problem is a real problem, and we need an adequate solution (again, I’m partial to Hurley’s, but that’s a bit beside the point here) — that is, Hurley’s and Brand-Ballard’s projects are important. But they understate the issues involved when they suggest that the project of providing a rationale for restrictions is identical to the project of solving the paradox of deontology. Rather, defenders of restrictions also need to show that practical reason isn’t at war with itself, i.e., that restrictions (whatever one’s favored rationale) can be made consistent with the claim of maximizing rationality that more duty violations is worse than fewer duty violations. (Conveniently enough, I’ve got a paper defending deontology on just that second issue of consistency, but I’ll save that one for another day.)
Josh: Would you admit that by providing a certain type of rationale for nonminimizing restrictions one can solve the paradox of nonminimizing restrictions? (I won’t call it the paradox of deontology, because I believe that it is a mistake to equate deontology with the view that there are nonminimizing restrictions.) I have in mind an agent-relative consequentialist rationale for nonminimizing restrictions, where one holds that, from the agent’s perspective, the state of affairs where she violates a restriction is worse than a state of affairs where five others each violate the same restriction. If we accept that the disvalue of a restriction violation is agent-relative, then it’s not the case that maximizing rationality gives us a rationale to not have restrictions. So the project of providing a rationale for nonminimizing restrictions and the project of solving the paradox of nonminimizing restrictions may not be as distinct as you seem to suggest. Of course, I agree that it’s important to note that it’s not enough to provide a rationale for restrictions if that rationale doesn’t reconcile restrictions with maximizing rationality.
Doug,
Yes, I agree. As Scheffler points out (in that same article, I think), one strategy is to reconcile restriction-rationality with maximizing rationality, which is what you seem to be doing by suggesting that the restricted action is (under your suppositions) the action that minimizes disvalue.
We should note two things about this. First, the type of agent-relative consequentialism you suggested actually does do what I said needed to be done: provide a rationale for restrictions and make them consistent with maximizing rationality. So, I’d still maintain that there are two tasks to be done, not just the one as Brand-Ballard and Hurley suggest. So, I’d disagree with your claim that “the project of providing a rationale for nonminimizing restrictions and the project of solving the paradox of nonminimizing restrictions may not be as distinct as you seem to suggest.” I still want to say that the paradox of restrictions isn’t solved merely by providing a rationale — it’s just that your theory satisfies both desiderata in one step.
Second, as Scheffler points out, and as you’re right to suggest, the kind of rationale you offer won’t be available to deontologists of any recognizable stripe. So we should say that a nonminimizing restriction becomes a deontological restriction when the principle on which it is grounded is somehow deontological. (Which then should provide a rationale but won’t necessarily be consistent with maximizing rationality.)
Off the top of my head here, Josh-
I haven’t read the Brand-Ballard piece to know why he holds that a contractualist theory fails to justify nonminimizing restrictions, and I’d be curious to know why. But I’d have thought that such an approach would be attractive as a way of justifying such restrictions in a recognizably deontological way. That is, at least Scanlon’s contractualism would suggest that practical reason is not at war with itself precisely because maximizing rationality isn’t all there is to rationality, and certainly not all there is to moral rationality. Within that version of contractualism, the principle of maximizing rationality is not “pretty plausible.” In the end, you’re right to stress that perhaps deontologists who aim to justify nonminimizing restrictions must at least indicate how these restrictions relate to maximizing rationality. But how could we show that (a) nonminimizing restrictions are justified, but are (b) consistent with maximizing rationality, and (c) (not following Doug’s consequentialist suggestion and treating the restrictions as genuinely deontological) these restrictions are not justified by being counted as states of affairs or consequences to be maximized? I guess I don’t see how that could be done, so isn’t a direct attack on maximizing rationality the most likely way for deontologists to complete this project?
MC
Michael,
You’re right that the natural strategy here is to say that maximizing rationality isn’t all there is to rationality, especially with respect to moral rationality. While Brand-Ballard doesn’t really elaborate in that way, that kind of strategy, in essence, is the lynchpin of Hurley’s view. So he offers “impartial rationality” as a way of limiting “impersonal” rationality (of which maximizing rationality is one element). But I don’t think, ultimately, that we (including me, a good deontologist) can deny the plausibility of this principle: “more duty violations is worse than fewer duty violations.” We should be able to recommend to agents that they should minimize their wrong conduct. Now, that’s not to say that such a principle can’t be overridden; it’s just to say that it’s highly plausible. But if that’s so, then when impartial rationality violates this principle of impersonal/maximizing rationality, practical reason does seem “at war with itself.”
(As for your (a)-(c), that too is a natural approach, I think. But I also think that we should want to keep the maximizing principle. So, instead, in my paper I offer a theory on which, no matter what one’s rationale is, restrictions can always be consistent with maximizing rationality. If I’m right, I think that will be the most direct solution to the problem. Unfortunately, I’m going to cop out a bit and not explain the rest of that paper’s solution, because presenting it would be a bit too long here.)
There’s something that I don’t understand here. Deontological restrictions, as you call them, are commonly supposed to capture a distinctive feature of deontological moral theories. But that common supposition is, I think, mistaken. Perhaps surprisingly, it seems that even non-deontological moral theories such as utilitarianism — yes, utilitarianism! — imply “deontological” restrictions. Let me explain.
A deontological restriction is a principle that identifies some type of act such that it is always forbidden to perform an act of that type, even when doing so would result in fewer acts of that type being performed overall. Typical examples of deontological restrictions involve types of acts such as killing, lying, promise-breaking, and so on. But, so far as I can tell, there’s nothing in the definition of a deontological restriction that limits it to those particular types of acts. Let us, then, consider a different type of act: non-utility-maximising. Clearly, utilitarianism implies that it is always forbidden to perform a non-utility-maximising act, even when doing so would result in fewer non-utility-maximising acts being performed overall.
A simple example shows this. Suppose that you wanted to minimise the number of non-utility-maximising acts. One way that you could do this is by annihilating the human race. If humans are allowed to continue to exist, then they are bound to perform a large number of non-utility-maximising acts. But if you were to annihilate them, that would put an end to non-utility-maximising once and for all. However, if you did annihilate the human race, that act would itself be a non-utility-maximising act. So, it is possible for you to minimise the number of non-utility-maximising acts overall, but only by performing a non-utility-maximising act yourself. What does utilitarianism tell you to do in this case? Surely it tells you not to annihilate the human race. Hence, we have a utilitarian “deontological” restriction.
I’m not entirely sure what this shows. But it does seem to undermine the so-called “paradox of deontology”.
Campbell: I think that Josh wasn’t as precise as he could have been in defining ‘deontological restrictions’. Consequently, your putative example of a deontological restriction on utilitarianism isn’t a genuine example. Let’s go directly to the primary source for a definition of ‘agent-centered restrictions’: Scheffler’s The Rejection of Consequentialism. As he defines them, agent-centered restrictions (a.k.a. deontological restrictions) prohibit the performance of certain act-types even where performing an instance of one of those prohibited act-types would minimize comparable instances of that same act-type. Here are Scheffler’s own words:
In your example, one performs an act of non-utility-maximizing that minimizes other instances of non-utility-maximizing, but the instances that you prevent are not comparable to the instance you perform. So we would have to modify your example and imagine the case where you would have to annihilate the human race in order to prevent five other agents from each annihilating a comparable race of sentient beings. Now utilitarianism would never endorse a restriction that prohibited you from annihilating the human race in order to minimize comparable instances of that same act-type.
Campbell,
I was being a bit less precise than I could have been, and Doug’s characterization is better (thanks, Doug). But I do agree, as a separate point, that it’s misleading to endorse a general principle that nonminimizing restrictions are unique to deontology (per Doug’s earlier comment) or, for that matter, necessary to deontology. Nevertheless, it’s also true that, as a matter of fact, deontologists have been more likely to countenance restrictions than have consequentialists, and it’s of course also true that any deontologist who does countenance restrictions faces the potential (potential!) objection that they are paradoxical.
Doug, I think that your reply might pick up on an accidental feature of the example I gave. So let me give another example.
Suppose that one day when collecting mail from your letterbox you find two envelopes addressed to two of your neighbours, which have been accidentally delivered to you. You can tell by the style of the envelopes that they contain letters from a reputable charity inviting your neighbours to make contributions. Now, you happen to know that each of your neighbours would donate to the charity if they received these letters. But you also know that they would donate less than they could. In particular, each neighbour would donate only $10, when each could easily donate $30 without any significant sacrifice. Thus, if your neighbours receive the letters, they will each perform a non-utility-maximising act. You can prevent these two non-utility-maximising acts by throwing the letters in the trash, rather than delivering them to your neighbours. But if you do so, then you yourself will perform a non-utility-maximising act. Moreover, it seems that these three instances of non-utility-maximising are “equally weighty” or “comparable” in your terms; each act would result in the charity’s receiving $20 less than it otherwise would.
But, of course, utilitarianism tells you to deliver the letters. According to utilitarianism, you ought not to perform a non-utility-maximising act even when doing so would minimise the number of comparable non-utility-maximising acts overall.
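If it helps, here is the bookkeeping of the case as a minimal sketch (the dollar figures are just the ones stipulated above; the variable names are mine):

```python
# Campbell's letters case: each neighbour, if the letter arrives, donates
# $10 when they could easily donate $30 (so each falls $20 short of their
# best option); trashing the letters means the charity receives nothing.

def outcomes():
    # Each option: (total received by the charity,
    #               number of non-utility-maximising acts performed)
    deliver = (10 + 10, 2)  # two neighbours each donate $20 less than they could
    trash = (0, 1)          # the only non-maximising act is your trashing
    return deliver, trash

(deliver_total, deliver_violations), (trash_total, trash_violations) = outcomes()

assert deliver_total > trash_total            # utilitarianism: deliver the letters
assert trash_violations < deliver_violations  # minimising violations: trash them
```

Utilitarianism favours delivering, even though trashing would minimise the number of non-utility-maximising acts.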
Campbell,
If I’m following your case right (and I may not be!), it seems like now we’ve got comparable acts (the neighbors’ two $20-short donations and the agent’s $20-reducing act of throwing out the letters), but not acts of the same type. That is, the two acts are (non-maximal) donating to charity and (non-maximal) throwing out the letters. You might say that they are of the same act-type, insofar as both acts are of the type non-utility-maximizing, but this seems to stray from Scheffler’s idea of an act-type. I do think that putting the screws to the idea of an act-type here might be warranted, but we’d need an independent argument for doing so.
Campbell: You say, “each act would result in the charity’s receiving $20 less than it otherwise would.”
This seems false. My neighbor giving $10 doesn’t result in the charity receiving $20 less than it otherwise would have; it results in the charity receiving $10 more than it otherwise would have. The “otherwise,” I take it, denotes “had the act not been performed.” So whereas my act results in the charity receiving $20 less than it otherwise would have, each of the acts performed by my neighbors results in the charity receiving $10 more than it otherwise would have.
To make your case a genuine case of my performing a minimizing violation, it would have to be revised as follows. If I don’t throw my neighbors’ donation-request letters in the trash, you and Josh will each throw your neighbors’ donation-request letters in the trash. Of course, utilitarianism would require that, in this case, I throw my neighbors’ donation-request letters in the trash. Thus utilitarianism again permits performing a minimizing violation.
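To put the two baselines side by side, here’s a minimal sketch using Campbell’s figures (the variable names are mine):

```python
# "Otherwise" = "had the act not been performed".

# A neighbour's act: donating $10 rather than not donating at all.
charity_if_neighbour_donates = 10
charity_if_neighbour_does_nothing = 0
assert charity_if_neighbour_donates - charity_if_neighbour_does_nothing == 10
# The neighbour's act leaves the charity $10 BETTER off than its absence would.

# My act: trashing the letters rather than delivering them.
charity_if_i_trash = 0
charity_if_i_deliver = 10 + 10
assert charity_if_i_trash - charity_if_i_deliver == -20
# My act leaves the charity $20 WORSE off than its absence would.
```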
Josh,
I think you’ve got the example exactly right. In general, there are different ways of partitioning acts into types. Relative to some such partitions, the relevant acts in the example are of the same type; relative to other partitions, the acts are of different types. Notice that the same is true of your original example: “Agent 1 must not kill Victim 1, even if killing Victim 1 would prevent Agents 2-6 from killing Victims 2-6.”
On one partition, these acts are of the same type: they’re all acts of killing. But the acts do differ in various ways: they’re performed by different agents, at different times, involve different victims, etc. Hence, there will be more fine-grained partitions on which these acts are of different types.
Now, you might think that my example doesn’t really exhibit a utilitarian deontological restriction because it rests on an unnatural or gimmicky partition of acts into types. But there’s a more substantive point to be made here. When elucidating the allegedly paradoxical nature of deontological restrictions you point out that such restrictions conflict with the principle that “more duty violations is worse than fewer duty violations”. But now we see that not even utilitarianism is consistent with that principle. According to utilitarianism, our duty is to maximise total wellbeing, and we ought not to violate that duty even when doing so would minimise the number of violations overall.
Doug,
I was probably speaking a little too loosely. I should have said: each act would result in the charity’s receiving $20 less than it would have if the agent had acted in some other way that she could have. It may be helpful for me to describe the kind of case I have in mind in abstract terms. Consider the following:
1. You face a choice between two alternatives, X and Y.
2. There are n > 1 other agents, numbered 1, 2, …, n. The alternatives that these agents face depend on your choice between X and Y. If you choose X, then each agent i must choose between Ai and Bi. But if you choose Y, then each agent i must choose between Ci and Di.
3. You know that these agents will choose as follows. Given a choice between Ai and Bi, agent i will choose Bi. And given a choice between Ci and Di, agent i will choose Ci.
4. The total utilities that would result from these acts are as follows. Your choosing X would result in greater total utility than your choosing Y. For each agent i, i’s choosing Ai would result in greater total utility than i’s choosing Bi; and i’s choosing Ci would result in greater total utility than i’s choosing Di.
Now, it follows that you face a choice between either maximising total utility or minimising the number of non-utility-maximising acts. If you choose X, you’ll maximise total utility, but each of the n agents will perform a non-utility-maximising act. If you choose Y, you will prevent these agents from performing non-utility-maximising acts, but you will fail to maximise utility; hence, you yourself will perform a non-utility-maximising act. (We can suppose that the relevant differences in total utility are such that all these instances of non-utility-maximising are equally weighty.) According to utilitarianism, you should choose X. Utilitarianism instructs us to maximise total utility; in other words, it instructs us not to perform non-utility-maximising acts. But it does not instruct us to minimise the number of non-utility-maximising acts. This looks like a “non-minimising restriction”.
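Here is a minimal sketch of a toy instance of this structure, with n = 2 and made-up utility figures, chosen so that every non-utility-maximising act falls short of the agent’s best alternative by the same amount:

```python
# A toy instance of the abstract case with n = 2 other agents. All numbers
# are invented; each non-utility-maximising act has the same shortfall
# (10 utiles), so the violations are "equally weighty".

def result(your_choice):
    """Return (actual total utility, number of non-utility-maximising acts)."""
    if your_choice == "X":
        # You maximise, but each of the 2 agents then picks B_i, worth 10
        # less than A_i: two non-utility-maximising acts.
        return 100, 2
    else:  # "Y"
        # Each agent picks C_i, their best option; the only
        # non-utility-maximising act is yours, since Y yields 10 less
        # than X would have.
        return 90, 1

x_total, x_violations = result("X")
y_total, y_violations = result("Y")

assert x_total > y_total            # utilitarianism tells you to choose X
assert y_violations < x_violations  # yet Y would minimise the violations
```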
Campbell: Whether your example is a genuine example of a non-minimizing restriction, as we’ve characterized them, depends on whether my act is comparable to those of agents numbered 1, 2, …, n. In one sense, they’re comparable: they all result in less utility than there would have been if the agent had acted in some other way that she could have acted, and less by the same amount. In another sense, though, they’re not comparable: my act results in less utility than there would have been had I not performed the act in question, whereas each of their acts results in more utility than there would have been had they not performed the acts in question. Now if there is any relevant sense in which they are not comparable, then they are not comparable violations of the same restriction. So you’ll need to argue that in assessing comparability the latter difference is irrelevant. I suppose you could do this by assessing relevance in terms of what utilitarianism takes to be morally relevant. You could then argue that what’s relevant on utilitarianism is only how much more utility an agent could have produced had she performed some other available alternative act instead. Is this correct?
Even so, what I think your example shows is not that utilitarianism endorses agent-centered restrictions but that we haven’t yet, here, correctly characterized them. So here’s another stab at characterizing what an agent-centered restriction is: an agent-centered restriction forbids the agent from performing a certain act-type even where doing so would produce what is, from an agent-neutral point of view, a better state of affairs than refraining from doing so would.
Arguably, this is the correct characterization. First, given this characterization, it certainly makes sense to call them deontological restrictions, as people do — at least, it does so long as we note that many philosophers, mistakenly, use the term ‘deontology’ as a synonym for ‘non-consequentialism’ and also stipulate that consequentialism must rank states of affairs from an agent-neutral point of view. Second, this characterization is the one that Scheffler endorses, and he is I think the one who first coined the term or, at least, the one who brought it into widespread use. He says that such restrictions have the effect of denying “that there is any non-agent-relative principle for ranking overall states of affairs such that it is always permissible to produce the best available state of affairs so construed.”
On this characterization, utilitarianism, being an agent-neutral theory, cannot endorse agent-centered restrictions. Utilitarianism never prohibits performing an act that would produce the best available state of affairs.
By the way, this latest characterization helps us make better sense of why philosophers have thought that agent-centered restrictions are paradoxical. Those who endorse them admit, for instance, that it would be better, from an agent-neutral point of view, if there were fewer murders but, nevertheless, insist that it’s wrong to commit murder in order to minimize the number of murders overall.
Doug,
I really like your last proposal. I would put it like this. Say that a principle P is compatible with an ordering R of states of affairs iff satisfying P always coincides with realising a state of affairs that R ranks at least as high as all alternatives. Then we may say that a principle P is a deontological restriction iff P is incompatible with every ordering R. This definition captures nicely the last quote from Scheffler above. On this understanding a deontological restriction is a principle that cannot be interpreted as requiring us to maximise agent-neutral goodness. The definition might also help to reveal the conflict that Josh describes between deontological restrictions and “maximising rationality”.
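Putting the proposed definitions in symbols (a rough rendering; the notation is mine):

```latex
% In each choice situation $w$, let $S(w)$ be the set of available states
% of affairs, and let $P(w) \subseteq S(w)$ be those states realised by
% acts that satisfy the principle $P$.
\[
  P \text{ is compatible with } R
  \;\iff\;
  \forall w \; \forall s \in S(w):\;
  s \in P(w) \leftrightarrow \bigl(\forall s' \in S(w),\; s \succeq_R s'\bigr)
\]
\[
  P \text{ is a deontological restriction}
  \;\iff\;
  \text{there is no ordering } R \text{ with which } P \text{ is compatible}
\]
```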
It might be worth noting, however, that Scheffler himself seems confused on this issue. His earlier formulation of deontological restrictions — stated in terms of the number of violations — is not obviously equivalent to his later formulation — stated in terms of rankings of states of affairs. One suspects that the earlier formulation makes deontological restrictions seem more paradoxical.
Doug,
I think your characterization of agent-centered restrictions is not quite right, and not what Scheffler has in mind. That restriction R forbids the agent from doing what will “produce what is, from an agent-neutral point of view, a better state of affairs than refraining from doing so would” is, as your quote from Scheffler indicates, an “effect” or implication of restrictions – but it is not the “essence” of restrictions themselves. Imagine a view that says “You must not tell a lie, even if this means reducing the overall amount of good in a world, except when doing so is necessary to minimize the number of comparable lies told.” This view would prevent us from producing the best possible state of affairs in many cases, and thus is non-consequentialist (assuming agent-neutrality), but the duty not to tell lies is not a nonminimizing restriction because it allows you to minimize the overall number of lies told. Call this view 1.
Instead, we should use Scheffler’s definition, also on p. 80: “An agent-centered restriction is a restriction which it is at least sometimes impermissible to violate in circumstances where a violation would prevent either more numerous violations, of no less weight from an impersonal point of view, or other events at least as questionable, and would have no other morally relevant consequences.” Or something close to that, anyway. So consider view 2, which does contain a restriction: “You must not tell a lie, even if doing so would minimize the overall number of comparable lies told.”
Both 1 and 2 are non-consequentialist, but only 2 contains an agent-centered restriction. Similarly, and in regards to your last point, only 2 is irrational or paradoxical in Scheffler’s sense, because only it requires not minimizing what it itself declares is morally bad, namely lying. 1 allows minimizing lies, and so it is not paradoxical in this sense, which is the sense usually used in discussions about the paradox of restrictions.
Josh,
Why can’t the essence of an agent-centered restriction be that it has a certain effect that other types of restrictions don’t have?
On my characterization, if there is an agent-centered restriction against lying, then it follows that it’s wrong to lie even if doing so would minimize the overall number of comparable lies told. On my characterization, an agent-centered restriction against lying implies both view 1 and view 2. And surely this is right. An agent-centered restriction against lying not only prohibits agents from lying in order to minimize comparable instances of lying, but also prohibits agents from lying in order to produce an impersonally better outcome that doesn’t necessarily involve fewer lies.
So I think that your suggested way of characterizing agent-centered restrictions is inferior to my own for two reasons: (1) your characterization doesn’t imply view 1, and it should; and (2) your characterization allows that utilitarians can endorse agent-centered restrictions, as Campbell has shown.
Doug, of course “the essence of an agent-centered restriction [can] be that it has a certain effect that other types of restrictions don’t have.” The point was that this isn’t the essence of those restrictions that Scheffler and others accuse of being paradoxical. I think we’ve gone off on two different dialectics. You and Campbell seem to be seeking the nature of restrictions that are unique to non-consequentialism. But that’s not the issue with which this discussion began. That issue is what the paradox of restrictions comes to. Quite understandably, the question then arises of what a ‘restriction’ is supposed to be, which is then said to be paradoxical. I think Scheffler’s understanding, rather than yours, gets at this sense of ‘restriction.’
So I disagree that my/Scheffler’s understanding is worse than yours because it “doesn’t imply view 1”. It shouldn’t imply view 1 – view 2, not view 1, is what’s supposed to be paradoxical.
I also think that your second point, that “your characterization allows that utilitarians can endorse agent-centered restrictions, as Campbell has shown,” is neither here nor there. If true (and I’m still not sure that Campbell’s not working with another sense of ‘act-type’ than Scheffler is), all that means is that allegedly paradoxical restrictions aren’t unique to non-consequentialism. Which, I thought, was a point you wanted to make in the beginning of this discussion. The issue, though, isn’t to define non-consequentialism or deontology (that’s certainly an interesting issue, just not the one at stake here); rather, it’s whether and how certain kinds of theories – those that contain restrictions – are paradoxical.
Finally, you say “On my characterization, if there is an agent-centered restriction against lying, then it follows that it’s wrong to lie even if doing so would minimize the overall number of comparable lies told.” I don’t see why this follows – all that is required for a duty to be a restriction in your sense is that it prohibit producing the optimal state of affairs. So on your version of ‘restriction,’ a duty against lying needn’t prohibit lying even when lying would minimize the overall number of comparable lies told in order to count as an agent-centered restriction, since it could instead qualify by requiring the production of a non-maximally good state of affairs in some other way.
Josh:
I’m not seeking, as you say, “the nature of restrictions that [is] unique to non-consequentialism.” I’m seeking to understand what the term ‘agent-centered restriction’, as it is employed in the literature, means. The reason for this is directly relevant to the topic of your post, for we can’t know whether agent-centered restrictions are indeed paradoxical and, if so, what about them is paradoxical unless we know what agent-centered restrictions are. Now I do think that the fact that the term ‘agent-centered restrictions’ is often used to denote something that supposedly utilitarianism cannot accommodate is a reason for rejecting a characterization, like yours, that implies that utilitarianism includes at least one agent-centered restriction.
Let me also say something about why I take my characterization to imply both view 1 and view 2. Here’s how I see things. An absolute agent-centered restriction against lying will prohibit lying always. A non-absolute agent-centered restriction against lying will prohibit lying except where doing so is necessary to prevent n amount of agent-neutral evil. Both will prohibit an agent from lying in cases where doing so will produce what is, from an agent-neutral point of view, a better state of affairs than refraining from doing so would. The absolute restriction will always prohibit lying for the sake of minimizing comparable lies. The non-absolute restriction will prohibit lying even for the sake of minimizing comparable lies wherever the evil in more numerous lies being told is less than n.
Doug,
Okay, so it sounds like you agree with me that the point is to figure out what the things that are supposed to be paradoxical are, vis-a-vis recent debates about the paradox. (Note that this is a narrower set of debates than those in ethics generally, which may use ‘agent-centered restriction’ in a different sense.) If so, then shouldn’t we be understanding ‘restriction’ as the kind of thing that is said to be paradoxical, rather than the kind of thing that is not utilitarian? If we come up with a restriction that both would be deemed by Scheffler to be paradoxical (again, a point that I’m not ready to accept because of the slippery use of ‘act-type’, but which I’ll grant here) and that Scheffler would call utilitarian, wouldn’t that be a reason to say that, since we’ve found our “it” – i.e., the thing that is paradoxical – we must accept that utilitarianism too can have the restrictions that are said to be paradoxical? For that matter, isn’t this close to the point of your original comment on this post, and Campbell’s point, too?
As for your characterization of ‘restriction’ and whether it entails nonminimization, it still seems like there’s a kind of (non-absolute) duty that would be a restriction on your theory but not a nonminimizer (in the relevant sense). According to view 1, “You must not tell a lie, even if this means reducing the overall amount of good in a world, except when doing so is necessary to minimize the number of comparable lies told.” But for view 1 it is not the case that, in your words, “The non-absolute restriction will prohibit lying for the sake of minimizing comparable lies wherever the evil in more numerous lies being told is less than n.” View 1 will never prohibit lying for the sake of minimizing the total number of lies told (indeed, it obligates us to do so), regardless of whether the evil of the lies prevented is more or less than n.
Josh, You seem to be misinterpreting me. I didn’t say, “The non-absolute restriction will prohibit lying for the sake of minimizing comparable lies wherever the evil in more numerous lies being told is less than n.”
The omission of the word ‘even’ in the above significantly alters the meaning of the sentence. But whereas the restriction “You must not tell a lie, even if this means reducing the overall amount of good in a world, except when doing so is necessary to minimize the number of comparable lies told” might be an agent-centered restriction on the above, it isn’t an agent-centered restriction on my characterization.
To make things clearer, let me specify that an absolute agent-centered restriction prohibits an agent from performing a certain act-type, period, and thus prohibits the performances of that act-type even where doing so will produce what is, from an agent-neutral point of view, a better state of affairs than refraining from doing so would, and even where doing so will minimize the number of instances of that act-type. A non-absolute agent-centered restriction prohibits an agent from performing a certain act-type when less than n amount of agent-neutral value is at stake, period, and thus prohibits the performances of that act-type when less than n amount of agent-neutral value is at stake, even where doing so will produce what is, from an agent-neutral point of view, a better state of affairs than refraining from doing so would, and even where doing so will minimize the number of instances of that act-type. Does this clear things up?
And I should add to the definitions of both absolute and non-absolute agent-centered restrictions, that these restrictions must have the effect of denying that there is any non-agent-relative principle for ranking overall states of affairs such that it is always permissible to produce the best available state of affairs so construed. Thus a restriction against performing non-utility-maximizing actions would not count as an agent-centered restriction.
Doug,
Sort of. Let’s from here on out put aside absolute restrictions. I think that if we can say of a (non-absolute) restriction that it “prohibits the performances of that act-type when less than n amount of agent-neutral value is at stake…even where doing so will minimize the number of instances of that act-type,” then that effectively captures the spirit of Scheffler’s idea of a restriction and the kind of restriction that is said to be paradoxical. Maybe I’m simply not following, but the work in your clarified version of a restriction that’s helping me out isn’t being done by the “even” (though I agree that this is important) but by the “period.” I’m also now not sure what you mean in saying that n amount of value is “at stake.” Does this mean “would be produced”?
Josh,
You ask, “I’m also now not sure what you mean in saying that n amount of value is ‘at stake.’ Does this mean ‘would be produced’?”
I mean that such restrictions prohibit performing certain act-types in situations where less than n amount of agent-neutral value would be lost if one doesn’t perform an instance of that act-type. Suppose, for instance, that the act-type is lying and that n is 100 utiles. A non-absolute restriction against lying would prohibit lying in situations where less than 100 utiles would be lost if one fails to lie. Where more than 100 utiles would be lost if one fails to lie, lying would not be prohibited.
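If a toy formalization helps, here is the non-absolute restriction from that example as a simple predicate (the threshold and the names are just illustrative):

```python
# Doug's non-absolute restriction against lying, with n = 100 utiles.
# 'utiles_lost_if_no_lie' is the agent-neutral value that would be lost
# were the agent not to lie.

N = 100  # the threshold n, in utiles

def non_absolute_restriction_prohibits_lying(utiles_lost_if_no_lie):
    # Lying is prohibited whenever less than n utiles are at stake,
    # even if lying would minimize the number of comparable lies told.
    return utiles_lost_if_no_lie < N

def absolute_restriction_prohibits_lying(utiles_lost_if_no_lie):
    # An absolute restriction prohibits lying no matter what is at stake.
    return True

assert non_absolute_restriction_prohibits_lying(50)       # < 100 utiles lost: lying prohibited
assert not non_absolute_restriction_prohibits_lying(150)  # > 100 utiles lost: lying permitted
```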