Welcome to what we expect to be an engaging and productive discussion of Theron Pummer’s “Whether and Where to Give.” The paper appears in the Winter 2016 issue of Philosophy & Public Affairs, and it is available through open access here. Our conversation begins below with a critical précis by Johann Frick. Please join in the discussion!

Précis by Johann Frick:

It is a pleasure to kick off our discussion of Theron Pummer’s excellent and thought-provoking article “Whether and Where to Give” (Philosophy & Public Affairs, 2016). I will begin with a brief synopsis of some of Theron’s main claims, followed by some critical comments and questions.

Synopsis

According to some traditional views about charity, giving to charity is morally optional. You are morally praiseworthy if you do give, but you are not morally required to do so. Against this, philosophers like Peter Singer and others have urged us to accept the

Giving Requirement: For some agents, in some circumstances, it is morally obligatory to give a certain amount of their wealth to charity. It would be morally wrong not to give.

Even if we accept the Giving Requirement, however, we may believe that so long as we give the required amount, it is up to us which charities to give it to – regardless of the differences in the amount of impartial good that our donation would achieve in the hands of one charity or another. Effective altruists, such as Toby Ord, Will MacAskill, and Singer himself, challenge this. They defend the

Efficiency Requirement: In a choice between giving to a more or to a less efficient charity (i.e. giving to a charity where your donation will do more or less good in impartial terms), you morally ought to give to the more efficient charity, all else equal.

The central question of Theron’s paper is this: Is the Efficiency Requirement independent of the Giving Requirement? Suppose we go back to a traditional view of charity on which it is optional whether to give to charity at all. Or suppose you are contemplating a charitable donation which goes beyond what you would be required to give under the Giving Requirement. Is it nonetheless the case that, if you give to charity, you come under the Efficiency Requirement? That is, conditional on giving to any charity, is it the case that you morally ought to give to the more efficient charity? Theron’s answer is a qualified “yes”. His main claim is this:

Theron’s Main Claim: “In many cases it would be wrong of you to give a sum of money to charities that do less good than others you could have given to instead, even if it would not have been wrong of you not to give the money to any charity at all.” (p. 78)

Theron’s Main Claim is arresting, because it goes against the following “common assumption” in the ethics of giving:

Common Assumption: “If it is not wrong of you to keep some sum of money for yourself, then it is likewise not wrong of you to donate it to any particular charity you choose. Roughly: if it is up to you whether to donate the money, it is also up to you where to donate the money.” (p. 78)

Theron’s strategy for challenging the Common Assumption is to provide an explanation of why giving to charity at all is sometimes optional that doesn’t also make it morally optional to give to the more efficient charity, if you are going to give. Since, surely, charitable giving isn’t a special case, this is a project in the theory of beneficence in general. Theron’s goal, we might say, is to explain why acts of “suboptimal supererogation” are often morally wrong.

Theron first asks: what is the basis of the moral option not to help others, for example, by giving to charity, when doing so would produce the impartially best consequences? According to Theron,

“a very natural and plausible answer is that the basis of such a moral option is the cost to you, the agent. If in order to rescue a drowning child you had to sacrifice both your legs, then intuitively you would have the moral option not to help. But if the only cost to you of saving the drowning child were muddied shoes, you would lack the moral option not to help. (…) What makes it not wrong not to do what’s best in the first scenario is the substantial cost to you; since the cost to you in the second scenario is very small, no such moral option is triggered, and so here it is wrong not to do what’s best.” (p. 80)

“Cost”, according to Theron, includes “any loss to you in terms of your well-being, desires, projects, personal concerns, special relationships, and so on.” (p. 80) (Arguably missing from this list – as I’m sure Theron would agree – are risks of loss. Being subject to a risk need not make you worse off in itself; but, all else equal, it is against your interests to be subject to a risk. Intuitively, this, too, is a consideration that can make a beneficent action optional).

Theron’s argument against the Common Assumption is an argument by counterexample. He gives us the following case:

“Arm Donor: There is one innocent stranger stuck on track A, and a runaway train headed straight toward her. There are one hundred innocent strangers stuck on track B, and another runaway train headed straight toward them. If you do nothing, all one hundred and one people will soon die. However, you can use your arm as a train-stopper; those tied to the tracks are far enough away that putting your arm on either track would cause the relevant train to slow down and come to a complete stop before reaching anyone in its path. That is, if you place your arm on track A, you will stop the train on that track in time and save the one, and, if you place your arm on track B, you’ll stop the train on that track in time and save the hundred. You have no other means of saving any of these people. Since the two tracks are twenty feet apart, you will not be able to sacrifice more than one arm in time to save all one hundred and one. Assume that losing your arm is a large enough cost to you to make it not wrong not to incur this cost, whether that is in order to save one life or one hundred. All other things are equal.” (p. 83)

Theron’s intuitive verdict about this case is that “while [it is] not wrong to do nothing, and not wrong to place your arm on track B, saving the hundred, it seems it would be wrong for you to place your arm on track A, saving the one. In general, it seems wrong to fail to save more lives when this is no costlier for you, other things equal.” (pp. 83-84).

Moreover, Theron asserts, his intuition about the specific case of Arm Donor is supported by the following intuitive general principle:

“Avoid Gratuitous Worseness (weak): It is wrong to perform an act that is much worse than another, if it is no costlier to you to perform the better act, and if all other things are equal.” (p. 84)

But if this principle is correct, then, contra the Common Assumption, there will be some cases in which, while it would have been permissible not to give to charity at all, certain suboptimal instances of charitable giving are nonetheless morally wrong, in virtue of the existence of an equally costly but more efficient alternative.

Nonetheless, the scope of application of Avoid Gratuitous Worseness (weak) is reduced by the fact that the costs of giving to the more efficient charity are often not equal to those of giving to the less efficient charity. For instance, what if the aims of the less efficient charity are particularly dear to the giver’s heart, such that not giving to this charity is experienced as a burden?

Theron believes that his argument can be extended to cover such cases as well, since it is plausible to endorse a strengthened version of Avoid Gratuitous Worseness:

“Avoid Gratuitous Worseness (still pretty weak): It is wrong to perform an act that is much worse than another, if it is slightly costlier to you to perform the better act, and if all other things are equal.” (p. 93)

By contrast, a very much stronger principle, Theron notes, is arguably not plausible:

“Avoid Gratuitous Worseness (strong): It is wrong to perform an act that is much worse than another, even if it is much costlier to you to perform the better act, and if all other things are equal.” (p. 93)

Indeed, this principle is arguably misnamed. The worseness that you avoid by performing the optimal beneficent act is not “gratuitous”, or even close to gratuitous. Rather, performing the suboptimal beneficent act would spare you from incurring costs that are much greater than those of performing the optimal beneficent act.

This concludes my brief summary of Theron’s paper. I will now present a few critical comments about Theron’s argument, to get the discussion started.

Comments

1. My first comment concerns the construction of Theron’s Arm Donor. I am not sure I share Theron’s intuition that it is morally optional, as opposed to obligatory, to sacrifice one’s arm to save 100 lives. I agree with Theron that if you will save either the one innocent stranger on track A or the 100 innocent strangers on track B by sacrificing your arm, you morally ought to choose the latter. However, my intuition is that this is because I have an unconditional obligation to save the 100 on track B, not because I merely have a conditional obligation to save the 100, if I will save either the one or the 100. In general, it seems to me that constructing a case that will pump Theron’s desired intuition about suboptimal supererogation is a delicate balancing act. Such a case will involve a choice between three options:

(1) Do no good; incur no cost.
(2) Do x amount of good, at cost c.
(3) Do y amount of good, at cost c, where y > x.

Now, the larger y, all else equal, the greater the gap between the amount of good you do by choosing (2) and by choosing (3). This will strengthen our intuition that if you will do either (2) or (3), you ought to do (3). But, at the same time, it weakens our intuition that doing (3) is morally optional. (That is my reaction to Arm Donor.) By contrast, if y is smaller, all else equal, this has the reverse effect: it tends to strengthen the intuition that doing (3) is optional, but it weakens the intuition that, conditional on doing either (2) or (3), you must do (3).

To be clear: I am merely raising a worry about the specific construction of Theron’s Arm Donor Case. I am not here claiming that it isn’t possible to construct a case that pumps the desired intuitions.

2. My second comment concerns some differences between the case of charitable giving and certain other instances of suboptimal supererogation. (My remarks in what follows overlap to a significant degree with Jeff McMahan’s views in his forthcoming paper “Doing Good and Doing the Best”, available here. So I make no great claim to originality).

There are many instances of suboptimal supererogation where we need not appeal to Avoid Gratuitous Worseness to explain why suboptimal supererogation is wrong. Here is a first example:

Island Rescue
You have three options:

(1) Stay at home and save no-one.
(2) Make the perilous journey across stormy seas to the island. Once on the island, go south and save 1 stranger.
(3) Make the perilous journey across stormy seas to the island. Once on the island, go north and save 5 strangers.

Intuitively, because of the great peril of getting to the island, you are not morally required to take either option (2) or option (3); it is permissible to choose option (1). However, if you do decide to brave the perilous journey and go to the island, it seems that you are then required to go north and save the 5. Going south and saving the one is morally impermissible. [An earlier case with an analogous structure is Shelly Kagan’s “Parrot/Baby” case in The Limits of Morality (Oxford: Clarendon Press, 1989), p. 16].

Structurally analogous judgments hold in the following case:

Adoption
You have three options:

(1) Don’t adopt Oliver. He stays in the orphanage.
(2) Adopt Oliver and treat him as a “second-class citizen” in your family.
(3) Adopt Oliver and treat him as well as your own children.

Again, while you are not morally required to adopt Oliver, if you do adopt him, it seems morally impermissible to treat him as a second-class citizen.

Finally, consider:

Employment
You have three options:

(1) Don’t hire anyone.
(2) Hire 100 workers (who have little bargaining power) and pay them very little.
(3) Hire the same 100 workers and pay them a decent wage.

Again, while you do not have to hire anyone, if you hire the 100 workers, you ought to pick option (3). The exploitative option (2) is morally wrong.

I maintain that there is a common explanation for our judgments in all three cases, which does not invoke Avoid Gratuitous Worseness. Consider the first case: Going to the island and saving someone is a complex action consisting of two constituent actions: traveling to the island (at t1), and saving someone (at t2). The first of these constituent actions is supererogatory, on account of the personal risk involved. Moreover, the following general principle seems plausible:

Not Required Part-Whole: If some complex action involves performing a constituent action that is not morally required on account of being supererogatory, then the whole complex action is not required. (NB: a complex action’s being not required is compatible with its not being permitted).

Not Required Part-Whole helps explain why option (1) is permissible: the only two alternatives, options (2) and (3), both involve a constituent action – traveling to the island at t1 – which is supererogatory and therefore not required; hence neither (2) nor (3) is morally required, and choosing option (1) is permissible.

Why is option (2) impermissible? Why is it the case that, conditional on going to the island, you must go north and save 5?

Once you are on the island (at t2), the empirical situation has changed. You now have the following options:

(1*) Save no-one.
(2*) At negligible cost or risk to yourself, go south and save 1.
(3*) At negligible cost or risk to yourself, go north and save 5.

This is just a version of the standard Numbers Problem. Hence, if the correct answer to this problem is that you are morally required to save the greater number, then, even though at t1 it is supererogatory to go to the island, if you go to the island, you bring about a situation at t2 where you are morally required to go north and save 5.

Thus, the complex action of going to the island and then going south and saving 1 is impermissible, since it involves performing an action at t2 (going south and saving 1) which, given the alternatives available to you at that time, is impermissible. This is entailed by the following general principle, which also strikes me as very plausible:

Impermissible Part-Whole: If a complex action involves performing a constituent action that is impermissible, then the complex action is impermissible.

Much the same analysis applies to Adoption and Employment. Adopting a child and then treating him as a second-class citizen is a complex action; likewise hiring workers and then paying them very little. In both cases, the (optional) first part of the complex action induces a change in the situation: in Adoption, you are now Oliver’s parent; in Employment, you are now in an employer-employee relationship with the 100 workers. This situational change makes it the case that new moral requirements apply to you: in Adoption, it is the requirement to treat Oliver no worse than your other children; in Employment, it is the requirement not to exploit your workers. These moral requirements, in turn, imply that, in a choice between (2) and (3), option (3) is obligatory and (2) is morally off the table.

Charitable donations (and Theron’s Arm Donor case) are crucially different. They seem analogous to the following variation on Island Rescue:

Two Islands
You have three options:

(1) Stay at home and save no-one.
(2) Make the perilous journey across stormy seas to Island A. Once on Island A, save 1 stranger.
(3) Make the perilous journey across stormy seas to Island B. Once on Island B, save 5 strangers.

Not Required Part-Whole still helps explain why choosing option (1) is permissible. By contrast, my above argument for the claim that option (2) is impermissible in Island Rescue cannot be applied to the present case. For in Two Islands, even conditional on deciding to save someone, there is no moment at which your choice reduces to a cost- or riskless choice between saving the one or saving the five, which would make saving the five morally required, and hence saving the one morally impermissible. Rather, the only point in time at which you have a choice between saving the one and saving the five is also one at which neither option is required, because either option is supererogatory. Exactly the same is true when choosing between two instances of optional charitable donations: by assumption, the point at which you must choose between making one or the other of these donations is one at which neither of the donations is morally required.

The main selling point of Theron’s Avoid Gratuitous Worseness principle is that, insofar as we have the intuition that it is morally impermissible to pick option (2) in Two Islands, and that, all else equal, it is impermissible to give to a less efficient charity, the principle captures these intuitions. By contrast, an analysis that seems plausible for other cases of suboptimal supererogation fails to capture them.

3. Theron, however, makes a stronger claim for Avoid Gratuitous Worseness: he thinks the principle “supports” our specific-level intuitions about cases. But is this really so?

To my mind, as I have mentioned, the main thing that Avoid Gratuitous Worseness has going for it is that it renders the intuitively correct verdict about certain cases of suboptimal supererogation. I have noted above that I disagree with Theron’s intuitions about the specific case of Arm Donor; but there are other cases, such as my Two Islands, where, to many, Avoid Gratuitous Worseness does appear to yield the intuitively correct result.

However, if the only reason to endorse Avoid Gratuitous Worseness were that it conforms with our intuitions about specific cases of suboptimal supererogation, then the principle could not support these intuitions. For, by assumption, any plausibility that the principle itself possesses it would get from being in conformity with our intuitions about these cases. The principle would be ad hoc.

Theron would likely respond that Avoid Gratuitous Worseness does lend additional support to our specific-level intuitions, because the principle itself seems inherently plausible, perhaps even self-evident.

But is this really the case? It strikes me that, once we fully grasp its implications, Avoid Gratuitous Worseness is far from self-evident. In particular, the principle implies that it is sometimes impermissible to bring about an outcome (going to Island A and saving one life), even though this outcome is better than another outcome which you could permissibly bring about (saving no-one). Moreover, this is said to be true just because there exists a third option (going to Island B and saving 5 lives), even though this is an option you are not actually required to take. Furthermore, we cannot appeal to a story about situational change, like the one I told about Island Rescue, to explain these claims. I submit that, once we spell out the implications of Avoid Gratuitous Worseness in this fashion, the principle begins to lose some of its apparent obviousness, and indeed may start to look rather puzzling.

Indeed, there are other cases in which Avoid Gratuitous Worseness looks not just puzzling, but quite perverse. Consider a second variation of Island Rescue:

Three Islands
You have four options:

(1) Stay at home.
(2) Make the perilous journey across stormy seas to Island A. Once on Island A, save 1 stranger.
(3) Make the perilous journey across stormy seas to Island B. Once on Island B, save 5 strangers.
(4) Make the perilous journey across stormy seas to Island C, a tropical resort where you go on to spend a wonderful two-week vacation.

Out of all four options, it is option (2) that is uniquely wrong according to Avoid Gratuitous Worseness. By contrast, the principle permits you not just to take the conservative option (1), which avoids the risk of venturing out to sea, but also option (4), which is just as dangerous as options (2) and (3), but where the danger is accepted in pursuit of a benefit for yourself rather than a benefit for others. This is because, although the outcome produced by (4) is impartially worse than that produced by (3), it is not gratuitously worse – there is a significant benefit to yourself in taking this option. However, the claim that an agent who risks his life to save a stranger on Island A acts impermissibly, whereas someone who instead goes on a tropical vacation acts morally permissibly, strikes me as hard to accept.

I am not claiming that Three Islands is necessarily a counter-example to Avoid Gratuitous Worseness. I am suggesting that this case shows that the truth of this principle cannot be considered self-evident. The principle requires a more robust defense. Can Theron say more to motivate and defend his principle?

4. Some authors, such as Jeff McMahan in “Doing Good and Doing the Best”, have concluded on the basis of considerations similar to those above that, in the case of charitable giving, suboptimal supererogation is not typically wrong. If McMahan were right, then, insofar as there is a moral requirement to give to the more efficient of two charities, as effective altruists claim, this must be because this is a case in which giving to charity simpliciter is not optional. The Efficiency Requirement stands and falls with the Giving Requirement.

I myself am uncertain what to think about this issue. I agree with Theron that the intuition that suboptimal supererogation is morally problematic, even in cases like Two Islands and charitable giving, has considerable force. On the other hand, I am not convinced that we already have a fully satisfactory defense of this intuition.

I will end with a conciliatory proposal. I believe there is something that both sides to this debate should agree upon: Even if it were the case that suboptimal supererogation is not morally wrong in cases like Two Islands and charitable giving, it would not follow that an altruistic agent could not be criticizable for gratuitously doing less good than she could have done. In particular, I contend that such an agent could be criticized for acting irrationally.

Consider an altruistic agent who chooses to go to Island A over Island B. As an altruist, this agent has the end of doing good for others. However, by choosing to go to Island A, she promotes this end in an inefficient manner: at no greater cost or risk to herself, she could have done more good by going to Island B and saving 5 lives. This seems to fall afoul of the following intuitive principle of instrumental rationality:

Efficiency Principle: If I know that, at no greater cost or risk to myself, I can promote my end E to a greater extent by doing X rather than doing Y, and no other moral or prudential considerations are at stake, then I am rationally criticizable for doing Y rather than X.

This concludes my critical précis of Theron’s excellent paper. I very much look forward to the discussion!

Johann Frick
jdfrick@princeton.edu

24 Replies to “Theron Pummer: ‘Whether and Where to Give’. Précis by Johann Frick”

  1. Thanks to Theron for an interesting paper and to Johann for an interesting critical response. Both of you seem to accept that there are cases where both (1) and (2) below are true. But accepting both (1) and (2) forces us to deny at least one of (3)-(5) below. I’m wondering which of (3)-(5) each of you would reject and why. I don’t think that we have a solution to this problem until we know precisely which of (1)-(5) should be rejected and why.

    (1) Saving no one (by, say, placing your arm on neither track A nor track B) is morally permissible.
    (2) Saving just the one (by, say, placing your arm on track A) is morally impermissible.
    (3) For any subject S and any two events φ and ψ, if S’s φ-ing is morally permissible and S’s ψ-ing is morally impermissible, then S has more moral reason to φ than to ψ.
    (4) Your saving just the one is morally better than your saving no one.
    (5) For any subject S and any two events φ and ψ, if S’s ψ-ing is morally better than S’s φ-ing, then S has more moral reason to ψ than to φ.

    P.S. I think that we should reject (5).

  2. Re the three islands case, a factor that is not made explicit but that might be relevant is that it is easy to see all the options as pairwise incomparable (from the point of view of one’s reasons for action), except for (2) and (3), with (2) being dominated and so in a uniquely bad position.

  3. Re the last two comments, if Doug is right and we should reject (5) in his list, then perhaps it also makes sense to say that even if S’s ψ-ing is morally better than S’s φ-ing, S’s ψ-ing and S’s φ-ing may be incomparable relative to what S has more moral reason to do (and so relative to S’s moral reasons for action). Combined with the preceding comment, this would explain why (2) in the Three Islands case is uniquely wrong relative to S’s moral reasons.

  4. Great comments on a great paper. Thanks to you both.

    Johann, I rather agree with you that there may be problems with Avoid Gratuitous Worseness. But the problem I have in mind is somewhat more external to that principle than those you raise. The difficulty is that AGW is at odds with other parts of common-sense morality, where it’s widely accepted that there are limits on moral demands such that the agent doesn’t have to justify not going beyond them. This is especially clear in the case of alleged ‘imperfect duties’. Consider someone who is already giving 50% of their salary to an effective charity. According to common-sense morality, they are already going well beyond what they are required to do. They are praiseworthy for doing so, and would be even more praiseworthy if they were to give 60%. But they don’t owe anyone a justification for not doing that – it’s up to them, as they’ve already done enough. I suspect that reflection on such cases might lead a non-trivial number of people to think that, e.g., in the island cases it really is up to the individual to save only one, if that’s what they prefer. Imagine, for example, that I’ve already saved a million, am about to retire, and then decide to save one more just for the hell of it.

    I expect that one place where some pressure will arise is where you talk about complex actions. How do we decide which actions are complex, and what the components are? Someone might argue that all but basic actions are complex, for example. Or that there might be gerrymandered complex actions: I decide to say ‘Good morning’ to the first person I meet, and then shoot the second. (The contrast is what I find especially exciting.)

    Now consider the following case. Your aunt has given you a boat. You have expressed your thanks sufficiently, but you know she is watching you from the cliff top. She would enjoy seeing you demonstrate your sailing skills, but it’s not required that you do so. Then you note that there is someone on a rock out to sea, who is about to drown in the rising tide. You might use Not-Required Part-Whole to conclude that you don’t need to save them, since doing so will involve your demonstrating your sailing skills. In other words, because demonstrating your sailing skills is not required, neither is demonstrating your skills and saving the person’s life. (There are of course difficult questions in the background here concerning what it is to act, and in particular what it is to act under a certain description.)

    There might also be Professor Procrastinate-type problems with Impermissible Part-Whole. Imagine that I can save P’s life at little cost by going down road A. I know that my hated rival R is on road A, and that I will, impermissibly, be rude to him as I pass. I could refrain, but I know that I won’t. Am I required to refrain from the complex action of being rude to R and saving P?

    In your Three Islands case, you claim that AGW allows option 4. I take it that the version of AGW you have in mind is:

    Avoid Gratuitous Worseness (still pretty weak): It is wrong to perform an act that is much worse than another, if it is slightly costlier to you to perform the better act, and if all other things are equal.

    (Actually I wonder whether this version of AGW should in fact be disjunctive:

    Avoid Gratuitous Worseness (still pretty weak)*: It is wrong to perform an act that is much worse than another, if it is no or slightly costlier to you to perform the better act, and if all other things are equal.)

    (Or one might say: ‘if it is at most only slightly costlier…’. Also, the name does seem potentially misleading, since, as you point out, once there’s some cost to the agent in taking some option, not taking that option doesn’t seem ‘gratuitous’ in the common sense that word has of ‘pointless’.)

    But anyway doesn’t AGW forbid option 4, since it is much worse than 3 and it is only slightly costlier to you to perform 3 (you lose out on a holiday)?

    (Many thanks to Theron for comments on an earlier draft of the above.)

  5. Comment 1: If cost is truly 0, degree of worseness is irrelevant if it’s worse at all.

    As Theron implies in part 7 of his paper, I think an argument can be made for this stronger version of the ultimate principle and its intuitive applicability to non-consequentialists:

    Avoid Gratuitous Worseness (Weak as to Cost, Strong as to Worseness): It is wrong to perform the less efficient act, where there is ABSOLUTELY NO cost to you, and where the less efficient act is AT ALL WORSE than the alternative, ceteris paribus. (as opposed to no cost to you, and the less efficient act is MUCH WORSE, which is the weakest formulation Theron explicitly describes and labels as “Avoid Gratuitous Worseness” (Weak)).

    It is difficult for me to imagine why, under any moral system, the slight degree of worseness would matter to a non-consequentialist without somehow smuggling in the concept of cost (e.g. “but it’s hard to calculate really fine degrees of worseness” is actually a cost, which we have assumed away in this formulation of the rule). I think that if Theron’s “Avoid Gratuitous Worseness (Weak)” is true, then the above formulation must also be true.

    Comment 2: Costs are never truly 0.

    Keeping the information in comment 1 in mind, I think the ultimate problem is that, while it seems correct to me that Avoid Gratuitous Worseness (Weak) (hereinafter AGW-Weak) is both true and intuitively appealing (and that the same is true of the version which is weak as to cost and strong as to worseness, hereinafter AGW-SW), these formulations are purely academic and apply to no real-world cases.

    There is always at least some degree of cost (or omitted benefit, if that is relevant) to the agent, assuming the agent is a human being. Even assuming case 1 is donating $1,000 to one charity and case 2 is donating $1,000 to another, and the actual difficulty of transferring the money is identical in both cases, if the agent has even a SLIGHT preference for one charity over another (which seems to me will be true in all cases), then we can make no use of AGW-Weak or AGW-SW. And indeed in most cases, it seems the difficulty of donating will itself differ – suppose the URL for one donation website has one more character in it, or the phone number for one requires the user to move their finger slightly more. While this is an ultra de minimis cost, it is still a cost, and that means that Theron’s AGW-Moderate or AGW-Strong will have to be used.

    Comment 3 – AGW-Moderate and AGW-Strong would be objected to by many non-consequentialists.

    There are lots of non-consequentialist systems, but a factor that is found in many common ones, and which seemingly has lots of intuitive appeal to many, is the concept of autonomy re: costs. A common formulation might be “A person is not morally obligated to undertake any cost except those where the obligation is generated by their own actions.” (e.g., parent must raise child, robber must reimburse victim)

    Someone holding this or a similar view would probably agree with AGW-Weak or AGW-SW. But with regard to AGW-Moderate or especially AGW-Strong, many non-consequentialists would start objecting, as violative of the principle above. The stronger the AGW principle gets, the more it cares about worseness, and the less it cares about costs, until we get to what might be called AGW-Ultra Strong, which would say “It is wrong to perform an act that is AT ALL worse than another, REGARDLESS of the cost to you of performing the better act, ceteris paribus.”

    AGW-Ultra Strong is essentially just consequentialism. A non-consequentialist could (and, I think, many would) object that AGW-Moderate is the first step of a slippery slope, and that “cost autonomy” must be respected no matter the degree. To be sure, that objection creates some intuition problems of its own – does it REALLY matter if the URL for charity 1 is one letter longer, and shouldn’t we WEIGH the cost to the agent? – but this begins to look more and more like just a debate between consequentialism and non-consequentialism, and the point of this exercise is to find an AGW that applies to non-consequentialists.

    So basically my point is: there are AGW formulations that I think apply to and would be accepted by non-consequentialists, namely AGW-Weak and AGW-SW as described in comment 1. But I also think that AGW-Weak and AGW-SW do not capture any real-world cases, while anything stronger would be objectionable on conceivable non-consequentialist grounds. I do think that establishing AGW-Weak and AGW-SW is valuable in itself as, perhaps, the basis for further argument yet unknown. But I worry that we have not, as yet, given non-consequentialists who subscribe to some form of cost autonomy a rule that applies to any cases in the real world.

    Comment 4: Three Islands tells against non-consequentialism rather than AGW

    I think that Johann’s three islands example is very interesting. I think that in the end the intuitive conundrum it correctly identifies presents a problem not for Theron’s AGW formulation, but for non-consequentialism which has the kind of cost autonomy I’ve just discussed.

    Presume, for a moment, the truth of “AGW-Island,” which is: It is wrong to perform an act that is AT ALL WORSE than another, if the cost to you is less than the value to you of going on the luxurious vacation on Island C as assumed in the hypothetical. Assume also that going on the difficult journey to the islands is a cost of weight ABOVE that threshold. We need AGW-Island because we can’t be sure how much of a “cost” it will be considered to have lost the benefit of vacationing on Island C vs. going to Islands A or B.

    As Johann notes, AGW-Island would say: Doing nothing is not wrong because the cost of going on the trip is high enough. Visiting island A (less efficient altruistic island) is wrong because it is at all worse than island B and the cost is identical. Visiting island B (most efficient altruistic island) is not wrong because it is not worse than any other option. Visiting island C (selfish island) is not wrong because the cost to you is at least the value of visiting island C by definition.

    Johann’s argument is essentially that it seems unintuitive to say “The person who underwent sacrifice to save a stranger is wrong, but the person who underwent the same cost for selfish purposes is not wrong.”

    And that argument is correct – that DOES seem very unintuitive.

    But the question is, is this an intuitive problem unique to AGW, or is it a version of the same consequentialist kind of intuition that we’d use to object to “A person who undergoes de minimis sacrifice to save a stranger is not wrong, AND a person who does not do so is not wrong.”? In other words, how much of this intuition derives uniquely from AGW, and how much of it derives from the fact that we’re assuming a non-consequentialist viewpoint which carries with it the usual intuitive problems associated with such viewpoints? Is the resulting objection one that can apply uniquely to AGW without also applying to the underlying, and assumed, non-consequentialist viewpoint?

    I don’t know the answer to this because, as a consequentialist, it’s hard for me to avoid intuitive objections to non-consequentialism.

    It certainly seems to me, at least, that this hypothetical can be wielded as a pro-consequentialist intuition when confronting a viewpoint that accepts the truth of AGW-Weak or AGW-SW, but rejects AGW-Moderate-or-Greater because it subscribes to the kind of cost autonomy described above.

  6. Thanks to Johann for the great intro, and of course to Theron for the great paper!

    I want to offer another case that might help people warm up to Avoid Gratuitous Worseness. Suppose that a villain has kidnapped two people, P1 and P2, and is threatening to kill them. The villain summons you to his lair and places two pills, one red and one blue, on a table in front of you. If you take either pill, you will fall unconscious, and the villain will amputate your arms. If you take the red pill, the villain will release P1 unharmed. If you take the blue pill, the villain will release both P1 and P2 unharmed. You have three options: take neither pill, take the red pill, or take the blue pill. In this case, I have the strong intuition that (1) it is permissible for you to take neither pill and (2) it is wrong for you to take the red pill.

    Of course, this villain case is in one important respect different from both Theron’s Arm Donor case and most cases of charitable giving. In the villain case, you can either make a sacrifice in a way that saves one person, or make the same sacrifice in a way that saves this person and another person. So the option of saving the one person is dominated. In Arm Donor and most cases of charitable giving, your choice is not between saving one person and saving this person and another person, but rather between saving a smaller group of people and saving a larger group of different people.

    Still, if people share my intuitions, the villain case at least shows that it is sometimes wrong to bring about an outcome even though this outcome is better than another outcome that it is permissible to bring about. And once we are on board with that, we might be more inclined to agree with Theron’s intuitions about Arm Donor.

    And the villain case might also help with another source of doubt about Avoid Gratuitous Worseness. Suppose that, in Arm Donor, there are two people who could place their arms on the track. The first person declines to make the sacrifice and walks away. The second person places his arm on track A, saving the one person rather than the one hundred. If we accept Theron’s intuitions, it seems we have to conclude that the second person is more blameworthy than the first, even though the second person at least saved one person. That might be hard to accept. But the parallel implication in the villain case seems to me quite intuitive. Suppose that two people are summoned to the villain’s lair and offered the pills. The first person declines to make the sacrifice and walks away. The second person takes the red pill, saving only P1. In this case, it seems to me quite intuitive that the second person is blameworthy, whereas the first person is not. If people share my intuition here, they might warm up to the parallel implication in Arm Donor.

  7. Thanks for the paper and the commentary.

    Johann, I don’t believe that the distinction between simple and complex actions is morally decisive in cases like those we are considering. I wonder whether it is even morally salient.

    Compare:

    Robot:

    I can press one of two buttons, or do nothing.

    Nothing: No cost to me; 100 people die.
    Button 1: High cost to me, h; a robot saves the 100 without harming them.
    Button 2: High cost to me, h; another robot saves the 100 but disfigures them badly.

    If I am permitted to do nothing because of the cost, it is nevertheless clearly wrong for me to pick Button 2. That is so even though I would be permitted to pick Button 2 over Nothing if Button 1 were not available.

    Now compare

    Robot 2:

    As Robot 1, except I must press either Button twice to complete the saving: once to start it and once to make it perform the action. I bear the cost at the first press.

    I am tempted by the view that these cases are morally on a par. I wonder if there is more to say to show that this is not true.

    Theron,

    As you note, principles like AGW are not quite right as stated, because an act can be worse than some other act in a way that is not duty conferring. They need to be weakened or restricted in some way. You mention non-identity cases, but a simpler case is this: suppose that I can save 100 in a way that will benefit me, or save 100 in a way that will not benefit me. Other things are equal. I am permitted to do the latter, but principles like AGW imply otherwise. So these principles at least need to be qualified.

    I think that the principle you have in mind is something like:

    Avoid the Kind of Gratuitous Worseness that Can be Independently Duty Conferring (snappy title, no?) (AGWDC): If X can perform one of three actions, a, b, and c, and a is worse than b in some respect that would be duty conferring on its own, and other things are equal between a and b, then X is required to do b rather than a. This is so even if X is permitted to do neither a nor b because doing either act is much more costly to X than c.

    I wonder whether this principle is true, though. It suggests that the costs that a person bears never give a person extra moral latitude to do something that they would be required not to do had they not borne a cost. And I am not sure whether this is so. The cases under consideration certainly don’t demonstrate that it is.

    For example, suppose that there are self-regarding duties. I can save 100 people at great cost to myself in one of two ways. One way involves doing something that would have fulfilled some self-regarding duty whereas the other does not. I may not be required to pick the first over the second, because saving the 100 at great cost to myself already does enough for me to have made something of my life.

    Some might think the same about other regarding duties. If I could save 100 at great cost to myself in a way that independently keeps an unimportant promise, or do so in a way that does not, some might think that I have latitude not to keep the promise. I don’t know whether this is true, but your examples only show that there are some cases where AGWDC has the right implications, not that it is true.

    The Robot case above, as well as some of your cases, Kagan’s parrot case, and others suggest that we are sometimes required to perform the better of two options when we are not required to pursue either option because both are too costly. But that does not support AGW or even AGWDC. Until we have decided what the principle is that we should accept, we will not know what to say about charitable giving.

  8. Many thanks to PEA Soup for hosting this discussion of my paper! It is truly an honor, and the discussion is already proving to be incredibly interesting and stimulating. Special thanks to Johann for writing such a wonderful critical précis. His objections are excellent; some of them immediately seem right, and others I’ll have to think about a bit more. I plan on watching the discussion unfold for a bit, and then jumping in with some responses on Monday or Tuesday (so that I can write up a series of responses to several commentators all at once).

    For now, a very quick comment on Doug’s (1) through (5). In my paper I assume (1) and argue for (2). (3) seems hard to reject. So I am leaning toward rejecting either (4) or (5). Doug, why do you favor rejecting (5) rather than rejecting (4)? I could imagine someone saying that saving just the one results in a better outcome (impartially considered) than saving no one, but denying that the act of doing so is morally better.

  9. Erratum: Sorry I misdescribed Robot 2 above. In order to make it like one of Johann’s cases, let there be a separate button which starts both robots, the pressing of which is costly to me, after which I still have a choice which to use to save the 100.

    I am still tempted to think that the cases are on a par. If it is wrong for me to press button 2 in Robot 2, it is wrong to do so in Robot 1.

  10. Hi Theron,

    I don’t see any plausible rationale for rejecting (4). (4) says: “Your saving just the one is morally better than your saving no one.” And it’s hard to see how any plausible account of what makes one option morally better than another could hold that it’s not the case that saving one life is, other things being equal, morally better than saving no lives.

    So, I reject (5). (5) says: “For any subject S and any two events φ and ψ, if S’s ψ-ing is morally better than S’s φ-ing, then S has more moral reason to ψ than to φ.” You ask why I reject (5). It’s a bit complicated, but I’ll do my best to give a brief explanation. I believe that there are good reasons for thinking that how much moral reason there is to perform an option that’s not maximally specific depends not on its own moral goodness, but on the moral goodness of the maximally specific options that entail it. To illustrate, take the case of Professor Procrastinate. His accepting the invitation is itself morally bad, given that he wouldn’t write the review if he were to accept. His declining the invitation (which results in the journal’s getting the next best person to do the review) is morally good by comparison. Thus, his declining the invitation is morally better than his accepting the invitation. So, if we accept (5), we must hold that he has more moral reason to decline the invitation than to accept. But note that accepting and writing is Procrastinate’s best option and it entails accepting. I don’t think that’s right. And, if we hold that what determines whether there is more moral reason to accept than to decline is, not whether accepting is itself morally better than declining, but whether the more specific option of accepting and writing is morally better than the more specific option of declining and either writing or not writing, then we get what I deem to be the right answer: Procrastinate has more moral reason to accept than to decline.

  11. I’m sorry the sentence “I don’t think that’s right” ended up in the wrong place. What I don’t think is right is that Procrastinate has more moral reason to decline than to accept. I do think that accepting and writing is his best option and that this option entails accepting.

  12. Thanks to all who have participated in what is already a very stimulating discussion! I will follow Theron’s lead and hang back for another day or so, before posting replies to the several comments directed at me.

  13. I have not yet had a chance to read Theron’s paper (which I very much hope to do in the near future!), so please forgive me if Theron addresses this issue in the paper. But I wonder whether Theron’s argument ends up defending a very weak version of the Efficiency Requirement, one that many of us who reject the general ideas behind Effective Altruism would gladly endorse (but of course I can only speak for myself here). So I am assuming that the point of defending ER is to defend something like Effective Altruism; my remarks below would be irrelevant if one were simply defending ER as an interesting theoretical principle in normative ethics.

    Suppose one has two options for donating one’s spare income such that all that one knows about these two options is the following:

    Choice 1:
    A. Option A will save 40 children
    B. Option B will save 60 children

    I tend to agree that in such case, if I donate at all, I must choose option B. But compare such a case with the following case(s):

    Choice 2:
    A. Option A will save 40 children in Chicago/ from dying from cancer/ who are victims of a civil war
    B. Option B will save 60 children in New York/ from dying from starvation/ who are victims of a crime gang

    Here I think that there is nothing wrong in choosing A over B. My own view is that the difference between choice 1 and choice 2 is something similar to the difference between the opaque and non-opaque choice situations between incomparable goods described in Caspar Hare’s “Take the Sugar”. Making this argument would depend on some controversial assumptions, but fortunately, I don’t need to engage in this debate here, since it seems to me that the Efficiency Requirement already allows me to choose Option A in Choice 2. If I understand it, the “all else equal” clause in Theron’s principle already allows us to distinguish the two choices. Since losses in terms of projects and personal concerns are among the things that Theron recognizes as costs, it seems that I could choose option A simply on the grounds that I am specially concerned about the plight of children in Chicago, or of those who were victims of this civil war, or one of my projects is to help (to some extent) people who suffer from cancer. But, ordinarily, when we choose to donate to “less efficient” charities, we face choice situations like Choice 2, not like Choice 1.

  14. Hi Roger,

    Thanks for these great points!

    1) Regarding your first question: what is it to be a “complex action”? I employ the term as follows: a complex action is a sequence of constituent actions a1, a2, … an performed by the same agent where, having performed an earlier action in the sequence, it is feasible for the agent either to perform or not to perform the next action in the sequence. This is why in complex actions a normative question arises about each step of the sequence. (“May I/must I do a1?”; “May I/must I do a2?”, etc.).
    That’s what allows Impermissible Part-Whole and Not Required Part-Whole to get a grip.

    2) Next, to your sailing example. The case is interesting. Let me first clean it up a little, to make it completely transparent that it involves a complex action. Let us suppose that in order to rescue the man’s life, I must sail to the rock at t1 and help the man onto my boat at t2. Intuitively, I am morally required to perform this complex action. Your worry is this: Suppose that, under the circumstances (my aunt is watching) performing the first part of this complex action unavoidably involves demonstrating my sailing skills to my aunt. But this, you had said earlier, is something you assume I am not required to do, although it would please my aunt (I’ve already thanked her enough). Does Not-Required Part-Whole then imply, falsely, that I am not required to rescue the man’s life?

    I see two strategies for responding to this worry. They differ over whether to accept or reject the following principle:

    If I am morally required to φ in circumstances c, and it is unavoidable, in c, that if I φ I also ψ, then I am morally required to ψ in c.

    Suppose we reject this principle. We should then say the following: What I am required to do in this case is sail to the rock and help the man onto my boat (φ). Under the circumstances – my aunt is watching – performing the first part of this complex action will unavoidably involve demonstrating my sailing skills to my aunt (ψ). But that does not mean that I am morally required to demonstrate my skills to my aunt. (Compare: I’m a doctor and can use my medical knowledge to save a person’s life, at no cost to myself. I know that, unavoidably, saving this person’s life will impress a bystander. Many would say: Morality requires that I save the person’s life; morality does not require me to impress the bystander. It just so happens that, for contingent reasons, I can’t save the person’s life without also impressing the bystander). So we can grant that I am not required to demonstrate my sailing skills to my aunt, without this making trouble for the claim that I am required to sail to the rock and save the man.

    By contrast, suppose we accept the above principle. I would then simply deny that you are not required to demonstrate your sailing skills under these circumstances. Had the drowning man not been present, you wouldn’t have been required to demonstrate your skills; but his presence changes your obligations. Thus understood, Not-Required Part-Whole doesn’t get a grip on this case.

    3) On to your “being rude to your rival” case.

    In this case, you have three options:

    (1) Don’t go down Road A and don’t save P.
    (2) Go down Road A while being rude to your rival, and save P.
    (3) Go down Road A without being rude to your rival, and save P.

    You are right that Impermissible Part-Whole implies that (2) is impermissible. For (2) involves a constituent action (going down Road A while being rude to your rival) which is impermissible.

    By itself, that seems unproblematic. For you have available to you the different complex action (3). And this action is of course permissible. So far, I don’t see a problem for Impermissible Part-Whole.

    Is your worry the following: If I know that, in a choice between doing (2) and (3) I will actually do (2), and doing (2) is impermissible according to Impermissible Part-Whole, then I may be pushed into doing (1)? That would be a bad result, since (1) produces a far inferior outcome to (2) or (3).

    If that’s your worry, I would say two things in response: a) Since I’m a possibilist (or at least lean towards possibilism), I would be tempted to deny that the fact that you won’t do (3) should “push you” towards either of the other options. What you ought to do isn’t affected by what you will do. So what you ought to do continues to be option (3), and that’s all there is to it.
    b) But suppose possibilism is false and we should really be actualists. Even then, your predictable failure to do (3) should push you into doing (1) only if (1), in contrast to (2), were actually permissible. But I don’t think (1) is permissible in this case. (There’s nothing in the case that would appear to make it optional that you save P’s life). And indeed doing (1) seems more seriously wrong than doing (2). So, if you won’t do (3), the “normative second-best” isn’t that you do (1); it’s that you do (2).

    4) Finally, regarding your take on my Three Islands case. Something like this response occurred to me as I was writing up my comments on Theron’s paper. I tried to block it by making option (4), the tropical vacation, sound very attractive, so that missing out on such a vacation would be more than a “slight cost”.
    Of course there are versions of AGW that would rule option (4) impermissible, even so described. The question is whether these aren’t implausibly strong versions of AGW.

  15. Hi Victor,

    Thanks for your post! Your comment seems to assume that I believe that cases of suboptimal supererogation involving complex actions (like your Robot Case II) and cases not involving a complex action (like your Robot Case I) are morally not on a par.
    But I don’t make that claim. To the contrary, I agree with you that pushing Button 2 seems wrong in both your cases. In my comments on Theron’s paper, I merely point out that, in a case like Robot Case II, we could explain our intuition that pushing Button 2 is wrong without having to appeal to a principle like Theron’s AGW. A principle like my Impermissible Part-Whole will do the trick. By contrast, Impermissible Part-Whole will not apply to a case like your Robot Case I. To explain our intuition in that case, we would need something like AGW.
    I take it that one of the attractions of Theron’s AGW is that it allows for a unified account of the wrongness in both types of cases. My reservations about this principle stem not from its implications for cases like Robot Case I (or my Two Islands Case), but from its implications for other cases like Three Islands. That is why I’m not fully convinced that AGW is the solution to our problem. But, again, I agree with Theron and yourself that we do need an explanation for our intuition that pushing Button 2 is wrong in both cases.

  16. Dear all,

    Thank you so much for these incredibly interesting comments on my paper. These are going to help me immensely as I continue exploring these topics. Below I offer a series of replies to the comments that have come in so far; I’ve tried to avoid responding to the comments that are directed at Johann. I prefer temporally concentrated blogging to temporally dispersed blogging, and may accordingly hold off on checking in on the conversation until later in the week.

    JOHANN

    Thank you again, Johann, for this truly wonderful critical précis. Below are some replies to your four numbered comments:

    1. Good point. I knew I shouldn’t have departed from the traditional ‘one versus five’ setup in trolleyology! At least, it’s more intuitive that you’re permitted not to sacrifice your arm to save five (than it is that you’re permitted not to sacrifice your arm to save one hundred). However, making it five on track B rather than one hundred doesn’t really weaken my intuition that it’s wrong to put your arm on track A, saving just the one. At no extra cost to yourself whatsoever, you could have saved five instead, by putting your arm on track B. In fact, my intuition about the wrongness of putting your arm on track A would persist even if we had made it just two people on track B. And to at least most of those of us who believe in moral options, it does seem pretty plausible that you’re permitted not to sacrifice your arm to save two. I’ll note that part of the reason I departed from the traditional ‘one versus five’ setup, making it instead ‘one versus one hundred’, was to make the Arm Donor example more analogous to the sorts of charitable giving cases that Effective Altruists are fond of calling our attention to – those in which $X would do good G if given to charity A but $X would do good 100G if instead given to charity B (the guide dogs versus trachoma surgeries example I gave at the beginning of the paper arguably gets at a goodness difference of G versus 2,000G). Then I thought I could just tell readers to *assume* that it’s permissible not to sacrifice an arm to save one hundred; but I agree that it might be hard to get our intuitions to genuinely take that assumption on board.

    2. I found your discussion here to be fascinating, and the Not Required Part-Whole and Impermissible Part-Whole principles to be very much worth exploring. I see you are already engaged in a discussion with Roger and Victor on this front, so I will instead focus on your other two comments.

    3. I apologize for being nit-picky, but I want to clarify my position here: I would not say that Avoid Gratuitous Worseness (henceforth AGW) supports *intuitions* about cases. I think of intuitions as seeming states. Intuitions are not claims. So I’d instead say that AGW supports certain *claims* about cases. Particular claims about particular cases can be both intuitively plausible and supported by general principles that are themselves intuitively plausible. For example, I think it is intuitive that saving just the one is impermissible in cases like Arm Donor and in Two Islands, that these particular claims are entailed by AGW, and that AGW is intuitive. As I note immediately after introducing AGW, and as the discussion with Victor below suggests, AGW needs to be restricted in various ways if it is itself to be intuitive. But I think that, once the relevant restricting is done, AGW is in itself no less intuitive than the corresponding intuitive claims about Arm Donor and Two Islands. AGW should point to the relevant features common across these particular cases in which it is intuitively impermissible to save just the one (or, more generally, do the lesser good).

    You suggest that if we fully grasp AGW’s *implications*, we may not find it as intuitive as we otherwise might, and we may instead find it puzzling. In particular, you suggest that it may be puzzling that there is sometimes the following *Discrepancy* between the comparative deontic status of acts and the comparative goodness of the outcomes these acts might bring about: it is impermissible to bring about an outcome (saving just the one) that is better than another outcome you could permissibly bring about (saving no-one). (The term ‘Discrepancy’ is inspired by Tina Rulli, who in an excellent draft on conditional obligations refers to the above worry as the discrepant rankings objection.)

    I should note that, strictly speaking, Discrepancy (that it is sometimes impermissible to bring about an outcome that is better than another outcome you could permissibly bring about) is not an implication of AGW alone. Instead, it is an implication of the conjunction of AGW and the claim that there are moral options, e.g. moral options significant enough to permit you to fail to save many lives if it’d cost you an arm. AGW is compatible with moral options, but it doesn’t entail them. So even if we thought Discrepancy were implausible, it is not clear that that would count against AGW, let alone detract from how plausible AGW seems considered independently.

    It’s worth noting here that Peter Singer and some other consequentialists have responded to me (when I gave this paper in Australia in 2015) by arguing that the independent implausibility of Discrepancy together with the independent plausibility of AGW yields reason to reject moral options.

    However, if we find moral options independently plausible, we needn’t find Discrepancy puzzling at all. In my paper, I claim that Discrepancy might seem puzzling if we block out the fact that the act that brings about the better outcome (saving just the one) has a serious wrong-making feature: it is a deliberate refusal to do something much better at no extra cost (saving the five). The act with the worst outcome (no-one is saved) does not have this serious wrong-making feature. That the one act brings about a better outcome does not intuitively prevent the serious wrong-making feature in question from making this act wrong all things considered. Seeing this asymmetry between these two acts makes it seem perfectly non-puzzling to me that it is sometimes impermissible to bring about a better outcome than another you could permissibly bring about. I think Joe’s Villain case below lends further support to these claims.

    Your Three Islands case is brilliant, and I’m still trying to work out what exactly to say about it. However, I should note that, just as AGW does not alone imply Discrepancy, AGW does not alone imply that option (2) in Three Islands is *uniquely* wrong, though AGW does imply that (2) is wrong, given the presence of option (3). Again, it’s compatible with AGW that (1) is wrong. But, perhaps most importantly for present purposes, it’s compatible with AGW that (4) is wrong. Indeed, one might defend moral options of a sort that imply option (1) is permissible, but fail to imply that (4) is permissible. Perhaps there’s room for a more complex view here that opposes you taking a large risk for the sake of getting a two-week vacation, when you could have taken an equally large risk for the sake of saving one life, or five lives. A tentative response, then: insofar as it’s implausible that taking on a large risk to save just one life (2) is impermissible whereas taking on an equally large risk to get a two-week vacation (4) is permissible, we might explore a more complex view according to which (4) is impermissible (and that (1) is permissible, (2) is impermissible, and (3) is permissible). On the other hand, we might argue that it isn’t implausible that (4) is permissible while (2) isn’t, just as Discrepancy isn’t implausible. I am not entirely sure yet which route to take here. But either way, I think more would need to be said to show that Three Islands reveals a problem for AGW as such, or detracts from its independent plausibility.

    4. The conciliatory proposal appealing to the principle of instrumental rationality (Efficiency Principle) is extremely interesting, and I hope that this sort of idea gets explored thoroughly. One immediate thought here is that many people might lack the end of *doing good for others*. Perhaps you save just the one because you have the end of making a new friend, or the end of doing good for *just one* other (a kind of individualistic spin on the end of doing good for others). If so, your end would not be better promoted by saving the five over the one, and so Efficiency Principle would fail to imply that you’d be rationally criticizable for saving just the one. To be able to criticize you in such a case, we might need to appeal to a further claim (inspired by Caspar Hare in *The Limits of Kindness*). We can call the claim *Minimal Decency*: minimally morally decent agents have the end of doing good for others. Then the criticism of the agent who saves just the one would take a disjunctive form: either this agent has the end of doing good for others and so is rationally criticizable for saving just the one (by Efficiency Principle) or the agent does not have the end of doing good for others and is morally criticizable for lacking this end (by Minimal Decency). I wonder what you and others make of this somewhat revised version of your conciliatory proposal. Also note that the proposal would need to be revised a bit further to generate criticism of saving just the one in cases in which saving the five over just the one is slightly costlier to you.

    DOUG

    Thanks for your comments and reply to my earlier question. I will have to think about this further, but what you say here sounds plausible to me. Part of what’s at issue in deciding between rejecting (4) and rejecting (5) on your list of five claims is whether it’s ‘more moral reason than’, ‘morally better than’, or both, that changes in virtue of extra alternatives being present. You are okay with it being the case that there’s more moral reason to choose B over A when these are the only alternatives, but more moral reason to choose A over B when C is an alternative too. At least in the present context, you are not okay with the parallel claim about moral betterness.

    Also, I think Johann could be read as raising skepticism about claim (2) on your list (he raised skepticism about (1) in connection with the specific ‘one versus one hundred’ version of Arm Donor I had in my paper, but if I had instead offered a ‘one versus five’ version of Arm Donor I suspect he’d accept your (1) but have doubts about your (2)).

    While I’m at it, I want to make sure two things about my paper are clear (I have no special reason to think that you aren’t already clear about these things, Doug, but this seems like a good place to make these hopefully clarificatory remarks). First, in the paper, when I say one act is better than another, I mean it brings about an impartially better outcome. This leaves it open what to say about the *moral betterness* of acts in the sense we’ve been dealing with here, including whether moral betterness can change in virtue of extra alternatives being present. Second, in the paper I wrote, “It is a familiar feature of nonconsequentialist ethics that the moral status of an act can depend on which alternative acts are available.” This is not what I should have written. As Johann correctly pointed out to me in an earlier conversation, consequentialist views also have this feature: B might be obligatory when just A and B are the alternatives (if out of A and B, B is the act that would promote the most good), and yet impermissible when A, B, and C are the alternatives (if out of A, B, and C, C is the act that would promote the most good). Instead I should have pointed out that it is a familiar feature of nonconsequentialist views that how A and B *compare morally* (whether in terms of ‘morally better than’ or ‘more moral reason than’) can depend on which alternative acts are available.

    CHRISOULA

    Thanks for your comments. I agree that there’s a plausible case for thinking in Three Islands all the options are pairwise incomparable or incommensurate with respect to your moral reasons for action, except for options (2) and (3), with (2) being in one sense dominated. (I say ‘in one sense’ because (2) won’t be dominated in the sense Joe refers to in his Villain example below, as in Three Islands there’s no overlap between the people who would be helped by (2) and those who’d be helped by (3).)
    (2) wouldn’t be dominated in variants of the case in which (3) is slightly costlier to you than (2), but presumably in this variant of the case (2) and (3) would nonetheless be pairwise comparable, with (2) coming out as morally disfavored. As I said in my reply to Johann, though, I am not sure what to think about option (4): maybe you do have more moral reason to go for (2) over (4), and likewise to go for (3) over (4). This would not upset the moral ranking of (2) and (3), but it would bear on which options are pairwise comparable with respect to moral reasons, and on whether (2) is *uniquely* morally disfavored.

    ROGER

    Thanks for your comments to me and to Johann. You suggest that, according to common sense morality, once an agent has done enough good, she doesn’t have to justify not doing more. I am not sure what common sense morality would say about this. After all, in many cases in which an agent has done a lot of good, or what is intuitively *enough good*, she has also incurred a lot of cost to herself in doing all that good, and it does seem plausible that, according to common sense morality, once an agent has incurred *enough cost* to herself in doing good, she is not required to incur any further cost to do still more good. Common sense morality on these matters might imply that, once an agent has done enough good, she’s incurred enough cost, and so *has* a justification for not doing more.

    In “Non-Consequentialism, the Person as an End-in-Itself, and the Significance of Status” (p. 356), Frances Kamm suggested that a “moral moderate” may (once he’s promoted enough good already) deny that there is a reason to promote the greater good because “he thinks it understandable that a person might resist the idea that he is a means for achieving the greater good, that he might assert his independence from that goal and so recognize his own existence as an end-in-itself.” We might read this as a different conception of common sense morality than the one I sketched above. On this different conception of common sense morality, once an agent has done some sufficiently large amount of good, then, even if doing all this good cost her nothing and even if it would cost her nothing to do some still much greater amount of good, and if other things were equal, she would be permitted not to do all this extra good. But, modulo the sorts of worries Victor raised about how to restrict ‘worseness’ in AGW, it seems to me that, no matter how much good you’ve already done, you *do* have to justify not doing more. Putting the point in terms of justifications *to* people for whom your failure to help would be worse: it would strike me as pretty odd that, even after having (costlessly) saved a million, you would not owe someone you could costlessly save a justification for not doing so. It seems that at least in this case and in related ones, you do owe a justification (perhaps a justification to people, or perhaps a justification construed more impersonally) for not doing more good. If this isn’t entailed by common sense morality, then I boldly claim common sense morality needs revision.

    Earlier I said that it seems plausible that, according to common sense morality, once an agent has incurred *enough cost* to herself in doing good, she is not required to incur *any* further cost to do still more good. As I noted in my paper just before introducing the various strengthened versions of AGW, we can suppose that in my cases (and indeed it seems plausible that in many real world cases of giving to charity) the agent in question has not yet reached the threshold of cost to herself that would permit her not to incur *any* further cost (however slight) to do still more good (however much more good is at stake). Nonetheless, even if before and after incurring some substantial cost C an agent would still not have reached the aforementioned threshold of cost, it is plausible that there are cases in which she is permitted not to incur C for the sake of promoting some particular amount of good (where doing so would be better impartially).

    On whether to formulate AGW disjunctively in the way you suggested: I had thought about doing this when writing up the paper, and perhaps should have stuck with that formulation! Also, I agree with you and Johann that AGW (Avoid Gratuitous Worseness) could have a better name, as *gratuitous* doesn’t seem quite right. Maybe just ‘Avoid Worseness’ would have worked.

    CALEY

    Thanks for your comments. Below are some replies to your four numbered comments:

    1. This might be right. Suppose you can either (A) do nothing, (B) promote some good G at cost C to yourself, or (C) promote good G+ at cost C to yourself. In cases where the difference in goodness between G and G+ is very slight, and in cases in which G is itself a very slight amount of good, some might argue that there is moral freedom to choose between (B) and (C). (Suppose it’s equally costly to me to raise my eyebrows as it is not to, but if I did raise them this would very mildly amuse someone – am I required to raise my eyebrows?) In *The Limits of Morality* (chapter 7), Shelly Kagan gets into some issues along these lines. I had hoped that by offering formulations of AGW in which the difference in goodness to be promoted is large, I could bypass these sorts of issues about moral freedom, and make it appear more plausible that you do not have moral freedom to choose between promoting G and some much greater G++.

    2. Whether or not AGW-weak applies to any real world cases might interact with views about specificity and precision in measuring cost, and what we can reasonably mean when we say that the cost to you of doing A is *the same size as* the cost to you of doing B. If it is true that cost is in principle measurable *very precisely and very specifically*, this would presumably make it more improbable that there’d be any real world cases in which the cost of doing one thing is *exactly the same size* as the cost of doing something else. Suppose that is right. It would remain true that AGW-still pretty weak (p. 93) would apply to many real world cases.

    3. I agree that AGW-moderate or AGW-strong may plausibly be objected to by nonconsequentialists, but the extreme *Cost Autonomy* view you sketched (a person is not morally obligated to undertake any cost except those where the obligation is generated by their own actions) seems very implausible. Notice, e.g., that it’d imply that there’s no obligation to save a drowning child even at a very low cost to yourself. I believe nonconsequentialists should not accept this extreme view. You are right that if we are nonconsequentialists but do not accept Cost Autonomy, and instead claim that you’re obligated to promote a lot of good if you could do so at small cost, there is a ‘slippery slope’ worry lurking here: what if you could promote still more good by incurring a slightly bigger cost, and still *more* by incurring a slightly bigger cost than *that*, and so on? But this sorites-like issue is probably going to come up whenever there are multiple morally relevant factors that are smoothly gradable, can be scaled up, and can trade off against one another. I am not sure the nonconsequentialist who accepts AGW-still pretty weak, for example, is in a *specially* worse boat on this score than many others who admit more than one morally relevant factor (even hedonistic utilitarians say that pain intensity and duration both matter, and get into *Hangnails for Torture* type troubles). It would be very interesting if this were not the case, and if nonconsequentialists faced a special pressure to accept Cost Autonomy. That would make consequentialism look comparatively more attractive than it otherwise would.

    4. I’m not entirely sure yet what to say about this last comment, but it is interesting, and I will think about it more. In my response to Johann’s third comment, above, I noted that Singer and other consequentialists have responded to me by arguing that the independent implausibility of Discrepancy together with the independent plausibility of AGW yields reason to reject moral options. I wonder if your point here is similar to that one.

    JOE

    Thanks for this neat example, which I’ll refer to as *Villain*. As I said in my response to Johann, Villain does, I think, help remove puzzlement surrounding what I called the Discrepancy. As you have already hinted at yourself, a critic might respond to us that there’s an important disanalogy between cases like Villain in which the good to be promoted *overlaps* (such that the good done by one act is in some sense a proper part of the good done by another act), and cases like Arm Donor or Two Islands in which the good to be promoted does not overlap in this way. Indeed, Jeff McMahan suggests something along these lines in the excellent paper in progress of his that Johann linked to above. McMahan quotes a case from Parfit in which you can either do nothing, or, at great cost to yourself save a stranger’s right arm, or, at the same great cost to yourself save both of this stranger’s arms. Now, this case is different from Villain in that in Parfit’s case all the benefits at stake would be going to the *same person*, but the cases are similar in that they both involve the relevant sort of overlap (the good promoted by one act is in some sense a proper part of the good done by another act) flagged above. McMahan writes, of Parfit’s case, that of his two different helping acts, the agent can “confer only one benefit at great cost to himself or confer that *same* benefit to the same person *and* confer *another* equally great benefit at no further personal cost. While this agent had sufficient reason to provide neither benefit, he has *no* reason to provide only one rather than both. To prevent the loss of only one of the stranger’s arms would be *gratuitously* to allow the stranger to suffer the loss of an arm. And to allow a great harm to occur when one could prevent it at no cost to anyone is wrong.” Perhaps there is something *especially morally undesirable* (and disrespectful) about promoting the lesser good in Parfit’s case and in Villain, something that’s lacking in Arm Donor and Two Islands. But even if so, I do not see how this difference between cases involving overlap and cases not involving such overlap could go all the way to justifying the (counterintuitive) view that it is not wrong to do the lesser good in Arm Donor, or in Two Islands.

    VICTOR

    This is a good worry to raise about how to restrict or formulate AGW, and I appreciate any help on this front (including offering a more robust defense of AGW). I am not sure what to think just yet about your snappily-titled AGWDC. But let me say a few things about the first part of your comment addressed to me. You wrote, “…suppose that I can save 100 in a way that will benefit me, or save 100 in a way that will not benefit me. Other things are equal. I am permitted to do the latter, but principles like AGW imply otherwise.” Whether AGW will really imply otherwise depends on how big the benefit to you is, in particular whether it triggers the ‘much worse’ clause of AGW. Suppose the benefit to you is big enough to trigger this clause. Now, unqualified AGW would indeed imply that it is wrong to save 100 in a way that will not so greatly benefit you. We might find this implausible, and so seek to further restrict what feeds into ‘worseness’ in AGW. Perhaps, just as we could screen the well-being of merely possible persons out of ‘worseness’ in AGW, we could likewise screen the agent’s own well-being out of ‘worseness’ in AGW. Could I get away with this? As I suggested in my reply to Johann, AGW should point to the relevant features common across various particular cases in which it is intuitively impermissible to save just the one (or, more generally, do the lesser good). So we might not have found the right formulation of AGW until we’ve looked at a lot of cases – they will inform how to relevantly restrict or qualify ‘worseness’ in the principle. AGW would remain intuitive considered independently and as a general principle, but the various particular cases highlight different ways of spelling out ‘worseness’ that, when plugged into the general principle, could affect its independent intuitiveness. The worst case scenario for me, on this front, would be that there’s no general level intuition to provide *further* support for already independently intuitive claims about particular cases. I still think I’d be well-positioned to appeal to intuitions about cases like Arm Donor and Two Islands, and then argue by analogy against Common Assumption. But again I don’t think I have to make that sort of retreat, as a general principle in the vicinity of AGW does seem independently plausible.

    SERGIO

    Thanks for this comment, which helpfully picks up on the connections between this discussion and Effective Altruism. And no need to apologize for not reading the paper yet. First, I take my argument (starting in section VII of the paper) to defend a claim stronger than what Johann called the Efficiency Requirement. The stronger versions of AGW (Avoid Gratuitous Worseness) imply that it is wrong to fail to do much more good that you could have done at only slightly greater cost to yourself (where, as you note, the slightly greater cost could be a slightly greater cost in terms of your projects). Second, other things equal, it’s probably harder to tell which act does more good in opaque choice situations. But I take it Effective Altruists would point out that there are plenty of non-opaque choice situations in which it’s plausible that one act does much more good than another (e.g. provide one blind person with a guide dog or cure 2,000 people of their blindness altogether). They could also point out that there are plenty of cases in which giving $X to a much more cost-effective charity rather than to a much less cost-effective charity would not be much costlier to you in terms of projects and the like. In “The Moral Imperative Towards Cost-Effectiveness” Toby Ord plausibly claims that inefficient giving is *typically* the result of *ignorance* about the significance of the cost-effectiveness landscape. Even if there are plenty of cases in which our projects steer us toward charities that do considerably less good impartially assessed than we could do if we gave elsewhere, there are plenty of other cases in which our projects don’t pull strongly either way (especially when looking *within* charitable cause categories) and yet the good does.

  17. Hi both

    Sorry, I wasn’t meaning to attribute to you the view that the distinction between simple and complex cases is salient, Johann. I was just exploring your view about it a bit further, and suggesting mine. As you agree that there is no difference between simple and complex action cases, your three island case can be strengthened as follows. Suppose that there is one island. X bears a great cost or risk to get to the island. Once there he can either lie on the beach having a nice holiday, save one person, or save five. It seems even more implausible that X may lie on the beach. But then I agree with Theron that it is not clear that AGW permits X to lie on the beach.

    I think that what this case suggests is that we need to ground our discussion of these cases in a deeper analysis of agent-relative prerogatives, and what counts as a (roughly speaking) ‘sufficiently great cost’ for a person not to be required to prevent harm to others. We might understand a cost as an option that the person has reason to value, and the degree of cost as relating to the strength of the reason that the person has to value the option. A person is permitted to stay on the shore because she has powerful reason to value her own safety or welfare. But she is not permitted to take the risk and go to the island to lie on the beach, because she does not have powerful reason to value that option. This is only a rough suggestion, of course, but it does explain our intuitions in these cases. As I say, ultimately I think that the best way to make progress is to consider more deeply just what grounds agent-relative prerogatives. These complex cases should be understood in the light of the best account of that.

    Finally, I think that what your own discussion, Theron, and my discussion of self-harming suggest is that it is wrong to focus on gratuitous worseness. What you should rather be focusing on are differences in facts between actions, where these facts are independently duty conferring.

    This shouldn’t be surprising, because, at least for non-consequentialists, only certain kinds of worseness are duty conferring. We don’t have any general duty not to make things worse for ourselves, or to write better novels rather than worse ones, or to cook better dinners rather than worse ones, or to tell better jokes rather than worse ones. And that is so even if the novels, the dinners, and the jokes are much worse than alternatives that we could secure.

  18. One other thought: in order to be plausible, ‘worse’ or ‘better’, as properties of acts, had better track duty-grounding facts about acts, or variations on AGW will fail for another reason. For example, I am required to keep one of my promises rather than help you to keep two of yours if doing either thing is equally costly to me. Some might claim that helping you to keep two of yours is a better act, but nevertheless wrong. If that is right, AGW fails. You might respond that acts that are wrong are necessarily worse than acts that are required. Doug suggested earlier, though, that acts that are permissible need not be morally better than acts that are wrong, so I wonder whether this is true.

    In the light of this, it is also worth reflecting on the fact that it can be permissible for a person not to bear a cost, but if she does she must do the act which brings about the worse outcome. Consider:

    Kids on the Island: It is extremely dangerous for me to get to an island where my child and two stranger children are ill. If I get there I can cure either my child, or the two stranger children.

    It may be permissible for me not to make the trip if the trip is dangerous enough and the illness is not sufficiently serious. But if I do go I should cure my child.

    Again, you might claim that saving my child is the better act (worse in terms of outcome, but better in some other way), so it is not a counterexample to AGW. But the case does reinforce the idea that we are interested in differences between acts that ground duties rather than other ways in which acts can be better or worse.

  19. Many thanks for this, Victor. *Kids on the Island* is a nice case, and I share your intuition about the sort of conditional obligation present there. But I am not sure this case or the promises case could function as counterexamples to AGW, as AGW has an ‘other things equal’ clause (and both cases bring in special nonconsequentialist factors). Still, with you, I think it’s plausible that not all worseness is independently duty conferring, and that there are things besides worseness that are.

  20. Hi Victor. Thanks again for your helpful comments. I would be curious to know what you or others think of the following:

    Earlier I said that not all worseness is independently duty conferring. The thought then would be to restrict AGW so that it doesn’t refer to *worseness per se*, but only to worseness that is independently duty conferring. This may be enough to link the intuitive verdicts about nearby rescue cases (Arm Donor and Two Islands) with the rejection of Common Assumption (about distant rescue cases of charitable giving). That is, one could argue that just as the fact that you can avoid the worseness of five dying rather than one dying at no extra cost to yourself in nearby rescue cases is independently duty conferring, so too is the fact that you can avoid the worseness of five dying rather than one dying at no extra cost to yourself in distant rescue cases of charitable giving. However, I think what you are suggesting is that some nonconsequentialists might prefer that I reformulate my arguments without appealing to worseness at all (as they might claim that we’re not under any pro tanto duty to promote the impartial good per se).

    Indeed, I want my argument to appeal to nonconsequentialists of this stripe, and I should have done more to accommodate this sort of position in my paper. I believe I *can* still say much of what I wanted to say initially without appealing to worseness (perhaps I can say nearly all of what I wanted to say that’s practically relevant to how nonconsequentialists should think about Effective Altruism). For example, just as the fact that you can avoid five dying rather than one dying at no extra cost to yourself in nearby rescue cases is independently duty conferring, so too is the fact that you can avoid five dying rather than one dying at no extra cost to yourself in distant rescue cases of charitable giving. Or is there special reason to be skeptical about the obligation to save the greater number in distant rescue cases?

    Similarly, note that in my response to Roger I wrote “modulo the sorts of worries Victor raised about how to restrict ‘worseness’ in AGW, it seems to me that, no matter how much good you’ve already done, you *do* have to justify not doing more.” Again, an analogous claim could have been made without appealing to worseness at all. The thought instead might be, no matter how many lives you’ve already saved, you do have to justify not saving more lives (even if you don’t have to justify not doing more good). Whether in a nearby rescue case or in a distant rescue case, you need a justification for saving the one rather than the five. There is a background normative pressure to save the greater number, and this pressure needn’t be characterized in terms of impartial goodness.

  21. Hello Theron,

    Here is a way to clarify the general theoretical issue:

    Compare the following three-option cases:

    Case 1:

    i) Do nothing
    ii) Do something that is costless to me, nc, and then do something that is better than nothing, x
    iii) Do something that is costless to me, nc, and then do something else that is better than nothing, y

    In some cases, the difference between x and y is duty conferring. Suppose that this is true, and that as a result I am required to pick iii).

    Case 2:

    i) Do nothing
    ii) Do something that is costly to me, c, and then do something that is better than nothing, x
    iii) Do something else that is costly to me, c, and then do something else that is better than nothing, y

    If there are agent-relative prerogatives, and c is costly enough, I am permitted to pick i). Suppose this is true. The question is whether I am required to pick iii) rather than ii) if I do c.

    This is true only if the fact that I do c makes no difference to my duty to pick y over x. Call the view that doing c makes no difference No Difference (this is roughly the same idea as the AGWDC principle I offered above). On this view, if I am required to pick iii) in some version of Case 1, I am required to pick iii) over ii) in the relevantly similar version of Case 2.

    I then had two thoughts about cases like this. First, the practical implications of cases like this for cases of charitable giving depend on which differences between x and y are duty conferring. Principles like AGW need to be amended if gratuitous worseness is not, or not always, duty conferring. That, though, is just a general question about what differences between options can make it true that I am required to pick one option rather than another. And whilst it is difficult to resolve this question, almost all of us agree about some cases – for example, that there is a duty to save the greater number from a threat of death.

    But the more interesting theoretical issue is whether No Difference is true, and I think that you implicitly rely on something like this. In an earlier post I offered some tentative suggestions why it is not true, and I find it hard to see what general argument can be offered to support it. It might be true that the difference between doing c and nc sometimes makes a difference to whether I have latitude to pick x over y.

    Some cases we have looked at suggest that any difference between c and nc is sometimes not decisive, so that my duty to pick y over x persists regardless of whether I need do c or nc to do x or y. But this does not show that No Difference is true, so it does not tell us, more generally, what we should say in cases of charitable giving. It tells us what we are required to do only in cases that are relevantly like Arm Donor.

  22. Thanks Victor – that’s a very helpful clarification! I totally get where you’re coming from now; the No Difference issue is distinct from the issues about accommodating nonconsequentialists who reject the idea that worseness per se can be duty conferring (i.e. who reject a pro tanto duty to promote the good) that I had in mind earlier today.

    I do *not* want to appeal to No Difference (a simple appeal to No Difference should presumably set off “additive fallacy” alarm bells). I should have made this clearer in the paper, but I think we can understand AGW’s ‘other things equal’ clause in a way that renders the principle compatible with the rejection of No Difference. The idea is that, if you’re required to do Y over X when it is costless to you to do either, but not required to do Y over X when they each would cost you C, then there’s a kind of contextual interaction to do with the presence of cost C that makes it the case that not all other things are equal. But then, as you rightly note, we’ll need to do more work to figure out when these contextual interactions occur in order to determine when AGW does and doesn’t apply. For the purposes of establishing my main claim, I just need AGW to apply in *many* cases of charitable giving (it does appear to apply in many such cases, as many are relevantly similar to Arm Donor). But my paper certainly leaves room for more work to be done, sorting out what to say about other cases!

    While I don’t in the paper get into the sorts of interesting cases about self-regarding and other-regarding duties you noted in an earlier response, I do mention one way in which a close cousin of No Difference might fail. No Difference has a cousin, No Difference*. According to No Difference*: if you’re required to do Y over X when doing either would cost you *a little* and Y would cost you slightly more, then, however much it would cost you to do Y and however much it would cost you to do X, as long as it’d cost you only slightly more to do Y over X, you’re required to do Y over X. On p. 93, I say, “We might make an exception for those who are very badly off, or who have in aggregate incurred very large costs for the sake of promoting the good. Perhaps such people are not required to incur any further costs for the sake of promoting the good, no matter how small the costs and no matter how much good they would promote if they did incur them.” That goes against No Difference*, as the idea is that if the cost of promoting either the lesser or the greater good is sufficiently great to kick you over the relevant overall cost threshold, then you’re not required to promote the greater good over the lesser good even if the difference in cost to you is slight (and since AGW-still-pretty-weak’s ‘other things equal’ wouldn’t be satisfied here, it wouldn’t imply that you are required to promote the greater good over the lesser good here).

  23. Thanks Theron,

    I agree that it is highly plausible that there are plenty of cases where the difference between c and nc will not make the relevant difference because the contextual interaction does not occur, so we can conclude from the duty to do y over x in case 1 that there is a duty to do y over x in case 2. And I agree that there are plenty of cases of charitable giving where this is true. So I think that your conclusion is along the right lines. The discussion has helped me get clearer about how to understand these cases, so thanks for that.

    I think that the way I put it helps to clarify the normative issues a bit, though. But thanks for your excellent paper and for the discussion – I’ve really learned a lot.

  24. Thanks to you too, Victor! You’ve greatly helped me better understand these cases.

    Many thanks again to others who have commented, with special thanks again to Johann for the wonderful critical précis (his part-whole principles, and his conciliatory proposal appealing to the Efficiency Principle, are very much worth exploring further). This PEA Soup discussion has really given me a lot to think about going forward!

    If anyone wants to continue the conversation with me, or has any relevant reading recommendations, please feel free to send an email.
