A Dilemma for Effective Altruism

This post focuses on an underappreciated debate in normative ethics, viz. the actualism/possibilism (A/P) debate, and a problem that I believe it poses for effective altruists (EAs). Roughly, EAs implicitly assume contradictory positions in this debate and, moreover, taking a consistent stance will force EAs to take on commitments that (i) they don’t want to take on and (ii) are seemingly antithetical to the movement. I’ll first provide a quick overview of the A/P debate and then pose my dilemma for EAs.

  1. The Actualism/Possibilism Debate

Suppose that you have been invited to attend an ex-partner’s wedding and that the best thing you can do is accept the invitation and be pleasant at the wedding. But, suppose furthermore that if you do accept the invitation, you’ll freely decide to get inebriated at the wedding and ruin it for everyone, which would be the worst outcome. The second best thing to do would be to simply decline the invitation. In light of these facts, should you accept or decline the invitation? (Zimmerman 2006: 153).

Roughly, actualists hold that you’re obligated to decline the invitation because what would actually happen if you decline is better than what would actually happen if you accept. By contrast, possibilists hold you’re obligated to accept because doing so is part of the best series of possible acts that you can perform.

  2. The Problem for Effective Altruism

I’ll understand the term effective altruist to refer to someone who believes that they ought to be doing the most good they can, either because they endorse effective altruism as a normative thesis or because they have adopted effective altruism as a non-normative project.

2.1 Effective altruists’ implicit actualist assumptions

EAs implicitly assume actualism most often when trying to assuage concerns about the demandingness of being an EA. For example, in The Most Good You Can Do, Peter Singer seems to endorse the strategy of affluent people keeping some “modest level of comfort and convenience” even if it’s possible for them to do more good by giving up those luxuries. He endorses this strategy since giving up more is likely to be “counterproductive” (p. 9). He raises similar considerations later in the text when discussing the demandingness of taking a high-paying job one doesn’t find intrinsically valuable in order to earn to give. He recognizes that earning to give is “not for everyone” and cautions against it for people who won’t be enthusiastic about “making profits for their employer” even if doing so is necessary for one to do the most good they can over the course of their life (p. 47). Such considerations are echoed by Will MacAskill in Doing Good Better (p. 149). MacAskill also factors such considerations into the expected value of the choices one makes, which implicitly assumes actualism over possibilism. In the concluding chapter of his book, MacAskill even advises his readers to set up recurring charitable donations for actualist reasons.

2.2 The problem with accepting actualism and being an effective altruist

Although appeals to actualism may help mitigate demandingness worries, actualism also seems to require actions seemingly antithetical to the commitments of effective altruism. To illustrate, consider Partying Pete.

Partying Pete: Pete is contemplating gambling away his millions of dollars over a weekend in Vegas. In doing so, he’ll bring some pleasure to himself and friends. Regardless of his intentions today, if Pete does not choose to spend his money in Vegas this weekend, he’ll later decide to spend it on blood diamonds for himself, although he could, at the later time, decide to donate any money he has to an effective charity.

Actualism entails that Pete is obligated to spend his millions partying in Vegas. But surely Pete doesn’t even come close to doing the most good he can by gambling away his millions. After all, actualists and possibilists agree that Pete can forgo a Vegas trip and then donate his money to an effective charity. Yet, actualists deny that Pete has an obligation to forgo gambling away his millions in Vegas since what he would otherwise do is worse. It seems to me that Partying Pete is a terrible EA, though actualist EAs have to say otherwise.

2.3 Effective altruists’ implicit possibilist assumptions

EAs implicitly assume possibilism in response to worries that effective altruism is too permissive. Ethical offsetting is one such example. Ethical offsetting is “the practice of undoing harms caused by one’s activities through donations or other acts of altruism”. Carbon offsetting is an instance of ethical offsetting. A more unconventional example would be eating meat and then donating to non-human animal charities to “cancel out” the number of non-human animal deaths one causes. EA discussion forums show that EAs typically oppose ethical offsetting on the grounds that one can do more good by both <abstaining from eating meat and donating to animal charities> and by both <not traveling and purchasing carbon offsets>.

Most notably, effective altruists have repeatedly argued that people should not simply give to charities that are “close to their heart” because doing so is often radically ineffective (Singer (2009, ch. 4-7); Singer (2016, § 4); MacAskill (2016, ch. 3-5)). This too assumes possibilism.

2.4 The problem with accepting possibilism and being an effective altruist

Although appeals to possibilism may prevent effective altruism from being too permissive in one sense, possibilism also requires EAs to perform actions that are seemingly antithetical to the commitments of effective altruism. To illustrate, consider a variation of Parfit’s Russian Nobleman case.

Greedy Gael is thinking about giving away her expendable income each week to the most effective charities at the time. However, she would do more good if she invested all of her money and donated it on her deathbed to the most effective charity at that time. Yet what Gael would actually do if she invests all of her money is, on her deathbed, decide to burn it rather than let anyone else benefit from it.

Possibilism entails Gael is obligated to forgo donating any money now, instead investing it all, even though this would result in a suboptimal outcome. This seems antithetical to effective altruism because it requires people to act in ways they know will not only result in them acting wrongly, but also result in the least (not the most) amount of good.

  3. Conclusion

To recap, EAs implicitly assume contradictory positions in the A/P debate. Each position in the debate forces EAs to take on commitments they don’t want to take on and that are seemingly antithetical to the movement. Actualism entails that Partying Pete is a good EA and undermines common arguments effective altruists make against ethical offsetting and giving to charities close to the heart. Possibilism entails that Greedy Gael ought to forgo donating her money in favor of investing it and undermines common responses effective altruists give to demandingness objections.

12 Replies to “A Dilemma for Effective Altruism,” Guest Post by Travis Timmerman

  1. Interesting post! I disagree that EAs tend to assume possibilism at all. More commonly, I think, they assume that people currently can form effective intentions to do better. If pressed on the possibility that the more realistic alternative is “doing nothing”, they will (at least in the conversations I’ve been part of) agree that suboptimal giving is better than that — while then stressing that we should work to change cultural norms around giving so that giving effectively becomes a psychologically feasible option for more people.

    Assuming, then, that EAs go “whole hog” on Actualism (as it seems to me they clearly should, given their commitment to “effectiveness”, with bonus points for Actualism being the correct view anyhow), is Partying Pete a serious problem? I wouldn’t think so. Partying Pete is a terrible EA because he’s psychologically incapable of (now) forming effective intentions to give effectively. (It’s not as though “whether someone fulfils their current obligations” is the only way, or even a remotely adequate way, of evaluating their overall character!)

    Moreover, given that Pete is *later* capable of giving effectively, and instead freely chooses to buy blood diamonds, his later self is clearly acting very wrongly by EA lights.

    I think the issue would be much clearer if you introduced time indexing, evaluating agential time-slices. You write: “Actualism entails that Pete is obligated to spend his millions partying in Vegas. But surely Pete doesn’t even come close to doing the most good he can by gambling away his millions.” But actually, given the stipulations of the case, Pete *at t1* does do the most good he can by gambling away his millions, and that’s precisely the reason why Actualists (and, indeed, anyone sensible) would encourage him to do so. Right?

  2. Hi all,

    Before I reply to any comments, I should note that this is an extremely truncated version of an argument I make in a forthcoming paper titled “Effective Altruism’s Underspecification Problem.” In the full paper, I flesh out the dilemma more, consider objections, and offer what I take to be the best way out for EAs. There’s no view that will give EAs everything they want, but they can get most of what they want if they reject actualism and possibilism in favor of an alternative view Yishai Cohen and I developed known as hybridism.

    The paper is a chapter in the book “Effective Altruism: Philosophical Issues” edited by Hilary Greaves and Theron Pummer and forthcoming with Oxford University Press.

    https://global.oup.com/academic/product/effective-altruism-9780198841364?q=effective%20altruism&lang=en&cc=gb

    Feel free to email me (travis.timmerman@shu.edu) if you want an advance copy of the paper. Check out the book too since the other chapters were written by some really amazing philosophers.

  3. Hey Travis,

    With Partying Pete, I think much of the unintuitiveness of what you say comes from calling him ‘a terrible EA’, a description I think is inaccurate. The point is that EAs have motivations that are highly inconsistent with his psychology. It’s more plausible to say both (a) that he is terrible simpliciter and (b) that he is not an EA. However, a good utilitarian should say that he should do what you suggest the actualist should say: i.e., party instead of using his money for bad purposes (again, we are assuming that everyone is certain that this is what he *would* do with his money). That he would do this (party) does not make him a good EA, but it does make him a good utilitarian.

    Greedy Gael is also odd because of her bizarre psychology. She has EA tendencies, at least today, but it’s very difficult to understand how an individual who wants to make the world a better place would fail to do so given the opportunity, especially when she sets up the opportunity herself. Given this alien psychology, I think our intuitions are a little confused. Regardless, as someone who rejects possibilism (and sees no reason for EAs to accept possibilism), I think the conclusion that she ought not donate is correct.

    Anyways, I am with Richard as well.

    [PS: A public service announcement. When posting URLs, you can truncate anything after the “?” Here, the string “?q=Pummer&lang=en&cc=gb&fbclid=IwAR1Hb57MuaIE0Q9pR5YVuTA4U1yOwgzhUeV1ejRItoYdItdwcE7cGIIlAgU” is just a bunch of information for Facebook and OUP to track you. The cleaner and less tracking URL is “https://global.oup.com/academic/product/effective-altruism-9780198841364”]

  4. Hi Richard,

    Thanks so much for your comments! While some of the biggest names in EA (Peter Singer, Will MacAskill) are actualists, I give examples in the full paper where those EAs do seem to at least implicitly assume possibilism in certain cases. At the same time, EA discussion forums seem to reveal that many implicitly assume possibilism. I would expect effective altruists to be divided on this debate simply because it’s a complicated debate where each view on offer has much to be said for, and against, it. But it would be interesting to see whether most effective altruists are inclined to accept actualism. If they are, then being an effective altruist will be much less demanding for most people than is typically assumed.

    You wrote “if pressed on the possibility that the more realistic alternative is “doing nothing”, [effective altruists] will (at least in the conversations I’ve been part of) agree that suboptimal giving is better than that — while then stressing that we should work to change cultural norms around giving so that giving effectively becomes a psychologically feasible option for more people.” I agree, though I don’t think this response assumes actualism. This sort of response also seems consistent with my favored view, known as hybridism, which posits a possibilist moral obligation and a practical ought that guides action when an agent cannot presently ensure that they will do what they are obligated to do over the course of their life.

    I would be interested to hear what effective altruists want to say about Partying Pete, but I do take it to be a serious problem for them. Actualism would license lots of behavior typical EAs are opposed to (e.g., people could be permitted (even obligated) to eat meat for mere gustatory pleasure, never donate their expendable income to an effective charity even when they could do better, etc.). But I think there’s a bigger problem. It’s *not* the case that Partying Pete is fulfilling his current obligations and will later act wrongly by purchasing blood diamonds. According to actualism, Partying Pete may fulfill his moral obligations at every moment of his life, and so would not ever be acting wrongly by actualist effective altruists’ own lights. To illustrate, suppose that the act-sets available to Pete are the following, which are ranked from best to worst.

    (1) Choose to forgo gambling in Vegas at t1 and donate all his money to an effective charity at t2.
    (2) Gamble away all his money in Vegas at t1 and purchase nothing at t2.
    (3) Choose to forgo gambling in Vegas at t1 and purchase blood diamonds at t2.

    To keep things simple, suppose that Pete dies immediately after t2, and so cannot perform any acts after t2. Now, suppose that the following subjunctive conditional is true.

    No matter what Pete intends to do at t0, were he to <forgo gambling in Vegas at t1>, then he would freely decide to <purchase blood diamonds at t2>.

    Finally, suppose that, at t0, Pete actually chooses to <gamble away all his money in Vegas at t1>.

    Now, since he’s gambled away all of his money, he won’t purchase blood diamonds. According to actualist versions of effective altruism, Pete has fulfilled his obligations at every point in time. He’s done nothing wrong. As such, it doesn’t seem like he’s criticizable at all. If anything, he should be praised since he’s fulfilled all of his moral obligations and, we can suppose, he did so for the right reasons.

    You’re right that, at t1, Pete performs the t1-act, from among the set of t1-acts he can perform, that *would* bring about the best outcome. But, I take it, actualists and possibilists agree that Pete doesn’t perform the best act-set he can from t1-t2. The best act-set he can perform from t1-t2 is (1). After all, actualists and possibilists agree that, at t1, Pete can <forgo gambling in Vegas> simply by intending to do so and, they agree, at t2 Pete can <donate all his money to an effective charity> simply by intending to do so. So, they agree, Pete can perform (1) from t1 to t2 by forming the right intentions at the right times. While actualists agree Pete *can* do this, they just won’t regard it as a *relevant option* for him at t1.

    It seems to me, however, that performing (1) is a relevant option for Pete at t1. He shouldn’t avoid incurring an obligation to perform (1) simply because he’s disposed to act wrongly. You wrote that anyone sensible would encourage Pete to gamble away his millions in such a case. I agree that this is what any sensible person should encourage him to do, but that doesn’t mean Pete is, in fact, obligated to gamble away his millions. It just means that bystanders are obligated to bring about the best outcome *they* can, and that requires encouraging Pete to perform a wrong act now in order to prevent him from performing an even worse act at a later time. Likewise, according to my favored form of hybridism (https://philpapers.org/rec/TIMMOA), Pete is obligated to perform (1). But since he won’t perform (1) no matter what he intends to do at t1, he *practically* ought to <gamble away his money in Vegas at t1>, not because it’s the right thing to do, but because it is the t1 action that will minimize his wrongdoing over the course of his life. Nevertheless, on my view, he is still obligated to perform (1) because that’s the best thing he can do. It seems to me that this verdict better captures the spirit of effective altruism than actualism (or possibilism, for that matter). But I’m not sure. Perhaps my judgments are out of line with the EA community. It would be interesting to see what position effective altruists are inclined to take when presented with each view.

  5. Hi Kian,

    Thanks so much for your comment (and for the helpful information about posting URLs). Can you say a bit more about why Pete is terrible simpliciter and why he is not an EA? According to actualism, Pete fulfills all of his moral obligations at every point in time. How could he be terrible? I’ll stipulate that he’s committed to the project of effective altruism, and I’ll stipulate that he even believes he’s obligated to maximize utility. According to actualist forms of effective altruism and utilitarianism, Pete succeeds on both fronts. On what grounds could actualists say he’s morally terrible and not an effective altruist? Those inclined to accept possibilism or a form of hybridism (which posits a possibilist moral obligation and an actualist practical ought) can say Pete is terrible simpliciter and not an EA, but I don’t see how actualists can say that.

    With respect to Greedy Gael, just suppose Gael is prone to akrasia, like everyone. Every effective altruist (pretty much every person, for that matter) wants to make the world a better place, yet will often fail to do so given the opportunity. The best effective altruists just fail much less frequently than the typical person. I don’t think Gael’s psychology is particularly bizarre. Lots of people’s commitments to their values (and the values themselves) change over time. Everyone suffers from akrasia too. But even if Gael’s psychology is unique, it still seems reasonably clear to me that she’s obligated to do the best she can, though she practically ought to do less than the best. Practically, Gael should bring about the suboptimal outcome of making monthly donations to effective charities in order to prevent herself from performing an even worse act on her deathbed.

  6. Interesting post!

    One quibble: you take Singer and MacAskill as sincerely expressing views on normative ethics. However, there may be a conflict between telling people what you think their normative obligations are and telling people what you think you should tell them in order to get them to do the most good. Actualism may be rhetorically useful, even if it is false. I imagine that EAs would opt to promote ethical views that are false when doing so has a greater expected utility.

  7. Thanks Travis, that further info about your hybrid view is helpful! I agree it’d be interesting to learn whether more folks might lean that way upon learning of the view.

    Just speaking for myself, while I’m most interested in the practical ought (so happy that your view agrees with actualism there), I worry about one potential cost of retaining a possibilist notion of obligation, namely, it seems to imply that Pete is blameworthy for something he does *at t1*. But it seems to me that we can imagine the case in such a way as to render Pete at t1 as perfectly good-willed, hence not blameworthy (at t1).

    So, while I agree with you that we want to be able to make some kind of *overall* criticism of Pete, even in the case where he complies with his Actualist obligations, it doesn’t seem to me that positing a violated obligation at t1 is the right way to do this. The problem with Pete (at least in one way of fleshing out the case) lies entirely at t2, and so our criticism should likewise be localized to this time. The way to do this, I would suggest, is to appeal to facts other than just whether he has fulfilled his actual obligations. He is criticizable, at t2, precisely because he (now) lacks altruistic motivations, and hence precluded his altruistic-but-practically-wise t1-self from achieving better outcomes that would have depended upon his t2-cooperation.

    (Of course, in the version of the case where he lacks good motivations at either time, he can be criticizable at both times! But that is less illuminating for setting up the contrast with your hybrid view.)

    I am glad that you mentioned that, Derek! This comes up in the paper a bit, and I want to make a few comments about it.

    I agree that effective altruists may be practicing esoteric morality in certain contexts. Singer certainly does. Though, in addition to implicitly assuming actualism in some of his work, he confirmed that this is his view in a personal correspondence. Singer could have been practicing esoteric morality when I asked him what his view was, but I think that’s highly unlikely. I am inclined to take him at his word.

    As far as I can tell, MacAskill has only very recently affirmed he is an actualist, which he does in this interview.

    https://fivebooks.com/best-books/effective-altruism-will-macaskill/

    Perhaps MacAskill is practicing esoteric morality here, but I think it’s more probable that he too is inclined to accept actualism. I’m less confident about MacAskill’s position than Singer’s though.

    Second, while advocating for actualism might be rhetorically useful in certain contexts, I believe it will fail in others. Sometimes it might be seen to license bad behavior, whereas advocating for possibilism might actually motivate people to do better than they otherwise would.

    Third, even if we can’t be sure what effective altruists actually believe, since they may be practicing esoteric morality, it still seems important to figure out what they ought to believe (not that you denied this).

    Thanks for the follow-up, Richard. On some plausible accounts of blameworthiness, I think Pete is blameworthy for his actions at t1, though hybridism is also consistent with plausible views of blameworthiness that deny he’s blameworthy at t1 (he might not be blameworthy at all).

    My own view is that Pete is blameworthy for gambling away his money at t1 in virtue of culpably performing a wrong act, though he’s less blameworthy than he would be if he ends up culpably purchasing blood diamonds at t2. I also think Pete would be blameworthy if he were to refrain from gambling away his money at t1 while knowing this will result in him purchasing blood diamonds at t2. So, I’m personally committed to the existence of a kind of blame dilemma, though hybridism itself isn’t.

    Though I find this implication somewhat counterintuitive, I think the alternatives are worse. Your view, for instance, seems to suggest that Pete could be criticizable at t2 even if he’s only ever done the right thing for the right reasons. I find that conclusion even more counterintuitive. It’s true that Pete lacks altruistic motivations at t2, but I don’t see why he should be criticizable for that, especially if his lack of altruistic motivations at t2 was the product of him always fulfilling his actualist obligations in the past and doing so for the right reasons.

    Here’s another way to put the worry. I find this consequence implausible because the following principle seems to me like a plausible desideratum on accounts of blame.

    Blameless Desideratum (BD**): Agent S is blameless for x-ing if x-ing is itself subjectively permissible and will (relative to the agent’s evidence) only result in S performing subjectively permissible acts, and if S xs for all the right reasons and does not x for any of the wrong reasons.

    Hybridism is consistent with this because it holds that Pete is obligated to perform (1), which requires choosing to forgo gambling at t1. If Pete is blameworthy, hybridists can hold he’s blameworthy in virtue of either performing a wrong act or performing a subjectively permissible act that he foresees will result in subjective wrongdoing. On the other hand, actualism, coupled with what you say about blame, seems inconsistent with BD**. I have a harder time rejecting BD** than accepting the existence of a kind of blame dilemma.

    I will note that BD** comes from a paper Philip Swenson and I co-authored titled “How to be an Actualist and Blame People,” where we examine actualism and possibilism in light of desiderata about accounts of blameworthiness.

    https://philpapers.org/rec/TIMHTB

    I’m sorry to link a second paper in a second comment. I know it’s really annoying to link whole papers in replies to short questions. But I’d feel like I wouldn’t be giving Philip his due if I neglected to mention the source of BD**. I will aim to avoid linking any other papers in response to any other comments!

  10. “Pete could be criticizable at t2 even if he’s only ever done the right thing for the right reasons. I find that conclusion even more counterintuitive.”

    Interesting! I think I can accept your (BD)**. But are you assuming that agents are only criticizable for their *acts*? I would have thought that other forms of moral evaluation and criticism are also important, e.g. criticizing vicious character or motivations. A sadist who genuinely wants to torture babies (and delights at reading about others’ suffering in the news) is surely criticizable even if they’ve never had the opportunity to act on their evil desires, and (again due to severely limited opportunities) have only ever done the right thing for the right reasons.

  11. Oh, good question Richard. I don’t want to make that assumption. I agree that there are other forms of moral evaluation and criticism that are also important. I should have explicitly said that while BD** is a sufficient condition for blamelessness for performing an act, I don’t want to rule out the possibility that agents can be criticized for other things.

    I agree with you that a “sadist who genuinely wants to torture babies (and delights at reading about others’ suffering in the news)” is criticizable. I would think they’re criticizable for the combination of their current desire to do wrong and their disposition to culpably perform a wrong act given the chance. As I am imagining the case, if they knew of some way they could get their hands on some babies to torture, they’d do it. I was imagining the Partying Pete case to be slightly different. Let’s suppose Pete cares about doing the right thing for its own sake and gambles away his money in order to avoid wrongdoing. Unlike the sadist, he doesn’t want to do the wrong thing now, though he would freely do the wrong thing at t2 if he forgoes gambling at t1.

    Perhaps you want to say that Pete is criticizable in virtue of his disposition to do wrong at t2 if he forgoes gambling at t1, irrespective of whether he actually does anything wrong in his life and irrespective of whether he currently desires to do something wrong. I see the appeal of that view, though I worry that it would prove too much. I think it would entail that everybody is always blameworthy for all sorts of horrible things, as I am sure everyone would perform seriously immoral acts in all sorts of situations in which they never actually find themselves. Maybe that’s the right view, though I do find the distinction between Pete and the sadist compelling at the moment.
