Welcome to what we expect will be a very interesting and productive discussion of Matthew Rendall’s “Discounting, Climate Change and the Ecological Fallacy.” The paper is published in the most recent issue of Ethics, and is available here. Samuel Scheffler has kindly agreed to contribute a critical précis, and it appears immediately below. Please join in the discussion!
Samuel Scheffler writes:
Matthew Rendall’s paper is rich and complex. I will limit my summary to his main line of argument, ignoring many other interesting issues that he raises. The primary question he considers is how we should take into account costs and benefits to future generations when formulating social policies, especially policies related to climate change. If we approach this question from a broadly utilitarian perspective, as many economists do, and if we assume that the interests of future people matter just as much as the interests of those now living, then it seems we may be obligated to make enormous sacrifices for the sake of future generations. They are likely to vastly outnumber us and so the aggregate benefits to them of our sacrifices may well outweigh the costs to us. Even very small benefits to a sufficiently large number of our descendants may be enough to justify such sacrifices. And this is so even if, despite climate change, our descendants will be much better off than we are.
This has led many economists to conclude that, in order to avoid imposing excessive sacrifices on ourselves, we must accept some degree of “pure time preference.” That is, we must discount future costs and benefits simply because they lie in the future. But what exactly is the justification for such discounting, apart from the fact that it would enable us to avoid making sacrifices that we would rather not make? Surely the point cannot be that future people are less valuable than we are.
In a paper published in 1999, Kenneth Arrow appealed to the idea of an “agent-centered prerogative,” which I introduced in my book The Rejection of Consequentialism, as a justification for discounting. Just as it may be permissible for individuals to give more weight to their own interests than would be impersonally optimal, Arrow suggested, so too the present generation may permissibly give more weight to its own interests than would be impersonally optimal. Matthew Rendall is sympathetic to the general idea of a prerogative, and he takes it as his starting point. But he does not think that the prerogative as I described it can do what Arrow wanted it to do. For one thing, if the prerogative is construed as a permission to give some limited amount of additional weight to the interests of existing people, then it is still liable to be outweighed by benefits to a sufficiently large number of future people. So the problem of excessive sacrifice would be postponed, but it would not be avoided.
Drawing on, but also modifying, Scanlonian ideas about reasonable rejection, Rendall makes an alternative proposal. We have, he suggests, an absolute prerogative to reject sacrifices made for the benefit of people, however many of them there may be, who would in any event be much better off than we are. In other contexts, however, we have no absolute prerogative. We may give extra weight to our own interests, but that weight is limited and we may in some cases be obligated to make significant sacrifices despite the additional weighting.
As applied to climate change, it might seem that a prerogative understood along these lines would require us to make only minimal efforts to address the problem. As Rendall observes, economists tend to assume that economic growth is likely to leave our descendants richer and better off in the aggregate than we are, despite the damage done by climate change. If so, then it would appear that, according to Rendall’s prerogative, it would be permissible for us to avoid making any sacrifice whatsoever to address the future effects of climate change.
That is not, however, the lesson that Rendall draws. Instead, he maintains that this kind of argument for climate inaction involves two “ecological fallacies.” First, it ignores the fact that, as Thomas Schelling emphasized, even if our descendants are likely to be richer on average or in the aggregate than we are, it is likely that some of them will nevertheless be poorer than the richest among us. Second, it ignores the fact that, even if our descendants are likely to be wealthier and better off on average than we are, there is still some chance of a catastrophic outcome that will leave them much worse off. Even advocates of substantial discounting, such as William Nordhaus, concede that the possibility of a catastrophe cannot be altogether ruled out. And if a catastrophe does materialize, then the fact that it was antecedently improbable will do nothing to address the plight of those living in its aftermath.
Rendall concludes that, in thinking about climate change policy, we must avoid both fallacies. We should follow Schelling and “disaggregate future people into rich and poor.” But, and this is his primary focus, we should also disaggregate different possible outcomes or states of the world: those in which, in the absence of aggressive efforts to address climate change, our descendants are better off than we are, and those in which they are much worse off than we are.
What does this mean in practice? Here Rendall follows Martin Weitzman, who has argued influentially, and in opposition to Nordhaus, that the chance of a catastrophic outcome – a “bad tail” scenario – must be taken very seriously. By failing to take aggressive action to mitigate the effects of climate change, Rendall argues, “the inhabitants of the industrialized countries are taking a small chance of leaving an enormous number of people worse off than they are.” There is, he says, “no justification for discounting away these expected losses.”
Yet this does not mean that all forms of discounting are indefensible. The kind of prerogative Rendall favors, when interpreted in such a way as to avoid the second ecological fallacy, provides the basis, he thinks, for a defensible approach to discounting. He proposes a four-step procedure for assessing any proposed climate policy.
- Assess the impartial value of the policy’s costs and benefits under various growth scenarios.
- Ignore any costs and benefits that would accrue to people who will be much better off whether or not the policy is adopted.
- Weight all other costs and benefits by whatever degree of preference we think a defensible prerogative would allow.
- Multiply each scenario’s weighted value by its estimated probability and sum the products to arrive at the policy’s expected value.
Following this procedure, he argues, we would not be required to make sacrifices for the sake of future people who will in any case be much better off than we are. But we would be required to make sacrifices to avoid sufficiently catastrophic low probability risks to our descendants.
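The arithmetic in the four steps above can be given a minimal sketch in code. This is my own illustration, not anything in Rendall’s paper: the scenario probabilities, the stylized values, and the weight `prerogative_weight` are all invented, and the Step 2 screening (dropping benefits to people who will be much better off either way) is assumed to have been done before the numbers go in.

```python
# Hypothetical sketch of the four-step procedure. All numbers are invented
# for illustration; nothing here is calibrated to any real climate model.

def policy_expected_value(scenarios, prerogative_weight=2.0):
    """Each scenario is (probability, relevant_future_benefit, present_cost).
    Step 2 is assumed done upstream: benefits to the much-better-off have
    already been excluded from relevant_future_benefit.
    Step 3: present-generation costs receive extra weight.
    Step 4: probability-weight each scenario and sum."""
    total = 0.0
    for prob, future_benefit, present_cost in scenarios:
        weighted = future_benefit - prerogative_weight * present_cost  # Step 3
        total += prob * weighted                                       # Step 4
    return total

# Two growth scenarios: a likely "descendants richer" world, where Step 2 has
# already screened out almost all benefits, and a small-probability catastrophe
# in which the relevant benefit of mitigation is enormous.
scenarios = [
    (0.95, 0.0, 1.0),    # richer future: remaining relevant benefit ~0
    (0.05, 500.0, 1.0),  # catastrophic tail: huge relevant benefit
]
print(policy_expected_value(scenarios))  # positive despite the low probability
```

The point of the sketch is structural: because the catastrophic scenario is never discounted away, even a 5% chance of it can dominate the sum, which is why the procedure can demand action against low-probability disasters while excusing us from sacrifices for the merely better off.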
As Rendall recognizes, this may seem to reinstate worries about excessive sacrifice and over-demandingness. After all, almost anything we do might, however improbably, lead to catastrophe. Yet in most ordinary cases, he argues, we have no more reason to believe that our actions will lead to catastrophe than that refraining from those actions will. So, in effect, these tiny abstract risks cancel each other out. By contrast, climate change belongs to a small number of apocalyptic threats – nuclear war is another – for which we have a well-understood causal theory that gives us good reason to believe that certain courses of action (such as those involving high levels of greenhouse gas emissions) are genuinely dangerous. In these cases, aggressive action is called for. In a sobering aside, he adds that the class of similar threats is likely to grow as technology continues to develop, so we may at some point find ourselves living in a “nightmare world” where apocalyptic risks are everywhere. For now, the lesson of his article is that, even if we accept a reasonable agent-centered prerogative and even if we adopt a defensible policy of discounting, we are nevertheless obligated to take aggressive action to combat climate change in order to avert “any realistic danger of catastrophic damage to the planet.”
I fully endorse Rendall’s conclusion that we need to take aggressive action to address climate change. And I agree with him that neither a reasonable agent-centered prerogative nor any defensible form of discounting provides a sound basis for thinking otherwise. I do, however, have a few questions about his position. These are mostly requests for elaboration or clarification of aspects of his view. I will raise three questions, before concluding with a final observation.
1. Consider three cases. In all three cases, we can make some sacrifice. In the first case, our sacrifice would benefit only people who will be better off than we are whether or not we make the sacrifice. In the second case, our sacrifice would benefit only people who will be worse off than we are whether or not we make the sacrifice. In the third case, our sacrifice would benefit people who will be worse off than we are if we fail to make the sacrifice, but better off than we are if we do make the sacrifice. The absolute component of Rendall’s prerogative would allow us to decline to make the sacrifice in the first case. (Here I set aside a complication discussed in question 2 below.) In the second case, his prerogative might allow us to give some limited additional weight to our own interests, but no more than that, and so we might be required to make the sacrifice. But what about the third case? Most of what Rendall says suggests that he would treat it like the second case. For example, he says several times that the absolute component of his prerogative applies only when, as in the first case, our sacrifice would benefit those who would in any event be better off than we are. In the third case, it is not true that the beneficiaries will be better off than we are if we decline to make the sacrifice. Yet the second and third cases raise different issues, since in the third case but not the second, the question is whether we are obligated to take steps that will leave us worse off than the people we are benefiting. And in fact, at least one of the examples Rendall gives – the one he calls Mistake – appears to have a structure like that of the third case, yet he treats it the way he treats the first case rather than the second. 
In Mistake, Bill must decide whether to tell the NHS that it has, as the result of a computer glitch, mistakenly decided to cure his rare disease, thus sparing him a year of severe pain, rather than treating the day-long migraines of ten million people. Here, it seems, Bill will be worse off than the ten million if he alerts the NHS to its mistake, but better off than the ten million if he does not. Rendall thinks Bill may decline to alert the NHS to its mistake, and he suggests at the beginning of Section IV that this case falls within the scope of the absolute component of a reasonable prerogative, despite the fact that the ten million will not in any event be better off than Bill. On the other hand, another of his examples – Big Mistake – also has the same structure, but in this case the ten million will die if Bill does not report the mistake. Here Rendall thinks Bill must make the sacrifice, thus treating this example like the second case above rather than the first. The upshot is that it is not clear to me how exactly Rendall wants to handle cases of the third kind, and it would be helpful to hear his thoughts about this question, since it may well be relevant in the context of climate policy.
2. Rendall thinks we have an absolute prerogative to reject sacrifices that would benefit only those who would in any event be much better off than we are. But what about cases where the beneficiaries would in any event be better off but not much better off? Occasionally, Rendall’s language suggests that the absolute component of his prerogative might apply in these cases too. If so, we would have an absolute entitlement to reject a very modest sacrifice even if it would provide enormous benefits to people who would otherwise be only slightly better off than we are. We might, to take a fanciful example, be permitted to reject a small tax increase that would lead to the development of a permanent cure for cancer in a hundred years, provided that, in the absence of such a cure, our successors would enjoy a slightly higher standard of living than we do. This kind of prerogative may strike some people as too generous. Yet if the absolute component of the prerogative is limited to cases where the beneficiaries of our sacrifices would in any event be much better off, then we need a clear and clearly-motivated distinction between “better off” and “much better off.” One alternative to Rendall’s proposal of an absolute prerogative with a limited scope would be a scalar view according to which the strength of our reasons to make sacrifices normally diminishes as the level of well-being attained by the potential beneficiaries, even in the absence of our sacrifice, increases in comparison to our own. On such a view, there is no sharp dividing line between cases in which the beneficiaries will, without our sacrifice, be slightly worse off than we are and cases in which they will be slightly better off. Nor is there a sharp dividing line between cases in which they will be better off and cases in which they will be much better off. 
Yet the possibility that we might be obligated to make sacrifices in cases like the cancer case just described, where the potential gain to the beneficiaries is very great, would not be ruled out. I wonder whether a scalar view of this kind might serve Rendall’s purposes as well or better than an absolute prerogative with a limited scope.
3. As Rendall describes his four-step procedure for evaluating policies, there seems no limit in principle to the size of the sacrifice we might be obligated to make in order to avert a small risk of a sufficiently catastrophic outcome. In practice, he suggests that the costs that rich countries would have to bear to stabilize greenhouse gases would be relatively modest. But what if that were not so? Some people may think that there is an upper bound to the kind of sacrifice that we can be required to make. Alternatively, some may think that, rather than applying a uniform weight in Step 3 of the procedure, a defensible prerogative would assert that the degree of extra weight we may assign to our own interests varies depending on the severity of the sacrifice under consideration. With modest sacrifices, we may be allowed only modest extra weight. With extreme sacrifices, we may be allowed greater extra weight. I am uncertain whether Rendall would want to resist these ideas.
4. So far, I have provisionally accepted Rendall’s normative framework and raised questions about some of its features. But let me conclude by saying that, when thinking about climate change policy in particular and future generations more generally, I myself would move much further away from a utilitarian approach than Rendall does. Rather than beginning from a broadly utilitarian, optimizing framework and then modifying it by steps so as to avoid implausible implications, I think we need to reconsider the fundamental normative ideas that should govern our thinking about future generations. In my recent book Why Worry about Future Generations? (Oxford, 2018), I argue that we have a variety of compelling reasons for wanting the chain of human generations to extend into the indefinite future under conditions conducive to human flourishing, and for taking steps to ensure that that happens. Even if this is true, of course, it does not eliminate the need to consider what costs we should be willing to bear in order to avert the potentially catastrophic effects of climate change and other similar threats. But it provides a different regulative standard to guide our thinking about these questions, and it is a standard that has roots in some of our deepest values. It seems clear to me that this standard supports Rendall’s conclusion about the need for aggressive action to address climate change. However, although this is not the place to argue the point, I believe this standard is more faithful to our actual concerns about the fate of our descendants than is the utilitarian optimizing framework, even as modified by Rendall’s innovative proposals.
Many thanks to Sam Scheffler for his comments. I have found his argument for an agent-centered prerogative as helpful for thinking about discounting as did Kenneth Arrow, though it led me to different conclusions. Nor could I have written this paper at all without volumes 1 and 2 of Derek Parfit’s *On What Matters*, which Sam edited. Let me try to respond to the three questions he raises.
Point 1: Scheffler notes an important flaw in my formula. I maintain that in deciding whether we have a duty to make a sacrifice on behalf of others, we can defensibly ignore costs that would go to people who will be much better off whether or not the policy is adopted—they are, in T. M. Scanlon’s language, just not relevant. I formulated this in the second step of my decision procedure as “When costs and benefits would go to people who will in any event be much better off, disregard them—no matter how many stand to gain—unless they are cost-free to provide” (p. 461).
I claimed that this gives the intuitively right answer in cases like *Mistake*, in which Bill can either (1) notify the NHS of a clerical error, allowing it to relieve ten million headaches at the cost of a year of severe pain for himself; or (2) keep mum. But as Scheffler points out, that is not the case. Assume that treating Bill is quick and painless (just exorbitantly expensive: the patent is held by a cutthroat Big Pharma firm). If Bill chooses (2), the others will not be better off than he is—they will all suffer headaches. My formula wrongly implies that Bill may not ignore this cost.
In determining what we owe to others, I follow Scanlon in holding that the principles must be ones that no one could reasonably reject. This means that Bill should compare the end states of the *losers* in each outcome. I should have formulated the second step as “When costs and benefits would go to people who will in any event be much better off than one would be if one made the sacrifice, disregard them—no matter how many stand to gain—unless they are cost-free to provide.”
The reason Bill could defensibly ignore the headaches in *Mistake* is that if Bill chooses (2), the losers would still be much better off than Bill would be if he chooses (1). Conversely, Bill should speak up in *Big Mistake*, in which Bill can either (3) notify the NHS of a clerical error, allowing it to save ten million lives at the cost of a year of severe pain for himself; or (4) keep mum. If Bill chooses (3), he will be the loser, suffering a year of severe pain; if he chooses (4), the losers will be the ten million, who will die. These victims will *not* be better off than Bill will be if he chooses (3), and he should not ignore this cost.
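The revised second step can be captured as a toy comparison of the losers’ end states across the agent’s options. This is my own formalization for illustration only: the well-being numbers and the numerical margin standing in for “much better off” are invented, not drawn from the paper.

```python
# Toy formalization of the revised second step (my own sketch, not Rendall's).
# Well-being levels are arbitrary ordinal numbers; the margin is a stand-in
# for the vague notion of being "much better off".

MUCH_BETTER_MARGIN = 30  # hypothetical threshold

def may_ignore_cost(losers_level_if_refused, agent_level_if_sacrificed):
    """The agent may disregard the others' claim only if, should she refuse,
    the losers would still be much better off than she would be had she
    made the sacrifice."""
    return losers_level_if_refused >= agent_level_if_sacrificed + MUCH_BETTER_MARGIN

# Mistake: refusing leaves the headache sufferers at, say, 70; speaking up
# leaves Bill at 20 (a year of severe pain).
print(may_ignore_cost(70, 20))   # Bill may keep mum

# Big Mistake: refusing leaves the ten million dead (say, 0); Bill is still
# at 20 if he speaks up.
print(may_ignore_cost(0, 20))    # Bill must speak up
```

Note how the comparison runs across options rather than within one: the losers’ position if the agent refuses is measured against the agent’s position if she complies, which is exactly the repair the revised formulation makes.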
Some may object that the losers in (4) will be dead, and that it makes no sense to speak of the dead being better or worse off than the living. Even if that’s true, it remains the case that the losers will not be better off than Bill. Moreover, if we take a whole-lives view of well-being, we can say that their *lives* will be worse in (4) than Bill’s in (3), or at any rate not much better.
My response here may be vulnerable to a worry raised by Larry Temkin (*Rethinking the Good*, pp. 71-72). Some of those who will suffer headaches if Bill chooses (2) will already have horrible lives for other reasons. Some will not be better off than Bill even if he chooses (1). Suppose that of the ten million headache sufferers, 100,000 fall into this category. Why isn’t the headache relief to *them* relevant, and when added together, won’t it outweigh Bill’s year of pain? I didn’t think of this when I wrote the paper, and now it troubles me.
Nevertheless, I believe my rule consequentialist argument for an agent-centered prerogative retains some force to resist this conclusion. Psychologically, it would be extraordinarily burdensome for us all to be prepared to accept a year of pain just to save strangers from one-day headaches, even if those strangers were very badly off and very numerous. It is hard to believe that we could acknowledge such a duty and still regard our own lives in a healthy way.
Alternatively, perhaps we should bite the bullet and accept that Bill *should* accept a year of pain for the sake of the 100,000 wretched others. But then the claim that we should sacrifice half of next year’s income for the sake of a small but permanent rise in future income in William Nordhaus’s “wrinkle experiment,” the case with which the paper starts, may seem less absurd. Some of the vast number of future people who would benefit will also have lives not much better, and in some cases worse, than our own, despite their greater wealth. Perhaps if there will be enough of them, we ought to make the sacrifice. That is hard to believe, but not impossible.
Point 2: In my view, an agent can defensibly assert an absolute prerogative to refuse assistance when the other parties will still be much better off than the agent would be if she offered her help. Unless this condition is met, she should at most give weighted preference to her own interests. This would clearly require her to lend her assistance to developing the cure for cancer in Scheffler’s example.
I do not see how there can be a sharp dividing line between “better off” and “much better off.” These concepts are inherently vague. A scalar view, however, would amount to accepting only weighted prerogatives, which would still require one to sacrifice everything to benefit much better off people, if the latter were numerous enough. It seems better to acknowledge that there will be some cases where the beneficiaries will be determinately “much better off,” others in which they are determinately only “better off,” and some in which they are neither clearly one nor the other.
Nordhaus’s wrinkle scenario is troubling because it appears to require a great sacrifice for people who appear obviously to be much better off. So long as we can justify an absolute prerogative in such clear-cut cases, we need not be overly worried by the ambiguous ones. Our decisions about the latter may seem arbitrary, but our intuitions about them will not be strong.
Point 3: While I agree that we can defensibly give our own interests disproportionate weight, I do not see why this weight should increase in proportion to the demands we face. However, the priority view—which holds that benefits matter more when they go to the worse off—will justify assigning increasing weight to our own welfare as the burden on us grows. Even so, the mere size of the burden would never render that weighting absolute. That’s as it should be. As Elizabeth Ashford remarks, “Any plausible moral theory must hold that there are some situations in which agents face extreme moral demands—for example, a situation in which the only way of stopping billions of people suffering an agonizing death was by hacking off your left leg with a fairly blunt machete” (“The Demandingness of Scanlon’s Contractualism,” p. 274).
Thanks Matthew (and Sam) for this exciting discussion! Of course, we agree about many of the substantive points; in particular, both of us believe climate change is not being taken nearly as seriously as it should be. However, I am still concerned about the fundamental claim of the paper:
“We can defensibly ignore costs and benefits that would require sacrifices on behalf of people who would be in any case much better off. That is not because these benefits would have no value. Rather, it is because we could not reasonably be expected to make the sacrifice needed to produce them.”
Calculating expected value in this way would generate some potentially absurd conclusions. For instance, the US national highway system, initiated under Eisenhower in 1956, cost something on the order of half a trillion dollars (in 2006 dollars, https://usatoday30.usatoday.com/news/opinion/columnist/neuharth/2006-06-22-interstates_x.htm). This tax revenue was raised from citizens as a whole, and by any use of the term, half a trillion dollars qualifies as a sacrifice on their part. For simplicity, let us suppose that the federal government was fairly accurate about how the economy would develop: for them there was only one overwhelmingly likely scenario, the one that actually occurred, in which the US became much wealthier overall after 1956 and the vast majority of the population became much better off in material terms in the subsequent years.
On your view, the only gains that Eisenhower should have considered were the gains to those who would not be much better off than the people of 1956. That would be a small subset of the population. It would not have been justified on those terms to build the highway network, which was fundamental to all kinds of material and knowledge flows. This to me seems absurd. Even though many of the gains that accrued from building the highway network ended up going to people who lived (much) better than those in 1956, those were still gains he should have counted (and did).
This is a general point which applies beyond the US to almost any large-scale infrastructure project. *Very* few infrastructure, or educational, or environmental investments would be justified under your picture.
You might say that, since your principle is Scanlonian, it holds only that those in 1956 could have reasonably rejected the highway system. But that still seems absurd; again, adopting your principle would imply that almost all investments would be reasonably rejectable. I don’t think the highway system (or the railways in Germany or the airports of Singapore or…) is reasonably rejectable in light of many of the gains it generated being reaped by those much wealthier than those who paid for it.
Thanks, Kian. Note first that many environmental investments *would* be justified on my reasoning. In many cases, environmental conservation can create a very long stream of future benefits. In some possible futures, our descendants will not be (much) better off than we are. We ought not to discount away the costs and benefits in these scenarios. That could be true not only of environmental preservation, but also of research, and even some public works projects. In 1956, it would have been reasonable to assign a significant probability to future Americans not being richer than the present generation, in light of the risk of nuclear war. An interstate highway system might not have been of much use to the survivors of a nuclear holocaust, but research in medicine or agronomy would.
Moreover, it may be that projects like the interstates generally do pay off for the generation that makes them, or at least its children and grandchildren. In ‘Making Our Children Pay for Mitigation’, Aaron Maltais argues that most benefits to future generations are a byproduct of investments made for the sake of the present. If it seems unreasonable to reject projects like the highway system, that may be because they don’t actually impose a sacrifice on the present generation.
That said, I would not want to push these arguments too far. Let’s assume that (a) an investment would be very productive over the long term, but (b) would be a net loss for the present generation, and (c) it was overwhelmingly likely that future generations would be much richer. That may well have been the situation of the people who founded a state university in my home town of Eugene, Oregon, in 1876. If so, were they morally obliged to make this investment?
Note that the question is not whether it was *wrong* for them to do so–it is always permissible, as Scheffler argues in *The Rejection of Consequentialism*, to bring about the impartially best outcome. Rather, it is whether it was obligatory. It’s not clear to me that it was. In fact, the fact that I admire the generosity of these 19th-century Oregonians, most of whom surely never attended a university themselves, suggests I think it was supererogatory, rather than a mere duty.
In contrast, I’m inclined to agree that in the 1950s there were some investments it would have been wrong to refuse, even if they could not be expected to pay off for the present generation, and even if the latter had somehow known that future generations would be richer. What’s the difference?
Remember that my criterion is not whether future generations will be much *wealthier*, but whether they will be much *better off*. By the 1950s, most Americans were no longer the victims of absolute poverty, and many were reaching the point where additional consumption does little to increase welfare–or so I think the literature in ‘happiness economics’ would suggest. If they had a duty to make such investments, it may be because they had reason to think that their beneficiaries would not be so much better off as to render the benefits morally irrelevant.
It looks like we’re coming to the end of the discussion. In closing, I’d like to make one amendment to my preceding comments: I wrote ‘it is always permissible, as Scheffler argues in *The Rejection of Consequentialism*, to bring about the impartially best outcome’. I should have qualified that with the proviso ‘if it does not violate the best set of rules’. Like Parfit, I believe that there can be cases–such as when a doctor can save several lives by carving up a single patient–when a plausible rule consequentialism will forbid individually optimific actions.
Thanks to Chike Jeffers for organizing this discussion, to Kian for his comments, and especially to Sam Scheffler, who put his finger on a significant flaw in my argument. Even when one’s argumentative edifice looks solid, it’s good to have a surveyor!
I’m sorry to be chipping in to this interesting conversation rather late. Matthew’s paper is, as Sam Scheffler notes, a rich one, and I hope it is widely read not only by philosophers but by economists working on climate policy (the proposals at the end of the paper are of course plausible independently of rule consequentialism, which remains an unpopular position in philosophy). I wish Derek Parfit were still around so we could ask him what he thought of Matthew’s rule consequentialist proposal to combine Scanlonian and Kantian contractualism. Towards the end of his life, as the final volume of *On What Matters* shows, he was taking quite seriously the kind of intuitions about doing and allowing, and about demandingness, to which Matthew appeals at crucial points.
Let me make a couple of brief remarks. The first is about the ‘black hole’ problem for sufficientarian accounts of justice (p. 446). One possible way out of that problem in at least certain cases might be to ignore trivial harms and benefits. That might seem rather ad hoc, but it needn’t be if the view in question is not merely a quick fix, but stands up to independent reflection. Why shouldn’t justice concern itself only with what is significant rather than insignificant? Further, sufficientarianism can be seen as one of several principles governing distribution, and one of those other principles might be something like a principle of impartial beneficence. This would provide another way, in certain cases (including the one Matthew mentions), to permit the sacrifice of the interests of those below the threshold for the sake of benefits to those above.
I’d also like to raise the question of how unreasonable it is to accept some absolute limit to the degree of sacrifice that can be demanded from an individual by morality or rationality. Unlike Matthew, I don’t think it is clear that Bill should speak up in *Big Mistake*, though I am myself inclined to the view that he should. In his response above to Sam’s point 3, Matthew cites Elizabeth Ashford in support of the claim that there is no limit: ‘Any plausible moral theory must hold that there are some situations in which agents face extreme moral demands—for example, a situation in which the only way of stopping billions of people suffering an agonizing death was by hacking off your left leg with a fairly blunt machete’. But this is consistent with there nevertheless being a limit at some more extreme point. It might be, for example, that it would always be reasonable, and hence perhaps permitted by morality (though I myself would prefer to avoid this unnecessary terminology), to refuse to undergo, say, fifty years of the most agonizing torture imaginable, whatever the cost to others. (If fifty years still seems insufficiently extreme, just add some more …)