Welcome to our discussion of Christian Barry and Garrett Cullity’s “Offsetting and Risk Imposition,” published in the most recent issue of Ethics. You can find the paper here. Elizabeth Cripps’s critical précis is immediately below. Please join the discussion!
***
Suppose you fly to the Bahamas on holiday. You know that your hefty carbon footprint raises the risk that people will die from extreme weather, disease, or starvation, because of climate change.* So you buy carbon offsets: enough to reduce greenhouse gas emissions by the same amount as your flights increase them. Are you off the hook, morally speaking?
*Or, for the sake of argument, let’s accept the threshold-based arguments made by John Broome and others, and assume that you know it.
This raises important philosophical questions, including the following. It is often wrong to impose a risk of harm on another person or population. Can we make our actions OK (what Barry and Cullity call ‘risk-imposition justifiable’) by doing something else that reduces the risk to that same person or population by at least the same amount?
Here’s Barry and Cullity’s example. I live near a river. Boating is popular but boats put toxins into the river, so people living downstream are dying of cancer. It’s a beautiful day and I fancy a boat trip. If I go, the extra toxins may well cause no deaths. But they might cause one, or two… or ten. The way the probabilities add up, the expected harm of my trip equals one death.
I want to go, so I install a toxin-extractor to remove at least the same amount of toxins from the water as my boat puts in. I set off, with my sunglasses and picnic, and I tell myself I’m morally in the clear because I’m not making the risk any worse than it would have been if I’d done nothing at all. Am I right?
On the face of it, yes. I pose no excessive risk compared to what Barry and Cullity call the Standard Baseline (risk levels if I had neither boated nor installed the extractor). But things turn out to be a lot more complicated than that.
I might have been paid to install the extractor by the people downstream or some philanthropic third party. Then, I’d be fulfilling an independent obligation by installing it. Using that to justify my boat trip as well is a kind of double counting. So we also need a Moralized Baseline. This sets aside any risk reduction independently required of me, before calculating the overall impact of my action on risk levels.
Is that enough? Not according to Barry and Cullity. What if I spot my neighbour about to launch his boat? Rather than buy an extractor, I pay him not to go, and take out my boat instead. The risk is no more than if I’d done nothing, and I wasn’t fulfilling any independent obligation. But, say Barry and Cullity, it’s no longer my neighbour who has raised the expectation of harm (as it otherwise would have been). The increased risk is now attributable to me. I’m no more in the clear than if I’d paid a local teenager not to go joyriding and then gone joyriding myself. Even if I’d paid off someone who would have been justified in imposing the risk themselves, their justification wouldn’t transfer to me.
Accordingly, Barry and Cullity add an Attributional Baseline. This sets aside any risk reduction I do by preventing others from imposing risks, before calculating whether I’ve reduced risk overall.
But even this isn’t enough. What if, Barry and Cullity ask, I do both of those things? I pay my neighbour not to boat and I’m paid by the downstream community to install an extractor. Then I take my boat out. The risk is lower than it would be if I hadn’t stopped my neighbour from boating. It’s lower than it would have been if I hadn’t fulfilled my independent obligation. But I still can’t justify that expected cancer death.
Thus, we have a fourth and final Combined Baseline. If I prevent someone else from imposing risk and fulfil an independent obligation, both these risk reductions are set aside before calculating the net risk-consequences of my actions. On Barry and Cullity’s Principle of Aggregate Risk Imposition (PARI), my actions must be measured against all four baselines.
Now return to your holiday in the sun. Do the offsets justify the climate risk of the flight? Sometimes. But not all offsets are equal. On Barry and Cullity’s reasoning, sequestering offsets, which remove greenhouse gases from the atmosphere, can. However, forestalling offsets, which prevent others from emitting them, fail on the attribution front. Planting trees, yes; buying solar panels for families in poverty, no.
This is an incredibly rich paper, so detailed and careful that it is hard to find flaws. Accordingly, my thoughts are exploratory (and perhaps rather underhand, since they focus as much on the ultimate utility of PARI as on the principle itself). They revolve around one general observation: a lot of work is done by the notion of attributability. Perhaps too much.
Point one. I might prevent more risk than I impose. Suppose my neighbour’s boat is inefficient and mine is super-efficient, so there is an expected 0.5 deaths associated with my trip but an expected 4 deaths associated with his. On Barry and Cullity’s argument, even if I pay him not to go, I am not risk-imposition justified in taking my boat out. Suppose further (perhaps implausibly, in the climate change case) that I have no separate moral duty to persuade or help him not to take the trip. We could end up in the odd situation where I would be wrong to make the situation much better (by paying and boating) but have no moral duty to make it better still (by paying and not boating). I end up not doing something that would leave the downstream population less at risk, just so that I am not the one imposing any risk.
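To put the oddity in numbers (just tabulating the figures above, and assuming that if I do nothing my neighbour goes out as planned), the expected downstream deaths are:

\[
\text{do nothing: } 4, \qquad \text{pay him and boat: } 0.5, \qquad \text{pay him and not boat: } 0.
\]

On Barry and Cullity’s view the middle option is ruled out, yet nothing obliges me to take the third, so I may permissibly settle for the worst of the three.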
It is strongly intuitive to assign heavy weight to attributability in cases of direct individual harm. If I feel like shooting someone, find a would-be murderer and pay him to let me kill his victim instead, I’m clearly wrong. It’s not just about whether the victim is worse off than he would have been. I am a moral agent: it matters that I shot him. However, there is a difference between directly causing harm and running the risk of causing harm (perhaps especially where no individual harm could be attributed to you, but the risk is of increasing overall harm across a population).
We see this in the demandingness conditions assigned by common sense morality: very stringent when it comes to not killing the person in front of you; less so in cases of very small risks, even of death. I can’t permissibly crash my car into you even to save my own or someone else’s life, but I can drive around, for mere convenience, knowing there is a small risk that I will kill dozens of people.
As we move away from individual harm to the imposition of population risk, the balance between overall benefit and individual responsibility considerations could shift. It might matter more that something not-as-bad happens (or something bad is less likely to happen), and less that I not be associated with bringing that not-so-bad (but still quite bad) thing about.
(An interesting side question is whether attributability considerations can be subsumed under the Moralized Baseline. If I have an independent duty to pay my neighbour not to boat – he is very poor, the downstream population is very vulnerable, and I am rich – I can’t use that payment to justify imposing a lower risk myself. But if there is no such independent obligation, it’s less clear that I can’t.)
Point two. We have different motives for risk imposition. There is considerable middle ground between the carbon emissions we need to survive, or are committed to by social infrastructure, and what we do for fun.
Suppose my boat trip funds specialist reading help for my child, retraining for my husband’s dream job, or my favourite climate justice charity. The toxin extractor would cost as much as the trip would make, but a much smaller sum would convince my neighbour not to take his boat out. Am I wrong to pay him and go instead? More specifically, could I be justified by reasons that would not be enough to justify my boat ride if I didn’t ‘offset’ the risk in this way? With less moral weight attached to attributability, I could be.
Of course, this needn’t undermine PARI. Offsetting is a moral minefield. There are many other individual duties in the climate case, and moral reasons to cut our carbon footprints independent of individual risk imposition. Barry and Cullity know all this. Whatever PARI decrees, further considerations may justify my actions all things considered (or unjustify them). Barry and Cullity know this too. But they think there’s no such justification for forestalling rather than sequestering, even if it is more efficient. This is, perhaps, called into question by what I have just said.
Christian and Garrett, thanks for this excellent and really interesting paper. Thanks also to Dr Cripps for the very helpful précis. My thoughts here overlap with hers.
I wanted to build on some of our earlier correspondence about this stuff to make a case for the following principle: one way to adequately justify a risk imposition (other things being equal) is by pointing out that it is needed to compensate you for some (otherwise) supererogatory sacrifice that you made earlier, or will make later, such that the risk imposition and sacrifice are (somewhat better than) net-neutral in terms of the risk that they impose on each potential victim.
I put “somewhat better than” in brackets because I’m undecided about whether it’s needed.
This principle correctly distinguishes Business Opportunity from Boating and Extraction; it might be (something like) the principle that Broome had in mind in his argument for emission offsetting; and, unlike your Principle of Aggregate Risk Imposition, it supports the permissibility of “offsetting by forestalling”.
Now, this principle doesn’t capture your intuitions about cases like Pre-emptive Joyride, at least if we assume that my joyriding would be somewhat less risky than yours. But it isn’t obvious to me that your intuitions about those cases are correct. Consider the two cases below.
CASE 1: Amy is going to inflict BIG pain on Bobby at 9am. I could pay Amy not to, but it’d cost me X, where X is enough to make this supererogatory. However, Carly really wants to see me inflict SMALL pain on Bobby at 10am, and she is willing to pay me X for the pleasure.
CASE 2: Amy is going to inflict BIG pain on Bobby at 9am. I could pay Amy not to, but it’d cost me X, where X is enough to make this supererogatory. However, Carly really wants to see me inflict MODERATE pain on Bobby at 10am, and she is willing to pay me X for the pleasure.
It seems to me that, if the difference between BIG and SMALL is large enough, then it is not only permissible but also morally required for me to act in CASE 1. But if it’s morally required for me to act in CASE 1, it seems to me at least plausible that it’s permissible for me to act in CASE 2, and hence in Pre-emptive Joyride.
You might disagree with my intuition about CASE 1. So, let me try an argument for that intuition. Suppose that in
CASE A: Amy is going to inflict BIG pain on Bobby at 10am. I could pay Amy not to, but it’d cost me X, and I don’t have X. However, Carly really wants to see me inflict SMALL pain on Bobby at 9am, and she is willing to pay me X for the pleasure.
It seems to me that, if the difference between BIG and SMALL is large enough, then it is not only permissible but also morally required for me to inflict SMALL pain on Bobby at 9am so that I can prevent him suffering BIG pain at 10am.
Suppose next that in
CASE B: Amy is going to inflict BIG pain on Bobby at 10am. I could pay Amy not to, but it’d cost me X, where X is enough to make this supererogatory. Carly really wants to see me inflict SMALL pain on Bobby at 9am, and she is willing to pay me X for the pleasure.
In this case, I could just pay Amy X, but doing so is supererogatory. However, there is a way for me to costlessly ensure that Bobby suffers only SMALL pain rather than BIG. If it was morally required for me to do that in CASE A, it seems to me it must also be morally required for me to do it (or perform the supererogatory act) in this case.
If it’s morally required for me to act in CASE B, it seems to me that it must also be morally required for me to act in CASE 1 (and hence it seems to me at least plausible that it’s permissible for me to act in CASE 2 and Pre-emptive Joyride).
What do you think?
Many thanks to Liz Cripps, who has done a terrific job of distilling the main points of our paper, and raising some excellent questions about it. Here are some quick initial responses to her two points.
Liz’s first point relates to what in the paper we call the risk of attributable harm. If I go for a high-speed joyride, the risk I impose is a risk that *I will harm* someone. It can reasonably be doubted whether that is the right thing to say about GHG emissions or the boating cases we use throughout the paper. It’s not just that in causally complicated cases like these it will never be *known* who has been harmed by me: rather, given that any harm in which I am causally implicated will be the product of very many other no less significant factors, my contribution will not be sufficient to make it true that *I harmed* them.
We can see the attraction of saying this. In the paper we don’t assert it, since we are not sure how to give a clear account of the distinction between attributable and non-attributable harm. Still, you might wonder whether this matters in cases with the kind of preemptive structure we discuss in the paper. If I take a Preemptive Joyride, for example, I am morally accountable for the risk (of attributable harm) I impose in place of the other would-be joyrider; but if I engage in Boating and Buying Off, or emitting and buying forestalling offsets, then perhaps I shouldn’t be thought of as engaging in some sort of preemptive wrongdoing, given that non-attributable harms are at stake. Liz sensibly wonders about this, and it is also something we puzzled over while writing the paper.
We can’t see how to make this convincing, though. Suppose that in the case of Boating and Buying Off, no other boaters go out today: only one boat trip happens. Ned would have gone boating, but I pay him not to, and go boating myself. Then the downstreamers’ prospect is worse at the end of today than it was yesterday. One more boat-trip’s worth of toxin has gone into the river; and if yesterday the river’s overall level of toxicity gave them the expectation of n cancer deaths, by the end of today that expectation has risen to n+1. The risk they faced has increased; and the person who increased it is me, since I am the one who went boating. So the imposition of this *risk* is attributable to me, even if any harm that actually eventuates is not. And though the risk I impose in this case may not be one of attributable harm, it is a risk that matters morally. The downstreamers can ask for a justification for the worsening of the prospect they face, and I am the person who must provide it.
On Point Two, we’re inclined to agree with Liz. Our claim is that sequestering can be a way of removing any risk-imposition that calls for justification, but forestalling cannot (it is just a way of imposing risk in someone else’s stead). She considers cases where I can offer a better justification for boating than just the fact that I will enjoy it, and then asks whether forestalling someone else’s risk-imposition could help to lower the bar for justifying my own. It seems plausible that it could. (Compare: if you harm one person to help another, it can be relevant to justifying your action that the first person would have been harmed anyway.) But our claim in the paper is not that preemptive risk-imposition can never be justified. It is that it does need to be justified. The fact that when you preemptively impose risk your risk-imposition is accompanied by an attempt to ensure that the prospect of those you expose to risk is not worsened may lower the justificatory bar. But our main point remains: there is still a risk you impose, and you do need to justify it.
I agree with Elizabeth Cripps that Christian Barry and Garrett Cullity have written an “incredibly rich paper, so detailed and careful that it is hard to find flaws” and that the critical thoughts I am able to provide here are largely exploratory.
I share Elizabeth’s and Joe’s hunch that the case of “risk reduction by preemption” might be more complex than Christian and Garrett make it out to be (not necessarily in a way that contradicts PARI, but in a way that limits its practical significance, maybe). At least in cases where my preempting someone else’s risk imposition is supererogatory and I have good reason to impose whatever risks I impose (though not a sufficiently morally weighty reason to make the imposition permissible in the absence of some kind of risk reduction), it seems that the supererogatory preemptive action could morally offset (or at least act as a justification for) the risk that I impose. Maybe this holds true particularly in cases where any offsetting that I might achieve through means other than preventing others from imposing risk is very costly to me (boat filtering technology is very expensive, and I am poor), whereas convincing others not to impose risks they would not have offset is much more feasible (I am great at motivating others).
Re Elizabeth’s (and Gustaf’s) point about overall risk reduction, I was reminded of Frances Kamm’s Principle of Secondary Permissibility, which states that an action which would otherwise be impermissible can be rendered permissible by the fact that it makes all victims better off. For example, suppose that it is permissible for me to kill you as a side effect of saving five others. Instead of killing you, I could inflict on you some serious harm that falls short of the harm of death to save the five others, but I would have to harmfully use you as a means to save the five. It would not be permissible to save the five by using you harmfully as a means in this way. But because the harm to you is less serious than the harm of death, and because I would be permitted to kill you in a “side effect killing”, I may (this is the secondary permissibility) in this instance use you as a means instead of killing you. With risk imposition, the idea is that if preemption makes all those affected on expectation better off than “proper risk reduction”, this could make one’s own risk imposition plus preempting someone else’s permissible. (As Elizabeth points out, and as Christian and Garrett argue in the paper, these thoughts are consistent with PARI.)
A further (but somewhat related) issue that came to mind while reading the paper: it is currently quite trendy to suggest that the “greenest” (most climate friendly) thing you can do is not to have any children. I was wondering what follows for this idea if we take Christian and Garrett’s ideas seriously. Is it simply: well, you still have to offset your own emissions (by sequestering) to make your actions risk imposition permissible (in the absence of some other morally valid justification), but apart from this, yes, e.g. in terms of expected welfare effects, not having children might be a good idea?
Finally (and possibly most interesting to me), I have some doubts about the (rather implicit, hence maybe I’ve misunderstood the authors here) assumption made in the paper that all risk impositions need something like a justification. I am not sure this is true, especially in cases where (i) there is intervening impermissible agency (harm will eventually occur only if other agents act in ways they should not act); or (ii) the risks that I impose through my actions are minuscule. (Note that climate change related harms would seem to combine both (i) and (ii), but either (i) or (ii) taken by itself seems enough, to me, to raise doubts about a presumption in favour of the idea that the relevant risk impositions require justification.)

Take (i) and the example (which I have from Kim Ferzan) of the “provokable neighbour.” Suppose my neighbour is prone to anger. I have reason to suspect that he yells at his wife and children. Maybe he even beats them. I also have reason to suspect that it makes him angry if I leave my house before 8 am (I’ve overheard him complain about it, say). Does this mean that I am not permitted to leave my house before 8 in the morning unless I have a morally weighty reason for doing so (or make an extra effort to appease him in some other way)? My intuitions are unclear; I feel some pull in both directions. To argue against Christian and Garrett’s assumption: isn’t it crazy to think that my neighbour’s temper (which he should learn to control) could have such a huge influence on my life? Of course, if I hear him yell, I may be required to call the police. But do I really have to speculate about the actions I should avoid because they would seem to make it more likely that he might later lash out?

Re (ii), an example of mine: the available evidence suggests that stress is one of the main causes of heart attacks. Suppose I regularly rush through Central London, brushing against tourists, every now and then giving one of them a fright. It seems clear that each time I give a tourist a fright, their adrenaline level goes up, and this raises their probability of suffering a heart attack later in life by a tiny bit. If I behave in this way over many years, I might (on expectation) “destroy” six months of worthwhile life. But does my rushing through London really require justification? Isn’t it something like an acceptable and inevitable fact of living together that we constantly impose such minuscule risks of harm on each other? (Consider: maybe the filter that I have to put into my boat to filter out toxins also raises the risk of harm to some identifiable group? And even apart from the toxins, boating raises the risk of harm to other boaters on the river, and to swimmers. Etc.)
Hi all!
I really think this is an excellent paper and I’m glad it’s being discussed here. I have a lot to say about carbon offsetting, but most of it is beyond the scope of Garrett and Christian’s paper. Here I’ll raise a technical objection, different from the ones already raised, and then I’ll say where I think it leads. The authors use some examples to argue that there are cases in which the Standard Baseline is required to explain why a course of action is impermissible and the other Baselines are insufficient. I don’t think their examples succeed in doing so.
In Boating Delivery (p. 365), the agent is paid to reduce harm, and in the course of delivering he increases harm. They say that the Moralized Baseline alone implies that the action is risk-imposition permissible, so the Standard Baseline is needed to give the correct prediction that it is impermissible. The problem is that, as far as I can tell, the Standard Baseline also implies that the action is permissible. The risk caused by delivering the extractor by boat is the same as the risk caused by not delivering the extractor by boat; at least, that is so if we assume the latter means not delivering the extractor at all, and I don’t see any reason given in the paper not to assume so. They say: “I am answerable for the risk I impose in meeting my obligations as well.” This seems correct. My objection is to what follows: “To detect that risk, my actions must also be measured against the Standard Baseline.” This seems to me incorrect. The more intuitive explanation in the example is that the obligation to deliver the extractor could have been fulfilled without imposing that extra risk. What is required is a better formulation of the Moralized Baseline, not the Standard Baseline.
A second example brought in to justify the need for the Standard Baseline is Reaction (p. 368). In this case the agent performs an action that will most probably prevent someone else from imposing risk, but it has some probability of failing to do so and actually increasing the harm (when combined with the agent’s own risk-imposing action). The Attributional Baseline, as they formulate it, implies that the risk of added harm doesn’t count, and that seems wrong. They think that incorporating the Standard Baseline does the needed work here. I don’t see how. First, even if the added harm doesn’t count, the Attributional Baseline still yields the desired prediction that the action is risk-imposition-impermissible; there is no need for the Standard Baseline for that. Second, at most the Attributional Baseline gives the wrong result as to how impermissible it is – how much excessive risk is imposed in comparison to the right baseline. The set of actions in Reaction should be more risk-imposition-impermissible than the Attributional Baseline suggests. But then the Standard Baseline gives an even worse prediction. According to the Standard Baseline, preventing someone else from imposing risk counts, so the Standard Baseline in Reaction will be a higher level of risk than the Attributional Baseline and therefore more permissive.
Where do I think this should lead the authors? I think this should push them towards an account that keeps all their intuitions, but has a single baseline formulated somewhat differently from their current method. The single baseline is along the lines of their combined condition 4, that both independent duties to reduce risk and preventing others from imposing risk don’t count as offsetting one’s risk imposition. But I suggest that the precise way of formulating this idea needs to be refined.
Perhaps though I made some mistake somewhere here. The paper has so many fine details that I worry that my comments stem from having missed something.
Thanks Susanne, for this fascinating comment–you raise so many excellent questions here!
A few comments on some of the central points.
You write: “it seems that the supererogatory preemptive action could morally offset (or at least act as a justification for) the risk that I impose”.
I think we agree with you about this. Our claim is that even if this is true, we impose a risk in this case that requires a justification. That it is supererogatory and preemptive may help supply such a justification. But it is not clear that just any supererogatory preemptive action could do so. It seems supererogatory to pay Ned not to go boating—we don’t have any independent obligation, given all the other things we could use our resources on, to pay him not to go boating. But paying him not to go boating does not itself necessarily provide a sufficient justification for my boating trip.
We think we also agree with you about the circumstances under which a set of acts that includes a risk-imposition and a supererogatory preemptive action may turn out to be risk-imposition permissible. The costs of alternative means of offsetting, for example, may make it permissible for an agent instead to impose a risk along with such preemptive action. On our account, though, this will still be a case of permissible risk-imposition, rather than a case where we can say that we have imposed no risk. Such a case would be similar, in that respect, to the case of a very poor person who needs to go boating to meet their basic needs, where paying for offsets of any kind would also render it impossible for them to meet those needs. Their interest in meeting their basic needs gives them a good justification for the risk they impose, but they do impose a risk.
The connection you make with Kamm’s Principle of Secondary Permissibility is really intriguing. As you extend the idea: “if preemption makes all those affected on expectation better off than “proper risk reduction”, this could make one’s own risk imposition plus preempting someone else’s permissible.”
This is an interesting proposal. It could be developed as a stronger or a weaker claim. The stronger claim is: if preemption makes all those affected on expectation better off, that is by itself enough to make one’s risk-imposition plus preempting permissible. This seems to us to have fairly counterintuitive implications. In particular, it seems to make my preemptive joyride permissible. Here too I am making them better off than they would be in a situation in which I undertook ‘proper risk reduction’ by means of erecting barriers to my joyride, but I do not seem to act permissibly in taking the ride, even after I have airlifted the child to the hospital. [Things might be different, of course, if I could obtain in advance the consent of those potentially affected by these sets of actions.]
The weaker claim is: if preemption makes all those affected on expectation better off, that could make one’s risk-imposition easier to justify than it otherwise would be. We agree with that. If I’m driving very fast not as a joyride but for some more serious purpose, then the fact that my driving is accompanied by another action that makes all those affected on expectation better off could help to make it permissible.
Thanks also for encouraging us to think about the case of imposing risk through having children.
This issue raises many interesting questions. Some of these are empirical–is it really true that, in expectation, my creating a child will impose more risk than it reduces? Much may depend on the type of child I can be expected to create, and what their habits and dispositions will be. Lasting action on climate change is likely to require political solutions, and we will only get political solutions if enough people vote for and mobilize in support of them. If all those who care about climate change refrain from having children, then on the assumption that children at least tend in expectation to inherit some of the values of their parents, there may be fewer people around in the future who care about doing something about climate change, and correspondingly weaker prospects for meaningful political change.
Setting this sort of issue to one side, the moral question it raises goes beyond the scope of what we say in the paper, in that it encourages us to think about what kinds of risks can be attributed to a person. If I create a person and they go on to impose risk, to what extent can these risks also be attributed to me? Here we are inclined to think that it does make sense to take such risks into account. Whether the imposition of such risks can be given an adequate justification will depend on a host of other considerations, including the importance to me and my well-being of having children, along with other expected benefits and costs of having a family.
Relatedly, you raise some challenges to what you take to be an implicit assumption made in the paper: that all risk impositions need something like a justification.
I think we do assume that all actions that impose risk need something like a justification. Kim’s case of the provokable neighbour is a fascinating one. And we too feel the pull of different considerations here. We don’t think the issue here is whether we need to consider the risks of our leaving the house at all, given that the risks are mediated by intervening impermissible agency. Rather, this is a case where the cost to me of not acting to provoke the neighbour on any particular occasion may be small, but the cost of a standing policy of restricting my behaviour so as not to trigger his tantrums becomes quite high. If we assume that such a policy is quite costly to us over time, then leaving the house when it raises the risk of my neighbour’s tantrum may be a case of permissible risk imposition. So objecting to your neighbour’s having such a huge influence on your life does not mean that you must reject the idea that all risk impositions require justification. Rather, it points to one way in which you can provide such a justification: that restricting your behaviour in this way is very costly to you.
Your case of slightly raising the risk of heart attacks in central London is a fascinating one. Elsewhere, Garrett discusses a similar case: your small contribution to raising ambient noise levels carries with it some expectation of harm. As we see it, this sort of case can be used to raise questions about Broome’s argument from expected harm, which we assume rather than argue for in the paper. If we reject that argument, it will no doubt be harder to ground duties not to impose risks on others through our GHG emissions.
However, to see the plausibility of the claim that risk-impositions of all of these kinds require justification, consider: suppose there was absolutely no cost to me in refraining from stressing my neighbour or central Londoners. Then it seems right to us to say that my risk-imposition lacks a justification and is wrong.
Thanks for the great paper! I’m really looking forward to reading through it more carefully. But since the discussion is happening now, I thought I’d put an objection out there that I’m concerned about based on the quick read-through I’ve managed so far.
Suppose Evil Guy starts a boulder rolling down a hill towards a footpath walked by Innocent. There’s an 85% chance the boulder continues on its current path and moderately harms Innocent, a 10% chance it continues on its current path but misses Innocent, causing no harm, and a 5% chance that Bystander will put himself in the boulder’s way, suffering a somewhat more than moderate harm to stop it.
I take it that when Evil Guy starts the boulder rolling, he has thereby imposed his risk of harm on Innocent. By stopping the boulder, Bystander may be able to prevent Evil Guy from imposing harm on Innocent, but he would not have prevented Evil Guy from imposing risk of harm. If this is right, then the Attributional Baseline will not require Bystander to consider the baseline risk to Innocent where Evil Guy never starts the boulder rolling in the first place.
But now imagine Bystander performs the following action set. Action 1: stop the boulder at serious cost to self. Action 2: for fun, start the boulder rolling towards Innocent, with an 80% chance it will hit and moderately harm Innocent and a 20% chance it will not.
Bystander’s Action 2 seems clearly impermissible to me. But it looks like it will be deemed permissible by PARI, since it is part of the risk-imposition-permissible set of stopping the boulder and then starting it again. Stopping and starting the boulder does not impose additional risk on Innocent judged in relation to the Standard Baseline. Indeed, it slightly reduces it. But nor does it impose additional risks judged in relation to any of the other baselines, which as far as I can tell will not differ from the Standard Baseline in this case. Bystander is not obligated to perform Action 1 (which would cause more harm to Bystander than it might prevent Innocent from suffering) and Action 1 does not prevent anyone’s risk imposition.
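To make the comparison concrete (writing \(h\) for the moderate harm to Innocent, a label I’m introducing just for illustration), Innocent’s expected harm is

\[
0.85\,h \ \text{(Standard Baseline: the boulder runs on)} \qquad \text{versus} \qquad 0.80\,h \ \text{(after Actions 1 and 2)},
\]

so the action set slightly lowers Innocent’s expected harm relative to every baseline, which is why PARI appears to clear it.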
(This is a lot like the Disaster Warning case in the appendix, but the risk reduction in my case is performed by Bystander, so it isn’t answered by the response you give there.)
So I’m wondering whether this is a genuine problem for the account as it stands or what I have misunderstood if not (perhaps what prevention of risk imposition is?). And if it is a problem, I wonder whether there’s a quick fix to avoid it that doesn’t cause trouble elsewhere.
Thank you Joe: great question (as usual)!
It seems to us that there’s a significant difference between your view and Liz’s. Liz agrees with us that (1) in cases of pre-emptive *harming*, I have to justify the harm I inflict, and the fact that I have also prevented someone else from doing greater or equal harm does not itself supply the needed justification. But she questions our parallel claim that (2) in cases of pre-emptive *risk-imposition*, I have to justify the risk I impose, and the fact that I have also prevented someone else from imposing greater or equal risk does not itself supply the needed justification. You’re challenging both (1) and (2). Indeed, you’re challenging (2) via (1), since your examples involve harm-infliction rather than risk-imposition.
As we see it, your challenge is not a challenge to PARI itself, but to the claims we make about its application to GHG offsetting. PARI tells us when there is a risk-imposition that needs to be justified. It implies that if you emit and sequester, you can be performing a set of actions that imposes no risk; but that if you emit and forestall, you do impose risk (in someone else’s place), and your risk-imposition then calls for a justification. We then make the further claim that (*) “it is hard to see where that justification can come from” (378). So if your principle is correct, it would be a challenge not to PARI but to (*): it would show that in the forestalling cases that PARI says involve a risk-imposition that calls for justification, that justification can be supplied.
So: is your principle correct? It does, as you say, seem counterintuitive to us, in morally permitting me to buy myself a dangerous joyride as long as I find someone else to pay off. However, you present further cases that help to vindicate it (and therefore give us reason to think that our intuitions about the joyride case are mistaken). It’s an interesting challenge, compatible with the reflective equilibrium methodology we employ throughout the paper. But do the cases you provide support this principle?
We think they fall short of making a compelling defence of this principle. It’s explicit in Case A that I don’t have X – so getting X from Carly is a necessary means of sparing Bobby the big pain (while facilitating the small one). That seems crucial. And this is precisely the kind of case we allow for on p. 366:
“we don’t deny that the fact that the imposition of a risk is preemptive could be relevant to justifying it—for example, if preempting you were somehow required in order to lessen the overall risk of harm to nearby pedestrians, or to achieve some other morally crucial end.”
In cases like Boating and Buying Off, Preemptive Joyride, and emitting and forestalling, however, the risk-imposition is not a necessary means to the risk reduction. Here, as we see it, my imposition of risk invites the question, posed on behalf of the risk-exposed population, “Why have you made our prospect worse than it was?” We can see that a good answer to this could be: “Because I needed to do that in order to ensure that it didn’t get even worse.” That may be the right thing to say about the cases you introduce. But in the cases that we focus on I can’t say that. All I can say is: “Because the activity that makes your prospect worse is one that I enjoy, and I happen also to be doing something that makes your prospect better than it would have been.”
Thanks, Dan, for giving the paper such a close reading and pressing this important challenge!
We agree that it would be nicer if PARI could be simplified, so that instead of measuring risk-imposition against four different baselines, we could replace them with a single baseline that covered all of the relevant cases. So if your line of challenge works, we’d welcome that as good progress.
Your comments on Boating Delivery reveal an issue that we hadn’t seen before, which does invite an amendment to the Moralized Baseline. However, we think the paper does say the right thing about the Reaction case, and hence that the Standard Baseline is not redundant.
In Boating Delivery, we were assuming a case in which, if I hadn’t been around to do the extractor-delivery for the downstreamers, they would have paid someone else to do so. (No doubt we should have made that clearer in the paper.) If so, the prospect they face without the performance of my set of actions (in delivering the extractor by boat) is one in which they will probably get their extractor by road, and I impose risk when measured against the Standard Baseline.
However, your problem still arises when we ask what the right thing to say is about a case in which we make a different set of stipulations: I’m the only available delivery method, and the extractor will not be delivered if it’s not delivered by me. Now, since the extractor will be used to extract many boat-trips’ worth of toxin, my delivering it by boat will *improve* the downstreamers’ prospect, measured relative to the Standard Baseline. So here, we think you’re right: intuitively, the complaint to make against me in this case is that there is a range of different ways of fulfilling my obligation, and I’ve chosen to fulfil it in an unnecessarily risky way. When we make this judgement, we’re using as our baseline *the least risky way of fulfilling my obligations*. We think the Moralized Baseline should indeed be revised along those lines.
If, having said this, we also agreed with you about the Reaction case, then you’d be right that the Standard Baseline could be dropped without cost. But we don’t think that is correct. To make the example more concrete, suppose that, after I pay Ned and start boating, the probability that he will still go boating is 50%; that if we both go boating the chemicals discharged by our boats will certainly react; and that if they do, the associated expectation is horrendous: 2000 cancer deaths. PARI then says there are two different ways of measuring the risk I impose, and both stand in need of justification. Measured against the Attributional Baseline, the risk I impose is an expectation of 1 cancer death; measured against the Standard Baseline it is an expectation of 1000 cancer deaths. Now it is true that if we ignore the latter, Boating and Buying Off may still be risk-imposition impermissible, and if so, ignoring the Standard Baseline will not misclassify the case as risk-imposition permissible when it’s not. But it will overlook something morally important. I am answerable (Ned is too) for exposing the downstreamers to the very serious risk associated with the possibility of the reaction. So in this case we do think the Standard Baseline is necessary to account for all of the morally significant risk-imposition that is present.
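To make the bookkeeping explicit (a rough reconstruction, assuming as in the original boating example that a single trip carries an expectation of 1 cancer death):

\[
0.5 \times 2000 = 1000 \ \text{(expected deaths from the possible reaction)}, \qquad 1 \ \text{(expected death from my trip alone)}.
\]

The Standard Baseline registers the first figure; the Attributional Baseline, because it sets aside my prevention of Ned’s trip, registers only the second.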
Hello Christian and Garrett,
Great paper (though I read this a while back and haven’t re-read it in detail).
I also had a question about pre-emption. Like Joe, I think preemptive harming is easier to justify than you do, as long as we act with the right intention. I found it a bit hard to get clear about how you parse pre-emption cases, though, so perhaps I’m missing something.
Consider:
Pain Relief: Poison has been put in X’s glass. I can’t stop them drinking. They will die a slow and painful death. I can make the death less painful by putting a pain relief drug into their glass. I can’t get their consent.
I take it this is clearly permissible; perhaps required.
Pain Relief 2: As Pain Relief, except I have two options – either put the pain relief drug in the glass or replace the poison with another poison. The death will occur at the same time in either case, but the pain will be even less if I substitute the poison.
Substituting the poison seems clearly permissible (I think you agree, though I’m not sure); perhaps required.
But I think we have a further intuition about this – the fact that I will kill in Pain Relief 2 makes no difference at all to my decision if I act with the intention of reducing pain. I would act wrongly, I think, if I acted because I wanted to be the one to kill.
Now consider:
Flight 1: If I go on a flight this will result in the pain relief drug falling into X’s glass as in Pain Relief.
This is clearly permissible.
Flight 2: There are two flights I can take. One will result in a pain relief drug falling into X’s glass as in Pain Relief. The other will result in poison being substituted as in Pain Relief 2.
If I can substitute in Pain Relief 2, I can take the second flight in Flight 2, can’t I? Or at least I can do so as long as I intend to reduce pain.
Now consider:
Flight 3: I can either stay at home or take a flight like the second flight in Flight 2.
If I am permitted to take the second flight in Flight 2, aren’t I also permitted to fly in Flight 3?
And now can’t we just generalise, and say that our initial intuition in Pain Relief 2 – that there is no moral difference between pain relief by adding a pain relief drug and pain relief by substituting a bad but better poison for a terrible one – generalises across moral contexts?
I should say that I share your intuition about Joyride. But I wonder if that is to do with the significance of intentions. If I had the intention to reduce the risk, and my riding achieved this, and I didn’t have a duty to reduce the risk even further in some other way, I think I act permissibly. But in the offsetting cases, I can act with the good intention of reducing the overall risk to others, and as long as I don’t have an independent duty to reduce these risks even further, I wonder whether there is any real moral difference between just reducing the risk, which is surely permitted, and substituting a lesser risk for a greater one.
Thanks Mike—we think that you have raised a really important challenge!
In the cases of Pre-emptive Joyriding and Boating and Buying Off, you intervene to forestall someone else’s risk-imposing action, and perform the same action yourself in its place. We’ve claimed that the risk imposed by your joyriding should be attributed to you, and that the Attributional Baseline measures this risk.
As you point out, in the Evil Guy case Bystander doesn’t prevent Evil Guy from performing a risk-imposing action. However, it does seem right to say that Bystander intervenes to remove the risk imposed by Evil Guy and replace it with a risk-imposition of their own. Although Bystander hasn’t prevented Evil Guy’s action, once they stop the boulder they have prevented Evil Guy’s risk imposition. And if so, Bystander does impose risk when measured against the Attributional Baseline. Note that the Attributional Baseline has us consider the prospect of harm faced by Innocent without the *risk imposition* of Evil Guy and without Bystander’s actions. It’s like a case in which Evil Guy fires a missile in the direction of a populated area; you intercept the missile and destroy it; then you launch another missile on the same trajectory. It is true that you haven’t prevented him from performing a risk-imposing action; but you have removed the risk he was imposing and then replaced it with a risk that you impose.
But this way of responding to your case prompts a new challenge: Why aren’t Boating and Extracting, and emitting and sequestering, relevantly similar to your case? In those cases too, others have acted to impose a risk that already exists (by releasing toxin or GHG molecules). You act, after their actions have taken place, to reduce that risk (by extracting molecules that others released). But then you impose another one in place of the risk that has previously been imposed.
This is not a challenge we take up in the paper, but as we see it there are three different ways to respond to it.
The first is to say that they are relevantly similar and this makes offsetting-by-sequestering more morally problematic than we suggest in the paper. When you offset by sequestering others’ emissions, you are replacing their risk-imposition with your own, just as you do when you offset by forestalling. In offsetting by forestalling you prevent someone else’s risk-imposing action and replace it with a risk-imposition of your own; in offsetting-by-sequestering you intervene after others’ risk-imposing action, remove (part of) the risk it imposes, and replace it with a risk-imposition of your own.
This would carry the following implications for GHG offsetting. Pre-industrial atmospheric CO2 concentrations were 280ppm; the current figure is 420ppm. We can infer that, on average, 2/3 of any CO2 we sequester is “natural” and 1/3 anthropogenic—so that 1/3 of it can be described as removing a justification-requiring imposition made by others. What the above line of thought suggests is that we need to discount that 1/3 in any offsetting-by-sequestering that we do. One has to sequester in a way that carries the expectation of removing an equivalent amount of CO2 to one’s own emissions from the “natural” 2/3 of atmospheric CO2—any collateral anthropogenic CO2 that one happens to sequester at the same time doesn’t count as offsetting one’s own emissions, since it’s a justification-requiring imposition that one would simply be replacing.
This would still allow that offsetting-by-sequestering can be done in a way that gives you no risk-imposition to justify. But to achieve this, you have to perform offsetting actions that carry an expectation of removing 150% of the GHG that you have emitted.
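For anyone who wants it, the arithmetic behind the 150% figure is just this:

\[
\frac{280}{420} = \frac{2}{3} \ \text{natural}, \qquad \frac{420-280}{420} = \frac{1}{3} \ \text{anthropogenic};
\]

so to remove an amount \(E\) of CO2 from the natural fraction alone, one must sequester a total \(S\) with \(\tfrac{2}{3}S = E\), i.e. \(S = 1.5E\).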
This would lead to a conclusion that is similar to but stronger than the one we defend in the paper: offsetting by forestalling is dubious; offsetting-by-sequestering can be vindicated, but merely taking as much GHG from the atmosphere as I put in won’t be sufficient to make my conduct permissible—I have to remove a great deal more than I emit.
The second response says: the forestalling and sequestering cases are relevantly similar, and this makes offsetting-by-forestalling less morally problematic than we suggest in the paper. This response travels in the direction recommended by Joe Horton, and rejects the significance we attach to the Attributional Baseline. But this response implies that my conduct in cases like Joyriding and Forestalling, Bystander’s conduct in the boulder case, and the missile-interceptor’s conduct impose no risk that stands in need of justification. And we find that hard to believe.
Finally, there is a third response: there are morally significant features that distinguish cases like Boating and Extracting and ordinary GHG offsetting-by-sequestering from the boulder and missile interception cases. The differences between the kind of risk imposed by releasing a boulder or missile and the kind of risk imposed by releasing a toxin into the river or GHG into the atmosphere explain why it is correct to say that you are replacing someone else’s risk-imposition in the first case but not in the second. We are inclined to think that the third response is defensible, but defending it clearly goes far beyond what we do in the paper. We recognize that this is a very important challenge, one we hope to take up in future work.
Thanks Victor, for these (characteristically) interesting cases for us to consider and for presenting so clearly a challenge to our treatment of cases involving pre-emption.
We think that a critical question in these pre-emption cases is: is pre-empting a harmful or risky act with another act that is less harmful or risky a necessary means to, or a causal consequence of, preventing the greater harm or risk of harm? If so, then it seems permissible so long as one acts with good intentions. A good answer to “Why did you harm me / impose this risk on me?” is “I had to do it in order to make you better off overall.” So with respect to that sort of case, at least, we think we are in agreement. But suppose that imposing the smaller harm or risk is not strictly necessary for preventing the greater harm or risk. Then we think it will be hard to justify the pre-emptive act. But that is not to say there is no way of justifying it. Perhaps imposing the lesser harm or risk of harm is not strictly necessary to prevent the greater harm or risk of harm, but the only other ways of doing so are so very costly for you that it would be seriously heroic for you to choose any of them. In that case too you might be justified in acting pre-emptively. But note that in this case you can say to the victim something that the risk imposers cannot say in cases like Boating and Buying Off or Pre-emptive Joyride: that pre-emptive harming or risk imposition is the only way to reduce the risk faced by others without taking on undue cost.
The same considerations seem to apply in your flights cases: if taking the second flight in Flight 2 is a necessary means of lessening the harm faced by X, then all else being equal it seems permissible if you act with the right intentions. Perhaps it is also permissible if it is the only means of lessening the harm faced by X that would not involve undue sacrifice. But where that is not the case, we think such acts are harder to justify. Note that in the Preemptive Joyride case it is not my joyriding that reduces the risk faced by the nearby pedestrians. (Suppose I’m a safer high-speed driver, so the risk is indeed reduced.) What reduces the risk is my paying the other would-be joyrider. I then impose a risk on the nearby pedestrians through my joyride. So it is hard to see this case as one in which I could credibly claim to be joyriding with the intention of reducing risk. This makes it different from your intervention to replace a poison with a less painful poison in Pain Relief 2. If there were a joyriding case where I joyride because it is the only way of preventing your much riskier joyride, then we agree that that could potentially justify your risk imposition.
Following up on your response to my critique regarding Reaction: I think the Attributional Baseline needs to be revised as well, and once you do so, I suspect the Standard Baseline will turn out to be superfluous even in cases like Reaction. Here’s a version of Reaction that shows why. Suppose:
(1) If Ned doesn’t go boating nobody is expected to die whether or not you go as well. That is because your emissions only become toxic when reacting with Ned’s.
(2) If only Ned goes boating, 10 people are expected to die.
(3) If Ned and you go boating, 20 people are expected to die.
(4) If you pay Ned not to go boating, there is 50% probability that he will cancel his outing.
In this case both the Standard Baseline and the Attributional Baseline imply that it is risk-imposition permissible to pay and boat. (Note: you’ll need to correct the numbers if you think, like Buchak, that we should be risk averse when imposing risk on others.) But I don’t think this implication fits well with your own intuitions about offsetting by preventing others from imposing risk. I think you should say that paying and boating in this case is risk-imposition-impermissible.
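To spell out the Standard Baseline arithmetic with these numbers (the baseline being that I do nothing and Ned boats, for an expectation of 10 deaths):

\[
\mathbb{E}[\text{pay and boat}] = 0.5 \times 20 + 0.5 \times 0 = 10,
\]

so paying and boating imposes no excess risk against that baseline; and setting aside my prevention of Ned’s trip for the Attributional Baseline delivers, as far as I can see, the same verdict.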
Now a less technical contribution to the discussion. Following up on some of preceding exchanges, I think it’s helpful to point out two features of Garrett and Christian’s account that limit the ways in which one might object to it.
First, their account is not an account of when risk imposition is all things considered justified or unjustified. So, for any counterexample of the type “your account says A is permissible, but it isn’t,” they can say that it is impermissible for a different reason. (See for example their response to Sea Wall on p. 380.) For any counterexample of the type “your account says that A is impermissible, but it is permissible,” they can say that it is a case in which risk imposition indeed serves as a consideration against the action, but is outweighed by some other consideration.
Second, they don’t give an account for what counts as excessive risk, and they say that what is excessive is context dependent (p. 356). So for any counterexample in which some risk imposition seems permissible, they can say that it is because in the context the risk imposition isn’t excessive.
What this means is that the kinds of intuitions you need in order to object to their account aren’t just intuitions that some action is im/permissible. Their account can accommodate any such intuition. Rather, what you need are intuitions about the explanation of why some action is im/permissible, where that explanation is inconsistent with their account.
Thanks for that. I think I see more clearly. But I think there is still trouble ahead.
Suppose
Pain Relief 2. X has been poisoned and will suffer a slow and painful death if I do nothing. All I can do for X is substitute the poison with another poison, making X’s death less painful.
I permissibly substitute on your view, because my substituting is necessary to reduce the harm, and the reduction in the harm is a causal consequence of my harm causing act.
Now consider:
Pain Relief 3. Like Pain Relief 2, except I could also reduce X’s pain equally by getting pain relief medicine. However, doing this will be costly.
Now my causing the harm is not necessary to achieve the pain relief. I can do it in another way, but at a cost. Strict necessity implies that I may not substitute however high the cost, because inflicting the harm is not strictly necessary. So if the only way of administering the simple pain relief involves my being killed, I may not substitute. But that can’t be right, as you note.
My own view, though, is that I can substitute to avoid any cost at all. To reinforce this view, suppose that either I bear the cost and add only the pain relief medicine, or I substitute for the poison an identical poison that is mixed with pain relief. Either way, X is going to drink an identical quantity of poison and pain relief. Isn’t it hard to believe that where I act just for the sake of X, I must bear a cost to ensure that I have not substituted the poison? That seems true even if administering the poison mixed with pain relief comes at no cost. It seems more powerful still in the case where administering the poison mixed with pain relief is costly to me, but the pain relief alone would come at a greater cost. And it is more powerful again where my administering it is supererogatory.
I now see that your Joyride case has a different structure, though. In that case, I can:
1) Do nothing.
2) Pay the joyrider not to joyride and then not joyride.
3) Pay the joyrider not to joyride and then joyride.
You might say: 1) is permissible; 2) is permissible; but 3) is wrong. And 3) is wrong even though 3) is better for everyone than 1). The possibility of 2) on this view makes 3) wrong. This is like a version of the all or nothing problem. I agree that there are cases like this, and perhaps this is one.
Some people might think something like: I am permitted to pick the cost-causing suboptimal option only if I would have been permitted to pick between two similar options where I bear no cost. As I would not be permitted to joyride without paying, it is wrong to pay and joyride. I must either pay and not joyride or not pay.
But that view is far too strict. I take it that this is Joe’s thought (correct me if I’m wrong, Joe): the fact that I bear a cost to achieve the good sometimes gives me latitude to impose a cost for my benefit, in order to offset the cost that I bear.
To see this, consider:
Rescue: I can:
1) Do nothing;
2) Rescue X at financial cost n;
3) Rescue X at financial cost n, but in a way that will result in transferring 0.5n from X to me.
I may well be permitted to pick 3). That is so even though it would have been wrong for me to rescue X in a way that transfers 0.5n to me had I been able to do that without bearing n. One reason is that X would owe me compensation for the rescue at cost n.
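To make the ledger explicit: in option 3) my net outlay is $n - 0.5n = 0.5n$, while X still gets the rescue, now at a cost to X of $0.5n$.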
So now, returning to the Joyride-type cases: why not think that sometimes I am permitted to joyride? This is so where the people who bear the risk would owe it to me to bear that risk, so that I get the pleasure of joyriding, in return for my paying the joyrider not to impose the greater risk on them.
Thanks for the helpful response! I just want to double-check that I understand where our judgments about my argument diverge. I presented the two cases below.
CASE A: Amy is going to inflict BIG pain on Bobby at 10am. I could pay Amy not to, but it’d cost me X, and I don’t have X. However, Carly really wants to see me inflict SMALL pain on Bobby at 9am, and she is willing to pay me X for the pleasure.
CASE B: Amy is going to inflict BIG pain on Bobby at 10am. I could pay Amy not to, but it’d cost me X, where X is enough to make this supererogatory. Carly really wants to see me inflict SMALL pain on Bobby at 9am, and she is willing to pay me X for the pleasure.
I suggested that, if the difference between BIG and SMALL is large enough, then it is not only permissible but also morally required for me to ‘preempt’ in CASE A. I think you agree with that. (You say: “It’s explicit in Case A that I don’t have X – so getting X from Carly is a necessary means of sparing Bobby the big pain (while facilitating the small one). That seems crucial. And this is precisely the kind of case we allow for”.)
I then suggested that, if it is morally required for me to preempt in CASE A, it is also morally required for me to preempt (or instead make the supererogatory sacrifice) in CASE B. Is that the inference that you reject?
Dan, Victor, and Joe—this is just a quick message thanking you for your further points—lots to think about! It’s late in the evening here in Australia, but we will reply to them all in our morning. Best, Christian and Garrett
Sweet dreams
Christian and Garrett’s paper is very important, particularly for us academics. We are generally high emitters of greenhouse gas because of our propensity to travel. Moreover, their conclusion is troubling. They tell us that emitting greenhouse gas may be permissible if it is offset by sequestering, but that it requires special justification if it is offset by forestalling. The trouble is that sequestering greenhouse gas is very expensive compared with forestalling. Indeed, reliable sequestration is not yet available to us as individuals. Many of our emissions cause carbon to be taken out of the geology and put into the air. Reliable sequestration requires carbon to be put back into the geology, or at least set on a reliable path to return there. Planting trees is a poor substitute.
One thing concerns me about their argument. If you perform a joint act of emitting and offsetting by any means, you leave the same amount of greenhouse gas in the air as would have been there had you not done this act. You add some and you subtract some. I think it’s fair to say you cause no increase. However, C&G say you cannot claim credit for the subtraction if it’s done by forestalling. This seems plausible because your causing the subtraction is done through other agents. Other people were going to emit, and you stop them from doing so. As Bernard Williams said long ago, you are not equally responsible for everything you cause when other agents are involved. C&G say that your joint act, though not causing an increase in emissions, shifts the responsibility for some emissions from other people to you. For that reason, the forestalling does not make your joint act of emitting and offsetting permissible. I shall call such offsetting ‘unsuccessful’.
What puzzles me is that they think you can claim credit for subtraction by sequestering, even if the sequestering is done through other agents. They think it’s ok to pay other people to do your sequestering for you. Why the asymmetry? Many people in Africa tear down trees for firewood, in order to cook their food. According to C&G, you can successfully offset your emissions by paying them to plant trees to replace the ones they tear down, but not by paying them not to tear them down in the first place (for instance, by buying them efficient stoves that require less firewood). This doesn’t seem plausible. C&G give an example (p. 376) where the boundary between sequestering and forestalling at first seems hard to draw, and they explain how to draw it. But to me their example does not show the difficulty of drawing that boundary. It shows that the boundary between sequestering and forestalling does not mark the moral boundary between successful offsetting and unsuccessful offsetting.
The conclusion of this line of thought may be even more troubling. It may turn out that you cannot claim credit for any sort of offsetting that you do not do by your own hand. Since you cannot sequester successfully by your own hand, it seems offsetting cannot be successful.
Suppose that is true – what should we conclude? Go back to the beginning. If you emit and offset – whether successfully or not – you do not cause the quantity of greenhouse gas in the air to increase. You therefore do not cause an increase in the risk of harm to people. The worry is all about attribution of responsibility. Although the risk remains the same, if your offsetting is unsuccessful, your joint act shifts the responsibility for the risk from other people to yourself. Why should we be worried about that? Because of the widely recognized moral principle that you may not harm other people. According to C&G’s perhaps less widely recognized moral principle, you may not impose a risk of harm on other people. So, even if the overall risk remains the same, it is your moral duty not to shift the responsibility for imposing risk on to yourself.
Now, notice that even if offsetting is successful, its success is achieved at the cost of sacrificing potential good. Any money you spend on offsetting could be used to do much greater good in other ways. By my calculations, which are highly contentious of course, $10 spent on preemptive offsetting on average extends people’s lives, added up around the world, by about 4 hours. On the other hand, $10 contributed to some charities listed on the GiveWell website extends life by about 60 days. I think this large disparity should make us wonder whether buying offsets is morally justified. Avoiding being responsible for doing harm is a moral aim. So is doing good. When the disparity is so great, perhaps the first aim should be sacrificed to the second. And if morally successful offsetting is really impossible, that aim has to be sacrificed.
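(To put the disparity in figures, on these same contentious numbers: 60 days is $60 \times 24 = 1440$ hours, so the charity dollar does roughly $1440 / 4 = 360$ times as much expected good as the offsetting dollar.)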
What would be the consequence of sacrificing that aim? Morality would permit us to emit greenhouse gas, even without offsetting, but only to the extent that doing so is beneficial, taking account of our own benefits and harms as well as other people’s.
Look at C&G’s example of ‘preemptive joyride’, in which ‘I’ pays ‘you’ to abandon a planned joyride, and joyrides herself instead. This joint action does not add to overall risk, but it shifts the responsibility for some of the existing risk from you to I. What I does seems wrong. C&G give the impression that the wrong she does is to make herself responsible for some of the risk. She has a moral duty not herself to impose risk, and she violates that duty. No doubt this is indeed wrong. But the example is effective because we assume that the joyride is a bad thing to do, whether done by you or by I. That means I does a further wrong thing, which is to joyride. If joyriding were actually beneficial, taking account of the joy and the bad consequences for other people, I think we would be less disturbed by I’s taking over responsibility for the risk.
Dan, thanks again for these helpful comments.
On your first comment (at 2.48):
That’s an ingenious example! But maybe we’re not getting the structure properly. We can see how there’s no risk imposition measured according to the Standard Baseline: the prospect of harm faced by the downstreamers when Ned boats and you don’t is 10 expected deaths. If you boat and pay Ned, then there’s a 50% probability of 0 deaths and a 50% probability of 20 deaths, so (although, as you point out, the prospect isn’t the same) it’s at least arguable that no extra risk has been imposed.
However, doesn’t your set of actions (boating + buying off) impose risk when measured against the Attributional Baseline? That is the prospect of harm faced by the downstreamers when neither Ned nor you boats: 0 deaths. Measured against that baseline, the risk that boating + buying off imposes is an expectation of 10 extra deaths.
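In summary form (a sketch of the two measurements, writing $E$ for expected deaths): against the Standard Baseline, $E(\text{boat and buy off}) - E(\text{do nothing}) = 10 - 10 = 0$; against the Attributional Baseline as we are measuring it here, $E(\text{boat and buy off}) - E(\text{neither you nor Ned boats}) = 10 - 0 = 10$.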
On your second comment (at 3.06):
Yes, that is quite right (and very nicely put). Our two key claims are:
(1) In sequestering-type cases, you could legitimately say (if you do enough of the right type of sequestering) that there is no risk-imposition that calls for justification; but in forestalling-type cases, there is a risk-imposition that calls for justification.
(2) If the risk associated with emissions is as significant as Broome and others think it is, then it’s hard to see how reasons of enjoyment or convenience can succeed in justifying any emissions that are offset by forestalling.
PARI gets us (1) but not (2). PARI tells us how to measure the risk-imposition that calls for justification, but not what is required in order to provide that justification. We doubt whether there’s any useful principle that sets out a necessary condition for providing an adequate justification for the imposition of a risk of a given magnitude on a population. (But if we’re wrong about that, it could be added to the rest of our account.) However, we do say some things about what we take to be inadequate justifications of risk-imposition. And developing sets of criteria for the justification of the imposition of risk is extraordinarily important. Unsurprisingly, then, much of the discussion on the forum has so far been about this topic—Susanne, Liz, Joe, and Victor, for example, seem more willing to treat the fact that a risk-imposition is paired with an act that prevents a greater risk imposition as a sound justification for the imposition of that risk than we do.
Hi Christian and Garrett,
Thanks for your reply.
I agree with you that the second response doesn’t seem very plausible if it allows Bystander to stop and then start the boulder.
I don’t think I’m really getting the first response yet, but is it right that on this view if the original risk of harm comes entirely from some other agents’ actions, one cannot permissibly contribute and offset at all, but if the original risk of harm comes entirely from natural processes, one may contribute and offset in a 1:1 manner (assuming one is not otherwise obligated to intervene)?
That would make for an interesting asymmetry between how one can interact with risks that come from agents’ actions and otherwise similar risks that don’t. If it somehow turned out that all GHGs were produced by moral agents (suppose the non-anthropogenic GHG molecules were put there by aliens), then we could not permissibly emit and offset (except when doing so would have some morally weighty independent justification). But if our world had just popped into existence as it is by a random natural process, we could permissibly emit CO2 if we offset by sequestering the same amount we emit.
That’s not necessarily a problem–it strikes me as a little odd, at least without some further story, but I don’t have clear judgements one way or the other about the cases themselves.
But if this also carries over to my boulder case, that does seem wrong to me. If the boulder rolling wasn’t started by Evil Guy but rather by a falling tree, I still don’t think Bystander would be permitted to stop it at considerable cost to himself and then start it rolling again with the same or slightly lower (or even very small) risk of doing harm.
So I think we should also reject the first response as I understand it. Though again I’m not confident that my understanding is what you had in mind.
My own view is that something along the lines of your third response is best. The way I’ve been thinking about it hasn’t been in terms of risk imposition, but in terms of acting in a way that makes an impending harm worse or less bad. I’ve got a paper coming out in OSNE this year where the view I end up with is one where the identity of harm events matters: if one acts in a way that makes a harm worse but also in a way that makes that same harm less bad by as much or more, one acts permissibly (unless there’s something else wrong with one’s act). But if one reduces or prevents one harm, one is not thereby permitted to worsen or create a different harm to replace it.
This is driven mainly by judgements about cases, though, and I don’t yet have a satisfying story about why the identities of harms (or risks of harm) would matter. So I’m also open to the possibility that what looked like permissible cases of offsetting are instead to be lumped together with the cases of harm prevention and replacement, and so are really (with some provisos) impermissible after all. In any case, I’d be very interested if you come up with a way of spelling out that third response.
Thanks Joe, for pushing us to think further about your interesting case.
You ask: if it is morally required for me to pre-empt in CASE A, is it also morally required for me to pre-empt (or instead make the supererogatory sacrifice) in CASE B?
As far as we can see, the argument of our paper doesn’t commit us either way. We do make the negative claim: in Preemptive Joyride, the fact that you’ve prevented someone else’s dangerous joyride doesn’t justify you in taking one of your own. That commits us to rejecting the principle in your first comment (we think of Preemptive Joyride as a counterexample to that principle).
But we don’t attempt a general explanation of what is necessary in order for the fact that an imposition of a lesser risk or harm on a person or population is accompanied by the prevention of a greater risk or harm to the same person or population to contribute to a successful justification of the imposition. We think it does so contribute in Case A, and that it doesn’t in Preemptive Joyride.
However, for what it’s worth—this doesn’t connect very clearly with the argument in the paper—we’re inclined to think that there are going to be some versions of Case B in which inflicting the small pain is permissible (including versions in which you’re required either to inflict the small pain or perform the supererogatory act), and others in which it’s not. This will depend on how big the big pain is, how small the small pain is, and how big the cost X is. If the big pain is an hour of torture, the small pain is a pin-prick, and X is losing your life savings, then the justification for inflicting the pin-prick can be put like this: “I’m inflicting the pin-prick in order to acquire the means (X) of sparing Bobby the big pain, and it’s more reasonable for the cost of using those means to be distributed to Bobby (as the pain of a pin-prick) than to me (as the loss of my life savings).”
But we don’t think that a good justification for inflicting the small pain in Case B would be simply: “Inflicting the small pain is needed to compensate me for the (otherwise) supererogatory sacrifice I would be making in acting to prevent the large one.”
We think that doesn’t establish enough of a morally relevant connection between the harm or risk averted and the harm or risk imposed to provide a satisfactory justification for the imposition. As we see it, that is shown by cases such as Preemptive Joyride, or this one:
Radiation Experiments. Carl has always longed to conduct radiation experiments; the only way to conduct the experiments using current technology will expose other lab workers, as a foreseen side effect, to radiation causing long-term harm but not death. A disease outbreak occurs in California. Carl supererogatorily spends X on travelling to California, where he works to save the lives of many lab workers. After they recover, he then conducts his experiments in their lab, recouping in satisfied curiosity the X that he supererogatorily spent on saving their lives, with the foreseen side effect that they are all harmfully irradiated.
Many thanks for joining the discussion John, and for raising these interesting challenges to some of the distinctions we make in the paper.
As we see it, the main question we want to raise in the paper is whether it’s right to reason as you do here in your comment above:
“If you perform a joint act of emitting and offsetting by any means, you leave the same amount of greenhouse gas in the air as would have been there had you not done this act. You add some and you subtract some. I think it’s fair to say you cause no increase.”
We agree with the first sentence:
“If you perform a joint act of emitting and offsetting by any means, you leave the same amount of greenhouse gas in the air as would have been there had you not done this act.”
Framing a parallel thought in terms of risk-imposition, it’s also true that if you perform a joint act of emitting and (sufficient amounts of effective) offsetting by any means, you leave the global population no more risk-exposed than it would have been had you not done this joint act. That’s measuring risk-imposition against the Standard Baseline.
However, you then say:
“You add some and you subtract some.”
We think that’s true only of sequestering. It’s false of forestalling. Forestalling doesn’t subtract GHG from the air; it prevents others from adding GHG to the air.
You then say:
“I think it’s fair to say you cause no increase.”
But we want to question this too. When you offset by forestalling, it’s true that you don’t cause the amount of GHG in the air to be higher than it would have been if you had neither emitted nor forestalled. But you do cause it to increase. Suppose every emitter in the world also offsets by forestalling. Then the amount of GHG in the air keeps going up; and it is the emitters who have caused it to go up.
This is also pretty clear in the Boating and Buying Off case. Before either Ned or I go boating, the toxicity level in the river is L. Ned plans to go boating, which would raise the level to L+1. I pay him not to, and go boating myself. The toxicity level increases to L+1, and I cause the increase. Associated with this increase is the expectation of an additional cancer death. That risk is imposed by me.
However, it is also true that the downstreamers are no more risk exposed, after I boat and buy off, than they would have been if I had done neither. The further question (expertly pressed by others in this discussion!) is whether that fact qualifies as a justification for imposing the risk that I do impose.
Consequently, we don’t think it’s right to say that in forestalling “your causing the subtraction is done through other agents.” You’re not causing a subtraction but preventing an addition. It’s this that we think is significant, not whether what you do is done through other agents. (As you point out, real-world sequestering involves paying other agents to sequester—and real-world forestalling likewise involves paying third parties to cause would-be emitters not to emit.)
You write: “What puzzles me is that they think you can claim credit for subtraction by sequestering, even if the sequestering is done through other agents. They think it’s ok to pay other people to do your sequestering for you. Why the asymmetry? Many people in Africa tear down trees for firewood, in order to cook their food. According to C&G, you can successfully offset your emissions by paying them to plant trees to replace the ones they tear down, but not by paying them not to tear them down in the first place (for instance, by buying them efficient stoves that require less firewood). This doesn’t seem plausible.”
In light of the discussion above, here is the explanation of the asymmetry:
Case 1: The current level of GHG in the atmosphere is L. People want to tear down trees, which would raise the level of GHG in the atmosphere to L+1. I pay them not to, and take a trip to London. As a result of my trip, the concentration of GHG in the atmosphere increases to L+1, and I cause the increase.
Case 2: The current level of GHG in the atmosphere is L. I want to go to London, which, if I do nothing, would raise the level of GHG in the atmosphere to L+1. However, I pay someone to sequester the same amount of carbon as my flight puts in. As a result of my trip and the sequestration efforts, the concentration of GHG in the atmosphere remains at L.
The issue is not by whose hand the offsets are brought about, but rather whether in offsetting and emitting in these different ways we ensure that there is no increase in the concentration of GHG in the atmosphere.
Thanks Victor, for pressing us to think further about these issues!
On your pain relief case, you say that an agent need not take on any cost in order to reduce a greater risk to a lesser one rather than substitute the lesser risk for the greater one, when this makes no difference to the prospects of those put at risk.
We’re inclined to agree with you here. But as you note, our joyride cases have a different structure, as do the other risk pre-emption cases we discuss in the paper (thanks for revealing the connection between these cases and those that have figured in discussions of the all or nothing problem).
Turning to your Rescue case, we agree that there could be cases with this structure where I’m permitted to pick 3.
So: what’s the relevant difference with Joyride?
Here, our thought is this. Consider first Simple Joyride, where my options are:
1) not joyride, or
2) joyride, gain pleasure of degree P and impose risk of magnitude R on bystanders X.
Choosing 2) is wrong (we can stipulate), because my gaining P is an insufficient justification for subjecting X to R.
Now move to Preemptive Joyride, in which I joyride, gaining P and imposing R on X, while preventing someone else from imposing R+ on X.
Here, we want to ask: given that in Simple Joyride P is not an adequate justification for imposing R on X, how can X owe it to me in Preemptive Joyride to bear R for the sake of my getting P, in return for my preventing the imposition of R+? P is not something I’m in any position to complain to X about missing out on, if it’s not capable of justifying the imposition of R on X in Simple Joyride.
We agree with you that a joyride case could be constructed in which we should say similar things to what you say about Rescue. For example:
Distributed-cost Joyride Pre-emption
Someone else is planning to joyride, imposing risk of magnitude R+ on bystanders X. I can prevent this by paying C. I can also transfer part of C onto X.
Here, we agree that it could be permissible to transfer part—perhaps even all—of C onto X.
But we don’t think that shows that *missing out on P* should be regarded as a cost I can legitimately transfer onto X in Preemptive Joyride by taking a joyride, given that P isn’t sufficient to justify option 2) in Simple Joyride.
Thanks for the follow-up Mike—we’d be very interested in your new paper on offsetting. The view you take would seem to interact interestingly with some of the cases that Victor has presented above. We are still developing our best attempt at the third type of response, and would be happy to share it when it is in a more presentable state.
On your comment: The first response would indeed assume an asymmetry between how one can interact with risks that come from agents’ actions and otherwise similar risks that don’t. Your adaptation of the Evil Guy case puts significant pressure on this kind of response. There are other cases that seem less counterintuitive. Here is one:
Natural Toxicity
Natural processes have released toxins into the river: the toxicity of the water is at level L. I then run my extractor and simultaneously go boating, discharging the same amount that I extract. At all times, the toxicity level remains at L.
Here, the reasoning in the paper seems pretty attractive to us: no one’s prospect is worsened by anything I do, so I have no risk-imposition to answer for. If that’s right, it suggests that to make the first response credible, you’d need to point to some relevant difference between, on the one hand, stopping a big projectile and launching a new one and, on the other, scooping out some naturally occurring toxin and putting some back in. As it happens, we think that a credible version of the third response will also have to distinguish between different ways I can be causally implicated in different harms, such that I can be morally accountable for some but not others. We’re still not sure such a case can be convincingly made.
We’ll follow your own work on these issues with interest!
Just to explain my example: I was following your footnote 28. Accordingly, when checking the Attributional Baseline, what we compare is you boating but not Ned, with you not boating and Ned not boating. In both cases, since Ned is not boating, the prospect is 0.
If you revise what you say in that footnote along the lines you suggest here, then there are other counterexamples. Suppose out of good will you run an extractor to remove toxin from the water. Seeing you run the extractor, Ned might decide not to go boating. If you don’t run the extractor and Ned doesn’t go boating, then the expectation of harm will be less than if you run the extractor. The Attributional Baseline will now say that it is impermissible to run the extractor. That’s obviously wrong: if you don’t run the extractor, Ned will go boating, so you have only reason to run the extractor, and none against it.
I still might have made some mistake here. Anyhow, I enjoyed the challenge of working through your account!
Thanks for the response.
Whilst I think your conclusion about the joyride case is right, I don’t think the reasoning you imply explains it.
Here’s a more abstract version of the view you suggest (if I have it right):
A) Suppose it would be wrong for X to secure Benefit, imposing a cost, H, on Y, without doing anything for Y.
Then,
B) It would also be wrong for X both to 1) bear a cost, H2, benefiting Y with Benefit 2 and, through an independent action, 2) secure Benefit, imposing H on Y.
You suggest that this is so because A) shows that X has no complaint about missing out on Benefit given the cost to Y.
But the fact that X has borne H2 for the sake of Y, conferring Benefit 2, must sometimes matter. It changes the overall distribution between X and Y. Why should X not sometimes be permitted to reduce the overall cost they incur in bearing H2 by receiving Benefit, given that Y has received Benefit 2?
The fact that they would not be permitted to do this had they not conferred Benefit 2 on Y at a cost surely isn’t decisive.
There really is no important difference, I think, between the rescue case and the joyride case along the lines you suggest. It would be wrong to take money from the person you rescue without having rescued them. But that doesn’t show that it is wrong to take the money once you have borne a cost in rescuing them.
I think the reason the Joyride case obscures this is that joyriding just doesn’t seem valuable, and so is not something that X has much reason to want, as John’s comment also suggests. But that isn’t going to carry over to standard offsetting cases.
Dan,
Yes, you make a good point about our reply to you above being out of line with footnote 28. The expected-harm measure we’re applying in that reply is not:
$\sum_i P_{S \land \lnot A}(H_i) \cdot V(H_i) \;-\; \sum_i P_{\lnot S \land \lnot A}(H_i) \cdot V(H_i)$
Instead, it’s:
$\sum_i P_{S}(H_i) \cdot V(H_i) \;-\; \sum_i P_{\lnot S \land \lnot A}(H_i) \cdot V(H_i)$
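In words, as we’re using the notation: $S$ is the set of actions being assessed, $A$ is the act that $S$ prevents (Ned’s boating), the $H_i$ are the possible harms, and $V$ measures their badness. Applied to your Reaction variant (a quick sketch with your numbers): the footnote 28 measure compares two worlds in which Ned doesn’t boat, giving $0 - 0 = 0$, while the measure in our reply compares the actual prospect of boating and paying with the Ned-not-boating baseline, giving $(0.5 \cdot 0 + 0.5 \cdot 20) - 0 = 10$ expected deaths.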
We’ll need to think some more about whether the footnote 28 measure should be replaced, and if so with what. Your further example does look like a good counterexample to just replacing the first measure with the second. (It wouldn’t itself imply that running the extractor is impermissible. But it would imply that doing so imposes a risk that stands in need of justification, and that does seem bad enough.)
Thanks for the excellent food for thought!
Victor,
Great: thanks for pressing further on this. We’ll give Joyriding vs Rescue some more thought and get back to you…
Thanks everyone for taking the time to engage so carefully and interestingly with our work. This forum has really helped deepen our thinking about the issues in the paper and indicated interesting new lines of inquiry. We’re happy to continue the conversation on email. (If there are further posts to this thread we will try to respond, but it may be a couple of days before we can do so since it is Friday afternoon here.)
Dear Christian and Garrett
You say
“Case 1: The current level of GHG in the atmosphere is L. People want to tear down trees, which would raise the level of GHG in the atmosphere to L+1. I pay them not to, and take a trip to London. As a result of my trip, the concentration of GHG in the atmosphere increases to L+1, and I cause the increase.
Case 2: The current level of GHG in the atmosphere is L. I want to go to London, which, if I do nothing, would raise the level of GHG in the atmosphere to L+1. However, I pay someone to sequester the same amount of carbon as my flight puts in. As a result of my trip and the sequestration efforts, the concentration of GHG in the atmosphere remains at L.”
But that is not so. The level increases to L+1. You forget that other people emit 1 unit.
Then you say
“The issue is not by whose hand the offsets are brought about, but rather whether in offsetting and emitting in these different ways we ensure that there is no increase in the concentration of GHG in the atmosphere.”
But in cases 1 and 2 the increase in concentration is the same. So I think the issue has to be who is responsible for the increase.
Thanks again Victor. In your last comment you represent us as arguing from:
A) Suppose it would be wrong for X to secure Benefit, imposing a cost, H, on Y, without doing anything for Y,
to
B) It would also be wrong for X both to 1) bear a cost, H2, benefiting Y with Benefit 2 and, through an independent action, 2) secure Benefit, imposing H on Y,
and thus defending a principle from which the wrongness of Preemptive Joyride can be inferred.
We don’t see the dialectic that way. In your (3.15) post, you gave us the Rescue case, and asked why the point there doesn’t transfer to joyride cases. In our response to you, we took ourselves to be offering an answer to that question: an explanation of what would need to be true of *a* joyride case in order for it to be relevantly similar to Rescue, where “X would owe me compensation for the rescue at cost n.”
What we were trying to do is to answer the question, “If option 3 in Rescue is permissible, why doesn’t this show that Pre-emptive Joyriding is permissible too?” (Bearing in mind that you agree that PJ isn’t permissible.) So we weren’t offering an argument proving that PJ is impermissible; we were offering an explanation of why accepting the permissibility of option 3 in your Rescue case doesn’t commit one to accepting the permissibility of PJ.
The more general question that you, Liz, and Joe have been pushing us to consider is: to what extent do past instances in which an agent has borne cost, perhaps supererogatorily, to protect another person from risk alter their duties not to impose risk on that person? That is a very interesting and challenging question. It obviously has implications for what a principle like PARI implies for ordinary agents like us. Providing a general answer is not something we do in the paper or can do here. Here, though, are a few initial thoughts on the kinds of considerations that seem relevant.
Let’s assume a simple boating case:
Ned is about to impose, impermissibly, a risk on Susan (by boating without extracting).
I too have a duty not to impose this risk on Susan. I have a non-trivial purpose: I want to go fishing for dinner, and such trips help feed my family. It’s impermissible to take the trip without extracting, though, because the cost to me of taking the trip only if I run the extractor is not excessive, given what is at stake for Susan.
Suppose that at t1 I pay Ned not to impose this risk—and that I had no duty to take on that cost to protect Susan from Ned’s risk imposition, so my act is supererogatory.
How does this now affect my duties regarding imposing the risk (by boating without extracting) on Susan at t2?
Two things that clearly matter a lot are the magnitude of the risk and what is at stake for me. We assumed at the outset that the risk was great enough that my imposing it was impermissible at t1. What then has changed? Not, we’ll assume, Susan’s interest in not being exposed to this risk. What may have changed is how her interest weighs against my interest in imposing this risk by boating without paying for the extraction. Suppose my situation has changed as a result of the supererogatory act such that the costs to me of refraining from imposing this risk have increased significantly: I still need to fish but can no longer afford to pay to run the extractor. This clearly has a bearing on whether my risk-imposing act is permissible. That consideration does not, of course, depend on my having protected Susan previously; had I become poor for some other reason, that too could make it permissible for me to take the trip without extracting.
What independent role is played by the fact that I supererogatorily protected Susan from Ned’s imposition at an earlier point? We find plausible your suggestion that Susan should be willing to bear some additional cost, which she would otherwise not have been required to bear, to shoulder some of the burden of my not imposing this risk on her. For example, she could be required to pay some of the cost of running the extractor. How much cost should she be willing to bear? That would seem to depend on how the other details of the case are filled in.
Suppose she does not shoulder this cost. That may or may not make it permissible for me to impose the risk anyway. If I boat and extract, then I may have suffered a wrong, since Susan ought to have shared this financial burden. If, on the other hand, I boat without extracting, I may have wronged Susan, at least if she was not required to pay the full costs of the extraction. In either case there may be wrongs; which of them is more significant (and hence whether I can justifiably impose the risk) will depend on the moral seriousness of our respective claims.
This kind of case shows that the fact that joyriding is done for trivial reasons isn’t, as you and John suppose, a distraction; it’s an important part of our argument. When a risk-imposing activity is not of great importance, it can be difficult to justify, even if we have previously protected those potentially affected from other risk impositions of that sort. Our having done so does not leave us with no risk to justify, even if it can perhaps lower the justificatory burden.
You worry that conclusions about joyriding cases aren’t going to carry over to standard offsetting cases. Here we think we may disagree. Take a person like ourselves: a relatively affluent academic considering whether to pay for the sequestration of the carbon from a flight, or instead to take the flight without paying for such offsetting. Does the fact that we have previously convinced one of our colleagues not to take the flight (we know he is unwilling to pay for offsets) mean that we no longer face a justificatory burden to those put at risk should we fly without offsetting?
Perhaps if the only alternative to flying were not flying at all, we could meet that burden (though, with the exception of trips of real importance to us, we might not). But if the alternative is flying without risk imposition, by paying for the offset, we may be required to bear that cost.
Dear John,
Sorry for the slow reply. You’re right to point out that when we say in Case 2:
“As a result of my trip and the sequestration efforts, the concentration of GHG in the atmosphere remains at L”
and then:
“we ensure that there is no increase in the concentration of GHG in the atmosphere”
it sounds as though we’re claiming that when I travel and pay for sequestering, this guarantees that no one else ever adds any GHG to the atmosphere, and the level remains permanently at L! That’s obviously false.
The point we were trying to make in Cases 1 and 2 (but not phrasing well) was about what my joint action does to the GHG level in the two types of case. In one, I raise it to L+1; in the other, if it rises to L+1, it is because someone else has raised it: I don’t raise it above L. So yes, as you say: the issue is who is responsible for the increase. You’re right that if I can claim credit for the tree-planting that I pay for as my sequestering, I should also be able to claim credit for the non-tearing-down that I pay for as my forestalling. We’re not claiming that there is an asymmetry there. But when I emit and plant, the emissions I add to the GHG level are accompanied by a matching subtraction; when I emit and forestall tearing-down, my emissions are accompanied only by the prevention of a further addition.
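In rough ledger form (holding the rest of the world fixed within each case): emitting and sequestering comes to $L + 1 - 1 = L$, with both the addition and the subtraction attributable to me; emitting and forestalling comes to $L + 1$, my addition being matched only by a prevented addition that would have been someone else’s.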
We appreciate that there would clearly be problems with a policy that paid people who had torn down trees to replant them, but gave them no incentive not to tear them down in the first place. We think that’s a compelling reason not to adopt a policy of that sort. (And if that policy were adopted, it would affect the expectation associated with paying for tree-planting offsets.) But this still leaves us thinking that the two cases have to be described differently (as matching your addition with a subtraction in one case, and matching your addition with the prevention of a further addition in the other), with the implication that there is a risk-imposition to be justified in one case but not the other.