Welcome to our discussion of Christian Barry and Garrett Cullity’s “Offsetting and Risk Imposition,” published in the most recent issue of Ethics. You can find the paper here. Elizabeth Cripps’s critical précis is immediately below. Please join the discussion!
Suppose you fly to the Bahamas on holiday. You know that your hefty carbon footprint raises the risk that people will die from extreme weather, disease, or starvation, because of climate change.* So you buy carbon offsets: enough to reduce greenhouse gas emissions by the same amount as your flights increase them. Are you off the hook, morally speaking?
*Or, for the sake of argument, let’s accept the threshold-based arguments made by John Broome and others, and assume you know it.
This raises important philosophical questions, including the following. It is often wrong to impose a risk of harm on another person or population. Can we make our actions OK (what Barry and Cullity call ‘risk-imposition justifiable’) by doing something else that reduces the risk to that same person or population by at least the same amount?
Here’s Barry and Cullity’s example. I live near a river. Boating is popular but boats put toxins into the river, so people living downstream are dying of cancer. It’s a beautiful day and I fancy a boat trip. If I go, the extra toxins may well cause no deaths. But they might cause one, or two… or ten. The way the probabilities add up, the expected harm of my trip equals one death.
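To see how the probabilities “add up” to one expected death, here is a toy calculation. The distribution is my own illustration; the paper gives no specific figures:

```python
# Toy probability distribution over possible death counts from one boat trip.
# These numbers are illustrative, not from Barry and Cullity's paper.
probabilities = {0: 0.55, 1: 0.25, 2: 0.10, 5: 0.05, 6: 0.05}

# The probabilities must sum to 1 (up to floating-point rounding).
assert abs(sum(probabilities.values()) - 1.0) < 1e-9

# Expected harm: the probability-weighted average of the death counts.
expected_harm = sum(deaths * p for deaths, p in probabilities.items())
print(expected_harm)  # roughly 1.0 expected deaths
```

Most of the time the trip kills nobody; the probability-weighted average still comes to one death, and that expected figure is what gets compared against the baselines.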
I want to go, so I install a toxin-extractor to remove at least the same amount of toxins from the water as my boat puts in. I set off, with my sunglasses and picnic, and I tell myself I’m morally in the clear because I’m not making the risk any worse than it would have been if I’d done nothing at all. Am I right?
On the face of it, yes. I pose no excessive risk compared to what Barry and Cullity call the Standard Baseline (risk levels if I had neither boated nor installed the extractor). But things turn out to be a lot more complicated than that.
I might have been paid to install the extractor by the people downstream or some philanthropic third party. Then, I’d be fulfilling an independent obligation by installing it. Using that to justify my boat trip as well is a kind of double counting. So we also need a Moralized Baseline. This sets aside any risk reduction independently required of me, before calculating the overall impact of my action on risk levels.
Is that enough? Not according to Barry and Cullity. What if I spot my neighbour about to launch his boat? Rather than buy an extractor, I pay him not to go, and take out my boat instead. The risk is no more than if I’d done nothing, and I wasn’t fulfilling any independent obligation. But, say Barry and Cullity, it is no longer my neighbour who has raised the expectation of harm (as it otherwise would have been): the increased risk is now attributable to me. I’m no more in the clear than if I’d paid a local teenager not to go joyriding and gone joyriding myself instead. Even if I’d paid off someone who would have been justified in imposing the risk themselves, their justification wouldn’t transfer to me.
Accordingly, Barry and Cullity add an Attributional Baseline. This sets aside any risk reduction I achieve by preventing others from imposing risks, before calculating whether I’ve reduced risk overall.
But even this isn’t enough. What if, Barry and Cullity ask, I do both of those things? I pay my neighbour not to boat and I’m paid by the downstream community to install an extractor. Then I take my boat out. The risk is lower than it would be if I hadn’t stopped my neighbour from boating. It’s lower than it would have been if I hadn’t fulfilled my independent obligation. But I still can’t justify that expected cancer death.
Thus, we have a fourth and final Combined Baseline. If I both prevent someone else from imposing a risk and fulfil an independent obligation, both risk reductions are set aside before calculating the net risk-consequences of my actions. On Barry and Cullity’s Principle of Aggregate Risk Imposition (PARI), my actions must be measured against all four baselines.
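The four baselines can be read as increasingly strict counterfactual tests. Here is a minimal schematic sketch in my own encoding; the function name, parameters, and figures are illustrative assumptions, not the paper’s formalism:

```python
# Schematic encoding of the four baselines (my own illustrative sketch,
# not Barry and Cullity's formalism). All figures are in expected deaths.

def pari_justified(risk_imposed,
                   own_reduction,           # e.g. an extractor I install unprompted
                   obligated_reduction,     # reduction I independently owed anyway
                   forestalled_reduction):  # reduction from paying others not to act
    """Return True only if the imposed risk clears all four baselines."""
    total = own_reduction + obligated_reduction + forestalled_reduction

    standard = risk_imposed <= total
    moralized = risk_imposed <= total - obligated_reduction
    attributional = risk_imposed <= total - forestalled_reduction
    combined = risk_imposed <= total - obligated_reduction - forestalled_reduction

    return standard and moralized and attributional and combined

# Boating (1 expected death) covered only by a paid-for extractor plus
# paying my neighbour not to boat: fails the Combined Baseline.
print(pari_justified(1.0, own_reduction=0.0,
                     obligated_reduction=1.0, forestalled_reduction=1.0))

# An unobligated extractor of my own that removes as much as I add: passes.
print(pari_justified(1.0, own_reduction=1.0,
                     obligated_reduction=0.0, forestalled_reduction=0.0))
```

In this encoding the Combined Baseline is the strictest test: once both the owed and the forestalled reductions are set aside, nothing is left to cover the trip’s expected death, which is why paying the neighbour and being paid to install the extractor cannot together justify the trip.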
Now return to your holiday in the sun. Do the offsets justify the climate risk of the flight? Sometimes. But not all offsets are equal. On Barry and Cullity’s reasoning, sequestering offsets, which remove greenhouse gases from the atmosphere, can do so. Forestalling offsets, which prevent others from emitting them, fail on the attribution front. Planting trees, yes; buying solar panels for families in poverty, no.
This is an incredibly rich paper, so detailed and careful that it is hard to find flaws. Accordingly, my thoughts are exploratory (and perhaps rather underhand, since they focus as much on the ultimate utility of PARI as on the principle itself). They revolve around one general observation: a lot of work is done by the notion of attributability. Perhaps too much.
Point one. I might prevent more risk than I impose. Suppose my neighbour’s boat is inefficient, and mine is super-efficient, so there’s an expected 0.5 deaths associated with my trip, but an expected four deaths associated with his. On Barry and Cullity’s argument, even if I pay him not to go, I am not risk-imposition justified in taking my boat out. Suppose further (perhaps implausibly, in the climate change case) that I have no separate moral duty to persuade or help him not to take the trip. We could end up in the odd situation where it would be wrong for me to make the situation slightly better (by paying and boating), yet I would have no moral duty to make it much better instead (by paying and not boating). I end up refraining from something that would leave the downstream population less at risk, just so that I am not the one imposing any risk.
It is strongly intuitive to assign heavy weight to attributability in cases of direct individual harm. If I feel like shooting someone, find a would-be murderer and pay him to let me kill his victim instead, I’m clearly wrong. It’s not just about whether the victim is worse off than he would have been. I am a moral agent: it matters that I shot him. However, there is a difference between directly causing harm and running the risk of causing harm (perhaps especially where no individual harm could be attributed to you, but the risk is of increasing overall harm across a population).
We see this in the demandingness conditions assigned by common sense morality: very stringent when it comes to not killing the person in front of you; less so in cases of very small risks, even of death. I can’t permissibly crash my car into you even to save my own or someone else’s life, but I can drive around, for mere convenience, knowing there is a small risk that I will kill dozens of people.
As we move away from individual harm to the imposition of population risk, the balance between overall benefit and individual responsibility considerations could shift. It might matter more that something not-as-bad happens (or something bad is less likely to happen), and less that I not be associated with bringing that not-so-bad (but still quite bad) thing about.
(An interesting side question is whether attributability considerations can be subsumed under the Moralized Baseline. If I have an independent duty to pay my neighbour not to boat – he is very poor, the downstream population is very vulnerable, and I am rich – I can’t use that payment to justify imposing a lower risk myself. But if there is no such independent obligation, it’s less clear that I can’t.)
Point two. We have different motives for risk imposition. There is considerable middle ground between the carbon emissions we need to survive, or are committed to by social infrastructure, and what we do for fun.
Suppose my boat trip funds specialist reading help for my child, retraining for my husband’s dream job, or a donation to my favourite climate justice charity. The toxin extractor would cost as much as the trip would make, but a much smaller sum would convince my neighbour not to take his boat out. Am I wrong to pay him and go instead? More specifically, could I be justified in going even if my reasons would not have been enough to justify the boat ride had I not ‘offset’ the risk in this way? With less moral weight attached to attributability, I could be.
Of course, this needn’t undermine PARI. Offsetting is a moral minefield. There are many other individual duties in the climate case, and moral reasons to cut our carbon footprints independent of individual risk imposition. Barry and Cullity know all this. Whatever PARI decrees, further considerations may justify my actions all things considered (or unjustify them). Barry and Cullity know this too. But they think there’s no such justification for forestalling rather than sequestering, even if it is more efficient. This is, perhaps, called into question by what I have just said.