So the results are in. 63% of the respondents said that they
would have most reason to refrain from making such significant sacrifices. 62%
said that they would not be morally required to make such significant
sacrifices. And 77% denied that the following fact constitutes a reason for them to make these sacrifices that is strong enough to outweigh the reasons they have to avoid doing what would be significantly detrimental to how they and their families fare: the fact that the climate disaster would have been averted had, contrary to fact, everyone (or even just nearly everyone) been willing to make the significant sacrifices required to drastically reduce their carbon footprints.
This matches my
intuitions. And the only point of these surveys was to verify that my
intuitions are not out of whack with those of the majority, because I want to
be able to appeal to them in an argument and rely on their having some
persuasive force with the majority of my readers.
For those interested,
the argument is that certain forms of rule consequentialism must reject
moderate moral rationalism: the view that there is always sufficient reason to
do what one is morally required to do. Clearly, certain forms of rule
consequentialism require us to make significant sacrifices in this situation. For
only a code that requires all (or nearly all) of us to make significant
sacrifices in this situation would be one that, if accepted and/or complied with by the vast majority of us, would result in our averting climate disaster. And
given how bad climate disaster is, it seems that the code with the highest
expected value must include such a requirement. Thus, it seems that some forms
of rule consequentialism require us to make significant sacrifices in this
situation even though each of us lacks sufficient reason to fulfill this
requirement. The source of the trouble, I suggest, is that whereas moral
theories such as rule consequentialism make what I ought to do a function of
what good consequences would result from all (or nearly all) of us acting a
certain way irrespective of whether it’s in my power to ensure that sufficient
others will act in this way, plausible theories of practical reason do not take
what I have most or decisive reason to do to be a function of what good
consequences would result from all (or nearly all) of us acting in a certain
way irrespective of whether it’s in my power to ensure that
sufficient others will act in this way. (By ‘ensure’, I don’t mean cause. If S
is going to do X regardless of what I do, then I can, in the relevant sense,
ensure that S will do X even though I do not cause S to do X.)
Hi Doug,
A couple of points.
1. About the climate change intuitions. Firstly, this is a good example of why we should rely not on our intuitions but rather on philosophical arguments. I used to share your intuitions, but reading John Broome’s new book Climate Matters talked me out of them. I just think that the majority is wrong here.
Let’s start with rough estimates of the facts. An average person born in the 1950s will emit around 800 tonnes of carbon dioxide in their lifetime. As a result, the average person contributes about half a billionth of a degree to global warming during their lifetime. On a modest estimate, when we aggregate its small consequences, this tiny increase in temperature will by itself wipe out half a year of healthy human life in total. This is a serious harm you cause to other people irrespective of what others are doing (contributing or not). In so far as (you think that) you ought not to harm others, you should not be doing this.
Fortunately, avoiding harming others is very easy and cheap as long as others are not doing the same. Offsetting your carbon emissions costs only about $10 per tonne, so for $8000 you could be carbon neutral for your lifetime and thereby avoid harming anyone. Of course, if others started doing this, the price would eventually go up, but as long as others are not acting accordingly, no significant sacrifices are required. This argument has convinced me that it is wrong not to make such sacrifices and that we have overriding reasons to make them.
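To spell out the arithmetic behind those figures (they are, of course, only rough estimates, and the 80-year lifespan below is just an illustrative assumption of mine):
\[
800 \text{ tonnes of CO}_2 \times \$10/\text{tonne} = \$8000 \text{ over a lifetime} \approx \$100 \text{ per year over an 80-year life.}
\]
On these numbers, that is hardly a significant sacrifice.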
2. Of course, this doesn’t affect the argument against RC. You could just find another example. I reckon global poverty would work just as well. In your argument, though, it is worthwhile to emphasise that this is a problem only for certain forms of RC. Consider Hooker’s: according to him, we need to take into account the internalisation costs. In the chapter on poverty, he argues that inculcating extremely altruistic motivations would be very costly (and also that many important human goods would need to be sacrificed as a result). If this were right, then RC would not require us to make significant sacrifices (in fact, the helping principle that Hooker ends up with is fairly moderate). In this case, RC could still attempt to protect rationalism (especially if you take into account the justification-based reason to do the rule-consequentially right thing).
Hi Jussi,
(1) Just to clarify: I wasn’t talking about the actual world and what our obligations are in it. The prompt for the survey asks respondents to consider a hypothetical example. In my hypothetical, your individual contribution to pollution makes no more difference as to whether someone is suffering more or less than my removing one hair from your head would make a difference as to whether you are more or less balding. I take it that Broome didn’t argue against my intuitions about my hypothetical case, but rather against some people’s intuitions about our moral obligations in the actual world.
(2) Hooker’s view would, I think, have the implication that each of us is morally required to make significant sacrifices in my hypothetical situation — at least, it would if I were to stipulate that the ensuing climate disaster would be sufficiently bad to outweigh the internalization costs. Let me hereby stipulate that such is the case.
Hi Doug,
I just don’t understand how the physics of the hypothetical world could work. It would require that the 800 tonnes of carbon dioxide a person emits over their life did not make a difference to the temperature of the world (that the sunrays that hit these molecules did not warm up the atmosphere) and so that the greenhouse effect was not occurring. This would be a strange world. Or, it would require that there is no causal link between increased temperature and what happens to human beings. This too is odd. I therefore cannot see either of these being possible. So it is hard for me to conceive of the hypothetical situation as it is literally described. This is why I do think that people’s intuitions really are about a scenario closer to the actual world.
Another way Hooker avoids that consequence in these kinds of non-compliance situations is the avoid-disasters clause. The avoid-disasters principle will in all likelihood be a part of the ideal code. In situations in which others are not doing their part and in which my doing my part would have bad consequences, this principle requires me not to comply with the other principles. If doing your share would have significantly bad consequences for the people around you, then the avoid-disasters principle could be argued to require you not to do it.
Hi Jussi,
(1) Why must my hypothetical example (and its physics) involve each person’s emitting 800 tonnes of carbon dioxide?
In any case, the heart of your objection lies with the following dilemma that you pose. You write:
I just don’t understand how the physics of the hypothetical world could work. It would require that the…carbon dioxide a person emits over their life did not make a difference to the temperature of the world…. This would be a strange world. Or, it would require that there is no causal link between increased temperature and what happens to human beings [that is, to whether they suffer more or less]. This too is odd.
This seems a bit like arguing in the following way with regard to the case where I pluck one hair from your head: “I just don’t understand how the physics of the hypothetical world could work. It would require that your plucking a hair from my head does not make a difference to how many hairs I have on my head. Or, it would require that there be no causal link between a decrease in the number of hairs on my head and whether I am more or less balding.”
My plucking several hundred hairs from your head may make a difference as to whether you are more or less balding even though my plucking one hair from your head can’t. And I think that we can say the same thing about my hypothetical. I’m imagining that the amount of pollution that any one person emits isn’t enough to make a difference as to whether any person suffers more or less even though the amount of pollution stemming from several million people does. Keep in mind that I’m assuming, for the sake of argument, that being more or less balding and suffering more or less are both vague. Perhaps you deny this, but my interlocutor in the paper that I’m writing doesn’t.
(2) I think that Hooker’s RC will imply that the ideal code must include a rule requiring each of us to make significant sacrifices in my hypothetical.
Here’s my argument:
(1) On Hooker’s RC, the ideal code is the one with the greatest expected value (simplifying a bit).
(2) We can expect that any code that doesn’t require each of us to make significant sacrifices in my hypothetical would be one that, if nearly universally internalized, would result in climate disaster.
(3) We can expect that any code that does require each of us to make significant sacrifices in my hypothetical would be one that, if nearly universally internalized, would result in climate disaster being averted.
(4) A world with the internalization costs associated with a rule requiring our making significant sacrifices in my hypothetical and no climate disaster is, other things being equal, much better than a world with climate disaster.
(5) Therefore, Hooker’s RC implies that we are each required to make significant sacrifices in my hypothetical case.
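In compressed form, with notation that is mine and purely illustrative ($EV(\cdot)$ for expected value, $C^{+}$ for any code containing the sacrifice-requiring rule, $C^{-}$ for any code lacking it), the inference runs roughly:
\[
\text{(2)–(4)} \;\Rightarrow\; EV(C^{+}) > EV(C^{-}); \qquad \text{(1)} \;\Rightarrow\; \text{the ideal code} = \arg\max_{C} EV(C);
\]
\[
\therefore\ \text{the ideal code is a } C^{+}, \text{ and, since on Hooker’s RC we are each required to comply with the ideal code, (5) follows.}
\]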
What fault do you find with this argument?
Now, I admit that the ideal code would include a rule that permits one to fail to do one’s own share when one knows that others are not going to do their shares, provided that our world is one where such a rule would have good effects if nearly universally internalized. But in my hypothetical, our world is one where such a rule would have disastrous effects if nearly universally internalized. If such a rule were nearly universally internalized in the possible world picked out in my hypothetical, the result would be climate disaster.
Hello, both,
Jussi is right to note that ‘the tiny increase in temperature itself will wipe out half a year from healthy human life in total when we aggregate small consequences’. But for Jussi’s argument to go through, he would have to show that we *should* aggregate small consequences when assessing the impact of one agent’s actions. The case for doing so is not obvious. If we do not aggregate, we can question whether raising the temperature half a billionth of a degree, or turning Parfit’s torture machine up one notch, does any harm at all. In this case, by stipulation, it doesn’t. True, Parfit claims that harms that are too small to be perceptible can still be harms. Jussi might contend that the individual emissions *must* do harm, and thus challenge the stipulation. But even if that’s true, an individual’s emissions wouldn’t equate to ‘wiping out half a year from a healthy human life’.
Now, I believe that there is reason to curb your emissions when enough other people are doing the same thing to make a perceptible difference. And that, in fact, describes the actual world we live in. But that is not true, by stipulation, in the example. Under such circumstances, to insist that the agent should do what would be optimal if enough other people were doing it is pointless, or worse. To take various critics’ examples, it will tell you to mount a one-woman revolt against Hitler, keep fighting when all your fellow soldiers have deserted, or keep trying to push a car that is clearly too heavy for you to move on your own.
Doug, the rule you are proposing in your last post, I assume, would be something like ‘Curb your emissions whenever enough other people are curbing their emissions to make a difference—otherwise, don’t’. It is hard for me to see how this rule’s near-universal internalization *could* lead to disaster. If nearly everyone internalized this rule, hardly anyone would defect except when enough others were already defecting to render co-operation pointless. And it doesn’t seem that that condition could be met if nearly all had internalized the rule.
Now, it’s true that in the world you describe, everybody else *is* defecting. But in that case, it seems logically impossible that everybody should have internalized the foregoing rule. It’s true that getting everyone to internalize a rule without an escape clause would solve this problem. But getting everyone to internalize the foregoing rule (i.e., *with* an escape clause) would solve the problem just as well. Or am I missing something?
Hi Matthew,
Yes, that’s more or less the rule that I have in mind (although I think that the rule would probably need to be more general: play your part in some cooperative enterprise if most others are playing their part, but not if it is clear that not enough are going to play their part). In any case, if everyone (or nearly everyone) were to internalize such a rule, then people would be disposed not to make, and thus would not make, significant sacrifices in my hypothetical. And, in my hypothetical, climate disaster will ensue unless people make significant sacrifices.
So do you still deny that this rule leads to disaster in my hypothetical?
Hi, Doug,
Your example involves two premises that in my original post I took to be logically incompatible:
(1) Nearly everyone internalizes the rule ‘play your part in some cooperative enterprise if most others are playing their part but not if it is clear that not enough are going to play their part’.
(2) Not enough people are playing their part.
I now see that I was wrong. If the agents are not presently co-operating, then even if everyone internalizes the rule in (1), it won’t rescue them from this disastrous equilibrium.
However, what if we tweak the rule? Say: ‘Be prepared to play your part in some cooperative enterprise if enough others are prepared to play their part, but not if it is clear that not enough others are prepared to play their part’. If everybody internalized *that* rule, then I don’t see how we could run into trouble, provided that all the agents were rational and had perfectly reliable information about each other’s intentions.
But in fact that’s an enormous, and unrealistic, assumption. On reviewing your original scenario, I see that it doesn’t make any stipulation on the question, though I think we’ve been implicitly assuming perfect information. In the real world, we’d have to worry about agents misperceiving each other’s actions and intentions. In particular, even if everybody internalized the revised rule, we could get a breakdown of co-operation if enough people falsely believed that enough other people had failed to internalize the rule.
If misperception is possible, then I think it changes the answer as to what we should do in your example. Suppose each agent has *good reason to expect* that too few others will co-operate, but isn’t—as would surely actually be true—100% sure. Given the colossal costs of everyone misperceiving each other’s intentions and failing to co-operate when co-operation would have actually succeeded, I think a good case could then be made that you *would* be morally obliged to make the sacrifice. It would be a situation resembling Pascal’s Wager: even if you think the odds are very much against enough others co-operating, you ought to bet on their doing so, because of the near-infinite cost of getting it wrong.
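A rough decision-theoretic gloss on that last thought (the symbols are mine and purely illustrative: $c$ for the cost to you of making the sacrifice, $D$ for the disvalue of an avoidable climate disaster, and $p$ for the probability that your sacrifice, together with enough others’, would in fact avert it):
\[
p \cdot D > c \quad \text{even for very small } p, \text{ given how enormous } D \text{ is},
\]
so, on this way of running the numbers, betting on co-operation is the rational wager, Pascal-style.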
Hi Matthew,
You write:
However, what if we tweak the rule? Say: ‘Be prepared to play your part in some cooperative enterprise if enough others are prepared to play their part, but not if it is clear that not enough others are prepared to play their part’. If everybody internalized *that* rule, then I don’t see how we could run into trouble, provided that all the agents were rational and had perfectly reliable information about each other’s intentions.
To be rational, does one have to be willing to play one’s part? If so, then my hypothetical is one where everyone is irrational. Thus, your rule would say that, given that everyone else is irrational, no one is required to make significant sacrifices. And if no one is required to make significant sacrifices on your rule, then your rule is one that, if universally accepted or followed, would have disastrous consequences: climate disaster. So we can conclude that the ideal code in my hypothetical world does not include your rule.
And, by the way, there is a 100% objective chance that climate disaster will ensue in my hypothetical. Just assume that my hypothetical involves a deterministic world in which everyone is causally determined to refrain from making significant sacrifices.
Hi, Doug,
We need to get clear about what internalizing a code means. Just looking at *Ideal Code, Real World* quickly, I don’t find a definition, though I may have missed it. I’d have thought that it means at least two things: (a) forming an intention; (b) acting on that intention when given the opportunity. If you don’t do both of these things, then by definition you haven’t internalized the code.
If that’s the way we understand internalization, then most of the people in your hypothetical *can’t* have internalized my code. If nearly everyone had formed the conditional intention to co-operate if enough others formed the conditional intention to co-operate, and if each of them then acted on this intention, then they wouldn’t act the way that you’ve stipulated they’re going to act.
However, we might understand internalizing a rule merely as acquiring the intention to follow it. One might have an intention, be capable of carrying it out, and nevertheless fail to do so. And in that case, my code, despite being internalized by most of the population, wouldn’t preclude disaster.
But neither would the code without the escape clause that you say Hooker’s RC favours. You claim that ‘We can expect that any code that did require each of us to make significant sacrifices in my hypothetical would be one that if nearly universally internalized would result in climate disaster being averted’. But if people can internalize a code but nevertheless not act on it, then that’s not necessarily true.
Hi Matthew,
In his article entitled “Rule Consequentialism,” Hooker says that one counts as having internalized a code of rules if and only if one “has a disposition to comply with them, dispositions to feel guilt when one breaks them and to resent others’ breaking them, and a belief that the rules and these dispositions are justified.” And I take it that those who are disposed to comply with a rule will generally do so.