This marks the eighth of eleven e-meetings of our virtual reading group on Derek Parfit’s Climbing the Mountain—see here for further details. Next week, we will discuss Chapter 10 of the June 7th version of the manuscript, which can be found here.
In Chapter 9, Parfit brings some more tough questions to bear on (what he is calling) Kant’s Law of Nature Formula (LNF), to see whether it gives the intuitively correct answers in various cases. Parfit holds that when we ask, with LNF, whether we could rationally will that some maxim be acted on by everyone, the relevant comparison is with that maxim’s being acted on by no one. And, according to Parfit, the best version of LNF holds that it is irrational to will that everyone act on some maxim if there is a better alternative maxim for everyone to act on.
The kinds of cases considered in this chapter are tricky insofar as the permissibility of an act seems to depend on what other agents are doing.
The first set of potentially troubling maxims is the set that arises in “each-we dilemma” cases, where “if each rather than none of us does what would be, in one way, better, we would be doing what would be, in this way, worse” (163). That is, if each does what is better for herself, we will together do what is worse for each of us. In a Samaritan dilemma, for example, in suitably large communities it is better for each not to help others in need, but if we all act this way, the result is worse for each of us. Or in Contributor’s dilemmas, it is better for each not to contribute to public goods, but if we all fail to contribute, that too is worse for each of us.
LNF works when such cases are solved through psychological means, like the installation of moral beliefs that prioritize the well-being of others, for LNF can say to free riders, ‘What if everyone did that?’ LNF can also prompt us to revise some of our moral beliefs, and it does so “forceful[ly],” by showing us that it is we who are failing. Finally, LNF can serve as a “moral microscope” (167) by getting us to see wrong acts that are not easily apparent as wrong, such as our use of fossil fuels. Here, if I benefit myself, I impose a burden on others, but each of them shoulders that burden only to a very small extent, or imperceptibly. Yet when we all act this way, the effects are catastrophic.
A worry in such each-we dilemmas is that LNF requires us to benefit others even when others are not cooperating, which costs us the benefit to self without securing the greater benefit for everyone. So we should add an “escape clause”: we may act in “partly similar ways” when others are not doing what is ideal.
So, on to the coordination maxims that generate the “Permissible Acts Objection” (§29). Such maxims include “Have no children, so as to devote my life to philosophy,” and “Consume food without producing any.” It would not be rational to will that everyone act on such maxims, so LNF counts acting on them as wrong. But such acts are obviously permissible. (To me it’s unclear that willing a world where the species dies out is irrational, so I’ll focus on the food-consumption maxim.)
One solution, from Pogge, is to retreat to the Moral Belief Formula: “we could rationally will a world where everyone believes such acts to be permitted” (169). On this principle, just because (to focus on the food-consumption maxim) we would all believe that it’s permissible to eat but not produce any food, it doesn’t follow that everyone will act in that manner. Parfit’s problem with this solution is not clear to me. His worry seems to be that “We always have some reason to want ourselves and others not to have false moral beliefs.” But it’s unclear to me why there are any false beliefs here (after all, it is true that it’s permissible to eat but not produce food).
A solution that Parfit likes better, also from Pogge, is that maxims are sometimes conditional, e.g., we would not act on them if they would have certain bad effects. So the maxim to eat but not produce food holds only so long as others are producing food. It is possible to rationally will that everyone act on such a conditional maxim, which gives the right result.
But sometimes people have unconditional maxims, e.g., to have no children, or to become an Icelandic dentist no matter what. Since we could not rationally will a world where everyone acts on these, such maxims would be mistakenly forbidden by LNF. Parfit thinks this is a real (but not new) objection, which again can be solved by replacing maxims-as-policies with a focus on intentional actions, as in LN3: “We act wrongly unless what we are intentionally doing is something that we could rationally will everyone to do” (171). Parfit thinks that this gets past the objection to LNF, because in acting on unconditional maxims, we would intentionally be (say) producing no food, “when we knew that there were not too many people who were acting in these ways.” And that would be rationally possible to universalize.
As I mentioned in last week’s discussion of Chapter 8, I think this is much ado about nothing: just as what Parfit calls intentional actions incorporate into the intention the knowledge that not too many people are acting in those ways, so maxims can incorporate that knowledge as one of the circumstances.
A separate objection is the “Ideal World Objection” (§30). Apparently “it is sometimes claimed” that LNF requires us to act on the maxim “never use violence,” since we couldn’t rationally will that everyone act on some conflicting maxim. This is the wrong result, because it was not wrong to fight Hitler with violence. Or consider the Mistake scenario, where you and I should both do A, as it is the only way to save everyone, but you’re confused and do B. Now if I do B as well, we’ll save some people, but if I still do A, we’ll save no one. The problem is that since LNF says that the only thing we can rationally will both of us to do is A, A-ing is obligatory, even in the sub-optimal case where you do B.
The Ideal World Objection, then, is that LNF requires us to act in certain ways even when doing so would have unnecessarily bad results because others are not acting in the same way. Parfit therefore suggests M2: “Do whatever I could rationally will everyone to do, unless some other people don’t act in this way, in which case do whatever, given [their actions], I could rationally will that people in my position do” (173-4). A better violence maxim is then (V2) “Never use violence, unless others do, in which case use necessary defensive violence.” Because we could rationally will that everyone act on this, LNF does not require us to be pacifists.
But that is merely a permission. Consider another maxim, (V3): “Never use violence, unless others have, in which case kill as many people as I can.” If everyone acted on this maxim, no one would ever use violence, so, Parfit says, LNF would also permit me to act on it. I don’t follow this argument: if we compare universalizing what I’m labelling (V2) and (V3), (V2) is clearly the better alternative maxim, so by Parfit’s compare-the-maxims account of rational willing, I cannot rationally will that everyone act on (V3) when everyone’s acting on (V2) is clearly better.
In any event, Parfit arrives in this way at the “New Ideal World Objection: Once a few people have failed to act on any good maxim, Kant’s formula requires nothing from anyone else.” To deal with this, Parfit suggests another revision to LNF: (LN4) “It is wrong to act on some maxim unless we could rationally will it to be true that this maxim be acted on by everyone, or by any other number of people.” This (correctly) prohibits (V3), since, as soon as anyone used aggressive violence, those acting on (V3) would kill as many people as they could, “and that is not something that we could rationally will.”
Parfit’s conclusion is that for LNF to avoid the Ideal World Objection, we must appeal to conditional maxims and to whether any number of people act on such maxims. This goes beyond the “What if everybody did that?” line of thinking. The Moral Belief Formula should avoid the objection too, because “we can plausibly assume that everyone ought to have the same moral beliefs.”