This is the 9th of 11 virtual meetings on Derek Parfit’s book manuscript, Climbing the Mountain.
In this pivotal chapter, Parfit finally ties together several of the loose threads of the last several chapters to come very close to endorsing a kind of Kantian “supreme principle of morality,” which turns out to be contractualist in nature. He begins with an interesting discussion of the Golden Rule, which Kant dismissed as “trivial” and “unfit to be a universal law.” What Parfit does, though, is show why Kant’s objections to the Golden Rule can actually be answered. If he’s right, Kant’s contempt for the formula is unjustified. Perhaps, however, Kant’s Formula of Universal Law is simply a better principle than the Golden Rule? As it turns out, it isn’t. In terms of making us more impartial, the Golden Rule, Kant’s Consent Principle, and the Impartial Observer Formula (according to which we are to determine what it would be rational to choose from the imagined point of view of an impartial observer, rather than from our own or an affected party’s point of view) are all superior to the Formula of Universal Law.
I had several questions about this section, however. First, consider Kant’s reasons for rejecting the Golden Rule. As he puts it:
It cannot be a universal law, because it does not contain the ground of duties toward oneself, nor that of duties of love toward others (for many a man would gladly agree that others should not benefit him if only he might be excused from benefiting them); and finally it does not contain the ground of duties owed to others, for a criminal would argue on this ground against the judge who punishes him.
Parfit’s gloss on this quote is that the Golden Rule (GR) doesn’t imply that we have duties to ourselves, duties of beneficence to others, and duties of obligation to others. He then goes on to show how the GR (either as it stands or in some slightly revised form) actually resists such criticism. But Kant is claiming here that the GR fails as a universal law because it gets the grounds of these duties wrong, and so that’s why it gets the specific duties themselves wrong. So insofar as Parfit can successfully argue that the GR can actually get these duties right, that wouldn’t yet touch the more fundamental objection Kant seems to be raising, which is, presumably, that violating the principle to treat others as we’d have them treat us doesn’t violate some fundamental principle of rational consistency, whereas violating the Formula of Universal Law (FUL) would.
Second, and perhaps more tangentially, I’m not sure if Parfit’s various formulations and reformulations of the GR address one very interesting objection to it. There’s a story some of you might have heard about a Nazi soldier who killed Jews without mercy when given a chance, and he claimed it was his duty as he interpreted the Golden Rule. What he allegedly said was that Jews were evil and subhuman, and that if he were Jewish he’d want someone to kill him, so in killing Jews himself, he was treating them as he would want to be treated. (In what is likely an apocryphal O. Henry-type conclusion to the story, he discovers late in the war that he’s actually half-Jewish himself, at which point he does indeed kill himself.) Now Parfit thinks the best version of the GR is what he calls “G3”:
We ought to treat others only in ways in which we would rationally choose that we ourselves be treated, if we were going to be in these other people’s position, and we would be relevantly like them. (183)
But it seems that this formulation fails to yield the right answer in this case, viz., that the Nazi ought not to be killing Jews. Indeed, were he in his victims’ positions and relevantly like them, he would (so he avers) want to be killed. The only thing that might help out the advocate of G3, then, is the clause about what one would “rationally” choose. But beyond its being unclear what that term does or doesn’t include, as long as there’s room for rational suicide on Parfit’s account (as there is – see the earlier discussion of killing yourself to save others), it would seem difficult to rule out being killed as something even someone with false beliefs might nevertheless rationally choose.
A final point about this section: Parfit points out that, while the GR may be “theoretically inferior” to the other possible impartiality principles, it may actually be the most effective at providing us with moral motivation (and so that might explain why it’s been so popular throughout the world for so long). By forcing us to imagine our way into other people’s positions, we are indeed forced to be more impartial than perhaps we’d otherwise be. I like this point. It’s via empathy, I think, that we are typically moved to sympathy with others, and that process is much more indirect (and perhaps less successful) if we merely try thinking about what some ideal observer would choose, or, more abstractly, about what those affected by our actions would consent to. Indeed, moral education starts with precisely this question: how would you feel if someone did that to you?
In the second section, Parfit discusses two objections to the FUL. The Rarity Objection we have run across before: FUL implies that, if the sort of action I’m considering is rare enough, it may be rational to will that everyone acts in that fashion, given that most won’t, even if what I’m doing is obviously wrong. But there’s a new objection that’s somewhat related. According to the High Stakes Objection, as long as some benefit of my action would be large enough, I could rationally will a world in which everyone acts as I do, even if what I’m doing is obviously wrong. If both Grey and Blue have been bitten by a cobra as they traveled through the desert, and Grey steals an antidote that Blue has prudently brought along, Grey could rationally will that everyone acts on the maxim ‘Steal when it’s the only way to save my life’ when it applies to them, not only because it’s unlikely that anyone else would be in such a situation, but also because even if he were in such a situation where someone might steal from him, the risk of that would be more than outweighed by the certain benefit he’d get now by stealing from Blue. And even if there were some tortuous way to revise FUL to account for the case, it simply fares worse than the other three possible principles – the GR, the Consent Formula, and the Impartial Observer Formula – given that those other principles take into account the greater loss incurred by Blue (insofar as Blue is much younger than Grey, for one). It’s the impartiality forced upon us by these other three principles that makes them superior to the FUL. FUL has us simply considering things from our own perspective, whereas these others force us into considering things from the perspectives of others.
In the third section, Parfit considers another, more serious objection to FUL. It has us asking, not “what if that action were done to me?” but rather “what would happen if everyone did that?” In other words, can I rationally will that everyone does my considered action to others? But there will be some instances in which I know that, even if everyone does these things to others, no one will do them to me, i.e., my wrong act may be non-reversible. This will actually hold for many real-world cases, e.g., cases in which those with a great deal of power are considering some action that harms some powerless person(s). It also holds for cases in which the wrong action I’m considering doing is one that everyone already does. Kant considers only those maxims that aren’t already universal laws, but there are sometimes wrongdoers’ maxims that are already universally acted on, and so of course one could rationally will them to be universal laws. Think here of cases of racism or treating women as subordinates, etc. The other principles, again because they force impartiality out of us, fare much better here.
In the final section, Parfit considers a “Kantian solution” to these difficulties with FUL. There are various possible interpretations of FUL that Parfit considers (coming from Nagel, Rawls, T.C. Williams), but Parfit thinks that none works as an explication of what Kant meant, given Kant’s own words to the contrary. But if instead of thinking of these as interpretations of Kant that will get him off the hook we think of them as revisions, then Parfit thinks Scanlon’s (in the Moral Belief version) is the best, viz.:
It is wrong for us to act on some maxim unless everyone could rationally will it to be true that everyone believes that such acts are morally permitted (197).
Now plug in the switch from talk of maxims to talk of intentional actions for which Parfit argued earlier, add the assumption that when people believe an act is permitted they accept a principle permitting that act, and we’ve got:
The Formula of Universally Willable Principles: An act is wrong unless such acts are permitted by some principle whose universal acceptance everyone could rationally will (198).
This formulation now avoids all the previous objections, i.e., the New Ideal World Objection, the Flawed Maxims Objection, the Permissible Acts Objection, and the Rarity, High Stakes, and Non-Reversibility Objections. This is also a formula that remains very close to Kant’s own view, and is also clearly contractualist, so we can tighten up the formulation a bit more to get what just might be the supreme principle of morality:
The Kantian Contractualist Formula: Everyone ought to follow the principles whose universal acceptance everyone could rationally will (199).
And away we go….