This marks the eighth of eleven e-meetings of our virtual reading group on Derek Parfit’s Climbing the Mountain—see here for further details. Next week, we will discuss Chapter 10 of the June 7th version of the manuscript, which can be found here.

In Chapter 9, Parfit brings some more tough questions to bear on (what he is calling) Kant’s Law of Nature Formula (LNF), to see if it gets the intuitively correct answers in various cases. Parfit holds that when we’re asking, with LNF, whether we could rationally will that some maxim be acted on by everyone, the relevant comparison is with that maxim’s being acted on by no one. And, according to Parfit, on the best version of LNF it is irrational to will that everyone act on some maxim if there is a better alternative maxim for everyone to act on.

The kinds of cases considered in this chapter are tricky insofar as the permissibility of an act seems to depend on what other agents are doing. 

The first set of potentially troubling maxims are ones that arise in “each-we dilemma” cases, where “if each rather than none of us does what would be, in one way, better, we would be doing what would be, in this way, worse” (163). That is, if each does what is better for herself, as a group we will do what is worse for each of us. In a Samaritan dilemma, for example, in suitably large communities it is better for each to not help others in need, but if we all do this, then this will be worse for us. Or in Contributor’s dilemmas, it would be better for each not to contribute to public goods, but if we all do this, it will be worse for each of us.
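
To make the structure of such dilemmas concrete, here is a minimal illustrative sketch with toy numbers of my own (not Parfit’s): each of five people can contribute at a personal cost of 2 to produce a benefit of 1 for every member of the group, so not contributing is always individually better, yet if no one contributes each ends up worse off than if everyone had.

```python
# A toy Contributor's dilemma (illustrative numbers of my own, not Parfit's):
# contributing costs the contributor 2 units and yields 1 unit of benefit
# to each of the 5 group members (contributor included).

GROUP_SIZE = 5
COST = 2
BENEFIT_PER_MEMBER = 1

def payoff(i_contribute: bool, other_contributors: int) -> int:
    """Net outcome for one agent, given her choice and how many others contribute."""
    contributions = other_contributors + (1 if i_contribute else 0)
    return contributions * BENEFIT_PER_MEMBER - (COST if i_contribute else 0)

# Whatever the others do, each agent does better (by 1 unit) by not contributing...
for others in range(GROUP_SIZE):
    assert payoff(False, others) > payoff(True, others)

# ...yet universal non-contribution (0 each) is worse for everyone than
# universal contribution (5 - 2 = 3 each).
print(payoff(True, GROUP_SIZE - 1), payoff(False, 0))  # -> 3 0
```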

LNF works when such cases are solved through psychological means like the installation of moral beliefs that prioritize the well-being of others, for LNF can say to free riders, ‘What if everyone did that?’ LNF can also prompt us to revise some of our moral beliefs, and it does so “forceful[ly],” by showing us that it is we who are failing. Finally, LNF can be a “moral microscope” (167) by getting us to see wrong acts that are not easily apparent as wrong, such as our use of fossil fuels. Here, if I benefit myself, I impose a burden on others, but that burden is shouldered by each only to a very small extent or imperceptibly. But when we all do this, we get catastrophic effects.

A worry in such each-we dilemmas is that LNF requires us to benefit others even when others are not cooperating, which costs us the benefit to self without securing the greater benefit for everyone. So we should add an “escape clause,” that you can act in “partly similar ways” when others are not doing what is ideal.

So, onto the coordination maxims that generate the “Permissible Acts Objection” (§29). Such maxims include “Have no children, so as to devote my life to philosophy,” or “Consume food without producing any.” It would not be rational to will everyone to act on such maxims, so they are wrong on LNF. But they are obviously permissible. (To me it’s unclear that willing the species to die out would be irrational, so I’ll focus on the food-consumption maxim.)

One solution from Pogge is to retreat to the Moral Belief Formula: “we could rationally will a world where everyone believes such acts to be permitted” (169). On this principle, just because (to focus on the food consumption maxim) we would all believe that it’s permissible to eat but not produce any food, this doesn’t mean that everyone will act in that manner. Parfit’s problem with this solution is not clear to me. His worry seems to be that “We always have some reason to want ourselves and others not to have false moral beliefs.” But it’s unclear to me why there are any false beliefs here (after all, it is true that it’s permissible to eat but not produce food).

A solution that Parfit likes better, also from Pogge, is that maxims are sometimes conditional, e.g., we would not act on them if they would have certain bad effects. So when the maxim is to eat but not produce food, that holds only so long as others are producing food. It is possible to rationally will that everyone acts on such a conditional maxim—getting the right result.


But sometimes people have unconditional maxims, e.g., to not have children, or to become an Icelandic dentist no matter what. Since we can’t will a world where everyone does this, such maxims would be mistakenly forbidden by LNF. Parfit thinks this is a real (but not new) objection, which again can be solved by replacing maxims-as-policies with a focus on intentional actions, as in LN3: “We act wrongly unless what we are intentionally doing is something that we could rationally will everyone to do” (171). Parfit thinks that this gets by the objection against LNF, because in acting on unconditional maxims, we would intentionally be (say) producing no food, “when we knew that there were not too many people who were acting in these ways.” It would be rationally possible to universalize this.

As I mentioned in last week’s discussion of Chapter 8, I think this is much ado about nothing: just as what Parfit is calling intentional actions incorporate into the intention the knowledge that not too many people are acting in those ways, so can maxims incorporate that as one of the circumstances.


A separate objection is the “Ideal World Objection” (§30). Apparently “it is sometimes claimed” that LNF requires us to act on the maxim “never use violence,” since we couldn’t rationally will everyone to act on some other conflicting maxim. This is the incorrect result, because it was not wrong to fight Hitler with violence. Or consider the Mistake scenario, where you and I should both do A as it is the only way to save everyone, but you’re confused and do B. Now if I do B as well, we’ll save some people, but if I still do A, we’ll save no one. The problem is that, since the only thing I could rationally will both of us to do is A, LNF makes A-ing obligatory even in the sub-optimal case where you do B.
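
A minimal sketch of the Mistake scenario’s structure (the outcome labels are mine, just tracking the description above): doing A is best only if the other person also does A, so a formula keyed to what we could rationally will both of us to do can still require A even after you have mistakenly done B.

```python
# Outcomes in the Mistake scenario as described above (labels mine):
# if we both do A, everyone is saved; if we mismatch, no one is saved;
# if we both do B, some are saved.

def outcome(my_act: str, your_act: str) -> str:
    if my_act == "A" and your_act == "A":
        return "everyone saved"
    if my_act == "B" and your_act == "B":
        return "some saved"
    return "no one saved"

# What we could rationally will both of us to do is A...
print(outcome("A", "A"))  # everyone saved
# ...but once you have mistakenly done B, my doing A is now the worst option,
# yet a formula keyed to the ideal both-do-A case still requires it.
print(outcome("A", "B"))  # no one saved
print(outcome("B", "B"))  # some saved
```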


Thus the Ideal World Objection is that LNF requires us to act in certain ways even when this would have unnecessarily bad results because others are not acting in the same way. Thus Parfit suggests M2: “Do whatever I could rationally will everyone to do, unless some other people don’t act in this way, in which case do whatever, given [their actions], I could rationally will that people in my position do” (173-4). A better violence maxim is thus (V2) “Never use violence, unless others do, in which case use necessary defensive violence.” Because we could rationally will that everyone act on this, LNF does not mean we have to be pacifists.


But that is merely a permission. Another maxim is (V3) “Never use violence, unless others have, in which case kill as many people as I can.” If everyone acted on this maxim, no one would ever use violence, so I would also be permitted by LNF, Parfit says, to act on this maxim. I don’t follow this argument, since it seems that if we’re making a rational comparison between universalizing either of what I’m labelling (V2) and (V3), (V2) is clearly the better alternative maxim, so by Parfit’s compare-the-maxims approach to rationality, I cannot will that everyone act on (V3) if everyone acting on (V2) is clearly better.


In any event, Parfit arrives in this way at the “New Ideal World Objection: Once a few people have failed to act on any good maxim, Kant’s formula requires nothing from anyone else.” To deal with this, Parfit suggests another revision to LNF: (LN4) “It is wrong to act on some maxim unless we could rationally will it to be true that this maxim be acted on by everyone, or by any other number of people.” This (correctly) prohibits (V3), since everyone would kill as many people as they can, as soon as anyone uses aggressive violence, “and that is not something that we could rationally will.”


Parfit’s conclusions are that for LNF to avoid the Ideal World Objection, we must appeal to conditional maxims and to whether any number of people act on such maxims. This goes beyond the “What if everybody did that?” line of thinking. The Moral Belief Formula should avoid the objection too, because “we can plausibly assume that everyone ought to have the same moral beliefs.”

37 Replies to “Parfit’s CTM, Chapter 9: What if everyone did that?”

  1. I was worried in this chapter that Parfit wasn’t being entirely fair to Kant/Kantians. On his view, testing LNF with maxims yields counterintuitive results, and things seem better when intentional actions are used to test LNF. Thus, Kant needs to be radically revised – LNF is not a test for maxims but for intentional actions.
    I guess I would have liked to see a little more thought on what Kant and many Kantians count as maxims – the subjective principles of volition. Kant’s own examples are often elliptical and misleading. But, as I read him, principles of volition must be the kind of thoughts that are able to move the agent into intentional action. They must include the agent’s conception of the situation she is in, an end that is desired, and an idea of an action that gets from the current state to the end. As Onora O’Neill has put it, maxims are to be written in the schema:
    When in X, do Y, to achieve Z.
    If this is a correct (maybe somewhat charitable) interpretation of Kant, then most of the ‘maxims’ Parfit is testing to argue against the LNF do not even constitute maxims and thus are irrelevant to whether LNF works or not. If maxims are understood in this way, and if intentional actions get the right results in the test as Parfit claims, then I do not see why maxims wouldn’t too. The sort of maxim above seems to be found in all intentional action in one form or another. So, I’m not sure that Parfit’s revision of Kant is necessary – or that it goes beyond the charitable interpretations of Kant we already have.

  2. Josh,
    Nice summary. Again, we seem to have the same disagreement.
    You write,

    As I mentioned in last week’s discussion of Chapter 8, I think this is much ado about nothing: just as what Parfit is calling intentional actions incorporate into the intention the knowledge that not too many people are acting in those ways, so can maxims incorporate that as one of the circumstances.

    But can’t someone act on an unconditional maxim such that knowledge of whether or not too many people are acting in these ways is irrelevant to the agent’s end? I don’t see how appealing to the circumstances helps. Here’s Parfit’s argument as I understand it:
    P1) Someone in the real world, say, Doug, could act on the following maxim: “No matter how many other people are producing food, I will consume food without producing any in order to focus more time on being a better philosopher.”
    P2) It would not be rational to will everyone to act on such a maxim.
    P3) Since it would not be rational to will everyone to act on such a maxim, LNF implies that Doug’s act of consuming food while not producing food is wrong.
    P4) Doug’s act of consuming food while not producing food is not wrong.
    C) Therefore, LNF is false.
    Can you explain how your claim that maxims can incorporate knowledge of what others are doing as one of the circumstances entails that this argument is “much ado about nothing”? Which premise is false of your view?

  3. Josh,
    You write,

    I don’t follow this argument, since it seems that if we’re making a rational comparison between universalizing either of what I’m labelling (V2) and (V3), (V2) is clearly the better alternative maxim, so by Parfit’s compare-the-maxims approach to rationality, I cannot will that everyone act on (V3) if everyone acting on (V2) is clearly better.

    What do you take Parfit’s compare-the-maxims approach to be? I didn’t think that he had said that we are to compare different maxims, only that, on Kant’s view, we are to compare the world where no one acts on a maxim with the world where everyone acts on a maxim — in this way, we are to determine whether it is rational to will that everyone acts on some maxim. And, on LNF, it is permissible to act on any maxim that it is rational to will that everyone acts on. It is rational to will that everyone acts on V3. Yet, in some circumstances, it is wrong to act on V3. Therefore, LNF incorrectly says that it is permissible to do what it is, in fact, wrong to do.

  4. Josh: Nice summary again.
    Doug’s right about the comparative argument Parfit makes: it’s rational to will that everyone, rather than no one, acts on V2, but it’s also rational to will that everyone, rather than no one, acts on V3. But sometimes it can’t be right to act on V3, so the formula is false.
    I have a worry that is really an extension of Doug’s response to your “much ado about nothing” claims to Parfit himself. As you say, “[J]ust as what Parfit is calling intentional actions incorporate into the intention the knowledge that not too many people are acting in those ways, so can maxims incorporate that as one of the circumstances.”
    Now Doug has (correctly, I think) replied, “But can’t someone act on an unconditional maxim such that knowledge of whether or not too many people are acting in these ways is irrelevant to the agent’s end? I don’t see how appealing to the circumstances helps.” True enough, but doesn’t this same complaint apply to Parfit’s own move to intentional actions? That is, isn’t it possible for someone to be intentionally having no children where he *doesn’t* know that there are not too many people acting in this way? Perhaps he’s simply overwhelmingly focused on doing philosophy and he takes necessary measures not to have any kids as a result (not that such a fellow would have many opportunities for such, although that’s neither here nor there…). All that he’s intentionally doing, then, is having no children (and producing no food, as it turns out), with no knowledge conditions about what others are doing included. Surely, he could not rationally will that everyone acted in these ways, so his action is wrong on LN3, but that’s got to be a mistake.

  5. Josh, you write: “On this principle, just because (to focus on the food consumption maxim) we would all believe that it’s permissible to eat but not produce any food, this doesn’t mean that everyone will act in that manner. Parfit’s problem with this solution is not clear to me. His worry seems to be that “We always have some reason to want ourselves and others not to have false moral beliefs.” But it’s unclear to me why there are any false beliefs here (after all, it is true that it’s permissible to eat but not produce food).”
    I understood the false belief here to refer to what the *Kantian formula* would imply to be true, not what we in fact know to be true to the contrary (that it’s permissible to eat but not produce food).

  6. Dave,
    You write,

    [I]sn’t it possible for someone to be intentionally having no children where he *doesn’t* know that there are not too many people acting in this way? Perhaps he’s simply overwhelmingly focused on doing philosophy and he takes necessary measures not to have any kids as a result (not that such a fellow would have many opportunities for such, although that’s neither here nor there…). All that he’s intentionally doing, then, is having no children (and producing no food, as it turns out), with no knowledge conditions about what others are doing included. Surely, he could not rationally will that everyone acted in these ways, so his action is wrong on LN3, but that’s got to be a mistake.

    Two points: (1) In the previous chapter, Parfit says, “To judge whether some act is, or would be, wrong,…[w]e must know this person’s immediate aims, or what she is directly trying to achieve. We must also know what effects the agent believes that his act might have.” On Parfit’s view, we must look not only at the intended effects of our actions but also at the foreseen effects of our actions. So Parfit might respond: Surely, this someone you speak of believes that his act might have the effect of ending the human race. After all, he doesn’t know what others are doing, and the only way he could know that his act wouldn’t have this effect is if he knew that others were procreating. And, surely, it is wrong to act in a way that one believes might cause the end of the human race. So LN3 gets it right. It is wrong to act in this way, with such careless disregard for the monumental repercussions that one’s act might have. (2) Ultimately, Parfit wants to reject all versions of Kant’s Law of Nature Formula, right? So, even if LN3 does get it wrong, that should be fine with him.

  7. Doug: Regarding (1), it’s unclear that the agent I’ve described has any relevant beliefs about the effects of his action. And we can certainly stipulate an agent who just has no beliefs about the unintended effects of his action. While this lack of belief might be a fault — perhaps even a moral fault — it doesn’t necessarily render the intended action wrong. Perhaps, then, what I’m now leaning towards are cases of a certain kind of negligence, and it’s difficult to see how LN3 could address such cases adequately.
    With regard to (2), it’s not so clear that Parfit abandons LN3. He considers LN4 later in this chapter, but there he’s back to maxims in the formulation. And he’s still considering a Law of Nature formula in Ch. 10 (albeit once more in the maxims version). If he were to abandon LN3 altogether, it would also seem that this and the previous chapter are fairly unmotivated (esp. Parfit’s discussions of abandoning maxims in favor of just intentional action).

  8. Fair enough, but I’m not entirely clear on why you think that LN3 can’t address cases of negligence. It does, at least, seem to get the case you gave right. Your original point was, I thought, that LN3 entailed that this someone’s act was wrong, when, intuitively speaking, his act clearly wasn’t wrong. But now that you’ve specified the case such that it is clearly a case of negligence, I don’t see why I should think that this someone’s act is permissible, contrary to what LN3 entails. It does seem wrong to act negligently and put the end of the human race at risk. His action is like driving drunk, not knowing whether there is anyone on the roads whom he might hurt, only worse, because what’s at stake is even greater. (I’m assuming, with Parfit, that the end of the human race through nonprocreation would be a terrible evil, although I’m not sure that I agree.)

  9. It’s likely the case that consideration of negligence cases moves us off the original point (which is my fault), but now that we’re on it, it’s of real interest to me. While drunk driving may count as one type of negligent action, in other cases it’s not so easy to identify just what the relevant action to be assessed is. Think here of a homeowner who fails to salt his icy sidewalk and then hosts a party (I know this example is difficult for Phoenix-ers to fully appreciate). Someone then falls and gets injured on that sidewalk. My original case was actually more like this than the drunk driving case, given that the philosopher and the homeowner are both failing to act, and they’re both utterly unwitting with respect to the harm their failure to act may cause. If, then, the philosopher were simply focused on doing his philosophy, without having a single thought regarding procreating, would LN3 still condemn what he’s doing, even if other people aren’t procreating? If so, how so? (Same goes for the icy sidewalk case.) What’s the intentional action, after all? It seems that all you’ve got to work with are the effects of one’s actions, but here Parfit talks explicitly only about the effects one believes there to be.
    (I don’t think this is actually any worse a problem for Parfit than it is for Kantians generally — most theorists of agency and responsibility, actually — but this goes along with my point that I’m not seeing how Parfit’s move away from maxims to intentional actions is that much of an improvement.)

  10. Dave,
    Okay, good point. Your case is not analogous to that of drunk driving.
    Now LN3 says, “We act wrongly unless what we are intentionally doing is something that we could rationally will everyone to do [emphasis added].” It seems that the only thing that this someone is doing is focusing exclusively on philosophy. And you’re right we can’t will that everyone do that. That would mean there would be no farmers, no doctors, no construction workers, etc. This is just the Ideal World Objection, right? But this gets me back to my point (2). It seems that Parfit wants to reject LN3. He says, “To answer this new objection to Kant’s Law of Nature Formula, we should again revise this formula.” He, then, presents LN4. So I think that Parfit is rejecting LN3 for exactly the reason that you think that we should: in many cases, it is enough to reply to ‘What if everyone did that?’ by saying ‘Not everyone will’. In the actual world, not everyone will focus exclusively on philosophy and that’s why it’s not wrong for someone like us to do just that in the actual world. This is what LN3 gets wrong in focusing on the Ideal World.

  11. Doug: What keeps me from signing on to your claim about the move from LN3 to LN4, though, is precisely the inclusion of talk of “maxims” in LN4. Perhaps you’re right: the Ideal World Objection is problematic for both formulations (Kant’s and Parfit’s), but then I don’t see the motivation for all the talk about eliminating maxims to begin with. If both versions wind up being nailed by the same objection, why bother spending so much time articulating and defending the non-maxims version?

  12. Dave,
    I think the answer is that the intentionally-doing stuff is relevant to how we should best formulate Kant’s Moral Belief Formula. He says that the “phrase ‘such acts’ here [referring to the Moral Belief Formula] refers to what we are intentionally doing.” And at the end of this chapter he holds out hope that, unlike Kant’s Law of Nature Formula, Kant’s Moral Belief Formula appeals to a different and better idea, “which might be successfully applied to all kinds of case[s].” So it seems to me that he thinks that the Moral Belief Formula is Kant’s most plausible formula and that we should understand this formula, not in terms of maxims, but in terms of intentional doings. But you’ve read ahead and have suggested above that he still goes with the LN Formula, and with maxims. Is that right?

  13. Well, I’ve now completed reading Ch. 10, and I’ve got the answer: he’s actually going to advocate a Kantian contractualist formula, most closely akin to Scanlon’s, that nevertheless still makes reference to what people could rationally will. He moves from what he thinks is the most plausible articulation of the Moral Belief formula to a formula about principles of rightness/wrongness. At that point, he says that, if we drop talk of maxims in favor of intentional actions in this formula, we avoid the objections discussed earlier. And if we change this other aspect as well, we avoid the rarity, high-stakes, etc. objections. So the idea is to build up to what he thinks is a supreme principle of morality, one that avoids all the counterexamples presented thus far. So now my objection is purely stylistic: why not flag to the reader that that’s where you’re going? (Although it would ruin the M. Night Shyamalan-style surprise ending!).

  14. Thanks for all the great discussion, Jussi, Doug, and Dave.
    Doug, you write (and Dave, you agree with Doug’s claim that):

    What do you take Parfit’s compare-the-maxims approach to be? I didn’t think that he had said that we are to compare different maxims, only that, on Kant’s view, we are to compare the world where no one acts on a maxim with the world where everyone acts on a maxim — in this way, we are to determine whether it is rational to will that everyone acts on some maxim.

    According to Parfit, on p. 162, right after he introduces the condition that the comparison class for a maxim’s being acted on by everyone is its being acted on by no one, he also says this, as (I take it–though maybe it’s a misread) a sub-condition of no one acting on the maxim:

    On the best version of Kant’s formula, we could not rationally will it to be true that everyone acts on some maxim if there is some other better maxim on which we could rationally will everyone to act.

    If that’s right, then when evaluating everyone acting on (V3), we compare it to no one acting on (V3), which could include their acting on “some other, better maxim,” such as (V2).

  15. Josh,
    Okay, I overlooked that. That’s really strange, though. It doesn’t really come up in the rest of the discussion, as far as I remember. How are we supposed to determine the relevant comparison class? After all, there are a lot of different maxims that I could have acted upon. Instead of acting on the maxim that I did in writing this comment, I could have acted on the maxim “Whenever I have some free time, I will volunteer for Oxfam in order to save lives.” Is that the sort of alternative that we’re supposed to be looking at when we compare maxims? Or are we only supposed to be looking at different versions of the same maxim? But then how are we supposed to determine whether one maxim is a different maxim or only a different (better or worse) version of the same maxim? And what makes one maxim better than another? Clearly, a maxim on which we could rationally will everyone to act is better than a maxim on which we could not rationally will everyone to act, but Parfit is comparing maxims on which we could, in some sense, rationally will everyone to act. What would make one such maxim better than another?
    Does anyone have any thoughts on this? Josh, you should ask Parfit about this. Or I’ll do it if you don’t want to. But it is your point, so you should get credit.

  16. Doug,
    I share all of your questions. It’s unclear how we’re supposed to sort out the comparison set of maxims. I’ll bring it up with Parfit (though you should feel free to as well, of course!).

  17. Doug,
    To get back to the maxim issue, it does sound like we have the same disagreement. The short answer is that (P1) is false, given certain important premises, including some points made by Dave and Jussi. Please forgive the impending long comment to explain, but here’s the basic line of response: either the agent does or does not know what others are doing; either way, LN3 is no improvement over a maxim-oriented formula. (So note that the objection I meant to raise was not (necessarily) to say that LNF works, but only, as Dave rightly emphasizes, that LNF’s talk of maxims is no worse than Parfit’s preferred talk of intentional actions. So let’s (for the time being, anyway) just focus on that.)
    Begin by stipulating–as you and Parfit both do–that the agent in question knows that not everyone is acting on the maxim “to eat but not produce food.” If that’s the case, then, as I said last time and as Jussi stresses this week, on conventional charitable readings of Kant’s formula, we should incorporate into the maxim the circumstances of the act, one of which is the fact that others are producing food. So the agent might say “no matter what anyone else does,” but this might be excluded from that agent’s maxim, because the principles of relevant description have determined that it does matter what others do. You asked, “But can’t someone act on an unconditional maxim such that knowledge of whether or not too many people are acting in these ways is irrelevant to the agent’s end?” The answer is that such knowledge (again, assuming the agent has such knowledge) is irrelevant only if the principles of relevance say so. If they say it is relevant, then essentially, your agent’s “unconditional” maxim relevantly gets spelled out as two conditional maxims: “I will eat but not produce food when others are producing food,” and “I will eat but not produce food when others are not producing food.” Presumably the first is universalizable but the second is not–getting the correct answers.
    Recall here that the agent does not have total discretion to decide what goes into the maxim: if the principles of relevant description say that it must include such-and-such circumstances (like that others are producing food) and exclude so-and-so circumstances (like that the patient is eating strawberries), then that is what must be included and excluded.
    Now you might, alternatively, have someone who does not know (or merely believe?) that others are producing food, in which case that fact cannot be incorporated into the agent’s intention/maxim. (It might have been unclear but note that Dave switched to this way of talking–not such that the knowledge is irrelevant, but such that there is no such knowledge). But then you and Dave think, if I follow correctly, that such intended actions are (mistakenly) rendered wrong by LN3.
    So LN3 seems no improvement over the maxim-focused formula. And, as I tried to suggest last week, there’s an easy explanation for this: maxims just are relevant descriptions of the agent’s intended action (hence the “much ado about nothing” claim).
    Note also that your (P3) doesn’t follow from your (P1) and (P2). What follows is that the maxim in (P1) is wrong. You wrote for (P3) “LNF implies that Doug’s act of consuming food while not producing food is wrong”. But it should read (P3*) “LNF implies that Doug’s acting on a maxim of consuming but not producing food ‘no matter how many other people are producing food’ (from P1) is wrong.” Then, one might argue, (P4*) that is a wrong maxim to act on, since it means you’re willing to let people starve to death, which then would block your (C).
    Now, on the one hand, maybe you’d disagree with (P4*)–as Parfit seems to–and hold that just because I’d be willing to let people starve in certain circumstances doesn’t mean that I’m wrong to eat but not produce food when people are not starving. On the other hand, in your discussion of the negligent agent, you seemed to think that such a maxim is indeed wrong (as you say, “…surely, it is wrong to act in a way that one believes might cause the end of the human race…It is wrong to act in this way, with such careless disregard for the monumental repercussions that one’s act might have”).

  18. Thanks for the tip on what’s coming in the next chapter, Dave. Here’s another stylistic question. I can see why we’d want to consider and rule out alternative plausible-but-incorrect principles as a way of settling on whatever principle he’ll eventually settle on as correct. But why consider a number of implausible principles, especially the ones at the beginning of chapter 8? Maybe just because “it is sometimes said” that they are plausible?

  19. Josh,
    I don’t have time to respond now, but I will tomorrow. In the meantime, would you tell me what principle of relevance you’re working with such that someone can’t act on an unconditional maxim such that knowledge of whether or not too many people are acting in these ways is irrelevant to the agent’s end? Parfit has given a principle of relevance (I stated it in my comments last week) such that someone can act on such a maxim. So presumably you owe us an alternative principle and an explanation of why your alternative principle is better.

  20. Doug,
    Given the way I’ve been putting things lately, that’s probably a fair question. I’ll give a tentative answer in a second, but first let me back off and put things the way I did last week, and similar to the way Jussi does this week: Before Parfit’s cases can work as counterexamples, he should consider the alternative plausible Kantian proposals and/or establish his principle of relevance as the only one, but he doesn’t consider some of them and he doesn’t establish that. So the burden is on him to do those things, not on me to defend those alternate proposals.
    That said, if I were (as I’m tempted to be) a pluralist about Kantian moral relevance, I’d utilize at least two principles, allowing that I might need more later on: (1) maxims are to be characterized as they are willed (Herman); (2) maxims are to include, under constraint (1), all and only information that bears on agents’ humanity (in the technical Kantian sense of ‘humanity’) (Timmons). Things like strawberry-eating do not bear on humanity, and things like people producing food do bear on humanity.
    I won’t try to defend those principles of relevance here (leaving it to the relevant literature), but just suggest them as some of the things worthy of consideration before Parfit’s cases can be judged counterexamples.

  21. Josh,
    Here is the form of the argument that I gave:
    P1) Someone in the real world, say, Doug, could act on the maxim M1.
    P2) It would not be rational to will everyone to act on M1.
    P3) Since it would not be rational to will everyone to act on such a maxim, LNF implies what Doug did was wrong.
    P4) What Doug did was not wrong.
    C) Therefore, LNF is false.
    Your response, as I understand it, was firstly to deny (P1). But, in doing so, you suggested that Doug acts on another maxim, M2, which you claim it would also not be rational to will everyone to act on. (Actually you say that I act on two maxims, which I don’t quite get. But, even so, LNF would still imply that my action was wrong.) How does this help? Whether the maxim is M1 or M2, if it is one that it could not be rational to will everyone to act on, then this still gets us (P3).
    Secondly, you claim that given the correct description of what I do, (P4) is false. You say, I’m “willing to let people starve to death, which then would block your (C).” Remember, though, that I know that other people are producing enough food. You agreed to this stipulation. In that case, I’m not acting in a way that contributes to or even risks the demise of the human race. Of course, it’s true that given that I’m willing to continue to act in this way even if people are not producing enough food, it follows that I am willing to let people starve to death, which is wrong. But how does the fact that I am willing to act wrongly show that I have acted wrongly? Remember, all I do is focus on philosophy. And in my actual circumstances where others are doctors and farmers, there is nothing wrong with that. So I don’t understand how you can deny (P4) and thereby block the inference to (C).

  22. Josh: Just to address one of your questions, regarding the false beliefs argument on p. 169…
    I took the argument to be as follows: could we switch to the moral beliefs formula, such that I could rationally will a world in which everyone believes acts like eating-but-producing-no-food are permissible, given that there’d be no danger in everyone’s acting in that way? Parfit’s answer is that this isn’t sufficient to block the Permissible Acts objection. Why? People would have to believe what’s false, viz., that it would be permissible for everyone to eat-but-produce-no-food, even if this meant everyone would die as a result. That is, we’d have to believe it would be permissible to act such that the human race is extinguished. But that’s false: acting in that way would be wrong, so believing such acts would be permissible would require a false belief, and because there’s antecedent reason not to have false beliefs (about anything, including morality), and because there’s simply no contrary reason to maintain the false belief in this case (e.g., prudential or otherwise), we shouldn’t adopt a moral view that requires it.

  23. Josh,
    Just to be clear…
    M1 is “No matter how many other people are producing food, I will consume food without producing any in order to focus more time on being a better philosopher.” And M2 is “I will eat but not produce food when others are not producing food in order to focus more time on being a better philosopher.”
    You said that M2 is not universalizable, “getting the correct answer.” Moreover, M2 seems to be the correct description of the maxim that Doug acts on given your proposed principle of relevance. But, again, I have to wonder how this helps you to avoid (C).

  24. I might have to pull out of this discussion for a little while, so I apologize in advance if I don’t get right back to any comments. But before I depart, here are some quick but hopefully moderately sensible replies to the latest.
    Dave,
    Thanks: I misread you above–I now see what you were saying. As you now put it,

    “could we switch to the moral beliefs formula, such that I could rationally will a world in which everyone believes acts like eating-but-producing-no-food are permissible, given that there’d be no danger in everyone’s acting in that way? Parfit’s answer is that this isn’t sufficient to block the Permissible Acts objection. Why? People would have to believe what’s false, viz., that it would be permissible for everyone to eat-but-produce-no-food, even if this meant everyone would die as a result.”

    I don’t see how this inference works. Just because I would be attempting to “rationally will a world in which everyone believes acts like eating-but-producing-no-food are permissible,” how does that entail that “[p]eople would have to believe what’s false, viz., that it would be permissible for everyone to eat-but-produce-no-food”? It’s true, right, that everyone can believe that it is permissible to eat but not produce food (as the Moral Belief formula would have them do) without everyone believing that it is permissible for everyone to eat but not produce food? Only that latter belief would be false, but as far as I can tell, the Moral Belief formula only leads to the former.
    Doug,
    Right, thanks for pushing me to clarify. Note that Doug does not act on two maxims, only that he has two maxims, one of which he acts on. One of these is (M2) “I will eat but not produce food when others are not producing food in order to focus more time on being a better philosopher.” My thought was that by replacing (M1) with (M2), (P3) still follows, but makes (P4) false (thereby blocking (C)). (~P4) It is wrong for Doug to act on the maxim, “I will eat but not produce food when others are not producing food in order to focus more time on being a better philosopher,” because Doug would be philosophizing while the entire world is starving to death. It is true by our using (M2) that the world would be starving: no one producing food is required for (M2) to be the relevant maxim.
    Now you (rightly) say, “Ah, but we’re stipulating that the world is not starving to death, because people are producing food!” True, but if that’s the case, then Doug’s relevant maxim is not (M2), but his other maxim (M3): “I will eat but not produce food when others are producing food.” If that’s the case, now (P2) is false (blocking (P3) and therefore (C)).

  25. Josh,
    Am I understanding you correctly? Are you claiming that whether the maxim that Doug is acting upon is M2 or M3 depends on whether or not people are starving for lack of food production? But I thought that a maxim was a subjective principle of volition such that what matters is not how the external world is, but what the agent’s volitions are — that is, what matters is what the agent intends to do, under what circumstances she intends to do it, and for the sake of what end she intends to do it. But you seem to be claiming that there could be two Dougs, Doug1 and Doug2, that are qualitatively identical in every respect (they have the same beliefs, dispositions, intentions, etc.), and if Doug1 lives in a world where people are starving, then his maxim is M2. But if Doug2, by contrast, lives in a world where people are not starving, then his maxim is M3. By ‘his maxim’, I mean ‘the maxim that he is acting upon’.

  26. No, Doug, I shouldn’t have implied that it depends on the world–or, rather, that it depends solely on the world. Instead of “no one producing food is required for (M2) to be the relevant maxim,” I should have said “Knowledge (or more likely mere belief) that no one is producing food is required for (M2) to be the relevant maxim.” How the world is is relevant to maxim selection only insofar as it affects what we believe and therefore what we do. So, for instance, I have a policy (in the normal, non-Parfitian sense) that “I will bring an umbrella when it is raining” and another policy that “I will not bring an umbrella when it is not raining.” Which maxim I act on then depends on (what I believe to be) the state of the world. Same goes for “Doug”. Whether he acts on M2 or M3 depends on (what he believes to be) the state of the world. By tying it to belief, of course, we get old worries about the road to hell being paved with good intentions (or, as it were, reliable beliefs), but that’s a different objection.
    In any case, however we spell out the details of this kind of maxim-oriented view, it seems likely that any problems that afflict it will also afflict Parfit’s focus on intentional actions, right?

  27. Okay. So if Doug believes that others are not producing food, then the following argument is sound, right?
    P1) Someone in the real world, say, Doug, could act on the maxim M2.
    P2) It would not be rational to will everyone to act on M2.
    P3) Since it would not be rational to will everyone to act on such a maxim, LNF implies what Doug did was wrong.
    P4) What Doug did was not wrong.
    C) Therefore, LNF is false.
    Now you earlier wrote,

    My thought was that by replacing (M1) with (M2), (P3) still follows, but makes (P4) false (thereby blocking (C)). (~P4) It is wrong for Doug to act on the maxim, “I will eat but not produce food when others are not producing food in order to focus more time on being a better philosopher,” because Doug would be philosophizing while the entire world is starving to death.

    But that last clause is just false. In the real world, Doug is philosophizing while no one is starving for the lack of his food production. Let me just stipulate that Doug has been doing exactly what I’ve been doing, working on becoming a better philosopher by reading philosophy, writing philosophy, and blogging philosophy. Meanwhile the farmers of the world have been producing more than enough food to feed everyone in the world. Doug may have believed that he was causing people to starve by focusing on philosophy rather than on food production, but, as a matter of fact, he didn’t cause anyone to starve or to be harmed. He didn’t lie, he didn’t steal, and he didn’t even do anything that put anyone at risk of being harmed. Doing philosophy is a pretty safe activity.

  28. Josh, You also wrote,

    In any case, however we spell out the details of this kind of maxim-oriented view, it seems likely that any problems that afflict it will also afflict Parfit’s focus on intentional actions, right?

    I haven’t made up my mind about this. But it seems to me that Parfit has given some decisive counter-examples to the maxim-oriented approach such as the one that involves someone acting on a maxim like M2 in the real world.

  29. Doug, most recently you write:

    “Josh, You also wrote,
    ‘In any case, however we spell out the details of this kind of maxim-oriented view, it seems likely that any problems that afflict it will also afflict Parfit’s focus on intentional actions, right?’
    I haven’t made up my mind about this. But it seems to me that Parfit has given some decisive counter-examples to the maxim-oriented approach such as the one that involves someone acting on a maxim like M2 in the real world.”

    Fair enough, but, while I clearly feel more confident than you about the maxim-oriented approach considered on its own, the former question is the one that I have been concerned to discuss here–as has Dave, I think, and it’s the question Parfit raises. The basic concern I’ve had throughout the last two weeks is that since maxims just are relevant descriptions of what we intend to do, a focus on intentional actions won’t be an improvement over the traditional Kantian focus on maxims, contra Parfit. (And so one way this is manifested is that whatever Parfit thinks intentions can include to solve some problem cases can also be included in maxims.)

  30. Doug,
    Going back to the food production case, it seems like we’ve now got a different scenario on our hands. The first two scenarios were cases where (C1) Doug knows (implication: truly believes) that people are starving but keeps philosophizing rather than producing food, and (C2) Doug knows (truly believes) that people are not starving and keeps philosophizing rather than producing food. My discussion so far has meant to address those kinds of cases; I also take it that Parfit is raising such cases.
    Now you’ve introduced a case where (C3) Doug falsely believes that people are starving but keeps philosophizing. You’re right about this new case: at least on its face, LNF seems to incorrectly render acting on such a maxim wrong.
    But, first, Parfit also has this problem with his focus on intentions (Doug can’t rationally will for everyone to act on M2). And, second, this problem is both old and different from the normal Ideal World objection. If I follow, it’s the problem that good intentions don’t always produce good actions, i.e., the road to hell is paved with good intentions–or in this case, bad intentions pave the road to heaven. As I understand it, the usual lesson drawn from such cases is that deontic assessment should focus on acts rather than intentions. Unless I’m missing something, it’s just the flip-side of the case where someone helps another person with a heavy load, acting on a maxim of beneficence, because our agent does not realize that the other person is robbing the art museum. (I believe this objection has received some healthy discussion too, but I’m not up on the lit. here.)

  31. Once we move from LNF to LN3, we should no longer talk about whether it is rational to will everyone to act on some maxim, so whether it’s true, as you say, that “Doug can’t rationally will for everyone to act on M2” is neither here nor there. What matters is what Doug is intentionally doing and whether we can rationally will everyone to do that. What Doug is intentionally doing beyond not producing food in a world where there is plenty of food is not clear to me. That much (not producing food in a world where there is plenty of food), though, is certainly something we could rationally will everyone to do. Admittedly, Parfit puts things differently. He says, “If we acted on these unconditional maxims, what we would be intentionally doing would be…producing no food, when we knew that there were not too many people who were acting in these ways.” Parfit isn’t clear on what Doug would intentionally be doing where he falsely believes that others are not producing food.

  32. From what Parfit says about what an agent’s intentional doings consist in, my intentional doings include the intended and foreseen effects of my actions, but nothing he says suggests that my intentional doings include what I believe that I am doing but am not, in fact, doing. So, in the Doug case, Doug may believe that he’s letting people starve in not producing food, but that doesn’t mean that he is letting people starve to death. So I don’t see that the formulas stated in terms of intentional doings have the same problems that the formulas stated in terms of maxims more clearly do.

  33. Josh,
    Here’s where I think that we disagree. You say, “since maxims just are relevant descriptions of what we intend to do, a focus on intentional actions won’t be an improvement over the traditional Kantian focus on maxims, contra Parfit.” But this assumes, incorrectly I believe, that what we intend to do (relevantly described) is the same as what we intentionally do. I suspect that Parfit would say that we can intend to do X but not intentionally do X. For instance, suppose that I believe falsely that sending money to Oxfam kills people. In that case, what I intend to do is to kill people. But it seems that the only thing that I intentionally do is to send money to Oxfam, which is not wrong and which is something that we could rationally will everyone to do. So LN3 does seem to me to be an improvement over the maxim-type versions of LN. That said, I think that Parfit rejects all versions of Kant’s Law of Nature Formula, and suspect that an appeal to intentional doings will come up in whatever Parfit takes to be the correct Kantian Formula. Does this sound like a charitable and plausible reading of Parfit?

  34. Doug,
    Yes, I think you’ve got the right reading of Parfit. Unfortunately, I’m going to have to head onto other things and, well, treat this like a reading group, leaving some disagreements unresolved and some ideas half-baked. I’m a little unsure of where we stand at the moment, so I’ll just sum up where I’m at with the following, and then you should have the last word.
    First, it doesn’t seem to me as though Parfit’s cases present an insurmountable problem for maxim-focused formulas. Second, the different, “false belief” case you’ve lately presented poses a (different) potential problem, but it’s not clear to me that traditional Kantians can’t come up with some solution or that Parfit’s focus on intentional acts gets past whatever those potential problems are. As you rightly point out, Parfit is focused on intentional actions, but if maxims just capture the intentional part of the equation, it is unclear (to me anyway) how exactly this solution is going to go–in part for the reasons that you point out about the lack of clarity in Parfit’s view. I don’t think I’m assuming here that what we intend to do is what we intentionally do (I agree with you that those aren’t the same)–but only that in some way that remains unclear to me both should be capturing the agent’s intention. For instance, in your latest case, it doesn’t seem to me as if the agent’s intentional action is only to send money to Oxfam; rather, it seems like the agent’s intentional action is to send money to Oxfam in the (misplaced, it turns out) hope of killing people. Finally, the new “false belief” cases might reveal a problem, but if it is, it strikes me as a different objection than the Ideal World objection. I realize that this might leave some questions open, so, as I said, you should have the last word here.

  35. Small point. Doug suggests that you can intend to do something without doing it intentionally. That is obviously correct. You can intend to do something without doing it, but you can’t do something intentionally without doing it. But it’s less clear that you can do something which you intend to do without doing it intentionally.

  36. Campbell,
    A good example of the latter is luck cases. So, when I buy a lottery ticket I’m intending to win. However, when I’ve won (I wish), my winning the lottery won’t be something I did intentionally. Intentional action seems to require some sort of control over the action, which is not included in intending the outcome. Al Mele discusses these cases a lot.
