This is the 9th of 11 virtual meetings on Derek Parfit’s book manuscript, Climbing the Mountain.
In this pivotal chapter, Parfit finally ties together several of the loose threads of the last several chapters to come very close to endorsing a kind of Kantian “supreme principle of morality,” which turns out to be contractualist in nature. He begins with an interesting discussion of the Golden Rule, which Kant dismissed as “trivial,” and “unfit to be a universal law.” What Parfit does, though, is show why Kant’s objections to the Golden Rule can actually be answered. If he’s right, Kant’s contempt for the formula is unjustified. Perhaps, however, Kant’s Formula of Universal Law is just a better principle than the Golden Rule? This, as it turns out, is false as well. In terms of making us more impartial, the Golden Rule, Kant’s Consent Principle, and the Impartial Observer Formula (according to which we are to determine what it would be rational to choose from the imagined point of view of an impartial observer, rather than from our own or an affected party’s point of view) are all superior to the Formula of Universal Law.
I had several questions about this section, however. First, consider Kant’s reasons for rejecting the Golden Rule. As he puts it:
It cannot be a universal law, because it does not contain the ground of duties toward oneself, nor that of duties of love toward others (for many a man would gladly agree that others should not benefit him if only he might be excused from benefiting them); and finally it does not contain the ground of duties owed to others, for a criminal would argue on this ground against the judge who punishes him.
Parfit’s gloss on this quote is that the Golden Rule (GR) doesn’t imply that we have duties to ourselves, duties of beneficence to others, and duties of obligation to others. He then goes on to show how the GR (either as it stands or in some slightly revised form) actually resists such criticism. But Kant is claiming here that the GR fails as a universal law because it gets the grounds of these duties wrong, and so that’s why it gets the specific duties themselves wrong. So insofar as Parfit can successfully argue that the GR can actually get these duties right, that wouldn’t yet touch the more fundamental objection Kant seems to be raising, which is, presumably, that violating the principle to treat others as we’d have them treat us doesn’t violate some fundamental principle of rational consistency, whereas violating the Formula of Universal Law (FUL) would.
Second, and perhaps more tangentially, I’m not sure if Parfit’s various formulations and reformulations of the GR address one very interesting objection to it. There’s a story some of you might have heard about a Nazi soldier who killed Jews without mercy whenever given the chance, claiming that it was his duty under the Golden Rule as he interpreted it. What he allegedly said was that Jews were evil and subhuman, and that if he were Jewish he’d want someone to kill him, so in killing Jews himself, he was treating them as he would want to be treated. (In what is likely an apocryphal O. Henry-type conclusion to the story, he discovers late in the war that he’s actually half-Jewish himself, at which point he does indeed kill himself.) Now Parfit thinks the best version of the GR is what he calls “G3”:
We ought to treat others only in ways in which we would rationally choose that we ourselves be treated, if we were going to be in these other people’s position, and we would be relevantly like them. (183)
But it seems that this formulation fails to yield the right answer in this case, viz., that the Nazi ought not to be killing Jews. Indeed, were he in his victims’ positions and relevantly like them, he would (so he avers) want to be killed. The only thing that might help out the advocate of G3, then, is the clause about what one would “rationally” choose. But beyond its being unclear what that term does or doesn’t include, as long as there’s room for rational suicide on Parfit’s account (as there is – see the earlier stuff on killing yourself to save others), it would seem difficult to rule out being killed as something even someone with false beliefs might nevertheless rationally choose.
A final point about this section: Parfit points out that, while the GR may be “theoretically inferior” to the other possible impartiality principles, it may actually be the most effective at providing us with moral motivation (and so that might explain why it’s been so popular throughout the world for so long). By forcing us to imagine our way into other people’s positions, we are indeed forced to be more impartial than perhaps we’d otherwise be. I like this point. It’s via empathy, I think, that we are typically moved to sympathy with others, and that process is much more indirect (and perhaps less successful) if we merely try thinking about what some ideal observer would choose, or, more abstractly, about the things to which those affected by our actions would consent. Indeed, moral education starts with precisely this question: how would you feel if someone did that to you?
In the second section, Parfit discusses two objections to the FUL. The Rarity Objection we have run across before: FUL implies that, if the sort of action I’m considering is rare enough, it may be rational to will that everyone acts in that fashion, given that hardly anyone ever will be in a position to do so, even if what I’m doing is obviously wrong. But there’s a new objection that’s somewhat related. According to the High Stakes Objection, as long as some benefit of my action would be large enough, I could rationally will a world in which everyone acts as I do, even if what I’m doing is obviously wrong. If both Grey and Blue have been bitten by a cobra as they traveled through the desert, and Grey steals an antidote that Blue has prudently brought along, Grey could rationally will that everyone acts on the maxim ‘Steal when it’s the only way to save my life’ when it applies to them, not only because it’s unlikely that anyone else would be in such a situation, but also because even if he were in such a situation where someone might steal from him, the risk of that would be more than outweighed by the certain benefit he’d get now from stealing from Blue. And even if there were some tortuous way to revise FUL to account for the case, it simply fares worse than the other three possible principles – the GR, the Consent Principle, and the Impartial Observer Formula – given that those other principles take into account the greater loss incurred by Blue (insofar as Blue is much younger than Grey, for one). It’s the impartiality forced upon us by these other three principles that makes them superior to the FUL. FUL has us simply considering things from our own perspective, whereas these others force us into considering things from the perspectives of others.
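To make the structure of Grey’s reasoning explicit, here is a minimal expected-value sketch; the symbols are mine, not Parfit’s, and are meant only to illustrate why the two objections share a shape. Let $B$ be the certain benefit Grey secures by stealing the antidote now (his life), let $p$ be the probability that he will ever find himself on the losing end of the universalized maxim, and let $C$ be the cost to him if he does. Then, roughly, Grey can rationally will the maxim’s universalization whenever

$$B > p \cdot C.$$

The Rarity Objection works by making $p$ tiny; the High Stakes Objection works by making $B$ enormous. Either way the inequality can hold even though the act is obviously wrong, which is exactly what both objections allege against FUL.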
In the third section, Parfit considers another, more serious objection to FUL. It has us asking, not “what if that action were done to me?” but rather “what would happen if everyone did that?” In other words, can I rationally will that everyone does my considered action to others? But there will be some instances in which I know that, even if everyone does these things to others, no one will do them to me, i.e., my wrong act may be non-reversible. This will actually hold for many real-world cases, e.g., cases in which there are those with a great deal of power who are considering some action that harms some powerless person(s). It also holds for cases in which the wrong action I’m considering doing is one that everyone already does. Kant considers only those maxims that aren’t such laws, but there are sometimes wrongdoers’ maxims that are already universal, so of course one could rationally will them to be universal laws. Think here of cases of racism or treating women as subordinates, etc. The other principles, again because they force impartiality out of us, fare much better here.
In the final section, Parfit considers a “Kantian solution” to these difficulties with FUL. There are various possible interpretations of FUL that Parfit considers (coming from Nagel, Rawls, T.C. Williams), but Parfit thinks that none works as an explication of what Kant meant, given Kant’s own words to the contrary. But if instead of thinking of these as interpretations of Kant that will get him off the hook we think of them as revisions, then Parfit thinks Scanlon’s (in the Moral Belief version) is the best, viz.:
It is wrong for us to act on some maxim unless everyone could rationally will it to be true that everyone believes that such acts are morally permitted (197).
Now plug in the switch from talk of maxims to talk of intentional actions for which Parfit argued earlier, add the addendum that when people believe an act is permitted they accept a principle permitting that act, and we’ve got:
The Formula of Universally Willable Principles: An act is wrong unless such acts are permitted by some principle whose universal acceptance everyone could rationally will (198).
This formulation now avoids all the previous objections, i.e., the New Ideal World Objection, the Flawed Maxims Objection, the Permissible Acts Objection, and the Rarity, High Stakes, and Non-Reversibility Objections. This is also a formula that remains very close to Kant’s own view, and is also clearly contractualist, so we can tighten up the formulation a bit more to get what just might be the supreme principle of morality:
The Kantian Contractualist Formula: Everyone ought to follow the principles whose universal acceptance everyone could rationally will (199).
And away we go….
I think what is supposed to be doing the work in G3 against the Nazi example is the ‘relevantly like them’ clause. The idea probably is that when we imagine ourselves into the position of others we have to adopt a large chunk of their beliefs, values, desires, and so on. A version of an actual Jewish person who held the Nazi world-view would then not count as a relevantly similar position to be adopted. The Nazi who wanted to test his actions with the Golden Rule would then have to adopt the Jewish person’s position with the beliefs and values that come with it. Of course, from that perspective the evil deeds could not be rationally chosen, and thus the actions have to count as wrong. My small worry is that G3 is such a revised version of the Basic Golden Rule that it’s hard to see how it is the same principle.
Perhaps you’re right, Jussi, but then there is real difficulty in making clear sense of the phrase, “What if I were an X?” In order for me to imagine myself as someone else, there has to be enough of myself retained to gain some sort of imaginative purchase on the moral conclusions I’m supposed to draw. If the person into whose shoes I’m stepping is very different from me and I’m somehow to imagine being relevantly like him, then unless I retain some of my core psychological elements in the imaginative leap, I have no more clue as to what I should do than if I did no imaginative projection at all. I was assuming, then, that among the core psychological elements retained by the Nazi in the imaginative leap would be his beliefs about the nature of Jews, perhaps projecting himself into what he took to be a “clearheaded” version of the specific Jewish person in question.
I share your worries about the relation of G3 to the Golden Rule, though. I’ve been having similar thoughts about the relation of the contractualist formula Parfit winds up with to Kant (or Kantianism).
Isn’t the problem in the Nazi Case that he’s not doing what he thinks that he’s doing because he has a false belief? What he intends to be doing is killing beings that are evil and subhuman, but what he is in fact doing is killing beings that are neither evil nor subhuman. So, as Josh would say, it all depends on what the relevant description of the act is. Even if we could rationally choose that we ourselves be killed if we were evil and subhuman, G3 doesn’t imply that what the Nazi did wasn’t wrong.
At the end of this chapter, Parfit notes that “some people have come to believe that Kant’s Formula of Universal Law cannot help us to decide which acts are wrong, or help us to explain why these acts are wrong.” As O’Neill puts it, FUL “gives either unacceptable guidance or no guidance at all.” Parfit responds that once we revise FUL in the ways that he recommends, it doesn’t give unacceptable guidance. But what about the other horn of O’Neill’s dilemma? Does it give us any practical guidance at all? Does it help us decide which acts are wrong? Can anyone tell me whether, for instance, it is wrong to kill one to save five on the Kantian Contractualist Formula?
Doug: What the Nazi would rationally choose can depend on false beliefs, can’t it? So he could rationally choose to kill beings he falsely believes are evil and subhuman.
As to your second question, I believe the point of the next chapter (11) is to deal with those sorts of practical guidance questions.
Dave: You write: “he could rationally choose to kill beings he falsely believes are evil and subhuman.” Heck, he could rationally choose to kill beings that he knows are neither evil nor subhuman so long as it was in his self-interest to do so. But I thought the issue was whether he could rationally choose that others kill him when they falsely believe that he is evil and subhuman? I don’t think that he can rationally choose that given the prevalence of false beliefs about such things.
Dave: As to the second question, there you go cheating again by reading ahead. That spoils all the fun.
Doug: Your way of framing the matter utterly undermines the very possibility of using G3 as a decision procedure or as a moral motivator, for most (if not all) of my false beliefs are beliefs I don’t know are false. So how could the issue be “whether he could rationally choose that others kill him when they falsely believe that he is evil and subhuman?” That might work as a criterion of rightness, but it sure as hell won’t work as a moral motivator or a decision procedure.
Instead, the Nazi can, it seems, treat Jews only in ways in which he would rationally choose that he himself would be treated, were he a Jew with the beliefs he has about the evil of Jews, regardless of whether or not those beliefs are true.
And back to Jussi: Parfit claims that what it means to imagine being relevantly like the people whom our acts affect is to imagine ourselves having their “desires, attitudes, and other physical or psychological features” (183). Beyond my earlier worries about identity, there’s no explicit specification here that I would have to imagine myself with the other’s nonmoral beliefs.
Dave,
Actually, I should have said, as I did above, that the issue is “whether he could rationally choose that others kill him if he were (as he mistakenly thinks the Jews are) evil and subhuman?” And, as I said above, even if he could, that doesn’t mean that G3 implies that what he did was permissible, for G3 would then only imply that it is permissible to kill those who are evil and subhuman, and the Jews he killed are neither evil nor subhuman.
Now you express the following worry:

“Your way of framing the matter utterly undermines the very possibility of using G3 as a decision procedure or as a moral motivator, for most (if not all) of my false beliefs are beliefs I don’t know are false.”
I don’t follow. Why isn’t G3, as I understand it, a good decision procedure? It tells us (or at least those of us who don’t have the same false beliefs that the Nazi has) that it is wrong to kill the Jews. And it motivates me, because it gets me to see things from the other’s position. For instance, I might say, “Boy, I wouldn’t want to be killed if I weren’t evil and subhuman.” Of course, G3 doesn’t help those who have the relevant false beliefs to accurately determine what’s right and wrong. But what principle does? (Only false ones, I imagine.) And sometimes G3 will even motivate people with false beliefs to act wrongly. But isn’t that true of all moral principles?
OK, I think I cede the point about motivation and decision procedures given false beliefs. There might be a few principles that escape the worries about false beliefs, but you’re right that the worries would apply to at least most.
But now I’m wondering whether or not your appeal to false beliefs is going to work here. What he intends to be doing (killing those who are evil and subhuman) may not be what he’s in fact doing, but isn’t there a distinction between what you intend to do and what you’re intentionally doing? Or have I got the wrong distinction?
I have a follow-up question to one of Doug’s questions. If the supreme principle of morality is supposed to “explain why these acts are wrong,” then KCF (that looks distressingly like a fast food joint) should do the work of a criterion of rightness. But this kind of formula strikes me as getting further from the moral criterion (allowing that formulas of this kind might be better decision procedures). Is what makes A not punching B right that it comports with “the principles whose universal acceptance everyone could rationally will”? That seems far-fetched to me, as does any answer that appeals to counterfactual agreements. On these sorts of questions, traditional consequentialism (the punch reduces welfare, etc.) and Kantianism (it disrespects humanity) seem to me to have much more intuitive answers. Or is there some way of construing KCF such that it seems more intuitive as an explanation for why wrong acts are wrong, particularly those where contracts are irrelevant (e.g., ordinary violence)?
Dave,
You ask, “What he intends to be doing (killing those who are evil and subhuman) may not be what he’s in fact doing, but isn’t there a distinction between what you intend to do and what you’re intentionally doing?”
Yes, there is this distinction. As I understand it, the distinction is such that what you intentionally do is broader than what you intend to do. In the Trolley Case, you may not intend, but only foresee, the killing of the one on the side track, but, nevertheless, killing the one on the side track is one of your ‘intentional doings’. In any case, I don’t see the relevance here since the killing of the Jews was both intended and an intentional doing of the Nazi.
Note that I’m not appealing to false beliefs in formulating G3. The false belief is only employed to explain why the Nazi incorrectly believes that G3 implies that it is permissible to kill the Jews. In fact, G3 implies that it is impermissible to do so regardless of what one’s (true or false) beliefs are, for we could not rationally choose for others to kill us when we are neither evil nor subhuman — at least, not in the sort of circumstances in the case at hand.
Josh,
Good point. It seems that if an explanation of why certain acts are wrong is supposed to tell us what makes these acts wrong, then KCF fails here as well. This same objection, if I recall correctly, came up when we read Scanlon’s book. And I believe you wrote an excellent post extending this sort of objection to all moralities by authority. See Josh’s “Are Deontology, Consequentialism, and Pluralism the only viable theories of ethics?”.
Josh (and Doug): One possible reply is that KCF doesn’t have to be a criterion of wrongness; rather, it’s merely a characterization of wrongness.
Second, even if it is a criterion of wrongness, it itself wouldn’t have to be what explains first-order acts of wrongness directly; instead, those would be explained with respect to some specific principle(s).
Regarding your first point, whether KCF does or doesn’t need to be a criterion of rightness depends on what Parfit wants from KCF. If he wants it to explain to us why, at the most fundamental level, certain acts are right and others wrong, then it does need to be a criterion of rightness, right?
Regarding your second point, that doesn’t sound like a criterion of rightness to me. A criterion of rightness, I thought, was supposed to tell us what the most fundamental right-making and wrong-making features of acts are. On your conception here, KCF doesn’t tell us that. Rather, it is the specific principle(s) that tell us that.
Went to preview my comment, and whadya know, Doug wrote almost the exact same thing! Just one follow-up point. Even if Parfit doesn’t want fundamental principles to provide a criterion of wrongness, it seems reasonable for us to want it. (Or for us to want Parfit to say that he’s only searching for half of the supreme principle of morality. Or for him to explain what other sense of “explains” he has in mind.)
Josh (and Doug),
I agree with Dave. KCF can be seen as characterizing the nature of wrongness, which is different from specifying a wrong-making property or principle (Scanlon makes this distinction in What We Owe to Each Other, pp. 10-11 and p. 391, n. 21). True, in a way the formula of humanity seems more direct. But this may be deceptive, as it is not obvious what it means to respect the humanity of someone else (KCF may help us to identify what this means in different contexts, by asking what other human beings could rationally accept).
I want to raise another question. I wonder whether the Non-Reversibility Objection applies to Kant’s FLN test only if we accept a construal of the maxims mentioned that is too specific. If we construe the maxims Parfit mentions in more general terms, or as implying more general policies (say, that you can oppress members of other, weaker groups when that would benefit you in some immediate way), then it is not obvious that the wrong acts are not reversible. Your group is dominant now, but it may turn out to be dominated later on, and you might not rationally will that you then be treated on the basis of a policy that permits oppression of weaker groups…
Is this worry a fair one?
I thought the worry was that there’s something too indirect in KCF to explain why it’s wrong for A to punch B. But what I was trying to say in my second possible rejoinder was that the specific principles can address that (“it’s wrong to cause physical harm to other people for no good reason”), and then if you want to know the more fundamental reason for that principle, you can appeal to KCF. So KCF can explain what the most fundamental right-making and wrong-making features of actions are, but when it comes to the more humdrum actions we perform, the more direct, or immediate, explanation will be one (or more) of the specific principles whose universal acceptance everyone could rationally will.
Pablo: I agree with Dave too insofar as he claims that “KCF can be seen as characterizing the nature of wrongness, which is different from specifying a wrong-making property or principle.” The question, for me, is whether that’s enough.
Dave: If KCF is taken to be explaining what the most fundamental right-making and wrong-making features of actions are, then I think that it gets the wrong answer. What makes A gratuitously punching B wrong is not that it fails to comport with “the principles whose universal acceptance everyone could rationally will,” but that it gratuitously causes harm. After all, it could turn out that, on the correct substantive account of rationality, A’s gratuitously punching B does comport with the principles whose universal acceptance everyone could rationally will. It could not, however, turn out that gratuitously causing harm is permissible.
Dave: The argument I just gave is probably not the best. Even if A’s gratuitously punching B does necessarily violate the principles whose universal acceptance everyone could rationally will, it still seems, to me anyway, to be the wrong explanation as to what fundamentally makes A’s doing so wrong.
Doug: Oooh, I had just written up a reply to that argument when I saw your new comment. Now you’ve got the old “seems to me to be the wrong explanation” argument, so I’ll respond in kind: it seems to me to be the right explanation. The principle of rightness at issue should be (it seems to me, anyway) something for which violations render one morally responsible (absent certain conditions like accident, ignorance, insanity, and the like), in the sense of being accountable. But being accountable is being accountable to others, and that surely depends on being susceptible to moral demands whose source is in the claims of those others. So again, I can agree with you that causing gratuitous harm is wrong (at least most of the time), but that will be true only insofar as the victim (or some other member of the moral community) has a legitimate claim against being gratuitously harmed.
As for the issue of whether or not a characterization of wrongness is enough here, I don’t know. I’m inclined to think it’s not, and Parfit’s remarks throughout suggest that that’s not what he’s gunning for. But as the original objections seemed to have arisen in response to the move to contractualism generally, that’s a legitimate response, I think, one that Scanlon himself makes (as Pablo rightly points out).
Dave,
Even within your accountability approach to wrongness, and even under a claim-based approach, I still want to say that KCF is a counter-intuitive moral criterion. It’s not because others can justifiably make claims on me not to punch them that punching them is wrong; rather, it’s because of the content of those justifications, namely that punching them gratuitously harms them (or some such). So (by my intuitions, anyway), someone can say “harming me was wrong” not because he has a claim against me harming him; rather, he has a claim because harming him is wrong for independent reasons. Maybe it’s wrong because it reduces his welfare, or causes suffering, or treats his humanity as a mere means. But those seem like our justifications for the claims, rather than being justified by the claims. Or look at it in your terms: the claimant must have a “legitimate” claim. Whatever makes it legitimate is the criterion of rightness, the basis for the act being wrong that is more fundamental than, and explains why, a pro-harm principle is not among “the principles whose universal acceptance everyone could rationally will.”
On an interpretive sideline, there are a couple of places where Scanlon sounds like he agrees, by basing rightness on the more fundamental value of humanity (e.g., p. 268). Though he also says many things that seem to run against this (e.g., the intro, among many other places).
To Pablo: I think your other worry about maxim description is indeed fair (I’ve gone on about similar worries so much over the last couple weeks that I should probably stop now!).
David,
you wrote:
‘And back to Jussi: Parfit claims that what it means to imagine being relevantly like the people whom our acts affect is to imagine ourselves having their “desires, attitudes, and other physical or psychological features” (183). Beyond my earlier worries about identity, there’s no explicit specification here that I would have to imagine myself with the other’s nonmoral beliefs.’
I don’t think the Nazi world-view can consist purely of non-moral beliefs. Repulsive ideas, such as that Jews are inferior and evil, sound more like evaluative and moral beliefs. Thus, they would be on the side of attitudes (which beliefs are too) and other psychological features.
Josh, you wrote:

“Maybe it’s wrong because it reduces his welfare, or causes suffering, or treats his humanity as a mere means. But those seem like our justifications for the claims, rather than being justified by the claims.”
What’s interesting about what you say is that each of the justifications you give — reducing welfare, causing suffering, treating humanity as a mere means — sounds legitimate to me, despite appealing to very different criteria of wrongness. So what might possibly unite them all into a single fundamental criterion of rightness? Simple: their universal acceptance is something everyone could rationally will.
(BTW, “‘legitimate’ claims” was just shorthand for “claims referencing principles whose universal acceptance everyone could rationally will.”)
Jussi: It depends. Is the belief that some entity is non-human, or even sub-human, necessarily a moral belief? I’ve wondered explicitly about this issue when considering Susan Wolf’s article “Sanity and the Metaphysics of Responsibility,” in which she argues that Nazis were partially normatively insane, and so partially non-responsible for their actions. Normative insanity is defined, roughly, as the inability to recognize moral reality, whereas cognitive insanity is defined as the inability to recognize nonmoral reality. But it’s unclear whether or not the inability to recognize that there are no relevant psychological differences, say, between oneself-qua-Nazi and Jews is an example of normative or cognitive insanity, given those construals.
There are two ways to go here, I think. First, ‘subhuman’ could be a thick term which, if it applied, would imply negative evaluations of the object and normative conclusions about what to do to things in that category. In this sense, it is of course an empty term. But having the belief that something satisfies the term would count as a moral belief because of the evaluative/normative content of the belief. Second, it could be a purely descriptive term, in which case no evaluative or normative implications about what to do to beings in that class would follow purely in virtue of that belief. In this case, the Nazi would need some (false) evaluative/normative beliefs about what ought to be done to subhumans if there were such things in the purely descriptive sense. Either way, the Jew would lack the normative/evaluative beliefs.
Dave: You write: “What’s interesting about what you say is that each of the justifications you give — reducing welfare, causing suffering, treating humanity as a mere means — sounds legitimate to me, despite appealing to very different criteria of wrongness.”
What are your grounds for asserting that these three appeal to “different criteria of wrongness”? Perhaps, they each amount to the same thing. It seems to me that the three all appeal to the very same criterion: namely, they all entail producing disvalue without producing any compensating value. Reducing welfare is bad, causing suffering is bad, and treating humanity as a mere means is also bad.
My own inclination would’ve been to say that those things are bad only insofar as they violate principles to which everyone would reasonably consent. What makes reducing my welfare (without compensating value) bad? It’s something to which the reasonable me would never consent. What makes reducing my welfare with a certain level of compensatory value permissible? It’s something to which the reasonable me might consent. (I’m borrowing a Scanlon-esque formulation here, and I’m putting things roughly, but I’m sure you get the idea.)
Jussi: Two things. First, I don’t understand what you mean when you say, “having beliefs that something would satisfy the term [on the thick understanding of “subhuman”] would count as a moral belief because of the evaluative/normative content of the belief.” Why is my belief that X meets the condition of some evaluative category itself an evaluative belief?
Second, if the term is purely descriptive, why think the Jew wouldn’t share the Nazi’s evaluative belief about what ought to be done to subhumans generally? He’d just think he himself is not one.
Dave, you wrote:

“What’s interesting about what you say is that each of the justifications you give — reducing welfare, causing suffering, treating humanity as a mere means — sounds legitimate to me, despite appealing to very different criteria of wrongness. So what might possibly unite them all into a single fundamental criterion of rightness? Simple: their universal acceptance is something everyone could rationally will.”
I find Doug’s answer here kind of intriguing, but I had something different in mind. I only mean to say that those are each plausible candidates for the fundamental criterion, whereas the contractualist principle is not. I (surprise, surprise) believe that some version of the formula of humanity supplies the moral criterion, in which case principles that prohibit reducing welfare and causing suffering are merely subsidiary to that fundamental principle.
But that’s not really the issue at hand, which is whether the contractualist principle is also a plausible candidate. If it’s supposed to be fundamental, then there’s no reason why we might or might not consent to any given practice or sub-principle. Contractualists like to emphasize that the consent must be reasonable or rational as a way of avoiding that problem, but to me that just pushes the problem back. Either it’s reasonable (or rational) by virtue of some more basic principle (e.g., I can refuse consent because it wrongfully causes gratuitous harm) or not. If the former, (unadulterated) contractualism is false; if the latter, it’s too permissive.
As you put it in your latest reply to Doug:

“My own inclination would’ve been to say that those things are bad only insofar as they violate principles to which everyone would reasonably consent.”
At the risk of sounding like a broken record, this is what seems to me to be the counterintuitive claim: rather than saying that reducing my welfare is bad because I can’t reasonably consent to it, my intuition is to say that I can’t reasonably consent to it because it’s bad. If that’s not what makes my withholding consent reasonable, then what does? And, for whatever answer is given, why isn’t the content of that answer itself the criterion?
David,
First, I thought evaluative beliefs just were beliefs whose content is that the object satisfies some evaluative criteria. Thus, the belief that ‘X is a subhuman’, on the thick understanding, would be just the thought that X satisfies such-and-such evaluative criteria.
Second, well, how many people, other than Nazis, have thought that anything that is not quite a human on some descriptive criteria should be tortured, poisoned, and burned? I think almost everyone shares attitudes favoring more humane treatment of not-quite-humans, if there were such. Nazis were the ultimate speciesists, with a very deranged idea of the species.