The marginal cases argument (MCA) has been used very effectively in defense of the direct moral standing of sentient non-humans. Any being that has direct moral standing is such that our treatment of it matters directly, independently of how it affects others. Direct moral standing is worth wanting: it ensures protection against cruelty, mistreatment, destruction, exploitation, and the like. Probably works of art and other possessions have no more than indirect moral standing; certainly normal adult humans have direct moral standing.

MCAs are designed to undermine our confidence that the possession of a particular property R is necessary for direct moral standing. Well, it’s a little more complicated. Abbreviating a lot, MCAs ask us to answer this question:

Q1. Are you more certain that having property R is necessary for direct moral standing or that human being H (where H is a non-R) has direct moral standing?

Let me instantiate the question so that it sounds more familiar.

Q2. Are you more certain that Rationality is necessary for direct moral standing or that Suzie (say, a normal, 1-year-old, not-quite-rational human) has direct moral standing?

Most thoughtful people seem more certain that Suzie has direct moral standing—that it matters directly what we do to Suzie, unlike what we do to chattel or to bits of property—than that rationality is necessary for direct moral standing. And, as it happens, there are counterparts of Suzie on the margins of most other properties allegedly necessary for direct moral standing. Most thoughtful people seem more certain that the counterparts of Suzie have direct moral standing than that those properties are necessary for direct moral standing. It appears—as most everyone knows—that the only rational way to ensure direct moral standing for all human beings is to concede direct moral standing to sentient non-humans as well. And so, many philosophers and others have concluded that we should expand the circle of beings that matter directly.

But the consistency requirement in MCA does not seem sufficient to determine which properties are necessary for direct moral standing. In answering question Q1 above we know far too much. We know who possesses the property and who doesn’t, so we can choose necessary properties in a way that favors the humans we care about and disfavors those we don’t. If rationality were necessary, some humans we care about would be excluded from the moral community, and similarly for autonomy, language-use, memory, self-consciousness, and so on and on. Therefore none are found to be necessary.

But there is a straightforward way to mitigate the influence of this information and make the marginal cases argument more impartial. Suppose you have to answer Q1 under these assumptions:

1. You do not know who possesses the property allegedly necessary for direct moral standing.

2. You do not know who does not possess the property.

3. You do not know who is on the margins of the property.

Now suppose the question arises as to whether some early-term fetus or other (call it F) has direct moral standing or whether aborting F is perfectly permissible. It is typically argued that the answer to that question is clearly no, F has no direct moral standing, since F does not possess (or does not possess sufficiently) some property R that is necessary for direct moral standing. But whether this answer is correct depends on how we are prepared to answer Q3:

Q3. Are you more certain that property R is necessary for direct moral standing or that F (who is, for all you know, identical to you, or identical to a precursor of you) has direct moral standing?

Q3, I think, raises some interesting questions in both moral epistemology and moral methodology. It assumes (more broadly than Rawls did) that you do not know where on the developmental scale you are, and of course it raises similar metaphysical worries. But Q3 points up that it is easier to reach flimsy conclusions about which properties matter to direct moral standing when little is at stake for us than when a great deal is.

But should theoretical conclusions be given more credence when they’re made with something at stake? Practical reason and theoretical reason intersect here, and no doubt some epistemologists would urge that theoretical conclusions should be informed by such practical considerations.

For what it’s worth, there is almost no property R that I am so certain is necessary for direct moral standing that I am willing to bet my adult life on it. So I’d likely answer Q3 by saying that I’m less sure that property R is necessary for direct moral standing than that F has direct moral standing.


43 Replies to “Marginal Cases and Early Term Fetuses”

  1. You say, “there is almost no property R that I am so certain is necessary for direct moral standing that I am willing to bet my adult life on it.” What about consciousness? It may be that, given certain plausible necessary conditions for personal identity, an early term fetus that is lacking in the capacity for consciousness could never be identical to a person, let alone to a person with a normal adult life. In that case, you don’t need to know where on the developmental scale you are to know that you’re not betting your adult life on it.

  2. I find that a hard question, Doug. I did say “almost no property”, but there is something more to say.
    Take an adult with no capacity for consciousness. Does he have direct moral standing? I don’t know. I don’t think I can use and dispose of him (no matter how others may be affected by it, positively or negatively) any way I see fit. I can dispose of living plants and shrubs in the dumpster; can I dispose of him in the dumpster? I don’t think so, but I’m honestly not sure what properties are necessary for direct standing. (I’m similarly doubtful that the standard litany of properties allegedly necessary for personhood are actually necessary; I’ve never seen so much as a decent argument for that claim.) If it’s just my bad intuitions on the matter, I’m open to arguments.
    On the personhood question, my answer to Q3 does not depend on my being identical to F. The early term fetus need only be a necessary precursor to me: i.e., F need only be such that, if F’s life is terminated, I will have no adult life. This is the relation that matters to my decision. Further, it need not be *me* making the decision: an ideal representative of mine could make the decision instead. The ideal representative might be better situated epistemically. He might plausibly not know what developmental stage I’m in. On the other hand, it is difficult (though perhaps not impossible) for me to abstract away from such knowledge.

  3. F need only be such that, if F’s life is terminated, I will have no adult life.
    Mike, doesn’t that yield the absurd result that sperm and ova have direct moral standing?
    More generally, your reasoning seems to exclude the possibility that there could be necessary precursors to persons which are themselves lacking in direct moral standing. Since that possibility is actual, you really shouldn’t be excluding it!

  4. On the personhood issue, I agree with Richard. Here’s another angle on essentially the same point: You say, “F need only be such that, if F’s life is terminated, I will have no adult life.” Suppose that we all evolved from the same unicellular organism. In that case, F could be that unicellular organism. But, surely, unicellular organisms don’t have direct moral standing even if someone like you might evolve from one some day such that it is true to say that if F’s life is terminated, that possible person will have no actual future adult life.
    Regarding the adult with no capacity for consciousness, it seems pretty clear to me that such a being has no direct moral standing and we can account for the fact that we can’t dispose of him in the dumpster in at least two ways: (1) it causes emotional harm to conscious beings or (2) it makes it the case that the person that once was (the conscious person) was in a harmed condition back when he was conscious — the Pitcher-type account of posthumous (or post-consciousness) harm.

  5. Thanks Richard, I surely don’t intend that result. Individual sperm and ova do not constitute *a* precursor, since, as I am using the term, there is no being there to be a precursor. There is rather the potential for a being that might be a precursor. I say nothing about potential precursors. As I am using ‘precursor’ it refers to post-embryonic beings. What might be said about embryonic beings, I don’t know.
    I hope I don’t assume that any precursor of me might fail to have direct moral standing. Rather, I argue that I don’t know enough about the properties necessary to direct moral standing to reasonably conclude that some precursor of mine fails to have that standing (and neither does anyone else!). So I don’t see that any precursors of persons (as I am using ‘precursor’) actually fail to have direct moral standing.

  6. “Suppose that we all evolved from the same unicellular organism. In that case, F could be that unicellular organism. But, surely, unicellular organisms don’t have direct moral standing even if someone like you might evolve from one some day such that it is true to say that if F’s life is terminated…”
    The question does not concern organisms that I *might* evolve from, but that being that I will develop from. As I noted to Richard, I would restrict my use of ‘precursors’ to post-embryonic beings that stand in some linear (perhaps purely material, perhaps purely genetic) relation to an adult human being. My intuitions about unicellular beings and how we might have evolved from them are way too thin to be worth anything, and frankly so is the science on the matter.
    But then you say “we can account for the fact that we can’t dispose of him in the dumpster in at least two ways: (1) it causes emotional harm to conscious beings. . .”
    But I ruled that out. I don’t believe I can dump him no matter what anyone thinks or feels. I still think that.
    You add,
    “. . .or (2) it makes it the case that the person that once was (the conscious person) was in a harmed condition back when he was conscious — the Pitcher-type account of posthumous (or post-consciousness) harm”.
    Posthumous harm theories are very controversial, since they all assume some preference-theoretic basis for well-being that I doubt is right. But the irony is that you want to explain a harm to the living by what you take to be a more plausible harm to the dead. Finally, I never assume that the person ever was conscious. Maybe he never was a conscious being. Still, I’m pretty sure that you’re not now going to tell me that I might as well chuck him in the dumpster.

  7. I think this is a very important point:
    “In answering question Q1 above we know far too much. We know who possesses the property and who doesn’t, so we can choose necessary properties in a way that favors the humans we care about and disfavors those we don’t.”
    I think the above remark hits one serious cause for the dissatisfaction I feel with much of contemporary ethics. Much contemporary ethics sounds to me like rationalization. I suspect that our feelings about ourselves and our conspecifics have far too much influence on moral theory (“Well, I know humans are more important than animals, so let’s see what I have to say in moral theory…” and “Well, I know that killing human babies is horribly wrong, so let’s see…” are equally good examples of this).
    However, I don’t see that you, Mike, have given a solution to the problem. If the problem is that people are biased towards themselves and towards humans, that bias could perhaps be removed if you could remove from us the knowledge of what properties (rationality, sentience, etc.) humans have, or if you could remove from us the knowledge that we are human. But the bias won’t be removed by merely asking us to hypothetically imagine that we don’t have that knowledge. The person trying to answer your Q3 still in fact has the problematic biases and so is going to answer your question in a biased way, even though the person you’re asking him to imagine would not have those biases.
    This also points up, by the way, one thing that is wrong with Rawls’ original position thought experiment: Rawls justifies its use largely by the idea that the veil of ignorance eliminates biases. But it doesn’t eliminate biases on the part of the people in reality who are evaluating the theory, and those are the biases that matter.

  8. Mike H.
    I agree about what at least appears to be a rather thoughtless bias. There are salient examples. My suggestion was for a way to “mitigate” the effects of irrelevant information in moral deliberation. There has to be some deliberate effort not to allow that information to affect the process. The hypothetical situation is a device designed to help in this regard. But it is unquestionably true that such hypothetical situations will not prevent those with axes to grind and insufficient goodwill from taking advantage of the hypothesis to legitimize biases. On the other hand, any moral method must assume the good-faith effort of those employing it.
    Rawls’s situation is different. Of course Rawls expressly endorses the social contract view of morality as a cooperative venture for mutual advantage. So the principles selected are actually designed to be biased toward parties to the contract. All the original position does is make the initial conditions of contractors as equal as possible. So I don’t think Rawls would be too upset at the charge that the principles are biased: morality is *supposed* to pay off for the cooperators.

  9. Doesn’t this argument assume I have a relatively high subjective probability for early fetuses having direct standing? And if direct standing is the sort of thing that protects a thing from destruction, as Mike A’s introduction to the topic indicated, I don’t in fact think I have such an antecedent subjective probability. It may well be that for some candidate criteria for getting direct standing my credence that these are the right criteria is lower than my credence for the claim that young fetuses have direct standing. But for just about any criterion I have not definitively ruled out, my subjective probability that it is a correct criterion is higher than my subjective probability that young fetuses have the required moral standing.
    I don’t have very determinate views on what it takes to have direct moral standing, though my guess is that the thing at least needs to be the sort of thing to have a perspective on things, whatever that means. But I don’t think that one has to have any idea of what the criteria are to be able to rule some things out as having such standing. I’m relatively sure that the chicken egg I recently ate does not have direct standing. I’m relatively sure amoebas don’t. I think that young fetuses are more similar to amoebas or fertilized chicken eggs than to the clearer cases of things with direct standing. So even without a clear idea I can be justified in thinking that fetuses don’t have such standing.
    I also don’t get the point about excluding bias through some analogue of a veil of ignorance. You have to remember that Rawls never put any real person behind such a veil. Rather, it was a device for highlighting certain sorts of arguments and bracketing certain other sorts of arguments. The arguments that survive are those that don’t depend either on knowing one’s position in a social structure or on knowing how likely it is that one will wind up in such a position. (As a result, our attention is turned to considerations that would be acceptable to all people in a given social structure – or at least that is the goal.) We, not ourselves behind a veil of ignorance, can tell which arguments those are.
    Insofar as any suggestion for a criterion for direct standing does not make reference to the identity or lack thereof, it seems that the suggestion is such as to pass the test. Insofar as arguments for any given criterion also don’t make such reference, they too would pass the test. But just as Rawls knew that his criterion of justice would make the less well off better off while he was writing the book, we know that we either do or don’t satisfy the candidate criteria offered, and we know whether the arguments rest on features that we actually have or do not have.
    Finally, insofar as we are testing criteria with intuitions about cases, I don’t even see how the procedure would get off the ground. The veil works in TOJ because, given the constraints of the decision problem, we know roughly what arguments can still be offered. But I don’t even see how to begin to use a veil to sort between people’s intuitions. I sincerely doubt that our intuitions are at all influenced by knowledge that we are not fetuses, but insofar as an intuition about such particulars does not carry with it an argument that either depends or does not depend on that fact, I don’t see how we (situated without any veil) could show it one way or the other.
    I hope these last remarks aren’t too cryptic to follow.

  10. “Doesn’t this argument assume I have a relatively high subjective probability for early fetuses having direct standing?”
    How? I assume only this: (1) some level of development is necessary for direct moral standing (dms)–I concede that the level of development must be post-embryonic or fetal, (2) I otherwise do not know what level of development is necessary for dms.
    My claim is that your conclusion about what properties are necessary (or, if you like, what level of development is necessary) for dms will be less partial *if* you are not in a position to know your own level of development (or, if you do not let information about your own level of development affect your deliberation on the matter). I do think that the information that most moral philosophers have that they are not fetuses would affect their conclusions on this question, just as information that they are not among the least advantaged members of society would affect their conclusions about distributive justice.
    But then you say, “But I don’t think that one has to have any idea of what the criteria are to be able to rule somethings out as having such standing.”
    Right. My pencil doesn’t have it either. But I’m unmoved by “more in common with” arguments. It doesn’t matter how much they have in common. What matters is whether they have in common what is relevant to dms.
    You say, “I also don’t get the point about excluding bias through some analogue of a veil of ignorance”
    I don’t understand this. There is no question (is there?) that the information that Rawls precludes is information that, according to Rawls, would bias the choice of principles. Unlike Gauthier, for instance, Rawls excludes information about how well I did in the social lottery. Why? Because it would obviously bias my choice of distributive principles in favor of those who did better in the lottery. Knowing I did well in the social lottery, I’d be *much* less inclined, for instance, to find rational the choice of the difference principle. I’m sure I would not find it a rational choice because, given my information, it would not be a rational choice. So I don’t understand the remark above.
    The fact that the veil of ignorance is simply a heuristic device–that the same conclusions can be reached using a less colorful form of argument–doesn’t (I think) show that the veil is not designed to mitigate the influence of morally irrelevant information on the choice of principles. And mitigating the influence of morally irrelevant information just is mitigating bias, wouldn’t you say?

  11. Mike A writes: “There is no question (is there?) that the information that Rawls precludes is information that, according to Rawls, would bias the choice of principles.”
    There are various readings of what the veil is doing for Rawls, but I don’t think the right reading is one on which the idea is simply to bracket irrelevant information – though I’m sure he says that it brackets irrelevant information at certain points in his discussion. Even if he sometimes does say that sort of thing, I think there are better readings of what he is up to. (What I say below no doubt owes a lot to Scanlon and Cohen’s readings of Rawls.)
    I think in the end the best reading is one on which it is designed to limit the considerations for or against a conception of justice to only those that are acceptable from each position. The way I read it, the veil is part of a package which includes the various other features in the OP, and only with those other features is it suited to his purposes. It is by coupling the veil with the idea that people are motivated by self-interest but not envy that he gets the result that deliberators are forced to consider how acceptable the society is from each person’s point of view. So, on my reading, it isn’t just that it brackets irrelevant information, it is that the package focuses the discussion on certain sorts of positive information about the society to be designed.
    Here is at least one bit of evidence that that is the right reading: It is crucial to the argument that the deliberators don’t know the likelihoods of winding up in any given position, so they are not supposed to think that they will most likely wind up in a position that is more common than one that is less common in any given society. I don’t think you can get a non-holistic (that’s not-so-vicious circularity, in other terms) justification for this constraint. It is not obvious that knowledge of probabilities would be irrelevant unless you consider what Rawls wants to do, which is to show that the resulting society is not such as to raise legitimate objections from any of its members. If the parties knew the probabilities they would be able to discount the views of those in the less common situations, and Rawls does not want that because he wants to rule out trade-offs which make some miserable in order to make the vast majority better off.
    Related to this, the real mode of argument is one on which he constructs a choice situation with one eye on where he hopes to end up, and he adjusts where he ends up as he plays with the features of the choice situation to get it to limit the considerations to ones which he thinks are legitimate. This is part of what reflective equilibrium is all about. He has intuitions both about relevant sorts of argument and what the upshot should be and he revises these as he tries to construct a coherent package.
    My lack of understanding is just about how that is supposed to work for the sorts of issues you have in mind. The theorist (who does know his/her situation, as Mike H points out) is making an argument about what she would decide if her knowledge were constrained in certain ways. So the analogue here would be for you or me to make an argument that we would decide that the appropriate criteria for dms would be such and such if we excluded irrelevant information. So suppose we agree that knowing that I am not a fetus is a kind of irrelevant information. (I’m not sure it is, but I’ll grant it for now.) Then the next stage would be to argue that without this information we would conclude that the right criterion was such and such. But I don’t think we have any more idea of what we would think bracketing the information in question than we did before we bracketed the information in question. So we’ve gotten no further towards resolving the issue in question. Rawls is in a better position precisely because he has already narrowed things down. He doesn’t just say, “What would I think if I did not know who I was?” He asks, “What would self-interested parties who were not motivated by envy and who did not know the odds of winding up in any given social position choose to do if they wanted to cooperate with others on terms they could not reasonably reject?” That’s a much more well-defined question to answer.
    And he got to asking that question by trying to reach reflective equilibrium among beliefs about justice and so on while not behind a veil of ignorance. So I don’t think the role of the veil is to limit Rawls’s biases by making it impossible for him to take certain info into account. It is rather to highlight a certain relatively specific positive argument.

  12. Mark,
    Maybe I’m misreading you, but I don’t see a large difference between the position that the veil is designed to filter irrelevant information and the role of the veil that you elaborate. So, here, you say:
    “I think in the end the best reading is one on which it [the veil] is designed to limit the considerations for or against a conception of justice to only those that are acceptable from each position”
    Yes, right. I don’t think I said anything inconsistent with this. What I’m asserting is that these considerations are limited to just those that are morally relevant. Of course what is relevant under the veil is still open to revision; some revision might be necessary to reach reflective equilibrium. So there is not much here I’d argue with.
    But then you add “[s]o suppose we agree that knowing that I am not a fetus is a kind of irrelevant information. (I’m not sure it is, but I’ll grant it for now.) Then the next stage would be to argue that without this information we would conclude that the right criterion was such and such. But I don’t think we have any more idea of what we would think bracketing the information in question than we did before we bracketed the information in question. So we’ve gotten no further towards resolving the issue in question”.
    I disagree that we have made no progress toward resolving the issue. In Rawls’s veil you must consider the possibility that you occupy the least favored position in terms of social and natural goods, and this affects what principles it is rational to select. In the situation I describe you must consider the possibility that you (or a precursor) occupies the position of the fetus. Knowing that, I ask, do you think the evidence you have justifies you in concluding that the fetus does not have dms? I’m suggesting that you’ll look much more closely at your evidence under these conditions. I’m suggesting further that the evidential standards you think appropriate might reasonably change under these conditions. So I do think, in this sense, it moves us closer to a solution.
    Maybe you don’t see these as consequences of my assumption. Or maybe you don’t see these as esp. valuable consequences. I don’t know.

  13. I would have thought that, on Rawls’ view, you must assume that you are not a fetus when the veil lifts, indeed, that you are a minimally rational, abled, sane adult. The least favored position can’t be non-rational, semi-rational or impaired, for instance. (This is why it is a problem how Rawlsian justice deals with the disabled, infants and so on.) Or have I missed something here?

  14. Mike, you write:
    In Rawls’s veil you must consider the possibility that you occupy the least favored position in terms of social and natural goods, and this affects what principles it is rational to select. In the situation I describe you must consider the possibility that you (or a precursor) occupies the position of the fetus. Knowing that, I ask, do you think the evidence you have justifies you in concluding that the fetus does not have dms? I’m suggesting that you’ll look much more closely at your evidence under these conditions. I’m suggesting further that the evidential standards you think appropriate might reasonably change under these conditions.
    I think that my sense we haven’t really made progress is two-fold. First off, I think that it is the combination of requiring self-interested reasoning while removing information that allows one to assess one’s prospects that does the work for Rawls. Since I don’t think that in real life I’m required to reason in a self-interested fashion, removing the information about who I am does not really constrain me very much. My own views on these matters have got very little to do with my interests in myself. Removing my knowledge of who I am doesn’t change it much. If I grant that a precursor of mine was a fetus (which given the right reading of precursor I will) my reaction is that this should not influence my judgement about whether they have direct moral status. On the other hand, given that fact plus a demand that I reason about what to do in a self-interested way, it won’t be any surprise if I decide to grant fetuses dms. But we have no justification in this context for the demand that I reason in a self-interested way. (Rawls does for his purposes, which is modeling concern to cooperate with others on terms they cannot reject.)
    Second, I think that applying such a constraint to arguments (such as: no premises depending on knowledge of who one is) is much more straightforward than applying analogous constraints to intuitions about cases. With intuitions, there are no premises to examine for influence by the relevant bit of knowledge. With the first sort of case we don’t even have to eliminate the biasing interest to eliminate its illegitimate influence. We just take out the offending premise. What do we do in the second sort of case? We don’t even really know whether there is a biasing influence on our intuition.
    And this difference is not unimportant, because it is only because arguments are not just intuitions that we can in fact remove the influence. We do have the relevant knowledge and in the real world we cannot eliminate it. But when we have an argument we can identify where the fact in question influences the argument and we can take it out by eliminating the step that depends on facts of the sort we are not supposed to take into account. I just don’t see what we can do to remove the influence of biasing knowledge from our intuitive judgements in a similar way. In the absence of knowing exactly what its influence is, we cannot correct for it, if only because we might over-correct.

  15. Mark, you say,
    “My own views on these matters have got very little to do with my interests in myself. Removing my knowledge of who I am doesn’t change it much. If I grant that a precursor of mine was a fetus (which given the right reading of precursor I will) my reaction is that this should not influence my judgement about whether they have direct moral status”
    I find this remarkable. I don’t doubt it, I just think it’s atypical. I’m certain that the (not necessarily conscious) influence of self-interested (group-interested) motivation is much greater than is reflected in your better and nobler motives. And so, in my view, it is something that must be cautiously guarded against in moral reasoning. This is no doubt a point on which we radically disagree. In any case, this is why I conclude that knowing your level of development would bias your deliberations. Well, not your deliberations, but certainly mine.
    I’m not sure I’m tracking the significance of the second (larger) point above. Why does the intuition/argument distinction matter so much here?

  16. Mike A,
    You ask:
    I’m not sure I’m tracking the significance of the second (larger) point above. Why does the intuition/argument distinction matter so much here?
    Just because it makes it easy to track when a bit of information is playing a role. If the fact that it is in my self-interest plays a role in an explicit argument, we can eliminate the premise that states that it is in my interest and see if the argument is still any good. (I admit that an idea that we want to bracket might play a role in the plausibility of a premise, but we can ask about how the premise is justified and go from there.) On the other hand, if some bit of knowledge is part of the explanation for my having a certain intuition about the case, we may have no way of figuring that out, for example if the bit of information is something I cannot help but know (as is the fact that I am not a fetus).
    Of course whether it is having an influence or not does not settle whether that influence would be legit or not. Plenty of background knowledge may go into making an intuition plausible without it thereby undermining the intuition.

  17. Robert, you write,
    “I would have thought that, on Rawls’ view, you must assume that you are not a fetus when the veil lifts, indeed, that you are a minimally rational, abled, sane adult. The least favored position can’t be non-rational, semi-rational or impaired, for instance.”
    I think you’re right that assuming otherwise might raise some serious metaphysical/epistemological problems. I do know that animal-rights theorists anxious to invoke Rawlsian impartiality have just this sort of problem. “What do you mean, what if I’m a gerbil?” I think these problems might be accommodated by an ideal representative capable of abstracting away from individual and species-specific interests. But this is obviously not the complete answer to this serious problem.

  18. I believe the way that Rawls dealt, initially, with children and other “not fully rational and able” persons was to stipulate that each knows that, when the veil lifts, he is head of a unit such as a family. So some sort of filial concern is supposed to fill the gap, then, for contractors; they make agreements knowing they will have concerns for their children and the future, as well as the disabled. Clearly that won’t work, but, in any case, contractors are not supposing they might turn out to be marginally of moral status when the veil lifts. They know they will have full moral status.
    In any case, Rawls’ view is really just a position regarding premises allowed in arguments for principles of justice. There’s no issue of bias influencing practical reasoning. If you construct an argument using no premises that refer to the character of those who hold the opposed view, your argument won’t be affected by the fact that you think they’re creeps.

  19. Robert wrote:
    In any case, Rawls’ view is really just a position regarding premises allowed in arguments for principles of justice. There’s no issue of bias influencing practical reasoning. If you construct an argument using no premises that refer to the character of those who hold the opposed view, your argument won’t be affected by the fact that you think they’re creeps.
    I think this very succinctly puts most of what I was trying to say about Rawls and his use of a veil of ignorance to constrain arguments.

  20. About the problem of disposing of mindless humans in trash bins:
    I don’t think that our reluctance to dump the human with no capacity for consciousness in the trash bin is due to the fact it has direct moral status. A brief (but good) argument that consciousness is necessary for moral status is that consciousness is necessary for anything to count as harming or benefiting a thing. Being dumped in the garbage bin doesn’t harm the anencephalic human any more than it would harm a candy wrapper to be dropped there. It wouldn’t care.
    Consider an analogous case. You bring flowers to the grave of your dead loved one. (Even sophisticated moral philosophers might do this.) Is this to benefit the dead loved one? It doesn’t benefit the corpse. It doesn’t seem really plausible that it retroactively benefits the person back when he was alive. One reason you might think it would benefit him would be that perhaps it satisfies a preference he had. But suppose that he had never thought about the question of flowers on the grave and did not have any feelings about it. (Maybe the dead person was a baby who couldn’t have understood such things.) Those who loved that person would still find it appropriate to leave the flowers.
    Why? Because they use this action to express care and respect for that person, which is appropriate. What about the anencephalic human? Who or what is one expressing disrespect for by dumping the body in the trash bin? Is the problem that one expresses contempt for the mindless body (but so what?)? I think, rather, that our reluctance to dump the body in the dumpster is explained by our feeling that it expresses a kind of contempt for human beings. Even though this thing is a sort of failed human, without the important and valuable properties of humans, it still seems like an expression of contempt for humanity, to just drop it in the dumpster like a used candy wrapper.

  21. Robert writes,
    “In any case, Rawls’ view is really just a position regarding premises allowed in arguments for principles of justice. There’s no issue of bias influencing practical reasoning”
    I don’t see it. There is no question that among the premises I cannot use is one specifying that I did very well in the social lottery. I assume we are agreed on that.
    But why can’t there be such a premise? There can’t be such a premise because that information–the information that I did well in the social lottery–is not relevant to the choice of principles of justice. That information allows me to choose principles that are *biased in favor of* those who, through sheer chance, find themselves in advantageous social circumstances. Such principles will license a distribution of social goods that is not merit-based and that is in violation of what I take to be Rawls’s central motivating intuition.

  22. Mike H,
    “A brief (but good) argument that consciousness is necessary for moral status is that consciousness is necessary for anything to count as harming or benefiting a thing.”
    I’m not sure whether this is supposed to be an analytic truth, but I doubt that it is. There are all sorts of ways that you can be harmed that involve no consciousness at all. I might, for instance, shorten your life by a day or two without you ever being aware of it. I might lower your dosage of treatment for disease D in a way that produces no conscious effect.
    You might try to manage cases like this with a “deprivation” view of conscious harm: i.e., had I not lowered the dosage, you would have had a few more moments of consciousness. But I harm you even in cases where, had I not lowered the dosage, you would have by chance slept comfortably through the next day and then died.
    But there are other possibilities. Suppose Smith promises to write Jones a recommendation letter and, through sheer laziness or spitefulness, fails to do it. If he learned about it, Jones would be rightly aggrieved. Smith harmed Jones, even if by chance everything works out exactly as Jones had wished. But note that Jones is not harmed only after he learns about it; he rather learns that he has been harmed.
    There are yet others. If someone seriously and maliciously slanders you, that can constitute a harm, even if you do not learn of it and, by chance, it does not negatively affect you in any conscious way. It is not as though it suddenly becomes a harm after you learn about it. If that were true, the person who informs you about the slander would be doing the harming!

  23. Mark, you write,
    “I think this [Robert’s post just above] very succinctly puts most of what I was trying to say about Rawls and his use of a veil of ignorance to constrain arguments.”
    Here’s a reason to think that the OP (i.e. Rawls’s social contract argument) is itself the main argument for the two principles of justice. The social contract argument is not about (or not mainly about) constraints on other arguments one might put forth for principles of justice.
    It is right after Rawls offers his non-contract argument for fair equality of opportunity that he says,
    “none of the preceding remarks [about equality of opportunity] are an argument for this conception of justice, since in a contract theory all arguments, strictly speaking, are to be made in terms of what it would be rational to choose in the original position. But I am concerned here to prepare the way for the favored interpretation of the two principles of justice, so that these criteria, especially the difference principle, will not strike the reader as too eccentric or bizarre” (TOJ, 75).
    So pretty clearly Rawls thinks of his first non-contract argument as simply preparing the way for what he regards as the better or stronger, social contract argument for the two principles.
    But you seem to be reading Rawls the other way around, where the social contract argument prepares the way for the stronger or better non-contract arguments.

  24. Mike A,
    I admit my reading of Rawls does not fit everything he says in TOJ, but then his own subsequent reading did not either. (For example, there is some point later on where he withdraws the suggestion that his argument is part of the theory of rational choice or something to that effect.) My reading may be partly reconstructive, but the idea is to read it in such a way that it does in fact make for a decent argument. In this respect I don’t think I’m too far from a lot of other people who read Rawls.
    Since we are not in fact in the OP, the fact that people in the OP would choose in a certain way can provide reasons to us only if the reasons they give for choosing are or approximate reasons we could have for choosing a social structure. My (not at all original) thought is that insofar as the constraints on the OP force the self-interested parties to consider the acceptability of outcomes from each position in such a way as to give those in each position a veto, the argument models the sort of reasoning that would occur if people worried about the acceptability of a society to all of its members. Since we (not in the OP) should in fact think of the legitimacy of our society in this way, the construction models reasons we in fact have outside of the OP.
    On your comments to Robert about the relevance of information such as the fact that I did well in the social lottery: It is only irrelevant if I use it to provide self-interested reasons for preferring greater inequality. It isn’t irrelevant if I use it to justify taxing people in my position more heavily, because it is relevant to showing that the benefits of cooperation that accrue to me are no more deserved than those I would have under an alternative system. The point is that whether a fact is relevant depends on how it is used, and for Rawls’s purposes it is the package of self-interested reasoning plus lack of knowledge that plays the crucial role, not just the lack of knowledge.

  25. Mike A,
    I didn’t mean to say that x benefits or harms y only if y is conscious of x. What I meant was that x benefits or harms y only if y has consciousness, in general.
    I don’t actually think, intuitively, that you’d be harmed by losing a few hours of life spent asleep at the end of your life. However, if you are, that could be accounted for in terms of your preferences or other attitudes.
    I don’t have a general account of harm and benefit, but my thought was that harms and benefits require some sort of relation to some sorts of mental states. For instance, a benefit might cause someone enjoyment; or it might be something they desire, or would desire in some ideal conditions; or it might be something they believe to be good. But I find it hard to see how something that a person doesn’t have any opinion of, and doesn’t and wouldn’t care about in any way at all, could harm or benefit him.
    Btw, I don’t think the failure to write the letter in your example harms Jones, although it was still a violation of a prima facie obligation, the obligation to keep a promise. My keeping a promise to you doesn’t necessarily benefit you, even prima facie. Imagine that the letter of recommendation would have been a negative one (“his attendance was exemplary”, etc.) In fact, it would have kept Jones from getting admitted to any school. Now, maybe Smith should have written that letter, assuming he had rational grounds for taking a negative view of Jones’ suitability for graduate school (or whatever). (In fact, I think people often neglect their duties to say negative things, but that’s another topic.) But this wouldn’t be because Smith harmed Jones by not writing the letter.

  26. Mike H.,
    I don’t have a general account of harms either, but let me try one other case.
    Suppose you’re under a surgeon’s care and, during an operation, he removes a large part of your spleen, say, for kicks. Now, by chance (and this sometimes happens) suppose your spleen grows back and you never notice the difference. Now despite this lack of consciousness, the removal of a healthy organ for kicks is a pretty clear harm.
    I understand that your sense of it is that the harm has something to do with a possible (or ideal) experience, or something along these lines. My point is only this: the fact that someone is not conscious does not, by itself, entail that the person cannot be harmed. Assuming he can’t be harmed (and I’m less sure about that) there is some other necessary condition. That’s all I’m after.

  27. Mark you say,
    “The point is that whether a fact is relevant depends on how it is used, and for Rawls’s purposes it is the package of self-interested reasoning plus lack of knowledge that plays the crucial role, not just the lack of knowledge”
    I think I wasn’t clear. I meant that the social lottery information is not relevant in the choice of basic principles of justice, whether that argument is the social contract one or the intuitive one.
    I agree that, once we have made the choice of principles, we might be able to use the social/natural lottery information to justify an unequal tax scheme that places greater burdens on those with chancy advantages. I don’t know how far that might go. The inequalities in social goods that we would agree to in OP under maximin are those (I think) that would leave the more advantaged better off than most others. This is what drives those more advantaged to greater productivity. No? A heavier tax burden might siphon off that incentive.
    But these are finer points. I agree with your main point, supposing we are talking about arguments made after we’ve decided on the difference principle, etc.

  28. Mike H. & Mike A.,
    I like the following view of harm: S harms x by doing A iff S causes something bad to happen to x by doing A.
    Its competitor is: S harms x by doing A iff S makes x worse off by doing A than x would have been had S not done A.
    The competitor has easy counterexamples. Frances Howard-Snyder suggested a modified “Nancy Kerrigan Case” where Tonya hits Nancy in the knee, thus harming her, but Nancy is made better off than she would have been, because while at the hospital attending to the knee she discovers she has a curable illness in the nick of time.
    Anyway, if the view I like is correct, then it seems that some things can be harmed that are not conscious. Ecosystems for example. What do you think?

  29. “I like the following view of harm: S harms x by doing A iff S causes something bad to happen to x by doing A.”
    I’m not sure this can be right, Christian. Physicians often cause something bad to happen in treating internal injuries: they open the patient and cause bad (but anticipated) infections to happen. I don’t think this is a harm if the anticipated infection is treated along with the treatment of the internal injury. It might well be the only way to keep the person alive.

  30. I guess I want to say that during a surgery the doctor harms the patient, but that the harm is justified. The patient is not harmed overall because the doctor causes more good than bad. Nonetheless, there is a harm.
    Your example seems to rely upon intuitions about intentions, that causing something bad to happen to another for the right reasons is not a harm to them. But harm has to be broader, right? I could unintentionally harm someone.

  31. I can’t see how either of these principles has a chance of being right. Suppose your boss goes on 10 mins. longer than expected, causing you to get to the bus station just in time for an accidental gas explosion. The boss’s doing A causes something bad to happen to you. But *he* doesn’t harm you.
    Or suppose someone turns you in to the authorities for some serious illegal activity. He causes something bad to happen to you (perhaps prison time) and perhaps you are harmed, but *he* doesn’t harm you.

  32. This is an argument with a very high rationality requirement that the subject matter itself damages. You cannot abstractly speak of the marginal cases, because you identify yourself in them, even when, as with the a-conscious husk, there is no rational reason to. The Terry Schiavo effect is not a demonstration of rationality, but of rationalization out of fear. You have developed an argument whereby anyone positing to the contrary inspires a degree of visceral horror in the other.
    Moreover, this is an argument whereby all edge cases get rationalized in, simply because we type-analogue them to ourselves. Life is choice; the refusal to own up to choice is bad faith. Even if we accept that “all life has value”, not all life has equivalent value, and value is a non-absolute. Direct moral standing is similarly relative and subjective. You are attempting to make it absolute, but in so doing, you will insist that every conceivable human type-analogue is equivalent to yourself. You suggest, by way of this, that aborting a foetus is morally equivalent to killing you. I will suggest that the logical end of this chain is inaction and denial of will.
    The fallacy of moral equivalence is a useful hedge, but all it serves to do is wall off civilization from the pit. Civilized people will believe in the relative moral equivalence of beings type-analogous to themselves in the hopes that the sentients among the others will reciprocate, and not harm them. Many will still eat meat and screw their neighbor whenever possible, while talking down to the checker and the waitress, and honking at the car in front of them.

  33. Christian and Mike,
    We should distinguish two senses of “harm”: on balance harm, and pro tanto harm. So Christian’s view is that you harm someone pro tanto iff you make something bad happen to them; in the surgery case, the surgeon harms the patient in one respect, but benefits the patient overall.
    Perhaps to deal with cases like the gas explosion, Christian should change the principle to: x harms y iff x is the cause, or the salient cause, of something bad that happens to y.
    Still, I think the account is too weak: x harms y only by causing something to happen to y that is bad for y, not just bad in an agent-neutral sense. For example, suppose I help a criminal to escape from prison. Then I make something happen to the criminal that is bad. But I don’t harm the criminal.
    This is relevant to ecosystems, etc.: I don’t think anything that happens to an ecosystem counts as bad for it, though of course some things that happen to it may be bad for its inhabitants.

  34. Mike, the account is still too weak, isn’t it? You suggest,
    “x on balance harms y only by being the cause of something on balance bad for y to happen to y”
    Skydivers are often pushed from smaller planes for their own safety. Suppose I push Smith out of the plane and he is wearing the wrong backpack.
    I caused an on balance bad thing for Smith to happen to Smith, but I don’t think I harmed Smith at all.

  35. Mike H.,
    I like your modifications. I accept also that we need to distinguish:
    (a) F is bad,
    (b) F is bad for some x.
    My account was silent on the distinction, but this distinction is exactly what we need to decide whether consciousness is a requirement for harm. I take it that you will argue that consciousness is required for (b), but not for (a), and that it is (b) that approximates harm, not (a).
    Now, I wonder why we should think, even if (b) is close to the truth, that only conscious individuals can be substituted for ‘x’ in (b), so that only conscious individuals can be harmed?
    Mike A.,
    I think pushing the skydiver out harms him, although the harm is of course unintentional. Change your case, though. Suppose I know he has the wrong pack and push him. He falls to his death. Do I then harm him? It seems you want to say yes to that, but no to the unintentional case, so that acting from a certain reason is necessary for harm.
    Perhaps the correct way to decide whether ‘harm’ expresses an intentional concept is to either (1) consider counterexamples, or (2) consider marks of intentional verbs.
    As for (1), it seems natural to describe extreme cold as “harmful to one’s health”. And for (2), it seems that we can quantify into instances of ‘x harmed………’ Doesn’t this suggest harm is an extensional concept?

  36. Christian,
    The counterexample is not based on the assumption that I do not intentionally harm Smith. I’m suggesting that I might cause a bad thing without harming. Suppose Smith deliberately put on the wrong backpack in an effort at suicide. I still pushed him out and I still caused a bad outcome for him, but it was Smith who harmed himself.

  37. Mike A., I see. Your case is one in which you cause something bad, but not something bad for an individual. In that case, one might say (b) above is correct, and that harm requires causing something bad to happen to an individual. Then the case doesn’t pose a problem. Or, instead, one can argue that the individual who pushes Smith is not a salient cause (his pushing, that is), but that the dead guy’s prior unpacking is the salient cause, in which case one does not do harm, as Mike H. suggested above to help out my proposal.
    At this point, though, I am still inclined to say that pushing Smith is harming him, as is Smith’s own actions, and that the way we distribute responsibility to Smith and not the pusher, is a distorting feature of our intuitions about the case.

  38. “Your case is one in which you cause something bad, but not something bad for an individual.”
    But it’s pretty clear, isn’t it, that I cause something bad to happen to the individual. I cause him to die: that’s pretty bad for him.
    But you add, “Or, instead one can argue that the individual who pushes Smith is not a salient cause (his pushing), but that the dead guy’s prior unpacking is the salient cause”
    Yes, you could say this. But in the absence of some principled way of (i) making this distinction and (ii) showing that the distinction is relevant to our ascription of harmful conduct, the reply just begs the question.
    But you write, “At this point, though, I am still inclined to say that pushing Smith is harming him, as is Smith’s own actions, and that the way we distribute responsibility to Smith and not the pusher, is a distorting feature of our intuitions about the case.”
    Well, Smith succeeds in taking his own life, and taking his own life is what he expressly wants to do in this case. I can’t see that as anything short of imposing a very serious harm on himself. I am just the means he uses to achieve his goal. So I am not responsible for the harm at all. But we can set this aside for the moment, since the position you are now defending yields a reductio.
    Suppose, for reductio ad absurdum, that you’re right in making this claim,
    “the way we distribute responsibility to Smith and not the pusher, is a distorting feature of our intuitions about the case”
    If you’re right then Smith and I share responsibility in some way for the harm to Smith. Now consider this slight modification to the case. Smith does not deliberately put on the wrong backpack. Instead, he deliberately gives the wrong backpack to Jones and Jones puts it on unwittingly. Just as before, I’m doing the safe thing in pushing the skydivers from the plane. I push Jones from the plane and he falls to his death.
    On your account, the *distribution of responsibility exclusively to Smith* for the harm to Jones is a *distorting feature*. The correct conclusion, on your view, is that I am partly responsible for the harm to Jones. But that conclusion is plainly a reductio. It is obviously Smith who is fully responsible for the harm to Jones. Certainly no one would conclude that I am responsible at all for that harm.
    But my situation is perfectly analogous in these cases. Just as I am not responsible for the harm to Jones in this case, I am not responsible for the harm to Smith in the previous case.
    It might strengthen intuitions about this case to compare the situation above with this one. Imagine all is the same in the Smith-Jones case except that, instead of me pushing Jones from the plane, there is a mechanical device designed to launch the skydivers from the plane. The device operates perfectly; it does what it was designed to do. Jones is launched from the plane and falls to his death.
    It would be just false to assert that the device harmed Jones. Had the device malfunctioned or had the design been flawed then perhaps it might have harmed him. But it didn’t. My role in the previous case is exactly that of the mechanical device that is misused by Smith.

  39. Mike,
    I thought Christian’s point was this. You give these examples that are supposed to refute a view about harm. They all involve cases where somebody does an act that is overall bad for a person, but where, allegedly, the act does not harm the person. But they aren’t counterexamples; these acts really do harm the victims in the various cases. The reason we might be tempted to say they aren’t harmful acts is that we don’t assign moral responsibility to the person doing the act. But facts about responsibility are not relevant to facts about harm. (Hence responsibility facts are ‘distorting influences’ when determining whether some event is a harmful one.) When Smith pushes Jones out of the plane, his act really does harm Jones, but Smith is not responsible for the harm done by his act. If this was Christian’s point, then I agree with Christian.

  40. Ben,
    I assumed that Christian was referring to causal responsibility for the harm. In any case, I was using responsibility that way throughout.
    I agree that I am causally responsible for the bad thing happening to Smith. I don’t see how I harmed Smith (unless you are simply stipulating that ‘x harmed Smith’ means x caused a bad thing to happen to Smith; but I thought that was the open question here).
    This is why I was suggesting that the mechanical device causes a bad thing to happen, but does not cause the harm to Jones. It is rather Smith that causes the harm to Jones by means of the device (just as he does it by using me). Smith is both causally and morally responsible.
    I can’t imagine that he isn’t. It would be like saying that, no, it was the hammer that was causally responsible for the harm to Jones’s smashed head. All Smith did was use the hammer (as he used me and as he used the device on the plane).

  41. But of course saying that the hammer is causally responsible does not preclude saying that Smith is causally responsible. I’d say both are.
    (I’m going to Portland in a bit, so I probably won’t be able to continue this, but I might post something about harm when I get back. I think there’s a lot more to say.)

  42. Well, there is an overdetermination problem. Anyway, thanks Ben. Have a good trip! I look forward to your post. But there is no chance I’ll come around to the belief that the hammer is causally responsible for anything in that case! .. 🙂

  43. Mike A.,
    So Ben said what I would have said. That’s good because he said “I agree with Christian” and I am keen on agreeing with myself.
    Anyway, I wish I had a theory of “salience” to offer so that I could tell which causes in various cases direct us to harmers. I don’t have a theory. I’m inclined to say “all the causes are relevant to harm”, but that the salient ones are salient because we care about the way harm connects up with responsibility: we count hammers, charged electrons, etc. as “not salient” and forget about them, and we count agents who know better as “salient” and focus on their causal contributions to bad outcomes.
