In Mike Almeida’s recent post, this topic came up: what is it for a
person to harm someone? I’m interested in a more general question:
what is it for an event or state of affairs to harm someone? Here’s
the view I like best:
(H) X harms S iff X makes S worse off than S would have been had X not occurred or obtained.
Below the fold I defend the following disjunction: either (H) is the
correct account of harm, or harm is irrelevant (or maybe both).
There are a few ways people object to (H). Here’s one. Take Lance Armstrong. His getting cancer was evidently harmful to him. Yet, arguably, his life went better on the whole as a result of his getting cancer. (Certainly it seems to have helped his bike racing at least.) In general, it seems possible to overcome a harmful event and live a better life because of it.
We might say (as I think Mike Huemer said in the comments on Mike A’s post) that Armstrong’s getting cancer is harmful to Armstrong, but merely a pro tanto harm – it’s harmful to the extent that it causes intrinsic bads for Armstrong, but it’s not an all-things-considered harm. This seems plausible to me, but the objector is unlikely to be satisfied. After all, if I cause someone to get cancer, I should presumably be held responsible for doing so even if the person overcomes the cancer and is better off overall; but it would be odd to hold people responsible for mere pro tanto harms (which are routinely inflicted by doctors and such). What this shows, I think, is that facts about responsibility or blameworthiness do not track facts about harm. Perhaps they track facts about intentions, or about expected utility, who knows – that’s another topic.
Here is my main argument for (H). Suppose (H) is not true, and there are harms that do not make us worse off on the whole. Suppose I am in a situation where I have two options, O1 and O2. Suppose O1 and O2 would have an equal impact on my net well-being, and would affect nobody else. And suppose O1 harms me, but O2 does not (this is possible only if (H) is false). This would give me no reason to choose O2 over O1. The fact that O1 harms me should not affect my decision-making at all. The only thing that should matter to my evaluation of an option in this sort of case is its impact on my well-being. Harm is irrelevant.
This doesn’t show that harm is relevant if (H) is true. I suspect it might not be, for reasons something like those given by Alastair Norcross (in a recent Phil Studies paper called "Harming in Context"). I think talk of harm should be jettisoned altogether in favor of talk about well-being and intrinsic value. But maybe we’d be losing something important by not talking about harm anymore, and I’m just overlooking it. So here’s a question: why should we continue to talk about harm?
The notion of “harm” (defined as decreasing someone’s well-being) cannot do serious work by itself. We need a more textured notion such as “undue harm.” X can harm Y without doing so unduly (e.g. when a policeman takes a stolen object from a thief). It is when X harms Y unduly (e.g. when the thief steals the car from Jones, the rightful owner) that the notion of harm does serious work. This means that talk of harm (and talk of reduction of someone’s well-being) only works morally when certain substantive claims about people’s duties toward each other are assumed. This does not mean that the notion of harm (or of reduction of well-being) plays no role. It means that the role they play must be framed by some substantive moral principles.
Thus, we should continue to talk about harm, but we should frame such talk in certain ways (to be determined by your moral theory).
There is an interesting discussion of “harming” and “unduly harming” in Thomas Pogge’s book ‘World Poverty and Human Rights’ (Cambridge: Polity, 2002), e.g. on p. 130.
Ben,
I’m having a difficult time figuring out what you take to be at stake in this question. I think I agree with you that the fact that O1 harms me is no reason for *me* to prefer O2 to O1. But it might still give others reasons to prefer one over the other, assuming, as Pablo suggests, that the sense of “harm” is suitably qualified. So if O1 is the option of stealing a thousand dollars from my savings every year, and O2 is the option of refusing to give me a thousand-dollar raise in my annual salary, O2 should be preferable. But you seem to have in mind a view where harm plays some significant role in first-personal reasoning, and I’m not sure what that would be (although I’m eager to hear).
A few worry cases. (1) is just a slight and impersonal drop in well-being. (2)-(5) are cases where one event has one positive outcome and one negative outcome:
(1) I step outside, and it’s a little chilly. I’m a trifle cold; my well-being has been lowered very slightly. Have I been harmed?
(2) A bully punches me and breaks my jaw, sending me to the hospital. I fall in love with my gorgeous nurse and we live happily ever after; that punch was the best thing that ever happened to me. I still think I was harmed.
(3) I win the lottery. It sends me into the fast life, and eventually I have spent all my winnings and lost my friends and self-discipline. That lottery was the worst thing that ever happened to me. Still, I don’t know that winning the lottery *harmed* me.
(4) I begin buying a more expensive hair gel. I look marginally better, but not enough to offset the extra price; my well-being has declined slightly. Did buying, or getting, the hair gel harm me?
(5) I get bitten by a radioactive spider. As a result, I am paralyzed from the waist down, but I also develop superhuman powers of intelligence. My well-being is actually much greater than before. But didn’t the spider harm me?
Ben, your (H) says this,
(H) X harms S iff X makes S worse off than S would have been had X not occurred or obtained.
I think it is really difficult to provide a decent analysis of harm. And (H) seems to me to have untenable implications. Here’s one pretty clear problem. Suppose that if I had not thrown a brick at you I would have stabbed you. And let’s suppose that the stabbing would have been worse. It turns out that I do not harm you when I bean you with the brick. That can’t be right.
But there is a bigger problem. Suppose S’s life is going pretty well and X is the action of taking S’s life painlessly. Is S worse off than he would have been? No, he’s not: S no longer exists and so enjoys no state of being at all (better, worse, same) and is therefore not worse off.
But maybe you might modify (H) to accommodate this,
H’. X harms S iff had X not occurred S’s life would have been better than S’s life currently is.
But this pretty clearly won’t do either. Certainly I could have given S my winning lottery ticket and his life would have been better than it currently is. But my failing to give him my ticket is no harm to him. I don’t know, maybe something like H”
H”. X harms S iff were X to occur, S would be worse off than he currently is.
If I fail to give you the winning ticket, you are not worse off than you currently are. But H” won’t work either, I think. Suppose you have some sort of degenerative disease. So no matter what happens to you, you are worse off than you currently are. If I give you your medication, you will be worse off, since I cannot stop the degeneration (say). I can’t think offhand of an H”’.
Pablo,
Thanks for the tip about Pogge. I agree that if there’s such a thing as harm, it should be possible to distinguish morally permissible harms from morally impermissible or “undue” harms. I am skeptical that there is such a thing, in part because of some of the examples Mike A and Heath give in the comments above. If those problems can be sidestepped, then I might agree that there is a useful notion of harm.
Justin,
Thanks for the comment. I think your comment raises two distinct issues. One is whether just refusing to give someone money could be considered harming them. I’ll try to say something about that later. The other, which seems to be your main concern, is whether it makes a difference whether we are talking about what *I* have reason to prefer for myself or what *someone else* has reason to prefer for me. It’s not clear to me why this would make a difference. I take it that the reason I would care about harms to myself, if I had reason to care about them, is that they would be bad for me. I also take it that the reason someone else would care about harms to me, if they had reasons to care about harms to me, is that harms are bad for me.
Heath,
I like your cases. (2) and (5) seem relevantly like the Armstrong case. I would say they are pro tanto harms but not all things considered harms. (1) and (4) seem like cases of very mild harm. It seems sort of inappropriate to call them harms, because we usually don’t call something harmful unless it makes us significantly worse off. But I say they are harms anyway, if there is such a thing as harm. (3) I do want to say is a harm.
I think part of the reason we don’t want to call (1), (3) and (4) “harms” is that they are unlike our paradigmatic examples of harms. Paradigmatic cases are cases involving kicking, shooting, punching… causing intrinsically bad things to happen to people or physically injuring them. (1), (3) and (4) are not like that. But presumably the real reason we care about kicking, shooting and punching is that they make us worse off. Similarly for (2) and (5), the reason we want to call them harms is that they are like our paradigmatic examples – they involve physical injury. But they don’t make us worse off, so we shouldn’t call them (all things considered) harms.
Mike,
thanks for the comment. Your brick/stabbing case is just the sort of thing I had in mind when I referred to Norcross’ paper.
You say, “But there is a bigger problem. Suppose S’s life is going pretty well and X is the action of taking S’s life painlessly. Is S worse off than he would have been? No, he’s not: S no longer exists and so enjoys no state of being at all (better, worse, same) and is therefore not worse off.”
I don’t think this is a problem. Even if it’s true that S fails to have a welfare level at times after S dies, it is timelessly true that S’s life as a whole has a certain value for S. The value of a whole life is what is relevant for (H), not the value of a life at any given time, such as some time after the harming event. S’s death makes it the case that S’s actual life is not as good as S’s life would have been otherwise. (I guess what I’m saying is I meant (H) to be understood as your (H’).) (I also think there’s good reason to say that we do have a welfare level at times after we die, namely zero. But I won’t argue for that here.)
I agree that your lottery case is problematic. At least, we don’t typically call such actions harms. But we might, depending on the details. For example, suppose I know that R is planning to give S his winning lottery ticket. I steal the ticket. S never finds out about any of this. I think many people would say that R’s failing to give S the ticket was a harm to S, and that I caused that harm. (Maybe I’m wrong about this? I don’t know.) But S presumably has no reason to prefer the situation where R just chooses not to give him the ticket to the situation where the ticket gets stolen.
Sorry Ben,
I don’t get your main argument at all. The absurd conclusion from not-(H) is supposed to be that the fact that O1 constitutes a harm should not affect your decision-making at all. Maybe that is counter-intuitive. Yet I don’t see what this is supposed to follow from. The only explanation I can see is the assumption that the only reasons one can have stem from changes in well-being. But that looks like quite an assumption. If we reject it, we could think that O1’s harmfulness gives me an additional reason to avoid that option, even though in terms of well-being outcomes the choices are equal – O2 would get me to the same well-being position without harming me. But I’ve probably missed something at this late hour.
Ben,
I have a clearer sense of what you are trying to argue now. Thanks. And as you suggest, I’m not particularly wedded to the view that refusing someone a raise does not constitute a harm.
I was thrown by the statement Jussi just pointed to, where you say “that O1 harms me should not affect my decision-making at all.” This makes it sound like you begin with an argument about how I should act for my own sake and draw a conclusion about how others should act for my sake. I assume that “decision-making” is understood as deliberation about how to act, and not just about what to care about.
The way you put it in your follow-up strikes me as more promising. There you talk not about decision-making but reasons for caring about something for someone’s sake. And indeed it does seem more plausible that there should be this symmetry between what I care about for my own sake and what you care about for my sake.
But I’m not certain. Perhaps what I should care about for my sake is my good, but what you should care about for my sake is something else—call it my moral dignity. It might be that caring about my moral dignity is in part a matter of caring that I meet a threshold of well-being, but in part other things as well (treating me with respect, etc.). And even if there is a symmetry between what I should care about for my sake and what you should care about for my sake, it doesn’t follow that harm should not affect my decision-making, not even basically or ultimately.
It might be that you want to make this argument to people who already agree with you about the above—that is, to people who already accept that every option worth acting on is an option that we should care about for someone’s sake, and that there is a symmetry between what’s worth my caring about for my sake and what’s worth your caring about for my sake. But if that’s the case, then I’m still a bit puzzled about what your argument for (H) is supposed to show. The interesting conclusions about harm’s bearing on decision-making seem to be pre-ordained by the two presuppositions. Does that make sense?
Ben,
To your question, what would we lose by not talking about harm, my vague sense is that the answer should be, ‘Talk about turning harms to our advantage’, or the difference between the Armstrongs of the world and the others, whose lives the same harms would make go precipitously worse. It seems as if we cannot explain the value of this sort of trait unless we explain it in terms of a response to harms. So: If H is false, and so it is possible to have harm in one case and not in another that are identical with respect to welfare, then in the case with harm you have the possibility of the Armstrong trait, but not in the other. And that looks like some sort of valuable thing itself.
Or can we characterize that character trait (what is it? pluck?) as a response, not to harms, but to simply a reduction in welfare? That doesn’t seem right to me, but perhaps it does to you.
Ben, I think you’re saying this,
H*. X harms S iff. had X not occurred S’s life would have contained more value (for S) than S’s life did contain.
This is close to my H’, but it ensures that if X kills S, and S has no welfare, it is still the case that S can be harmed. But H* is mistaken too.
Suppose the closest world in which I do not kill you is a world in which there are such beings as angels. I’m imagining that the only thing that would have stopped me from killing you is the interference of such a being in my life. And let’s suppose that in that world, angels go out of their way to make your life go better. Let’s say this is one of their prime activities there.
And let’s say that in the actual world there are no angels and they do not make your life go better.
Why on earth would the fact that your life goes well in some distant angel-world make it the case that my killing you harmed you? I agree that my killing you harmed you, but not for that reason. It seems to me completely irrelevant to whether I harmed you.
Jussi,
I was assuming that harm would be analyzed in terms of well-being, and not in terms of any other morally loaded notion. For example, we couldn’t analyze harm in terms of wrongness, since, e.g. falling rocks harm people without anyone doing anything wrong. So, I’m not assuming that the only reasons anyone can have stem from facts about well-being; I’m assuming that the only *harm-based* reasons anyone can have stem from facts about well-being. But I didn’t make that clear in the post. From the fact that harm is analyzed in terms of well-being, I say it follows that (H) (or something very close) dominates all alternative analyses.
Justin: I’m not sure I get your suggestion. Suppose that I should care about others’ moral dignity but not my own, and that moral dignity is not part of well-being. Is the suggestion that moral dignity is relevant to harm?
I think maybe what I just said in response to Jussi partly addresses your worry: I’m not assuming that every option worth acting on is worth caring about for someone’s sake. There may be reasons for acting that have nothing to do with well-being, but those reasons won’t have anything to do with harm either.
Robert,
I guess I’d want to say about such cases that it’s pro tanto harm we’re talking about. That is, we can turn a pro tanto harm to our advantage, or make it the case that our lives are not made worse on the whole by an event that causes some intrinsic badness for us. Does this strike you as wrong?
Ben,
Maybe another way to put the objection might be easier to envision. Suppose the closest world in which Smith does not kill S is one in which there is some breakthrough in the treatment of the disease that causes Smith to kill S. In that distant world, thanks to this breakthrough, it is also true that S’s life goes well. So Smith harms S.
The question is why features about Smith should determine whether Smith harms S. We can easily imagine Jones killing S in exactly the same circumstances in world w and Jones doing no harm at all.
Other things equal, independent facts about the particular person doing the killing obviously should not be relevant to whether harm was inflicted on S. Yet these facts are relevant under this analysis. That’s the idea.
Mike,
Cool example. It seems to involve a “backtracking” counterfactual; the closest non-killing world is one where there were angels there all along. So, maybe I could avoid that problem by ruling out backtrackers. Or maybe (H) should be reformulated explicitly in terms of causation, perhaps like this:
(HC) X harms S iff X *causes* S’s life to be worse overall than it would have been if X had not occurred.
Mike, just saw your simultaneous post – but I think it also involves a backtracker. So maybe my response could apply there too. The idea would be that the reason those facts seem irrelevant is that the backtracker seems like the wrong one to employ when talking about harm.
Yes, nice, I see that. But since we are stipulating that the action has already occurred, *nearly* anything that would have prevented me from killing S is something that would have happened in the past. Isn’t that right? So backtracking might be motivated here. Still here is a way to avoid that objection.
Suppose we stipulate that there are angels already existing. Now suppose that the only thing that would have prevented me from killing S is the interference from an angel during or right after my action (perhaps deflecting the bullet or knife or restraining my arm or making his presence known right in the middle of my attempt, something like this). But angels never interfere in such cases, let’s say, unless they are already actively making people’s lives go better. And worlds in which this occurs are very distant worlds. I *think* that gets us to the counterexample without backtracking. A bit clumsy as it stands, but I’m guessing that you already see how a cleaner example might go.
I think I see, except it still seems like the counterfactual would be a backtracker: if Mike hadn’t killed S, that would have to have been because angels were already actively involved in making people’s lives go better. But it’s late, maybe I’m missing something. I’ll think about it tomorrow.
Hi Ben,
In the last post I suggested harming is causing something intrinsically bad to happen to one. This is consistent with (H) being false. But I don’t think causing something bad to happen to one is morally irrelevant, I think it is a paradigm morally significant reason, which is to say, roughly, that if I were to learn that x-ing causes harm to some y, then I thereby acquire a moral reason not to x. I’d be curious to hear your take on this kind of account.
I don’t understand why you say “The fact that O1 harms me should not affect my decision-making at all. The only thing that should matter to my evaluation of an option in this sort of case is its impact on my well-being.”
Both O1 and O2 impact your well-being equally. But you could still have reasons to prefer avoiding the harm. I am very inclined to think, for example, that whether the harm is past or yet to come makes a difference with respect to whether, at some time t, that event is a harm for one at t. Future operation cases come to mind.
So, if O1 is some future pain and O2 is some past pain, each of which would contribute the same disvalue to my net well-being, I have a reason, at some time later than the occurrence of O2, to prefer O2 to happen to me rather than O1.
Of course, this would, if right, only undermine the point you don’t really care about, that “the only thing that matters is the impact on your well-being”, and this doesn’t show that the fact that O2 is a harm gives you a reason for preferring O1.
But more importantly, how can O1 harm, O2 not harm, but yet both have an equal impact on your well-being? I can’t see how that would be possible.
Ben,
thanks for the clarification. The assumption that all harm-based reasons are well-being-based reasons still sounds quite substantial. Well, it doesn’t if the biconditional holds, but then the argument might be begging the question. If not, then I can’t see why harmfulness as such cannot be reason-providing even when O1’s and O2’s well-being-based reasons are equal. I guess my intuition just is that harms make you worse off, but you can get to a worse-off position without being harmed.
Ben,
What I’m actually worried about is why any counterfactual situation that is determined by facts about those doing the harm should affect whether S is harmed or not. This shouldn’t depend on using a backtracker, so I’ll try to be careful. Consider two worlds w and w’ in which Smith is killed.
1. In the closest world to w in which Jones does not kill Smith, there is a miraculous intervention that Smith witnesses and his life goes better thereafter.
2. In the closest world to w’ in which Sue does not kill Smith, there is no need for divine intervention. She simply changes her mind and Smith’s life goes miserably.
Let the circumstances of the killing be otherwise the same in w and w’. Why should facts about what it takes to prevent Jones and Sue from killing Smith determine whether the killing is harmful? Such facts do in these cases, since they determine which non-killing worlds are closest. But I can’t see why those facts are relevant to whether Smith is harmed.
Christian asks:
“But more importantly, how can O1 harm, O2 not harm, but yet both have an equal impact on your well-being? I can’t see how that would be possible.”
I’m not sure if this is what you had in mind originally Ben, and I’m having trouble coming up with an actual example, but suppose you have two similar options, where O1 costs you, say, 10 welfare units but you gain 30, whereas O2 costs you nothing but you gain 20. So the net is 20 units under either option. So imagine that you’d benefit from climbing Everest:
O1: a climb of Everest in which you must overcome avalanches, gaining you thirty units minus ten units for being harmed by the avalanche, for twenty net
O2: a clean climb of Everest, gaining you twenty units net
(I’m bracketing the common costs that we can assume would be equal under either option.)
Now do we have reason to prefer O2 to O1 given the equal net impact on well-being? Under your (H), O1 wouldn’t count as a harm relative to O2, since the avalanche has mixed effects, being perhaps harmful in itself but making for a more rewarding or exhilarating climb. (BTW, this makes me wonder whether “more harmful than” is a more fundamental notion than “harm.”)
I can see the argument either way: O2 is preferable because it seems irrational to prefer to suffer something intrinsically bad, such as being overrun by an avalanche, to an outcome lacking that. But (and you hear glimmers of this in responses to the problem of evil) I can imagine people thinking that O1 is the better outcome because there is something better (not in welfare terms of course) about a life in which goods and bads coexist. I think something can be said for this from a sort of naturalistic standpoint.
Also a side point: I think there’s a way in which harm is belief- or expectation-sensitive that links up harm with autonomy or intention. I.e., whether an event harms me can depend on my expectations and beliefs about how events are likely to unfold.
Christian,
The sort of case Michael C just gave is the sort I had in mind. Both options have the same impact on well-being; one causes harm (in your sense). You could think of it this way. Suppose we had two distinct concepts, harm and shmarm. Harms are events that cause intrinsic bads; shmarms are what I called harms in my original post (they make you worse off overall). You’re right that you would have a (defeasible) reason not to be harmed. But any time you had a choice between being harmed (but not shmarmed) and being shmarmed (but not harmed), you’d choose to be harmed, not shmarmed. So my argument is: suppose I give you your concept of harm; then we shouldn’t care about harm except insofar as we care about shmarm.
I have to think more about whether the Parfit stuff will make a difference.
Jussi,
If (H) were the only way to analyze harm in terms of well-being, then my argument might be question-begging. But look at Christian’s idea (which was one I had in mind as a target). That’s an analysis of harm in terms of well-being too. My argument doesn’t beg the question against Christian’s view, I don’t think. If there is a plausible analysis of harm that makes use of morally loaded concepts other than (or in addition to) well-being, I would be interested in hearing it.
Mike A,
I think I would agree that such facts shouldn’t be relevant, but I’m having a hard time understanding how to get the world described in (1) to be the closest one without using backtrackers. The idea there is that we’re supposing that Jones is so determined to kill Smith that it would take divine intervention to stop it. So the counterfactual is going to be something like: if Jones didn’t kill Smith, it must have been because…. (where what goes here is some description of the situation at the time), rather than: if Jones didn’t kill Smith, what would have followed would have been… (where what follows is a description of the things that happen after Jones miraculously decides not to kill). In the first case we’re holding Jones’ psychology fixed and changing the surroundings; in the second we’re not holding his psychology fixed. I think it is more natural not to allow changes in the surroundings, but to allow changes in Jones’ psychology; at least that’s the simplest way to imagine Jones doing something different. (I’m no longer sure that it’s really backtrackers I’m worried about.) It would help me to know what exactly the counterfactual statement is in (1), that would lead us to say that the world you describe gets to be closest.
Michael C,
Right, that is the sort of case I had in mind.
You suggest: “O2 is preferable because it seems irrational to prefer to suffer something intrinsically bad, such as being overrun by an avalanche, to an outcome lacking that.” But surely this is irrational only holding other things equal. And other things aren’t equal in this case, since you get more good along with the bad.
Your defense of O1 is interesting. It suggests the possibility of an account of harm in terms of some subject-relative value other than well-being. I’d like to hear more about that. But note this odd result: given that two options are equal wrt effect on well-being, we’d actually have more reason to choose the option where we get harmed than the one where we don’t. This would lead to a similar sort of conclusion to the one I’m after: if harm isn’t as described in (H), then harm isn’t something we should worry about or avoid.
Ben,
Your response to Jussi clears things up. Thanks. One thread down, three (or four) to go.
Ben & Michael,
The example of a case in which O1 harms, O2 doesn’t, and both affect my net well-being equally is given by Michael as:
“Suppose you have two similar options, where O1 costs you, say, 10 welfare units but you gain 30, whereas O2 costs you nothing but you gain 20. So the net is 20 units under either option.”
This example doesn’t work for me. I would describe the case as one in which O1 harms me and O2 doesn’t. However, they do not affect my well-being equally, although it is true that my well-being is equal in both cases. I would say instead that I have a reason to prefer O2, considered relative to O1, but considering the effects of “other actions and events”, I have reason to prefer neither (assuming temporal position is irrelevant).
But then we still do “not” have a case.
Christian, you say:
“However, they do not affect my well-being equally, although it is true that my well-being is equal in both cases. I would say instead that I have a reason to prefer O2, considered relative to O1, but considering the effects of “other actions and events””
I don’t understand. Can you say why they don’t affect your well-being equally? And what other actions and events are you talking about here?
Ben,
Yes, it does have to do with Jones’ psychology as you say. But you are describing a case in which Jones actually makes the decision not to kill Smith. I don’t have that case in mind (though I did in my initial comment on this). What I have in mind here is that nothing stops Jones from pulling the trigger. He indeed pulls the trigger even in the case where he does not kill Smith.
I’m imagining that the closest world in which he does not kill Smith is one in which he pulls the trigger and the bullet is miraculously deflected. So he does not kill Smith. But of course even that is a very distant world. So it looks like this,
Had Jones not killed Smith, the bullet from Jones’ pistol would have been miraculously deflected.
One of the things that interests me about these cases (and the initial possibly-backtracking case) is their implications for deprivation views of the disvalue of death. I’m pretty sure you’ve worked on that problem (or something close to it).
Ben,
I want to focus in on what we are contrasting. As stated, the case can be filled out in many ways. The way I understand it, O2 is supposed not to be a harm. It is an event which we would intuitively describe as a benefit, say, the experience of seeing the top of Everest. That event, at a time, is intrinsically good for the agent. No problem.
Then we move on to O1. O1 is supposed to be an event that is equal in value to O2. It is equal in value in the sense that O1 and O2 “cause” or “contribute” the same value to an agent’s well-being. I am denying this. I deny that we have located two events, O1 and O2, such that O1 causes n units of value and O2 causes m units of value (where n = m). I suggest that what we have instead is a case in which there are many events, some of positive value, some of negative value, that jointly “cause” or “contribute” n units of value to the agent’s welfare.
So, when we contrast O1 and O2, what we are contrasting is not one event with another, but one event with a plurality of events.
But we should not admit pluralities in an alleged counterexample. If we could, then just take two pluralities: the plurality that is one life history H and another life history H* such that H = H* in terms of well-being. Ask: do we have a reason to prefer H to H*? Answer: No. But we should not take that to bolster (H). (H) is about harm, not histories or pluralities of events; it is about individual events per se.
Ben,
sorry I missed the debate today, being away (at the British Museum!) all day. I tried to look for Christian’s posts and potential analyses of harm. All I could find was this:
‘I suggested harming is causing something intrinsically bad to happen to one.’
But that doesn’t seem to be an analysis of harm in terms of well-being unless you think that the only thing of value is well-being. So, maybe we are back to a worry about circularity. Or at least the dialectical situation seems odd. You start your argument from your opponent’s premise – it’s not the case that X harms S iff X makes S worse off than S would have been if X hadn’t happened. Then in getting your opponent to something counterintuitive you rely on the premise that all harm-based reasons are well-being-based reasons. But I don’t see why someone who thinks that harms are not conceptually linked to well-being would accept this premise, or ought to. I guess what I’m thinking is that (H) may be right, but the argument doesn’t seem to have much weight against someone who doesn’t already believe the view.
I also worry about whether harm-based reasons can be well-being reasons, because for Scanlonian reasons I’m not sure there are well-being-based reasons.
I too would wish to come up with alternative analyses of harm (and many other things). I’m just quite sceptical about this kind of simple definitional analysis in the first place. Such analyses seem either revisionist or open to counterexamples. How many good ‘iff’ analyses do we have? ‘Bachelor’ already seems hard enough.
Mike,
That’s basically the same reason I’m interested in the topic – I want to defend the view that death is a harm of deprivation.
Part of the problem here is that there are lots of ways for an event not to happen, and it seems like conversational context somehow picks out one of those as the thing that would have happened had the event not happened. And in the cases you’re describing, we end up with weird worlds. Suppose we went with a view like this instead:
(H2) X harms S iff the value for S of what X causes to happen is less than the value for S of what X causes not to happen.
I think in your Jones/Smith case, we wouldn’t want to say that Jones’ killing Smith *causes* it to be the case that the angels don’t make Smith’s life great. I think it would be nice if we could cash this out in terms of counterfactuals, but maybe we can’t.
Christian,
I’m not following you. Let me be clearer about exactly what I’m saying. Let me just stipulate that O1 and O2 are single events such that only one of them could happen at time t. If O1 happened, it would cause two other events, one with value -10 for me and one with value +30. If O2 happened, it would cause one event with value +20 for me. On your view, O1 is a harmful event but O2 is not, even though their effects on my well-being are the same. We shouldn’t care whether O1 or O2 happens. I think this shows that if your view of harm were correct, we could dispense with the concept of harm in favor of another concept, call it shmarm, that is captured by the biconditional in the original post. So, where do we disagree?
So here’s another sort of counter-example to (H) (no backtrackers, either): Suppose I bury radioactive waste near a couple’s home. Partly because of this, they conceive their daughter at a different time than they otherwise would have, and years later the buried waste makes her sick. Had I not buried the waste, that very daughter would never have existed, so burying it does not make her worse off than she would have been; yet it seems to harm her. (This is inspired by Parfit’s ‘future persons paradox’.) I think this example, as well as Mike Almeida’s examples, support something like Christian’s account: namely, you harm someone when you cause something bad to happen to them.
But now here’s a possible counter-example to Christian’s proposal: Suppose you steal my winning lottery ticket before I ever learn that I’ve won. You don’t cause anything intrinsically bad to happen to me; you merely prevent something intrinsically good. Yet it seems you harm me.
It’s easy enough to fix this. Let the new view be:
(HC) X harms S iff X causes something intrinsically bad to happen to S, prevents something intrinsically good from happening to S, or makes something bad for S worse.
(The last clause is to handle cases where I don’t cause the bad event, but I make it worse.) I’m sure there are still counter-examples to HC (since there are counter-examples to all philosophical “iff” claims, including this one), but HC seems closer to the truth than H.
Last point: Why is harm relevant? Perhaps not so much for self-regarding decisions as for other-regarding ones. Perhaps, for example, others have a prima facie right not to be harmed, or perhaps X’s harming S is a necessary condition on S’s having a right not to have X done, or something like that.
Another possible counter-example to H: I’m in the Newcomb’s Paradox scenario, and I choose both boxes. Predictably, I get $100, instead of $1,000,000. Does my choosing both boxes rather than only the opaque box harm me?
A one-boxer might say: “Yes, because if you had chosen only one box, you probably would’ve had a million dollars.” (Though this is a backtracking counterfactual.)
The two-boxer would say: “No, because the choice causes me to get the $100, and it doesn’t prevent me from getting the $1,000,000.”
Jussi,
The target of my argument is someone like Christian or Mike H., who want to analyze harm in terms of well-being but not in the way I suggest. My argument wouldn’t work against someone who thought harm could be analyzed in terms of something other than well-being. But I don’t know anyone who holds such an analysis (this may just be due to my own ignorance). So I will worry about arguing against someone like that after I find out that there is such a person.
I’m not sure why you don’t think Christian’s analysis is in terms of well-being; I’m taking a person’s overall welfare level to be the same thing as the intrinsic value of his life for him, unless someone can give me a good reason not to do so. The only difference between my view and Christian’s is that I think the values of things that would have happened, or are caused not to happen, are relevant to facts about harm.
I’m OK with my analysis being revisionary. It surely must be revisionary given how many problem cases there seem to be. I think our ordinary conception of harm is a big muddle that needs to be precisified in some way or else abandoned as a topic for serious philosophical ethics, and something like (H) is the best way to precisify it.
Michael H,
Those are good examples too. In the future persons case, I’d just say that burying the waste is not an all things considered harm to the daughter, merely a pro tanto harm. It’s an overall instrumentally bad action that happens not to be overall instrumentally bad for any individual, so it is not overall harmful to anyone. I realize this may seem revisionary, but I’m OK with that. I think people are not entirely rational when thinking about cases involving nonexistence.
I don’t like (HC). For one thing, it’s ugly. More importantly, disjunctive analyses multiply counterexamples, because anything that satisfies any of the disjuncts will count as a harm. There are cases of causing an intrinsic bad that don’t seem like harms; cases of preventing an intrinsic good that don’t seem like harms; etc. If we’re going to have a revisionary conception of harm, it should be something more wieldy.
You might be right about the relevance of harm. I just wonder why, if we had a right not to be harmed (in Christian’s sense, say), we wouldn’t also have a right not to be made worse off than we would have been otherwise, and why the right not to be made worse off wouldn’t be the one that would be more important.
Mike H.,
I like your counterexample, but my response is different from yours. Roughly, by stealing my winning lottery ticket I say you have “not” harmed me (unless I feel pain in virtue of becoming aware of the fact); rather, you have prevented me from benefitting. Similarly, not giving me all of the money in your wallet has not harmed me. But not doing so has prevented me from benefitting.
I think harm must be causal and I am, well, very unconvinced that an instance of ‘not being F’, where F is some property, can be an effect of a causal relation rather than an analytic entailment of an effect. I am very unclear about negative properties, not whether they exist, but whether their instantiations are, roughly, effects.
Lastly, it is cases like Parfit’s future generations case that I take to confirm my account. We are on the same page for sure. David Boonin has this example: a woman learns that if she takes a pill and waits one month to conceive, it is extremely unlikely that her child will be born retarded, but if she doesn’t, her child will very likely be born retarded. She decides to conceive anyway, and her child is born retarded. Intuitively she harms the child, but she does not make the child worse off than it would have been, because had she taken the pill and waited, that child would not have existed (different sperm-egg combo, etc.). I take that as a counterexample to (H).
Ben,
You said that “On your view, O1 is a harmful event but O2 is not, even though their effects on my well-being are the same. We shouldn’t care whether O1 or O2 happens.”
What I deny is that “We shouldn’t care whether O1 or O2 happens” follows from what comes before it. Comparing O1 and O2, we should prefer the one that does not involve harm, that is, O2. That is, just considering the two events, O1 and O2, we should prefer O2. I am saying it is not enough to say “But look, O1 causes event E, and together with E, O1 and E contribute the same amount of well-being to the agent as O2.” That is changing the subject. That is adding another event, and treating O1 as an indirect cause of some other event.
I think anybody who accepts my view should deny that we can do this.
Let me present another case: Suppose you do not like being punched. But you like money. I offer you the following: I will give you fifty bucks and hit you in the jaw or I will walk away. Suppose that you value your jaw at exactly fifty bucks. On your view, you should be indifferent to the options. I stipulate all else is equal.
I say that one option involves harm and it is sensible to avoid the punch.
Ben,
about this:
‘I’m not sure why you don’t think Christian’s analysis is in terms of well-being; I’m taking a person’s overall welfare level to be the same thing as the intrinsic value of his life for him, unless someone can give me a good reason not to do so.’
One thing I had in mind was the ‘suffering artist’ type of case (like Van Gogh, maybe). In terms of overall welfare level, the life in such cases may be near rock bottom. But you still might think that the kind of life of artistic expression these people live has a high amount of intrinsic value for them.
Another point about this:
‘I’m OK with my analysis being revisionary. It surely must be revisionary given how many problem cases there seem to be. I think our ordinary conception of harm is a big muddle that needs to be precisified in some way or else abandoned as a topic for serious philosophical ethics, and something like (H) is the best way to precisify it.’
Right – so the analysis is revisionary. Yet in the original post the claim seems to be that if it is revised in the suggested way, it no longer serves any purpose, and so it should be abandoned anyway. This sounds to me like neither keeping the cake nor eating it. Wouldn’t it be better to revise the term to something useful *and* close enough? I also must say that I’m not too keen on the idea that only precisely defined terms can be a topic of philosophical ethics. This seems to alienate much of what is interesting in ethics.
Ben, you suggest this new analysis,
(H2) X harms S iff the value for S of what X causes to happen is less than the value for S of what X causes not to happen.
But H2 can’t be right. It is too often the case that the net value an agent causes in performing X depends on what other people do. If surgeon Hope performs a difficult, necessary and successful surgery on you, she certainly causes considerable pain. Suppose that, before you have a chance to recover the benefits from that surgery, nurse Nasty kills you. In that case surgeon Hope causes considerable overall disvalue. More disvalue, perhaps, than some alternatives she had. But the conclusion that the surgeon harmed you seems plainly false. It was nurse Nasty that harmed you in preventing the benefits forthcoming from the surgery.
A small comment on the exchange between Ben and Christian.
Ben describes this case: “O1 and O2 are single events such that only one of them could happen at time t. If O1 happened, it would cause two other events, one with value -10 for me and one with value +30. If O2 happened, it would cause one event with value +20 for me.”
I feel some intuitive pull in Christian’s claim that O2 is preferable. Yet I’d not be happy to deny that we should be indifferent between events of equal value. So let me suggest a way of saying the former without denying the latter.
In Ben’s description of the case, there’s an assumption of additivity, something like this: the value of a combination of effects must equal the sum of the values of its parts. But if we take a more holistic view, we might deny this. We might say, for any combinations of effects X and Y, if (a) the sum of the values of the parts of X equals the sum of the values of the parts of Y, and (b) only X includes any harmful effects, then the value of X is less than that of Y. This implies that O2 is better than O1.
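Spelled out a little more formally (the notation is mine and only a sketch): let v give the value of an individual effect and V the value of a combination of effects taken as a whole. Then the two views come to this:

% notation mine, a sketch: v = value of a single effect, V = value of the whole combination
\[ \text{Additivity: } V(X) = \sum_{x \in X} v(x) \]
\[ \text{Holism: if } \sum_{x \in X} v(x) = \sum_{y \in Y} v(y) \text{ and only } X \text{ includes a harmful effect, then } V(X) < V(Y) \]

On the numbers in play, (-10) + (+30) = +20 on both sides, but only O1’s combination includes a harm, so the holistic value of O1’s effects comes out lower. That is how O2 can be better even though the sums are equal.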
Campbell,
This does lead to some unfavorable results.
Suppose dollars are linear with utility. A rational person would pay (at most) $10 for a .5 chance at O1. But given your principle, he should be prepared to pay $(10 + n/2) for a .5 chance at O2, since, on your principle, O2 is better than O1 by some unspecified amount n. His expected utility is then -$n/2. That doesn’t seem like a rational thing to do.
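To make the arithmetic explicit (a rough sketch on my stipulations: one utile per dollar, and n > 0 is the unspecified amount by which, on the principle, O2 is better than O1):

% stipulations mine: \$1 = 1 utile; O1 is worth 20, O2 is worth 20 + n on the holistic principle
\[ \text{fair price for a .5 chance at } O_1: \ 0.5 \times 20 = \$10 \]
\[ \text{licensed price for a .5 chance at } O_2: \ 0.5 \times (20 + n) = \$10 + \$n/2 \]
\[ \text{expected dollar return of either bet: } 0.5 \times 20 = \$10 \]
\[ \text{expected net on the } O_2 \text{ bet: } \$10 - (\$10 + \$n/2) = -\$n/2 \]

Since the extra n never shows up in the dollars or welfare the bettor actually receives, every such bet carries an expected loss of $n/2.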
Let me try to put it another way. Ben’s counterexample is:
O1 and O2 are single events such that only one of them could happen at time t. If O1 happened, it would cause two other events, one with value -10 for me and one with value +30. If O2 happened, it would cause one event with value +20 for me. On your view, O1 is a harmful event but O2 is not, even though their effects on my well-being are the same.
I think the bearers of personal intrinsic value are experiences and that they are time indexed. So, x harms y at t only if x causes y to have an experience and that experience is intrinsically bad at t. Importantly, we need to index harm to a time.
I pose a dilemma: Either the event which is -10 in value and the event which is +30 in value are distinct or they are the same. If the same, then indeed it isn’t rational to prefer O1 to O2 or vice versa. But that is because the event is not a harm at all, and my view doesn’t imply otherwise. If the events are different, that is, occurring at different times, then I suggest one is preferable to the other. O2 is preferable to O1.
So, if we consider O1 as causing an event that is good at a time, and another event which is bad at another time, such that their sum is equivalent to O2, then I say it is preferable to select the event which does not involve harm at any time.
Christian,
I’m starting to understand your view, but it just seems like you’re double-counting intrinsic bads. When something causes an intrinsic bad, on your view it gets double demerits – one for the negative impact on the person’s welfare, and one for causing an intrinsic bad. Why count it twice? That’s what I’m not getting. Someone might think that intrinsic goods and bads aren’t commensurable, and that it’s always sensible to refuse to suffer something bad no matter what the reward. But that’s not your view, right, because you think goods and bads can be traded off. So I’d like to hear more about why you hold the view you hold.
Jussi,
I see your point about Van Gogh. I guess I wouldn’t say that his life was very good *for him* (though it was good in other ways). But if the intrinsic value of a life for a person can be different from the person’s welfare level, I might be happy just to formulate (H) as a view about intrinsic value rather than about well-being.
“Yet in the original post the claim seems to be that if it is revised in the suggested way, it no longer serves any purpose, and so it should be abandoned anyway.” That’s not quite the claim. The claim was that if harm is a useful concept, it is best captured by something like (H). (H) has the best chance of being an analysis of harm that makes harm a useful concept. You’re right that I did suggest that I think it wouldn’t be useful anyway. But I didn’t argue for that claim.
Mike A,
I think your surgeon Hope case is one where our intuitions about moral responsibility corrupt our intuitions about harm. Hope does harm me, but it’s Nasty’s fault that Hope harms me. Nasty makes Hope’s actions turn out harmful instead of helpful. Suppose Hope knew that Nasty was going to kill me, but out of malice she starts the surgery anyway. Surely then we’d want to say that Hope’s act harmed me, right? But why should Hope’s intentions in performing her actions matter to whether they are harmful to me?
Ben, “Why count it twice?” If I get you right, one thing is “bad for someone”, that is the harm, and the harm is also “bad in itself”. That the harm is bad for someone is what explains why her well-being is affected. That justifies the one role. That the harm is bad in itself explains another role: why harms make the world worse.
And I was motivated to accept my view because I thought (H) was the most plausible account of harm. Then I discovered counterexamples to it, and discovered that my view avoids all the counterexamples I’ve considered and is consistent with my intuitions about what kinds of things are harms. I mentioned two very strong counterexamples, but I especially like the future generations counterexample.
Ben,
It is the temporal sequence that is making it look like intentions matter. Suppose Smith is very thirsty and I give him some sports drink X. Before he gets a chance to drink X Jones spikes it with chemical Y. X is beneficial, Y alone is harmless, but X & Y is harmful. Again the beneficial effects of what I did were interrupted. Do I harm Smith? I don’t think so. Still the effects of drinking X, given Jones’s interference, are on balance bad for Smith.
So the claim is metaphysical. The action I performed was not harmful. I gave Smith a beneficial drink X. I agree that, given the intervening causal history of X, drinking X caused harm to Smith, but nothing I did harmed Smith. I’d say the same thing in the surgeon case.
Mike,
You say: “Suppose dollars are linear with utility. A rational person would pay (at most) $10 for a .5 chance at O1.” But I don’t see how that follows. To say that dollars are linear with utility, I assume, is to say that there exist numbers A and B, with A positive, such that for any number of dollars N, the utility of N is given by U($N) = A·N + B. But from this it does not follow that U(O1) = U($20).
In any case, I take it your point is just that if I judge O2 better than O1, then I should think it rational to pay more for a .5 chance at O2 than for a .5 chance at O1. But that implication strikes me as no less intuitive than the judgement that O2 is better than O1.
Campbell,
I said let dollars be linear with utility. This is just to assume that the value of one dollar is one utile. Maybe that wasn’t clear. This is not exactly your formula for positive linear transformations, but it can be made so; the “transformation” is trivial. In your formula, let A = 1 and B = 0, so that U($N) = N. It is evident that under this assumption (the utility of) dollars is linear with (the utility of) happiness (or whatever is valuable in O1 and O2).
But you say this,
“. . . your point is just that if I judge O2 better than O1, then I should think it rational to pay more for a .5 chance at O2 than for a .5 chance at O1. But that implication strikes me as no less intuitive than the judgement that O2 is better than O1.”
Well, that’s not quite my point. My point was that (1) if someone used your proposed principle in betting, he would be willing to pay more for O2 than for O1 despite agreeing (bizarrely) that U(O2) = U(O1), and (2) anyone willing to do *that* will be taken to the cleaners in terms of overall utility. With every so-called rational bet of $(10 + n/2) on a .5 chance at O2 he will expect a loss of $n/2. Over time this so-called rational person will be bilked for all he has. The short and familiar story is that he’ll face a “money pump” and be pumped into penury.
“Oh, but surely he’ll stop”. Not if he takes your principle as a rational guide he won’t. He has no reason to stop: by the lights of your principle, he is acting rationally, and assuredly losing every dime he has. For my part, I find that wildly counterintuitive for allegedly rational behavior.
The solution to this problem is to get your considered judgment that O2 is better than O1 reflected in the relative value or utility of O2 and O1.
Mike,
I think I see the misunderstanding between us. You say: “he would be willing to pay more for O2 than for O1 despite agreeing (bizarrely) that U(O2) = U(O1).”
But my original point was that we need not accept that U(O1) = U(O2). Ben, I take it, assumes something to the effect that U(O1) must equal U(E1) + U(E2), where E1 and E2 are the two events O1 causes. However, I suggest, we might reject such additivity.
I see. You’re assuming something like the principle of organic unities, or some sort of emergence theory, where the value of the whole can differ from the sum of the values of the parts.
But it is hard to see how utilities are non-additive: suppose the -10 and +30 were at two (distant) ends of your life. Do we still get some total whose value is less than 20? Anyway, I do see the miscommunication.
Ben,
I agree with many of the other commentators that (H) cannot be correct. (Just to remind those who start at the end of the comments, (H) says,
(H) X harms S iff X makes S worse off than S would have been had X not occurred or obtained.)
Here’s an example I gave some students recently when discussing Parfit’s Non-Identity Argument to show that something similar to (H) is false:
Arnold Schwarzenegger and I are both pissed off at one of my students, Bob, and both of us decide to beat him up. However, I get to Bob’s house first and beat him up before Arnold can, so that by the time Arnold arrives the police are there and Arnold leaves, never to beat up on Bob.
It seems clear to me that, had my beating up Bob not occurred, Bob would have been worse off. Arnold is way stronger and way better at beating people up than me! However, it also seems clear that my beating up Bob harms Bob. So, something can harm someone without making that person worse off than they’d otherwise be.
As for the move to pro-tanto harms, I don’t know very much about that. What I do know is that the sentence “My beating up Bob harmed Bob” is true. So, (H) is not true.
Christian, you say, “If I get you right, one thing is “bad for someone”, that is the harm, and the harm is also “bad in itself”. That the harm is bad for someone is what explains why her well-being is affected. That justifies the one role. That the harm is bad in itself explains another role: why harms make the world worse.”
I don’t get it. If you’re counting bads twice because they affect both the person’s well-being and the value of the world, why not count goods twice too?
Here’s another counterexample to your view. The Hippocratic Oath says ‘first, do no harm.’ If your view of harm is true, then every doctor constantly violates this oath. On my view, they only violate it when they screw up.
Mike A, your new example doesn’t move me for just the same reasons the old one didn’t. Suppose you knew Jones was going to spike the drink and you gave it to him anyway. I think you harmed him. Do you disagree?
I think you have to reject a transitivity principle that seems plausible. You say that your giving the drink to Smith doesn’t harm him, but his drinking it does harm him. But (plausibly) your giving him the drink causes him to drink it. So you must deny that if Y is harmful in virtue of causing Z, and X causes Y, X is harmful.
Greg, I agree that overdetermination and preemption cases are problematic for (H). I think those worries can partly be addressed by distinguishing between states of affairs. Consider these two questions:
Was it harmful to Bob that he got beat up?
Was it harmful to Bob that *I* beat him up?
I can say that the answer to the first question is yes, and the answer to the second is no. If you don’t like that move, then you might like the variation of (H) that appeals to causation rather than counterfactuals, like the one I suggested in a comment above:
(H2) X harms S iff the value for S of what X causes to happen is less than the value for S of what X causes not to happen.
“Suppose you knew Jones was going to spike the drink and you gave it to him anyway. I think you harmed him. Do you disagree?”
I do. I say Jones harmed him and I deny that we both did. If I knew that Jones was going to do it, it would still be Jones that harmed you. I gave you a harmless drink (Jones made it harmful). I’m happy to reject this transitivity principle,
T. If [your drinking X] is harmful in virtue of causing [a painful experience], and [my giving you drink X] causes [you to drink X], [my giving you the drink X] is harmful.
I propose this half-principle instead:
P. My action of giving you drink X at t does not harm you at any time tn IF (i) X is not harmful at t, and (ii) I do not cause X to be harmful at any time tn.
So though I caused you to drink X, I did not cause X to be harmful, so I did not harm you. How’s that?
Ben,
I think the Hippocratic Oath, when read straightforwardly, is silly. The intention behind it, that doctors should not harm patients without good reason, such as the aim of curing them, is not silly.
About the “double-counting” problem, perhaps I should think more about whether that is a bad consequence. I’m inclined to accept that we should count goods twice as well.
More importantly, although I accept this point, I don’t see how my view of harm commits me to it. I count only one harm in the case as it is described. My view about world-centered goods is what motivates me to accept that there are two bads in the case.
“In Mike Almeida’s recent post…”
Could you include the link to that recent post? It’s not obvious where to find it.
M
M, sorry, link has been added to the post.
Mike A, interesting suggestion. I will have to think about it some more.
OK Christian, let me try one more example on you. Suppose you read the following newspaper headline: “SURGERY IS ALWAYS HARMFUL!” This would be a bad headline. I say it’s bad because it is false. But you think it’s true. If you think it’s bad, it must be for some other reason – that it’s inappropriate or misleading. But I think this at least shows that there is a common usage of ‘harm’ such that, when the word is used in this way, what is communicated is that someone has been made worse off overall.
Let me just say that I do think ‘harm’ is sometimes used in other ways. Sometimes we use the word to communicate that something caused something intrinsically bad to happen to someone. But we don’t always use the word in that way. Our usages of the word are not coherent. Thus it is no surprise that there are counterexamples to all the analyses of “the concept” of harm discussed here. There isn’t just one concept, so we have to decide which one is most useful.
Hey Ben, I’m on board with all of that. I think assumptions of univocality are often abused. I see no reason to think there is one concept of harm, but I would agree that there are many. And that (H) picks out one concept we care about has got to be true, I think.
The headline “SURGERY IS ALWAYS HARMFUL!” supports my view I would say. There is something bad about it. I would tell a Gricean story about that badness, that it is infelicitous not because it is false, but because it is true and uninformative.
Given that we seem to agree on this, where do we go from here? Develop arguments that one concept is more important than the other, or prior to the other? I don’t know.