Let A1-A6 stand for six distinct agents. Let V1-V6 stand for six distinct potential victims. Let T1-T6 stand for six distinct and successive times. And let C1 stand for some set of agent-centered constraints. Those who endorse agent-centered constraints accept the following:

(1) It would be impermissible for an agent, A1, to violate a set of constraints, C1, even if this is the only way (short of doing something even worse) for A1 to prevent A2-A6 from each violating C1.

There are at least two possible rationales for (1):

The Agent-Focused Rationale (AFR): The explanation for (1) lies with the fact that agents ought to have a special concern for their own agency — that is, for what they themselves do as opposed to what they merely allow to happen.

The Victim-Focused Rationale (VFR): The explanation for (1) has nothing to do with the thought that agents should have a special concern for their own agency. Rather, as Frances Kamm has claimed, “the agent's own act is special only in that it makes him come up against the constraining right” of his would-be victim. This constraining right acts as a barrier against the permissibility of treating him in certain ways, such as treating him as a means to the minimization of rights violations overall.

Many deontologists endorse VFR. Frances Kamm and Richard Brook certainly do, and I believe that Eric Mack and Jerry Gaus do as well. In support of VFR, these philosophers often appeal to intuitions such as this one:

(2) It would be impermissible for A1 to kill a victim, V6, through the introduction of some new lethal threat even if this is the only way (short of doing something even worse) for A1 to prevent V1-V5 from being killed by the lethal chain of events that A1 earlier initiated.

To make things a bit less abstract, imagine that A1 earlier set up a bomb that will kill V1-V5 unless A1 now shoots V6 and places V6's body over the bomb. (I borrow this case from Kamm.) Assume that shooting V6 is the only way (short of doing something even worse) for A1 to save V1-V5. Assume, then, that A1 cannot save the five by throwing her own body on the bomb. On VFR, it is impermissible for A1 to kill V6 even though this is the only way for A1 to prevent herself from becoming the killer of V1-V5. As advocates of VFR would put it, V6 has a constraining right that constrains A1 from using V6 as a means to minimizing even her own violations of this constraining right.

But (2) doesn’t support VFR over AFR, for the advocate of AFR can also endorse (2). The advocate of AFR could, for instance, hold that it is not A1's killings that A1 should be most concerned to minimize; rather, what A1 should be most concerned to minimize is instances of her treating people as a mere means — e.g., instances of her intending to cause a person’s death as a means to achieving her own ends. And, by shooting V6, A1 wouldn’t minimize the instances in which she treats someone as a mere means. Indeed, nothing A1 can do now can undo the fact that she has already treated each of V1-V5 as a mere means. So treating V6 as a means to minimizing the deaths she causes only adds to the number of instances in which she treats someone as a mere means. So if the relevant set of constraints includes a constraint against treating people as a mere means, then the advocate of AFR can endorse (2).

To test whether it is AFR or VFR that offers the best explanation for (1), we should, then, consider, not whether we endorse (2), but whether we endorse:

(3) It would be impermissible for A1 to violate C1 at T1 even if this is the only way (short of doing something even worse) for A1 to prevent herself from violating C1 on each of five separate future occasions: T2-T6.

The fact that approximately 80% of the respondents thought that Smith should break his promise to Tom on Saturday so that he could then fulfill his promises to Rick and Harry on Sunday suggests that most of us reject (3) and accept AFR. Indeed, fewer than 10% of the respondents thought that Smith should keep his promise to Tom, which is what Smith should do if VFR is correct. So it seems that most of us accept AFR, not VFR. Of course, one might worry that none of the respondents are people who actually endorse an agent-centered constraint against breaking one's promises; one might worry, for instance, that PEA Soup readers are all act-utilitarians. But, as the other survey showed, this is not the case. A sizeable majority of the respondents thought that Smith should not break his promise to Tom in order to enable Jones to keep his promises to Dick and Harry. So most of the respondents do endorse an agent-centered constraint against breaking promises. These two polls, then, would seem to provide us with some evidence that AFR as opposed to VFR offers the best explanation for (1).

59 Replies to “Constraints: Agent-Focused or Victim-Focused”

  1. I’m worried that there’s something else going on here to do with the timing of the constraint violations. In the story, it was *important* that Smith made the promises at the same time. People would have had different intuitions if Smith would have first promised to Tom. This means that Smith creates all the moral requirements on himself at the same time. When does he then violate these requirements? Well, it seems to me that he is in the state of violating some moral requirements already the moment he has made the simultanous promises. After all he cannot keep all his promises. So he is already not keeping some promise even if it is not determinate which one it is. This would make it not the case that he would only either violate one constraint on Saturday or two similar ones on Sunday.
    This leads me to think that this case is structurally more like the trolley case than the transplant case. After Smith has put himself into this situation, he has to direct his promise-breaking toward either one or two broken promises, just as in the trolley case you direct the harm toward either one death or five. And people who accept constraints do not usually think that redirecting threats is wrong or violates constraints. So you might think that it is for this reason that Smith is permitted to, and should, save the two people. He is not violating constraints at the point of choosing whom to save but rather redirecting the harms due from promise-breaking.

  2. Doug,
    I think I agree with the gist of what you’re saying, but I’m not sure.
    To me it seems peculiar and wrong to say that agents ought to have special concern for their own agency. But I agree that each of us is responsible for, e.g., our own promises, and not particularly responsible for the promises of others (I mean responsible for making sure the promises are kept, of course). This seems to explain why it’s better (according to my intuition) for Smith to break the promise to Tom on Saturday (after all, he’s going to break a promise no matter what he does).
    So, am I agreeing with you? You tell me. I don’t like the way you put it (special concern for own agency), but maybe I’m agreeing with your view.
    Try this: when I am especially concerned with Clay’s welfare, far beyond the welfare of other 12-year-old boys, this is no doubt because of (and justified by) my paternal relation to Clay. But it seems completely wrong, to me, to say that what I am really concerned with is my own paternity (or my paternal relationship, etc.). I am concerned with Clay’s welfare.

  3. Jussi,
    In the story, it was *important* that Smith made the promises at the same time.
    Why?
    People would have had different intuitions if Smith would have first promised to Tom.
    How do you know this? Why even think that this is true?
    This means that Smith creates all the moral requirements on himself at the same time. When does he then violate these requirements? Well, it seems to me that he is in the state of violating some moral requirements already the moment he has made the simultanous [sic] promises.
    Which requirement(s) does he violate when he simultaneously makes the promises to Tom, Rick, and Harry? Does he thereby violate the requirement to keep his promise to Tom or the requirements to keep his promises to Rick and Harry?
    After all he cannot keep all his promises. So he is already not keeping some promise even if it is not determinate which one it is.
    Suppose that he makes his promise to Tom on Wednesday and also that he makes his promises to Rick and Harry on Wednesday. (You can say that he made all three promises simultaneously if you think that it matters.) Suppose that, on Wednesday, Smith had exactly four doses of anesthetic. But, then, on Thursday, he used two on himself to get high. In this scenario, he could, at the time he made the promises, keep all of them. I still have the intuition that Smith should break his promise to Tom, refusing to give him the double-dose that he promised him on Saturday and then give each of Rick and Harry a single dose on Sunday, thereby keeping his promises to Rick and Harry.
    Also, I don’t see how the proposition that follows the word ‘So’ follows from the proposition that precedes it. Can you explain your logic? Suppose, for the sake of argument, that Smith had only two doses when he made the three promises on Wednesday. And suppose that Jones unexpectedly gives him an additional two doses on Thursday. And suppose that Smith gives Tom a double-dose on Saturday and gives each of Rick and Harry a single dose on Sunday. On your view, he broke one of his promises on Wednesday. But isn’t that absurd? It seems to me that he kept all three promises.

  4. Jamie,
    We agree. I like the way you put things — that is, your way of explicating AFR in terms of responsibility rather than concern. I think that’s a better way of explicating AFR. I explained AFR in terms of concern only because that’s how it’s often explained in the literature.

  5. In the story, it was *important* that Smith made the promises at the same time.
    Why?
    People would have had different intuitions if Smith would have first promised to Tom.
    How do you know this? Why even think that this is true?

    Presumably Jussi knows this because he would have had different intuitions. Also because commenter ‘Eric’ in your first Intuition post would have. And me too.

  6. Heath,
    Okay, fair enough. I concede that *some* people would have had different intuitions if Smith would have first made his promise to Tom. I took Jussi to be making a stronger claim, but I’ll leave it for Jussi to clarify whether he meant by ‘people’ ‘some people’, ‘many people’, ‘most people’, or ‘all people’.
    In any case, let’s just assume that Smith made the promises simultaneously, which is indeed what I told people to assume in the comments. Does this in any way impugn the conclusions that I draw? Jussi thinks that it does, because he assumes that it follows from the simultaneity of the promises that Smith violated some requirement right when he made the promises. But his view about when a promise is broken is highly questionable. Moreover, there’s no reason to assume, as Jussi does, that Smith wasn’t in a position to keep all three promises when he simultaneously made them. Assume, to the contrary, that Smith had four doses when he made the three simultaneous promises, and that due to some subsequent wrongdoing Smith ended up in a predicament where he had only two doses to give and needed four doses to keep all three of his promises.

  7. Jussi,
    One more thought: Suppose that, in a variation on (2), A1 doesn’t shoot V6 before he puts V6 on the bomb. (This is in fact the way Kamm describes the case.) Is this a trolley-type case on your view, where you redirect the bomb threat from the five onto the one? Kamm doesn’t think so. And if this is not a trolley-type case, then what’s the relevant difference such that my promise case, but not Kamm’s bomb case, counts as a trolley-type case, where whatever principle justifies flipping the switch in the trolley case also justifies breaking the promise to Tom? I’m not seeing what principle it is that you think justifies both flipping the switch in the trolley case and breaking the promise to Tom in the promise case.

  8. Doug,
    Yeah, Heath pretty much sums it up.
    “Which requirement(s) does he violate when he simultaneously makes the promises to Tom, Rick, and Harry? Does he thereby violate the requirement to keep his promise to Tom or the requirements to keep his promises to Rick and Harry?”
    Why does this matter? If he is in a situation where he would violate one identical requirement no matter what he did, then he already is in a situation where all he has to decide is which requirement to violate. In the constraint cases, this is not the case. There it is assumed that there is a way of acting without violating any constraints. And, I think, the question of directing requirement-violations makes a difference to the situation, as illustrated by the trolley case.
    The getting-high case is similar after Smith has taken the drug. By doing this, Smith puts himself into a situation where there is no option for him that would not violate some constraint no matter what he did. The getting-an-additional-dose case works in the same way. When he gets the new dose, it is no longer a question of directing a constraint violation that would necessarily happen.
    I agree that perhaps the time of the constraint violation isn’t the time of making the promise. I realise that this wasn’t essential to my point. The substantial point seems to stand though that there is a relevant difference between cases where there is a way of acting that doesn’t violate a constraint and cases where a constraint violation is guaranteed by the circumstances. That there is no way not to violate constraints may make aggregation issues relevant in a way that they wouldn’t otherwise have been.
    I guess all I’m asking is, could you give a case where one is not guaranteed to violate moral requirements or constraints and where still we get a similar difference between AFR and VFR? I think that would make your case much more robust.

  9. Doug,
    Is the idea that although both views, AFR and VFR, are agent-centred, only the latter is also time-centred? So AFR says: each agent at each time should act so as to minimise the number of constraint-violations done by this agent at any time. But VFR says: each agent at each time should act so as to minimise the number of constraint-violations done by this agent at this time.

  10. Doug,
    Without trying to parse what the right conclusions from your experiment should be, in the interest of general knowledge and future experimentation, here was my reasoning:
    If you make a promise, and subsequently make a second that conflicts with the first, the second one doesn’t count. Making promises which at the time you know or are in a position to know are impossible or wrong to keep, makes the promise invalid, a failed speech act. (Not to say you did nothing wrong, just that you don’t incur an obligation to keep the promise. You can’t get out of a promise by making a subsequent conflicting one that you like better.) So if one promise had come before the other, I would have said go with the first promise.
    However, as you described the case, three promises are made simultaneously, not all of which can be kept. So my thought was that none of the promises are binding. In that case, there are no deontological constraints on the decision, and so we fall back on consequentialist reasoning. That tells us to anesthetize the two rather than the one.

  11. Jussi,
    In the Bomb Case, A1 puts himself in a situation where there is no option for him that would not violate some constraint. And yet we don’t think that it’s permissible for A1 to simply choose to do what will entail that he commits the fewest constraint violations. Doesn’t this show that the mere fact that in the Promise Case Smith must violate some constraint is not what accounts for our intuition that he should do what entails his committing the fewest constraint violations? So shouldn’t we infer that it’s AFR and not VFR that explains our intuition in the Promise Case? And if you admit that an agent doesn’t violate a patient’s constraining right to have the agent’s promise to do x at t kept until t rolls around and the agent fails to do x, then doesn’t VFR imply that Smith should give the double dose to Tom on Saturday? After all, Smith comes up against Tom’s constraining right first, and according to VFR he can’t violate this constraining right even as a means to minimizing his promise breakings overall.
    I guess all I’m asking is, could you give a case where one is not guaranteed to violate moral requirements or constraints and where still we get a similar difference between AFR and VFR?
    How could we test for whether it is AFR or VFR that best accounts for (1) except by trying to figure out whether we have intuition (3), where one is guaranteed to either violate fewer constraints now or more constraints later?

  12. Doug,
    You characterize VFR as committed to the following: “‘the agent’s own act is special only in that it makes him come up against the constraining right’ of his would-be victim. This constraining right acts as a barrier against the permissibility of treating him in certain ways, such as treating him as a means to the minimization of rights violations overall.” You want to test this (against AFR) with 3: “It would be impermissible for A1 to violate C1 at T1 even if this is the only way (short of doing something even worse) for A1 to prevent herself from violating C1 on each of five separate future occasions: T2-T6.” You say that if we reject 3, then we reject VFR (or accept AFR). You illustrate our rejection of 3 with the intuition that Smith should keep his promise to Rick and Harry and break his promise to Tom.
    But I’m not seeing why VFR entails 3. Why can’t proponents of VFR say that in the Smith case there are three identical constraining rights *against the same agent* that must be traded off against each other, and when that happens, the numbers determine how to arbitrate the conflict of rights? The italicized portion is supposed to explain why in the second case, where the constraining rights are held against different agents, Smith and Jones, the numbers do not determine how to deal with the trade-off. Thus, permissibility in these cases still is at bottom a matter of what constraining rights victims have, but the question of when patients’ constraining rights are overridden by other patients’ constraining rights is sensitive to facts about which agents are constrained by those constraining rights. Also, letting the numbers count in this way is consistent with the part of VFR that says that “this constraining right acts as a barrier against the permissibility of treating him in certain ways, such as treating him as a means to the minimization of rights violations overall” insofar as that holds true in cases of inter-agent but not intra-agent conflicts of rights.

  13. Also, I guess that if you make three simultaneous promises which at the time you can keep, and then subsequent events (caused by you or not) make you unable to keep all of your promises, I would once again fall back on consequentialist reasoning, including “breaking a promise” on the negative side of the scale at some weight I am not precisely able to determine.

  14. Campbell,
    I’m not sure. I doubt that the advocates of VFR would be happy with what you say here: “But VFR says: each agent at each time should act so as to minimise the number of constraint-violations done by this agent at this time.” They would say, I suspect, that VFR says at no time should an agent choose to violate a constraint, even when choosing to violate a constraint is the only way to prevent oneself from necessarily committing more numerous constraint violations in the future.

  15. Interesting. A lot seems to turn on whether Doug has genuinely identified an “agent centered constraint” in his example of multiple promises, or whether it’s something weaker, at least at the moment when the drugs are to be provided or refused. I feel inclined to say that one can’t be in a situation “where one is guaranteed to either violate fewer constraints now or more constraints later”, because where there are genuine constraints, it follows that we can always act in a way that does not violate them. Intuition (3) might not therefore hold for cases of genuine constraints, because there is in these cases, by definition, always something better to do than violating C1.

  16. Josh,
    Why can’t proponents of VFR say that in the Smith case there are three identical constraining rights against the same agent that must be traded off against each other, and when that happens, the numbers determine how to arbitrate the conflict of rights?
    Wouldn’t that necessitate saying that, with regard to the Bomb Case, there are six identical constraining rights against the same agent that must be traded off against each other, and when that happens, the numbers determine how to arbitrate the conflict of rights?

  17. For what it’s worth, I was thinking earlier that what Heath says here was a possible line (when I had misread the prompt as saying that the promise to Tom was made before the promise to Rick and Harry). I would put it this way: promising is a moral power, but once Smith makes the promise to Tom, he’s lost some of his moral power to promise (think of a person’s property rights as a legal power and imagine that I sell someone the development rights to my plot — I no longer have the legal power to sell you my plot for development).
    I don’t agree with this view, but it seems like a non-crazy view of promising, and it might entail that Smith is morally required to honor his promise to Tom. In effect, he hasn’t made any genuine promise to R&H with the content he was trying for.
    (I tried to post something with this content earlier but I seem to have forgotten to go through the anti-spam stage.)

  18. Simon,
    As ‘agent-centered constraint’ is typically defined, such constraints prohibit, in some circumstances, the performance of certain act-types even when performing that act-type would minimize comparable instances of that same act-type. Clearly, so defined, many people think that there is an agent-centered constraint against breaking a promise. And if that’s right, then I think that you are probably mistaken in thinking that there can’t be a situation in which one is guaranteed to either violate fewer constraints now or more constraints later. Indeed, why not think that in the first case I describe Smith is in precisely that situation?

  19. Jamie and Heath,
    I wonder why you two assumed that Smith did not have the ability to keep all three promises when he made them. In the prompt, I said that “Smith has [referring to the present] exactly two doses.” I never said that Smith didn’t in the past have four doses.

  20. Doug,
    I guess it might, as stated. Could we finesse things a bit, though? For example, maybe proponents of VFR could adopt the intra-agent numbers principle for certain rights (the ones that normally look agent-focused, like the right to have promises made to you kept) but not for others (the ones that normally don’t, like the right to not be killed). Also, I guess I’d like to distinguish between Kamm’s own version of VFR and VFR in general. Maybe some versions of VFR could simply reject Kamm’s (and ordinary?) intuition in the Bomb Case. All of Jack Bauer’s fans wouldn’t think that is too tough a bullet to bite, and permitting the sacrifice of one patient and her rights for many more is consistent with making patients’ moral status the ultimate basis for permissibility.

  21. Everyone:
    If I had said explicitly in the prompt that Smith made all three promises simultaneously on Wednesday and that when he made those promises he had four doses, but that on Thursday he uses two doses on himself, leaving himself with only two doses when Saturday rolls around, would this have changed your vote? If yes, how so and why?

  22. Josh,
    maybe proponents of VFR could adopt the intra-agent numbers principle for certain rights (the ones that normally look agent-focused, like the right to have promises made to you kept) but not for others (the ones that normally don’t, like the right to not be killed)
    If I understand you, you’re suggesting that we adopt AFR for some constraints but VFR for others. But why do we need to adopt VFR for any constraints? That is, what is it that VFR explains that can’t be explained by AFR? If you can’t point to anything, then I think that postulating VFR is unnecessary. We should just go with the simpler explanation: AFR alone.
    Maybe some versions of VFR could simply reject Kamm’s (and ordinary?) intuition in the Bomb Case. All of Jack Bauer’s fans wouldn’t think that is too tough a bullet to bite.
    I’m a fan of the show and I don’t recall Bauer ever killing one innocent person to save a mere five lives. When Bauer kills someone, it’s to save thousands, if not millions. Saying that A1 can throw a currently non-threatened bystander onto the bomb to save two or even five lives seems like a pretty tough bullet to bite to me. But, perhaps, you have stronger teeth and jaw muscles.

  23. Doug,
    Sorry, maybe I’m misunderstanding the scenario, but I don’t see why it matters how many doses Smith had when he made the promises. (The point isn’t that you cannot promise to do something not in your physical power.)

  24. Jamie,
    Heath said: “If you make a promise, and subsequently make a second that conflicts with the first, the second one doesn’t count. Making promises which at the time you know or are in a position to know are impossible or wrong to keep, makes the promise invalid, a failed speech act.” If he had four doses, then Smith’s promises didn’t conflict when he made them. If he had four doses, then none of the promises were made at a time when it was not possible for Smith to keep them. Thus they are all valid.

  25. Doug,
    I wasn’t suggesting that AFR is true for some constraints and VFR for others. I was instead suggesting that perhaps VFR is numbers-sensitive for some constraints but not for others.
    As for why we should adopt VFR, I have a personal preference for VFR because I think that ultimately what makes right acts right is the moral status of patients. But obviously I can’t justify this principle here and anyway it faces potential landmines (like non-identity stuff) that need to be neutralized. All I’m looking for here is to see whether VFR can be made consistent with the cases you’re raising.
    Regarding Bauer, fair enough about your intuitions. Along with various other influences (like reading philosophy), that show has destabilized my intuitions enough that I no longer know what to think about numbers. But I do think that any approach to numbers is consistent with letting patients’ moral status be the ultimate determinant of permissibility, and so long as numbers aren’t the only thing that decides trade-off cases, presumably we can retain some rights to boot.
    I’m about to be slammed with other tasks, so apologies in advance if I don’t come back to this post, or don’t come back to it for a while. I’ll be sure to keep an eye on it in any case. Thanks for the interesting discussion!

  26. I see. Okay, so I didn’t have in mind quite what Heath had in mind.
    My point about moral powers isn’t especially about what you know or are in a position to know is impossible or wrong (I do think that kind of proviso about promising is independently plausible). It’s structurally like the property right example. When I own some land, I have rights in it, which can be thought of as a kind of legal power. I can by a legal act give other people rights (by selling the development rights, for instance). But once I do so, I have lost some legal powers; there are thereafter things I could have done to the legal landscape but can do no longer.
    Maybe when I make a promise, in my exercise of a moral power (creating an obligation) I also dispose of some moral power. There are now things I cannot obligate myself to do just by the conventional act.
    So, once Smith has promised to give doses to Tom, he no longer has the power to obligate himself to give doses to R&H in the circumstance in which if he gave doses to R&H he could not give doses to Tom. Imagine that he makes the promise to Tom, and then apparently makes the promise to R&H. Then push comes to shove: he has only enough to give to Tom or to R&H. He gives the doses to Tom. R&H say, “But you promised to give us doses, you are obligated to give them to us.” Maybe the answer is: no, Smith did not obligate himself to give you the doses in that situation. He could not, since he had given up the power to make such an obligation. He could rightly say, “Sorry R&H, these doses are not mine to give you.”
    I’m not asking you to agree with this (since I do not, myself), but don’t you think this makes sense? Couldn’t promising be thought of as a moral power; and if it is, then couldn’t it work in this way?

  27. Josh,
    What’s the difference between (1) a victim-focused rationale of a constraint, C1, that allows the numbers to determine how to arbitrate cases in which one must either violate C1 once now or violate C1 two or more times in the future and (2) an agent-focused rationale of C1?
    Typically, the distinction between VFR and AFR is spelled out in terms of whether or not they are compatible with its being permissible to minimize one’s own constraint violations over time. Once you drop that out of the equation, I cease to understand what the distinction is. What do you take the distinction to be, since it’s not what I’ve suggested?

  28. Jamie,
    The general view makes sense, but I don’t see why you think that how many doses Smith had when he made the promises is irrelevant. If Smith has four doses, then it would seem that Smith has the moral power to promise two of his four (not specifying which two) to R&H even after promising two of his four doses (but not specifying which two) to Tom. Smith doesn’t lose his moral power to promise away the remaining two doses after promising away two of the four doses.
    So I don’t understand why you say this: “once Smith has promised to give doses to Tom, he no longer has the power to obligate himself to give doses to R&H.” If I have two houses, can’t I promise you that I’ll give you one of them (not specifying which one) and then turn around and promise someone else that I’ll also give him one of my two houses (not specifying which one)?

  29. Doug,

    So I don’t understand why you say this: “once Smith has promised to give doses to Tom, he no longer has the power to obligate himself to give doses to R&H.”

    Well, actually, I didn’t say that. I said:

    So, once Smith has promised to give doses to Tom, he no longer has the power to obligate himself to give doses to R&H in the circumstance in which if he gave doses to R&H he could not give doses to Tom.

    So the idea is, in making his promise to R&H, what he was trying to do was to create an obligation without exception to give them two doses in a situation in which he had two doses; but he could not do that, since in any such situation those two doses would not really be his to give them (since he had already transferred the power when he made his promise to Tom).

  30. What seems to be going on is this. It is possible to get yourself into a position where there are a number of putative constraints that cannot all be satisfied; Doug is wondering what you are supposed to do then. The deontologist’s first line of defense is to look for a lexical ordering scheme: Jamie and I have suggested that temporal priority orders promising, but you can imagine other schemes. Maybe the biggest promises, or the ones to closest friends, get priority. Doug responds by altering the case (stipulating simultaneity of promises) to make the lexical ordering gambit fail. Then what?
    One response is to look for an infallible ordering scheme, but I don’t think that’s plausible. Another response, which appeals to me, is to say (effectively) that all deontological bets are off; you have to make your decision without (much) recourse to constraints at that point. That strikes me anyway as plausible, but I don’t think it will tell Doug what he wants to know about AFR vs. VFR.

  31. Jamie,
    I apologize for being slow on the uptake. I’m still not sure that I completely understand the view. The following might help, though.
    Suppose all of the following is true: (1) Smith has, at t1, four doses; (2) Smith promised, at t1, to give Tom two of those four doses at t4; (3) at t2, Smith can (in the ought-implies-can sense of ‘can’) complete a course of action in which he both gives Tom two doses at t4 and gives R&H each one dose at t4; and (4) Smith will take two of the four doses himself at t3 and this means that, at t3, Smith won’t be able to both give Tom two doses at t4 and give R&H each one dose at t4.
    On this view, is it in Smith’s moral power to make a promise, at t2, to R&H that he will give them each one dose at t4?

  32. Doug,
    A quick (but hopefully not incoherent) reply. You write: “What’s the difference between (1) a victim-focused rationale of a constraint, C1, that allows the numbers to determine how to arbitrate cases in which one must either violate C1 once now or violate C1 two or more times in the future and (2) an agent-focused rationale of C1?”
    The difference, as I see it, is in the content of the rationale. By your stipulation here, C1 is the same for both. (And as I’ll focus on in a second, C1 here is different from your (1) in the original post, but the same would go for (1) as well.) So the difference is not in what is rationalized, but in what does the rationalizing. As you put it in the original post, the difference will come down to something like whether the rationale for C1 essentially contains reference to the agent’s special concern for her own agency, or whether it instead essentially contains reference to the moral status of the patient (including possibly her relations to various agents).
    You also write, “Typically, the distinction between VFR and AFR is spelled out in terms of whether or not they are compatible with its being permissible to minimize one’s own constraint violations over time. Once you drop that out of the equation, I cease to understand what the distinction is. What do you take the distinction to be, since it’s not what I’ve suggested?” In the original post, though, you distinguished them in terms of how each rationalizes (1), not this sort of spelling out, which is closer to distinguishing them in terms of whether each is compatible with (C1). (As a side note, my own reading of things (which might, at this point, be a bit dated, not to mention being the product of foggy memory) is that victim-focused and agent-focused approaches to ethics are supposed to be rival accounts of how to make sense of all of our obligations, including not only agent-centered constraints but every obligation.)

  33. Doug, on the story presented in your last comment: Isn’t it arguable that Smith has already violated a constraint imposed by VFR by getting high at t3, so that what he must do at t4 is just minimize the harm to the group?

  34. Doug,
    Since I don’t believe this view I’m trying to explain, I’m not going to spend much more time on this. But, here goes.

    On this view, is it in Smith’s moral power to make a promise, at t2, to R&H that he will give them each one dose at t4?

    I guess it depends on exactly what is entailed by his making such a promise. For Moral Power theorists, the fundamental facts are the facts of just what somebody is obligated to do. Moral powers change these facts.
    So the idea is, Smith cannot obligate himself to give R&H a dose each in a situation in which he has only two doses. Because, as I said, in such a situation, those doses will not be his to give (he’s obligated to give them to Tom).
    Now, does this mean he can ‘make a promise to R&H’ to give them each a dose? Well, he did obligate himself to give them a dose in the situation in which he has some he is empowered to give; but not to give them a dose in the situation in which the only doses he has are the ones he is obligated to give Tom. Those are the normative facts, so to speak; what we say about whether he can make the promise you ask about is just a matter of how we choose to describe it.

  35. Josh,
    The difference, as I see it, is in the content of the rationale.
    What’s the content of each of the two rationales? That is, what’s the content of the victim-focused rationale, and what’s the content of the agent-focused rationale, for its being permissible for Smith to break his promise to Tom so as to prevent his breaking his promises to Rick and Harry, but impermissible for Smith to break his promise to Tom so as to prevent Jones from breaking his promises to Rick and Harry?

  36. Simon,
    Doug, on the story presented in your last comment: Isn’t it arguable that Smith has already violated a constraint imposed by VFR by getting high at t3, so that what he must do at t4 is just minimize the harm to the group?
    What constraint has he violated by getting high at t3? There’s a constraint against breaking his promises, but I don’t see that Smith has broken any promises at t3 any more than A1 has killed anyone at any time between when he sets the bomb to go off and when he shoots V6.

  37. Doug,
    sorry for being out of loop and not following the discussion. About this;
    “In the Bomb Case, A1 puts himself in a situation where there is no option for him that would not violate some constraint. And yet we don’t think that it’s permissible for A1 to simply choose to do what will entail that he commits the fewest constraint violations. Doesn’t this show that the mere fact that in the Promise Case Smith must violate some constraint is not what accounts for our intuition that he should do what entails his committing the fewest constraint violations? So shouldn’t we infer that it’s AFR and not VFR that explains our intuition in the Promise Case? And if you admit that an agent doesn’t violate a patient’s constraining right to have the agent’s promise to do x at t kept until t rolls around and the agent fails to do x, then doesn’t VFR imply that Smith should give the double dose to Tom on Saturday? After all, Smith comes up against Tom’s constraining right first and according to VFR he can’t violate this constraining right even as a means to minimizing his promise breakings overall?”
    I’m not sure about the bomb case. In the situation A1 is in, he can either let other people die as a result of a threat he has placed or kill an innocent bystander. He is not directing an earlier threat towards that person. That seems like a relevant distinction in that case, whereas in the promising case the promises have already been made in such a way that they cannot all be kept.
    Also, I don’t see that the later hypothetical questions follow. A second before t, Smith is in no position to keep all his promises. If that’s true, then VFR doesn’t come into play when Smith reaches the last point of making the choice.
    “How could we test for whether it is AFR or VFR that best accounts for (1) except by trying to figure out whether we have intuition (3), where one is guaranteed to either violate fewer constraints now or more constraints later?”
    I don’t think this sort of question necessarily comes down to questions about the extensions of the views. They might be co-extensive in the end. I think the more interesting question is a substantial question in another way. Is it really some properties of persons that we face in the world as agents that safeguard them from certain sorts of actions, or do our possible qualities as agents limit what we can do to others? I think this sort of question can be approached by looking at what qualities of patients and agents could be in question and how plausible it is that they play a primary role in what obligations we have.

  38. Doug,

    VFR says at no time should an agent choose to violate a constraint, even when choosing to violate a constraint is the only way to prevent oneself from necessarily committing more numerous constraint violations in the future.

    This sounds to me like time-centredness. An agent-centred theory says: don’t worry about whether constraints are violated by other agents; just do what you can to avoid constraints being violated by you. And a time-centred theory says: don’t worry about whether constraints are violated at other times; just do what you can to avoid constraints being violated now. AFR seems to be what you get if you endorse only the first sort of centredness, whereas VFR is what you get if you endorse both.
    Why is this important? Well, someone might argue that AFR is an unstable mix of centredness and non-centredness. If you endorse agent-centredness, why not also time-centredness? (This is similar to an argument Parfit gives against the theory he calls S in Reasons and Persons.)

  39. Doug,
    Spelling out those rationales is (obviously) a big task, one that I’m not up to here, but the kind of thing I had in mind (and it might sound like I’m repeating myself a bit) is just something like this.
    VFR: Smith may break the one promise but not the other fundamentally because this is the best way to respect the moral status of all of the patients involved.
    AFR: Smith may break the one promise but not the other fundamentally because of Smith’s special responsibility for his own promises.
    Of course, each has to say a lot more, but whatever else we want to say about them, they seem distinct. (I guess at this point what I’m saying and what Jussi most recently said are dovetailing a bit.)

  40. What constraint has he violated by getting high at t3?

    A ceteris paribus constraint against knowingly or negligently putting himself in a position where he cannot keep his promises (explained by the constraining rights of the promisees not to be treated in this way by the promiser)?

  41. Josh,
    Thanks. That’s helpful.
    Simon,
    Fair enough. But I’ve lost track of the dialectic now. Is this supposed to somehow support your contention that “one can’t be in a situation ‘where one is guaranteed to either violate fewer constraints now or more constraints later'”?

  42. “To make things a bit less abstract, imagine that A1 earlier set up a bomb that will kill V1-V5 unless A1 now shoots V6 and places her body over the bomb. (I borrow this case from Kamm.) Assume that shooting V6 is the only way (short of doing something even worse) for A1 to save V1-V5. Assume, then, that A1 cannot save the five by throwing her own body on the bomb.”
    I do not mean to be snarky, but this seems unrealistic. Why would she want to save those she intended to kill? If A1 changed her mind, why wouldn’t she simply warn V1-V5 of the bomb and then be a hero if they did not know that she intended to blow them up? Or why doesn’t A1 retrieve the bomb? How is A1 detonating the bomb? If it is by remote, then she does not have to set it off. If it is timed, why should we think that V1-V5 will all be there? If she tricked them into being there, why would we assume she then changed her mind? It could happen, but…
    I know that many philosophers like to play what-if games, but many, if not most, of them seem out of touch with reality. (Or at least I do not see the connections.) There needs to be some discussion regarding the motivations of the agents in these scenarios. Without that, they seem out of touch.
    This harkens back to Smith promising Tom, Rick and Harry something he knows he cannot fulfill. Why would he do this if he understood the normative implications of promise making? If he knows the implications of promise making, then he would not make promises he knows he cannot keep. If he knows the normative implications of promise making and still makes promises he knows he cannot keep, then why would we think he cares how the problem is solved? Why would we worry about what he should do? He shouldn’t make promises he knows (or should know) he cannot keep. What is his motivation?
    I used to play what-if games when I was in business. These games were played to determine possible outcomes given real possible situations that could happen in the marketplace, to help me determine what I should do if these hypothetical conditions were to become actual. When these conditions became actual, the game became real and the real outcome affected real people; many of them were severely negatively affected. But understanding these outcomes was part of the motivational aspect of ‘playing the game’ because I wanted to do the right thing. It seems to me that this discussion would be well served if someone would come up with a real historical example and discuss that example instead of a hypothetical that seems unrealistic at some rather key points. Unfortunately, I cannot think of one.

  43. Doug – yes, precisely, it is supposed to support the contention that you can’t be in a position where you necessarily violate a constraint. This is in line with points Jussi and Heath made earlier, I think.
    Maybe a clearer way to see the dialectic is this:
    You said in your post that acceptance of principle (2), illustrated by your bomb case, does not support VFR over AFR, since the relevant constraint (treating people as a mere means), which is explained by AFR, may already have been violated when the new lethal threat is introduced. Similarly, I want to suggest that your second case of Smith’s promise breaking does not support AFR over VFR and the rejection of (3), since the relevant constraint (roughly: making oneself unable to keep a promise), which is explained by VFR, may already have been violated when the promise is broken.

  44. Simon,
    it is supposed to support the contention that you can’t be in a position where you necessarily violate a constraint.
    I don’t see this at all. It seems to me that there is a constraint against breaking one’s promises and that, in the Promise Case, Smith is in a position where he will necessarily violate this constraint. Surely, you don’t think that there is a constraint against making oneself unable to keep a promise but no constraint against breaking a promise that one is able to keep. Do you? So do you deny that there is a constraint against breaking a promise that one can keep? Or do you deny that Smith is in a position where he will necessarily break some promise that he can keep?
    So I admit that Smith has violated the constraint against doing what will make him unable to keep his promises. But that’s in the past. He now faces a situation where he must break some promise that he can keep. And it seems to me that AFR and VFR (at least as I stated them) make different predictions about whether Smith should break his promise to Tom. VFR implies that he shouldn’t, and AFR implies that he should.
    I do concede that one could reformulate VFR along the lines that Josh suggests and that in that case VFR doesn’t imply that Smith shouldn’t break his promise to Tom. But I have some doubts about Josh’s approach. I would need to hear more from Josh about why, in the Promise Case, Smith’s breaking his promise to Tom is the best way to respect the moral status of all of the patients involved even though, in the Bomb Case, A1’s throwing V6 on to the bomb is not the best way to respect the moral status of all of the patients involved.

  45. John,
    I happen to think that thought experiments, even unrealistic ones, can be useful in philosophical inquiry. At some time in the future, we may have a discussion about this here at PEA Soup, but this is not that time — at least, I’m not interested in having this discussion now. I’m not saying that it’s not legitimate to worry about whether thought experiments have any value. I’m merely pointing out that, for the sake of this post, I’m just going to assume that thought experiments are useful. If you don’t accept this assumption, then this post isn’t going to interest you.

  46. Surely, you don’t think that there is a constraint against making oneself unable to keep a promise but no constraint against breaking a promise that one is able to keep. Do you?

    When I formulated a rough constraint against making oneself unable to keep one’s promises, I put it in ceteris paribus form. It is implausible to think that one is under a constraint, e.g., to be able to meet for dinner at the promised time if one would have to abstain from saving a drowning child in order to do so. A proponent of VFR might say that that’s because one could still adequately respect the moral status of the promisee even while making oneself unable to keep the promise, if one acts for such a compelling reason.
    The constraint against actually breaking a promise one is able to keep will plausibly have the same ceteris paribus form. So the question here is not: Is there a general constraint against breaking promises one is able to keep? It is: Is Smith under a constraint not to break each of the promises he has made, given that he must break at least one of them? I think the answer to this question is “No”. Now that Smith has got himself into this situation, failing to keep the first of his promises in order to keep the second does not demonstrate any (additional) failure to respect the moral status of the first promisee.

  47. Simon,
    Okay, I see now how you can coherently say that, in the Promise Case, Smith is not in a position where he will necessarily violate a constraint. I’m not sure, though, that this really supports your contention that a person can never end up in a position where she will necessarily violate a constraint. It sounds more like a statement of your position, a position which I find implausible. But I see now how it’s a coherent position.
    Now that Smith has got himself into this situation, failing to keep the first of his promises in order to keep the second does not demonstrate any (additional) failure to respect the moral status of the first promisee.
    So you want to say: now that he has gotten himself into this situation, “failing to keep the first of his promises in order to keep the second does not demonstrate any (additional) failure to respect the moral status of the first promisee.” Presumably, then, you think that he has failed the first promisee merely by having gotten himself into this situation — that would be why you say there’s no additional failure. But do you want to say that he has failed Tom by getting himself into this situation even if he keeps his promise to Tom? After all, he can still keep his promise to Tom. So is the idea that whether or not Smith now breaks his promise to Tom, Smith fails Tom equally? That doesn’t sound plausible to me.
    I would say Smith hasn’t violated anyone’s constraining right yet, that Tom can still waive his right, and that if Tom doesn’t waive his right and Smith fails to keep his promise to Tom, then and only then does Smith violate Tom’s constraining right to Smith’s doing what he promised Tom he would do.

  48. Doug
    Fair enough. I was not trying to be snarky, but it did come across that way for which I apologize. I do realize the important role that thought experiments play; I use them myself. I was simply trying to suggest that lack of detail (realism) is problematic, at least for me.
    I do find your post interesting (as I did the others) so I would like to comment within the experiment you developed. We know that A1 planted a bomb and planned to kill V1-V5. A1 now has a change of heart and wants to save V1-V5, but can only do so by killing V6. There are two options given, AFR and VFR, to use to settle the moral permissibility of A1 (I am getting hungry for steak!) killing V6 to save V1-V5. I take it that A2-A6 are not relevant for this experiment; we want to know what A1 (like Smith) should do. My question is: are V1-V6 moral agents like A1? If they are, then VFR may not be a legitimate option. What role do the constraints play in determining what we (A1/Smith) should do? If we assume that the only way to save V1-V5 is to kill V6 and that this is morally defensible given one of the constraints in C1-C6, then should not V6 (A6) also see that her being killed is the morally permissible option? Let us assume that C1 is ‘we should not cause unnecessary and avoidable harm.’ A1 now realizes that killing V1-V5 violates this constraint, but that given the situation she has created the only option is to kill V6. Killing V6 does not violate C1; killing V6 is necessary and unavoidable if saving V1-V5 is going to happen. I suggest that there is a Constraint Focus Rationale (CFR) that overrides AFR and VFR. Once we know what the constraints are, then we will know what A1 should do.
    Consider the following: Imagine that A1 is the owner of a company and needs to terminate enough employees to save the costs associated with keeping them employed. If A1 does not terminate these people, the entire company will fail. Let us assume that A1 can accomplish this by terminating 5 employees (V1-V5). Upon reflection he realizes that he can accomplish the same by terminating one VP (V6). Everything else being equal (e.g., no one wants to voluntarily lower their income to save the jobs of V1-V5), he only has these two options other than letting his company fail and harming many more than 6. The harm to V1-V6 will be equal in terms of lost lifestyle (even though V6 has a higher lifestyle than V1-V5 do individually, the resulting lifestyle will be the same for those terminated). Should A1 terminate V6 and thereby save V1-V5? Given CFR it seems that A1 should do this and that it would be morally impermissible for A1 to do otherwise. I believe that A1 should terminate V6 and that V6 should, as A6, understand and accept this decision.
    If this has already been discussed then I apologize (it must be Sunday) for restating what others have already put forth.

  50. Doug – these are good questions, I didn’t really want to defend a comprehensive view of your case, but I’ll take a stab.
    When I imagined Smith violating the constraint against making himself unable to keep his promises, I imagined him doing so in a way that was neutral between the promisees. That is, I assumed he didn’t think to himself “Only Rick and Harry really matter, forget Tom, I’ll use the doses I promised him to get high.” If he had acted on this intention, and then had followed through by keeping his promise to Rick and Harry, I don’t think he would have violated Rick and Harry’s constraining rights at all. I do think he violates Tom’s constraining rights by this decision at t3, though, even though it’s only later that the promise to him will be actually broken. And he would still have violated them even if Tom releases Smith from the promise after t3.
    The more natural way to think of Smith’s intention at t3 is something like: “I know I made all those promises, but I don’t care, I’m going to use a couple of doses now to get high.” Here it seems plausible that he’s violated all of his promisees’ constraining rights, by failing to act in a way that respects their status as his promisees. (Imagine that Smith had accidentally overdosed and died, and it was later discovered that he had used up two of the doses that he had promised. Wouldn’t Tom, Rick, and Harry each have some justification for thinking that Smith had not treated them as he was morally required to, even though he never actually broke a promise to any of them?)
    Now you might think that Smith violated Tom’s constraining rights a little bit on using up the dose, and then again some more on actually breaking the promise. On the alternative, “forget Tom” construal of the case I just offered, probably he does – because his disrespect for Tom is manifested not only in his getting high, but also in his (presumed) failing to care about actually breaking the promise when he does so later on. However, on the more natural construal of the case, I don’t think he does violate Tom’s constraining rights after t3. When he actually breaks the promise to Tom, we imagine him thinking: “Now I regret that I can’t keep all my promises, however important they are. Now I must make a difficult choice, and I should try to mitigate the consequences of my previous bad behavior.” Even though Tom loses out materially at this point, there is no insult in this. He is like the dinner partner who is stood up by someone busy saving a drowning child.
    Your use of the phrase “Smith fails Tom” I think papers over this distinction between a failure of respect and a failure to keep a promise in material fact. It’s the first kind of failure, a moralized kind of failure, that seems most likely to bear a close relationship to anyone’s (morally) constraining rights.

  51. Simon,
    You ask, “Wouldn’t Tom, Rick, and Harry each have some justification for thinking that Smith had not treated them as he was morally required to, even though he never actually broke a promise to any of them?”
    I don’t think so, no. They would have good justification for thinking that Smith was going to fail to treat at least one of them as he was morally required to. But they don’t know which one(s). And if I were Tom, I would think only that Smith is required to give me the double-dose; that, after all, is what he promised me. I would not think that Smith violates my rights merely by failing to keep enough doses to fulfill all of his promises. He never promised me that he would do what he promised others he would do. He only promised me that he would give me a double-dose. And if he does that, I have no grounds for complaint. What’s more, if he does that, he hasn’t violated any of my rights. So I don’t think that there is justification for Tom’s thinking that Smith had not treated him as he was morally required to treat him.
    You also write:

    The more natural way to think of Smith’s intention at t3 is something like: “I know I made all those promises, but I don’t care, I’m going to use a couple of doses now to get high.” Here it seems plausible that he’s violated all of his promisees’ constraining rights, by failing to act in a way that respects their status as his promisees.

    This implies that Smith violates some constraining right of Tom’s even if Smith gives him the double-dose. And this implies that so long as Smith has this intention at t3 and follows through with this intention, Smith violates some constraining right of Tom’s even if Tom releases Smith from his promise after t3 but before he goes into surgery. I take these both to be reductios of your position.

  52. Doug, your last reply ignores the distinction I pointed out between a failure to keep a promise in material fact, and a moralized failure toward the promisee. There’s only a resemblance of a “reductio” here if you assume either that these two are the very same thing, or at least that a necessary condition of a moral failure of a promiser to a promisee is a material failure to keep the promise. But this assumption is false, as I already explained. Moreover, you earlier apparently admitted that it was false, when you accepted that on getting high “Smith has [already] violated the constraint against doing what will make him unable to keep his promises”.
    If Smith took all four doses, it is quite clear, at least to me, that he would have violated his promisees’ constraining rights by failing to respect them and his obligations toward them, even if he later gets lucky and they release him from his promises, or he happens to find more medicine by chance.
    The only difference that Smith’s taking two doses makes, as I explained, is that now we would need to know about the content of his intentions when taking them in order to know which of his promisees he had failed to respect in doing so.

  53. Simon,
    I hold that a necessary condition of a moral failure of a promiser to a promisee is a material failure to keep the promise. Now, I did accept that, in getting high, “Smith has [already] violated the constraint against doing what will make him unable to keep his promises.” But just because Smith has violated a constraint doesn’t mean that he has violated anyone’s constraining rights. I believe that there are constraints without any correlative constraining rights — see, for instance, Case C.
    If Smith took all four doses, it is quite clear, at least to me, that he would have violated his promisees’ constraining rights by failing to respect them and his obligations toward them, even if he later gets lucky and they release him from his promises, or he happens to find more medicine by chance.
    We disagree. It’s quite clear to me that he doesn’t violate Tom’s constraining rights if either Tom releases him from his promise or he happens to acquire more medicine by luck and gives Tom the promised double-dose.
    I think that we’ve hit bedrock here. I find it absurd to think that Smith has violated Tom’s constraining rights if he does what he promised to do, whereas you think that it’s absurd to deny that Smith has violated Tom’s constraining rights if he knowingly acts in a way that makes it unlikely that he’ll be able to keep his promise to Tom.

  54. Doug – Wait, now I’m confused about the dialectic. You are now in the course of your argument simply asserting the existence of “constraints without correlative constraining rights” at t3. But you were supposed to be making an argument against VFR, which denies precisely that possibility!

  55. Simon,
    As I see it, VFR doesn’t entail that there are no constraints without correlative rights. It entails only that certain typical constraints have victims. As I see it, VFR and AFR are offering competing explanations for typical constraints such as the constraint against murder and the constraint against promise breaking. And I’m arguing that AFR offers the more plausible explanation for these typical constraints. For one, AFR can account for more constraints than VFR can — e.g., for constraints that don’t involve victims. And, for another, AFR can more easily account for certain intuitions such as the intuition that I have in the Promise Case.
    I realize, now (and thanks to you), that I set things up a bit too generally. But, hopefully, I can be forgiven for this mistake. The intuitions that AFR and VFR are meant to be providing rationales for are not (1) (after all, it’s not clear that we have intuitions about such schemas), but rather instances of (1).

  56. Doug, thanks for this; we seem to have taken the scenic route, and I’m still a bit confused about the dialectic. So I’m hoping I can point out a shortcut.
    Your argument from the Promise Case depends on the intuition that, in case 2, both Smith and Jones are under the very same substantive constraint (roughly: to keep one’s promises), and that Smith could, by passing the medicine on to Jones, make it the case that Jones doesn’t violate the constraint. I, along with others, objected to thinking of the constraint at issue that way, and I’ve suggested that it is better understood as a constraint to, roughly: keep one’s promises where one can. If Jones doesn’t possess the medicine, Jones can’t keep his promise, and so he isn’t about to violate that constraint. So that constraint could not provide a reason for Smith to give the medicine to Jones rather than Tom, whatever rationale you give for the constraint.
    Now you still want to say that AFR better accounts for your intuitions in the Promise Case. But it seems to me that this depends on the claim that the substantive constraint at stake is to keep one’s promises – in exactly the way that the interpretation of the bomb case depends on a claim about what the substantive constraint at stake is. If you want to say that your intuitions about the Promise Case provide evidence for AFR, then, you must be treating as part of your evidence your intuition about what the substantive constraint at issue is. Similarly, it seems to me, you must admit that the VFR proponent has equal and opposite evidence for her view from the bomb case, insofar as she has the intuition that what one should be most concerned to minimize is the number of one’s killings, rather than the number of instances of one’s treating people as a mere means.

  57. Simon,
    I think that there’s an important disanalogy between the Bomb Case and the Promise Case. In the Bomb Case, there’s a tension between two constraints: the constraint against killing and the constraint against treating people as a mere means. If A1 throws V6 onto the bomb, A1 decreases the number of people that she kills, but increases the number of people she treats as a mere means. And the proponent of AFR can claim that what one should be most concerned to minimize is the number of instances of one’s treating people as a mere means, rather than the number of instances of one’s killing someone.
    In the Promise Case, there is no similar tension between the constraint against breaking one’s promises and the constraint against doing what will make one unable to keep one’s promises. At the time of the agent’s decision, Smith (the agent) is deciding only whether or not to minimize the number of promises he breaks. He doesn’t face a situation, as in the Bomb Case, where he must choose either to minimize his violations of the one constraint or to minimize his violations of the other constraint.
    I admit, though, that the proponent of VFR could deny that there is a constraint against breaking promises that one can keep, and claim that there is only a constraint against breaking those promises that one can keep and whose keeping doesn’t prevent the breaking of more of one’s promises. I also admit that the proponent of VFR could deny that a necessary condition of a moral failure of a promiser to a promisee is a material failure to keep the promise. And I admit that if the proponent of VFR does either, then the Promise Case doesn’t present a problem for them. I just don’t find either plausible. This is where I think we disagree, but maybe I’m still not seeing what the dialectic is.
    In any case, it seems, contrary to what you suggest, that the move that the proponent of VFR has to make with respect to the Promise Case is not the same move that I made with respect to the Bomb Case.

  58. Doug – I see, I took the cases to be parallel because I understood the AFR proponent as simply denying that there was a constraint to minimize the instances of killings one performs in the bomb case. Now you’ve made it clear that you want to say that that’s just a less important constraint than the constraint to minimize the instances of treating someone as a mere means one performs. If so, you’re right that the two cases can’t be precisely parallel, though it’s still unclear to me why you think the difference between them is significant.
    Your latest explanation of the bomb case also highlights, again, the difference between our intuitions about what it means to be under a “constraint”. I always thought constraints were inviolable, pretty much by definition. (Is there a commonly understood difference between “constraints” and Nozickean “side-constraints” that I’ve missed until now?)
    I think I also may differ with you on the status and importance of intuitions, though perhaps you were speaking imprecisely. I don’t agree that if the VFR proponent merely denies that a certain constraint holds, or intuits that a certain constraint holds, then it follows that the Promise Case isn’t a problem for them. They may not think it’s a problem for them, but it would still be a problem for them if their denial or intuition turned out to be false. And a widely-shared intuition to the contrary, in my view, would be some prima facie evidence that their own denial or intuition is false. So I don’t think anyone should be arguing just from their own intuitions.

  59. Simon,
    You write: “Your latest explanation of the bomb case also highlights, again, the difference between our intuitions about what it means to be under a ‘constraint’. I always thought constraints were inviolable, pretty much by definition.”
    I would say that, by definition, there is a constraint against performing a certain act-type if and only if agents are prohibited from performing that act-type even in some circumstances in which performing that act-type would minimize overall commissions of that act-type.
    And we could distinguish between violations and infringements of constraints. If there is a constraint against promise-breaking, then that constraint is infringed whenever an agent breaks a promise. But the constraint is violated only when that infringement is wrong. So, in my explanation of the Bomb Case, I should have, in some instances, talked about infringing rather than violating a constraint. Hopefully, it’s obvious where the changes should be made.
    You write: “They may not think it’s a problem for them, but it would still be a problem for them if their denial or intuition turned out to be false. And a widely-shared intuition to the contrary, in my view, would be some prima facie evidence that their own denial or intuition is false. I don’t think anyone should be arguing just from their own intuitions.”
    I agree with the first and last sentences; I’m not sure about the middle one.
