A lot of moral theorists are sceptics about Act Consequentialism
(AC). For example, some of these sceptics think that we need to qualify AC
with the following two agent-relative elements: (i) “deontological constraints”, which
forbid us to do certain horrible acts even if those horrible acts
would make the world as a whole a slightly better place; and (ii) “agent-relative
prerogatives”, which allow us sometimes to pursue our own personal projects or
commitments even if we thereby fail to make the world as a whole as good a
place as we could have done. However, a lot of these sceptics think that AC
gets it exactly right about our positive duties of beneficence, such as our
duties to help those who are in need.

This seems to me a half-hearted compromise. Those who reject
AC, I suggest, should reject it root and branch. So I am drawn towards a more thoroughgoing
nonconsequentialism, according to which absolutely no moral duties – indeed, absolutely no reasons for action at all –
are agent-neutral in the way that AC thinks of our moral duties as being. On
this approach, then, the moral duty to help those in need would not just
consist in morality’s giving me the aim that those who are in need are helped.
It would consist in morality’s giving me the aim that I play a role in helping
those who are in need. In that sense, this approach makes this duty agent-relative.

Suppose that from an impersonal, agent-neutral point of view
the world as a whole will be equally good whether I play a role in helping those
who are in need or not. (Suppose that if I don’t help those in need, others
will step up and do the helping instead of me, etc.) According to AC, in this
situation, I have no compelling moral reason to help, since the world as a
whole will be equally good whether I do any helping or not. Indeed, from a
moral point of view, I should be quite indifferent between my helping and my not
helping. According to an agent-relative interpretation of the duty to help, on
the other hand, I should not be
indifferent. I should have a definite preference for the state of affairs in which
I play a role in helping those who
are in need, even if the world as a whole is not a better place as a
result.

I haven’t begun to work this idea out in detail, but I
suspect that according to the most plausible way of developing this more thoroughgoing
nonconsequentialist approach, reasons for action are always reasons for the
agent to put herself into the right
sort of relationship with the intrinsic values that are at stake in her
situation – where the “right sort of relationship with these intrinsic values”
may be a non-harming relationship, or
a protective relationship, or a creative relationship, but never just
the simple relationship of “promoting” that is favoured by AC.

101 Replies to “Agent-relative positive duties”

  1. I can’t tell whether you mean it to be a feature of the view that it has the consequence that if I were about to help someone in need whom you could equally well help, you would have a reason to push me aside so that you could do the helping. In any case, this does seem to be a consequence, and it seems rather implausible.
    (Honeymoon’s over, Wedgwood.)

  2. I also wonder how much of this would be an objection against AC. There are many ways in which AC theorists incorporate agent-relative constraints and prerogatives. One way to do this is the Smith way of indexing value to agents. So, what I ought to do is to maximise value[Jussi] where you ought to maximise value[Ralph]. If that’s possible, they can also agent-relativise the duty of beneficence. They can say something like: from my deliberative perspective, my helping others has consequences that are of more value[Jussi], whereas from your perspective your helping them maximises value[Ralph]. There are problems with agent-relativising value, but it doesn’t look like your idea creates a new problem.

  3. I wouldn’t say that in this case, I’d have a reason to push you aside in this way. (I don’t agree with Joseph Raz et al. in thinking that whenever you have a reason to pursue a goal you have a reason to take any means that would achieve the goal; you only have a reason to take good means to that goal.)
    Still, I would say that in this case, I might well have a reason to try to persuade you to let me do the helping instead of you, or at the very least to help you do the helping.

  4. Jussi, let me clarify: by AC I meant classical AC, according to which the sort of good or value that is to be maximized is itself an impartial and agent-neutral value.
    I agree with you that if we relativize the value that is to be maximized, then a “consequentialist” (or, as I would prefer to say, “teleological”) approach could accommodate my idea about agent-relative positive duties.

  5. Ralph,
    I agree with this part:
    reasons for action are always reasons for the agent to put herself into the right sort of relationship with the intrinsic values that are at stake in her situation
    But “the right sort of relationship” might mean butting out. Right?

  6. I don’t think I’m latching on to the spirit of this idea.
    Okay, so you wouldn’t push me out of the way just to be the helper, but you’d try to persuade me to let you do the helping. (I wonder what you’d say.) What if you see me reaching for the life preserver to throw into the water (there’s a drowning violinist)? If you sprint at top speed, you can probably grab the life preserver before I get to it and thus be the one to toss it into the choppy brine.
    I find it very difficult to believe that you have any reason to do that.

  7. Jamie,
    What about this watered-down version? If the suggestion is merely that I not be indifferent to my role in the beneficent action, then the non-consequentialist (of the sort we’re imagining) need not construe the duty as an obligation to directly do the saving. Instead, the obligation could be to take stock of the situation and act appropriately, where acting appropriately might include not only directly saving, but also, given certain circumstances, letting others do the saving (esp. where doing so is more efficient). (I guess this is similar to what Robert has suggested?)
    I’m not sure that this preserves everything behind the idea that “I should have a definite preference for the state of affairs in which I play a role in helping those who are in need, even if the world as a whole is not a better place as a result,” since it gives up the agent always having a preference that she play a role in helping. But it seems consistent with the agent having some sort of play-a-role preference. (And having it even when this doesn’t change the outcome.)

  8. Ralph,
    I’m interested to know how extensive you think our agent-relative duties to help are. If I have reason to prefer that I, rather than others, help those that I can, then it seems that I’ll have reason to prefer a state of affairs in which I sacrifice a great deal (say, to help the very needy) to one in which the same amount of help is provided through shared efforts involving less individual sacrifice. This suggests that your view may be even more demanding than standard act-consequentialism.

  9. It’s very hard for me to see how you can set up cases to decide between a version of NC on which one component is aiming for the agent-neutrally better options (within a conception where you have other goals as well) and the version you seem to favor. For surely, on any conception where my goal is my bringing about what we might otherwise think of as the neutral good, my bringing it about will be either better or worse for some others.
    Sometimes my putting myself in relation to this good in such a way that I bring it about will take some burden off someone else, and that will be better (neutrally). Or, if I shove someone aside or do the job less well than another (in which case I hinder the seemingly agent-neutral good), it will be worse. What it looks like you need is a case where the agent-neutral good (that is, the good as proposed by the NC theory, with constraints and permissions tacked on to pursuing the agent-neutral good) is equally well served either way, but one way of doing it puts me in the right relation and the other way does not. And then I suppose we are supposed to think that we still have a considered conviction that we ought to aim at our standing in the right relation.
    I’m not sure if my skepticism about coming up with the right kind of case has to do with the lateness of the hour, or just the difficulty of constructing the case . . .

  10. Jamie and Robert — Even if I have an agent-relative duty to help, I also have a duty to treat other agents with courtesy and respect! Given my (non-Razian) view of reasons, that’s enough to explain why I have no reason to try to frustrate Jamie’s efforts to help the drowning violinist. Instead, I should “butt out”, as Robert says; that is the right sort of (non-interfering) relationship for me to put myself into with the intrinsic value of Jamie’s virtuous action. Even in this case, though, it may be appropriate for me to feel at least some mild degree of regret for the fact that I didn’t help the drowning violinist.
    Josh and Brian — I used the phrase ‘play a role in helping’ precisely in order to allow that it may be principally by my shouldering my fair share of the burden of a cooperative activity of helping that I can best fulfill this agent-relative duty of aid. (Indeed, I suspect that this sort of reference to cooperative activity is going to play a large role in any plausible account of our moral duties.) I don’t envisage this approach as being as demanding as AC because it will presumably also contain “agent-relative prerogatives” as well as agent-relative positive duties.
    Mark — You’re right that it won’t be easy to come up with a case that (i) clearly reveals the difference between the view that there are agent-relative positive duties and all rival views, and (ii) clearly elicits a considered conviction in favour of the former view. To mount a convincing case for this view, it will probably be necessary to argue for it as part of a more general argument for the agent-relativity of all reasons for action. All that these cases can show, I suspect, is that agent-relative positive duties are a viable theoretical option, not that they’re clearly preferable to every possible alternative view.

  11. P.S. Let me answer Jamie’s parenthetical question: “What could I say to him to persuade him to let me do the helping instead of him?”
    The answer is: I would try to persuade him that he has already done his fair share of helping recently and deserves a rest, whereas I haven’t done my fair share of helping yet.
    (This doesn’t show that agent-relative positive duties are reducible to agent-neutral duties to promote fairness, because I should still have a special concern with whether or not I am doing my fair share of helping.)

  12. Ralph,
    Nice post. I have three questions:
    (1) Is the view teleological? You say some things that suggest that it is, such as,

    I should not be indifferent. I should have a definite preference for the state of affairs in which I play a role in helping those who are in need, even if the world as a whole is not a better place as a result.

    This sounds teleological, because it sounds like we should first rank possible outcomes (broadly construed to include the acts themselves) according to which we should prefer over which others, and, second, we should perform the act that produces the outcome we should prefer above all other available alternatives.
    (2) Is the view time-relative as well as agent-relative? Should I act so as to fulfill a current promise if this will prevent me from fulfilling two future promises? Also, in those instances where “the world as a whole will be equally good whether I play a role in helping those who are in need or not,” should I step up now to play a role in helping others even if this will prevent me from being able to step up more numerous times in the future?
    (3) Suppose that we each have an equally compelling reason to ensure that utility is maximized. Is this, on your view, a case where we all have the same agent-neutral reason or a case where we all have equally compelling, agent-relative reasons to ensure that the same state of affairs obtains? So what exactly is the distinction between agent-relative and agent-neutral reasons on your view?

  13. Ralph, I don’t get it. It was no part of the story that I had already done my fair share and you haven’t. Our situations are symmetric, so we are in competition. This is an essential feature of agent centered reasons: they can lead us to compete.
    I agree that we might be duty-bound to compete in a polite and respectful way. But again, it was no part of the story that it would be rude or disrespectful for you to grab the life preserver first. And it still seems an absurd thing for you to do.
    Josh:
    I don’t know, it’s too hard for me to see what this view amounts to. At the extreme, of course, one could always just take the intuitively right thing to do, in each situation, and claim that the agent had an agent centered reason to do that — I’m sure there will always be a ‘role’.
    There are lots of good cases to be made for agent centered reasons. There are the break-one-promise-to-prevent-five-breakings examples, some good ‘fair share’ examples, and the asymmetry that Doug Portmore and Ted Sider wrote about, between self-sacrifice and other-sacrifice. I haven’t yet seen Ralph’s case for the agent-centered only view.

  14. Ralph,
    I thought that you had classic AC in mind. However, I wonder if this makes the objection even more problematic. Part of classic AC is to be more of the foundationalist bent and to ignore our commonsense agent-relative intuitions. So, well-being is of ultimate value, it ought to be maximised, and everyone’s counts the same. Of course this view has all sorts of counterintuitive implications, from scapegoats to not saving friends. But so much the worse for the intuitions, says the classic ACer. That would also go for your intuition (which I think I have too).
    If the act-consequentialist wants to incorporate our moral intuitions, then she is bound to make some radical revisions in the axiology. At that point, she will no longer remain a ‘classic ACer’, and she has all sorts of ways to incorporate your intuition. So, I’m not sure I see how the objection gets a grip on AC.

  15. Doug — Here are my answers to your questions:
    (1) I’m happy to present this view in a teleological form (although I wouldn’t regard the teleological formulation as giving an explanation of the deontic formulation, just as a reformulation of it).
    (2) If we present this view teleologically, we would indeed probably have to make the value to be maximized both agent- and time-relative.
    (3) My idea is that, as Parfit puts it, agent-relative theories give different agents different aims, aims that can conflict with each other.
    Jamie — I’m afraid that I don’t get why you think it would be an “absurd thing for [me] to do” to throw the lifebelt first. I have conceded that it would be absurd if my doing so disrespectfully interferes in your activities, by deliberately frustrating a rational attempt that you are already making to achieve one of your permissible (indeed admirable) goals. But otherwise, if there’s nothing disrespectful or rude about my throwing the lifebelt (and I don’t owe you any apology or explanation for my doing so), what would be “absurd” about it? Saving people from drowning isn’t an “absurd thing for me to do”, even if someone else would do the saving if I didn’t.
    You’re right that agent-relative reasons, by their very nature, can conflict. If the case is perfectly symmetric, then there probably isn’t anything that I could say to persuade you to let me do the helping instead of you. Still, I shouldn’t be indifferent between your helping and my helping: I should prefer the outcome in which I do the helping (although of course I shouldn’t act on this preference in any way that is disrespectful towards you, e.g. by pushing you aside, as you suggested).
    Jussi — I think your point is answered by what I said in reply to Mark above.

  16. Suppose I can either play a role in bringing about a big good (but my role will be superfluous in bringing about that good–the good would occur without my help) or play a role in bringing about a smaller good (where the smaller good would only happen if I play that role). Does your view say that in some such cases I have most reason to join the already sufficient group which is creating the bigger good?

  17. Ralph,
    The only absurd part is your sprinting ahead of me to be the first to the life preserver. This strikes me as pointless, and so literally absurd. If I think of us as each ‘wanting to be the hero’, then it makes sense. But that doesn’t seem to me to be a moral motivation at all, and that I get to be the hero does not seem to be a moral reason.
    I think I might get much clearer on all this when I see the answer to David Sobel’s question.

  18. Ralph,
    I don’t find it plausible to suppose that it would be wrong for me to refrain from stepping up now to play a role in helping others if so refraining would enable me to more often step up (in like circumstances) to play a role in helping others. This intuition makes me think that the teleological formulation is giving an explanation of the deontic formulation. But I gather you must have different intuitions. Is that right?
    Regarding agent-relativity, I was asking what you take to be the distinction between agent-relative and agent-neutral reasons, not what you take to be the distinction between agent-relative and agent-neutral theories. The latter doesn’t, to my mind, obviously help me to understand the former. And, since you claimed that there are no agent-neutral reasons, I want to understand what you mean by that claim.

  19. Ralph,
    One way of distinguishing agent-neutral theories and agent relative theories is via same goal vs. different goal. A theory is consequentialist (in the old sense that I like) if it gives all of us the same goal, and non-consequentialist if it sometimes gives agents different goals. If we extend that to talk of agent-neutral vs. agent relative reasons, it looks like we might want to say that a reason to act is agent-neutral if (to the extent that it gives agents reasons to act) it gives them all reason to act in pursuit of the very same goal.
    If that’s right (and there may be an obvious problem with this way of thinking about it that I’m missing), then it looks like your thought is that even when it looks like agents are acting on an agent-neutral reason, they really are not because the ends of their actions are different. But I’m having trouble seeing how benevolence (say) would be best thought of in such a way that the fact that some person would be better off if they had food generates different goals for agents. And even if it did, I have even more trouble seeing that it would not in some good sense also generate the same goals for the agents. So even if Fred’s needing food generates the goal in me of my helping Fred to get food, it also seems to generate the goal that Fred get food (and not merely because the latter is a necessary constraint on my achieving the former).
    Is there something I’m missing?

  20. Mark –
    I don’t think trying to make the distinction in terms of goals is deontology-friendly. If we’re all teleologists, and think that what we ought to do is driven by certain goals, then yes, we can fret about whether we all get the same goal or some of us get different goals.
    You say that the relative duty would be given by the goal that I help Fred, and point out that there should still be the goal that Fred gets helped. If I’m Ralph, I’d resist the first claim. I have a duty to help Fred, not because of a further goal that is assigned to me, of Fred getting helped – that would be the teleological explanation – but rather because that is where the theory bottoms out, or because it follows from a principle, or because it follows from the balance of my reasons to act, or something like that.
    The teleological picture looks like it starts by ordering states of affairs or propositions – things like that I help Fred and that Fred gets helped – and then induces an ordering on actions, for me – things like helping Fred and making sure Fred gets helped. I took the ‘theoretical motivation’ behind Ralph’s suggestion to be that deontologists should not think of things in some hybrid way, but rather the other way around. We simply start with duties for actions, which we might think of as properties.
    On this picture, helping Fred, which is an action – the property that I have when I am helping Fred – is the object of a duty. It is the same duty for everyone – which is why I don’t like calling it ‘agent-relative’. It also fails McNaughton and Rawling’s test for agent-relativity, because their test requires that all duties be, at bottom, to make some proposition true, and that is what this picture denies. So it doesn’t satisfy any standard definition of agent-relativity. Nevertheless, when we think of things in this way, it’s easy to explain all of the familiar so-called ‘agent-relative’ phenomena: constraints, options, and so on.
    But still, along with helping Fred, making sure Fred is helped might also be a duty. And so might not interfering with someone helping Fred. I’m doubtful that once making sure Fred gets helped is acknowledged as a duty, Ralph will be able to construct any cases that can distinguish this view from the other.
    But the duties it postulates are all still ‘relative’ in that they take actions as their objects, which are understood as a kind of property, rather than beginning with goals or objectives which can be defined in terms of propositions or states of affairs. And the interesting thing about such ‘relative’ duties is that they allow us to account for constraints and options by appeal only to neutral obligations, in the original sense of the term – obligations that are the same for everyone.

  21. Thanks, Jamie. Right, that (what I said above) isn’t going to provide a defense of the agent-centered only view. It’s just meant to de-fang the kind of counterexample to that view that we were discussing.
    And, right, more would need to be said to flesh out the role for us to be completely satisfied.

  22. Thanks Mark, that helps. I’ve even argued in print that reconstructing deontology/non-consequentialism in the teleological way is not non-consequentialist friendly, so I should accept your point on that score.
    Still, here is what got me asking in that way about Ralph’s proposal. I think that it is relatively natural even within a non-consequentialist framework to ground some duties in the thought that doing an action of the type picked out by the duty would make things better – better from a perspective that is available to others who are not the agent. For example, that more people would be well off in such and such a situation can ground my duty to do an action which brings about that situation. Even if what that fact gives me a reason to do is an action of mine (and hence is relative to me), there seems to me to be a good sense in which the reason is or might be called agent-neutral. I was partly trying to figure out if Ralph was denying even that.

  23. David and Jamie:
    Although I should probably let Ralph answer for himself, I’d suggest the following example. Suppose I live in a very cold part of the world, and I am volunteering to take part in the building of homes for the homeless. I assume that this is a way I can play a role in bringing about a pretty important good. When I get there I realize that the house would be built even if I were to go away (and let’s assume that there’d be no significant delay, or any significant burdens to others). I also realize that my neighbour’s son left this plastic ball outside in the cold, and if I don’t put it back in for him it’ll be deflated, and it’ll end up costing my neighbour $1 to buy a new one. I could now go back home so as to arrive in time to save my neighbour’s kid’s ball, or help out building the houses (assume I live about an hour away from the construction site, so it makes no sense for me to get the ball and come back). It seems to me not terribly counterintuitive to say that I ought to stay and help build the houses. I am not sure that Ralph actually wants to say this about this case, and, of course, there are other ways to get the same result, but this might be a case in which the theory could have the implication that David suggests without doing too much violence to our intuitions (David didn’t say that it would but I sensed, perhaps wrongly I admit, a suspicion that this might not be a very desirable implication of Ralph’s view).

  24. I like Sergio’s case, and it’s not hard to grant its supposition. It sounds totally reasonable to me, and like it illustrates that the reason to help build the house does not derive merely from a reason to make sure that the house is built – though you may also have that reason.

  25. I agree with Sergio that there would be some reason to stay and help build the house, but I doubt that it has anything to do with being the cause of an important good’s being provided oneself. Surely there are other more plausible reasons in the neighborhood (if I can put it that way). First, it goes with the very idea of helping that you would be part of a team for this good, so you would be expressing solidarity with them, etc. You would also be helping THEM, so you would be easing their burdens (“many hands make light work”), raising their spirits, etc.

  26. I like Sergio’s example too. But I wonder if Ralph would want/need to say that one should join/stick with the group working for the bigger good even when working for the smaller good is no less onerous and so not plausibly seen as shirking hard work that someone needs to do.
    I also quite like Darwall’s reply to the example. If we could construct a case where team spirit is screened off, that would help isolate the wanted case. So perhaps let there be two buttons one will push in the privacy of one’s own home (and one must be quiet about which one one pushes). Supposing that we can arrange a case in which one knows both that the bigger good is already going to be created without one’s “contribution” and that the smaller good will not happen without one’s contribution, and in which the agent’s action nonetheless counts as part of the cause of the bigger good, I find it odd to think one ought to join in causing the bigger good.

  27. I’ve lost track of what Sergio’s example is supposed to show. I understand the reasons that Steve mentioned. I understand also a reason of fairness: Sergio has a fair share of house-building to do, which won’t be paid off by someone else doing it or by Sergio saving his neighbor’s ball.
    But how are these things related to Ralph’s idea that there are no agent neutral moral duties?

  28. Great discussion. Could we get clear on what is meant by agent-relative and agent-neutral duties? The question is also to Ralph – what did you mean by agent-relative in the post?
    Here’s one more suggestion that I quite like. It’s from Mike Ridge:
    “If the principle reflecting the reason makes an ineliminable (and non-trivial) back-reference to the person to whom the reason applies then the reason is a personal (agent-relative) one: otherwise it is impersonal (agent-neutral). For example, the principle that an agent has reason to maximise *her own* happiness is agent-relative, as is the principle that an agent must promote the welfare of *her* friends. On the other hand, the principle that one has reason to maximise happiness, and the principle that one should maximise friendship, are both agent-neutral.”
    This fits Ralph’s version of the agent-relative duty – that I put myself in the helping position – quite well. Maybe there are reasons that require this on some occasions. Sergio’s example seems to be one such case, even with Steve’s characterisation of the reasons.
    But I wonder whether this is always the case. Can’t there be cases where it is a matter of indifference whether I help or others do? I think I’ll be a pluralist about this and accept that, depending on the case, there are both types of duties of beneficence – agent-relative and agent-neutral. AU probably should agree with this. On occasion, it must be the case that *my* helping others, even when others could do the same, maximises happiness.

  29. Jamie,
    Isn’t this it: Sergio’s example shows that, contrary to what AC says, I should not be indifferent between helping and not helping, even though it would be very slightly better overall if I did not. The idea (I take it) is that I would have some special relationship to helping (set aside why) that is not explained by enhancing the total outcome of what I do.

  30. Suppose Sergio is right – screening off things like solidarity. The case suggests that the reason to help is weightier than the reason to ensure that help is given. But if that is right, then that would seem to suggest that the former can’t be wholly derivative from the latter.
    I’m still not clear on the exact content of Ralph’s positive claim; if we interpret it the way I suggested above, I don’t think it will lead to any normative differences in cases. But one thing seems clear: on the sorts of standard hybrid views that Ralph meant to be criticizing, I take it, the former reason is wholly derivative from the latter – you basically have reasons to ensure that good things happen, controlled by some constraints and some prerogatives. So the example looks relevant, to me, to testing that view.
    I agree that it’s hard to screen off where else the extra reason to stay and build might come from, though. I want to compare an ordinary decision whether to go to the polls on election day, rather than David’s button-pushing example, but that might bring in other complications.

  31. Ah! Robert beat me to the punch line. Jussi – there are multiple ways of precisely defining agent-relative reasons that have to do with ‘essentially pronominal back-reference’, and none of them is theory-neutral.
    Nagel’s way is this: first assume that all reasons are reasons to bring about some state of affairs. Then look for the weakest modally sufficient condition for it to be the case that X has a reason to do A – that is, the weakest condition such that necessarily, if X satisfies that condition, then X has a reason to do A. If that condition is a pure Cambridge property – being such that P, for some proposition P – then the reason is agent-neutral; otherwise it is agent-relative.
    McNaughton and Rawlings’ way is this: first assume that every moral theory is committed to being able to formulate its most basic requirements as principles of the following sort: for all agents x, there is a reason for x to bring it about that Fx. If Fx is open in x, then the reason is agent-relative, otherwise it is agent-neutral.
    Everyone talks about how agent-relative reasons have to do with pronominal back-reference, but few are clear on the fact that these are quite different distinctions. In Nagel’s distinction, the pronominal back-reference comes in the statement of the sufficient condition for the agent to have a reason, whereas in McNaughton and Rawling’s case, the back-reference comes in the statement of what the agent has a reason to bring about.
    I don’t think either of these definitions is that helpful, because neither is theory-neutral. Nagel’s requires us to assume that the only reasons there are, are reasons to bring about some state of affairs. If we don’t assume that, then we get a perfectly good distinction, but it has nothing in particular to do with constraints or options. McNaughton and Rawling’s requires us to assume both that the basic reasons are to bring about some state of affairs and that all reasons are derivative from reasons that are reasons for everyone. I don’t think either of those assumptions is true.
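    Schematically – and this is my notation, not Nagel’s or McNaughton and Rawling’s own, so it suppresses some details – the two distinctions locate the back-reference in different places:
    Nagel: \forall x\, \bigl( C(x) \rightarrow R(x,\ \text{bring it about that } p) \bigr), where the reason is agent-neutral iff the weakest such condition C(x) has the form “x is such that q” for some x-free proposition q, and agent-relative otherwise.
    McNaughton and Rawling: \forall x\; R(x,\ \text{bring it about that } Fx), where the reason is agent-relative iff x occurs free in Fx, and agent-neutral otherwise.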
    Basically, my own view is that when people talk about agent-relative reasons, the best way to understand them is as talking about the kinds of thing that create trouble for classical (agent-neutral) consequentialism. We all know what those are, so that gives us a sense that we’ve succeeded at giving a neutral characterization of what agent-relative reasons are. But in fact, I think we haven’t; we’ve merely succeeded at showing how one would characterize them if one went in for certain background assumptions.

  32. Mark,
    that’s good and interesting. I agree that Nagel’s and McNaughton and Rawling’s definitions seem problematic. But I’m not sure why Ridge’s account is in need of a more precise definition. I’d think that *ineliminable* back-reference is a rather precise test for agent-relativity. It seems like a good test to ask whether a given principle can be formulated in a way that does not require using pronouns – in defining the actions, the goals, or the source of reasons – that refer back to the person whose duty is in question. I’m not sure what theory this line assumes or why it doesn’t work as a test. But maybe I’m missing something.

  33. What a torrent of fantastic comments — thank you so much, guys!
    Here are some very brief and superficial responses to a few of those comments.
    1. I guess what’s bothering Jamie in his case must be this. He assumes that I don’t like sprinting, and that he won’t have to sprint quite so fast to reach the lifebelt in time; so if I sprint to get to the lifebelt first, I am wasting resources that would be more efficiently used if I let him throw the lifebelt instead. But then of course this isn’t a case where he and I are symmetrically related to the situation; and so it’s not exactly the same as my original case.
    Of course, if I have a reason to prefer that I am an active helper in the symmetrical case, it will also be plausible that I have a reason to prefer that I am an active helper in a slightly asymmetric case as well (such as Sergio’s case). I’ll comment on what to make of such asymmetric cases when responding to David’s point.
    2. I think that I may have misled Doug with my parenthetical remark that I wouldn’t regard the teleological formulation of the view that I was suggesting as giving an explanation of the deontic formulation. In my slightly idiosyncratic view, the word ‘ought’ is multiply context-sensitive and expresses many different concepts in different contexts, and the construction ‘S has a reason to ___’ is itself a weak sort of ‘ought’. So by ‘deontic’ I wasn’t referring to ‘moral wrongness’. I agree that since the duty of beneficence is an imperfect duty, it’s not “morally wrong” to refrain from helping every person whom one could possibly help, so long as one does one’s fair share of helping overall.
    3. Mark van Roojen raises an excellent problem for my definition of “agent-neutral reasons”. I will have to think about what exactly I ought to mean by my claim that there are no agent-neutral reasons for action. I guess that what I had in mind was something like this. Every reason for action involves a reason for pursuing some ultimate aim; and whenever one has a reason to do anything, the fact that grounds that reason grounds a reason to pursue an agent-relative ultimate aim. (An agent-relative aim is, roughly, an aim that the agent would naturally specify using the first-person pronoun.) So even if the fact that grounds the reason grounds a reason for pursuing an agent-neutral ultimate aim, the reason in question will be agent-relative if this fact also grounds a reason for pursuing an agent-relative ultimate aim.
    4. Mark Schroeder is completely right that the exact way in which one distinguishes between “agent-neutral” and “agent-relative” will itself depend on one’s other theoretical commitments; and the way in which I draw the distinction depends on a number of views that he rejects — such as my view that ‘ought’ is fundamentally a propositional operator, and my views about how the concepts that can be expressed by ‘reasons’ are related to the concepts that are expressed by ‘ought’. Since Mark and I are on opposite sides of those debates, I don’t expect him to like my account of the “agent-neutral” / “agent-relative” distinction!
    5. David raises a crucial question. At the risk of being cryptic, I’ll tell you about the view that I’m currently trying to explore. According to this view, there are actually two sorts of reasons to help. Somewhat tendentiously, I’ll label these reasons the “moral” reason and the “pre-moral” reason.
    The pre-moral reason is simply a reason to play an active role in helping people, because helping people is an intrinsically worthwhile thing to do. The moral reason is a reason to contribute towards creating the best collective activity of helping that one can, and then to shoulder one’s fair share of the burdens of that collective activity. Whether a collective activity of helping counts as the “best” is not an agent-neutral matter, but is in part determined by the number of participants in the collective activity and those participants’ pre-moral reasons. In particular, efficiency at advancing the aims that correspond to these participants’ pre-moral reasons is one crucial determinant of whether or not this collective activity counts as “best”.
    In David’s case, I’d say that, given that the large good will be created anyway whether one pushes the button or not, we have a better pattern of collective activity if you push the other button to create the smaller good. So one has a moral reason to create the smaller good rather than the larger one. (Thus, in the end, I agree with Steve’s interpretation of Sergio’s case.)

  34. I find this:
    “even if the fact that grounds the reason grounds a reason for pursuing an agent-neutral ultimate aim, the reason in question will be agent-relative if this fact also grounds a reason for pursuing an agent-relative ultimate aim.”
    quite strange.
    Take the Mona Lisa’s smile. That grounds a reason for pursuing an agent-neutral ultimate aim: that the painting is preserved for future generations. It also grounds a reason for me to pursue the agent-relative aim of *my* seeing the painting. Now, on your criterion, the Mona Lisa’s smile would therefore be an agent-relative reason. Does that mean an agent-relative reason tout court? That sounds odd. I’m not sure what a reason tout court would be.
    I mean, it does look like a good agent-neutral reason to preserve the painting, and maybe an agent-relative reason for me to see it (I’m not even sure about this). But why say that it is one or the other on the whole? I also don’t think that in this case the reason there is for ensuring the preservation of the painting implies that I play a certain active role in preserving the painting.

  35. Jussi — You’re right: My response to Mark van Roojen seems too half-baked. Oh well….
    Perhaps what I need to say is that we have an agent-relative reason if the relevant fact grounds a reason to pursue an agent-neutral ultimate aim only because it grounds a reason to pursue an agent-relative ultimate aim as well? (There are lots of promissory notes here about what I mean by “grounding a reason” and by an “ultimate aim”.)
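    Schematically, and with all those promissory notes still outstanding: let G(f, a) say that fact f grounds a reason to pursue ultimate aim a, and let a_N and a_R stand for agent-neutral and agent-relative ultimate aims respectively. My first proposal was: the reason grounded by f is agent-relative whenever G(f, a_R) holds, even if G(f, a_N) holds as well. The revised proposal is: the reason is agent-relative iff, whenever G(f, a_N) holds, it holds only because G(f, a_R) holds.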

  36. I am sorry if this is, strictly speaking, off topic after Ralph’s post. Robert and Mark already said most of what I want to say, but here are a couple of further thoughts. My example was just meant to challenge the idea that a theory is counterintuitive insofar as it implies that in some cases people ought to help in bringing about a greater good that is going to be brought about anyway, rather than bringing about a smaller good that will not otherwise be brought about. As I pointed out in the post, there might be other ways to account for the case (I was thinking of something along the lines of Steve’s account of the example; I also agree with Steve that “being the cause of an important good’s being provided oneself” is an implausible description of the reason I have to help in that case or, for that matter, of any reason to help in any case. But I think one can accept the implication without accepting this way of describing the reason.), and so the example won’t show that the implication is unavoidable. But I agree with David that we should ask whether the implication will strike us as odd in every example in which we screen off team spirit. Unfortunately, I agree with Mark that it is not clear that one can screen off every other possible reason.
    Now, first, if in my original example I had just heard on the radio that a certain group was going to build houses for the homeless, and I decided to join in, I am not sure there are reasons of fairness or team spirit to help building the house, but I don’t think this changes the original intuition in the case (though again, I am not denying that there might still be other ways to account for the example). I am not sure what to say in David’s example. On some ways of filling in the example, I am simply not doing anything when I press the button to “help” bring about the greater good. When I try to fill in the example in a credible form, I don’t have the intuition that the implication is odd, but I am not sure I have in my hands a really “pure” example.
    Say, for instance, a rich person will make an enormous donation to Oxfam if at least 1,000 people press the button “help” on his website. Suppose I go to the website and there is no doubt in my mind that more than 1,000 people will press the button. At the same time, I see at the corner of my computer screen my program that shows random web cams all over the world. Let’s assume that if I press a button in the program, it sends an instant message to the owner of the web cam alerting him to check the web cam. And suppose that I see a ball in the same situation as in the original example, with the homeowner on the third floor surfing the net, oblivious to the fate of the ball. Finally, I realize that the battery in my laptop is running out and I can only click the mouse once before it shuts itself off. It’s not clear to me that in this case I ought to press the web cam button rather than the “help” button.

  37. That would be one way of doing it. But the consequences for your original thesis seem difficult. You wanted to argue that the duty of beneficence is agent-relative. I take it that in that case the ultimate agent-neutral aim is that there is less suffering in the world, and the suffering of others is a reason for adopting this aim.
    If your new definition of agent-relativity holds, then the existence of this agent-neutral duty (and reason) would be conditional on the fact that the suffering of others grounds a reason for me to adopt an agent-relative aim: that I put myself into a certain active position within the project of helping others. Somehow that seems like putting the cart before the horse. It seems like there is a reason for suffering to be alleviated first, and maybe then also a derivative duty for me to play a special role in this. But that has to come second.

  38. Just to report my intuitions about Sergio’s new (and quite delightful) case: I think that he should click on the web cam button rather than the “help” button.
    On Jussi’s point: I would agree that there is an intrinsic disvalue in suffering, and this sort of disvalue gives everyone a reason to hope that the suffering is alleviated; and on my own view, such intrinsic values and disvalues are what ground the reasons for action (since reasons for action are reasons for the agent to put herself in the right sort of relationship with these intrinsic values and disvalues). But a hope is not an aim: an aim is the content of an intention (or at least a tentative intention). On the view that I’m suggesting, I have a reason to have the aim that suffering is alleviated only because I have a reason for the aim that I actively contribute towards the alleviation of suffering.

  39. I also think Sergio should click the webcam button.
    I’ll go further: I don’t see what reason he has to push the help button.
    Remove the certainty that at least a thousand people will push their help buttons and you change the situation, though. In that case it seems to me that he should push the help button. (Well, of course, he might be certain that fewer than 500 people will push their help buttons, in which case I again think he has no reason to push his.)

  40. Maybe David’s example can be rephrased like this. Suppose we have a choice of worlds. On one world (World-the-first), we get a big good. On another world (World-the-second) we get a big good, plus a small good. What world would you pick?
    Picking these worlds is just what we really do do in the case that David has raised. We can make the world be either like world-the-first or like world-the-second. It is up to us.
    This is what the example reduces to, it seems to me. And it also seems to me to be obvious that we should pick world-the-second. World-the-second, of course, just is the world where we go get the ball out of the ice, and do nothing about the housing project and it gets built anyways.

  41. This:
    ‘I have a reason to have the aim that suffering is alleviated only because I have a reason for the aim that I actively contribute towards the alleviation of suffering.’
    is very clarifying. It’s the ‘only because’ here that I cannot get myself to accept. To me, the suffering of others in itself seems to be, non-conditionally, a sufficient reason to adopt the agent-neutral aim. It could be that it also gives me a reason to actively contribute, but that doesn’t seem to be here or there with regard to adopting the general aim. There is something fetishistic about the ‘only because’ here.

  42. Great discussion, Ralph. Excellent start.
    Jussi – I think that Ridge’s definition, at least as you’ve stated it, is imprecise, because I think it accurately describes both the Nagel and the McNaughton/Rawling distinctions. Both are about ineliminable pronominal back-reference to the agent, but they differ with respect to where this ineliminable back-reference has to occur – for Nagel it is in the sufficient condition (which he claims is the reason), whereas for McNaughton and Rawling it is in what the reason is a reason to make the case. Since they’re not the same view, and your description of Ridge’s distinction describes both of them, I take it that Ridge’s distinction as you describe it is imprecise.
    As I said before and Ralph repeated, in order to distinguish between agent-relative and agent-neutral reasons by appeal to pronominal back-reference, we need to make certain background assumptions, in order to have canonical statements of the reasons that will have such back-reference in the right places.
    Suppose that I don’t have such a view. Suppose, for example, that I think that reasons are considerations (which I take to be facts) which count in favor of actions (which I take to be properties that agents can have). Among the reasons, I might hold on this view, is the fact that murder is wrong. That, I think, counts in favor of the following action: not murdering. Not murdering, on this view, is a property. It is a property that you have whenever you are not murdering.
    This view is only schematic, but it appears to allow for agent-relative reasons that are described in ways that make no pronominal back-reference to the agent. The reason itself makes no such reference, because it is just the fact that murder is wrong, which makes reference to no agent. And the thing the reason is in favor of makes no such reference, because it is just a property: the property of not murdering. Nothing in the description of this property makes reference to the agent (though of course, it is the kind of property that is had by an agent – but on this view of reasons, all reasons are in favor of such things). So it looks like a view on which agent-relative reasons can be formulated without pronominal back-reference to the agent.
    The moral is, whether a formulation or description of some reason has certain features depends on what we take to count as a formulation or description of the reason. And that depends on more than bookkeeping; it depends in part on substantive questions about the relata of the reason relation.

  43. Mark,
    thanks, that’s helpful. I’m not sure why a more general distinction would be more imprecise. I thought that whether the back-reference is made anywhere would be a precise test, even when you don’t take a stand on where it is made. I’m not sure how that introduces vagueness. But never mind – I catch the drift.
    I’m also not sure why we should think that the fact that murder is wrong is an agent-relative reason not to murder.

  44. Mark, I don’t think I get your idea. On your suggestion, surely the agent-relative/ agent-neutral distinction comes back in as the distinction between a reason for acting so that the property not-murdering is had by *as many agents as possible*, or *agents in general*, and a reason for acting so that it is had by *this* agent? So we do still need the pronoun (or some sort of indexical, at least) to specify the A-R reason.
    Ralph (and everybody), I struggle with the whole idea of an A-N/ A-R distinction anyway. I think reasons for action are ways of justifying action, and that justifications have to be general in form to be intelligible at all. “Because she’s my wife” is, on its own, no more intelligible than “Because I’m me”. What makes it intelligible is the continuation that Williams rejects: “and in situations like this, it is permissible for any agent to rescue his wife”. (Slot in the criterion of rightness/ decision procedure distinction as per usual.) Similarly, what (normally) rules out “Because I’m me” is the continuation we can most naturally add: “and in situations like this, it is permissible for me to prioritise myself”.
    I think this requirement of generality holds for moral reasons because I think it holds for reasons in general. Reasons for belief, whether that means reasons about evidence or reasons about inference, have to be general too: “p because it looks like p to me” doesn’t state an intelligible reason for asserting p unless the background allows us reasonably to spell the reason out with “…and in situations like this there is epistemic permission for any observer to accept his/ her appearances”.
    So moral justifications can’t, any more than any other sorts of justifications, make basic and ineliminable reference to particular agents. So there can’t be a *deep* A-R/ A-N distinction for moral justifying reasons. The most there can be is justifying reasons with this sort of reflexive pronoun in them: “For any agent, that someone else is *his* wife is a reason for *him* to treat *her* as special”. But those are not ineliminable pronouns in (what I take to be) the required sense; they’re bound variables.
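    To put the same point quasi-formally, in a notation I am making up for the occasion: \forall x\, \forall y\, \bigl( \text{y is x’s wife} \rightarrow R(x,\ \text{treat y as special}) \bigr). The “his” and “her” in my gloss above just mark the bound variables x and y; no particular agent is named, so nothing here makes the sort of ineliminable reference to a particular agent that a deep A-R/ A-N distinction would require.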

  45. Jussi – I didn’t say that the Ridge definition as reported by you was vague; I said it was imprecise, and supported that claim by pointing out that it accurately describes each of two inconsistent ways of making the distinction.
    In any case, it’s neither necessary nor sufficient that ‘pronominal back-reference’ occur just anywhere. The view I offered shows that it is not necessary. On that view, there is a reason for everyone to not murder – which is ‘agent-relative’ in the sense that it results in a constraint, assuming that it is a weightier reason than the reason to prevent murders, and constraints are paradigmatic of ‘agent-relative’ phenomena. But there is no ‘pronominal back-reference’ anywhere.
    It’s also not sufficient. Suppose that basic reasons take propositions, but need not be reasons for everyone. And suppose, further, that necessarily, it is sufficient for any agent, X, to have a basic reason to make it the case that X does A, for arbitrary A, that X can bring it about that P by doing A, and that no weaker condition is such that necessarily, any X who satisfies that condition has a reason to make it the case that X does A. And suppose, finally, that Jon can bring it about that P by not murdering. It follows that Jon has a basic reason not to murder. But this reason of Jon’s not to murder is not agent-relative, in the appropriate sense, because he only has it because given his circumstances, that is the way for him to bring it about that P. Yet there is pronominal back-reference all over the place.
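    In schematic form (again my notation, compressing the assumptions just stated): \Box\, \forall X\, \forall A\, \bigl( \text{X can bring it about that } P \text{ by doing } A \rightarrow R(X,\ \text{X does A}) \bigr), with no weaker condition doing the same work. Instantiating X = Jon and A = not murdering yields Jon’s basic reason. Back-reference to X appears both in the sufficient condition and in the content to be made true, yet the reason is not agent-relative in the appropriate sense, since it derives entirely from the agent-independent proposition P.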
    In fact, the problem is precisely that there is too much pronominal back-reference. Allowing for it in the specification of the proposition to be made true undermines the ability of its presence in the sufficient condition to distinguish between cases that give rise to difficulties for consequentialism and those that do not. That is why, in order to make the distinction, Nagel specifically assumed that the basic reasons can only be in favor of propositions, and understood them only in ways that don’t allow that they might be open in the agent-variable. (In fact, if I remember correctly, it is basically this objection that McNaughton and Rawling raise to Nagel’s definition in the process of motivating their own – ignoring the fact that he is making this assumption.) But on the other side, allowing for back-reference in the sufficient conditions for basic reasons undermines McNaughton and Rawling’s definition, too. That’s why they have to assume that no theory will allow basic reasons that are reasons only for some people. They assume that the basic reasons will have to be reasons for everyone, and necessarily so.
    I’d encourage you to spend some time studying how Nagel’s definition works and thinking about why he does it that way. Few works in moral philosophy in recent decades repay study more than The Possibility of Altruism. McNaughton and Rawling’s original paper is also enlightening. And I’d encourage looking at two of my papers, as well: ‘Reasons and Agent-Neutrality’ and ‘Teleology, Agent-Relative Value, and ‘Good’.’
    Tim – I didn’t mean to say that there was no agent-relative/agent-neutral distinction on the way of thinking about reasons that I articulated; just that it isn’t tracked by any of the standard definitions. I do think that each of your ways of re-describing the view that I articulated is a way that I would reject. The view isn’t teleological, so it denies that the reason is a reason to act ‘so that’ anything is the case. It is just a reason in favor of an action.
    Also, you’re right: in both of the precise definitions of the agent-relative/agent-neutral distinction, there is no ‘ineliminable reference to particular agents’. There is essentially only a bound variable in a general statement about reasons.
    Finally, there are two importantly different ways of understanding your claim about generality. One way that justifications might always have to be general is that nothing can be a reason for someone to act unless that person is in some circumstances, such that there is a universal generalization to the effect that anyone in those circumstances has that reason to do that thing. This is, from my point of view, a relatively innocuous claim about generality. But another way that justifications might always have to be general is that you might think that nothing can be a reason for someone to act unless it derives from a reason for anyone to act, together with circumstances that determine how they, in particular, are to do it. The latter idea is much stronger, and I think there are lots of reasons to reject it. McNaughton and Rawling need it in order to state their version of the distinction.

  46. Thanks, Mark, that’s interesting and helpful. Brief responses:
    1) Your reasons in favour of action, which aren’t “so that” anything (non-teleological reasons). Do you think all reasons are like this? Surely some reasons are “so that” something (teleological)?
    2) 1)’s teleological/ non-teleological distinction about reasons surely cross-cuts the A-R/ A-N distinction: non-teleological reasons can be either A-R or A-N, and so can teleological reasons (if there are any). Yes?
    3) You say: “in both of the precise definitions of the agent-relative/agent-neutral distinction, there is no ‘ineliminable reference to particular agents’. There is essentially only a bound variable in a general statement about reasons.” I reply: Maybe you don’t dispute this, but I thought some people did. I thought the idea in a lot of A-R theorists’ minds (probably including Ralph’s, though I haven’t had time to read the whole of this thread to be absolutely sure of that – I’m just getting this from his opening remarks) was that there *were* reasons which made ineliminable reference to particular agents. Or, more radically – and again, this seems to me to be Ralph’s view – there’s the idea that *all* reasons make such ineliminable reference. That’s the idea I can’t make sense of.
    4) You say: “another way that justifications might always have to be general, is that you might think that nothing can be a reason for someone to act unless it derives from a reason for anyone to act, together with circumstances that determine how they, in particular, are to do it. The latter idea is much stronger, and I think there are lots of reasons to reject it.”
    I agree, provided we’re talking about the same phenomenon. As I read your remarks, firefighters are a counterexample to the thesis that we need this sort of generality. That there is a fire gives a firefighter reasons to act that don’t “derive from a reason for anyone to act”. The firefighter’s reasons to deploy his special expertise now derive from reasons that *any firefighter* has, not reasons that *any person* has.
    Of course, the last-quoted phrase is ambiguous, because it depends what we mean by “derive”: of course we could try to derive the idea that society should have firefighters from “reasons that anyone has”, but that would be a different sort of job.
    If Rawling and McNaughton need to deny this sort of point about firefighters to state their distinction, then they certainly are in trouble… but I’m not sure they do.

  47. Mark,
thanks. About the necessity side – when you first said that the fact that murder is wrong is a reason not to murder, you said nothing about constraints. That is a substantial addition. The consequentialist is bound to say that the reason not to murder and the reason to prevent murders are the same reason – that it is wrong, i.e., not utility-maximising. If you accept the constraint, then I take it that the back-reference creeps in – you ought to make sure that *you* don’t murder, even if your doing so means that others murder more.
About the sufficiency side, here:
    “And suppose, further, that necessarily, it is sufficient for any agent, X, to have a basic reason to make it the case that X does A, for arbitrary A, that X can bring it about that P by doing A, and that no weaker condition is such that necessarily, any X who satisfies that condition has a reason to make it the case that X does A. And suppose, finally, that Jon can bring it about that P by not murdering. It follows that Jon has a basic reason not to murder. But this reason of Jon’s not to murder is not agent-relative, in the appropriate sense, because he only has it because given his circumstances, that is the way for him to bring it about that P.”
I find this slightly too concise. I’m not sure where the non-trivial, ineliminable back-reference happens. Say P is ‘that general happiness is maximised’ (it could be anything else that doesn’t mention Jon) and Jon can bring this about by not murdering. It seems like you don’t have to refer back to Jon when you say that he has a reason not to murder because general happiness is maximised as a result. It’s true that he has this reason because of the circumstances he is in, but in describing the circumstances we don’t need to refer to him. As the back-references can be eliminated in the schema, don’t we get the correct result that the reason is not agent-relative?
I am a big fan of The Possibility of Altruism too. It has been a while since I’ve read Piers’s and David’s paper, but I will get back to it.

  48. Tim –
    1) Yes, on the view I was articulating but not defending, the reason relation holds between considerations (facts) and actions (properties) – not things that can be the case.
    2) Yes, the distinctions cross-cut. I was just explaining that there is no standard way of even making the AR/AN distinction that is theory-neutral.
3) My point all along was that I think we can make sense of Ralph’s idea without going in for any of these ways of making the distinction. Since there is no single way of making it that everyone could agree on, we have to: think of agent-relative phenomena as just those which are inconsistent with classical consequentialism in virtue of the fact that it appeals to a single ordering on states of affairs – the better-than ordering – in order to explain what people ought to do. Then Ralph’s suggestion is that we understand positive duties to be that way, too – to have a structure that is inconsistent with classical consequentialism in virtue of its appeal to a single ordering, etc. We don’t need anything else to make sense of it, and you’ll notice if you read through the thread that, except for my explanations to Jussi, pretty much everyone else has been taking more-or-less this as their criterion for what would be agent-relative or not.
4) Think of the ‘another way’ like this. Maybe firemen have a reason to hang out at the firehouse because that is their job, and everyone has a reason to do their job. That same reason is where my reason to show up for my department meeting this morning comes from – it’s my job. If I understand them correctly, McNaughton and Rawling need to think that things work like this, because it is in the formulation of these most basic reasons that we have to look for agent-variables, in order to see whether they are agent-relative or agent-neutral. If you don’t make this assumption, then you get the problem I explained to Jussi in my last comment. If you’re curious about where this view of generality might come from or what motivates it, you should read my paper, ‘Cudworth and Normative Explanations’. It’s an old idea that Cudworth, Clarke, and Price took to be very important, which mattered to Prichard, and which drives lots of interesting arguments, including Korsgaard’s argument against ‘voluntarist’ theories in Sources. So if you think it’s false (and I agree), that’s a problem for lots of interesting views.

  49. Hello again Mark:
    Suppose I deny (as indeed I do) that there is any such thing as a “a single ordering on states of affairs – the better than ordering – which can be appealed to in order to explain what people ought to do.” You’re saying that that denial alone gets me straight to an agent-relative conception of value and our reasons?
    I don’t see at all why that should be so. Suppose I deny the consequentialist ordering because I accept a different ordering: I say that there are actions which are forbidden, actions which are compulsory, and actions which are optional. (As it happens, this is what I say, in various publications…) None of this commits me one way or the other about the agent-relative vs. the agent-neutral. How could it?
One reason why I’m a wet blanket about A-R vs A-N is that I think it’s a red herring (mixed metaphors – donchaluvem?). I don’t think the A-R/A-N distinction is the consequentialist/non-consequentialist distinction. The latter distinction is about whether we should just promote values, or also respect them (where respecting is not, as Pettit thinks, an agent-relative notion).
    If you want a reference for that, see my “A Way Out of Pettit’s Dilemma”, PQ 2001…

50. Jussi – On the necessity side, you’re misinterpreting the view I suggested. The view is not that the fact that murder is wrong is a reason in favor of its being the case that I not murder. The view is that the fact that murder is wrong is a reason in favor of not murdering. You get a constraint if this reason is significantly weightier than the reason in favor of preventing a murder. Why? Well, what should I do if I have a choice between murdering and allowing two murders to happen? In favor of murdering, there is the reason to prevent murders. But against murdering, there is the reason not to murder. And that reason is weightier, so all things considered it is wrong to murder. So that’s a constraint.
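Just to make the weighing vivid – the numbers here are purely illustrative, of course – suppose the reason not to murder has weight 10, while the reason to prevent a murder has weight 3 per murder prevented. Then in a choice between committing one murder and allowing two, the case for murdering weighs 6 and the case against weighs 10, so murdering comes out wrong even though it would minimize the number of murders. That asymmetry is the constraint-like structure.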
On the sufficiency side: as I said, you can make up different stories about what a canonical statement of a reason looks like. All I did here was to incorporate the relevant assumptions from both Nagel and McNaughton and Rawling that allow agent-variables to show up in the places they need. So it was part of my stipulation that the view in question thinks about reasons in part precisely the way Nagel does: the canonical statement has to give the complete modally sufficient condition for it to be the case that someone has a reason to do that thing. So this condition is not just ‘that general happiness is maximized’, because there are worlds in which general happiness is maximized but Jon does not have a reason to do this thing (choose a different action if it makes it easier to imagine). So to get a modally sufficient condition, you need the condition, ‘by doing A, Jon can ensure that general happiness is maximized’.
Anyway – and this is just an aside – there aren’t global, agent-neutral facts of the form ‘general happiness is maximized’. There could always be more happiness, if you just imagine that the world is a little bit bigger. ‘That happiness is maximized’ is surely just short for ‘that of the options available to Jon, he takes the one with the highest prospect of happiness’, or something like that. Which, again, has an agent-variable in it.
And finally – still on sufficiency – eliminating the agent-variable from the sufficient condition is not enough, because the way I formulated the reason, there is an agent-variable in the thing he has a reason to make the case, as well – my point was that agent-variables appearing in both of the places that Nagel and McNaughton and Rawling allow for undermines each of their definitions. So there is no single definition that subsumes them both.

51. Tim – I didn’t say that you had to believe that there was a single ordering on states of affairs, etc. Nor did I say that denying that gave you an agent-relative conception of reasons. All I said is that there are certain kinds of thing that are inconsistent with classical consequentialism precisely in virtue of its appeal to such a single ordering. Call them K-phenomena, just to have a neutral name for them.
And then I pointed out that there is no single definition of agent-relative reasons that is theory-neutral, but that each definition that has been offered has been designed to capture K-phenomena. So I offered a hypothesis: when people talk about agent-relative reasons without saying what they think, or by saying what they think only in imprecise ways, we should interpret them as merely trying to talk about K-phenomena. We need to go in for some further theory before there is going to be anything further that we can say about what those phenomena amount to.
Now, if you think that you believe in K-phenomena but not agent-relative reasons, then it looks to me like you’re interpreting agent-relative reasons in some more robust way than is necessary in order to make sense of Ralph’s suggestion.

  52. Well, Ralph’s original suggestion was that moral “reasons for action are always reasons for the agent to put herself into the right sort of relationship with the intrinsic values that are at stake in her situation”. There are ways of reading that that make it essentially dependent on agent-relativity, defined in various ways, and other ways that don’t.
    I agree with you that we don’t need a “robust” notion of agent-relativity in order to make sense both of Ralph’s suggestion, and of the K-phenomena. But (it seems to me) that isn’t because a weak notion of agent-relativity does the job better. It’s because the notion that does the job is the one I talked about above, on which the agent-relativity is all of the form “For any agent, if someone else is his wife, then…”. And this, though I don’t wish to dispute about words, doesn’t seem to me like real agent-relativity at all.
Nor, apparently, does it seem that way to Ralph, since he surely does want a robust notion of the A-R.
    You seem to be playing simultaneous chess with me and Jussi… 🙂

  53. Tim – I lost track of where we stand. Ralph was explicit that he meant himself to be interpreted as having proposed something agent-relative:

    So I am drawn towards a more thoroughgoing nonconsequentialism, according to which absolutely no moral duties – indeed, absolutely no reasons for action at all – are agent-neutral in the way that AC thinks of our moral duties as being. On this approach, then, the moral duty to help those in need would not just consist in morality’s giving me the aim that those who are in need are helped. It would consist in morality’s giving me the aim that I play a role in helping those who are in need. In that sense, this approach makes this duty agent-relative.

As for Ralph, he agreed with me (February 14, 2007 at 09:48 AM, point #4) that we have to go in for certain background assumptions in order to succeed at making the AR/AN distinction, and then said which ones he is willing to make. Perhaps it will be clearer to me what our disagreement comes to if you can tell me precisely what it is that you’re skeptical about.

  54. Hi guys,
Just to clarify, I believe Jussi was quoting a passage from my paper on Scanlon, and in that particular paper the details of how the agent-relative/agent-neutral distinction is glossed didn’t seem crucial, so I may have been a bit imprecise there. I agree with Mark that for Nagel it’s crucial that the pronominal back-reference occur in the statement of the sufficient condition for the reason as given in a suitable normative principle. Anyway, for my more considered and detailed discussion of the distinction itself, see my entry in the Stanford Encyclopedia. I probably should add a bit more in the way of discussion of McNaughton & Rawling’s take when I next revise that, actually.
    – Mike

  55. Mark:
I don’t know whether we’re disagreeing; I’m not committed to saying we are… All I’m saying is that (pace Ralph) I don’t think the A-R/A-N distinction is the key to formulating a plausible non-consequentialism. Something like that distinction, in a dilute form, might turn up in, or as a corollary of, the formulation of NC that I’d prefer. (Cp. Ralph’s words “reasons for action are always reasons for the agent to put *herself* into the right sort of relationship” etc.) But no *robust* form of the distinction, such as Ralph favours, seems to me crucial to a plausible NC. In my view the real action, in getting NC off the ground, has to do with opposing (as I see Ralph does, interestingly enough) the consequentialist doctrine that what you do with goods is promote them, and only promote them. The key to NC is the thesis that what you do with goods is (mandatorily) respect them, and (optionally) promote them as well as respecting them. So what counts is the arguments we come up with to defend and develop that thesis. Next to this issue, the A-R/A-N literature just seems to me a distraction. Which is not of course to say that it doesn’t have a lot of interesting lessons for other issues.

  56. Mike,
    thanks very much. That Stanford entry is very helpful. Good work. I do like this quote from later Nagel:
    “If a reason can be given a general form which does not include an essential reference to the person who has it, it is an agent-neutral reason…If on the other hand, the general form of a reason does include an essential reference to the person who has it then it is an agent-relative reason. (Nagel 1986: 152-153)”
    The similarity between it and Pettit’s formulation:
    “An agent-relative reason is one that cannot be fully specified without pronominal back-reference to the person for whom it is a reason. It is the sort of reason provided for an agent by the observation that he promised to perform the action in prospect, or that the action is in his interest, or that it is to the advantage of his children. In each case, the motivating consideration involves essential reference to him or his.…An agent-neutral reason is one that can be fully specified without such an indexical device. (Pettit 1987: 75)”
is striking, even though Nagel does refer to the general form of the reason, which brings in the worries about the principles.
Maybe these characterisations, like the one I took from your Scanlon paper, are too imprecise. But they do seem to capture the same basic idea. And there does not seem to be that much disagreement anyway about which cases involve agent-relative reasons and which agent-neutral ones. So there seems to be a shared understanding of the difference, technicalities aside.
    Mark,
thanks too. A couple of small points. On the necessity side – don’t you need a reason in favour of *me* *not murdering*, and not just in favour of not murdering in general?
The sufficiency side is interesting. If there are no agent-neutral facts of the form ‘happiness is maximised’ (that seems right), don’t consequentialists too end up giving agent-relative accounts? In specifying the right action, they too seem to be forced to make an ineliminable reference to the agent.
By the way, are there things that could be modally sufficient conditions for reasons? I have a hard time coming up with any.
If the remaining reference is in the thing he has to make the case, i.e., that X does A, that seems eliminable too. Can’t we just say that it is sufficient for X to do A that… and so on? I’m not sure what work the ‘make it the case that X does A’ formulation does.

  57. Jussi – First, no. You weren’t paying attention. On the view I articulated, reasons don’t count in favor of propositions, or of states of affairs. On the view I articulated, reasons count in favor of actions. Actions are things like murdering and not murdering, as in ‘of course there is a reason not to murder’, and as I indicated, this view conceives of actions as properties of agents. *Me* not murdering is not a property. Ipso facto, the reason doesn’t need to count in favor of *me* not murdering.
If you’re having trouble imagining how this view could work, the view I call the Subsumption Account in ‘Reasons and Agent-Neutrality’ is about the simplest form this view might take, and that paper addresses some of the questions you may have about it. I guarantee you that if you sit down and think about why Nagel had to assume that the only reasons there are are reasons to promote P, for some P, in order to get his technical distinction to track the issues that he wanted it to track, you will see why ceasing to make that assumption will lead to the technical distinction failing to track the same issues.
    And second, no again. On no precise definition of the agent-relative/agent-neutral distinction, given the background assumptions made by the person making the distinction, does it turn out that consequentialists believe in agent-relative reasons. What I’ve been saying all along is that it is a condition of adequacy of how people ordinarily use those words that classical consequentialism turn out to only allow for agent-neutral reasons.
Oh – and third, there had better be modally sufficient conditions on reasons, in order for Nagel’s distinction to work. On Nagel’s view, reasons just are modally sufficient conditions for having reasons – properties such that, necessarily, someone who has that property has a reason to promote P. If you don’t think there are such conditions, don’t complain to me – I’m just telling you what you have to assume in order to make the distinction Nagel’s way, and my whole point was that this requires making substantive and rejectable assumptions.
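Very roughly, and just to have the shape of the machinery on the table:
F is a basic reason iff: necessarily, anyone who has the property F has a reason to promote P.
Agent-relativity is then a matter of whether P can be fully specified without the agent-variable occurring free in it. If P is open in the agent-variable – ‘that one’s own children are cared for’, say – the reason comes out agent-relative; if not, agent-neutral.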

  58. “it is a condition of adequacy of how people ordinarily use those words that classical consequentialism turn out to only allow for agent-neutral reasons.”
    Which words? “Classical consequentialism”, or “agent-neutral reasons”?
Philip Pettit argues in PQ 2000 that (1) consequentialism can be about promoting agent-relative values, and (2) that in fact all people can mean by “non-consequentialism” is this form of consequentialism. If I remember rightly, Jennie Louise argues a similar line in PQ 2004. I don’t accept (2), for reasons I’ve given already (and spell out in PQ 2001). But Pettit and Louise surely make a good case for (1). So I wouldn’t rate the prospects for the claim that *any* consequentialism would have to be about agent-neutral reasons only.

  59. Mark,
thanks, I wasn’t complaining to you at all. Mike does a good job of explaining why Nagel’s view is problematic because of the modally sufficient conditions, and I realise that that was your point.
I’m still slightly wondering how constraints can be articulated without ineliminable back-references to agents. Seems like you can if actions are properties of agents. I missed that bit as it is quite odd and I’m not sure it was mentioned. There is, though, another reference when we say that this reason is stronger than the reason *the agent* has for preventing murder. Maybe you can do without that too.

  60. Tim – Notice that I’ve been careful to say ‘classical consequentialism’ at each point, specifically to avoid being construed as having said that agent-relative teleology, as Ralph and I would both prefer to call it, is an agent-neutral theory – which by definition it is not. This point came up earlier in the thread when Jussi misinterpreted Ralph in his first comment.
Philip Pettit and Jennie Louise are not the only ones to have made your claim (1); it has been discussed and defended over many years by Sen, JLA Garcia, Broome, Kagan, Hurka, Smith, Dreier, Doug Portmore, and Campbell Brown, among others. Brown and Louise helpfully dub your thesis (2) ‘Dreier’s Conjecture’, because it was suggested by Jamie in his article in The Monist in 1993.
    If you read the January issue of Ethics, you’ll also see that I’m not a fan of this kind of view, or of the idea that there is such a thing as agent-relative value, in the first place.

  61. Jussi – ?? You say:

    Seems like you can if actions are properties of agents. I missed that bit as it is quite odd and I’m not sure it was mentioned.

    I count four times that I mentioned it in this thread.
    In any case, I don’t think that it is at all odd to think that reasons count in favor of actions, and I don’t think that it is odd to think that actions are properties of agents. What do Fran and Stan have in common when Fran is opening the door and Stan is opening the door? That they are opening the door. So opening the door is a property. Actions are properties that agents share when they are doing the same thing.
    If anything, I would suggest, what is odd is the idea that reasons count in favor of propositions.

62. Yep. I wasn’t paying attention to the significance of that claim at all. I’m not sure I’m yet happy to accept the idea, but I do agree that reasons favour actions rather than propositions. The talk of actions as properties still does sound odd. I perform actions but do I perform properties? I think about which actions to do, do I thereby think of properties I could have?
I’d think of them rather as events. Events hardly seem like properties, even though agents admittedly can have the property of being in a certain relation to an event. So I am happy to say that Fran and Stan have the same property of being related to the same kind of event of their causing the door to open.
I am going through Nagel again, though, and I am beginning to see the light.

  63. Hi Mark,
I too am puzzled by your repeated claim that actions are properties. (I have not read all the comments – if you explained this already, sorry about that.)
    Recently, you wrote:
    “What do Fran and Stan have in common when Fran is opening the door and Stan is opening the door? That they are opening the door. So opening the door is a property. Actions are properties that agents share when they are doing the same thing.”
    Shouldn’t we say, instead, that Fran and Stan each have the property of being someone who opens the door or of performing the action of opening the door?
    It seems to me that you need to rule out that more natural alternative (where the property is being an agent of an action, rather than an action) to move from the second to the third sentence I quote above.

  64. Brad and Jussi,
Doesn’t the ‘is’ in ‘Fran is opening the door’ look like the ‘is’ of predication? (Surely it isn’t the ‘is’ of identity.) And isn’t it most natural to think of the metaphysical correlate of a predicate as a property? And isn’t the relatively trivial answer to the question of what property Fran has when she is opening the door just this: the property of opening the door?
Mark isn’t saying that all properties are actions. But when one does an action of a certain type, your doing an action of that type is a property of yours.
So I guess I’m reporting that I find Mark’s suggestion pretty natural – more natural than the more complicated claim Brad makes, which in any case I don’t think we need to deny in order to endorse Mark’s claim.

65. Agreed. I certainly don’t deny that when Fran is opening the door and Stan is opening the door, they are both agents of the action-type, opening the door. They certainly are. But they are also both opening the door. So being agents of the action-type, opening the door, is something that they have in common, but opening the door is also something they have in common. Both are properties, but only the latter is an action.
I also agree with Jussi that no one does anything unless there is an event of her doing it. But events are particulars, and reasons don’t count in favor of particulars. Nor is the ought relation a relation between agents and particulars. Reasons are prospective – we have reasons to do things that we haven’t yet done, and we had reasons to do things that we did not do. But if we did not do them, there was no event of our doing them, and ipso facto no event to be the object of that reason. So events can’t be what we have reasons for.
    Perhaps, you might think, the solution is that act tokens are events, and reasons weigh in favor of action types, which are properties not of agents, but of events – of act tokens.
Still, as long as act tokens are only events which have an agent, there will be a 1-1 correspondence between action-types so conceived and actions conceived of as properties of agents. For every action-type conceived of as a property of events, there is the property of being the agent of an act-token of that type. And for every action conceived of as a property of agents, there is the property of being the event of someone’s tokening that property. (I think that reflection on Fran and Stan’s case provides intuitive support for my version of the priority of these two relations.)
    Given this 1-1 correspondence, there will be easy ways of re-formulating everything that I wanted to say about constraints and agent-relativity, etc., in the alternative framework – my points didn’t hang on my choice of ontology for actions.
    The ontology was supposed to illustrate, though, where the problems come from in trying to provide a theory-neutral characterization of agent-relative reasons in such a way that they will turn out to correlate with constraints, special obligations, and options.
    Intuitively, the problem arises because given the assumption that reasons take propositions, the reason not to steal has to be characterized as a reason for Al that Al not steal, a reason for Cal that Cal not steal, a reason for Hal that Hal not steal, and so on. If reasons take propositions, these reasons all have different objects. But if we think that reasons take properties, then Al, Cal, and Hal all have the same reason – to not steal. For each of them, it counts in favor of the same thing: not stealing. Which is either a property of agents, as I prefer, or a property of act-tokens. Either way, it is the same thing for each agent.
Nagel’s test for agent-relativity is essentially designed to see whether everyone has the same reason or different people have different reasons. Notice that in the case of the reason not to steal, the view that reasons take propositions leads to the view that each has a different reason. That’s why that is the assumption that Nagel needed in order to get his distinction to pick out constraint-like cases. But notice that given the view that reasons take actions, which are not propositions, but at the very least are in a 1-1 correspondence with properties of a certain kind, each agent has the same reason – not to steal. So given that assumption, Nagel’s definition fails to pick out constraint-like cases.
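To display the contrast in a toy notation (suppressing the consideration that does the favoring – nothing hangs on the notation itself):
Propositions view: Reason(Al, that Al not steal); Reason(Cal, that Cal not steal); Reason(Hal, that Hal not steal).
Properties view: Reason(Al, not stealing); Reason(Cal, not stealing); Reason(Hal, not stealing).
On the first line the object of the reason varies from agent to agent; on the second it is one and the same property every time. That, in effect, is the difference that Nagel’s test is sensitive to.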

  66. Oh. And Jussi – you say:

    I perform actions but do I perform properties? I think about which actions to do, do I thereby think of properties I could have?

    Try this one: I perform actions but do I perform events? I think about which actions to do, do I thereby think of events I could be part of?

67. I’m also sceptical of the claim that actions are properties. I think that Stan’s opening the door is an action. But Stan’s opening the door is not a property; it’s a state of affairs. (Compare: being tired is a property, but my being tired is a state of affairs. We can think of the former as a set of people, i.e. all the ones who are tired; and we can think of the latter as an ordered pair made up of me and the property of being tired.)

  68. Mark II,
    You write:
    “Mark isn’t saying that all properties are actions. But when one does an action of a certain type, your doing an action of that type is a property of yours”
    I am a bit confused by this response. The second sentence asserts the view I was calling more natural – that when A phi-s, A has the property of doing an action of the relevant type. This is different, I assume, from saying that the action is the property that the person has.
    My suggestion (where reference to type of action is taken as given):
    property = being agent of action
    = doing action
    = performing action
    The alternative suggested by “an action is a property”:
    property = the action performed
    = the action done

  69. Mark I,
    Fair enough. Just thought the “something in common” argument was meant to force your view. I am tempted to try to use the razor on your two-property view, but also agree that the ontology issue is orthogonal to the main thread here.
I also agree with what you say about oughts and (prospective) reasons for action not taking particulars as their objects. Your argument for that claim would, of course, be rejected by some (e.g. Dancy), but there is also the Davidson argument to back the view – that when John thinks he has reason to phi, there is no particular he has in mind, and there are countless different possible particular acts that would make it true that he phied for that reason.

70. Guess I should say Dancy might reject the argument, given that he is ok (crudely speaking) with non-existent states of affairs being reasons for action. I was thinking: if you are ok with that, why not also be ok with the other half of the reason relation being a non-existent event? In any case, I agree with you that the latter, like the former, is unpalatable.

  71. Here’s an argument for the view that actions aren’t properties.
    Consider:
    (1) Fran opened the door.
    (2) Stan opened the door.
    Each sentence describes an action. But it’s a different one in each case. Fran’s opening the door is not the same action as Stan’s opening the door. We can assume, for example, that they happened at different times. (Note: by “not the same” I mean numerically distinct.)
    Also, each sentence ascribes a property. Mark calls this property “opening the door”. But it’s the same property in each case. Each sentence ascribes the property to a different person; but it’s still the same property ascribed.
    The actions are different, but the property is the same. So the actions are not the property.

  72. Campbell – Each of your sentences describes an action, and clearly it is the same one – opening the door. Fran opened the door and Stan did the same thing: he opened the door, too.
    Of course, when Fran opened the door, something happened. And when Stan opened the door, something happened. Two different things happened: first, Fran opened the door, and then, Stan opened it. So there were two events.
For various reasons, ‘action’ is sometimes associated with the different things that happen. For example, Davidson naturally used it that way, because he started by asking what explains these happenings – these events. I’m not denying that any of that is okay. All I’m saying is that those aren’t the relata of the reason relation, which is prospective. Its relata are action-types.
    Brad C – The ‘something in common’ argument wasn’t supposed to force my view; it was supposed to support it. If you’ll recall, Jussi had just said that my view ‘is quite odd’, and so I was explaining why it is a natural possible view, not why you have to hold it.
    Either way, though, you seem to have a strange conception of burden of proof. You seem to think that your view is more natural. But I don’t see why. Recall that I showed how to generate a 1-1 correspondence between the two conceptions. The action-as-property-of-agent can be defined, on your view, as the property of being the agent of an act-token of a given type. And the act-type can be defined, on my view, as the property of being an event of someone’s performing that action.
    So now ask which of these is more natural? Take Fran and Stan, who each open the door. What do the events of their opening the door have in common? Surely, that each is a matter of someone opening a door. But that’s my story about event-types, not yours. And similarly, what do Fran and Stan have in common when each is opening the door? My answer is simple: that they are opening the door. Yours is complicated: it is that they each are the agent of an event that is a door-opening. Your candidate clearly piggy-backs on mine, and that illustrates something about the correct order of explanation, which is what the two views disagree about.

  73. Mark

    Each of your sentences describes an action, and clearly it is the same one – opening the door. Fran opened the door and Stan did the same thing: he opened the door, too.

    Doesn’t this just equivocate on “the same thing”?
    I own a computer, and you own a computer. Do we own the same thing? In one sense, yes: the things we own are the same, in the sense of being qualitatively similar (they’re both computers). In another sense, no: the things we own are different, in the sense of being numerically distinct.
    The things that Fran and Stan do, their actions, are qualitatively similar (they’re both door-openings), but numerically distinct (as I said above).
    Let me ask you this. Is Stan’s opening the door an action?
    I think it is. I also think Fran’s opening the door is an action. But Stan’s opening the door and Fran’s opening door have different properties. For one thing, the former is done by Stan, but the latter by Fran. So, by Leibniz’s Law or whatever, they are different, i.e. non-identical, actions. (Granted, individuating actions is a tricky business. But this case is clear.)
    So we have two actions, but only one property.

  74. Mark I,
    You write:
    “The ‘something in common’ argument wasn’t supposed to force my view; it was supposed to support it. If you’ll recall, Jussi had just said that my view ‘is quite odd’, and so I was explaining why it is a natural possible view, not why you have to hold it.”
    * Yep. That was what I agreed to in the last post.
    * In saying my suggested view is more natural I was alluding to the thought that it is odd to say that some event is a property I have. You may not share this intuition.
The fact that you were willing to add a second property of the sort I suggested leads me to think you have no beef with the claim that it is natural to posit some such property; you only contest the claim that the two-property view is less natural than the one-property view.
I also prefer simpler explanations (other things being equal), and my view only posits one property – the relational one of being the agent of an action – where your view posits two. So that seems to put the burden of proof on you. I suppose you think the further ‘something in common’ argument makes the case, but can you see that, in virtue of the simplicity consideration (that was what my “razor” comment alluded to), the burden is on you to show we need to posit two properties instead of one? I am not sure why you think this is strange, if (and only if) you do think it is.
    Here is my essay towards an account. Davidson is right: actions are identical with events; the property of being an agent of an action is thus the relation I bear to the event that is identical with the action. It is in virtue of that relation that the event counts as an action.
    Now you say:
    “Take Fran and Stan, who each open the door. What do the events of their opening the door have in common? Surely, that each is a matter of someone opening a door. But that’s my story about event-types, not yours.”
    I do not see why I cannot say that the common factor is that each is a matter of someone opening a door. As a first go, I would paraphrase that like this: some description, e.g. “opening the door,” is true of each event. It is true in virtue of the relations the people bear to the events.
    You then say:
    “And similarly, what do Fran and Stan have in common when each is opening the door? My answer is simple: that they are opening the door. Yours is complicated: it is that they each are the agent of an event that is a door-opening.”
    I would just say that some description, e.g. “being someone who opened the door” or “having opened the door”, is true of both of them. That is what is common. I do not see lots of complication here, and my account is, again, simpler because I only posit one property where you posit two.
    Finally, you say this:
    “Your candidate clearly piggy-backs on mine, and that illustrates something about the correct order of explanation, which is what the two views disagree about.”
    I am frankly unclear about what you mean here. I think there are two competing explanations. One is metaphysically simpler than the other. I do not see any explanatory relation between them at all.
    Maybe you mean your explanation explains why mine works? If so, and given your claim about their isomorphism, I would think mine can explain yours too.
I suspect I just do not understand the grounds for your piggy-backing claim yet. Perhaps it just rests on the intuition you have about there being some second property in common (I suggest that because you said earlier that reflection on the Fran and Stan case supports it). But as I do not share that intuition, that argument would seem to beg the question about which explanation is better. My argument, by way of contrast, appeals to the standard of simplicity, which I take it we share.
    Maybe these will help: (a) What extra explanatory force does your account have that mine lacks? (b) Is there anything other than the contested intuition that keeps the extra property in your account from being an explanatory spare wheel?

75. Campbell – I don’t know how to make myself clearer. I agreed that there are two events, and I agreed that you can use the word ‘action’ for events. But I also argued that events can’t be what reasons weigh in favor of. After all, the issue was about what reasons weigh in favor of, not about how to use the word ‘action’.
    So suppose that I have a reason to prepare today for my classes tomorrow. But suppose that I don’t. Then there is no event of my preparing today for my classes tomorrow. So which event did my reason count in favor of?
    Keep in mind that all I’ve been insisting on is that this is a natural and possible way to carve things up – not that other ways of carving things up are wrong. What I’ve been trying to explain is how tests for agent-relativity of reasons succeed at tracking constraint-like issues only given substantive and rejectable background assumptions.
    Bear in mind that I’m the one who acknowledges two possible views and that all that I’m asking is that both views be acknowledged when we carve up the space of possible views. People who think that they can track constraint-like cases by any of the standard definitions of agent-relative reasons have to deny not only that my view is true, but that it is an intelligible option.

  76. I agreed that there are two events, and I agreed that you can use the word ‘action’ for events.

    But you also seem to think that you can use the word “action” for properties. My argument shows why you can’t do that: actions aren’t properties.

    So suppose that I have a reason to prepare today for my classes tomorrow. But suppose that I don’t. Then there is no event of my preparing today for my classes tomorrow. So which event did my reason count in favor of?

    A possible event.

  77. Which possible event? There are lots of ways I might have done it, which would have been different events, had I done it those ways. Which one does my reason count in favor of?
    Also, I think I missed the part of your argument which showed that actions can’t be properties. I got the part which showed that on any interpretation of ‘action’ on which it picks out events, they can’t be properties. But that’s obvious: events aren’t properties. What further thing was your argument supposed to show?

  78. Let’s try a simple argument, Campbell.
    P1: If Fran opens the door and Stan opens the door, then there is something that Fran and Stan both do.
    P2: The things agents do are called ‘actions’.
    C1: So the thing Fran and Stan both do, if each opens the door, is called an ‘action’.
    Do you really think that it doesn’t make sense to talk about what Fran and Stan both did?

  79. The argument is invalid.
    Recall my earlier example. It’s true that you and I both own a computer. But that doesn’t mean there’s one thing that we both own. It means that there’s a kind of thing (things that are computers) such that you own a thing of that kind and I own a thing of that kind.
    Similarly, it’s true (let’s suppose) that Fran and Stan both open the door. But that doesn’t mean there’s one thing that they both do. It means there’s a kind of thing (things that are door openings) such that Fran does a thing of that kind and Stan does a thing of that kind.
    So if P1 is to be true, it must be understood as follows:
    P1: If Fran opens the door and Stan opens the door, then there’s a kind of thing such that Fran does a thing of that kind and Stan does a thing of that kind.
    But then the conclusion doesn’t follow.

  80. No, Campbell. It is true, given your theory that actions are events, that the only way for P1 to come out true is to reinterpret it in your way. But your theory is precisely what is under dispute.
    I claim that actions – things people do – are not like computers, and that this is exhibited by the fact that ‘If Fran owns a computer and Stan owns a computer, then there is something they both own’ is false, whereas ‘If Fran opens the door and Stan opens the door, then there is something they both do’ is true. Actions are kinds, on my view. Though as I’ve repeated several times already, doings of actions – the events that you call ‘actions’ – are also interesting and important, and I don’t object to calling them ‘actions’, so long as you don’t get mixed up and think that they are what reasons count in favor of.
    So my argument isn’t invalid; you simply deny its first premise. Similarly, I denied the first premise of your argument: ‘Each sentence describes an action. But it’s a different one in each case.’ (Campbell | February 20, 2007 at 07:56 AM) According to me, there are two performances – one action that is performed twice – so your premise is false.
    So we’re at a standstill, except that remember, you are the one who claimed to have an argument that ‘action’ cannot be used to refer to properties, whereas I admitted that both ways of talking make sense. In fact, my only thesis was that both ways of talking make sense, or at least are defensible enough not to be the kinds of view that should be ruled out of court when we are making fundamental distinctions in ethical theory that are supposed to be of general use. I never even meant to be defending the view that actions are properties, as I’ve explained several times. So you’re the one who still needs to explain why the view I articulated is a non-starter.
    Meanwhile, the only reason I made a claim about what actions are was in order to make a claim about what reasons count in favor of, and you still haven’t said which possible event my reason counts in favor of when I don’t actually do what I have reason to do.

  81. Mark
    1. You say:

    No, Campbell. It is true, given your theory that actions are events, that the only way for P1 to come out true is to reinterpret it in your way. But your theory is precisely what is under dispute.

So you’ll concede that the argument you gave begs the question, because its first premise presupposes that my theory is false. Good.
    2. I still deny that we have two equally sensible “ways of talking”. Your way isn’t sensible.
    Suppose Fran opened the door yesterday, and Stan opened it today. Then, it’s natural to say, Fran’s action was before Stan’s action. But on your view this cannot be so, because Fran’s action just is Stan’s action and nothing can be before itself.
Suppose Fran, but not Stan, opened the door clumsily. Then Fran’s action was clumsy, but Stan’s was not. Again, on your view this cannot be so, because nothing can be both clumsy and not clumsy.
    A sensible way of talking would allow us to ascribe different properties to Fran’s and Stan’s actions. Your way doesn’t allow this.
    3. Another thing. You say this:

    I claim that actions – things people do – are not like computers, and that this is exhibited by the fact that ‘If Fran owns a computer and Stan owns a computer, then there is something they both own’ is false, whereas ‘If Fran opens the door and Stan opens the door, then there is something they both do’ is true.

    Both of those sentences are ambiguous. Consider the consequent of the first sentence (about computers):
    (P) There is something that Fran and Stan both own.
    To expose the ambiguity, it’s helpful to restate this:
    (P*) There is an x and a y such that Fran owns x, Stan owns y, and x and y are the same.
Here, the phrase “are the same” is ambiguous. It could mean x and y are qualitatively the same (i.e. are of the same kind). Or it could mean they’re numerically the same (i.e. identical). So there are two readings of your conditional sentence. On one reading, where “same” is understood qualitatively, it’s true. On the other, where “same” is understood numerically, it’s false.
In this respect, your other conditional sentence (the one about opening the door) is no different. It’s ambiguous between two readings, one of which is false, and the other true.
    You seem to want to insist on the qualitative reading in one case, but the numerical reading in the other. I see no reason to do that.

  82. Campbell – Like Jamie in the last thread, I think I’m getting to the point of repeating myself. I’m going to clarify my position one last time, and then I have work that I need to do.
    1) You say:

So you’ll concede that the argument you gave begs the question, because its first premise presupposes that my theory is false. Good.

    No; by virtue of the fact that my argument is valid, and has a second premise that is hard to reject, your view entails that the first premise is false. But not all valid arguments are question-begging. I claim that mine is not, because I claim that mine has a first premise that is independently plausible, even though you are committed to rejecting it. In fact, that is what I claim the argument shows: it shows that you are committed to rejecting something plausible.
    2) Here you provide a string of nonsequiturs. Remember that I have said that there are two sensible ways of talking, and that on one, actions are universals, not particulars. I agree that in your sense, Fran’s action is before Stan’s and that in your sense, Fran’s is clumsy by Stan’s is not. But those things are not true of my sense of ‘action’. Your arguments are both nonsequiturs because all they show is that you have a sensible way of talking, which I allowed before you even argued for it. They do nothing to show that my way of talking is not sensible, because they don’t bear on my way of talking at all.
    3) According to you, ‘There is something Fran and Stan both own’ has two quantifiers in it. I only see one. It looks to me like:
    (Ownership) There is an x (Fran owns x & Stan owns x).
    Similarly, ‘There is something Fran and Stan both did’ looks like:
    (Action) There is an x (Fran did x & Stan did x).
Allow me to repeat my argument one more time. Though (Ownership) is unambiguously false, (Action) is, on some reading, true. You deny this, and that is a cost to your view. Of course, you don’t believe that it is a big cost, because you believe that there is something in the neighborhood that is true instead, and that I am mixing the two up. But I don’t think I’m getting mixed up at all. I think I am not at all confused in finding it plausible that:
    (Door) There is an x (Fran did x & Stan did x & x is opening the door)
    But my point is (apologies for repeating myself): tell me how to make sense of reasons as weighing in favor of events – even possible ones. I couldn’t care less about these linguistic arguments, unless they bear on the question of what the relata of the reason relation are.

  83. Mark,
What do we say about the following kind of case? (I’m not interested in polemics here; I’m just trying to figure out your view.) Suppose that
    Ed kissed Ed’s wife.
    Ted kissed Ted’s wife.
    Then, according to you as I understand it, Ed and Ted performed different actions, in that Ed has the property of kissing Ed’s wife and Ted has the property of kissing Ted’s wife. Do they also perform the same action, since they have the same property of kissing one’s own wife?
    If the answer is yes, then your view has the (to my ears odd) consequence that one and the same event-description can describe the performance of two distinct actions. This seems more strange than the usual fine-grained view that one and the same event can be more than one action; your view would be super-fine-grained.
If the answer is no, then I wonder what reasons Ed had for kissing his wife that Ted lacked. It seems to me that we often have what one could call indexical reasons – reasons to kiss my wife, discipline my children, insure my property, etc. – that are in one sense (with the indexical) common but in another sense (putting in names for the indexicals) different among individuals.

84. Hi, Heath. Apologies for getting polemical. I’m not exactly sure why I get a more fine-grained result. I think that kissing Ed’s wife and kissing one’s own wife are both actions, and that Ed does both when he kisses Teresa (his wife), but that Ted only does one of them when he kisses her.

  85. By “super-fine-grained” I just meant that yours is the only view I know of on which “Ed kissed Ed’s wife” describes (or reports, or whatever) two distinct actions. Davidson and Anscombe would say there is only one action here, the event. And Goldman would not say there were different actions until he had different descriptions of one event.

87. Couple of days sick and you miss all the fun. Mark is right to bring up Prichard’s problem. Dancy has a nice introduction to it in his new paper ‘Defending the Right’, and I’ve been working on it for a while too. It is a tough one no matter what view you have about reasons. It’s in this question:
    “Which possible event? There are lots of ways I might have done it, which would have been different events, had I done it those ways. Which one does my reason count in favor of?”
I’m not sure that the idea of actions as properties has an easier time with this one. On that view too, there are cases where I could have done a variety of actions for which there were good reasons. Now we are talking about different properties that would have been favoured by reasons. Of course, these properties are not ones that I have but ones I could have had. And you can ask again: which of these does the reason favour? I’m not sure this question is any different from the question of which possible events the reason favoured.
I’ve always been puzzled, though, by what follows from all of this. You’d think that reasons are important for what you do or should have done. They connect to how you actually are and on occasion demand things from you. Something about this idea seems to be lost if saying that I had a reason to do something means merely that there were possible properties of mine that were favoured, or possible events I could have initiated that were favoured.

  88. I had a reason to finish my grading. I didn’t. (Hope to this morning, though.) What does that involve? Here’s my proposal. It’s the past tense of ‘there is a reason for me to finish my grading’. And what does that say? Just:
For some x (Reason(x, me, finishing one’s grading))
The reason relation, on this view, holds between ‘considerations’ (whatever reasons are), agents, and properties. Nothing about how reasons ‘demand things of you’ gets lost by my saying this, because the reason relation just is the demanding-things-of-you relation.

  89. I’m sorry that I’ve dropped out of this discussion for such a long time. (It’s been an insanely busy week here in Oxford….)
    Anyway, now I’m tempted to quiz Mark (i.e. Mark Schroeder) about his view of reasons. So here goes.
Mark — Surely, on your views, the present-tensed statement ‘There is a reason for Mark to finish his grading’ must contain some reference to time somewhere?
    Indeed, arguably, such statements will typically contain two references to time. The first is relatively uncontroversial: some reference to a time (or at least some quantification over relevant times) will normally feature in the specification of the relevant property (e.g., strictly speaking, the property of finishing one’s grading should be specified as “the property of being an agent x such that x finishes the grading that x is under an institutional obligation to do before the deadline t” or something like that).
Secondly, in addition, I believe that we also need a time as an additional parameter for the reason relation itself. Just as ‘S ought to F’ implies ‘S can F’, so too, I assume, ‘There is a reason for S to F’ implies ‘S can F’ as well — where the relevant sort of ‘can’ refers not to mere logical or metaphysical ability, but to the sort of ability that one and the same agent can have or lack at various different times. So strictly speaking, ‘There is a reason at t for S to F’ implies ‘S can F at t’.
    But clearly this second time may be different from the time mentioned in the specification of the relevant property. Even if the relevant deadline has not yet arrived, it may already be the case that you cannot now finish your grading before the deadline. If so, it is not true that there is now a reason for you to finish your grading before the deadline (although of course at an earlier time, when it was still possible for you to finish your grading on time, there was then a reason for you to finish your grading on time); instead, there is now a reason for you to email a suitable apology to all your students, or something like that.
    In short, the reason relation is not just a three-place relation between a “consideration”, an agent, and a property, but a four-place relation between a consideration, an agent, a property and a time.
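Schematically, then, and borrowing your notation from the grading example:
For some x (Reason(x, S, F, t))
where x is the consideration, S the agent, F the property, and t the time at which the reason obtains – with the entailment that S can F at t.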

  90. Mark,
    Sorry for the unclarity. I meant this. (Names of) properties, I take it, can be generated by removing proper names from sentences, leaving open sentences. So from “Ed kissed Ed’s wife” we can generate two different properties of Ed:
    (i) x kissed Ed’s wife
    (ii) x kissed x’s wife
    So “Ed kissed Ed’s wife” describes two actions, if actions are properties.
    If these are both actions that Ed performs, the reasons for them are different. The reason for the first might be that (a) Ed’s wife is a great kisser. The reason for the second might be that (b) it builds a marriage.
If (a) is true, that is or might be a reason for Ted to perform the same action (i) as Ed, i.e. kissing Ed’s wife. But (a) will not be a reason for Ted to perform (ii), i.e. kissing Ted’s wife. Likewise, (b) is a reason for Ted to perform (ii) but not a reason for Ted to perform (i).

  91. Heath – I’m totally with you. There may be different reasons to kiss Ed’s wife than to kiss one’s wife (even if one is Ed). Doesn’t that seem to support the view that there are two actions there? Two different things for reasons to be in favor of?
    It still looked to me like you had to use two different descriptions in order to distinguish them, though. Of course, ‘Ed kissed his own wife’ entails ‘Ed kissed Ed’s wife’; but ‘Ed opened the door noisily’ entails ‘Ed opened the door’, and those still count as separate descriptions on any other fine-grained view. So I’m still not seeing what makes the view I offered more fine-grained.
    Ralph – I agree that there are two places in which a time can figure. I might today have a reason to show up in New York tomorrow. The fact that I have that reason today already explains why it is rational for me to start taking steps now to get to New York by tomorrow.
    I don’t see why it affects anything material in what I said, though. The former place for a time looks like it is part of the action. What I have a reason to do is not just to show up in New York, obviously, but rather to show up in New York tomorrow. So I get that.
    The latter place for a time – the ‘today’ in ‘today I have a reason to show up in New York tomorrow’ – is also important, as I just allowed. But so is the ‘today’ in ‘today my desk is brown’ – for after all, I might paint it white tomorrow. This is nothing special about reasons. If there is an extra place in the reason relation for times, then there is an extra place in the brown relation for times.
    It seems to me that it would be kind of stupid to get bogged down, in a discussion of ‘brown’, over whether it picks out a property or a relation to times, since that question has nothing to do with ‘brown’ in particular. So it looks the same way for ‘reason’. I’ll leave it to the semanticists and metaphysicians of time to tell us which way these things need to work. If they tell us that every relation needs an extra place for times, then I’m fine with that. I just think it’s an irrelevant distraction, and that I’d have to know more about the semantics of tense and the metaphysics of time in order to evaluate it.
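    Schematically, the parity I have in mind (where the arrow marks whatever time-parameter-adding rule, if any, the semantics of tense supplies):
    \[
    \mathrm{Brown}(d) \;\rightsquigarrow\; \mathrm{Brown}(d,\ t)
    \qquad\qquad
    \mathrm{Reason}(r,\ S,\ F) \;\rightsquigarrow\; \mathrm{Reason}(r,\ S,\ F,\ t)
    \]
    Whatever licenses the first transition licenses the second, and neither tells us anything about brownness or about reasons in particular.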

  92. Mark — Great, so you agree with me about reasons and time (so the reason relation as you understand it isn’t a timeless relation, like some mathematical relations perhaps, but more like being darker than, etc.).
    Next question — This “property” that is involved as a relatum of the reason relation — can it be absolutely any property at all, including “Cambridge” properties (like the property of being such that 2 is prime, or being such that Alfredo is dead by tea time, or in general, for any proposition p, being such that p is true)?
    If you’re disposed to answer Yes, then interestingly, your view may actually be rather closer to mine than I had thought….

  93. Hi, Ralph.
    I don’t have a settled view about which properties actions can be, but I definitely don’t go for the unrestricted view. At the very least, ‘being such that p’ properties don’t sound to me like things one can do. One thing you can do is to make sure that p, but that’s different. I also think there are further restrictions beyond that, but I don’t have any worked-out proposal to give you.
    Intuitively, I think the test is something like this: ‘Is that something someone can do?’ The right set of restrictions will capture just the cases that intuitively get ‘yes’ answers to this question, barring some reason to think that intuitions about this are misleading.

  94. OK, Mark, I see a bit better what your view is.
    As a semantic matter, I think that it’s better not to impose such restrictions. It’s easier to argue this with respect to ‘ought’, but even with respect to sentences of the form ‘There is a reason for S to F’, I think that there are going to be cases that will make trouble for any such restrictions.
    E.g., consider a case where the mafia consigliere advises the mafia don, “There is a reason for Alfredo to stay alive for the next 24 hours”. This statement will be false unless the mafia family has the ability to act in such a way that Alfredo is alive for the next 24 hours, but Alfredo’s being alive for the next 24 hours isn’t something that the mafia family “can do”. So I don’t believe that whenever a consideration is related by the reason relation to an agent and a property (at a time), it is necessary that the property be something that the agent “can do” at that time.
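    Spelled out in the same sort of notation as before (a sketch only, with ‘Alive24h’ abbreviating the property of being alive for the next 24 hours):
    \[
    \exists r\; \mathrm{Reason}\bigl(r,\ \mathrm{Alfredo},\ \lambda x.\ \mathrm{Alive24h}(x),\ t\bigr)
    \]
    The abstract here names a state rather than a ‘doing’; yet the consigliere’s claim is perfectly intelligible and can be true, so the property relatum need not pass the ‘can do’ test.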
    At the same time, I take this point only as a point about the semantics of normative terms like ‘reason’ in natural languages like English. I’m not claiming that there are any truths crucial to ethical theory (or to the theory of rational belief or rational decision) that can only be expressed by means of these more unrestricted uses of ‘ought’ and ‘reason’, which I regard as quite common in natural language.

  95. Mark,
    Good. One more question about this:
    “For some x (Reason(x, me, finishing one’s grading))
    The reason relation, on this view, holds between ‘considerations’ (whatever reasons are), agents, and properties. Nothing about how reasons ‘demand things of you’ gets lost by my saying this, because the reason relation just is the demanding-things-of-you relation.”
    I take it that relations can only hold between relata that exist. Now, I’m worried about the sense in which the property in question does that. Do properties exist on their own, independently of being instantiated in some object? In the example you gave, you did not finish grading last night, so you did not have the relevant property. Did that property still exist somewhere, so that it could stand in the favouring relation?
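    Schematically, the worry is this (my formulation, assuming that relations require existing relata, with ‘E!’ for existence):
    \[
    \mathrm{Reason}(r,\ a,\ P) \;\rightarrow\; \mathrm{E!}(r) \wedge \mathrm{E!}(a) \wedge \mathrm{E!}(P)
    \]
    If E!(P) in turn requires that P be instantiated, then a property you failed to instantiate last night threatens to leave the favouring relation short of one of its relata.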
