In my first post, I pressed Mark’s defense of the bold Humean thought that, crudely, if someone has a desire, then there is reason for him to act in ways that will help satisfy it. Let’s grant him that claim and move on.
There is an even larger worry in the offing: will Mark follow Hume in saying that someone could reasonably (or even ought to) choose scratching her finger over saving the world? He would have to if he said that a reason’s weight is proportional to the strength of the desire. But that view, which Mark calls Proportionalism, is one he rejects. He also rejects the usual neo-Humean attempts to blunt the counter-intuitiveness of Hume’s finger-scratching claim – he does not define correct weighing in terms of, e.g., coherence or higher-order desires.
Instead, Mark gives us a novel, and somewhat complicated, account of correct weighing and ought. It has the following rough shape: start with the reasons you have to act; then ask what (second-order) reasons you have to weigh these reasons one way rather than another; then ask what (third-order) reasons you have to weigh the second-order reasons one way rather than another… keep going until you reach a level where there are reasons to weigh one way but no reasons to weigh any other.
When you reach that point, you are ready to figure out the correct way to weigh the original reasons – weigh the second-highest-order reasons as the highest-order reasons suggest, and then continue on down until you have determined which of the first-order reasons you have most reason to weigh more heavily. Finally, we can say that correct deliberation will weigh the first-order reasons in the way just identified and that you ought to act in accord with correct deliberation.
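To make the shape of that procedure vivid, here is a toy sketch in code. Everything in it – the data structure, the numeric stand-in for “weight,” the rule that the weightiest higher-order reason carries the day – is my own illustration of the regress and how it bottoms out, not Mark’s formulation.

```python
# Toy model of the recursive account sketched above -- purely illustrative.
def correct_weighing(bearing_on, order=1):
    """Return the correct weights (name -> weight) for the reasons of the given order.

    bearing_on[order] holds the (order+1)-level reasons that bear on how to weigh
    the order-level reasons, as {higher_reason: (lower_reason_it_targets, weight_it_favors)}.
    """
    proposals = bearing_on[order]
    targets = {t for t, _ in proposals.values()}
    by_target = {t: [(name, w) for name, (tgt, w) in proposals.items() if tgt == t]
                 for t in targets}

    # Base case: for every lower-order reason the higher-order reasons agree --
    # there are reasons to weigh one way and no reasons to weigh any other way.
    if all(len({w for _, w in props}) == 1 for props in by_target.values()):
        return {t: props[0][1] for t, props in by_target.items()}

    # Otherwise, first settle how this level's reasons are themselves to be weighed,
    # then let the weightiest higher-order reason about each lower reason carry the day.
    weight_of = correct_weighing(bearing_on, order + 1)
    return {t: max(props, key=lambda p: weight_of.get(p[0], 0))[1]
            for t, props in by_target.items()}
```

On this toy picture, what you ought to do is whatever the first-order reasons support once they get the weights returned for order 1 – and notice that the procedure terminates only if some level really is conflict-free, a point that resurfaces at the end of the thread.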
Now to understand Mark’s view it is crucial to add two more elements:
First, we should always and only consider reasons of the right kind. And here is his definition of that:
(RKR) The right kind of reasons to do A are reasons that are shared by everyone engaged in the activity of doing A, such that the fact that they are engaged in doing A is sufficient to explain why these are reasons for them. (135)
His example is based on the premise that if one is playing chess, then one has some desire to follow the rules of the game; just grant that. He then says that because that desire is shared by everyone playing chess, there is a reason of the right kind for everyone playing chess to follow these rules. Desires like that contrast with ones that are not essential to an activity. You might, for example, be tempted to break the rules of chess to win some money and therefore have reason to break the rules. But because not all chess players have a desire that would be furthered by breaking the rules in order to win, your reason to break the rules of chess is not one that would carry weight in a correct deliberation about how to play chess; it is a reason of the wrong kind.
The second and final element of his view is his account of agent neutral reasons: basically, X is an agent neutral reason to φ if and only if X is a reason for each and every agent to φ. Given Mark’s account of reasons, we can also say – to put it a bit crudely – that X is an agent neutral reason to φ if and only if every agent has a desire whose satisfaction would be promoted by φ-ing. Finally, he says that an action promotes a desire’s satisfaction just by making its satisfaction more likely than it would be if the action were not performed. Against this backdrop, he expresses confidence that we will be able to show that there are agent neutral reasons to act morally – for any agent it is true that acting morally will promote some desire she has. Let’s just grant that.
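To make that last definition a bit more explicit – this symbolization is my gloss, not Mark’s wording – performing an action A promotes a desire D just in case

Pr(D is satisfied | A is performed) > Pr(D is satisfied | A is not performed),

so that, putting the pieces together, X is an agent neutral reason to φ just in case every agent has some desire that φ-ing promotes in this sense.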
We can now show why Mark thinks that even Frankie, the fetishistic finger-scratcher, ought to save the world rather than scratch his finger. Roughly, the argument goes like this (adapting from another case on page 142):
To figure out what Frankie ought to do, we need to figure out how he would correctly weigh the reasons for and against scratching his finger instead of saving the world. Because of his fetish, Frankie may have abundant reasons to place more weight on his reasons to scratch at the world’s expense. But those reasons aren’t relevant for our purposes because they are of the wrong kind – to be reasons of the right kind, they must be reasons that everyone who is placing weight on reasons (deliberating) has, in virtue of engaging in the activity of deliberating. And since deliberating is an activity in which every agent engages, only agent neutral reasons are reasons of the right kind to take into account when trying to determine how to correctly deliberate. Finally, because there is agent neutral reason to save the world and agent neutral reason to choose saving the world over scratching a finger, Frankie ought to save the world.
He uses an analogous argument to show that Ronnie has reason to help Katie, even though he hates her (142), and that Aunt Margaret ought not build the spacecraft, even though she really wants to (143).
With all this in mind, here are some worries:
Worry #1: The argument that Frankie ought to save the world I sketched above uses one key assumption: that when we are trying to decide how to weigh (first order) reasons, we are engaged in the activity of deliberating, so the right kind of reasons to take into account are those which all deliberators have – agent neutral reasons. But we can be engaged in more than one activity at the same time. Imagine that Ted Bundy is deliberating in order to figure out how to cover his tracks – should he kill a witness who saw him at the scene of his last crime or would that be too risky? When he deliberates about this, he is simultaneously engaged in two activities – deliberating and trying to cover his murderous tracks (of which his deliberating is one dimension). But if that is right, then it seems Mark must do more to establish his claim that whenever we are deliberating, it is only correct to weigh as agent neutral reasons suggest. Claims about correctness depend on claims about which reasons are of the right kind, which depend, in turn, on claims about the activity in which the agent is engaged.
Insofar as we focus on the fact that Ted's activity is deliberation, it will be right to say that agent neutral reasons are the only right ones, and that it is correct for him to decide not to kill the witness; from this point of view, his desire to cover his murderous tracks does not give him reasons of the right kind, while moral reasons do. But insofar as he is engaged in the activity of trying to cover his murderous tracks, it might be correct for him to decide to kill the witness, and his desire to cover his murderous tracks will generate reasons of the right kind. More generally, even if Mark’s account allows us to say that people ought not to act on reasons (or to weigh reasons) grounded in idiosyncratic desires, the same cannot be said for reasons that are grounded in desires typical of those engaged in criminal activities. In addition to generating counter-intuitive ought claims, this also seems to entail that, on Mark's view, Ted both ought and ought not to kill the witness, if the risk of doing so is low.
Worry #2: I have an independent worry that Mark’s approach – as he represents it – is too strong. It entails that when we correctly deliberate we weigh first-order reasons as agent-neutral reasons suggest, and never in the way that merely agent-relative reasons alone suggest. Now assume, as Mark hopes we should, that moral reasons are agent neutral. Won’t his view commit us to saying that we always ought to volunteer rather than spend the weekend at a stamp-collecting convention (or mountain climbing or piano playing)? Because some people have no desire that would be promoted by spending the weekend stamp collecting, there is no agent neutral reason to go stamp collecting. And this raises doubts about there being agent neutral reason to choose stamp collecting over volunteering. More generally, it is hard to see how there could be an agent neutral reason to choose working on a hobby rather than doing some supererogatory moral act.
Again, any advice about how to be more charitable and any responses to these worries are welcome. And advice about how to shorten my summary of Mark’s view is also welcome, via email.
I don’t have the book with me, and since you’re right that this part is complicated, bear with me if I say something clearly wrong.
Here is how I would respond to your first worry. It’s true that Ted is deliberating, and so the right kind of reasons for deliberators are relevant to what he ought to do. But the right kind of reasons for those engaged in the other activities he’s engaged in are also relevant to what he ought to do. So, the right kind of reasons for serial killers are relevant to what he ought to do. Substantive claim: The right kind of reasons for deliberators, most especially the moral ones, will win out. They will be the ones on which it is correct to place more weight. If the moral ones win out, then it’s not the case that Ted both ought to kill the witness and ought not to kill the witness. He ought not kill the witness because it is correct to place more weight on the moral reasons than on the serial killer reasons.
I think the basic idea here helps with the second worry. If the reasons to be weighed up are not just the right kind of reasons for deliberators, but also the right kind of reasons for those engaged in other types of activities, then it’s plausible that the right kind of reasons for some of the other types of activities will periodically outweigh the moral ones (or the agent-neutral ones for all deliberators). Now don’t ask me for the details of that, but once we allow that it’s not always the case that the only reasons that are relevant are the ones that everyone has in virtue of deliberating, it doesn’t boggle the mind to think that sometimes the deliberator reasons will be outweighed by reasons of the right kind for those who play the piano, or climb mountains, or collect stamps.
[n.b. some of the wording here bothers me, but without the book I don’t know how to correct it. Also, I’ve just flown halfway across the world, and so I might later be appalled by what I just wrote.]
Hi Errol,
Thanks for the comment, which was both lucid and interesting. Where in the world are you?
I have two worries about your suggested response on Mark’s behalf.
First, when he gives the recursive account of weighing (138), he seems to imply that when we ascend the levels of reasons we will reach a point at which there are reasons on one side but none on the other, and that that is why we avoid an infinite regress. But on your suggestion, there might be conflicting reasons all the way up, so to speak; you think that correctness depends on a substantive claim about which reasons in fact weigh more than others.
Second, even if your suggestion were accepted by Mark, it seems to threaten his reductionist aspiration; it is hard to see how a substantive claim about which reasons are heavier than others (e.g., that moral reasons are heavier than serial killer reasons) could be reduced to a claim about the relevant agent’s (Ted’s) desires.
I suspect that Mark wants to avoid appeal to substantive reason-weight claims for exactly this reason, and his attempt to focus on deliberation as the only relevant form of activity is part of a strategy to get reduction without being committed to ought-relativism.
Hope that makes sense.
Great questions, Brad. I’m worried more about #2 than about #1.
Let me respond to #1 by trying to clarify what is supposed to be doing the work. I don’t mean to be using the assumption that Frankie is deliberating in order to explain why his reason is weighty. The view is that the weight of a reason is the weight it carries in correct deliberation. That is why deliberation matters, whether or not Frankie is deliberating. The weight of a reason, in the sense in which we are interested, is not the weight it carries in correct chess-playing, or in correct knot-tying, or in correct any-activity-other-than-deliberation. So that is why those activities don’t matter, even if Frankie is engaged in them.
So why do I appeal to the assumption that everyone is a deliberator? Well, in order to keep things simple (they were already getting complicated, and I wanted to keep the basic ideas in full view rather than letting the complications distract), I tried in chapter 7 (the one this question is about) to argue for the conditional conclusion that if my answer to the Too Few Reasons objection works, then there is unlikely to be an extra worry created by the concerns about weight. And my answer to the Too Few Reasons objection is supposed to explain how there are some reasons that are shared by necessarily every agent. So if the class of agents is the same as the class of deliberators, then that reduces one problem to the other. Anyway, that was the idea – the fact that Frankie is deliberating in this particular case is not, in fact, supposed to be playing a direct role. (Hope that helps clarify things.)
I’m much more worried about your worry #2 – in fact, it’s one of the things that worry me most about the book. I think it’s probably fair to say that nothing I’ve done in the book shows that my explanation is not too strong in precisely this way. The worry is that agent-neutral reasons could always outweigh merely agent-relational reasons, if the agent-neutral reasons are always backed up by higher-order reasons of the right kind, and merely agent-relational reasons are never so backed up.
I escape the objection just in case merely agent-relational reasons are sometimes backed up by reasons of the right kind at the next level. So, for example, Ronnie’s reason to go to the party outweighs his reason to give his party money to famine relief for the nth time, if there are reasons of the right kind to place more weight on his (merely agent-relational) reason to go to the party, and they outweigh the reasons of the right kind to place more weight on his agent-neutral reason to give his party money to famine relief for the nth time.
What would such reasons be? I don’t know, but this might be one: that he has already made many sacrifices for famine relief. That might be a reason of the right kind to place less weight on the reasons to give to famine relief. And the fact that dancing is a harmless pastime might be a reason of the right kind to place more weight on the fact that there will be dancing at the party, as a reason to go there. Or perhaps the fact that dancing is a harmless pastime is a reason to place more weight on the fact that Ronnie really enjoys dancing as a reason to place more weight on the fact that there will be dancing at the party as a reason to go there. I’m not sure – those are just some ideas, and are just intended to fill out a picture of what sort of thing I would need in order to be able to definitively set aside Brad’s second worry in this post.
Hi Mark,
Thanks for the thoughtful response. Your comments about worry #1 are enlightening – but now that I see how the argument depends on the previously given account of correct deliberation, I wonder how you hope to achieve reduction: how does the claim that ought is best defined in terms of correct deliberation bottom out in a claim about desires instead of in “our” intuitions about what sounds plausible? What if Ted thinks he ought to act in accord with best serial killing?
On worry #2: I thought you would suggest going that route, and I agree it makes sense. The worry would be that I could come up with possible deliberating agents showing that the proposed reasons are not grounded in essential deliberative desires. But I see that this is not easily shown, and I do not see a principled argument that would tell against your move – it would need to show that there is no essential deliberative desire that would ground an agent neutral reason to refrain from being a moral perfectionist.
A related question: what do we do when agent neutral reasons conflict? Say you are right and the fact that Ronnie has given a lot is a reason for him not to give more. I assume that the fact that the people are in need is still (also) a reason to give more. Does your recursive account give us a way to figure out which is more weighty? I worry that we will ascend to higher and higher levels of reasons but never reach a level where all the reasons point in one direction (the base from which the weighing is supposed to proceed).