Suppose that you know that your opening a certain safe is what would maximize the good. But suppose that you don’t know what the correct combination is. Some critics (e.g., Wiland 2005 and Howard-Snyder 1997) allege that, despite your ignorance of the correct combination, objective consequentialism (OC) implies that you are morally required to open the safe – OC being the view according to which S’s performing x is morally permissible if and only if there is no alternative, y, that would in fact produce better consequences than x would. This implication, they claim, is absurd. Contrary to OC, it’s permissible for you to fail to open the safe even though this amounts to your failing to maximize the good. Thus, they conclude that we should reject OC. Here’s the example in greater detail:

The Safe: Imagine that the time is T-0 and that the fate of the world depends on your opening a safe by T-11. Suppose that the correct ten-digit combination is 205-513-9437. And suppose that although this ten-digit combination is unknown to you, you could dial it in the following sense. If you were to intend at T-1 to dial a ‘2’ for the first digit of the combination, you would then succeed in doing so. And if you were subsequently to intend at T-2 to dial a ‘0’ for the second digit of the combination, you would then succeed in doing that. And so on and so forth for the remaining eight digits. Thus, there is a sequence of actions, each of which you are capable of performing, that would result in your opening the safe by T-11. Let me stipulate, though, that given your ignorance of the correct combination you would in fact fail to open the safe by T-11 no matter what you intend at T-0 to do.*

Critics argue that, despite this last fact, OC implies that you are (as of T-0) morally required to dial 205-513-9437 and thereby open the safe by T-11, for this is, of all your alternatives, the only one that would maximize the good. They conclude, then, that OC absurdly implies that you are required to dial the correct combination despite your ignorance of the correct combination. Moreover, Wiland argues that if you’re required to dial the correct combination in this case, then you’re required, on most every other occasion, to type out and publish the next supremely important document (perhaps, one describing the cure for cancer), for typing out and publishing this document would maximize the good. And since almost all of us regularly fail to type out and publish the next supremely important document, almost all of us are acting impermissibly virtually all the time.

But let’s consider more carefully the critics’ assumption that, in The Safe, your dialing the correct combination is one of the alternatives from which OC tells you to select. Whether it is, we’ll see, depends on what we take the relevant alternatives to be. Of course, it’s clear that, in The Safe, the relevant alternatives are sequences of actions — e.g., sequences of number-dialing acts. But the critics go further and assume that the relevant alternative sequences are those that are, as of T-0, personally possible for you. The consequentialist could, however, reject this assumption and claim instead that the relevant alternative sequences are those that are, as of T-0, securable by you. In so doing, the consequentialist could avoid the absurd implication that you are required, in The Safe, to dial the correct combination as well as the absurd implication that you are required, on most every other occasion, to type out and publish the next supremely important document.

To illustrate the difference between what’s securable by an agent and what’s personally possible for that agent, consider the following. Suppose that I’m on a low-fat, low-carbohydrate diet. It’s now 2 p.m., and I’m thinking that I should grill up some asparagus and a lean boneless, skinless chicken breast for tonight’s dinner. If this is going to be what I’m having for dinner, I’ll need to get into my car, drive to the grocery store, buy the necessary ingredients, drive back home, marinate the chicken, prep the asparagus, fire up the grill, wait for it to heat up, grill the chicken and asparagus, plate them, and eat them. But suppose that, as a matter of fact, no matter what I plan or intend at 2 p.m. to do later, I will not end up eating grilled chicken and asparagus for tonight’s dinner. For, as a matter of fact, I’m going to eat pizza instead. What’s going to happen is this: while in the midst of preparing the chicken and asparagus, I’m going to get very hungry. And, in this state, I’m going to be overcome by the temptation to go off my diet and will at 5 p.m. throw the chicken and asparagus back into the fridge, turn off the grill, and order pizza. It’s not that I couldn’t grill and eat the chicken and asparagus. Indeed, I could, in that, were I to intend at 5 p.m. to continue with my plan to grill and eat the chicken and asparagus, that’s exactly what I would do. It’s just that, as a matter of fact, I will at 5 p.m. abandon my plan and order pizza instead.

In this case, my eating chicken and asparagus at 6 p.m. is, as of 2 p.m., personally possible for me, but it is not something that is, as of 2 p.m., securable by me. It’s personally possible for me, for there is a series of steps that I could take that culminate in my eating grilled chicken and asparagus at 6 p.m. tonight, and the following is true of each of those steps: having taken the previous step or steps, I could then take the next step in that, were I to intend to take the next step, I would succeed in doing so.
Nevertheless, my eating chicken and asparagus at 6 p.m. is not, as of 2 p.m., securable by me, for, as we’re supposing, no matter what I intend to do now at 2 p.m., I will not end up eating grilled chicken and asparagus at 6 p.m. Even if I were, at 2 p.m., to form the most resolute intention to stick with my plan, it’s a fact about my psychology that I would abandon it in the face of the ensuing hunger and the ever-increasing temptation to eat pizza. Thus, there’s absolutely nothing that I can intend to do now that will result in my eating chicken and asparagus at 6 p.m. — or so we’re assuming.
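The two modal notions can be made vivid with a toy model. Everything below (the intention labels, the counterfactual tables, the function names) is an illustrative stipulation of mine, not part of the example itself: 'securable' quantifies over intentions formable now, while 'personally possible' chains stepwise counterfactuals.

```python
# Counterfactual: if I were to form this intention at 2 p.m., what would I
# eat at 6 p.m.? (Stipulated: every 2 p.m. intention ends in pizza, since
# I'll cave to temptation at 5 p.m.)
OUTCOME_OF_2PM_INTENTION = {
    "stick to diet": "pizza",
    "resolutely stick to diet": "pizza",
    "order pizza": "pizza",
}

def securable(target, outcomes):
    """Target is securable iff SOME intention formable now would bring it about."""
    return any(result == target for result in outcomes.values())

# Stepwise counterfactuals: having taken the previous steps, were I to intend
# the next step, I would succeed in taking it (stipulated true for each step).
STEPS = ["shop", "marinate", "grill", "plate", "eat chicken and asparagus"]
STEP_WOULD_SUCCEED = {step: True for step in STEPS}

def personally_possible(steps, would_succeed):
    """Possible iff each step would succeed, given that the prior steps were taken."""
    return all(would_succeed[s] for s in steps)

print(personally_possible(STEPS, STEP_WOULD_SUCCEED))                    # True
print(securable("eat chicken and asparagus", OUTCOME_OF_2PM_INTENTION))  # False
```

The point of the sketch is just that the two predicates quantify over different things, so they can come apart: every step would succeed if intended in sequence, yet no now-formable intention leads to the target outcome.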

Now, in The Safe, your opening the safe is, as of T-0, personally possible for you, but it is not, as of T-0, securable by you. It is absurd to suppose that you are, as of T-0, morally required to open the safe by T-11 given that, no matter what you intend at T-0 to do, you will fail to open the safe by T-11. But this is a reason not to reject OC but rather to identify the relevant alternatives as those that are securable as opposed to personally possible. Once we combine OC with the idea that the relevant alternatives are all and only those that are securable by the agent, we avoid absurd implications in cases such as The Safe.

*Here and elsewhere, I’m assuming that, for each set of actions that S might perform, there is some determinate fact as to what the world would be like were S to perform that set of actions. And I’m also assuming that, for each set of intentions that S might form, there is some determinate fact as to what the world would be like were S to form those intentions. This assumption is sometimes called counterfactual determinism — see, e.g., Bykvist 2003. I’m assuming, then, that even if you were to pick the correct ten-digit number at random and intend to dial it and so intend at T-0 to dial 205-513-9437, you would still fail to dial that number, for you would, say, half-way through change your mind and decide to dial a ‘6’ instead of ‘3’.

## 55 Replies to “Objective Consequentialism and the Wiland/Howard-Snyder Objection”

1. Jussi Suikkanen says:

hmh. Doug, can you first say why opening the safe at T-11 isn’t securable by you? Is the time aspect relevant here? I mean we could run the same case without multiple actions. Imagine a bomb with 10 to the power of 10 buttons on its side. If you hit the button 205-513-9437 all will be well. If you don’t hit any button, let’s say 10 people will die. If you hit any other button, let’s say 10000 people die. In this case, is punching the right button securable by you? If not, why not? I mean here we don’t have a series of actions where you might become uncertain about what to do next. And, the conclusions of the case seem to be the same.
Also, I quite like Philip Pettit’s line on these cases. He thinks that the relevant actions in these cases really are wrong, but tries to explain away our intuitions against this. The idea is that these actions belong to the class of wrong actions for which agents are not blameworthy (Smith has a similar idea). You can then say that we use the word ‘wrong’ pragmatically to blame people even if this isn’t part of the content of the word. As a result, there’s an explanation of why our wrongness intuitions track blameworthiness even if these two terms are not coextensive.

2. Jamie Dreier says:

Doug simply stipulated that opening the safe is unsecurable:

Let me stipulate, though, that given your ignorance of the correct combination you would in fact fail to open the safe by T-11 no matter what you intend at T-0 to do.

But is this stipulation essential to your solution, Doug? Let’s drop the stipulation. Suppose instead that if the would-be safe-opener, Crackerjack, were to intend at T-0 to dial 205-513-9437, he would succeed in doing so by T-11.
Now the Objective Consequentialist is again stuck saying that Crackerjack acts wrongly by failing to dial in 205-513-9437. (Does he teach at the University of Alabama?)

3. Campbell Brown says:

1. The objection seems to presuppose some principle like this: it is possible for you to F if, were you to intend to F, you would F. But that seems false, because it might be impossible for you to intend to F.
2. Perhaps a defender of OC could respond as follows. Suppose there is some sequence of numbers n1, n2, … n10 of which the following counterfactual is true: if you were to enter any sequence of numbers into the safe, then you would enter n1, n2, … n10. If this is not the correct sequence (i.e., the one that will open the safe), then it’s not possible for you to open the safe, and so you’re not required to do so. But if it is the correct sequence, then you are required to open the safe, for your failure to do so would show that you didn’t even try to open it.
Here’s an analogy. Suppose that what needs to be opened is not a safe, but a locked door. You have a key, but you don’t know whether it will fit the lock. If it will, then you’re required to open the door. But if it won’t, then you’re not required to do so.

4. Doug Portmore says:

Jussi,
Yes. Time is relevant. I should have said that opening the safe by T-11 is not, as of T-0, securable by you. It’s not, as of T-0, securable by you, because, as stipulated, you would in fact fail to open the safe by T-11 no matter what you intend at T-0 to do. (I’ve fixed the text, by the way.)
Jamie,
You write:

But is this stipulation essential to your solution, Doug? Let’s drop the stipulation. Suppose instead that if the would-be safe-opener, Crackerjack, were to intend at T-0 to dial 205-513-9437, he would succeed in doing so by T-11.
Now the Objective Consequentialist is again stuck saying that Crackerjack acts wrongly by failing to dial in 205-513-9437.

Yep, the stipulation is essential. Crackerjack acts wrongly by failing to dial 205-513-9437. Why do you say we’re “stuck” with this result? This result doesn’t seem problematic to me. What seems problematic to me are the sorts of implications Wiland attributes to OC, where according to OC I’m required to type out the next supremely important document even though there is clearly no intention that I could form now that would result in my doing so.

5. Jussi Suikkanen says:

Ok. So, if we just make it a non-temporal case with the same outcomes and probabilities, doesn’t the initial objection still stand?

6. Doug Portmore says:

Campbell,
Regarding your first point, the assumption is usually both that (1) S would perform x if S were, at T-i, to intend to perform x, and that (2) S is not, at T-i, in a state that’s incompatible with S’s intending to perform x.
Regarding your second point, that’s essentially what my response is. In other words, that is pretty much what OC says if you let the relevant alternatives be those that are securable as opposed to those that are personally possible. The only difference is that I want to allow for weird cases where, were you to intend to scratch your back by rubbing against the dial, you would in fact open the safe.

7. Doug Portmore says:

Jussi,
If the case is one where the fate of the world depends on your pushing the red button when you don’t know whether it’s the red button or the blue button that must be pushed to save the world, then OC implies that you’re morally obligated to push the red button — assuming, of course, that you would push the red button were you to form the intention to do so.
Now, you ask: “So does the initial objection still stand?”
I’d say it doesn’t. The objection that people like Wiland are raising is not just the standard epistemic objection: that we often don’t know which alternative will maximize the good. Their objection goes beyond this, I take it. It says that we’re required to do things like beat Karpov at chess and type out the next supremely important four-hundred-page document. In these sorts of cases, the standard Moorean response (that you like) where one holds that these are cases of blameless wrongdoing seems less satisfactory. So I’m happy to say that pushing the blue button is a case of blameless wrongdoing. But I’m not happy to say that my failing to type out the next supremely important four-hundred-page document is just another case of blameless wrongdoing.

8. Nick says:

Doug,
I don’t think your view helps as much as you suggest with the document cases. The thought shouldn’t be that I just couldn’t form the relevant intentions because the papers would be too long and I just can’t form such complicated intentions. I don’t think securability has anything to do with that issue. (Plus, this point about us being unable to form intentions doesn’t arise for some of the examples in the paragraph below.)
Is the thought that even if I could form such complicated intentions, I’d fail to carry them out (because I’d forget and mess up or something)? If this is the answer, note that these problems do not arise with less lengthy/complicated, but still highly important, documents/ideas. Surely there have been times in the history of physics or mathematics where, if someone had just written down a certain simple differential equation (perhaps involving known quantities) or coined a certain definition, progress would have moved forward at a rapid pace. Think about someone writing down F = ma, or certain key definitions of calculus, topology, etc. Or think of someone writing down relatively short directions for medical treatments that would be vastly beneficial. This is especially bad in cases where a certain drug treatment has been created to treat one kind of illness, but it isn’t yet known that it has further applications. There is just no reason to think that these doctors couldn’t intend to write such prescriptions or carry out their intentions. I think you’ll be forced to say that, in at least some of the cases just described, many people are failing miserably by not writing these things down if you’re an Objective Consequentialist, whatever else you think about securability.

9. Jamie says:

Doug,
Oh, I guess I don’t see the difference between Eric’s example and Crackerjack’s example. Is it that the important document would be very long, so that it’s hard to form the intention to type out the whole thing (or if I did form that intention, it’s plausible that I wouldn’t stick with it)?
But suppose there is some fairly compact little formula I could write out. It’s the “double helix” of bone cancer research, say: a small insight which would, if written out and disseminated, spark a new line of research that would quickly result in wondrous cures, and it’s only one line of text. Of course, I could never come up with this formula in a million years, but if I were to form the intention to type it out and post it on the internet, I would easily succeed.
Is your view that I am doing something morally wrong by not typing out the wonder formula and posting it? Or are you saying that if I formed the intention to do it, I would fail — that the conditional (in italics) I’ve stipulated is implausible?
Let me add that I think it’s extremely unintuitive that Crackerjack is morally required to dial 205-513-9437 — but I take it you just have a different intuition.

10. Doug Portmore says:

Nick,
The thought shouldn’t be that I just couldn’t form the relevant intentions because the papers would be too long and I just can’t form such complicated intentions. I don’t think securability has anything to do with that issue.
Why shouldn’t it be? This is precisely what I’m thinking. More precisely, ‘securable’ is defined as follows:
A set of actions, αj, beginning at T-j is, as of T-i (T-i ≤ T-j), securable by S if and only if there is a set of actions, αi (αi being a subset of αj), such that both of the following hold: (1) S would perform αj if S were, at T-i, to intend to perform αi, and (2) S is not, at T-i, in a state that’s incompatible with S’s intending to perform αi.
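Read as a decision procedure over stipulated counterfactuals, the definition might be sketched as follows. This is a hedged sketch, not Doug's own formulation: the `would_perform` and `compatible` relations are inputs that the philosophy supplies, and the names are mine.

```python
from itertools import chain, combinations

def subsets(actions):
    """All subsets of a set of actions (the candidate alpha_i's)."""
    s = list(actions)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def securable(alpha_j, would_perform, compatible):
    """alpha_j is securable by S iff some subset alpha_i satisfies both:
    (1) S would perform alpha_j if S were, at T-i, to intend alpha_i, and
    (2) S is not, at T-i, in a state incompatible with intending alpha_i."""
    return any(
        would_perform(frozenset(a_i), alpha_j) and compatible(frozenset(a_i))
        for a_i in subsets(alpha_j)
    )

# The Safe, schematically (abbreviated to three digits): condition (2) holds
# for every candidate intention, but the stipulated counterfactual in (1)
# fails across the board, so dialing the combination is not securable.
dialing = frozenset({"dial 2", "dial 0", "dial 5"})
never_performed = lambda a_i, a_j: False  # the stipulation from The Safe
always_compatible = lambda a_i: True

print(securable(dialing, never_performed, always_compatible))  # False
```

The existential quantifier over subsets is what allows for the weird cases mentioned in comment 6: if intending some quite different alpha_i would counterfactually bring about the performance of alpha_j, that suffices for securability.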
And, yes, it’s true that some of us would type out the next supremely important document if we just sat down at our computers and gave it a go. I think that these people are obligated to do so. I just deny that I am, as of now, obligated to type out, say, a document describing the cure for cancer — assuming, that is, that even if I were to form the intention to give it a try, I would fail. And I don’t think that I’m in a state now that’s compatible with my intending to type out all the individual keystrokes that would constitute my typing out the cure for cancer. That’s just too complicated.

11. Doug Portmore says:

Jamie,
Is it that the important document would be very long, so that it’s hard to form the intention to type out the whole thing (or if I did form that intention, it’s plausible that I wouldn’t stick with it)?
Yes.
Is your view that I am doing something morally wrong by not typing out the wonder formula and posting it?
Yes.
Let me add that I think it’s extremely unintuitive that Crackerjack is morally required to dial 205-513-9437 — but I take it you just have a different intuition.
Fair enough. Is it as implausible as saying that you’re obligated to type out the next King Lear? I don’t think so. So, admittedly, I’m not addressing the standard epistemic objection to OC. I’m merely addressing the sort of twist that Wiland and Howard-Snyder put on it.

12. Doug Portmore says:

Nick,
I should add, as I did in response to Jamie, that I do think that OC is committed to the view that you’re obligated to type out the next supremely important formula (such as F=ma) if this would do a lot of good.
But this is different than the claim that people like Wiland and Howard-Snyder make. They claim that OC is further committed to the view that I could be obligated to beat Karpov at chess and obligated to type out some very long document.

13. Nick says:

Doug,
I agree that the point about not being able to form certain intentions is relevant for some cases, but deny that the securability requirement helps explain this in a special kind of way. The explanation is open to all kinds of folks, including people very similar to possibilists, who might think that in the relevant sense of “can”, you can’t perform actions you can’t intend.
For what it’s worth, I do not find an intuitive difference between long document type cases and very difficult, but short and important, documents. I also do not find a difference between synchronic and diachronic safe cases. It therefore seems odd to me to be embarrassed about one kind of case, but not the other (but that’s just a first-pass judgment).

14. Doug Portmore says:

Nick,
Do you find an intuitive difference between the following two cases?
Case 1: It’s T-0. The fate of the world depends on your pushing the red button. You don’t know whether it’s the red button or the blue button that must be pushed to save the world. You take a shot at saving the world, guess that you need to push the blue button to save the world, form the intention at T-1 to push the blue button, and push the blue button at T-2. The world is destroyed. Had you formed the intention at T-1 to push the red button, you would have succeeded in doing so, thereby saving the world.
Case 2: It’s T-0. The fate of the world depends on your beating Karpov at chess. You don’t know anything about chess except for the rules about how to move the pieces. You take a shot at beating Karpov and make a series of moves that seem to you to be the best moves. You lose. The world is destroyed. No matter what you had intended at T-1 to do, you would not have beaten Karpov.
Intuitively, I find it considerably less plausible to suppose that you are, in Case 2, morally obligated, as of T-0, to beat Karpov than to suppose that you are, in Case 1, morally obligated, as of T-0, to push the red button.
If you don’t share this intuition, you will rightfully be unimpressed by my argument. If so, fair enough.

15. Jussi Suikkanen says:

Doug,
I’m not sure I see the relevance of time yet. What drives the Karpov and writing examples is not that they are actions that take a long time but that they are actions that are difficult to perform. Pushing a red button is not like this. But we can think of instantaneous actions that are difficult to perform. For instance, we could set up a bomb that can be defused by whistling at exactly the right high pitch. Imagine that doing this reliably is as difficult as beating Karpov or writing a document that describes the cure for cancer (say that achieving this skill takes as much talent, time, and effort, but it is still humanly possible).
This is a non-temporal case, so securability doesn’t come into play. So maybe you do accept that this is a case of blameless wrongdoing if you don’t whistle the right way. But if you say this, then I don’t see much motivation to say anything else in the temporal cases.

16. Doug Portmore says:

Jussi,
In your case, is it true that there is some intention such that, were I to form that intention, I would whistle at the right pitch? If so, what is that intention?
Suppose that the fate of the world depends on my beating someone at golf. Suppose that I could either lay up (that is, play it safe and just aim short of the green) or go for a hole-in-one at the risk of ending up in the bunker. It’s very difficult to hit a hole-in-one. I’m imagining, though, that OC says that you are morally required to hit a hole-in-one iff there is some intention such that, were you to form that intention, you would succeed in hitting a hole-in-one. So on one occasion it may be that if you were to form the intention to try to hit a hole-in-one, you would hit a hole-in-one. In that case, OC implies that you are required to hit a hole-in-one. On another occasion, it may be that no matter what you intend to do, you would not succeed in hitting a hole-in-one. In that case, OC denies that you’re required to hit a hole-in-one. To my mind, what matters is not how difficult it is to hit a hole-in-one in the sense that you would succeed in only a small percentage of attempts, but whether one would succeed in doing so were one to form some relevant intention.

17. Nick says:

Doug,
I don’t really share the intuition that I did something wrong in Case 1, but I understand where someone is coming from when they say that I acted wrongly in that case. So I’ll play along. If I grant a difference between these cases, why should I grant a difference between diachronic safe cases and synchronic safe cases, or between long and difficult papers and short and difficult discoveries?
I agree with Jussi that something about difficulty drives the intuitions in these cases. My hunch is that it is an epistemic kind of difficulty (we don’t know how to do what’s right, or something like that) for the cases we’ve discussed so far.
The whistling case may involve another kind of difficulty. Whether you think that is also an epistemic difficulty will depend on whether you think knowing how to whistle at the right pitch is a kind of knowing that.

18. Jussi Suikkanen says:

I don’t know – I guess the intention would be to whistle a high B exactly. It could be that being able to do so reliably (achieving the right kind of control) takes years of practice, just like in the game of chess or writing out a cure for cancer.
It’s funny how the kind of control condition that you describe is standardly taken to be a condition for intentionally doing x. And intentionality is usually taken to be a condition of responsibility.
On your view, though, wrongness and rightness have this built-in control condition too. This makes rightness and wrongness depend on what mental states and dispositions the agent has. In this sense, it is a departure from a pure OC theory even if it doesn’t make rightness and wrongness relative to knowledge of the consequences or their value. I guess I have a hard time seeing the motivation for the view. If rightness and responsibility already come apart at the stage of information, then why not go all the way to pure OC?
Also, I’m starting to worry about the modal status of securability. I take it that when we consider whether something is securable in the right way, we consider counterfactuals in which I form the intention and then see whether the action/outcome is brought about. So, which one of these modal statuses does securability need to have:
If I form the given intention to play the hole-in-one in the actual world, then:
a) in all worlds in which I do this I succeed in making the hole-in-one.
b) in the closest possible worlds I make the hole-in-one. or
c) in only the actual world I make the hole-in-one [there are other similar worlds in which I form the intention but do not make the hole-in-one].
If a) is right, then I don’t think pretty much anything is securable (even trivial actions – intending to move your arm – can go wrong). If c) is right, then punching the ten-digit code can on occasion be securable (it’s odd to think that failing to do so would be necessary). So, I guess something like b) must be the case. But, even then, it will be tricky to argue that the prolonged actions will not be securable.

19. Doug Portmore says:

Nick,
I don’t think that it’s difficulty that’s driving our intuitions. Consider the following pair of cases. Suppose that Smith and Jones are both equally bad at golf. Assume that hitting a hole-in-one is equally difficult for each of them. Suppose that an evil demon will kill one person for each stroke they take to sink the ball on the 18th hole. Assume that the evil demon will kill millions if they refuse to play along. Each has the choice to lay up or shoot for the green, hoping to get a hole-in-one but risking ending up in the bunker. Now, here are the two cases:
Golf 1: No matter what Smith intends to do, he will not hit a hole-in-one.
Golf 2: If Jones intends to try to hit a hole-in-one, he’ll succeed.
I find it much more plausible to suppose that Jones is required to hit a hole-in-one than to suppose that Smith is required to hit a hole-in-one. But it is, we’re supposing, equally difficult for each of them to hit a hole-in-one in the sense of ‘difficult’ that I take you to be employing.
What do you think?
By the way, what are the diachronic safe cases and synchronic safe cases? Can you spell them out for me?

20. Doug Portmore says:

Jussi,
You didn’t answer the following question: Is it true that there is some intention (that I could form) such that, were I to form that intention, I would whistle at the right pitch? If you answer ‘yes’, I don’t see why it’s implausible to suppose that you’re obligated to deactivate the bomb by whistling at that pitch.
In any case, the motivation for the view is the idea that S can’t be morally required, as of T-1, to do x at T-2 if there is nothing that S can do or intend to do at T-1 that will ensure that she does x at T-2. This seems pretty intuitive to me. Yet it seems to me that Wiland and Howard-Snyder are committed to its denial.
Regarding counterfactuals, what do you want to say in ordinary cases? Suppose that I could now at 10 AM form the intention to go to the Student Union at noon for lunch. I’m assuming that there is some determinate fact about what would happen if I formed that intention. Let’s suppose the fact is that I would then go to the Student Union at noon for lunch. So how would you fill in the following blank: “If now at 10 AM I form the intention to go to the Student Union at noon for lunch, then…”

21. Jussi Suikkanen says:

Doug,
a) there is: the intention to whistle a high B note. The problem here is the ‘could’ in your parenthetical. If it’s a ‘could’ as in I could easily form the intention in my current state, then you already have a solution to the Wiland/Howard-Snyder cases. If it’s a ‘could’ as in an intention I could form with much practice, then your solution to the chess/cure-for-cancer cases becomes problematic.
Your second paragraph answers the question about counterfactuals. Note that there’s ‘ensure’ on the right-hand side. This means that you are committed to the strong modal claim a). In order to be required to phi, it must be that in all worlds in which you intend to phi, you phi. Otherwise you couldn’t ensure the action.
I think this will be far too strong. On this view, we couldn’t be required to do anything. It is possible that at any given moment I become paralysed. So, nothing I ever intend will ensure that the intention will be successfully carried out.
Here’s another way to put this. Let’s assume that counterfactual determinism is true. Take two cases:
Jack: Jack intends at T-0 to dial 205-513-9437. There’s a fact of the matter what happens next – he changes his mind and punches in ‘6’ instead of ‘3’.
I take it that you don’t want to say that everyone who intends to dial 205-513-9437 at T-0 will change their mind. So, the following scenario must be possible:
Jill: Jill intends at T-0 to dial 205-513-9437. There’s a fact of the matter what happens next – Jill doesn’t change her mind, she dials 205-513-9437.
Now, the question is, did Jill secure the bomb going off? If you say yes, then your response won’t work in all cases. If you say no, then the other worlds in which Jill intends to dial 205-513-9437 are relevant and we either have to choose a) (implausible) or b) (tricky).

22. I’m having trouble understanding the notion of securability.
Doug writes:
“…Nevertheless, my eating chicken and asparagus at 6 p.m. is not, as of 2 p.m., securable by me, for, as we’re supposing, no matter what I intend to do now at 2 p.m., I will not end up eating grilled chicken and asparagus at 6 p.m….”
(I find this example odd; but perhaps I’ve never had such a strong compulsion to eat pizza, or maybe it’s just a metaphor, so I’ll set that aside.)
I am then puzzled by this:
“I’m assuming, then, that even if you were to pick the correct ten-digit number at random and intend to dial it and so intend at T-0 to dial 205-513-9437, you would still fail to dial that number, for you would, say, half-way through change your mind and decide to dial a ‘6’ instead of ‘3’”
What motivates this assumption? Do I irresistibly crave to dial a ‘6’ in the same way I crave pizza? Or perhaps the thought is that I’m just flaky, and will change my mind midstream. But if that’s true, then there is a chance that if I intend to dial 205-516-9437, I’ll actually dial 205-513-9437, and thus open the safe. Flakiness cuts both ways. I realize that securability is defined such that V-ing is securable only if three things are true: 1) I can intend to V, 2) I can V, and 3) I will V if I intend to V. But I still wonder why OC should hold the relevant option ranges hostage to securability.
I think the issue really turns upon action individuation. Is writing the word “action” one action? Six? Eight? (Remember to dot your i’s and cross your t’s.) Thirteen? (Remember to leave space between each letter.)
Is commenting on a blog post an action? Only once we understand how to answer this question will we be in a position to answer questions about how to evaluate actions that have parts. If Anton Ford is right, all actions have parts, and the question is inescapable. Even “eating a pizza” or “turning the dial of the safe to 24” has parts.

23. Jussi,
I’m probably not getting your point. Sorry. So let me try this:
X is, as of T-i, securable by S if and only if there is some intention, I (where I is a token mental state), such that: (1) X would obtain if S were to form I, and (2) S is not at T-i in a state that’s incompatible with S’s forming I. Since I is a token mental state, we don’t need to talk about when it is formed, but I’m assuming that I is a token mental state that occurs, if it occurs at all, at T-i or, perhaps, the instant after T-i.
So, now, I take it that we should evaluate the counterfactual stated in (1) according to whatever the best account of counterfactuals is. Suppose that the closest possible worlds account is the best one. In that case, we should evaluate (1) by asking whether, in the closest possible worlds in which S forms I, X obtains. So what’s your worry about this view?
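Schematically – and merely as a rough gloss on the definition above, in notation of my own (with ‘□→’ standing for the counterfactual conditional) – the proposal amounts to this:

```latex
% Securability (schematic rendering; the notation is mine, not a new proposal):
% X is, as of T_i, securable by S iff there is a token intention I such that
% (1) were S to form I, X would obtain, and
% (2) S's state at T_i is not incompatible with S's forming I.
\mathrm{Securable}_{T_i}(S, X) \iff
  \exists I \, \Big[ \big( \mathrm{Form}(S, I) \;\Box\!\!\rightarrow\; X \big)
  \;\wedge\; \neg\,\mathrm{Incompat}\big( \mathrm{State}(S, T_i),\, I \big) \Big]
```

On the closest-worlds reading, the first conjunct holds just in case X obtains in the closest possible worlds in which S forms I.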
You also write:
Jill: Jill intends at T-0 to dial 205-513-9437. There’s a fact of the matter about what happens next – Jill doesn’t change her mind; she dials 205-513-9437.
Now, the question is, did Jill secure the bomb’s not going off?

Are we assuming both (a) that there is some token intention such that, were Jill to form that intention at T-0, she would dial 205-513-9437 by T-11, and (b) that the bomb will not go off iff she dials 205-513-9437 by T-11? In that case, the bomb’s not going off is, as of T-0, securable by Jill. But I take it that you then worry that this won’t work for all cases. Which cases won’t it work for?

24. Eric,
What’s odd about the example? Is it not perfectly coherent? In any case, it’s certainly not meant to be a metaphor.
You write:

I am then puzzled by this:
“I’m assuming, then, that even if you were to pick the correct ten-digit number at random and intend to dial it and so intend at T-0 to dial 205-513-9437, you would still fail to dial that number, for you would, say, half-way through change your mind and decide to dial a ‘6’ instead of ‘3’”
What motivates this assumption?

It’s just an assumption. Nothing motivates it except an interest in finding out what follows about what a person should do in such a case.
Do I irresistibly crave to dial a ‘6’ in the same way I crave pizza?
Sure. Why not? I was actually thinking that it’s just a fact about your psychology that you would think half-way through that there can’t be two 3’s in the combination and so change your mind. But we could stipulate that I have an irresistible craving to dial a 6 that will overcome me halfway through.
But I still wonder why OC should hold the relevant range of options hostage to securability.
I’m confused by this. My view is that OC should hold that the relevant alternatives are all and only those that are securable by the agent, whereas you seem to assume in your paper that OC is committed to the relevant alternatives being all and only those that are personally possible. After all, you talk about my being able to write the next King Lear in the sense that it’s personally possible for me to do so even though it’s really doubtful that this is securable by me. I doubt that there is any intention that I could form now that would result in my writing the next King Lear.
I think the issue really turns upon action individuation.
How so? I’m interested in whether there is any intention that you could form at T-1 (that is, any token mental state of the intention kind occurring at T-1) such that, if you formed that intention, you would open the safe. What does this have to do with act individuation?

Doug,
Re: the golf example, I was not proposing a view according to which what is securable for you is never relevant. I agree that the fact that one person, but not the other, is able to make a hole in one is relevant here.
I was proposing something much, much weaker: that when you have cases where two individuals are capable of doing all and only the same things, the fact that one of them doesn’t know how to do the best thing sometimes influences our judgments about whether or not those individuals are required to do a given thing. (I wasn’t even saying we were right to have our judgments influenced in this way, or that this always happens. But I’d bet it pretty much does always happen if it’s sufficiently tough to know what has the best consequences.)
Now, suppose you had a variant of your pair of cases where each agent can do the same things. Tom is absolutely terrible at golf. But there is a certain way he could move his club, such that if he intended to move it in that way, he’d score a hole in one. Unfortunately for Tom, he hasn’t a clue what this way is. Melissa is excellent at golf. She can score a hole in one whenever she wants, including on this occasion.
Golf 3: Tom needs to score a hole in one to save the world.
Golf 4: Melissa needs to score a hole in one to save the world.
Intuitively, it is much more plausible that Melissa is obligated to score a hole in one than it is that Tom is obligated to score a hole in one. This is as my claim would have it. You can make similar remarks about chess, high-pitched notes, etc.
Safes: The diachronic safe case is the case where you have to enter 10 digits one at a time to save the world. The synchronic safe case is the case where there are loads of buttons and you just have to press one to save the world.

26. Nick,
Maybe we’re not too far apart, then. My point is rather limited. It is only to show that I’m not, as of now, objectively obligated, say, to write the next supremely important document if my writing that document is not, as of now, securable by me.
I even agree with you that it is much more plausible to suppose that Melissa is obligated to score a hole in one than it is to suppose that Tom is obligated to score a hole in one if we’re talking about subjective obligations. If we’re talking about objective obligations, however, I’m not so sure. If there is a certain way that Tom could move his club such as to hit a hole-in-one, then I’m inclined to say that he is objectively obligated to do so even if he has no idea which way he should move the club.
Would you admit that if you could save the world by pressing the red button as opposed to the blue button, then you would be objectively required to press the red button even if you don’t know which of the two button pressings would save the world? And if so, what’s the difference between this case and the golf case except that there are a lot more different ways to move your club than there are different buttons to push? But I don’t see why the number of different possibilities should change things. Do you?

27. Nick,
Another thought: Does the difficulty of opening a safe in the synchronic-type safe case increase as the number of possible buttons to choose from increases? Suppose that there are only two buttons to choose from: A and B. Pressing A will open the safe and save the world. Pressing B will result in the safe not being opened in time, thereby resulting in the destruction of the world. Do you deny that you objectively ought to press button A? Might you not ask God: Which button should I push? Surely, this ‘should’ is the should of the objective ‘ought’. If there are a billion buttons to select from, is it not still the case that you’re objectively required to push the one of the billion that would open the safe? And is it not the case that it’s much less difficult to save the world in the two-button case than in the billion-button case?
If you think that you’re objectively required to push button A in the two-button case but not in the billion-button case, I would like to know at what number of buttons, n, the switch occurs, and why things change when there are n possibilities.

28. Jussi Suikkanen says:

Thanks Doug. The closest possible worlds account is the most plausible one. The worry is that, if this is the account, then the original counterexample goes through. It might be that, for most people in most cases, when they form an intention to punch in a ten-digit code, then in the worlds closest to the actual one they manage to do so without changing their minds. In this case, the complex action is securable by them. And it’s slightly odd that the rightness and wrongness of punching in the codes would depend on how feeble-minded you are. This is the worry about the Jill case if you think her action is securable. You have warded off the objection when it comes to Jack but not Jill.

29. Doug,
I have to confess that I’ve also advanced a version of the objection you’re criticizing. I also have to confess that I’m initially perplexed by your response to it. Let me ask a couple of questions so that I can get clearer about this:
1. Suppose that I grant that there could be people whose agency is diminished insofar as they suffer from irresistible compulsions and that these compulsions make it impossible for them to perform certain actions even though they are physically capable of doing so. So for agents with these sorts of compulsions, the right action in any situation is the action with the best consequences that their compulsion will allow them to perform. Thus if an agent has some weird compulsion to make 6 the sixth number that she enters into the safe, then it is not wrong for her to fail to open the safe. How does that weaken the objection, though, if I can still show that OC has unacceptable implications when we imagine a normal agent in that situation? After all, that an agent is ignorant of the correct combination in no way entails that if she guesses correctly and forms an intention to enter the right number then she will not be able to follow through on that intention.
2. Following up on Eric’s comment about the individuation of actions: For the agent that you’re imagining in this situation, could she form the intention at T1 to enter 2, and her doing so be securable? And then could she form the intention at T2 to enter 0, and would that be securable? And so on for each individual digit?

30. Doug Portmore says:

Jussi,
it’s slightly odd that the rightness and wrongness of punching in the codes would depend on how feeble-minded you are.
It doesn’t seem odd to me. After all, we’re supposing that your being feeble-minded makes it the case that there is nothing that you can do or intend to do now to ensure that you punch in the correct ten-digit code. It seems unreasonable to hold that you are, as of now, morally required to punch in the correct code when there is no plan, intention, or resolution that you could form now that would ensure that you do so. In this sort of case, you presently have no more control over whether you will punch in the correct ten-digit code than you do over whether someone else, who is not under your influence, will punch it in. Maybe I’m crazy, but this seems like a relevant difference between the Jack case and the Jill case. Moreover, it undermines Wiland’s claim that, for virtually everyone, typing out the next supremely important document (the next King Lear, for example) is a relevant alternative. This is false if the relevant alternatives are those that are securable by the agent. It’s simply not true of most everyone that they could form some intention now that would ensure that they would write some long document like King Lear.
Now, is it true that there will still be cases where doing something extremely difficult (such as opening a safe or hitting a hole-in-one) is securable and so where OC still implies that it’s objectively required? The answer is yes. But I take these cases to be less problematic. See my response to Nick above.

31. Doug Portmore says:

Dear All,
So I guess that I didn’t do a very good job explaining in the post the type of case that I’m worried about and that I take myself to be responding to. That’s in part because it wasn’t until I read all these excellent comments that it became more crystallized in my own mind. So, for those of you who are still patient enough to hear me out, I hope that the following is clearer.
First, here’s the intuition that’s driving my response.
INTUITION: It is implausible to suppose that S is, as of T-i, morally obligated to do x at T-j (T-j being later than T-i) if there is no plan, intention, or resolution that S could form at T-i that would ensure that she does x at T-j. Or, in other words, it is implausible to suppose that S is, as of T-i, morally obligated to do x at T-j (T-j being later than T-i) if S’s doing x at T-j is not, as of T-i, securable by S.
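Put schematically – and this is merely a rendering of the prose in notation of my own, not a more precise proposal – INTUITION says that securability at T-i is a necessary condition on obligation as of T-i:

```latex
% INTUITION, rendered as a necessary condition on obligation:
% S is, as of T_i, morally obligated to do x at T_j (with T_i earlier than T_j)
% only if x at T_j is, as of T_i, securable by S.
\mathrm{Obl}_{T_i}(S,\, x \text{ at } T_j) \;\rightarrow\;
  \mathrm{Securable}_{T_i}(S,\, x \text{ at } T_j), \qquad T_i < T_j
```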
And I take it that INTUITION is, in part, what makes many of Wiland’s examples so forceful, for many of Wiland’s examples concern cases where it’s implausible to suppose that the relevant courses of action are securable by the agent. Take, for instance, Wiland’s example of being morally required to write the next Great American Novel. I take it that it is very implausible to suppose that I and other non-literary types could, as of now, secure our writing of the next Great American Novel. Of course, Wiland rightly notes that it’s personally possible for me to write the next Great American Novel in that there is a series of keystrokes, each of which is, as of each subsequent time, securable by me given that I’ve made the previous keystroke(s), and this series of keystrokes would constitute my having written the next Great American Novel. But, nevertheless, given the number of keystrokes required and the complexity of them, it’s implausible to suppose that there is any intention that I, given my limited cognitive abilities, could form in my present state that would ensure that I write the next Great American Novel. And I think that the same thing can be said about Howard-Snyder’s beating Karpov at chess example. If OC did imply that I’m obligated to write the next Great American Novel or beat Karpov at chess, then it would conflict with INTUITION. But I think that it was a mistake for Wiland and Howard-Snyder not to keep distinct what’s securable versus what’s personally possible.
Now, admittedly, there are still a number of cases in which OC will imply that an agent is, as of T-i, morally obligated to perform some incredibly difficult act that is, nevertheless, securable by the agent. But these cases don’t violate INTUITION. And I think that the proponent of OC can just appeal to whatever solution she likes to the standard epistemic objection. For instance, she could point out that she is talking about objective rightness and not subjective rightness. And then she could pose the following sorts of questions: Suppose that there is a safe that you open by pressing one of the available buttons and that there are only two buttons to choose from: A and B. Pressing A will open the safe and save the world. Pressing B will result in the safe not being opened in time, thereby resulting in the destruction of the world. Do you deny that you objectively ought to press button A? Might you not ask God: Which button should I push? Surely, this ‘should’ is the should of the objective ‘ought’. If there are a billion buttons to select from, is it not still the case that you’re objectively required to push the one of the billion that would open the safe? And is it not the case that it’s much less difficult to save the world in the two-button case than in the billion-button case? If you think that you’re objectively required to push button A in the two-button case but not in the billion-button case, I would like to know at what number of buttons, n, the switch occurs, and why things change when there are n possibilities.
I would be interested in hearing a response to these questions from any of those who have been pushing these sorts of cases as putative counterexamples to OC.

32. Doug Portmore says:

Dale,
How does that weaken the objection, though, if I can still show that OC has unacceptable implications when we imagine a normal agent in that situation?
Well, as I see it, part of the force of the objection in the hands of people like Wiland and Howard-Snyder is that they employ examples that conflict with INTUITION — see the comment directly above this one if you have the patience. If I’m right about this being where part of the force of the objection comes from, then my response does weaken the objection. Admittedly, there will still be cases where you and others will claim that OC has unacceptable implications, but see my comment directly above about these cases.
For the agent that you’re imagining in this situation, could she form the intention at T1 to enter 2, and her doing so be securable?
Yes, your dialing a 2 at T1 for the first digit is, as of T0, securable by you.
And then could she form the intention at T2 to enter 0, and would that be securable?
Yes, having dialed a 2 at T1 for the first digit, your dialing a 0 at T2 for the second digit is, as of T1, securable by you.
And so on for each individual digit?
Yes, this is just to say that your dialing 205-513-9437 by T11 is, as of T0, personally possible for you. But note that this doesn’t mean that your dialing 205-513-9437 by T11 is, as of T0, securable by you. And see INTUITION above for why I think that this is relevant. If it’s not, as of T0, securable by you, then you have, as of T0, no more control over whether you will dial in the correct subsequent digits than you do over whether some person over whom you have no influence does.

33. Doug,
Let me get your take on an example that I’ve used in print somewhere, either in the Ratio paper that Eric cites or something else. I assume that there are several (at least) people who die every day because they are asleep and unaware of some looming danger – the house’s catching on fire being the most likely threat. Most of these people could be saved if their phone were to ring in time to wake them. Since I have access to a phone, it must be true that if I were to dial the right numbers at the right times I could save several lives every day but am failing to do so. So I am presumably performing multiple actions every day that OC entails are seriously morally wrong (insofar as the consequences of the actions I’m doing instead, like grading exams, are far from optimific). This type of scenario, it seems to me, poses serious problems for OC. For present purposes, though, I just want to see whether you agree with me that this scenario does not conflict with INTUITION.

34. Doug Portmore says:

Dale,
I agree that this scenario does not conflict with INTUITION. But if we’re talking about objective rightness (which has little to do with blameworthiness or with how we should deliberate about what to do), then I don’t see the force that the objection is supposed to have. I can imagine someone waking up in the middle of the night and asking God: “What should I do at this very moment?” And it seems entirely plausible to me to suggest that the right answer is to dial some phone number, where dialing that phone number would wake up some headmaster in a burning orphanage, leading him to detect the smoke and to get all two hundred orphans outdoors to safety in time.
If I knew that your dialing such a number would have such good consequences, and I woke you up to tell you that this was so and that you should dial the number, would you dissent? (Ignore the issue of whether I should be in your house while you’re asleep and why I don’t just dial the number myself.)
Of course, I understand that you probably think that an adequate moral theory must be action-guiding, but it was either Wiland or Howard-Snyder (or both) who explicitly said that the proponent of OC can just get around the standard epistemic objection by denying that OC is supposed to be action-guiding. They then give examples that, unlike yours, conflict with INTUITION. The reason that I think that these examples have such force, and that the move to deny that OC is supposed to be action-guiding is ineffectual against them, is that they conflict with INTUITION.

Doug,
I agree that it is difficult, and probably impossible, to find a principled distinction between difficult whistling/golf cases and 2-button world-saving cases. Likewise, it is difficult, and probably impossible, to find a principled distinction between 2-button cases and n-button cases, for very large n. I therefore find it plausible that people who think that there is a sense in which I am obligated to push the world-saving button in 2-button cases should also think I am obligated to save the world in n-button cases and difficult whistling/golf cases. However, I modus tollens that and conclude that in all of these cases, you are not obligated to do the world-saving things. (As I said, I don’t find it intuitive that in the 2-button case you’re obligated to save the world, but I said I’d play along.)
If there is a distinction, I would think it is because knowing how is not a kind of knowing that. If you believed this, you might think that in golf/whistling cases, you have full propositional information but still don’t know how to do what would be best, and that this helps explain your lack of obligation. In contrast, in n-button cases, you have all the relevant know-how; you just lack some information. This, someone might believe, makes it plausible that in n-button cases you’re objectively obligated to save the world, but not in whistling/golf cases. In effect, this option specifies a non-epistemic sense of difficulty and uses it to explain the difference between 2-button cases and n-button cases. I don’t know what to make of this view.
I am prepared to grant that your securability requirement helps in the Karpov case and the first version of the safe case that you describe. However, I think it isn’t a good idea to lead with The Safe, because your rider – “Let me stipulate, though, that given your ignorance of the correct combination you would in fact fail to open the safe by T-11 no matter what you intend at T-0 to do.” – doesn’t seem to be essential to the implausibility of the claim that you are obligated to open the safe. So I will agree that the securability requirement helps a little (in that it excludes some weird cases). If that’s all you wanted to show, then you won by my lights.
As for what’s going on when you ask God what to do, my view is that practical “ought” claims are always relative to some probabilities. I don’t have a worked out view about whose probabilities a particular use of “ought” is relative to. But when you ask God what button to press, I think you’re asking about what ought to be done relative to God’s probabilities. (I don’t maintain that whenever you ask someone what to do, that person’s probabilities are relevant.) I don’t believe that there is an additional sense of “ought” that is being used in these examples. (I realize that this view may be idiosyncratic and is not obligatory.)

36. Doug,
In a nutshell, what I’ve argued previously is that there is something problematic about any moral theory according to which any moral agents who are recognizably human will perform very many actions every day that are objectively wrong, especially when many of them are seriously morally wrong. And I think that this is true on OC, even if cases that violate INTUITION are excluded (not that I’m conceding that they should be). My argument for this is not as airtight as I would like it to be at present, I admit, and I think you’d reject out of hand the version that I find most persuasive because you and I differ over the relation between objective wrongness and blameworthiness. But that’s all a different discussion anyway.

37. Doug,
What work is the personally possible/securable distinction doing in the safe-opening case? I can see why you might deploy it in the case of really complicated intentions like Great American Novels or chess games. (Ultimately, however, I think you don’t need to.) But dialing safe combinations (or winning lottery numbers) is pretty easy. Why don’t you just say that you are objectively required to dial 205-513-9437 but subjectively permitted not to?

38. Doug Portmore says:

Nick,
You’re right that it was a mistake to lead with The Safe. Live and learn. And I think that I understand where you’re coming from, and I think that you understand where I’m coming from. One man’s modus ponens is another man’s modus tollens. I think that this is one of those places where reasonable people can disagree – at least, absent further argument that would rationally compel them to go one way rather than another.
Dale,
I accept that my appeal to securability doesn’t work against your argument. It only works against whatever further work cases like Wiland’s writing the Next Great American Novel are supposed to be doing with respect to making the case against OC. And you’re right that the source of our disagreement about your argument is a different discussion altogether.

39. Doug Portmore says:

Heath,
Why don’t you just say that you are objectively required to dial 205-513-9437 but subjectively permitted not to?
In The Safe, your dialing 205-513-9437 is not, as of T-0, securable by you. So saying that you are, as of T-0, objectively required to dial 205-513-9437 conflicts with INTUITION (the moral obligations in INTUITION are, by the way, meant to be objective obligations). Because I have the INTUITION, I want to deny that you are, as of T-0, objectively required to dial 205-513-9437 in The Safe. I think that I’ve shown that I can accept OC and deny that. Now, if you want to consider a different case, say, Safe 2, where your dialing 205-513-9437 is, as of T-0, securable by you, then I would say that in that case your dialing 205-513-9437 is, as of T-0, objectively (but not subjectively) morally obligatory. As Nick pointed out, though, it was really a mistake on my part to lead off with The Safe. This understandably misled people as to what I was up to. The move that I’m making is really targeted only against cases like the Great American Novel case and the Beating Karpov at Chess case.

40. Alex Guerrero says:

I have the intuition that INTUITION is too strong a condition on what we are morally obligated to do—we are morally obligated to do many things at time T1 even if it is implausible that those things are ‘securable’ by us at T1, contrary to what INTUITION says.
Here it is:
INTUITION: It is implausible to suppose that S is, as of T-i, morally obligated to do x at T-j (T-j being later than T-i) if there is no plan, intention, or resolution that S could form at T-i that would ensure that she does x at T-j. Or, in other words, it is implausible to suppose that S is, as of T-i, morally obligated to do x at T-j (T-j being later than T-i) if S’s doing x at T-j is not, as of T-i, securable by S.
Imagine someone, Bill, who has an emotionally fraught relationship with his dying mother. The deterioration of their relationship is mostly his fault. He has a terrible temper, is selfish and prone to narcissism, and had a bad meth habit in the 1990s that was trying on everyone in his family, including his mother. Also, he finds his mother’s ‘small-town’ tastes cloying and inhibiting, and is prone to interpret her every remark as a criticism of his lifestyle. He’s sober now, but hasn’t reconnected with his family, and he finds family interactions provoke great and destabilizing anxiety in him. It is May 5, 2010. He hasn’t seen her in two years, and he is going to see her (for her final fifteen minutes, as it turns out) on May 15, 2010, when he flies across the country.
I find it compelling that Bill is, as of May 5, 2010, morally obligated to keep his temper in check, to treat his mother with kindness and compassion (and love and warmth) when he sees her, and to refrain from petty bickering and sniping, no matter what slights or critical remarks he feels that she’s made. But I also find it plausible that there is no plan, intention, or resolution that Bill could form on May 5, 2010, that would *ensure* that he acts in these ways on May 15, 2010.
He might adopt various strategies, attend a yoga class or two, look at old family photographs, take some time off from work, acquire some heavy psycho-pharmaceuticals, etc., and he might form various intentions (be good to your mother, when you see her, count to ten and focus on the big picture, etc.). But I don’t think doing any of that will *ensure* that he acts as he is morally obligated to when the time comes.
This is true both because of who Bill is, what skills and capacities he has – it is not something that he can do – and because of the nature of plans and intentions.
And note that this is compatible with it being the case that Bill could, and indeed might, act as he is morally obligated to when the time comes.
(Another case: imagine your pizza/chicken case, but say something morally significant turns on whether you eat chicken or pizza. Say the pizza has ham on it, and the chicken is actually vegetarian ‘chik’n’, and one is morally obligated to be vegetarian. It seems that you are morally obligated, at 2pm, to eat the chik’n and asparagus at 6pm (if that is your only other option), even though doing so is not ‘securable’ at 2pm, at least according to your presentation of the case.)
Doing what one is obligated to do is, or can be, very difficult.
What am I missing?

41. Doug Portmore says:

Alex,
If it’s true that there is no plan, intention, or resolution that Bill could, as of 5/5/10, form such as to prevent himself from treating his mother improperly on his visit, then I would say that Bill is, as of 5/5/10, morally obligated to cancel his planned trip to visit his mother. (I’m assuming that it’s better to cancel his trip than to treat his mother improperly.)
Let me try to pump your intuitions in this matter by upping the stakes. Suppose that what will happen if Bill visits his mom is that he will kill her. And assume that there is nothing that he can do or intend to do as of 5/5/10 that will ensure that he meets with her but doesn’t kill her. In that case, shouldn’t he cancel his trip? What’s the point of saying that he is, as of 5/5/10, morally obligated to meet her and not kill her if there is no plan, intention, or resolution that Bill could, as of 5/5/10, form such as to ensure that he meets her and doesn’t kill her?
Yes, doing what one is obligated to do can be difficult, but if you are obligated to do something at a given time, I think that there should be something (some action or intention that you can perform/form at that time) that will ensure that you do what you’re obligated to do. If there is nothing that you can do at T-1 to affect whether or not you will do x at T-2, what’s the point of claiming that you are, as of T-1, obligated to do x at T-2? How is this obligation that is indexed to T-1 supposed to guide you at T-1? If you can’t control your future behavior any more than you can control someone else’s future behavior, I don’t see the point of saying that you are, as of the present, obligated to do something in the future.
In any case, we’re now moving into a debate over actualism versus possibilism. I think that securitism is better than both views although it leans closer to actualism than to possibilism. You seem to have possibilist leanings. If you’re interested, I argue for securitism (and against both actualism and possibilism) in sections 6.3 and 7.1 of my manuscript Commonsense Consequentialism. You need a password to view Chapter 7, but I’m happy to send it to you if you’re interested. Just email me.
I don’t know how productive it would be to carry on the discussion about INTUITION here. I agree that possibilists will reject INTUITION. So I see where you’re coming from, but I think that there are powerful arguments against possibilism that I make in my manuscript. Since they are complicated and involve graphs, I’d rather not try to rehearse them here. But if you read those arguments and want to discuss them, I would be happy to do so. And that goes for anyone else interested in this topic.

42. Doug,
As I’m understanding this, you take the following principle to be a consequence of the conjunction of INTUITION and OC:

If S’s performance of a series of actions between T-i and T-j would be optimific, but there is no intention, plan, or resolution that S can form at T-i that will ensure that S will complete this series once it is started, then S is under no obligation to perform the first action in that series at T-i (unless it is also the first action in a series of actions whose successful completion S can ensure at T-i).

I see the appeal of the principle, but I wonder about this sort of case: Snidely is going to tie Penelope to the railroad tracks. Dudley will have an opportunity to free her at T-j, but he will be able to do so only if he has first bought a pocket knife at T-i. It would be optimific for him to buy the knife and then free Penelope, but Dudley is irresolute enough that at T-i there is no way for him to make up his mind to free Penelope when the time comes so firmly as to ensure that he will do it. Since it would not be optimific for him to buy the knife unless he uses it to rescue Penelope, when he chooses not to buy it he does no wrong. Then, when T-j comes, it is not wrong for him not to free Penelope, for he cannot do so without the knife and ‘ought’ implies ‘can.’ Yet it is counterintuitive to me that Dudley has done no wrong here. This leads me to a conclusion similar to Alex’s, which is that INTUITION is too strong.

43. Doug Portmore says:

Dale,
First, I think that X is optimific if and only if it produces better consequences than any alternative to X. Since I think that the relevant alternatives to any given X are all and only those that are securable by the agent, I would deny that a series of actions that is not securable by an agent could be optimific.
Regarding your case: I’m committed only to the view that Dudley does no wrong in not buying the knife. I’m not committed, and would in fact deny, that Dudley has done no wrong. His moral failing lies in his failing to develop a good character and thus allowing himself to become such an irresolute character.
But, anyway, I concede that people sometimes have these possibilist leanings in cases like those that you and Alex suggest. I used to be one of them, but, as I’ve documented in earlier posts, I’ve been converted.
In the hopes of pumping your anti-possibilist intuitions, consider Alex’s case in which Bill will kill his mother if he meets with her. Bill may be responsible for being the sort of person that would kill his mother if he met with her. But if he is the type of person that would kill her if he met her, he ought not to meet with her even if he, say, promised to meet with her. (I’m assuming that it’s better for Bill to break his promise to meet with her than to kill her.) Another famous case is that of Professor Procrastinate. If you’re the type of person that would end up procrastinating and never writing the book review, then you shouldn’t agree to write the book review.

44. Doug,
The pizza example confused me because the safe example and the cure example both involve ignorance about how to do something, whereas the pizza example instead involves an irresistible desire. The two kinds of cases seem different, but perhaps in the end they should be treated the same way.
In any case, your formulation of INTUITION clarifies things. It says:
INTUITION: It is implausible to suppose that S is, as of T-i, morally obligated to do x at T-j (T-j being later than T-i) if there is no plan, intention, or resolution that S could form at T-i that would ensure that she does x at T-j. Or, in other words, it is implausible to suppose that S is, as of T-i, morally obligated to do x at T-j (T-j being later than T-i) if S’s doing x at T-j is not, as of T-i, securable by S.
I actually am sympathetic to the spirit of INTUITION, if we can set aside the modal worries about what “ensure” means, as Jussi notes. We must also not require that an intention at T-i to do x be sufficient for doing x at T-j. After all, you actually have to DO something at T-j in order to do x at T-j! So, sufficiency would be too stringent a requirement. But I think you are right to insist that doing x is a relevant option only if there’s something you could intend to do that could lead to doing x.
In other words, I think I can accept something like INTUITION, and yet maintain that if OC is true, you are almost always required to write documents with great consequences (yes, even better than your great blog post!). How can I maintain that? It’s easiest to see how if we take a historical example. Let me oversimplify one for purposes of illustration.
Suppose that in 1905 Einstein wrote and published “E=MC2”. Suppose that this has had very very good consequences. In 1906, some physics undergraduate student (Marie) thinks to herself: “Well, I could have written that!”
I want to say that Marie is correct. She could have written and published “E=MC2”. Is there any intention she could have formed in 1905 that would have led her to do so? Yes: the intention to write “E=MC2” and publish it. This is all compatible with acknowledging that in 1905 Marie was not capable of understanding what E=MC2 means.
What’s true of Marie is true of all of us. There are important documents that we would write if we intended to write (and we shouldn’t read what follows de dicto) the sequence of characters that constitutes them. So, I don’t see any important difference between the safecracking examples and the documents examples, except for the fact that we almost always are able to write important documents, whereas we are never actually faced with safes we can crack. (Actually, given the ubiquity of opportunities to hack into computers, perhaps I should rethink that last point.)

45. Alex Guerrero says:

Just a couple of things.
First, I took my case to be one in which it would be morally better for Bill to see his mother and behave inappropriately than to not go at all. His mother is dying! One has a moral obligation to go to see one’s dying mother (unless perhaps one is a psychopath, and will kill her). If that isn’t convincing, just make the case so that Bill has no choice as to whether or not he will see his mother–he definitely will.
Second, on that note, the case you offer is not a mere case of ‘upping the stakes.’ You changed the case so that if Bill goes to see his mom then he will kill her. In my case, it is not at all clear that if he goes to see his mom he will act inappropriately. He might, but he really might not.
The point of the case was to put pressure on the requirement in INTUITION that, to have an obligation now to do something at a later point in time, it must be the case that there is something that we can do now that will *ensure* that we do as we ought to at the later time. There are certainly things that Bill could do now that would make it more likely that he will behave as he ought to, while still not *ensuring* that he will behave as he ought to. I take it that he has a moral obligation to do those things now (and to refrain from doing things that would have the opposite effect), and that this moral obligation stems from his moral obligation to behave appropriately when he sees his mother.
You ask, “If there is nothing that you can do at T-1 to affect whether or not you will do x at T-2, what’s the point of claiming that you are, as of T-1, obligated to do x at T-2?”
This suggests that you think that if at T1 you can’t do anything that will *ensure* that you will do x at T2, then there is nothing you can do at T1 to *affect* whether or not you will do x at T2. But that’s not right. In particular, you might take steps at T1 that make it more likely that you will do x at T2, without it being the case that those steps ensure that you will do x at T2.
What do you think of this claim:
INTUITION(2): It is implausible to suppose that S is, as of T-i, morally obligated to do x at T-j (T-j being later than T-i) if there is no plan, intention, or resolution that S could form at T-i that would increase the probability that she does x at T-j.
This seems more plausible to me (though I’m not saying I endorse it), and I think it captures the ‘spirit’ of your intuition, though I could be wrong about that. Also, it makes evident the ‘point’ of claiming that someone is obligated, as of T-i, to do x at T-j.
But I will have a look at the book…

46. Doug Portmore says:

Eric,
I would think that the supremely important document would not be a brief formula such as “E=MC2,” but a document that both states the formula and provides a detailed proof and/or explanation of it. Somehow I suspect that the formula would not have done much good at all if published in the local paper just by itself without any accompanying proof or explanation.
But I’m willing to concede that there are very many instances every day in which it’s true of almost all of us that, were we to write out some brief formula and email it to the right person (a scientist or mathematician who would understand its importance and explain it to the rest of us), we would be doing a lot more good than we are doing now. And I’m willing to concede that writing out such a brief formula and emailing it to, say, the chair of the physics department at Oxford is securable by most agents. So I’m willing to concede that most of us are at most moments of the day not doing what we are, as of that moment, objectively morally required to do. This is, I think, compatible with denying that we almost never act subjectively rightly.
Are you willing to concede, though, that many of your examples don’t work if we limit the alternatives to what is securable by the agent? I have in mind all the times that you talk about writing out the next King Lear, the next Great American Novel, the next Declaration of Independence, or some journal article explaining, say, how to cure AIDS. Surely, the typing out of such documents, although personally possible, is not, as of the present moment, securable by most of us. So some of the rhetorical force of your argument relies, I think, on your use of such examples. Using them without any argument to the effect that we should take the relevant alternatives to be those that are personally possible, as opposed to those that are securable, is illicit given the prima facie plausibility of something like INTUITION. So I think that part of the force of your argument comes from the fact that we have something like INTUITION at the back of our minds and thus find it very implausible to think that we could, as of the present moment, be morally obligated (even objectively morally required) to type out such long and complicated documents.

47. Doug Portmore says:

Alex,
I took my case to be one in which it would be morally better for Bill to see his mother and behave inappropriately than to not go at all.
Okay, but I don’t have the intuition that he is, as of 5/5/10, obligated to behave appropriately toward his mother on 5/15/10 if there is no intention (or plan or resolution) such that, were he to form that intention on 5/5/10, he would refrain from treating his mother inappropriately on 5/15/10.
In my case, it is not at all clear that if he goes to see his mom he will act inappropriately. He might, but he really might not.
It’s not? If he is going to act appropriately, then his acting appropriately would be, as of 5/5/10, securable by him.
In any case, I’m assuming that counterfactual determinism is true. On this assumption, there is some determinate fact as to what he would do if he met with his mom. Could you tell me what that fact is?
Of course, we could deny counterfactual determinism and accept a probabilistic version of OC, but I really don’t see the point. It just makes things unnecessarily complicated.
There are certainly things that Bill could do now that would make it more likely that he will behave as he ought to, while still not *ensuring* that he will behave as he ought to.
Again, I’m assuming counterfactual determinism. So the objective probability that he would do something were he to form a certain intention is either 0 or 1.
By the way, when I say that “S could ensure X by forming intention I” this is just short hand for “X is, as of T-i, securable by S,” where this is defined roughly as follows:
X is, as of T-i, securable by S if and only if there is some intention, I (where I is a token mental state), such that: (1) X would obtain if S were to form I, and (2) S is not at T-i in a state that’s incompatible with S’s forming I. Since I is a token mental state, we don’t need to talk about when it is formed, but I’m assuming that I is a token mental state that occurs, if it occurs at all, at T-i or, perhaps, the instant after T-i.
In particular, you might take steps at T1 that make it more likely that you will do x at T2, without it being the case that those steps ensure that you will do x at T2.
Again, as I explicitly stated in my post, I’m assuming counterfactual determinism.
What do you think of this claim:
INTUITION(2): It is implausible to suppose that S is, as of T-i, morally obligated to do x at T-j (T-j being later than T-i) if there is no plan, intention, or resolution that S could form at T-i that would increase the probability that she does x at T-j.

I think that INTUITION(2) is more plausible than INTUITION if we assume that counterfactual determinism is false and that the laws of nature are indeterministic, but I think that INTUITION is more plausible than INTUITION(2) if we assume that counterfactual determinism is true and that the laws of nature are deterministic. I’m assuming that they’re deterministic.

48. Doug,
You grant that writing a brief formula whose significance one doesn’t understand (e.g. F=MA) is securable by most agents. But you don’t feel the same way about longer documents. I don’t see why you feel differently. Does it have something to do with the limits of keeping a long string of characters in one’s working memory? I hope not, because there are many complex actions one does intentionally even if one can’t focus on each step of the action all at once.
Maybe it would be good to come at this issue through looking at your answer to one of Dale’s questions. You grant that entering each digit of 205-513-9437 is securable, but you deny that entering the whole string “205-513-9437” is securable. You say it’s only personally possible. I don’t understand how each digit can be securable without the whole thing also being securable. And I guess that reveals that I don’t understand fully the notion of securability.
Does it have something to do with the probability of success? For example, for each digit there’s a greater than 50% chance than if I intend to enter it, I will succeed. But if the string of digits is long enough, there will be a less than 50% chance that I will succeed if I intend to enter that string. If this is on the right track, then I wonder whether 50% is the relevant threshold.

49. Doug Portmore says:

Eric,
Does it have something to do with the limits of keeping a long string of characters in one’s working memory? I hope not, because there are many complex actions one does intentionally even if one can’t focus on each step of the action all at once.
True enough. But do you think that it’s likely that there is any intention that I could form now such that, were I to form this intention now, I would type out the next King Lear? It’s possible, of course, that if I form, say, the intention to give it a shot, I might succeed. But I think that it is very, very unlikely. So it might very well be false of me that I can secure the writing of the next King Lear. Yet you seem to be claiming that OC implies that I and virtually everyone else are obligated to type out such documents as the next King Lear. That can’t be right if OC picks out the relevant alternatives as those that are securable as opposed to personally possible.
You grant that entering each digit of 205-513-9437 is securable, but you deny that entering the whole string “205-513-9437” is securable. You say it’s only personally possible. I don’t understand how each digit can be securable without the whole thing also being securable.
Securability isn’t transitive, because, like one’s abilities, what one can secure changes over time. Do these definitions help?
A set of actions, αj, beginning at tj is, as of ti (ti ≤ tj), securable by S if and only if there is a set of actions, αi (αi being a subset of αj), such that both of the following hold: (1) S would perform αj if S were, at ti, to intend to perform αi, and (2) S is not, at ti, in a state that’s incompatible with S’s intending to perform αi.
A set of actions, αj, consisting in a number of steps extending from tj – tn is, as of ti, personally possible for S if and only if S’s performing the first step at tj is, as of ti, securable by S, and if S were to perform the first step at tj, S’s performing the second step at tk would be, as of tj, securable by S, and if S were to perform the first two steps at tj and tk, respectively, S’s performing the third step at tl would be, as of tk, securable by S, and so forth until all the steps have been completed (ti < tj < tk < tl… < tn).
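Since the securable/personally-possible distinction is what the whole thread turns on, here is a toy computational model of The Safe that tracks the two definitions above. (This is my own illustrative sketch, not anything from Doug’s manuscript; all of the function and intention names are made up for the illustration.)

```python
# Toy model of The Safe. Each single digit is securable at the moment it
# comes up, but dialing the whole ten-digit string is not securable as of
# t0 for an ignorant agent; it is only "personally possible."

COMBINATION = "2055139437"  # i.e., 205-513-9437

def would_execute(intention, knows_combination):
    """What the agent would in fact dial were she to form this intention.

    Forming the intention to dial one specific digit always succeeds.
    Forming the intention 'dial the correct combination' succeeds only
    if the agent knows what the correct combination is."""
    if intention.startswith("dial-digit:"):
        return intention.split(":")[1]  # she dials exactly that digit
    if intention == "dial-correct-combination":
        return COMBINATION if knows_combination else None  # she'd fail
    return None

def next_digit_securable(dialed_so_far, knows_combination):
    """Clause (1) of the securability definition, for the next step:
    there is *some* formable intention whose formation would result in
    dialing the next correct digit. 'dial-digit:d' is such an intention
    for whichever digit d is in fact next, even though the ignorant
    agent has no way of knowing that this is the one to form."""
    d = COMBINATION[len(dialed_so_far)]
    return would_execute(f"dial-digit:{d}", knows_combination) == d

def whole_sequence_securable_at_t0(knows_combination):
    """Is dialing the whole combination, as of t0, securable?  Only if
    some single intention formed at t0 would lead to the whole sequence.
    For the ignorant agent there is no such intention."""
    return would_execute("dial-correct-combination",
                         knows_combination) == COMBINATION

def personally_possible(knows_combination):
    """Chain the step-by-step clauses of the personal-possibility
    definition: each step is securable as of the moment at which the
    previous step has been performed."""
    dialed = ""
    while len(dialed) < len(COMBINATION):
        if not next_digit_securable(dialed, knows_combination):
            return False
        dialed += COMBINATION[len(dialed)]
    return True

# Ignorant agent: every digit is securable when its turn comes, so the
# whole sequence is personally possible, yet nothing formable at t0
# secures the whole thing.
print(whole_sequence_securable_at_t0(False))  # False
print(personally_possible(False))             # True
print(whole_sequence_securable_at_t0(True))   # True
```

The model makes vivid why securability is time-indexed: which intentions would be efficacious changes as earlier steps get performed, so what is securable as of t5 need not have been securable as of t0.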

50. Doug Portmore says:

Eric,
And be careful; I didn’t say that “dialing each digit of 205-513-9437 is securable.”
What I said was: “having dialed a 2 at T1 for the first digit, your dialing a 0 at T2 for the second digit is, as of T1, securable by you.” And so on for the remaining digits.
Because what you can’t, as of t1, secure could be something that you can, as of t2, secure (provided certain things occur), you have to always talk about what’s securable as of a particular time. In The Safe, there is no intention that I could, as of t0, form that would secure my dialing a 3 at t6. Nevertheless, provided that I have dialed 205-51 over t1-t5, there may very well be an intention that I could, as of t5, form that would secure my dialing a 3 at t6: namely, the intention to dial a 3. So dialing a 3 at t6 may now, as of t5, be securable by me even though it was not, as of t0, securable by me.

51. Doug – thanks for this discussion, now I think I understand the difference between what is “securable” and what is “personally possible”. Tell me if I got this wrong:
For it to be the case that dialing 205-513-9437 is securable by me at T1, it must be the case at T1 that: there is some intention I could form (e.g. the intention to dial 205-513-9437) such that if I form it, then I will in fact dial 205-513-9437.
For it to be merely personally possible at T1 for me to dial the number 205-513-9437, it only has to be the case that at T1: if I form a relevant intention at T1 (e.g. to dial 2), then I will in fact dial 2, and if I form a relevant intention at T2, then I will in fact dial 0, and if I form a relevant intention at T3, then I will in fact dial 5, and so on.
But now I don’t see how these claims seriously threaten Wiland’s sort of objection. Can’t he just say that OC now obligates you to write the first letter of the next Great American Novel, then it will obligate you to write the second letter of the next Great American Novel, then it will… and so on. Isn’t this set of claims about obligation just as implausible as the claim that it (now) obligates you to write next Great American Novel? (Indeed, couldn’t we plausibly interpret the latter, if we are worried about INTUITION, as just a bit of shorthand for the former?)

52. Doug Portmore says:

Zimon,
You write:
Can’t he just say that OC now obligates you to write the first letter of the next Great American Novel[?]
He could say that, but, depending on what letter that is, it would probably be a false statement. Suppose that in the above quote ‘now’ refers to the moment in time at which I typed the letter ‘Z’ above. Suppose that, as it happens, the next Great American Novel begins with the letter ‘Z’. Further suppose that you are offended by my spelling your name ‘Zimon’ rather than ‘Simon’ and that had I used a different example that didn’t involve typing ‘Z’ for the first letter of this comment, more good would have actually obtained. In that case, it’s false that I was, according to OC, obligated, as of that time, to type ‘Z’ above. Note that, according to OC, whether I’m obligated to type a letter depends on the consequences of my doing so. Further note that my doing so doesn’t have the good consequence of my following that action with a sequence of keystrokes that results in the production of the next Great American Novel (I hope that this comment illustrates at least that much). And it does, we’re supposing, have the bad consequence of offending you. So, according to OC, I was in fact obligated not to type the first letter of the next Great American Novel.
Of course, you may want to say that, having typed ‘Z’, I was then obligated to type the next letter of the next Great American Novel. But, again, whether I was obligated to do that depended on what the actual consequences of my doing so would have been in the closest possible world to the actual one, not on what the consequences of my doing so would have been in the far-off possible world in which I follow that second letter with the next 80,000 or so keystrokes required to produce the next Great American Novel.

53. Only the closest possible world is relevant?
But what about this case: At T1 Abe is walking past a pond with a drowning child in it; he could jump in now and save her life, but if he does jump in, he’ll in fact (in the closest possible world) realize then at T2 that the child has ginger hair, and because of Abe’s dislike of ginger haired people (he’s a real jerk), he’ll intentionally get out of the pond and let the child drown. Moreover, he’ll have wet shoes.
Do you want OC to say that Abe is therefore not obligated at T1 to jump in and save the child, nor even to take the first step, of jumping in? (Suppose that Abe can make no resolution or intention at T1 to save the child that is immune from being second-guessed by his choice at T2. But he can at T1 secure his jumping in, and if he does jump in, Abe will be able to secure either saving the child or not doing so at T2. It’s just that he’ll in fact choose not save her.)
Doesn’t this view make our obligations too easy to escape?

54. Doug Portmore says:

Simon,
Is there no intention (no plan, no resolution, nothing) such that, were Abe to form that intention at T1, Abe would save the child at T2? Could he not perhaps form, at T1, an efficacious intention to save the child no matter what color his or her hair?
If you answer ‘no’ to both questions, then I’m committed to saying that Abe is not, as of T1, obligated to save the child. After all, there’s no plan that he could initiate, at T1, that would ensure that he saves the child at T2. This is not to say that he’s not blameworthy. Presumably, he must have done something wrong to have ended up as a person who cannot bring himself to save ginger-haired children. But if that’s really the way he is now, then he has to work within the limits of his current moral imperfections.
In any case, this, like some of the discussion above, concerns whether one has actualist or possibilist intuitions. The possibilist has to bite some difficult bullets. I find the bullet that I have to bite quite palatable as compared to the possibilist’s very unsavory bullets.

55. Doug, I never understood the problem of memory/difficulty of typing to be the problem that Howard-Snyder and Wiland’s examples were getting at. So attaching the “securable” qualification to one’s original intentions seems completely orthogonal to their main points. If you’re trying to distinguish between, say, someone who could form *one* intention and thereby complete the activity as a guaranteed result of this intention, and someone who needs additional (contingent) intentions to do so, this fails, because we’re all in the second condition for most activities. You ask:
do you think that it’s likely that there is any intention that I could form now such that, were I to form this intention now, I would type out the next King Lear?
But I doubt that Shakespeare ever had such an intention which was sufficient to perform the job; he needed others that occurred to him along the way. So would I. Now the *probability* of my being able to form such appropriate sequential intentions is much lower than that of Shakespeare doing so at some point in time, or Don DeLillo, Toni Morrison, etc. But it is *possible* (and it is also possible that such masters of the craft would fail; it’s a matter of degree, though a huge one).
I take the Howard-Snyder/Wiland cases to show three things. (1) The gulf between logical possibility and what we might call morally practical possibility is so vast, much vaster than in mundane examples like pressing a red or green button to stop a bomb, that to make moral attributes a function of the former instead of the latter is more palpably absurd than might previously have been thought. (2) Even if I were to press the right button, win the chess game, etc. by making random motions and hitting on the solution by luck, only the motor actions can be considered truly mine, and not the relevant success; the latter is not something *I* do as an agent, but merely something my bodily movements contribute to in conjunction with highly fortuitous circumstances, and hence it is not relevant to moral judgment of *my* actions. Those actions are now more clearly seen to be based on intentional responses to the evidence which I have and can assess about my situation, which *must* include awareness of and appropriate responses to my cognitive limitations. (3) [Wiland’s addition] This gulf appears not just on rare occasions, but constantly and massively.
Now, that said, you can invoke a theory by which you *call* anyone who fails to maximize the logically possible good (copossible with their current physical state, and notably including vastly improbable psychological possibilities) “wrong.” But this now appears to be (1) a palpable abuse of the English language, and (2) a palpable abuse of language-independent moral concepts. If anything is true, we are not all morally wrong for failing to do these things at nearly every moment [this shows we are *imperfect*, but to conflate imperfection with wrongness now appears more vividly to be a salient fault of objective consequentialism].
My inability to come up with the next great novel or scientific document is not based on the fact that I couldn’t form a single intention to put it all down at once, or that I would have some irresistible compulsion to mistype a crucial character or section part-way through. It’s that the subjectively known chances of my successfully completing it, while non-zero, are so low that I would now be wrong to form and act on any intentions to try to produce such documents instead of doing other things with far higher subjective expected value. For forming and acting on intentions in response to accessible evidence is what agents actually do; whether they produce good results, write novels, or even succeed in pressing buttons, is strictly speaking out of their control. They can only reach in certain directions with certain hopes and expectations; to judge them right or wrong based on the results is, strictly speaking, to confuse qualities of the resulting states of affairs with qualities of their actions. Now this is something that deontologists have long accused consequentialists of doing, and I think they were right. What’s interesting is that a consequentialist response to this criticism is possible: it is subjective consequentialism.
You concede above that most of us are at most moments of the day not doing what we are, as of that moment, objectively morally required to do. But then you basically agree with Howard-Snyder and Wiland about the facts on the ground, so to speak; you just want to use different words than the ones they want to use. The question then is: is it more, or less, confusing to say that Caesar was equally wrong not to have advanced human civilization by inventing the steam engine as not to have done so by restoring the republic, than to say what normal people would say about this case? I think it is deeply misleading to say such things; we would constantly have to qualify the term “wrong” using objective and subjective parameters. We could also say a dog has five legs: four regular ones and a fifth “tail leg.” But why on earth should we do this, when ordinary language maps better onto the important real distinctions? We can say all you want to say about “objective wrongness” by using words like “actually bad results.” But this has *nothing* to do with moral qualities of human action /except/ insofar as evidence for, anticipation of, etc. such results formed some part of some agent’s intentional response to some facts.