Suppose that you know that your opening a certain safe is what would maximize the good. But suppose that you don’t know what the correct combination is. Some critics (e.g., Wiland 2005 and Howard-Snyder 1997) allege that, despite your ignorance of the correct combination, objective consequentialism (OC) implies that you are morally required to open the safe – OC being the view according to which S’s performing x is morally permissible if and only if there is no alternative, y, that would in fact produce better consequences than x would. This implication, they claim, is absurd. Contrary to OC, it’s permissible for you to fail to open the safe even though this amounts to your failing to maximize the good. Thus, they conclude that we should reject OC. Here’s the example in greater detail:
The Safe: Imagine that the time is T-0 and that the fate of the world depends on your opening a safe by T-11. Suppose that the correct ten-digit combination is 205-513-9437. And suppose that although this ten-digit combination is unknown to you, you could dial it in the following sense. If you were to intend at T-1 to dial a ‘2’ for the first digit of the combination, you would then succeed in doing so. And if you were subsequently to intend at T-2 to dial a ‘0’ for the second digit of the combination, you would then succeed in doing that. And so on and so forth for the remaining eight digits. Thus, there is a sequence of actions, each of which you are capable of performing, that would result in your opening the safe by T-11. Let me stipulate, though, that, given your ignorance of the correct combination, you would in fact fail to open the safe by T-11 no matter what you intend at T-0 to do.*
Critics argue that, despite this last fact, OC implies that you are (as of T-0) morally required to dial 205-513-9437 and thereby open the safe by T-11, for this is, of all your alternatives, the only one that would maximize the good. They conclude, then, that OC absurdly implies that you are required to dial the correct combination despite your ignorance of the correct combination. Moreover, Wiland argues that if you’re required to dial the correct combination in this case, then you’re required, on most every other occasion, to type out and publish the next supremely important document (perhaps, one describing the cure for cancer), for typing out and publishing this document would maximize the good. And since almost all of us regularly fail to type out and publish the next supremely important document, almost all of us are acting impermissibly virtually all the time.
But let’s consider more carefully the critics’ assumption that, in The Safe, your dialing the correct combination is one of the alternatives from which OC tells you to select. Whether it is, we’ll see, depends on what we take the relevant alternatives to be. Of course, it’s clear that, in The Safe, the relevant alternatives are sequences of actions — e.g., sequences of number-dialing acts. But the critics go further and assume that the relevant alternative sequences are those that are, as of T-0, personally possible for you. The consequentialist could, however, reject this assumption and claim instead that the relevant alternative sequences are those that are, as of T-0, securable by you. In so doing, the consequentialist could avoid the absurd implication that you are required, in The Safe, to dial the correct combination as well as the absurd implication that you are required, on most every other occasion, to type out and publish the next supremely important document.
To illustrate the difference between what’s securable by an agent and what’s personally possible for that agent, consider the following. Suppose that I’m on a low-fat, low-carbohydrate diet. It’s now 2 p.m., and I’m thinking that I should grill up some asparagus and a lean boneless, skinless chicken breast for tonight’s dinner. If this is going to be what I’m having for dinner, I’ll need to get into my car, drive to the grocery store, buy the necessary ingredients, drive back home, marinate the chicken, prep the asparagus, fire up the grill, wait for it to heat up, grill the chicken and asparagus, plate them, and eat them. But suppose that, as a matter of fact, no matter what I plan or intend at 2 p.m. to do later, I will not end up eating grilled chicken and asparagus for tonight’s dinner. For, as a matter of fact, I’m going to eat pizza instead. What’s going to happen is this: while in the midst of preparing the chicken and asparagus, I’m going to get very hungry. And, in this state, I’m going to be overcome by the temptation to go off my diet and will at 5 p.m. throw the chicken and asparagus back into the fridge, turn off the grill, and order pizza. It’s not that I couldn’t grill and eat the chicken and asparagus. Indeed, I could in that, were I to intend at 5 p.m. to continue with my plan to grill and eat chicken and asparagus, that’s exactly what I would do. It’s just that, as a matter of fact, I will at 5 p.m. abandon my plan and order pizza instead.
In this case, my eating chicken and asparagus at 6 p.m. is, as of 2 p.m., personally possible for me, but it is not something that is, as of 2 p.m., securable by me. It’s personally possible for me, for there is a series of steps that I could take that would culminate in my eating grilled chicken and asparagus at 6 p.m. tonight, and the following is true of each of those steps: having taken the previous step or steps, I could then take the next step in that, were I to intend to take the next step, I would succeed in doing so.
Nevertheless, my eating chicken and asparagus at 6 p.m. is not, as of 2 p.m., securable by me, for, as we’re supposing, no matter what I intend to do now at 2 p.m., I will not end up eating grilled chicken and asparagus at 6 p.m. Even if I were, at 2 p.m., to form the most resolute intention to stick with my plan, it’s a fact about my psychology that I would abandon it in the face of the ensuing hunger and the ever-increasing temptation to eat pizza. Thus, there’s absolutely nothing that I can intend to do now that will result in my eating chicken and asparagus at 6 p.m. — or so we’re assuming.
Now, in The Safe, your opening the safe is, as of T-0, personally possible for you, but it is not, as of T-0, securable by you. And that it is absurd to suppose that you are, as of T-0, morally required to open the safe by T-11 when, no matter what you intend at T-0 to do, you will fail to open the safe by T-11 is not a reason to reject OC, but rather a reason to identify the relevant alternatives as those that are securable as opposed to personally possible. Once we combine OC with the idea that the relevant alternatives are all and only those that are securable by the agent, we avoid absurd implications in cases such as The Safe.
*Here and elsewhere, I’m assuming that, for each set of actions that S might perform, there is some determinate fact as to what the world would be like were S to perform that set of actions. And I’m also assuming that, for each set of intentions that S might form, there is some determinate fact as to what the world would be like were S to form those intentions. This assumption is sometimes called counterfactual determinism — see, e.g., Bykvist 2003. I’m assuming, then, that even if you were, at T-0, to pick the correct ten-digit combination at random and intend to dial it — that is, intend to dial 205-513-9437 — you would still fail to dial that number, for you would, say, halfway through change your mind and decide to dial a ‘6’ instead of a ‘3’.