I used to think that we ought to do the best we can. After debating the issue with Richard Chappell, doing some more research, and rereading some articles that I hadn’t read in a while, I’ve changed my mind.

The idea that we ought to do the best we can is plausible if we assume that ‘ought’ implies ‘can’ and nothing more restrictive than ‘can’. For it is plausible to suppose that we ought to perform the best alternative, whatever the relevant set of alternatives is. (Here I presume, as I will throughout, that ‘the best alternative’ is to be understood in a theory-neutral way: the best alternative is not necessarily the one that has the best consequences, but it is necessarily the alternative that is best according to the correct normative theory, i.e., the one that there is most reason to perform.)

But what is the relevant set of alternatives? I used to think that it was the set of alternatives that the agent can perform; call these ‘personally possible’. But what if ‘ought’ implies something more restrictive than ‘personally possible’? Suppose, for instance, that ‘ought’ implies ‘X’, where the set of alternatives that are X is a proper subset of those that are personally possible. If that’s true, then there are some acts that are personally possible for me but that I cannot be obligated to perform: namely, those that are personally possible for me but not X. I now think that there is such an X and that X is ‘securable’. Below the fold, I explain the notion and defend the claim that ‘ought’ implies ‘securable’.
Let ‘αi’ be a variable that ranges over sets of actions that are mutually performable by an agent, S. These sets may include only a single act or multiple acts performable at consecutive and/or inconsecutive times.
- A set of actions, αi, is securable by S at ti if and only if there is some set of intentions available to S at ti such that, if S were to have all those intentions at ti, S would perform αi. (Intentions should be understood broadly to include plans, policies, decisions, resolutions, and the like.)
- A set of intentions is available to S at ti if and only if S is not at ti in a state incompatible with S’s having or forming that set of intentions. Thus, S must, for one, be conscious at ti. (Carlson 2003, 183)
- S performs a set of actions, αi, if and only if S performs each action in that set.
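To make these definitions concrete, here is a toy model in Python. This is my own illustrative sketch, not part of the post’s official apparatus: the hypothetical mapping `outcomes` takes each intention set available to S at ti to the set of actions S would perform if S were to have exactly those intentions at ti, so availability is built into which intention sets appear as keys.

```python
# Toy model of securability (an illustrative sketch, not the official
# formalization). Intention sets and action sets are frozensets of strings.
# 'outcomes' maps each intention set *available* to S at ti to the set of
# actions S would perform if S were to have those intentions at ti.

def performs(performed, alpha):
    # S performs the set alpha iff S performs each action in it.
    return alpha <= performed

def securable(alpha, outcomes):
    # alpha is securable by S at ti iff there is some available intention
    # set such that, were S to have it, S would perform every action in alpha.
    return any(performs(performed, alpha) for performed in outcomes.values())
```

On this model, a case like the exercise example comes out as expected: if every available intention set leads S to do something other than exercising at t5, then exercising at t5 is not securable by S at t1.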
Suppose that it’s t1 and I’m deliberating about whether to exercise at t5. Now assume that there is no plan, policy, resolution, or intention that I could form now that would ensure that I would exercise at t5. Indeed, let’s assume that no matter what set of intentions I form at t1, I will not exercise at t5. In this case, my exercising at t5 is not securable by me at t1. And I think that this implies that I cannot be, as of t1, under an (objective) obligation to exercise at t5. Why do I think this? There are two reasons.
First, as Chappell argues, an agent can’t rationally intend to do what she knows that she won’t do. And note that any act that is not securable is an act that won’t be performed. So, no fully informed and perfectly rational agent could intend to perform an act that’s not securable. Therefore, unless we want to hold that agents can be objectively rationally required to perform acts that they can’t rationally intend to perform, we must deny that an agent can be objectively rationally required to perform an act that’s not securable – I’m assuming that an objectively rationally required act is, roughly speaking, one that any fully informed and perfectly rational agent would perform.
Second, as Holly S. Goldman (now Holly Smith) argues, the range of alternatives that S can be obligated, as of ti, to perform can be no wider than those which S can, at ti, secure. Otherwise, we would have to allow that S can, as of ti, be obligated to perform x at tj even though there is no way, as of ti, for S to see to it that this obligation is fulfilled. And what would be the point in claiming that S is, as of ti, obligated to perform x at tj if there is no plan/intention/resolution that S can, as of ti, form that will ensure that S performs x at tj? It seems that such a prescription would be utterly pointless, for not even a fully informed and perfectly rational agent could make any practical use of it. After all, the prescription is indexed to a time at which there is no available intention that, if formed by the agent, would result in the prescription’s being followed. Indeed, even if the agent were to form the intention to perform the prescribed act, this would be completely ineffectual in bringing about the prescribed act. To paraphrase Goldman: If no present intention of S’s could lead to S’s performing the required action, then no present knowledge on S’s part of that requirement could lead to its being fulfilled. And, under those circumstances, there is no point in ascribing such an obligation to S.
Given all this, I think that we should accept what I’ll call securitism:
- It is, as of ti, objectively rationally permissible for S to perform a set of actions, αi, beginning at tj (j ≥ i) if and only if at least one of the best maximal sets of actions that are securable by S at ti includes S’s performing αi.
- A set of actions, αi, that is securable by S at ti is a maximal set if and only if there is no set of actions that is securable by S at ti that includes αi as a proper subset.
- A set of actions, αi, is one of the best maximal sets of actions that are securable by S at ti if and only if there is no other maximal set of actions that is securable by S at ti that S has more reason to perform.
- A set of actions, αi, includes another set of actions, αj, if and only if it is logically necessary that S performs αj if S performs αi (that is, S cannot perform αi without thereby performing αj).
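The four clauses of securitism can likewise be given a toy model. Again, this is my own sketch under simplifying assumptions, not the post’s formal apparatus: action sets are frozensets, ‘includes’ is modeled as the superset relation, a set counts as securable just in case it is contained in some outcome of an available intention set (per the earlier definition), and `value` is a hypothetical stand-in ranking maximal sets by how much reason S has to perform them.

```python
# Toy model of securitism (an illustrative sketch under simplifying
# assumptions). 'outcomes' maps each intention set available to S at ti
# to the set of actions S would thereby perform; a set is securable iff
# it is contained in some such outcome. 'value' is a stand-in for how
# much reason S has to perform a given set of actions.

def maximal_securable_sets(outcomes):
    # A securable set is maximal iff no securable set includes it as a
    # proper subset; these are just the subset-maximal outcome sets.
    securable_wholes = set(outcomes.values())
    return {m for m in securable_wholes
            if not any(m < other for other in securable_wholes)}

def permissible(alpha, outcomes, value):
    # Securitism: performing alpha is objectively rationally permissible,
    # as of ti, iff at least one of the *best* maximal securable sets
    # includes S's performing alpha.
    maximal = maximal_securable_sets(outcomes)
    best = max(value(m) for m in maximal)
    return any(alpha <= m for m in maximal if value(m) == best)
```

With a crude placeholder ranking, only those action sets included in a best maximal securable set come out permissible, which is just what the biconditional above says.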