As I argued in my previous post, I think that ‘ought’ implies ‘securable’, and, from that, it follows that agents can only be required to perform securable acts. I am, however, a bit unsettled about how best to define ‘securable’. Below, I list the three most obvious contenders and explain why my current thinking is that something along the lines of D3 is best. Here are the three contenders, D1-D3:

  • D1: A set of actions, αi, is securable by S at ti if and only if S would perform αi if S were to intend at ti to perform αi.
  • D2: A set of actions, αi, is securable by S at ti if and only if there is some set of effective intentions available to S at ti such that, if S were to have that set of intentions at ti, S would perform αi.
  • D3: A set of actions, αi, is securable by S at ti if and only if both (1) there is some set of actions, αj, such that S would perform αj if S were to intend at ti to perform αj, and (2) S would perform αi if S were to perform αj.
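
To make the differences easier to see, here is a rough semi-formal gloss of D1–D3. The symbolization is mine and only approximate: read ‘I(S, ti, α)’ as ‘S intends at ti to perform α’, ‘P(S, α)’ as ‘S performs α’, and ‘□→’ as the counterfactual conditional.

  • D1: αi is securable by S at ti ↔ [I(S, ti, αi) □→ P(S, αi)].
  • D2: αi is securable by S at ti ↔ there is some set of effective intentions, ι, available to S at ti such that [S’s having ι at ti □→ P(S, αi)].
  • D3: αi is securable by S at ti ↔ ∃αj [(I(S, ti, αj) □→ P(S, αj)) ∧ (P(S, αj) □→ P(S, αi))].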

D1 has a problem in the following sort of case. Suppose that αi is such a large and complex set of actions that my intending to perform αi would “fry” my brain, so to speak, rendering me unable to carry through with the intention. In that case, it’s not true that I would perform αi if I were to intend at ti to perform αi. Thus, αi would not, on D1, be securable. But suppose that there is some smaller and less complex set of actions, αj, such that I would perform αj if I were to intend to perform αj, and I would perform αi if I were to perform αj. In that case, it seems right to say that αi is securable.

D2 has a problem in the following sort of case. Suppose that αi has a time-interval that extends from t2 to t12. Suppose that my psychology is such that, if I were to intend at t1 to do a14 at t14, then that intention would cause me not only to perform a14 at t14 but also to perform all the acts in αi. But suppose that I have no reason to perform a14 at t14. D2, then, claims that αi is securable. Yet I don’t see how I can form the intention to perform a14 at t14, for state-given reasons for intending to X are not the sorts of reasons that can lead an agent to intend to X. That is, I don’t think that I can form the intention to X just because the state of intending to X would have some good consequences, even though X itself has no feature that recommends it.

D3 seems to be the best of the three. I do, however, think that it will need to be tweaked a little. Suppose that I’m currently unconscious at ti. It may still be true that I would perform αj if I were to intend at ti to perform αj. But, since I’m unconscious, I’m currently in a state that’s incompatible with my intending to perform αj. It seems a mistake, then, to say that αj (and what stems from αj) is securable by me now at ti. So I still need to rework D3 so as to ensure that the relevant intentions are ones that are available to me at ti.

Do people agree that something along the lines of D3 is best? Are there other possible definitions that I’m overlooking? And let’s just assume for the sake of argument that counterfactual determinism is true. That is, I’m assuming that, for every set of actions performable by an agent, S, there is some determinate fact as to what the world would be like were S to perform that set of actions. And I’m assuming that, for every set of intentions available to an agent, S, there is some determinate fact as to what the world would be like were S to form those intentions.
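
(Put in the notation above, one way to state counterfactual determinism is as conditional excluded middle for the relevant counterfactuals: for any set of actions, α, performable by S and any proposition, p, either [P(S, α) □→ p] or [P(S, α) □→ ¬p]; and likewise with ‘S forms intention-set ι at ti’ in the antecedent.)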

3 Replies to “The Definition of ‘Securable’”

  1. If anyone is still reading, here’s the more-or-less final formulation of D3:
    A set of actions, αj, beginning at tj (ti ≤ tj) is securable by S at ti if and only if there is a set of actions, αi (αi is a subset of αj), such that all of the following hold: (1) S would perform αj if S were to perform αi, (2) S would perform αi if S were to intend, at ti, to perform αi, and (3) S is not, at ti, in a state that’s incompatible with S’s intending to perform αi. (Intentions should be understood broadly to include plans, policies, resolutions, and the like. And I’m assuming that counterfactual determinism is true.)
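
    Semi-formally, in my earlier notation (again, just a rough gloss): αj, beginning at tj (ti ≤ tj), is securable by S at ti ↔ ∃αi [αi ⊆ αj ∧ (P(S, αi) □→ P(S, αj)) ∧ (I(S, ti, αi) □→ P(S, αi)) ∧ S is not, at ti, in a state incompatible with S’s intending, at ti, to perform αi].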

  2. I think there is a problem with solely using intentions to determine which sets of actions are securable. Jaegwon Kim argues for the defeasibility of mental-behavioral entailment. The result is that no finite set of intentions (belief–desire pairs) could entail an action.

  3. Peter,
    The idea is not that the intention alone results in the performance of the act. The idea is rather that the intention (or, more precisely, the corresponding event in the agent’s brain), along with all the other states of the world and the deterministic laws of nature, causally determines that the act will be performed. Does Kim argue that this is not possible?
