I used to think that we ought to do the best we can. After debating the issue with Richard Chappell, doing some more research, and rereading some articles that I hadn’t read in a while, I’ve changed my mind. The idea that we ought to do the best we can is plausible if we assume that ‘ought’ implies ‘can’ and nothing more restrictive than ‘can’. Consider that it is plausible to suppose that we ought to perform the best alternative, whatever the relevant set of alternatives is – at least, this is plausible so long as we presume, as I will, that the best alternative is to be understood in a theory-neutral way such that the best alternative is not necessarily the one that has the best consequences but is necessarily the alternative that is best according to the correct normative theory (i.e., the one that there is best/most reason to perform). But what is the relevant set of alternatives? I used to think that it was the set of alternatives that the agent can perform – call these ‘personally possible’. But what if ‘ought’ implies something more restrictive than ‘personally possible’? Suppose, for instance, that ‘ought’ implies ‘X’, where the set of alternatives that are X is a proper subset of those that are personally possible. If that’s true, then there would be some acts that are personally possible for me but which I cannot be obligated to perform: those that are personally possible for me but not X. I now think that there is such an X and that X equals ‘securable’. Below the fold, I explain the notion and defend the claim that ‘ought’ implies ‘securable’.
Let ‘αi’ be a variable that ranges over sets of actions that are mutually performable by an agent, S. These sets may include only a single act or multiple acts performable at consecutive and/or inconsecutive times.
- A set of actions, αi, is securable by S at ti if and only if there is some set of intentions available to S at ti such that, if S were to have all those intentions at ti, S would perform αi. (Intentions should be understood broadly to include plans, policies, decisions, resolutions, and the like.)
- A set of intentions is available to S at ti if and only if S is not at ti in a state incompatible with S’s having or forming that set of intentions. Thus, S must, for one, be conscious at ti. (Carlson 2003, 183)
- S performs a set of actions, αi, if and only if S performs each action in that set.
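For those who find symbols helpful, here is a compact restatement of ‘securable’ (the notation is just my shorthand: Avail(S, ti) for the set of intention-sets available to S at ti, and the boxed arrow for the counterfactual conditional):

```latex
% 'Securable', restated symbolically (shorthand only; nothing here goes
% beyond the prose definition above). Avail(S, t_i) is the set of
% intention-sets available to S at t_i; \Box\!\!\to is the counterfactual
% conditional ("if S were to have I, S would perform alpha_i").
\[
\mathrm{Securable}(\alpha_i, S, t_i) \iff
  \exists I \in \mathrm{Avail}(S, t_i)\,
  \bigl[\, \mathrm{Has}(S, I, t_i) \mathbin{\Box\!\!\to} \mathrm{Performs}(S, \alpha_i) \,\bigr]
\]
```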
Suppose that it’s t1 and I’m deliberating about whether to exercise at t5. Now assume that there is no plan, policy, resolution, or intention that I could form now that would ensure that I would exercise at t5. Indeed, let’s assume that no matter what set of intentions I form at t1, I will not exercise at t5. In this case, my exercising at t5 is not securable by me at t1. And I think that this implies that I cannot be, as of t1, under an (objective) obligation to exercise at t5. Why do I think this? There are two reasons.
First, as Chappell argues, an agent can’t rationally intend to do what she knows that she won’t do. And note that any act that is not securable is an act that won’t be performed. So, no fully informed and perfectly rational agent could intend to perform an act that’s not securable. Therefore, unless we want to hold that agents can be objectively rationally required to perform acts that they can’t rationally intend to perform, we must deny that an agent can be objectively rationally required to perform an act that’s not securable – I’m assuming that an objectively rationally required act is, roughly speaking, one that any fully informed and perfectly rational agent would perform.
Second, as Holly S. Goldman (now Holly Smith) argues, the range of alternatives that S can be obligated, as of ti, to perform can be no wider than those which S can, at ti, secure. Otherwise, we would have to allow that S can, as of ti, be obligated to perform x at tj even though there is no way, as of ti, for S to see to it that this obligation is fulfilled. And what would be the point in claiming that S is, as of ti, obligated to perform x at tj if there is no plan/intention/resolution that S can, as of ti, form that will ensure that S performs x at tj? It seems that such a prescription would be utterly pointless, for not even a fully informed and perfectly rational agent could make any practical use of it. After all, the prescription is indexed to a time at which there is no available intention that, if formed by the agent, would result in the prescription’s being followed. Indeed, even if the agent were to form the intention to perform the prescribed act, this would be completely ineffectual in bringing about the prescribed act. To paraphrase Goldman: If no present intention of S’s could lead to S’s performing the required action, then no present knowledge on S’s part of that requirement could lead to its being fulfilled. And, under those circumstances, there is no point in ascribing such an obligation to S.
Given all this, I think that we should accept what I’ll call securitism:
- It is, as of ti, objectively rationally permissible for S to perform a set of actions, αi, beginning at tj (ti ≤ tj) if and only if at least one of the best maximal sets of actions that are securable by S at ti includes S’s performing αi.
- A set of actions, αi, that is securable by S at ti is a maximal set if and only if there is no set of actions that is securable by S at ti that includes αi as a proper subset.
- A set of actions, αi, is one of the best maximal sets of actions that are securable by S at ti if and only if there is no other maximal set of actions that is securable by S at ti that S has more reason to perform.
- A set of actions, αi, includes another set of actions, αj, if and only if it is logically necessary that S performs αj if S performs αi.
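In the same shorthand as before, securitism itself can be restated as follows (MaxSec(S, ti) for the set of maximal sets securable by S at ti, and β ≻R α for ‘S has more reason to perform β than α’; this merely restates the clauses above):

```latex
% Securitism, restated. Best(S, t_i) collects the maximal securable sets
% that no other maximal securable set beats on reasons; an act-set is
% permissible iff some best maximal securable set includes it.
\[
\mathrm{Best}(S, t_i) =
  \{\alpha \in \mathrm{MaxSec}(S, t_i) :
    \neg\exists\, \beta \in \mathrm{MaxSec}(S, t_i)\; [\beta \succ_R \alpha]\}
\]
\[
\mathrm{Perm}(\alpha_i, S, t_i) \iff
  \exists\, \alpha^{*} \in \mathrm{Best}(S, t_i)\;
  [\alpha^{*} \text{ includes } \alpha_i]
\]
```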
Can you say a little bit about the upshot of all this? I’m having trouble imagining the cases where it cuts any ice. And what kind of possibility/availability/compatibility are we talking about? Must αi be psychologically compatible with the state I’m in at ti?
I think this view would have awkward consequences if I understand it correctly. Imagine that I’m sound asleep tonight at midnight (t1). Tomorrow, when I wake up, I will be able to easily save hundreds of people by calling the fire brigade to fight the fire in the neighboring burning building (this is at t5). This means that the intentions to save the people are not available to me. And this entails that the saving actions are not securable, which entails that I’m under no obligation, as of t1, to save the people tomorrow. Yet it doesn’t sound odd to me to think that you are already under an obligation, even while you are sleeping, to save the people when you wake up.
Also, you could get rid of obligations on this view by making the relevant actions non-securable for you by manipulating your intending. This strikes me as unwelcome. And, of course, the availability condition commits you to a fairly strong form of existence-internalism about oughts.
Finally, I think the best normative theories too should be sensitive to what we can intentionally bring about at a given time. As a result, it won’t necessarily be that the best we can do and the securable come apart.
Hi David,
Good questions.
Securitism is an alternative to both extreme actualism and extreme possibilism.
Possibilism holds that it is, as of ti, objectively rationally permissible for S to perform a set of actions, αi, beginning at tj (ti ≤ tj) if and only if the best maximal set of actions that is personally possible for S at ti includes S’s performing αi. A set of actions is personally possible for S at ti if and only if the first act in that set is securable by S at ti, and the next (subsequent) act in that set will be securable by S once S has performed the first act, and so on for all the temporal steps in that set of actions.
Actualism holds that it is, as of ti, objectively rationally permissible for S to perform a set of actions, αi, beginning at tj (ti ≤ tj) if and only if the set of actions that S would actually perform were S to perform αi is no worse than the set of actions that S would actually perform were S to refrain from performing the first act in αi.
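Schematically, then, the three views differ chiefly in which courses of action they quantify over (this is just my paraphrase of the definitions above):

```latex
% The three views, schematically. PersPoss(S, t_i) = maximal personally
% possible sets; Sec(S, t_i) = maximal securable sets; V = the ranking of
% courses of action by the reasons S has to perform them.
\[
\begin{aligned}
\textbf{Possibilism:} \quad & \mathrm{Perm}(\alpha_i, S, t_i) \iff
  \text{the best set in } \mathrm{PersPoss}(S, t_i) \text{ includes } \alpha_i\\
\textbf{Securitism:} \quad & \mathrm{Perm}(\alpha_i, S, t_i) \iff
  \text{some best set in } \mathrm{Sec}(S, t_i) \text{ includes } \alpha_i\\
\textbf{Actualism:} \quad & \mathrm{Perm}(\alpha_i, S, t_i) \iff
  V(S\text{'s course if } S \text{ performs } \alpha_i) \ge
  V(S\text{'s course if } S \text{ refrains})
\end{aligned}
\]
```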
To get a grip on these different views and how their implications differ, consider the following two cases:
Case PP1: Professor Procrastinate (PP) is asked to review a book. The best thing that could happen is that PP accepts the invitation at t1 and then writes the review at t10. This is best, for PP is the most qualified person to write the review. The second best thing that can happen is that PP declines the invitation. The worst thing that can happen is that PP accepts the invitation and then endlessly procrastinates, never writing the review. Suppose that there is no set of intentions that PP can have at t0 that would result in his accepting the invitation at t1 and writing the review at t10. Assume, then, that PP would not write the review were he to accept the invitation.
In this case, possibilism implies that PP ought, as of t0 (t-zero being the present), to accept the invitation at t1, because the best maximal set of actions that is personally possible for PP at t0 includes his accepting the invitation at t1 and writing the review at t10. Securitism, by contrast, implies that, as of t0, he ought not to accept the invitation, for accepting the invitation and writing the review is not a set of actions that is securable by PP at t0, and accepting the invitation would only result in the worst outcome obtaining.
Case PP2: Things are different than they are in PP1. In this case, if PP intends at t0 not only to accept the invitation at t1 but also to teach a seminar on the book at t8, then he will both accept the invitation at t1 and write the review at t10. In this case, unlike PP1, his accepting the invitation and writing the review is securable by PP at t0.
In this case but not in the original, PP can, on securitism, be obligated to accept the invitation and write the review. But assume that although PP could now, at t0, intend to teach a seminar, he in fact has been planning for some time to teach a seminar on something completely unrelated to the book, a seminar that will in the end distract him from writing the review. Thus, the following is true: he would not write the review were he to accept the invitation. Thus, on actualism, if we ask whether he ought to accept the invitation in this case, the answer will be that he ought not to accept the invitation, since what he would do were he to accept the invitation is worse than what he would do if he were not to accept the invitation. Securitism, by contrast, holds that he ought to accept the invitation, because he ought to reconsider his plan to teach the unrelated seminar and instead form the intention at t0 to teach a seminar on the book that he’s been asked to review.
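To make the contrast concrete, here is a toy model (purely illustrative; the labels and the ranking are stipulated straight from the case descriptions above) that computes securitism’s verdicts in PP1 and PP2:

```python
# Toy model of Cases PP1 and PP2 (illustrative only). Each course of action
# PP might end up performing is ranked as in the case descriptions:
# accept-and-write > decline > accept-and-procrastinate.
VALUE = {
    ("accept", "write"): 3,           # best: PP accepts and writes the review
    ("decline",): 2,                  # second best: PP declines
    ("accept", "procrastinate"): 1,   # worst: PP accepts and never writes
}

def securitism_best(securable_courses):
    """Return the best course(s) securable at t0; securitism permits an
    act iff some best securable course includes it."""
    top = max(VALUE[c] for c in securable_courses)
    return [c for c in securable_courses if VALUE[c] == top]

# PP1: no intention-set available at t0 leads to accepting AND writing.
print(securitism_best([("decline",), ("accept", "procrastinate")]))
# -> [('decline',)]: PP ought not, as of t0, to accept.

# PP2: intending at t0 to teach the seminar makes accept-and-write securable.
print(securitism_best([("decline",), ("accept", "procrastinate"),
                       ("accept", "write")]))
# -> [('accept', 'write')]: PP ought, as of t0, to accept and write.
```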
Jussi,
I just have time for a quick response to the first issue that you raise. Securitism does not say that in order to be obligated, as of ti, to do x at tj, S must be able to intend at ti to do x at tj. It may be that no matter what you intend at ti (even if it’s just to go to work tomorrow morning), you will in fact save those people tomorrow — you’re just the sort of person who helps people when you find them in need. In that case, saving the people is certainly securable as well as obligatory. Perhaps, though, you’re an evil, self-centered person who won’t lift a finger to save anyone. And suppose that there is nothing you can do now that would effect a substantial enough change in your character such that you would save those people tomorrow. What, then, can you do now to ensure that you save those people tomorrow? Nothing. In that case, what’s the point of ascribing a present obligation to save those people tomorrow? I can’t see any point at all. Of course, when tomorrow rolls around you will be able to secure the act where you save them. And, at that time, you will be under an obligation to save them, for it is at that time that there is something you can intend to do that would result in your saving them: viz., to save them.
Hi Jussi,
[Y]ou could get rid of obligations on this view by making the relevant actions non-securable for you by manipulating your intending.
Could you explain this? An example perhaps.
Finally, I think the best normative theories too should be sensitive to what we can intentionally bring about at a given time. As a result, it won’t necessarily be that the best we can do and the securable come apart.
So what if the best that we can do and the best that we can secure don’t always come apart? Is this supposed to be an objection?
Hi Doug,
What happens on your view if an act X is securable only by having a further intention to perform some additional act Y, where Y is not itself securable? (E.g. suppose PP2 won’t actually end up teaching the seminar on the book, whatever he now intends. But this ineffective intention is essential for enabling his intention to write the review to prove effective.)
What ought the agent to intend? Not both X and Y, since the latter isn’t securable. But not X alone, because without the further intention to Y, X will not itself be secured. Perhaps neither?
Hi Richard,
Securitism says nothing about what an agent ought to intend. It’s a view about what agents ought to do. So it’s compatible with saying anything about what the agent ought to intend to do in this case.
I suspect that the best view about whether S ought to intend to do x is one that appeals only to features about x and not to any effects or lack of effects that S’s intending to do x would have. But I’m open to other suggestions. What do you think?
Ah, I was assuming that an agent ought (objectively) to intend to do just the things that they ought (objectively) to do. But if that’s not right, the puzzle I’m raising here might not extend to your view, as you say.
(I have no idea what to say about the puzzle case.)
Doug,
sorry, I think I haven’t understood the view. You write that:
“Securitism does not say that in order to be obligated, as of ti, to do x at tj, S must be able to intend at ti to do x at tj. It may be that no matter what you intend at ti (even if it’s just to go to work tomorrow morning)”
The thought was this. This:
“A set of intentions is available to S at ti if and only if S is not at ti in a state incompatible with S’s having or forming that set of intentions. Thus, S must, for one, be conscious at ti. (Carlson 2003, 183)”
entails that the life-saving intentions are not available to me at ti because I am not conscious.
Then:
“A set of actions, αi, is securable by S at ti if and only if there is some set of intentions available to S at ti such that, if S were to have all those intentions at ti, S would perform αi.”
this seems to entail that the life-saving actions are not securable. And so securitism seems to entail that I’m not obliged. Now, you might be thinking that the character I have and my background planning ensure that I will form the life-saving intentions in the morning. But what if this isn’t the case? What if, when I wake up, it’s open whether I make the life-saving call or not? What if I have to make a genuine decision then? In this case the intentions I have at night do not ensure that I will save the lives, and I have no other intentions available to me that would do so either. Upshot: if I have genuine freedom between ti and the time of action, I won’t ever be obliged.
About getting rid of obligations: say that I am obliged to keep a promise at t5 and it’s now ti-n. One option for me is to get myself brainwashed so that at ti it is psychologically impossible to intend to keep the promise. This would mean that at ti the intentions to keep the promise would not be available to me, nor the act securable. So I wouldn’t be obliged as of that moment, even if I might be between ti-1 and ti.
About this: “So what if the best that we can do and the best that we can secure don’t always come apart? Is this supposed to be an objection?”
The idea was that if they never come apart, then it’s still true that we ought to do the best we can.
Hi Jussi,
If S will perform x at tj regardless of what, or even whether, S intends at ti, then S’s performing x at tj is securable by S at ti. So suppose that although you are now unconscious and are having no intentions, you will save those people tomorrow, because that’s the sort of person you are. In that case, your saving them tomorrow is securable by you now while you’re unconscious. Here’s the definition of securable: “A set of actions, αi, is securable by S at ti if and only if there is some set of intentions available to S at ti such that, if S were to have all those intentions at ti, S would perform αi.”
In the case that I’m imagining, there is some set of intentions available to you while you’re unconscious such that if you were to have that set of intentions, you would save those people tomorrow: namely, the empty set.
Regarding freedom, I’m assuming counterfactual determinism. That is, I’m assuming that for every set of actions performable by an agent, S, there is some determinate fact as to what the world would be like were S to perform that set of actions. And I’m assuming that for every set of intentions available to an agent, S, there is some determinate fact as to what the world would be like were S to form those intentions. Although this assumption is controversial, nothing much hangs on it. I make the assumption only for the sake of simplifying the presentation. If counterfactual determinism is false, then instead of correlating each set of actions/intentions with a unique possible world, we will need to correlate each set of actions/intentions with a probability distribution over the set of possible worlds that might be actualized if S were to perform those actions/form those intentions. And I’ll need to change the definition of ‘securable’, such that it requires only that there be some chance of securing the relevant act in order for it to be securable. But now if there is libertarian freedom and if libertarian freedom is incompatible with assigning even probability distributions with respect to what I might do in the future, then I don’t know what to say.
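For concreteness, the fallback definition I have in mind would look something like this (my gloss on ‘some chance’, in the shorthand from the post):

```latex
% Probabilistic variant of 'securable', for use if counterfactual
% determinism fails: some available intention-set gives a nonzero chance
% of the act-set's being performed.
\[
\mathrm{Securable}(\alpha_i, S, t_i) \iff
  \exists I \in \mathrm{Avail}(S, t_i)\;
  \Pr\bigl(\mathrm{Performs}(S, \alpha_i) \mid \mathrm{Has}(S, I, t_i)\bigr) > 0
\]
```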
About getting rid of obligations, is this supposed to be an objection? I can, on almost any view, get rid of an obligation to perform some future act. This is true in virtue of the fact that ‘ought’ implies ‘can’. Thus, if I ought to do x at tj (a future time), I just need to change the world so that it’s no longer true that I can do x at tj, and then it will no longer be true that I ought to do x at tj. You point out that if ‘ought’ implies ‘securable’, then it is also possible to get rid of an obligation by changing the world so that the act is no longer securable. But why is this an objection?
Regarding the two not coming apart, you had originally said “it won’t necessarily be that the best we can do and the securable come apart.” Now you seem to be suggesting that they will necessarily never come apart. Can you give me some reason to believe that?
Hi Richard,
This is an interesting puzzle, but I’m not sure (and I mean that I’m genuinely unsure) whether it’s a puzzle for my view in particular or just a general puzzle about the relationship between ‘ought to do’ and ‘ought to intend to do’. What do we want to say in those cases where the only way to ensure that you fulfill a promise to perform some act at t10 is to form at t1 a completely ineffective intention to perform some other future act? Suppose, for instance, that your psychology is such that you’ll fulfill your promise to help Smith at t10 if and only if you form at t1 the intention to jump over the Empire State Building. Do we want to say that you are, as of t1, obligated to help Smith at t10 even though the only way for you to ensure at t1 that you will help Smith at t10 is to form at t1 a completely ineffective intention? Do we want to say you ought now to intend to jump over the Empire State Building even though such an intention would be completely ineffective?
If we want to say “no” to both questions, then we do have the makings of an objection to securitism. In that case, I’ll want to revise the definition of securable as follows: “A set of actions, αi, is securable by S at ti if and only if there is some set of effective intentions available to S at ti such that, if S were to have all those intentions at ti, S would perform αi.”
What do you think of this proposed revision? I think that this is probably the way I want to go.
Yeah, that sounds like a good fix.
Doesn’t this formulation make it nearly impossible to secure any set of actions that are supposed to take place at a future time? In some possible worlds, I will change my mind between the time at which any intention is formed and the time at which the future action is performed. I might now form the intention to give some sum of money to charity tomorrow, and if I don’t change my mind, I will give the money to charity tomorrow. But it’s not the case that merely by having that intention (or, as far as I can see, any set of intentions) now, I make it the case that I will give the money to charity tomorrow. That event will also depend on my not changing my mind. So according to your securitism, it seems it couldn’t be obligatory for me now to give any sum of money to charity tomorrow.
Whoops, I see you answered my question already in response to Jussi. Sorry!
Hi Simon,
I’m assuming counterfactual determinism — see my response to Jussi a few comments up. Thus, I’m assuming that there is some fact about what you would do were you to have certain intentions. Suppose that I intend now to cook a feast on Christmas Eve. Sure, it’s true that if I were to die or change my mind, then my plan to cook a feast on Christmas Eve wouldn’t be completed. But suppose that, as a matter of fact, I won’t die and that I won’t change my mind. Isn’t it, then, true that if I were to plan to cook a feast on Christmas Eve, I would do all sorts of acts, such as stuff a turkey, mash some potatoes, bake a green-bean casserole, etc.? If so, then each of these acts is securable by me at present. Moreover, I don’t even have to intend at ti to do x at tj for my x-ing at tj to be securable by me at ti. I assume that my getting up to go to the bathroom after I finish typing this reply is securable by me now even though I do not now have such an intention. I assume that were I to form the intention now to go to the bathroom after I finish this reply, I would do so. Thus, if this plausible assumption is true, it’s securable. And the same holds for many other acts.
Simon: Okay, I hit post before seeing your second comment.
Hi Doug, I guess I’m not seeing the plausibility of your conception of counterfactual determinism. You seem to be assuming that the relevant intentions at ti are intentions to perform the securable acts at tj. But I don’t see any reason to think that my intention at t1 to perform x at t2 determines that I will perform x (after all, I could always change my mind). If you agree, then if you want to maintain determinism, you will have to accept that my performing x was already determined at t-1, prior to t1 (which, let me now stipulate, is the time at which the thought of doing x first occurs or even becomes available to me). But if so, then at t-1, your view seems to make an odd fetish of my available intentions, by making relevant intentions that are not just intentions to perform the securable act x. Let me echo your stipulation in your reply to me and stipulate that, at t-1, it will as a matter of fact be the case that I perform x at t2 (we could add, if you like, that I will as a matter of fact at t1 intend to perform x at t2). Your claim is that at t-1, my performing x at t2 is obligatory only if there is some set of intentions available at t-1 such that if I have them, I will perform x. Then the obligatoriness of my giving a sum of money to charity tomorrow at t2, say, depends on the availability of a bunch of intentions at t-1 that may have nothing in particular to do with this action. We could put t-1 back in my early childhood and suppose that my available intentions were to drink milk or to try to sleep. It seems bizarre to say that the obligatoriness at t-1 of my giving a sum of money to charity at t2 depends on the availability of one or both of these intentions. Nor will I have “secured” my later action by having these intentions, in any ordinary or first-personally recognizable sense.
I see you remarked in previous comments that you weren’t necessarily committed to counterfactual determinism. I can’t see how you would avoid this problem even if you rephrased your view in terms of probability distributions though. So I’d like to hear a bit more about this option, if you think it provides a solution.
Hi Simon,
You seem to be assuming that the relevant intentions at ti are intentions to perform the securable acts at tj.
No. It may be that I have no intention at t1 of looking in the fridge at t3, but that I do intend at t1 to go into the kitchen to wash the dishes and that once I’m there I will look in the fridge for something to eat. In that case, my looking in the fridge at t3 is securable by me at t1 even though the relevant intention at t1 is not the intention to perform the securable act in question.
I don’t see any reason to think that my intention at t1 to perform x at t2 determines that I will perform x (after all, I could always change my mind).
Suppose that the laws of nature are deterministic such that the antecedent conditions and the laws of nature causally necessitate certain subsequent events. You still don’t see how my intention at t1 to perform x at t2 could determine that I will perform x?
if you want to maintain determinism, you will have to accept that my performing x was already determined at t-1
No. It may be, at t-1, causally determined that I will not intend at t1 to perform x at t2. It may still be true, though, that in the nearest possible world in which I instead intend at t1 to x at t2, I perform x at t2. The laws of nature might be such that the event of my forming an intention at t1 to x at t2 is such as to initiate a chain of events that culminates in my x-ing at t2.
We could put t-1 back in my early childhood and suppose that my available intentions were to drink milk or to try to sleep. It seems bizarre to say that the obligatoriness at t-1 of my giving a sum of money to charity at t2 depends on the availability of one or both of these intentions.
Suppose that at t-1 you promised your mom that you would give money to charity at t2. Suppose that you have full information and are perfectly rational (after all, we’re talking about objective obligations here). So you know that unless you try to get some sleep at t-1, you won’t give money to charity at t2. So what’s so strange about saying that you are, as of t-1, objectively required to give money to charity at t2? After all, you did promise to do so, and there is a way, as of t-1, for you to ensure that this obligation will be fulfilled at t2: namely, by trying to get some sleep at t-1. So there is a way for you to see to it that your obligation is fulfilled. In such a case, what’s so odd about saying that you have such an obligation as of t-1 in virtue of the fact that there is at t-1 some intention that you could form at t-1 that would ensure that you would give money to charity at t2, thereby keeping your promise and fulfilling your obligation to do so?
If you think this view is bizarre, what alternative view would you suggest? That you’re objectively required, as of t-1, to give money to charity at t2 even if there is no way for you to secure at t-1 that this obligation is fulfilled? That you’re not objectively required, as of t-1, to give money to charity at t2, which is something that you promised to do, even though there is something you can intend to do now that would ensure that this obligation will be fulfilled?
I think that the only oddness that one may initially feel about such a case stems from the fact that we’re talking about objective obligations. It may seem odd to say that you ought to do x in order to ensure that some later obligation will be fulfilled when you can have no idea that x is necessary to fulfill that future obligation. But that’s just because our intuitions are usually focused on subjective oughts.
Thanks Doug, those are helpful replies. I’m suspicious of objective oughts, so I think that is playing a role here. Now you’ve drawn my attention to this aspect of your view, I wonder about the validity of your objection in the post to an alternative view that “such a prescription would be utterly pointless, for not even a fully informed and perfectly rational agent could make any practical use of it.” But you probably have a better worked out view of what objective oughts must be useful for than I do, so I won’t try to press this as an objection!
You ask for an alternative view: it’s not clear to me why my obligation at t-1 to x at t2 should be dependent on the availability of any intention at t-1. Why couldn’t it be dependent rather on the fact that an effective intention (specifically: an intention to x) will (or perhaps: may) become available to me at some later time prior to t2? If I promised to send someone a card on their birthday next year, aren’t I obligated for the entire time between the making of the promise and the sending, even if (because of indeterminism) I can’t at a particular moment (e.g. right after making the promise) secure that I will send it (so long as I would be able to form an effective intention and send it at the appropriate time)? Even if indeterminism turns out to be false, it seems odd to make the notion of my being obligated between the time of my promise and my action in a case like this depend on the falsity of indeterminism.
In reply to my example you wrote: “Suppose that you have full information and are perfectly rational … So you know that unless you try to get some sleep at t-1, you won’t give money to charity at t2…there is a way for you to see to it that your obligation is fulfilled”. If I have full information about a deterministic universe, wouldn’t I already know what intention I will form at t-1 (you are claiming this information, at any rate, about the later intention at t1)? So how do I “see to it” that my obligation is fulfilled just when I form this intention? Wasn’t it already secured? Does every intention I form between now and the latest obligation I will in fact fulfil in my life “secure” my fulfilling that obligation? Then I shall be pleased to learn that I have spent my life being so dedicated to meeting my final obligation, whatever it is!
Hi Simon,
You ask for an alternative view: it’s not clear to me why my obligation at t-1 to x at t2 should be dependent on the availability of any intention at t-1. Why couldn’t it be dependent rather on the fact that an effective intention (specifically: an intention to x) will (or perhaps: may) become available to me at some later time prior to t2?
Well, I gave two reasons in my original post. I gather that you don’t think much of the second one — the one that comes from Goldman. But what about the other — the one that comes from Chappell? And do you like the implication of this proposed alternative view in Case PP1 (see the second comment)? The implication is that Professor Procrastinate ought, as of t0, to accept the invitation at t1 even though the result will be that he endlessly procrastinates, never writing the review.
If I have full information about a deterministic universe, wouldn’t I already know what intention I will form at t-1
Okay, so let’s just assume that you have full information about the future effects of your available actions and intentions but don’t know what you’re actually going to intend to do now.
So how do I “see to it” that my obligation is fulfilled just when I form this intention?
By forming the relevant intention — the intention to get some sleep, which will, we’re assuming, result in your giving the money to charity at t2 — you put a causal chain into motion that results in your giving money to charity at t2. Isn’t that a case of your seeing to it that your obligation is fulfilled? Note that you may be determined not to intend to get some sleep as well as not to give any money to charity at t2. But I’m assuming that there is a compatibilist notion of ‘could’ such that you could have formed the intention to get some sleep (that it was in the relevant sense available to you) and that you could have given some money to charity at t2 (that doing so was an act available to you).
Does every intention I form between now and the latest obligation I will in fact fulfil in my life “secure” my fulfilling that obligation?
No. What ever gave you that idea? Nothing I said, I hope.
I’m with Simon on this. I’m not really convinced of Richard’s argument. Here it is again:
P1. An agent can’t rationally intend to do what she knows that she won’t do.
P2. And note that any act that is not securable is an act that won’t be performed.
P3. So, no fully informed and perfectly rational agent could intend to perform an act that’s not securable.
C. Therefore, unless we want to hold that agents can be objectively rationally required to perform acts that they can’t rationally intend to perform, we must deny that an agent can be objectively rationally required to perform an act that’s not securable
I think on the kind of view Simon is holding (and I agree with it) P2 just needn’t be true. It’s not necessarily the case that the agent won’t do the act at the future moment even if she cannot adopt an intention that would secure that she does the act. She can adopt the required intention later (and she can fail to do so too). P3 also strikes me as false. I can intend to do things that I am fairly certain I will do. And this doesn’t seem to be due to a failure of rationality or a lack of information. I intend to watch more of The Wire tonight even if I know that my watching The Wire is not securable for me currently – I can come to rationally change my mind. All of this makes the conclusion false too. Agents can rationally intend to do acts that are not securable and, even if they don’t intend these actions yet, they can come to intend them. So, these acts can be morally required of them.
I should say what is motivating my resistance. It is the Kantian background idea that it is built into the notion of requirements and imperatives that we act under the idea of freedom. If we lack such freedom, then it’s not clear in what sense we can be required either.
Jussi,
Do you deny this: ‘if counterfactual determinism is true, then P2 is true’?
And if counterfactual determinism is false, I said that I’ll need to change the definition of ‘securable’ such that it requires only that there be some chance of securing the relevant act in order for it to be securable. And do you agree that if we define ‘securable’ in this way, P2 will be true, for if there is no chance of securing the relevant act, then it won’t be performed?
It seems to me that P2 is true on the original definition of ‘securable’ if counterfactual determinism is true, and P2 is true on the modified definition of ‘securable’ if counterfactual determinism is false.
If determinism is true, it follows that every intention I in fact form is such that if I form it, I will fulfil the final obligation in my life that I will in fact fulfil. And doesn’t forming this intention then constitute my “securing” my fulfilling that obligation, according to your definition?
Why assume this? It seems ad hoc.
I don’t see why the alternative view would be committed to this implication, sorry. Maybe, as your discussion with Jussi seems to indicate, you want to define “securable” such that an action that is not securable cannot be performed. We could then accept that “ought implies securable”. But maybe this is no more interesting than, for example, “ought implies non-contradictory”. The interesting dependence of ought on possible intentions might be something else, such as “ought implies could be (now or in the future) effectively intended”.
Hi Simon,
I never defined ‘secure’. I defined ‘securable’. If you want me to define ‘secure’, I would, just off the cuff, suggest something along the following lines: S secures the performance of x at tj by y-ing at ti =def. y-ing at ti causes, either directly or indirectly, the performance of x at tj. So even if my having intention I1 at ti causes the performance of x at tj, it doesn’t follow that “every intention I form between now and the latest obligation I will in fact fulfil in my life ‘secure’ my fulfilling that obligation.” Only I1 secures the fulfillment of that obligation.
As to why make the assumption in question, I’m assuming that what S objectively ought to do is what S would do if S were perfectly rational and fully informed with regard to the relevant options available to him. I don’t see why information about what S is, in the actual world, causally determined to intend to do at this moment is relevant. We’re not interested in what he’s going to intend to do given his lack of full information and his lack of perfect rationality. We’re interested in what he would do if the antecedent conditions were different — specifically, we’re interested in what he would do in the closest possible world in which he is perfectly rational and fully informed about the nature of his options.
Maybe, as your discussion with Jussi seems to indicate, you want to define “securable” such that an action that is not securable cannot be performed.
I don’t believe anything that I said indicates that. I said that an act that is not securable is one that won’t be performed. I did NOT say that an action that is not securable is one that cannot be performed. I think that a set of actions is personally possible for S at ti (that is, is, as of ti, available for S to perform at some later time tj) if and only if the first act in that set is securable by S at ti, and the next (subsequent) act in that set will be securable by S once S has performed the first act, and so on for all the temporal steps in that set of actions.
I don’t see why the alternative view would be committed to this implication, sorry.
Isn’t it true that, in Case PP1, “an effective intention (specifically: an intention to x [i.e., to write the review]) will (or perhaps: may) become available to [Professor Procrastinate] at some later time prior to t[10]”?
So doesn’t your view imply that Professor Procrastinate ought, as of t0, to accept the invitation even though the result will be that he endlessly procrastinates, never writing the review?
Isn’t your view just what I call possibilism above? If not, perhaps you could spell out your view more precisely so that we can evaluate its implications.
Doug – a small nitpick: I think your P2, as stated, is too weak to support the conclusion. Consider: it might be that an agent won’t perform x, but this is only because they don’t intend to x. That fact doesn’t suffice to make intending x objectively irrational. We need the stronger claim that even if they had now intended to x, this intention would have proved ineffective. I take it that this also follows from the non-securability of x, so the strengthened version of P2 will still be true.
Richard: Good point. I accept your proposed revision.
Hi Doug,
I never defined ‘secure’. I defined ‘securable’
The adjective ‘securable’ usually has an analytic connection with the verb ‘secure’! Also, you did write in your main post:
as Holly S. Goldman (now Holly Smith) argues, the range of alternatives that S can be obligated, as of ti, to perform can be no wider than those which S can, at ti, secure.
Your view together with your new definition of “secure” appears to contradict Goldman’s claim.
we’re interested in what he would do in the closest possible world in which he is perfectly rational and fully informed about the nature of his options.
Don’t you want to introduce an advisor here? Surely, in the world in which he was perfectly rational, PP would accept the invite and then write the review?
Isn’t it true that, in Case PP1, “an effective intention (specifically: an intention to x [i.e., to write the review]) will (or perhaps: may) become available to [Professor Procrastinate] at some later time prior to t[10]”?
That’s helpful I think. I was imagining PP to be such that no effective intention to write the review will ever become available to him. I see now (I think) that you are thinking of PP as someone for whom such intentions would become available if he accepted the invite, but he will never form them as a result of his disposition to procrastinate. If you describe him this way, I don’t see why we shouldn’t describe him as obligated to both accept the invite and write the review.
Hi Simon,
The adjective ‘securable’ usually has an analytic connection with the verb ‘secure’
That’s right. Any act that is securable by S at ti is one that S can secure at ti. But, from that, you can’t infer that it is this or that intention or all intentions that secure(s) that S performs that act.
Your view together with your new definition of “secure” appears to contradict Goldman’s claim.
I’m not seeing this. Could you spell it out for me?
Don’t you want to introduce an advisor here? Surely, in the world in which he was perfectly rational, PP would accept the invite and then write the review?
Good point. Maybe I should appeal to an advisor. I was thinking that we’re talking about the closest possible world in which he is perfectly rational at ti and fully informed about the nature of his options at ti. We should assume that his information and degree of rationality are the same at other times.
If you describe him this way, I don’t see why we shouldn’t describe him as obligated to both accept the invite and write the review.
But the question is: Is he obligated, as of t1, to accept the invite? Some hold both (1) that he is, as of t0, obligated to both accept the invite at t1 and write the review at t10 and (2) that he is, as of t0, obligated to decline the invite at t1.
Hi Doug,
OK I finally get your point about which intentions can be said to ‘secure’ actions. I had misread your account and thought that the relevant set of intentions was an agent’s complete set of intentions at a particular time. Thanks for clarifying.
If anyone is still reading, here is the more-or-less final formulation of securitism. Again, thanks for the help.
Securitism:
Let ‘αi’ and ‘αj’ be variables that range over sets of actions that are mutually performable by an agent, S. These sets may include only a single act or multiple acts performable at consecutive or inconsecutive times.
It is, as of ti, objectively rationally permissible for S to perform a set of actions, αj, beginning at tj (ti ≤ tj) if and only if, and because, at least one of the objectively rationally permissible maximal sets of actions that are securable by S at ti includes S’s performing αj.
A set of actions, αj, beginning at tj (ti ≤ tj) is securable by S at ti if and only if there is a set of actions, αi (αi is a subset of αj), such that all of the following hold: (1) S would perform αj if S were to perform αi, (2) S would perform αi if S were to intend, at ti, to perform αi, and (3) S is not, at ti, in a state that’s incompatible with S’s intending to perform αi. (Intentions should be understood broadly to include plans, policies, resolutions, and the like. And I’m assuming that counterfactual determinism is true.)
S performs a set of actions, αj, if and only if S performs each action in that set.
A set of actions, αj, that is securable by S at ti is a maximal set if and only if there is no set of actions that is securable by S at ti that includes αj as a proper subset.
A set of actions, αj, includes a set of actions, αi, if and only if every element in αi is an element in αj — that is, if and only if αi is a subset of αj.
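In the shorthand used earlier, the revised securability clause reads as follows (a restatement of clauses (1)–(3), nothing more):

```latex
% Final 'securable' clause: alpha_j (beginning at t_j >= t_i) is securable
% by S at t_i iff some subset alpha_i satisfies conditions (1)-(3).
\[
\mathrm{Securable}(\alpha_j, S, t_i) \iff \exists\, \alpha_i \subseteq \alpha_j :
\begin{cases}
  \mathrm{Performs}(S, \alpha_i) \mathbin{\Box\!\!\to} \mathrm{Performs}(S, \alpha_j) & (1)\\
  \mathrm{Intends}(S, \alpha_i, t_i) \mathbin{\Box\!\!\to} \mathrm{Performs}(S, \alpha_i) & (2)\\
  \neg\,\mathrm{Incompatible}\bigl(S\text{'s state at } t_i,\ \mathrm{intending}\ \alpha_i \text{ at } t_i\bigr) & (3)
\end{cases}
\]
```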