What fundamentally exemplifies the property of practical rationality?
- According to atomism, it is fundamentally each particular intention that an agent might have at a time that counts as either rational or irrational for the agent to have at that time.
- According to holism, it is fundamentally each set of intentions that the agent might have at a time that counts as either rational or irrational for the agent to have as her total set of intentions at that time.
In this post, I shall argue for holism, and against atomism.
Atomism can allow that, derivatively, we can say that a certain set of intentions is “rational” for you to have as your total set of intentions at a time. However, for atomism, that only means that each particular intention in that set is, individually, rational for you to have at that time.
Similarly, holism can allow that, derivatively, a particular intention may count as a “rational” intention for you to have at a time. However, for holism, that only means that that intention is a member of a set of intentions that it is rational for you to have as your total set of intentions at that time.
There is a simple problem with atomism. The following two principles seem undeniable.
- Buridan’s Ass (BA). An agent may be in a situation in which it is equally rational for the agent to intend either (though not both) of two incompatible courses of action. For example, it is rational for Buridan’s Ass to intend to go to the Left and not to the Right; but it is simultaneously equally rational for the Ass to intend to go to the Right and not to the Left.
- The Irrationality of Inconsistent Intentions (III). It is irrational simultaneously to intend a course of action A and also to intend a course of action B such that it is logically impossible for one to do both A and B.
The only way in which atomism can explain what it means for a set of intentions to be rational is by invoking an agglomeration principle for rational intentions: If it is rational for an agent at a time t to intend A, and also rational for the agent at t to intend B, then it is rational for the agent at t to both intend A and intend B.
Clearly, however, this agglomeration principle is incompatible with the conjunction of Buridan’s Ass (BA) and the Irrationality of Inconsistent Intentions (III).
If we apply the agglomeration principle to BA, it follows that it is rational for the Ass simultaneously to intend to go to the Left and not to the Right, and to intend to go to the Right and not to the Left. And that obviously contradicts the III.
Holism, by contrast, does not need the agglomeration principle. What it needs is the principle that rational permissibility distributes over conjunction: If it is rational for the agent at t to both intend A and intend B, then it is rational for the agent at t to intend A (and, likewise, rational for the agent at t to intend B). The case of Buridan’s Ass creates no difficulties for the distribution principle, even given the III.
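To make the structure of the argument explicit, here is one schematic way of writing the two principles and the Buridan case. The notation, and the simplified rendering of BA, are my own shorthand for exposition, not part of the principles as officially stated above:

```latex
% Notation (for exposition only): Rat_t(phi) = "it is rational for the agent
% at t that phi"; I(X) = "the agent intends X"; l = go Left, r = go Right.

% Agglomeration (the principle the atomist needs):
\mathrm{Rat}_t\bigl(I(A)\bigr) \wedge \mathrm{Rat}_t\bigl(I(B)\bigr)
  \;\rightarrow\; \mathrm{Rat}_t\bigl(I(A) \wedge I(B)\bigr)

% Distribution (the principle the holist needs):
\mathrm{Rat}_t\bigl(I(A) \wedge I(B)\bigr) \;\rightarrow\; \mathrm{Rat}_t\bigl(I(A)\bigr)
  \quad \text{(and likewise for } I(B)\text{)}

% BA, in simplified form (ignoring the "and not the other direction" clauses):
\mathrm{Rat}_t\bigl(I(l)\bigr) \wedge \mathrm{Rat}_t\bigl(I(r)\bigr)

% III, applied to the incompatible courses of action l and r:
\neg\,\mathrm{Rat}_t\bigl(I(l) \wedge I(r)\bigr)

% Agglomeration applied to BA yields Rat_t(I(l) and I(r)), contradicting III.
% Distribution applied to BA yields nothing problematic: BA never supplies its
% antecedent, so no conflict with III arises.
```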
Here is a further difficulty for atomism. Holism can easily provide an account both of what it is for an intention to be rationally permissible for you to have at a time, and of what it is for an intention to be rationally required of you at that time:
- An intention is rationally permissible for you to have at a time t if and only if it is a member of some set of intentions that it is rational for you to have as your total set of intentions at t.
- An intention is rationally required for you to have at a time t if and only if it is a member of every set of intentions that it is rational for you to have as your total set of intentions at t.
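In schematic form (the notation is again only my gloss): writing R_t for the family of sets of intentions that it is rational for you to have as your total set of intentions at t, the two definitions become:

```latex
% R_t = the family of sets of intentions that it is rational for you to have
% as your total set of intentions at t.

\mathrm{Permissible}_t(i) \;\iff\; \exists S \in \mathcal{R}_t \; (i \in S)

\mathrm{Required}_t(i) \;\iff\; \forall S \in \mathcal{R}_t \; (i \in S)
```

On this rendering, the Buridan case is one in which more than one total set is rational: a set containing the Left-intention and a set containing the Right-intention both belong to R_t, so each of those intentions is permissible, but neither is required.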
Atomism, by contrast, needs two separate principles – one about when an intention is rationally permissible, and another about when an intention is rationally required.
Perhaps these problems for atomism can be fixed somehow. But prima facie it seems that a holistic account of rational intentions is much more plausible than the rival atomistic account.
Why not say there are both types of rational requirements?
I do say that there are both types of rational requirements (and both types of rational permissions as well)!
However, if there are both types of rational requirements and permissions, surely there is some way of giving a unifying explanation of these two types — and the simplest way of unifying them is by seeing one of these types as derivative from the other.
Atomism sees the requirements and permissions that apply to the whole set of intentions that the agent has at the time as derivative from the requirements and permissions that apply to particular intentions; holism sees the latter requirements and permissions as derivative from the former.
Two quick questions, Ralph.
First, a clarificatory question on your definition of atomism. It reads like a thesis about what the fundamental bearers of rationality are. Suppose this is the correct reading. Is an atomist then precluded from adopting the following thesis about the evaluation of intentions: in evaluating the rationality of a particular intention, we can appeal to the presence or absence of other intentions in the agent’s psychology?
Consider an imperfect analogy. Suppose I am an atomist about well-being. That is, I think the fundamental bearers of value in determining how well a person’s life goes are time-slices of that person of a suitable length. This is a thesis about what, fundamentally, has value in the context of determining a person’s well-being. But am I thereby precluded from endorsing the following thesis about the evaluation of time-slices, namely, that how much value a particular time-slice contributes to a person’s well-being depends on what happens in other time-slices (perhaps an undertaking in a particular time-slice is more valuable if, in an earlier time-slice, the person has made a promise to engage in that undertaking)?
I guess one way to put my question is: Does atomism about the bearers of value lead to atomism about the evaluation of those bearers of value?
On your second argument against atomism, is it open to the atomist to say the following in reply? There is just a single principle, which tells us which intentions are rationally required for A to have at t. Those intentions that are inconsistent with the rationally required intentions are ones that A is rationally required not to have at t. Rationally permissible intentions are then ones that A is not rationally required not to have at t.
Thanks, Erik!
To reply first to your clarificatory question: I’m not talking about the evaluation of intentions that the agent actually has – I am talking about evaluating merely possible intentions (or sets of intentions), as it were, as “candidates” for (members of) whatever set of intentions the agent has at the time.
Now, whenever we evaluate merely possible attitudes, as candidates for the attitudes that a given agent is to have at a given time, we are evaluating these candidate attitudes against a certain background of facts that are “held fixed”. These are normally the facts that are simply “given” to the agent – the facts that the agent cannot change through the practical reasoning that she performs at that time. Plausibly, this includes the facts about the agent’s sensory experiences, and her desires, feelings, and emotions, as well as the facts about all the intentions that the agent actually held in the past.
However, these facts that are “held fixed” will not normally include the facts about any of the intentions that the agent actually has at the very time in question – since one option that is open to her is not to have those intentions at that time.
You’re right that an atomist could technically work with just the notion of rational requirements, and define permissions in terms of requirements – so long as they are willing to say that there are rational requirements of both of the following kinds: (a) rational requirements that forbid having certain intentions, and also (b) rational requirements that require having some intention or other of a certain sort. Unless they make room for requirements of both kinds, the atomist can’t possibly give a plausible account of rationality without having primitive permissions as well as requirements.
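As a rough illustration (the notation is, once more, only a gloss of my own): let Req_t(...) express a rational requirement on the agent at t. Then the two kinds look like this:

```latex
% Kind (a): a requirement forbidding a particular intention x:
\mathrm{Req}_t\bigl(\neg\, I(x)\bigr)

% Kind (b): a requirement to have some intention or other of a certain sort,
% e.g. (plausibly) the requirement that the Ass settle on one of its options:
\mathrm{Req}_t\bigl(I(l) \vee I(r)\bigr)

% Note: the kind-(b) requirement does not entail Req_t(I(l)) or Req_t(I(r)).
% Neither particular intention is required, yet having neither is forbidden;
% without requirements of this kind, the atomist would need primitive
% permissions as well as requirements.
```

Without kind-(b) requirements, the atomist could not capture cases like Buridan’s Ass, where, plausibly, the agent is required to settle on one of the options even though no particular option is required.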
Thanks, Ralph, for your prompt and helpful reply!
Regarding my first question, I had the following in mind:
There are two candidate intentions at issue: the intention to A (go to the Left) and the intention to B (go to the Right). And there are three possible psychological states the ass could have: (1) have only the A-intention, (2) have only the B-intention, and (3) have both the A-intention and the B-intention.
I was imagining the atomist saying the following. In evaluating the three possible psychological states, the fundamental bearer of rationality is each particular intention. The intention in each of (1) and (2) is rational. In (3), however, the A-intention is irrational, because in assessing the rationality of the A-intention, we need to assess it against the background that the (incompatible) B-intention is also present in this possible psychological state. The same goes for the B-intention in (3).
Atomism, as you defined it, says that “it is fundamentally each particular intention that an agent might have at a time that counts as either rational or irrational for the agent to have at that time.” It says nothing, it seems, about how the rationality of candidate intentions is to be assessed. In particular, it doesn’t say that intentions cannot be assessed in the light of the presence of other candidate intentions. So I was wondering whether one can remain an atomist in this sense while also accepting a picture of evaluating the rationality of intentions that makes reference to the presence of other intentions.
Thanks again, Erik!
I don’t believe that the view that you’re describing is consistent with atomism as I was thinking of it. Let me explain.
According to atomism, as I loosely formulated it, the fundamental bearer of rationality is the (possible) intention. What, more precisely, is this “(possible) intention”? I propose that it is, strictly speaking, a certain state of affairs. Specifically, it is the state of affairs of the relevant agent’s intending the relevant course of action at the relevant time (in a situation in which all the relevant background facts are “held fixed” – as I put it in my last comment).
As I understand it, if this state of affairs is the “fundamental bearer” of rationality, then either it is rational, or it is not rational, without relativization to anything else. That is, it is either rational simpliciter, or irrational simpliciter.
On the view that you’re suggesting, the state of affairs of the agent’s having the A-intention is not irrational simpliciter. It is irrational to have it as part of psychological state (3) – i.e., it is irrational to have the A-intention together with the B-intention – but it is perfectly rational to have it as part of psychological state (1) – i.e., it is rational for the agent to have it as their sole intention about how to act at the relevant time.
In other words, I am presupposing that something is the absolute unrelativized bearer of the property of rationality. Atomism and holism give different answers to the question of what this bearer is. The view that you’re suggesting seems to reject this presupposition, and so doesn’t count as a version of atomism as I was understanding it.
Dear Ralph Wedgwood,
Thank you for this; it is very interesting. I was wondering how to understand the atomism/holism distinction in relation to the formulation of the agglomeration principle.
Your agglomeration principle states: If it is rational for an agent at a time t to intend A, and also rational for the agent at t to intend B, then it is rational for the agent at t to both intend A and intend B. I think you are right that this principle as currently formulated is incompatible with BA and III.
But your formulation is slightly different from Michael Bratman’s Principle of Intention Agglomeration (PIA): If at one and the same time an agent both rationally has an intention to A and an intention to B, then it should be rational for the agent to have an intention to A and B.
For Bratman, what is agglomerated is the content of the two intentions. He thinks that, given the roles of intention in practical reasoning, there is rational pressure for agents to put their various intentions together into a larger one. As formulated, the PIA seems to be straightforwardly the opposite of the Irrationality of Inconsistent Intentions. And the PIA is consistent with the conjunction of BA and III (because it is not rational to agglomerate the content of the two intentions in BA).
My question, then, is how to understand the atomism/holism distinction in relation to the PIA. If we need this principle to assess the rationality of an intention, does that automatically imply holism? If the idea is to put intentions together into one larger planning intention, such that after agglomeration my intention becomes the intention to do both A and B, then it is technically still a particular intention (and not a set of intentions) that counts as either rational or irrational for the agent to have at that time. And so, could the PIA not be a way to defend atomism? Perhaps this is really stretching what atomism means, and so maybe this just shows again that we must accept holism. I’m not sure. But it would perhaps be helpful to clarify what exactly atomism means in light of this, and it might point to another interesting point, namely that the atomism/holism question hinges in part on whether there is indeed this rational pressure to put one’s intentions together into a larger one, or whether there is no such pressure and we should merely be concerned with coherence between intentions.
Kind regards,
Niels de Haan
Thanks, Ralph, that clears up my confusion!
Thanks, Niels (if I may)!
The principle of intention agglomeration (PIA), as you formulate it, is certainly distinct from the agglomeration principle that I discuss in my original post. One big difference is that the PIA concerns the intentions that the agent actually rationally has, and not merely the intentions that it is rationally permissible for the agent to have. For this reason, the PIA is not open to the objections that I raise against the agglomeration principle that the atomist needs.
However, I don’t think that the PIA will be enough for the atomist to defend their position. Here is one way to see this. Suppose that an agent – call him Albert – violates the PIA. That is, Albert intends to do A and intends to do B, but does not intend to do A and B.
According to the PIA, Albert is irrational in some way. But the irrationality need not lie in his intention to do A – that might be in itself a perfectly rational intention; nor, for the same reasons, need this irrationality lie in his intention to do B. And in this case, these are the only two intentions that Albert has at this time.
Moreover, a proponent of the PIA cannot even say that Albert is irrational for lacking the intention to do A and B, because that too might be perfectly rational – perhaps Albert might equally rationally have lacked the intention to do B, and intended to do C instead. So, according to the PIA, Albert’s irrationality lies in the whole set of intentions that he has at this time – viz. a set that includes the intention to do A, the intention to do B, and nothing else (and so does not include an intention to do A and B).
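To lay the case out schematically (my own rendering of the argument just given, in the shorthand from the original post):

```latex
% Albert's total set of intentions at t (and nothing else):
S \;=\; \{\, I(A),\ I(B) \,\}, \qquad I(A \wedge B) \notin S

% The PIA implies that Albert is irrational in some way. But:
%   - I(A) may be, in itself, a perfectly rational intention;
%   - I(B) may be, in itself, a perfectly rational intention;
%   - lacking I(A and B) may also be perfectly rational in itself
%     (Albert might instead have rationally intended C rather than B).
% So the irrationality cannot be located in any particular intention, nor in
% any particular absence of an intention; the only remaining bearer is the
% total set S itself.
```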
In other words, Bratman and other proponents of the PIA seem committed to holism, and to rejecting atomism.
It seems to me that there is a problem with the actual “size” of intentional atoms, tied in with how adequately the intention represents the world. The ass presumably has a rational intention never to get caught in decisional deadlocks, which cannot easily be seen either as a simple agglomeration or as a whole, but can be seen as part of a structured hierarchy.