In the previous post, I applied Williamson’s anti-luminosity argument to argue that there must be some moral truths that cannot be known. Now I want to look at one of the best arguments against the view at the other end of the scale – that all moral truths could be unknowable. I will use basic act-consequentialism as an example. Yet similar problems would be faced by Rossians who think that we can never know the overall ‘duty proper’ in particular circumstances, and by contractualists whose view would imply that we can never know which principles are non-rejectable because we cannot know what kinds of standpoints the principles create for individuals. The argument is an application of Wright’s reading of Wittgenstein. It is based on the idea that, if all moral truths turned out to be unknowable, it is not clear whether anything would count as intending to do the right action because it is right.
A standard objection to actual-consequence act-consequentialism (AC) is the paradox that, if we try to pursue the options that have the best consequences, we end up with less value. The standard reply is to distinguish between deliberation procedures and the criterion of rightness and wrongness; AC is then said to be only the latter. But this still seems to assume that there are moral deliberation procedures we can use to settle on an intention to do what is right because it is right. And if we can never know whether an action satisfies the criterion, it is not clear how any deliberation procedure could play such a ‘motive of duty’-intention-forming role.
I’m pretty much going to assume only that we can never know whether some action has the best consequences of the options we have in particular circumstances. The case for this has been made much better elsewhere, for instance in James Lenman’s ‘Consequentialism and Cluelessness’. The idea is that there are (almost?) always some directly or indirectly identity-affecting actions in our option-sets, and the effects of such actions on world-histories are unknowable. After enough generations, different individuals will exist as a consequence of each option. These individuals will act in different ways, and thus contribute different amounts of value to the world. We cannot then know which of the identity-affecting actions will create the best world-history.
Why would all of this threaten the idea that we can intend to do the right thing because it is right? Well, here I will quote at length Crispin Wright from his Realism, Meaning, and Truth (p. 25):
‘Consider an analogy. Suppose I place before you two small, identical-seeming boxes. Each has been sealed and cannot easily be opened. I tell you that each contains a vacuum, and that enclosed in one of them, but not the other, is a beetle, fashioned of a highly volatile substance that will vaporize tracelessly if it comes into contact with air. There is no betraying rattle or other symptom – difference in weight e.g. – to suggest which box this might be. The other contains an identical amount of the same material, used as a lining. In such circumstances, there is serious doubt whether you can so much as try to pick the right box. … If you think you can aim at the right box, ask yourself what this aiming would consist in. You might, for instance, reach out and tap one of the boxes with your finger, but that will not distinguish your intention from that of picking the wrong box, or just picking a box. You might accompany your gesture with the words, thought or spoken, ‘This is the box with the mysterious beetle.’ But your having the thought is no guarantee of the requisite intention; you might have had it while picking one of the boxes quite aimlessly. We can make the thought carry such a guarantee, perhaps, if we construe it as, in effect, a performative: ‘I hereby intend to pick the box with the mysterious beetle in it.’ But this construal sheds no light on what, if anything, makes such an intention possible in these outré circumstances; the question merely becomes whether the performative thought can succeed. A similar fate will befall any other candidates for psychological processes or episodes which are putatively characteristic of the controversial intention.’
Wright then goes on to note that having an intention is not an atomic state but rather depends on many other things that are true of the agent. Intentions are essentially part of our aims and projects, which are constituted also by many other wants, and they are tied to our beliefs about how to aim at the particular result, i.e. about which actions are likely to promote the fulfilment of the intention. There is also a connection between the content of the intention and a whole set of reactive attitudes – being satisfied or frustrated ‘when the sequence of events unfolds’. Wright claims that all of this background is missing in the beetle case. There are no sensible motives to pick the right box that would be part of our other projects, no beliefs about how one could rationally go about picking the right box, and nothing that could count as responding with frustration or satisfaction to what happens. For these reasons, nothing seems to count as intending to pick the beetle-box in this case.
Similar things could be said about the intention to do the right thing because it is right, if we can never know whether our actions are right. We could often think that we were trying to do the right thing qua right, but what would distinguish this intention from the intention to do the wrong action, from the intention to do just some action, from the intention to do the action you think others would like you to do, from the intention to do the action you think others would recommend, from the intention to create happiness, and so on? If whether our action is right or not remains something we can never know, then the things that could distinguish this particular intention from the others seem to be missing. Rightness so conceived could hardly be aimed at: the relevant intention could not be integrated into our other aims and projects, no beliefs could be formed about which deliberation-procedure reliably gets us to act in the right way, and no frustration or satisfaction could be felt when it turns out that we did or did not manage to do the right thing.
Thus, it seems that the unknowability of moral truths in general makes it impossible to intend to do the right thing because it is right – to act from the motive of duty. If this is right, then nothing would count as a moral deliberation-procedure either – as a way of arriving at moral intentions. But I think we can intend to do the right action qua right action. So, I think moral rightness cannot be like the elusive beetle in the box.
Jussi,
Interesting post! I’m worried about intentions and acting under risk (since most actions are performed under risk). To keep it simple, suppose I can choose to do either A or B, where each has some probability of producing an unfavorable outcome, but where A maximizes overall expected utility. If we agree that it is actual utility (not expected utility) that makes an action right, can I intend to do what is right in performing A? I take it your answer will be yes. But if so, then why can’t I intend the right action in choosing a box? I use a randomizer (say, a fair coin) that gives each box an equal probability of being picked (as I understand the case, the probabilities of the beetle being in each box are even; if that’s wrong, I’d change the moral case to one of decision under uncertainty). This gives me the best chance available of selecting the box with the beetle, just as choosing A gives me the best chance available of performing the action that would produce the greatest overall utility. In the latter case, I can intend to do what is right by choosing the pure strategy of performing A. So in the former case I can intend to do what is right by choosing the mixed strategy that puts .5(box1) and .5(box2).
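To make the decision-theoretic structure explicit (a minimal formalization of the strategy just described, not part of the original case): if the beetle is in box 1 with probability $q$ and a mixed strategy picks box 1 with probability $p$, the chance of picking the beetle-box is
$$P(\text{success}) = pq + (1-p)(1-q),$$
which on the even-probabilities reading ($q = 0.5$) equals $0.5$ for every $p$ – so the fair coin does as well as any available strategy, pure or mixed.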
“There is also a connection between the content of the intention and a whole set of reactive attitudes – being satisfied or frustrated ‘when the sequence of events unfolds’. Wright claims that all of this background is missing in the beetle case.”
So I’m not sure that all this background really is missing in the beetle case — if I have a desire to make correct guesses about things, I may feel a little pleased with myself if it turns out that I’ve picked the right box. And I may feel differently upon imagining myself picking the right box than the wrong box. This will give me evidence about whether I desire to pick the right box or the wrong box, and the facts about my underlying mental states (which give rise to the differences in emotion) are what make it the case that my desire is one way or the other.
In the case of moral action, I’m even more confident that the background isn’t missing. If I daydream about doing the right thing, I feel pleased, and if I daydream about doing the wrong thing, I feel displeased. That’s evidence that my desires (and the intentions that flow from them) are focused on the right and not the wrong.
I’m assuming here that you don’t mean to be presuming a skepticism about which of our concepts is a concept of rightness. The only thing we’re presuming is that we don’t know which course of action is the right one.
Jussi,
I think this is interesting, but not quite right. I’m somewhat sympathetic to the following two ideas:
(1) You can’t intend to do what you appreciate you do not believe you will do.
(2) You can’t believe what you appreciate you don’t know.
Each of these will be subject to qualifications having to do with minimal rationality and the like, but suppose we grant something along the lines of (1) and (2). All that gets you is that you cannot intend to do the right thing under that guise if you _appreciate_ that you’re not in a position to know what’s right. There’s a difference between X’s being right being a known unknown and an unknown unknown. If it is in the latter category, I just don’t see that you cannot intend to do the right thing. And I don’t see any reason to think it is in the former category, even if we assumed actual outcome act consequentialism. That actual outcome act consequentialism is true is a fact lost on most people. They might believe they don’t know what’s right were they to believe that theory, but since they don’t have that belief, I don’t think they take themselves to be ignorant of which actions are right – which means that they might have the beliefs necessary for the relevant intentions.
Thanks for the interesting comments so far. Sorry that I haven’t responded sooner. Just saw a fellow pea-souper, Chris Heathwood, give a great talk at Oxford. Anyway, to the comments…
Mike,
In the moral case you have in mind, it seems like we can in retrospect know which of the choices has an unfavourable outcome. Given that we have such knowledge, it looks like we can intend to bring about the more favourable outcome. Afterwards we can see if we succeeded, we can test which procedures get us reliably right, we can feel disappointed if the worse outcome comes about, and so on. But the theories I had in mind imply that we cannot have such knowledge or reactions.
About the beetle-picking machine. The worry is that there is a difficulty in specifying what it is about the situation that makes the use of the machine a procedure to settle on an intention to pick the right box rather than a procedure to settle on an intention to pick a random box.
Neil,
The problem is whether there is any sense in feeling pleased about picking the right box if the circumstances are really such that we can never know whether we have picked the right box. Because you will never be able to know that you have picked the right box, you will never be able to feel such pleasure. The idea of imagining is an interesting one, but I wonder if similar problems carry over from intentions to imaginings and daydreamings. What is the difference between daydreaming that you have chosen the best option and daydreaming that you have chosen the second best option if even in the dreams you cannot tell the two apart?
Clayton,
I think I might be with you. But, I think the results are bad enough. It would after all imply that it is not possible for anyone to both believe that AC is the criterion of rightness and intend to do the right thing.
A few things, first about the quoted stuff above. Intentional contexts are also intensional contexts. So it can be true that, using the mixed strategy, I intend to choose the beetle-box and do not intend to choose randomly, even if [](I choose a beetle-box iff. I choose randomly). Similarly, I think I can intend that a fair coin land “heads” even though the chances are even. Suppose you give me (generously, if irrationally) 3 to 1 odds. I’m willing to wager $1 for a chance to make $3 if the coin lands heads. I flip the coin, and it lands heads. If you ask whether that is what I intended to happen, I’m sure the answer would be yes. Similarly again, I can intend to sink a 4-foot putt where my chances of doing that are roughly .5 (make it a 3-foot putt, if you find that hard to believe). So I don’t see why I can’t intend to choose the beetle-box, though my chances of succeeding are just .5.
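(To spell out the arithmetic behind calling those odds generous – my gloss, not part of the comment above: the expected value of the wager is
$$0.5 \times \$3 - 0.5 \times \$1 = \$1 > 0,$$
so taking the bet is rational for me even though my chance of the intended outcome is only .5.)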
What is the difference between daydreaming that you have chosen the best option and daydreaming that you have chosen the second best option if even in the dreams you cannot tell the two apart?
We can imagine situations in which we have false beliefs, or in which there are facts that we do not know. To take an example that’s more extreme but quite ordinary, we can imagine situations in which we do not exist — for example, situations in fictional worlds or after our deaths. If we can imagine scenarios in which we do not exist and thus have no beliefs, it seems that we should be able to imagine situations in which things are some way and we have false beliefs about them.
I sometimes imagine scenarios like this when making charitable contributions. What if, despite all my research, the International Burmese Monks’ Organization misuses my donation, and I never know that I’ve given to a corrupt organization? Even though that’s not a scenario in which I imagine myself experiencing any displeasure, it’s one that I’m displeased to think of. On the other hand, I’m pleased to think that the monks will use my donation to save lives, even if my subjective states in this scenario are identical to those in the former scenario.
A couple of things. First, I find it odd to say that you intend the coin to land ‘heads’ – you might wish it or want it, but intending seems to require more control. The putting case does sound more natural, even if the odds are similar. But, anyway, there is a difference between those cases and the beetle case that does seem significant.
I take it that this is supposed to be the force of Wright’s argument, rather than a worry about the general idea of intending uncertain outcomes: it’s not clear what constitutes success in the beetle case. It is impossible ever to come to know whether the box you’ve picked is the right one – there is no plan available for bringing about an outcome that can be known to be the intended one. If there were, for instance, an X-ray machine that could detect the right box, then the situation would change. Intending would become available, as would all kinds of betting patterns and so on.
Neil,
I think that’s right. But here we have cases where, within a dream, we know the facts about how our plans turned out. So it does seem like you can dream that you intend to do various things in the dream. Now transfer the beetle box case to daydreaming. Maybe you can daydream that you see the beetle through the box – in a dream you can take a sneak peek into the box. In that case you can dream that you intend to pick the right box. But I’m not sure you can transfer that dream of an intention to an actual intention.
There is an interesting suggestion in your first comment where you say that what allows us to transfer the intention from imaginings to actual intentions is a ‘feel’ – some presumably phenomenological what-it-is-likeness of intending the action. Wittgenstein of course makes much of this in the Investigations – according to him it is not obvious how the ‘feel’ could have the power to direct our intentions to different sources.
Jussi – interesting post. Your argument goes:
1) If all moral truths are unknowable, we never intend to do the right action qua right action.
2) We sometimes intend to do the right action qua right action.
=> It’s not the case that all moral truths are unknowable.
My concern is with premise 2. Unless you already thought that at least some moral truths were knowable, you would be unlikely to find premise 2 plausible in the first place. Hence the argument may be question-begging.
Thanks Simon. I know that giving non-question-begging arguments is too hard, and I’m not always sure I want to aim so high. I do see the worry, but I’m not sure the argument is question-begging in the classic sense. Premise 2 doesn’t assume the conclusion as such, and it certainly isn’t outright denied by act-consequentialists who offer their view as a criterion of rightness. Premise 1 does seem to need a further argument (which I had a go at) to show that it is impossible to intend to bring about the satisfaction of an unknowable condition.
Also, come to think of your paper: couldn’t the defender of a response-dependence view similarly say that Zangwill’s argument begs the question by assuming that we can act from the motive of duty?
what allows us to transfer the intention from imaginings to actual intentions is a ‘feel’
I don’t know if I would have characterized the position this way, in terms of an intention being transferred from imaginings to actual intentions.
Rather, the point is just that there can be functional outputs of a desire or intention to act rightly that differ from those of a desire or intention to act wrongly, even in the case when we will never have any evidence about which course of action is right. These are the functional outputs that are exhibited in the way we feel when imagining how things might go. By looking at these functional outputs, we can tell whether we are motivated to act rightly or wrongly.
One other question — are we allowed to appeal to behavior in counterfactual situations here? The truth of the counterfactual “If I knew that the beetle was in the right-hand box, I would’ve picked the right-hand box” could be enough to make it true that I had an intention (or some other complex of motivational states) aimed at picking the beetle. I’m guessing that for some reason, appealing to the truth of these counterfactuals is off limits in this case, otherwise solving Wright’s problem would be way too easy.
Neil,
I hope I get this right this time. I’m not altogether sure what functional outputs are. But it seems like Wright’s argument does hang on holism in attributing mental states. So it might be that one criterion for attributing an intention is the functional output that one feels satisfied when the intention is carried out.
So, once one allegedly has the intention to act rightly, one can check whether one really has that attitude by imagining various scenarios and then checking at what point one becomes satisfied. One problem is that, if the right-makers are really what the AC we have on the table says they are, then we would be satisfied only when we have imagined various complete world-histories and compared their value. My imagination doesn’t stretch that far. If, on the other hand, one becomes satisfied earlier in the imaginings, then either we would have to conclude that the intention was not an intention to act rightly, or that the right-makers are something other than what AC says they are.
About the counterfactuals: I think we are allowed to refer to behaviour in counterfactuals. But the case is supposed to be specified so that that particular counterfactual has an impossible antecedent. We could not possibly know in which box the beetle was.
Sorry about the unnecessary complexity of the ‘functional outputs’ talk. I think you understood me, in any case.
we would be satisfied only when we have imagined various complete world-histories and compared their value. My imagination doesn’t stretch that far
I don’t think you have to stretch that far to do the necessary imaginative work. For the purposes of testing whether someone has good intentions or bad, it should suffice to see whether she is pleased when she imagines something that contributes to better total consequences as far as she imagines. So it’s enough if she is satisfied when imagining a situation where someone’s suffering is averted, while being agnostic on what downstream effects that has. When we imagine a scenario, we don’t always fill it out completely (when I imagine meeting my friend for dinner, I don’t specifically imagine all his vital organs, even though they’re important for dinner going smoothly).
that particular counterfactual has an impossible antecedent. We could not possibly know in which box the beetle was.
I don’t think this can be the right thing to say here — surely it’s metaphysically possible to know! I think that’s all we need to have the counterfactual available to appeal to.
I’m not sure whether that would work. Being pleased about a contribution to the total consequences seems to be just as good evidence that your intention is to maximise the expected value of the consequences. This might be a good intention to have, but it wouldn’t be an intention to do the right thing because it is right in the considered framework.
With regard to potential question-begging, I can imagine two relevant kinds of Act Consequentialism.
One kind, let’s call it “skeptical AC”, proposes a view that contains AC as a criterion of rightness (CR), but owing to full-blooded skepticism about knowledge of such matters, proposes no deliberation procedure (DP) at all. Skeptical AC denies that we are in a position to have any idea at all about what is and is not right. I see no reason to think that someone who holds skeptical AC would also try to hold that he “…sometimes intend[s] to do the right action qua right action.”
Another kind, let’s call it “non-skeptical AC”, proposes AC as a CR, but also proposes a DP by which it is thought we can produce the best consequences overall. Non-skeptical AC may accept that of any individual action we cannot know whether it is right or not, but does not deny that we have some idea about what’s right or not in general. (In general, acts that are decided on by DP will be right). Someone who holds non-skeptical AC would presumably think that he “…sometimes intend[s] to do the right action qua right action”, and think he does this by intending acts chosen by DP.
If this is right, then Jussi’s argument (at least in the way I presented it) would be question-begging if the difference between skeptical and non-skeptical AC is precisely a difference about whether any moral truths are knowable. But this is just where the difference lies. Skeptical AC supposes not only that there’s no way to know which particular actions are right, but that there’s no way to even get in the ballpark about this. Non-skeptical AC suggests that even if we can’t know which particular actions are right, we can know that someone who follows DP will perform more right actions than wrong actions, or more right actions than someone who does not, or something similar. A piece of moral knowledge like this is enough to radically change the beetle-box analogy you used: it’s as though you know in advance that one of the boxes has a greater chance of containing the beetle (because it’s selected by DP). If you pick it for that reason, then surely you can be aiming at the beetle in doing so.
Maybe Jussi is doubtful that non-skeptical AC is tenable because he doubts that a DP could be known to be generally reliable in choosing actions meeting the CR unless we also knew of some particular actions that they met the CR. I can see why one might have epistemological worries here, but I think this particular suggestion would be a non sequitur.
Suppose you need to assess the records of two meteorologists who have given a 24-hour chance-of-rain forecast for London every day for the past year. Suppose also that these meteorologists never give a flat “rain” or “no rain” forecast, but always a probability ranging from 1% to 99%, and that all you have to go on, apart from these forecast records, is data about whether it rained the next day or not. And suppose that their assessments diverged such that meteorologist 1 tended to give high numbers on the days before it rained and low numbers before the dry days, while meteorologist 2 tended to give numbers that were pretty randomly related to whether there was rain the next day. I think you could say quite confidently that meteorologist 1 was more reliable than meteorologist 2, even though you couldn’t say of any of his particular forecasts that it was correct, or of any of meteorologist 2’s particular forecasts that it was mistaken (even when there really is a 99% chance of rain, it will fail to rain one time in a hundred: a 99% rain forecast can be correct even on a day when it doesn’t rain).
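To make the comparison concrete, here is a minimal sketch (my illustration, not anything in the example above): it uses simulated data and the Brier score – the mean squared error of probability forecasts, one standard way of making ‘more reliable’ precise – to separate the two records without ever classing a single forecast as correct or mistaken.

```python
import random

# Illustrative sketch: compare two probabilistic forecasters on a year of
# simulated outcome data using the Brier score. All names and numbers here
# are assumptions for the sake of the example.
random.seed(0)
days = 365
rained = [random.random() < 0.4 for _ in range(days)]  # simulated outcomes

# Meteorologist 1 tracks the weather; meteorologist 2 answers randomly.
# Forecasts are probabilities between 1% and 99%, as in the example.
forecasts_1 = [0.9 if r else 0.1 for r in rained]
forecasts_2 = [random.uniform(0.01, 0.99) for _ in range(days)]

def brier_score(forecasts, outcomes):
    """Average of (forecast - outcome)^2; lower means a more reliable record."""
    return sum((f - int(o)) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

print(brier_score(forecasts_1, rained))  # about 0.01: tracks the rain closely
print(brier_score(forecasts_2, rained))  # about 0.33: no better than chance
```

The scores rank the two records decisively even though no individual forecast is ever convicted of error – which is the structure of the reliability claim above.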
(Jussi – on your last point: The paper of mine you heard is different because the response-dependence theorist won’t generally want to deny that we act from the motive of duty, unlike the defender of skeptical AC).
Simon,
Thanks. This is very helpful. I still don’t agree that the argument begs the question against the sceptical AC, but I see why the defender of that view would not be bothered by the argument. You are right that I have an epistemic worry about the tenability of the non-sceptical AC. I cannot see how the reliability of a DP can be checked if we can never know whether its prescriptions get us to right or wrong actions. The closer analogy with weather-forecasting would be that we would need to check the reliability of the two weather-forecasters without any data about the rain during that year.