This got to be too long to be a comment on Michael C’s post. People sometimes criticize utilitarianism for being "inapplicable," or for entailing that you shouldn’t try to use it as a decision procedure. I don’t think there is any way to interpret this criticism such that utilitarians should worry about it.
First of all, it is a category mistake to criticize utilitarianism for not being a decision procedure. Utilitarianism is a statement of necessary and sufficient conditions for an act to be morally right. No statement of necessary and sufficient conditions is a decision procedure. Imagine buying a cookbook that purported to give you a procedure for making a cake, and the cookbook just gave you necessary and sufficient conditions for something to be a cake. You could rightly complain that you aren’t interested in the philosophy of cake; you just want to make a cake, and you wanted the book to tell you how to do that. (For this reason it is misleading to say that it is only "sophisticated" utilitarians who claim that their view is not a decision procedure. All utilitarians must say so on pain of committing a category mistake.)
But some will press the objection. They might say that the project of coming up with a decision procedure is really the important thing, or is at least another important thing, that moral philosophers ought to be doing. And nothing about utilities is likely to be part of the correct decision procedure. In most cases, thinking about utility when deciding what to do will not help you do the right thing, and often it might hurt. So even if utilitarianism were correct as a criterion of moral rightness, there can be no correct and distinctively utilitarian decision procedure.
This is a bad objection. There can be no correct decision procedure of any sort, if a decision procedure is supposed to tell you, under any circumstances, what procedure to follow – how you ought to deliberate – in order to act rightly (or in order to have the best chance of acting rightly). What procedure would be best to follow for the person who has less than 2 seconds to decide whether to save the children, and also best for the person who has 3 months to decide which charity he should send his money to? The correct procedure would have to be possible to follow in a very short time, and also give the right results in cases where more time for deliberation is available. It would have to be best to follow for someone with no common sense, and also best for someone with lots of common sense; for someone good at math, and someone who sucks at math…etc. That seems impossible.
If there is no correct decision procedure of any sort, then it can be no objection to utilitarianism that there is no correct and distinctively utilitarian decision procedure. Do people have something else in mind when talking about decision procedures? Is there any criticism in this neighborhood that utilitarians should worry about? And is there any reason to think philosophers should spend time trying to come up with a correct decision procedure?
Ben,
One way in which a moral theory can be a decision procedure is that it provides a way of coming up with a set of principles or kinds of actions that you could subsequently (e.g., in the heat of battle) rely on to make decisions. In that sense, there could be a universally applicable decision procedure. But then utilitarianism as you describe it, together with the premise ‘do what is right’, would be just fine as a decision procedure.
Ben, you write,
First of all, it is a category mistake to criticize utilitarianism for not being a decision procedure. Utilitarianism is a statement of necessary and sufficient conditions for an act to be morally right. No statement of necessary and sufficient conditions is a decision procedure.
So suppose we specify what makes an act right/obligatory in this way,
AU. An act A is right in circumstances C iff performing A in C would produce more overall utility than would be produced by performing any alternative to A in C.
This version is not meant to be tendentious. Supply modifications or another version of AU if you’d like. Now let A be an action that is right in C and suppose we can perform A in C. Isn’t it true that in C we have most moral reason to perform A? That is, can’t we derive that (iii) we have most moral reason to do A in C from (i) we can do A in C and (ii) A is right in C? If so, then it seems we can arrive at what we have most moral reason to do (perhaps, say, what we ought morally to do) from AU and some version of ought-can. No?
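Just as a sketch (this is my shorthand, treating ‘Can’, ‘Right’ and ‘MostReason’ as primitive predicates, and making the ought-can-style bridge premise explicit), the inference would run:

(i) Can(A, C)
(ii) Right(A, C)
(Bridge) For all A and C: if Can(A, C) and Right(A, C), then MostReason(A, C)
Therefore, (iii) MostReason(A, C)

The derivation is valid, but note that AU by itself supplies only the grounds for (ii); the step to (iii) goes through only given the bridge premise.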
Mike A, sure, we can derive what we have most moral reason to do in C from those facts. But where is the decision procedure? Is the procedure:
1. Figure out which things you can do.
2. Figure out which of those things is right.
3. Do that.
?
I take it that giving a decision procedure is not the same thing as providing a way, in the abstract, to determine which act is right. I thought a decision procedure was supposed to be something practical, that you could follow in real situations, such that if you followed it, you’d do the right thing (or have the best chance of doing the right thing).
If you read the earlier thread, actually, no-one even mentioned decision procedures as a problem for utilitarianism before utilitarians themselves started to defend their view against the objection that they are not in the business of providing a decision procedure. This was a very odd dialectical move in the first place.
As I see the issue, it is that we have a concern for acting in the right way, avoiding wrong actions, and doing what we ought to do. This is the point from which we start moral theorising. We attempt to form systematically true beliefs about what actions we ought to do, to help guide us in our practical lives. In this situation, it is quite odd to say: here is the truth about what one ought to do, what is right, and so on, but never mind, don’t believe in these things when you act or try to act in a way that would fit the specifications of the theory. That doesn’t satisfy the concern we started with.
You are right that decision procedures as you described them are impossible and beyond the reach of any theory. But we do want to be able to give some basic advice on how to act – don’t lie, be kind, and so on – and we want to be able to give some good reasons for why this is the case. We want some help with how to go about solving certain difficult situations – whether to keep the dying, suffering person alive, let her die, or kill her. Now, there are non-esoteric versions of utilitarianism that can help in this respect and serve the purpose with which we start normative theorising. Some utilitarians ground certain specific ways of thinking about right and wrong, based on principles, on the thought that these ways of thought would maximize utility. Some say that in the problematic situations we should start doing the calculations. Neither of these is esoteric in the defined way. They are the ones I’m more interested in discussing, not the ones that usher themselves off the stage.
Robert, just so I understand: is the idea that a decision procedure isn’t itself a procedure you follow in the heat of battle, but a principle that tells you which other procedure you should follow?
Jussi, that’s helpful. A couple of points: I think it’s a mistake to think that, because moral theorizing began with a certain goal, that should continue to be our goal. I guess I first became interested in moral theory because I wanted to be able to figure out what to do in various situations. Years later I discovered that philosophers were doing things that were supposed to help me with that in some way. Then at some point I realized that the theories they offered weren’t likely to be very helpful to me. But I remained interested in the attempt to understand what makes an act morally right. I don’t feel I should now evaluate moral theories according to how well they achieve the practical goal I was first interested in.
I do still share your desire to figure out what’s right in certain problematic situations. But it seems like that is entirely within the purview of the attempt to discover necessary and sufficient conditions for moral rightness. If we come up with that, then we’ll have done all the interesting philosophical work to be done. What remains would be figuring out which action meets the conditions described in the criterion (e.g. whether killing this terminally ill person in fact maximizes utility), and depending on the criterion, there may be no reason to think philosophical methods will be relevant there.
I actually do not think that, as Ben writes, “there can be no correct decision procedure of any sort”. As Ben defines it, a decision procedure would “tell you, under any circumstances, what procedure to follow – how you ought to deliberate – in order to act rightly (or in order to have the best chance of acting rightly)”. Now if, following Ben, we understand utilitarianism to be a theory that specifies necessary and sufficient conditions for something to count as right or wrong, then it would seem to follow from utilitarianism that the correct decision procedure is the right one according to the conditions so specified.
Ben might claim that this possibility is ruled out because utilitarians only recognize acts as genuine units of moral evaluation. The necessary and sufficient conditions that utilitarianism provides are conditions for acts to be right or wrong. Because decision procedures are not acts, utilitarianism doesn’t say anything about the rightness or wrongness of decision procedures, and hence remains silent about which decision procedure is correct. I agree that act utilitarianism has this implication. But I deny that all utilitarians must be act utilitarians. In particular, I claim that utilitarians should be global utilitarians; they should extend the standard of rightness which they have traditionally applied to acts (and, more recently, to rules) to every object to which moral predicates can be meaningfully applied.
Now, is it meaningful to say that a decision procedure is right? I think it is, provided that we understand predications of the form ‘x is right’ as expressing the proposition that x ought to be, or exists in the world that ought to be. The notion of a world that ought to be is, I take it, a meaningful one. And since among the many things worlds can contain are decision procedures (as abstract ideas in people’s minds, or perhaps as behavioural dispositions of some sort), it would seem to follow from global utilitarianism that there is a correct decision procedure, or at least a plurality of decision procedures each of which exists in the world that ought to be.
I thought a decision procedure was supposed to be something practical, that you could follow in real situations, such that if you followed it, you’d do the right thing (or have the best chance of doing the right thing)
Ok, maybe we are at cross purposes here. I take a decision-procedure to tell us what we ought to do. So, I think we agree that AU does provide a decision procedure (in this sense) in some circumstances, viz. those in which I am able to determine which of my actions will maximize utility and it is an action I can perform. Certainly there are some cases in which I can do this fairly easily.
The objection to AU is that, though there are some situations in which I can use the principle to make a decision, there are many in which I cannot. That objection does not commit a category mistake. It merely states that AU can function in making a decision in only a limited range of cases.
OK Mike, I think I am mostly on board with that, although AU by itself isn’t the procedure. The procedure is, “figure out what maximizes utility.” Sometimes you can follow that procedure, but sometimes you can’t, or shouldn’t try to. But the same will be true of every proposed decision procedure. That’s why I don’t think there’s a problem.
Pablo, that’s interesting. I certainly didn’t mean to be saying that utilitarians can’t evaluate things other than acts. I’m fine with saying this: it is right for a person to employ decision procedure D in circumstances C iff employing D in C maximizes utility. (It’s not clear that this would actually be adding anything to AU, since employing a decision procedure seems like an action you can perform.) The point is, which decision procedure maximizes utility will depend on the situation. No utilitarian should say of any procedure, “*this* is the procedure you should always follow!”
Here’s an attempt to formulate at least one objection to esoteric theories. (It’s harder to get it clear than I thought.)
Here’s a parallel example. Let RB be any theory of rational belief, a set of N&S conditions, which by its own lights is irrational to believe. If RB is true, then you should not believe it. Since RB could be any such esoteric theory of rational belief, there are no true esoteric theories of rational belief which anyone ought to believe. If you believe them, defend them, or entertain them, you are irrational. This of course has nothing to do with any decision procedures for acquiring rational beliefs; those were never mentioned.
Let MA be a theory of morally right action, a set of N&S conditions, which by its own lights is not morally right to believe. (This, I take it, is the posited condition of AU.) If MA is true, then you should morally not believe it. Since MA could be any esoteric theory of morally right action, there are no true esoteric theories of morally right action which anyone should morally believe.
It doesn’t (yet) follow that the esoteric moral theorist is irrational, since she posits that what is moral to believe is not the same as what is rational to believe, and there are in fact two kinds of reasons for believing, pragmatic and epistemic. But something else does follow: namely that the esoteric moral theorist is immoral by her own lights. She is evidently not following or trying to follow her own moral theory. In short, there are no moral believers in true esoteric theories of morally right action.
This schizophrenia will show up in any particular case of action where what it is rational to do or aim at, and what it is moral to do or aim at, conflict. Rationally, the esoteric theorist says that one thing is morally right, and therefore that you ought to do or pursue that; pragmatically, the same theorist advises you to do or pursue something else. Here we have two different answers to the question, “What should I do?” One objection, at least, is that there is only one set of answers to this question, which is how we define “morally right,” and the other set of answers is just not interesting. There is a certain mad-dog realism which claims that whether something is morally right is unconnected to the practical question of what one ought to do, but I just don’t know what we’re theorizing about anymore in that case.
This is not the same condition as when the rational answer is complicated or hard to know, so a second-best recommendation is made. “What should I do?” “Maximize utility.” “I don’t know how to figure that out.” “OK, then just do what Mom would do.” That we can understand; the first recommendation is not something we’re able to do intentionally under that description, and then the notion of a decision procedure, in distinction from the moral theory, makes sense. But when the dialogue is, “What should I do?” “You should turn left.” “OK.” “Wait! Turn right for best results,” I think we’re left wondering in what sense we should turn left, and why that sense is one we care enough to theorize about. When the decision procedure comes that far apart from the moral theory, we’re left wondering about the whole point of the theory.
Ben,
I am quite sympathetic to your post. Lots of times people say the problem with Consequentialism is that it is useless as a decision procedure. It can hardly be claimed that C is the true account of the truth-maker but fails anyway because it is useless as a decision procedure. Thus the thought must be that part of what we think it would take to be the truth about what makes actions right or wrong is that that very truth be useful as a real-world decision procedure. But we don’t generally think this about what makes something true (compare: what makes it true that the Big Bang theory is the correct account of the universe might or might not come together with a way of determining this truth that is epistemically available in the real world). Thus, at a minimum, I think those that offer such an argument must explain why there must be such a connection between truth-makers and useful decision procedures in the case of ethics but not in other cases.
Small quibble: I would say that C purports to offer not just nec and suf conditions for being right, but what makes an act (rule, etc.) right.
Ben,
You asked: Do people have something else in mind when talking about decision procedures? Is there any criticism in this neighborhood that utilitarians should worry about? And is there any reason to think philosophers should spend time trying to come up with a correct decision procedure?
We (as ordinary folks, not as philosophers) make moral judgements of people based on the decisions that they make. We could say that the right decision is defined by doing the thing that, in fact, is the right action, but typically we don’t. If a person sincerely thought that the course of action he took in some case would cause maximum harm, but, as it turns out, it causes maximum benefit, we still want to say that this person is a (morally) bad guy. We judge him by judging his decision, not by judging his action.
What this suggests is that while it is certainly important to have a criterion for what makes an action the right one, we also want a criterion for morally evaluating people based on the actions they perform (or don’t perform). Thus we want to be able to say some decisions are good and some are bad – and thus some people are good and some bad – independent of the rightness or wrongness of the action.
One obvious way utilitarians can talk about decisions is to talk about expected utility. Thus whatever time you have, whatever your degree of common sense, and whatever your skills at math, we can ask whether you made the right decision based on what you knew and were capable of understanding at the time. But they need not do it this way.
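To make the idea concrete (this is just the standard textbook formulation, not something any particular utilitarian above is committed to): the expected utility of an act weights the utility of each possible outcome by how probable that outcome is, given the evidence the agent actually had:

EU(A) = Σ_o P(o | A) × U(o), where the sum ranges over the possible outcomes o of act A.

On this proposal, the right decision is the act with the highest EU(A) relative to what the agent knew at the time, even if some other act would in fact have produced more utility.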
The complaint about utilitarianism as I understand it is that it is an incomplete moral theory if it just gives us a basis for determining what actions are right and what actions are wrong. To be a complete moral theory it must also tell us how we ought to be making decisions about our actions, so that we can determine who deserves moral praise and who deserves moral blame. Some people might buy a book that tells them the necessary and sufficient conditions for something counting as a cake, but many more want to know what to do to make one. Similarly, there is certainly some interest in defining what counts as right action, but many more people want to know how to decide what to do and how to judge the actions of others. If utilitarianism has nothing to say on that question, then it has failed to address the most important moral question.
Hi Ben. A few comments.
First, there are a number of criteria that one might look for in selecting a theory (any type of theory, not just a moral theory), and there might even be agreement on most of these criteria. For example, a theory should be explanatory, simple, elegant, not invoke entities beyond necessity, etc. One might also think that a criterion for theory selection is that the theory help us solve practical problems. A good mathematical theory would help us solve practical problems, a good scientific theory would help us solve practical problems, a good legal theory would help us solve practical legal problems, a good engineering theory would help us solve practical engineering problems, etc. So, if a moral theory cannot help us solve practical moral problems, that theory would, to that extent, be criticizable. I see nothing amiss about such a criticism. The criticism comes to this: as David W. says, such a moral theory is incomplete in the sense that it does not satisfy what some think is an important criterion for selecting a moral theory. The seriousness of this criticism will of course be felt only by those who think that such a criterion is important when selecting a moral theory. (Likewise, the seriousness of the criticism that a moral theory is not explanatory will be felt only by those who think that explanatory power is an important criterion in selecting a moral theory.)
Second, a standard objection, usually coming from analytic circles, to those pursuing issues in applied ethics is the following: How can one solve *any* specific, practical moral problem if we don’t first know what *makes* an act right or wrong? The assumption here is that applicability is an important criterion in selecting a moral theory: we must first find out what makes acts right or wrong so we can then apply that principle(s) in specific cases.
Third, I agree that almost no moral theory satisfies the criterion of applicability.
But, fourth, I think the objection, when raised against Utilitarianism, is a different kind of objection. That is, I don’t think the objection is just “Utilitarianism can’t be applied to specific cases to help us resolve practical moral problems.” Rather, I think it is that Utilitarianism can’t *in principle* be applied to help us resolve specific cases. For example, the standard objection to Utilitarianism that there is, in principle, no future time before which the utils of every possible action must be calculated implies, I think, that Utilitarianism cannot in principle be applied to help us resolve specific, practical moral problems.
By the way, Ben, yes, I think the decision procedure that folks have in mind is the following: In order to resolve a specific, practical moral problem
1. Figure out what makes acts right (wrong) generally, and state your finding in the form of a principle(s).
2. Apply that principle to the specific case at hand.
Heath, if utilitarianism is esoteric in the sense you’re talking about, then I agree that we’d be in a weird situation if utilitarianism were true. We couldn’t believe what was true about morality and also act morally. If utilitarianism were esoteric in that sense, then on utilitarian grounds we might think we should give up moral theorizing, or else regard it as an entirely different sort of enterprise than we currently do (e.g., as not aiming at truth but as attempting to achieve certain practical results). But I don’t think anyone has ever argued that utilitarianism is esoteric in that sense. e.g. Sidgwick, who thought utilitarianism was esoteric, didn’t seem to think he was doing anything wrong in *just believing* utilitarianism. I think there’s more to say here but I have to give it some more thought.
David S, thanks, and I agree with your quibble, although I don’t have a good handle on what “makes” means in this context. But that’s another problem.
David W, I agree that figuring out under what conditions people deserve to be praised and blamed is an important job for moral philosophy. I don’t know how we determine what is the “most important” job for moral philosophy. But in any case, there are a great many other questions moral philosophers are interested in. A theory of moral rightness should not be expected to answer all of those questions. A theory of rightness should tell us about rightness; a theory of praiseworthiness should tell us about praiseworthiness; a theory of virtue should tell us about virtue; etc. You might think that these things should all be connected in some way. And in fact some utilitarians would agree, and would give accounts of blame/praiseworthiness, or moral virtue, that look relevantly like the utilitarian account of moral rightness. If you think those theories go wrong, and you think moral theory should be unified in a certain way, then you might have reason to suspect that the utilitarian account of rightness is mistaken too. But it still seems like a mistake to me to say that utilitarianism is “incomplete” because it doesn’t itself answer those other questions.
Dan, that all seems reasonable, but I think there are different ways we might think of a theory being “applicable.” One is that it yields results about specific cases. I think it *is* a serious problem for a moral theory if it doesn’t yield any results. But utilitarianism doesn’t seem to have any particular problem there. Another is that it can be used as part of a decision procedure, in the “heat of battle” as Robert says, to get good results. Utilitarianism has a problem there, but so does every possible theory, because no decision procedure that involves applying a theory works in the heat of battle.
So I am sticking to my guns here; I still see no problem for utilitarianism.
Dan, could you say more about what the in principle objection is?
Ben writes:
“Utilitarianism has a problem there, but so does every possible theory, because no decision procedure that involves applying a theory works in the heat of battle. So I am sticking to my guns here; I still see no problem for utilitarianism.”
Ben, you do seem to agree that there is a problem for Utilitarianism, but you just don’t see that there is any special problem for Utilitarianism that other theories don’t also need to resolve. It’s a picky but important difference, I think.
As for the in principle objection, I was thinking the following. Here’s a standard question/objection raised towards Utilitarianism: For how long into the future does one need to consider all the possible consequences of every possible action one could now perform in order to determine what would be the right action to perform now? This question can point to two different objections, one practical, one theoretical. The practical objection says (very roughly) that it is in practice too difficult to calculate the far-future utils of every possible consequence of every possible act one could now perform, so the theory is practically inapplicable. Another objection says that there is no *principled* future point before which the utils of all possible consequences of all possible actions must be calculated. That is, (the objection goes) Utilitarianism can give us no principled reason why we should stop calculating consequent utils in three hours, or tomorrow, or next week, or five years, or one hundred years from now, etc. Whatever answer Utilitarians give will seem arbitrary. If this objection is correct, then it seems to imply (I think) that Utilitarianism cannot even in principle be applied to solve any specific, practical moral problem. At least that’s what I was thinking.
In response to David W., Ben also writes:
“A theory of rightness should tell us about rightness; a theory of praiseworthiness should tell us about praiseworthiness; a theory of virtue should tell us about virtue; etc. … But it still seems like a mistake to me to say that utilitarianism is ‘incomplete’ because it doesn’t itself answer those other questions.”
Ben, I think you have finally gotten a hold on the objection. Considered solely as a theory of rightness, perhaps Utilitarianism should not be criticized for being inapplicable. But I take it that others, including other Utilitarians, think that Utilitarianism is more than a theory of rightness. They think it is a *moral* theory, or perhaps even a *political* theory. You seem to be identifying moral theory with a theory of rightness, but a moral theory, in the usual sense, is assumed to be telling us much more than just what makes acts right. (Or at least that’s my understanding of what a moral theory is supposed to do.) And those who think so might raise the “objection from inapplicability.” That is, the objection is raised towards Utilitarianism taken as a moral theory, not taken merely as a theory of rightness.
Ben,
I don’t think there is any problem with a moral theory’s criterion of rightness being difficult to apply in the heat of battle. E.g. if a virtue theorist says that the right action is whatever a virtuous person would do, I don’t think they mean that you ought to speculate about sage-psychology in the (perhaps literal) heat of battle. After all, that is not what a virtuous person would do. So I agree with you that the mere failure to be a (quickly applicable) decision procedure is not in itself a mark against utilitarianism or any other theory.
On the other hand, it’s important that one be able to have a certain ideal of moral action–maximizing utility, doing what the virtuous would do, or whatever–while feeling free to apply rules of thumb or quick and dirty decision procedures when snap or poorly-informed judgments are called for. After all, this is how you determine which rules of thumb / decision procedures are worth using. Naturally, you don’t determine this in the heat of the moment, but beforehand.
I took esotericism (“it’s better if the doctrine is not believed”) to rule this out; the decision procedures to be used can’t be evaluated in terms of their ability to approximate the genuinely right results. If we can’t evaluate our decision procedures in light of the true theory of morally right action, that is a real problem for a moral theory, I take it.
I’m inclined to support Ben’s hard line on this issue.
Suppose you want to know which decision procedure is most conducive to acting rightly. Then you have two questions to answer. Which acts are right? And, which decision procedure is most conducive to performing those acts? Granted, utilitarianism answers only the first question. But that doesn’t show that utilitarianism is incomplete, at least not as a philosophical theory, because the second question isn’t philosophical. It’s an empirical question about human psychology.
I’m tapping my feet to your tune, Dr. Brown.
I think we are starting to go in circles but consider this quote from Campbell:
“Suppose you want to know which decision procedure is most conducive to acting rightly. Then you have two questions to answer. Which acts are right? And, which decision procedure is most conducive to performing those acts?”
This can only apply on the condition that the rightness of the acts we normally talk about is independent of what kind of thoughts lead the agents to act in a given way. If they are not independent, then the two questions do not come apart in Campbell’s way. And there is some evidence that agents’ intentions do matter when we evaluate the rightness of actions. My giving a pill to someone with the intention of killing the guy is made wrong by that intention even in the case where the action in fact is handing over a harmless sweet that I mistakenly think is poisonous. To capture this phenomenon, it does not seem to be enough for the moral theory to answer the two questions by saying that giving sweets is right and providing a procedure for how to get people to give sweets to others. Something else must be going on.
Jussi,
How is this a problem for Campbell’s claim? To know which decision procedure is most conducive to your not acting wrongly, you have two questions to answer. Which acts are wrong? (Your answer: acts that involve an intention to kill are wrong.) And, which decision procedure is most conducive to your refraining from those acts?
It sounds like you’re just giving an answer to the first question. The answer to the second question might be: whenever you feel like killing someone, take a couple deep breaths, remember that you have a family to take care of (which you can’t do from prison), and walk away. This may be a good decision procedure for avoiding wrong-doing of the kind that worries you.
Well, if acts are individuated by the thought-processes that lead to them, and the assessments of rightness and wrongness are given for such thought-process-including acts, then of course the question of whether moral theories should be assessed in terms of their action-guidingness or their decision procedures does not arise. They all do trivially. But I thought few consequentialists would be willing to individuate acts in this way (especially the ‘act’ ones).
I also don’t think that taking deep breaths or walking away can be part of a decision procedure. Neither can having a family be included – not all of us do. Moral theories have to do better than that. Ideally the decision procedures which many moral theories refer to should be related to the right- and wrong-making properties of the situations.
Jussi,
“I don’t also think that taking deep breaths or walking away can be part of a decision procedure. Neither can having a family be included – not all of us do.”
Why can’t taking a few deep breaths or thinking about your family before deciding what to do be part of a decision procedure? And I don’t get your point about not all of us having families. Why can’t the best decision procedure for me be different from the best decision procedure for someone else with no family? Another decision procedure that might help one to avoid wrong-doing is thinking about what someone more virtuous than yourself would do in the circumstances. If you don’t think these can be parts of a decision procedure, then could you explain what you take a decision procedure to be?
“Ideally the decision procedures which many moral theories refer to should be related to the right and wrong-making properties of the situations.”
Why is this ideal? Why does a decision procedure have to refer to the right- and wrong-making properties of actions at all? If the point of following a decision procedure is to make it more likely that you will act rightly, then it seems to me that the ideal decision procedure may not refer to the right- and wrong-making properties of actions at all, for it may be that a decision procedure that refers to different properties makes it more likely that you will act rightly.
One story would be that what makes right acts right must be something which it is right to have serve as one’s decision procedure. But a lot of accounts of what makes right acts right would concede that, at least in some cases, explicitly thinking in terms of the truth-maker is not right. To me this seems likely true for a wide range of cases, but I would have thought that nearly everyone would have to say that this could be the case in at least some cases. Evil demon cases spring to mind, in which the evil demon will torture everyone if you explicitly use the true theory as a decision procedure. We could insist that nothing but using a decision procedure that mimics the truth-maker matters all that much morally, but that would be implausible. It matters whether people get tortured.
Thus I would like to see an instance of a plausible moral theory that passes the test that Consequentialism is being claimed to not pass. I take it that to pass the test there must not be circumstances in which it would be right not to use the sort of thoughts that are appealed to in the truth-maker as one’s decision procedure.
Doug,
sorry. I’m having a bad day and all I seem able to do is give sarcastic remarks no-one seems to get. Sorry about that. Why can’t taking deep breaths and walking away be a decision procedure? Well, I was assuming that a decision procedure is something you go through in deliberation, i.e., thoughts. Come to think of it, this is wrong. Flipping a coin can be a decision procedure without being anything mental. So, in principle, breaths and walks can be a decision procedure. I’m not sure about the correlation they have with not wanting to kill anyone. I thought killers take deep breaths before they kill and walk away afterwards.
About remembering the family. If I don’t have a family, I cannot remember that I have a family to take care of when I decide whether to kill someone or not. Thus, this won’t do as a decision procedure for me. I might not even care about my family.
Why are right-making decision procedures ideal? That seems clear too. If the decision procedure points to the right-makers, then there is a systematic explanation of, and guarantee for, why the procedure guides to right actions. If it does not, then it seems that the guidance is contingent, accidental, and at best merely reliable. If there are some decision procedures that are not related to the right-makers, then we at least would want to have strong empirical evidence that they lead agents to have the right intentions.
Of course you still get the interesting evil demon cases like David’s above. They are puzzling in many ways. The demon is on the brink of demanding something impossible. Can you not believe something you think is true? Thinking that it is true seems to imply believing it. And the demon does not give any evidence that the theory is false. I’m not sure we have the free will to form beliefs on non-evidential grounds. I’m afraid of this kind of demon.
Campbell writes:
“Suppose you want to know which decision procedure is most conducive to acting rightly. Then you have two questions to answer. Which acts are right? And, which decision procedure is most conducive to performing those acts? … the second question isn’t philosophical. It’s an empirical question about human psychology.”
Campbell, first, the objection is not that Utilitarianism fails to provide the “most conducive” decision procedure for resolving moral problems, but that it fails to provide *a* useful or reliable decision procedure for a very large number of practical cases (or that it cannot in principle provide such a decision procedure).
Second, you recognize, I know, that the history of philosophy is replete with philosophers who were/are quite explicit that *one* of the central aims in providing a philosophical theory is to help guide our actions in practical situations and, uh, make our lives better. I’m assuming that some famous Utilitarians were/are some of these folks. Now, you may deny that the potential to help people resolve practical problems should be a desideratum for any successful philosophical theory on the grounds that whether a theory can be used to resolve practical problems is “an empirical question.” But I’m not seeing your justification for such a denial. Whether or not a particular legal theory can be used to help resolve practical legal problems is an empirical question, but the answer to this question might be important in deciding the further philosophical question of whether or not that legal theory is a good/adequate legal theory; whether or not a particular political theory can be used to help resolve practical political problems is an empirical question, but the answer to this question might be important in deciding the further philosophical question of whether or not that political theory is a good/adequate political theory. Likewise, whether or not a particular moral theory can be used to help resolve practical moral problems is an empirical question, but the answer to this question might be important in deciding the further philosophical question of whether that moral theory is a good/adequate moral theory. And you wouldn’t want to deny (or would you?) that legal, political, and moral theories are philosophical theories.
I can see the puzzle connecting things such as the Sidgwickian ‘point of view of the universe’ to what to do here and now. But this seems to be a mistake, if it’s supposed to be a knock-down argument against utilitarianism, though I disagree a bit with Ben about the mistake. A decision procedure is just a procedure to answer the question, What to do? If what to do is the right thing, then utilitarianism is clearly a decision procedure. It gives an answer about what the right thing to do is.
It does not say explicitly how to find out what the right thing to do is. That, as Campbell points out, is an empirical question. But that it is an empirical question hardly disqualifies it as a philosophical question. In any case, Hare famously and repeatedly insisted that this was a virtue of utilitarianism: that it made the question which actions were right empirical, and that this made the procedure for finding this out empirical. It seems to me that a utilitarian should embrace the idea that it offers a decision procedure, that it is an empirical one, and that it is an empirical question which procedure gets us an answer in the most optimific way.
I toss my cap in with Ben, David, Campbell, Doug, etc. and the hard-line response. Maybe there can be no correct and distinctively utilitarian decision procedure, but this doesn’t mean utilitarianism is false. The admission that this involves, however, leads to a worry raised by Robert Frazier. It might be something in the relevant neighborhood that utilitarians should worry about:
(1) There is no correct and distinctively utilitarian decision procedure.
(2) If (1), then believing that an act meets utilitarianism’s necessary and sufficient conditions for moral rightness is never justified.
(3) If it’s not, then either act utilitarianism is false, or believing that it would be morally right to perform an act is never justified.
(4) Therefore, either act utilitarianism is false, or believing that it would be morally right to perform an act is never justified.
(5) Sometimes, believing that it would be morally right to perform an act is justified.
(6) Therefore, act utilitarianism is false.
Do utilitarians really have to be so skeptical? For my part, I have a hard time believing (2), because I think my belief that, say, preventing the murder of my department head is morally right on utilitarian grounds is justified.
Too many comments to respond to! Just one quick point. Dan and Kyle bring up similar objections. Dan’s objection (one of them) is that utilitarianism can’t be applied to any particular case because you could never stop counting utilities. And Kyle’s objection is that you could never know whether some proposed action was right, because you could never be justified in your beliefs about the utilities of the action and its alternatives. I think these are both interesting objections to utilitarianism. Neither has to do with decision procedures per se, but both have to do with the “applicability” of the view in some sense.
I think ultimately the utilitarian has to be content to say that we don’t have a high degree of certainty in our judgments about many particular cases. That might not be so bad. Some people have argued that it’s even worse – that we have basically no reason at all to believe that any particular act is wrong if utilitarianism is true, because the long-term, unknowable consequences swamp the immediate and knowable ones. James Lenman argues in this way, I think. I’m still not sure what the best thing is for the utilitarian to say about that argument.
Robert,
Here’s a quote from you:
“A decision procedure is just a procedure to answer the question, What to do? If what to do is the right thing, then utilitarianism is clearly a decision procedure. It gives an answer about what the right thing to do is.”
Assuming that utilitarianism is a decision procedure, one would think that applying utilitarianism would provide answers to questions about what to do. As there are so many utilitarian philosophers and ethicists around who are looking for answers to various ethical questions, you would think that philosophical journals and books were full of applications of the utilitarian calculus. After all, this is supposed to be the method of finding out what the right things to do are. It is then surprising that I cannot imagine a single instance of this method having been applied. Does anyone else? If utilitarianism is a decision procedure, what stops people from using it to solve the ethical problems philosophers ponder about?
Hi Ben,
I was slowly trying to construct a short argument in reply to Douglas Portmore’s earlier entry on the TCR, but since you brought up similar points (though reaching different conclusions), I’ll address the matter here.
The lack of a decision procedure for a criterion within a moral theory is a fatal problem for the theory, especially utilitarianism, where the criterion-of-right/decision-procedure distinction is so sharply drawn. The reason is fairly obvious: at least half the theory is missing, and it will never be found. Morality is essentially wedded to, indeed incapable of being thought of (as a whole) without, action. Bernard Gert, in a recent talk, remarked with surprise that in Moore’s PE there was a chapter entitled “Morality as related to action” (or something like that). Insofar as I understand him (I’ve only read _The Moral Rules_, 1972 ed.), I disagree with his view on some of his major points, but he has got something right. There are things which a moral theory cannot fail to satisfy and still be considered a moral theory. Acting rightly (satisfying the criterion), without its being purely lucky, is one of them.
By holding to a criterion and eschewing a decision procedure, utilitarianism is reduced to marks on paper: it’s worse than formalism in logic, for a person can get duped by utilitarianism.
It’s lamentable that Bentham, and his utilitarian successors, who took such great pains to bring about genuine change for the better in this world, would now see utilitarianism die as a concatenation of impotent scribbles.
Hi Kyle. Great to “talk” to you again. You write:
“Maybe there can be no correct and distinctively utilitarian decision procedure, but this doesn’t mean utilitarianism is false.”
Just so there is no misunderstanding, the objection need not conclude that Utilitarianism is false. It may conclude only that inapplicability is one strike against the theory, which should be taken into consideration when deciding how well the theory achieves *all* the aims of a moral theory. Of course, for those who think that applicability is a necessary desideratum of a successful moral theory, the objection will in fact conclude that Utilitarianism is false. But one who uses this objection need not think that applicability is a necessary desideratum of a successful moral theory.
I don’t know what Tom Regan and Peter Singer are doing in arguing for the mountain of practical positions they do, regarding where to put your money and why we have obligations regarding animals, if it’s not applying utilitarianism.
Regan isn’t a utilitarian, he’s a rights guy. But yeah, Singer is an obvious counterexample to what Jussi says he can’t imagine. And there are others, e.g. Fred Feldman, and in fact probably every prominent utilitarian, who attempt to apply utilitarianism to various problems. Jussi, do those people count as doing what you say you can’t imagine utilitarians doing?
Adam, what exactly do you mean by a decision procedure? If it’s a procedure that everyone is supposed to be able to follow when deciding what to do, such that following that procedure leads to right action, then for reasons already explained, you should forget about it. If it’s something else, it might very well be something that utilitarians are trying to provide.
Dan,
I don’t deny that. I think utilitarianism does have ‘the potential to help people resolve practical problems’. As I said, finding a decision procedure involves two questions, one of which is answered by utilitarianism. That’s a helpful thing for it to do.
Earlier you levelled a more specific charge against utilitarianism: that it was incomplete. That’s all I was responding to. I say, the failure of a moral theory to answer empirical questions doesn’t show that theory to be incomplete. But that’s not to say that empirical questions are irrelevant in assessing moral theories.
Hi Campbell. My apologies, but I’m afraid I’m confused.
If a particular moral theory (or any particular theory) fails to satisfy a desideratum for the success of any moral theory (or any theory), then that theory is, to that extent, incomplete/inadequate/not good/etc. Can you tell me where you think I’m going wrong?
Dan, I think we’re just using ‘incomplete’ differently. I was thinking of incompleteness as a special kind of inadequacy. So theories might be inadequate without being incomplete. You might think, for example, that being true is a desideratum of moral theories. But it seems natural to say that a theory could be complete yet false (a theory could be completely false, as it were). Similarly, it seems quite odd to use ‘incomplete’ as synonymous with ‘not good’. Not every complete theory is a good theory.
Dan and Campbell,
I have been out of this discussion for a while, but since you are discussing what one might mean by saying that a moral theory is “incomplete” and since I introduced that description to this discussion, I thought it might help to add what I take it to mean.
To go back to the cake metaphor, a cookbook that tells you what ingredients to use to make a cake has given an incomplete description of how to make a cake, since knowing which elements to combine is only part of the story; one also needs to know how to combine them. Similarly, a moral theory that tells us what makes right actions right tells us something that is almost certainly important, but it is an incomplete account if it does not also tell us how to use that information to make decisions about actions and about how to evaluate other agents. After all, morality is often described under the heading of practical philosophy, and a theory that tells us what makes something right or wrong without telling us how that plays a role in our decision making is not really practical at all. It is merely theoretical.
I earlier mentioned expected utility as a possible decision-making procedure because it seems to me that while they don’t often explicitly endorse it, many utilitarians seem to think that this is how we should decide what to do. A “do what is right” procedure (as Robert Johnson suggested) seems to amount to this, since we can only act on what seems to us at the time to be right, which is a matter of expected, not actual, utility.
Finally, the question of what the most “important” moral question is also seems instructive here. I don’t know the necessary and sufficient conditions for something being “cake”, but I do know how to make chocolate cake, cheesecake, banana bread, and coffee cake, each of which may or may not count as “cake”. Knowing how to make them seems to me to be more important than knowing if each should count as “cake” or not, especially from a practical point of view. Given a choice between living in a world where everyone knew the rightness conditions for action, but had no idea how to choose what to do, and a world where no one knew the conditions for an action being right, but knew how to go about making moral decisions, I think most people would rather live in the latter world. I know I would.
Robert and Ben,
I’ve always wondered about that. I know some of Singer’s work and a bit of Feldman. You are right that they say they are giving consequentialist solutions to practical moral problems. But I wonder. I was just reading Singer’s Utilitarianism and Vegetarianism from PAPA 1980. I’m not sure how representative this is. It often says that utilitarian calculations lead to this or that normative conclusion. But nowhere are the calculations made. Nowhere is it written down that our current policies have this amount of pleasure and pain as a result for such and such beings, and that some alternative policy has these and these concrete results.
So maybe I just don’t see this level of hypothesising about what the outcome of the utilitarian calculus would be as a proper, good-enough application of consequentialism. Maybe it is done better somewhere. But if Lenman (about the consequences) and Scanlon (about well-being) are right, there may be theoretical reasons preventing the proper application of the theory.
Hi Campbell. Yes, I see now that we were using ‘incomplete’ differently. As I was using it, a particular theory is incomplete if it fails to satisfy all of the desiderata for a successful theory of that kind. And a theory is not good/inadequate *to the extent* that it is incomplete. So, for me, saying that a theory is incomplete is not synonymous with saying that it is “not good” (overall), for the theory might be good overall, yet be incomplete, if it were to satisfy, say, six of the most important desiderata for a successful theory. Thanks for clearing that up.
But nowhere are the calculations made. Nowhere is it written down that our current policies have this amount of pleasure and pain as a result for such and such beings, and that some alternative policy has these and these concrete results.
Jussi, no utilitarian has ever claimed, nor does she need to claim in order to remain a utilitarian, that a policy should be preferred because it brings about a precise amount of net pleasure or satisfaction of preferences which is higher than the precise amount that any of the alternatives is expected to bring about. That would be like trying to ground the claim that Tom Regan has more hairs than Peter Singer on our beliefs about the precise number of hairs that each of those philosophers has. What the utilitarian does claim, and what utilitarians like Singer do claim, is that we have good reasons to believe that a particular policy would have better consequences than any of its alternatives, even if we don’t know, and even if we couldn’t possibly know, precisely how good each of these policies was.
Although I agree with Pablo and David, there are further considerations that could show that Jussi is right in the end. One would be that we can never have good reasons to believe one policy is better than any other in terms of its impact on overall happiness. Prima facie that seems false; surely people were on balance happier when the Bloody Code was repealed. But one might establish something like that by showing that the nature of the consequences themselves makes them untestable. Hedonism may be open to that objection: how does one measure amounts of pleasure? But most consequentialists are preference theorists, and that’s measurable (though of course open to further puzzles concerning inconsistent preferences, the Paretian liberal problem, etc.).
Pablo,
that’s probably right. But I’m not sure the question was whether these people are utilitarians; rather, it was whether utilitarianism is ever used as a decision procedure. If it is not, you start to wonder whether it could be used for such a purpose.
As a result of this, I’m starting to wonder what counts as using utilitarianism as a decision procedure. The sceptic in me thinks occasionally that what these people are *not* doing is this: facing a problem they are uncertain about, open-mindedly looking at it, estimating the consequences, collecting the data approximately, and coming to a conclusion based on an estimating calculation.
Rather, they have a preconceived idea of the rightness/wrongness of actions and they look for suitable consequences to form a hypothesis (which in many cases probably is well informed) that the utilitarian calculus would turn out to vindicate their view. I’m hesitant to call this a proper employment of the utilitarian calculus. Are they sure that they have taken into account all the relevant consequences, beings, and their pains, pleasures and preferences? Are they sure they’ve assigned the right numeric values to the value-bearers? I’m not at all sure about this.
Dan,
Perhaps this is not very important, but your use of ‘incomplete’ is very idiosyncratic. Consider some examples.
1. Suppose two students sit an exam. One student fails to answer some of the questions, but, for all the questions he does answer, his answers are correct. The other student answers all the questions, but some of his answers are incorrect. Only the first exam is incomplete, although the second surely fails to satisfy a desideratum.
2. Suppose two historians write histories of analytic philosophy. One historian fails to discuss every relevant philosopher, but, for all the philosophers he does discuss, his discussions are accurate. The other historian discusses every relevant philosopher, but some of his discussions are inaccurate. Again, only the first history is incomplete, although the second surely fails to satisfy a desideratum.
To say something — an exam, a history, a moral theory — is incomplete is to say that, while it may be entirely correct as far as it goes, it doesn’t go far enough. This fits well with the technical use of ‘incomplete’ in logic. To say a logical system is incomplete is to say (roughly) that, while it might not be possible to prove any false statement, it’s not possible to prove some true statement. And, yet again, a logical system might fail to satisfy a desideratum without being incomplete; it might be complete but unsound.
Hi Campbell. Yes, I realized after your last post that I was using ‘incomplete’ in an odd way. I hope that doesn’t detract from understanding the objection: Utilitarianism supposedly fails to satisfy a desideratum for any successful moral theory.
Dan,
What do you think of the following analogy?
Speakers of our language are, by and large, pretty good at distinguishing grammatical sentences from ungrammatical ones. But they are not very good at formulating the rules of grammar. This, we might say, is the role of the grammarian: to formulate a system of grammatical rules that is implicit in people’s dispositions to judge sentences grammatical or ungrammatical.
Now, suppose the best supported system of grammatical rules is not very good as a practical guide to grammaticality. The rules are complicated and hard to follow. People do much better at judging grammaticality, by the lights of those very rules, if they don’t try to apply the rules, but instead trust their instincts. Does this show some kind of deficiency in the proposed system of rules? I don’t think so. The grammarian aims to accurately describe the grammar of our language; he doesn’t aim to provide a practical guide.
The way I think of moral philosophy, the job of the moral philosopher is much like that of the grammarian, except the former is concerned with moral rules, rather than grammatical ones.
In reply to Ben’s post (“…what exactly do you mean by a decision procedure? If it’s a procedure that everyone is supposed to be able to follow when deciding what to do, such that following that procedure leads to right action, then for reasons already explained, you should forget about it. If it’s something else, it might very well be something that utilitarians are trying to provide.”)
You stipulated that “…decision procedure is supposed to tell you, under any circumstances, what procedure to follow – how you ought to deliberate – in order to act rightly (or in order to have the best chance of acting rightly).”
For the great variety of cases you described (from 2 secs. to rescue to 3 mos. to choose a charity), I agree that there is no magical utilitarian algorithm to follow to decide what to do. What there is instead is a rather broad area of moral theory, comprising (among other things) what is labelled ‘moral education’ (to have the character, which includes the virtue that gives the disposition to just do – almost instinctively – what ought to be done when there is only a brief moment to act), and a small group of decision procedures that vary in stringency with the task and circumstances (e.g., having several weeks to decide something important for one’s livelihood, like one’s business, may require an economist with abilities in microeconomics and econometrics).
Granting that no decision procedure is air-tight, that is, that it will never perfectly satisfy the criterion every time, the quest for the magical utilitarian algorithm is over. What is left is determining a rate of success for the decision procedure. The philosophically interesting part would come in justifying what level is acceptable. Since 100% has been disqualified, should it be 99% or 25%? Well, if it’s *the* *best* chance of acting rightly, it would seem to be 99%, since it is less than 100% and more than any other. Perhaps the acceptable chance of success can be got by carving moral reality at the joints. There seem to be mysterious, vague and uncanny things at work here, and that is both philosophically interesting, and is what utilitarians, I suggest, could try to provide.
Campbell,
Ok, I now think we really are going in circles. It is clear that *you* don’t think that applicability is a desideratum for a successful moral theory. But those giving such an objection obviously think it is. The real debate, as I’ve been trying to say, is a meta-theoretical debate about whether applicability ought to be a desideratum for a successful/adequate moral theory, and if so, *then* we have the secondary debate about whether Utilitarianism *is* applicable. Now, I am not saying, nor have I ever said, that you are wrong in thinking that it ought not be a desideratum. I’ve only stated what I thought was obvious: that most folks throughout history have thought that one of the main *points*, if not the main point, of trying to derive a system of general moral principles was to help guide us in being morally better human beings and, thus, that the job of an ethical theory is not merely to be descriptive. (And, thus, to the extent that a theory fails to achieve that aim, it is inadequate.) That’s why, as Dave W. says, ethics and ethical theory have been considered to be part of *practical* philosophy. My only point in being so insistent in this thread is that I don’t understand why some think that it is just *so obvious* that ethical theory is *just* a merely descriptive enterprise when ethical theorists for millennia have thought otherwise. (Once more to be clear: I’m *not* saying that, just because philosophers have thought as much for millennia, they are therefore correct. I’m just saying that I don’t understand why some folks think that it is *so obvious* that it is no aim of an ethical theory to formulate principles that are intended to be, in part, action-guiding rather than merely action-describing.)
Campbell,
I agree with Dan’s most recent comments and would even go further: The grammarian’s job, as described by you, suggests that knowing the rules of grammar is not important at all. It might be very interesting to study for some, just for its own sake, but nothing seems to be riding on whether or not anyone actually does study it. So if the moral philosopher is doing the same thing as the grammarian, then we are engaged in an enterprise that won’t make anyone’s life better but at best offers personal satisfaction for the people involved in the investigation. I find it hard to believe that many people studying morality think of it in this way. For me, anyway, my interest in studying morality was precisely because I think that it is important in ways that the grammarian task is not. I wouldn’t do it otherwise.
Campbell,
I like your point about grammar. You are right to say that it is not necessarily the case that people best produce grammatical sentences by consulting the true grammatical theory of our language. Instinct seems to be a better guide.
But is it possible to produce grammatical sentences while thinking that the true theory of grammaticality is false? More strongly, could it be the case that in order to produce grammatical sentences, one MUST not believe the true theory of grammar? It seems likely that radical esotericism about grammar won’t fly. While consulting a grammar textbook before each and every utterance is a bad grammatical decision procedure, I don’t see how a theory of grammar could be true but not successfully usable once believed.
One problem in thinking about decision procedures and indirect moral theories is that there are _degrees_ of indirectness, and these are often confused. Here are several options. A theory about what to do is indirect in the sense that, to act successfully by the lights of the theory:
1) it can’t (always?) be the only thing present to mind while acting, or
2) it can’t (always?) be the sole basis one uses to decide what to do, or
3) it can’t (always?) be the sole basis according to which one decides which set of rules to follow (or, alternatively, what kind of character to have), or
4) it can’t be true that one believes that the theory is true.
(There are intermediate options too.)
I have an article forthcoming that sets this out in more detail. When I get to my other computer, I will make it available, in case anyone is interested….