Well, after a long drive, a tense week in a motel, an extended unpacking period, and a series of orientation sessions at BGSU, I’m happy to say I’m back in the blogosphere again. I wanted to extend a belated welcome to our recent additions Scott, Troy, and Michael. Welcome aboard! There have been a number of interesting and sharp posts while I was away, and your additions have contributed significantly to that trend.

I have a number of ideas I hope to post in the weeks to come, but for now I figured I’d ask a question about moral motivation on which I’d like some feedback. What precisely is it that triggers the mechanisms involved in moral deliberation? In other words, we go about the living of our lives, addressing our day-to-day practical concerns (getting milk from the fridge, sitting down at the computer to do some work, shopping, going to see a movie, etc.), but then there are moments in which we are moved to deliberate about the specifically moral import of our actions. When and why does that happen?

The answer to this question goes to the heart of the proper formulation of a normative ethical theory, it seems. T.M. Scanlon, for example, in both his “Contractualism and Utilitarianism” and his What We Owe to Each Other, claims the source of moral motivation (the desire/the reason to want to be able to justify one’s action to all similarly motivated others) is triggered by the thought that some proposed action would be wrong. This seems correct: the default seems to be that we go ahead and do what we do until the thought that something might be wrong crops up to stop us in our tracks. This view, however, is contrary to the assumption by many that the source of moral motivation is somehow triggered by thoughts about what the right thing to do would be. Thus most ethical theories are formulated in terms of rightness (whereas Scanlon’s is formulated in terms of wrongness, i.e., “an act is wrong if it would be disallowed by principles that no one could reasonably reject”). Now one can always simply stick a negation in front of the proposed “wrong act” to get the formulation to apply to the case in terms of rightness (as long as rightness and wrongness are contradictories), but this seems an indirect move that misses the import of the Scanlonian claim: morality comes up for deliberation only as a “check” on our ordinary practical living, rather than as a positive guide to our everyday lives. Thoughts?
In my own case, moral deliberation is prompted not only by the thought that some course of action is wrong, but also by the thought that some alternative course of action would be better. (And since I’m not a maximizer, I’d say that these amount to different claims.)
Hi David. I tend to think Scanlon is correct about the deliberative priority of wrongness over rightness, to wit, that we generally are moved to moral deliberation by concerns about the wrongness of our actions rather than concerns about their permissibility. Of course, deliberation about whether an action we are considering performing might be morally wrong quickly morphs into deliberation about whether that act (or some alternative act) is right. This is, I suppose, because the overwhelming majority of our voluntary behavior is neither morally required nor morally prohibited, but simply morally permissible, and so there is no need for deliberation about it.
A couple of observations, though: First, Scanlon’s position is definitely Kantian rather than Aristotelian in that morality is, as you noted, felt as a constraint on our ordinary behavior. This suggests that much of our conduct is morally indifferent, and that morality enters as a demand on our conduct. Aristotelians (and others) might reject this picture of moral deliberation on two grounds: First, nearly all of our conduct is not ethically indifferent, since it concerns how to live well, even if it is morally indifferent. Second, Scanlon seems to follow the modern doctrine that morality and personal well-being are basically at odds, with morality working to constrain our pursuit of our self-interest. I suspect that Aristotelians might reject this picture of morality as a distinct and possibly alien body of concerns.
Second, Scanlon’s view is morally conservative, in that it suggests that ordinary moral consciousness is led to deliberate only when a clear possibility exists that a proposed act might be wrong. But of course whether an act strikes us as possibly wrong may be shaped by moral commitments that are deeply mistaken or prejudicial. One could imagine utilitarians such as Singer arguing that ordinary moral agents have far too undemanding an understanding of their moral obligations to others, and so acts that strike them as morally unproblematic (eating meat, spending money on luxuries, etc.) ought at least to register in their consciousnesses as possibly wrong, and hence as subjects for moral deliberation.
The intriguing thing about these notions is the thought of what we are left with: with morality relegated to a type of internal alarm system, what is it that then determines right action? The same alarm system, as you say, so that whatever it leaves out and lets us do counts as correct, or moral? Morality then becomes a negating force on our actions, attempting to annul those which, by some internal code, do not fly. The possibilities here are even more problematic, since we then come to the crux of this and other moral questions: how is said alarm system calibrated? Externally and over time (I did this, I was shown/told/it was intimated to me afterward that it was wrong, and so now I check myself before a like action)? Internally, as some revelation of a permeating Good that is potentially obscured by other external or internal factors? Or by some combination of the two that gives birth to our regulating moral intuitions? Whether morality serves as a check or as a positive guide, the question of origin remains critical. An interesting argument could then be made as to which mode of morality produces the more ultimately “moral” person; then again, the origin of that label comes into question as soon as it is decided upon.
David, A description of how I experience the difference between mere deliberation and moral motivation… If I am offered my choice of meals or desserts from a menu, then I may deliberate relative to my preferences and select a course of action. But if I am given the same sort of choice AND I recognize that I may be violating some principle of fair distribution, or I become aware that my optimizing behavior may inflict pain, disappointment, or discomfort on someone else (e.g., if I start to head for the last piece of cake and a young child asks for it), then I experience this situation (a) in a phenomenologically different way and (b) as a morally charged decision. This suggests several things to me: 1) that part of the difference is intentional, that it has to do with how we represent the situation to ourselves; 2) that we do not always control how we represent the situation, though we can convince ourselves in some cases to ignore factors that might be morally important (the pro-PETA philosopher might, e.g., think that once certain ‘facts’ about animal suffering are made ‘clear’, it must take an effort for omnivore philosophers to eat meat; they must actively edit their representations); 3) similarly, consider the use of de-humanization tactics in propaganda or warfare: if we don’t see X as a person, then our decisions about how to treat X are not morally charged (at least along THAT axis). So contra John Turri I think that the experience of the situation as a moral situation is primary and that this moral awareness prompts us to look for (morally significant) alternatives.
Very interesting question. I think we can formulate a thesis compatible with what both John and Michael say:
(T) Moral deliberation is motivated *mostly*, but not *only*, by consideration of what’s wrong.
This is a thesis in *moral phenomenology*, if you will. (The topic of moral phenomenology is gaining some interest, and the Center for Consciousness Studies at the University of Arizona is planning to sponsor a conference on it in Fall 2005, organized mainly by Terry Horgan and Mark Timmons.)
BTW, I think we should distinguish *moral motivation* from *what motivates moral deliberation*. I think of moral motivation as the motivational power of moral judgements. Clearly, *some* moral judgements are the product of moral deliberation. Arguably, however, not *all* moral judgements are. In any case, *what motivates a process* is to be distinguished from *what the process’s product motivates*.
‘Wrong’? Why not ‘bad’? Putting things in terms of badness, rather than wrongness, seems to have some advantages. Consider the following situations:
(1) You are on the beach and someone appears to be drowning. Going in to save them would be risky. Suppose you think that, due to the danger, you are not required to try to save them; thus you think that remaining on the beach would not be wrong. Nevertheless, if you just stand there something bad will happen, so you still contemplate going in. Your moral deliberation is not here triggered by the fear that you might do something wrong, but rather by the fear that something bad might happen as a result of your action (or inaction).
(2) Similarly, you may believe yourself to have the positive right to perform some action (make use of a possession that is clearly yours, for instance), yet worry because doing so will make someone unhappy. Again, it doesn’t seem too hard, by filling in the details in the right sort of way, to construct a case in which you believe that you would not be wrong to exercise your right, and yet still wonder whether that is the thing to do, given the bad effects of doing so.
Both of these cases involve supererogation; that is, they depend on (and illustrate) the fact that moral deliberation isn’t always aimed at avoiding wrong action. Rather, it sometimes aims at figuring out how much above and beyond the call of duty (the mere avoidance of wrong action) one is going to go on any given occasion. (This seems to be part of what John Turri was getting at in his comment on maximizing.)
Of course, someone could insist that what the people in (1) and (2) are trying to decide is whether, in some sense, refraining from acting (1) or exercising one’s right (2) is actually wrong in this case. But this seems artificial: we all understand what is meant by the assertion that, in each case, the agent believes herself to be in a position where action A would not be wrong, but nevertheless wonders, quite reasonably and on moral grounds, whether to perform A.
Notice that, since an agent’s performing a wrong action is presumably a bad thing, ‘a bad thing might happen’ will always be true when ‘the agent might do something wrong’ is true. However, the converse does not hold. In other words, the account in terms of badness can cover every case that the wrongness account covers, but not vice versa.
However, it would be too simple to put the account as follows: moral deliberation is typically triggered by the fear that something bad might happen as a result of one’s action. After all, if I order badly in a restaurant I might fail to enjoy my meal, and this would be bad, but it doesn’t make my deliberation a moral one. We’re probably better off saying this: moral deliberation is typically triggered by the fear that something bad might happen to someone else as a result of one’s action. (Notice that we have to make a similar move if we talk about wrongness – one can make the wrong choice in the restaurant, and that doesn’t make it moral – or if we talk about betterness, as Turri suggests.) This too might prove too simple; bring on the counterexamples!
Dave,
Good to have you back from the road.
I’m not sure that I’m always prompted to deliberate by the thought of wrongness, since I’m often prompted to deliberate by the thought of obligatoriness. In any case, I’m wondering if you could say a bit more about what hangs on all of this. I thought that maybe there was a tacit objection to theories that put their principles in terms of rightness, but as you acknowledge, there’s no theoretical problem in doing so (since rightness and wrongness, suitably restricted, are two sides of the same coin), even if, psychologically, rightness is not the first thought we have. Put differently, what is the “import of the Scanlonian claim” other than as a passing observation on moral phenomena?
Great comments, everyone. Much to think about, and I won’t be able to respond to them all here. First, though, I very much like the term “moral phenomenology.” I’ll have to figure out a way to get to that conference in AZ, if that’s what Timmons and Horgan are doing. And yes, the question is not about moral motivation, but rather about the process triggering moral deliberation. Of course, our responses here depend on our views of what counts as specifically *moral* deliberation. As Bob and Troy have implied, *moral* deliberation seems distinct from other kinds of practical deliberation, but it’s not clear from people’s responses (a) what the distinctness consists in, and (b) what it is that marks some deliberation as moral.
To get to Josh’s question, yes, there’s more lurking here than the question about how to properly formulate a theoretical criterion in normative ethics. This talk about what triggers the source of moral deliberation is, for Scanlon, rather important, and I don’t believe much has been made of it (even by him). It is actually (on my reading) supposed to provide a grounding for the contractualist criterion itself, via considerations of moral motivation. If what triggers my moral deliberation is a thought that what I’m doing might be *wrong*, then the source of that triggering is likely to be the foundational desire to be able to justify one’s actions to others (on grounds they could reasonably accept). When I’m prompted by the thought of “wrongness,” it seems its source is indeed my desire to be able to justify what I’m about to do to others, a desire whose ongoing satisfaction has suddenly been threatened. Thoughts about “rightness” (or supererogation) don’t have that same source, it seems. But Scanlon needs something like that accompanying source to move from there (moral motivation) to the contractualist formulation, because my desire to be able to justify my actions to others can be satisfied only if the grounds of said justification are accepted by all similarly inclined others. So I view the contractualist formulation as the principle that renders moral motivation coherent.
If, on the other hand, moral deliberation were triggered by thoughts of rightness, justification wouldn’t really seem as pressing or even necessary. My thought that an action would be *right* or supererogatory isn’t accompanied by a desire for justification; I only need to justify those actions that may be construed as wrong in some way (or less dramatically: there’s less of a need for me to think about justification when deliberating about doing what’s right/supererogatory).
Thought-processes of “right” and “wrong” don’t seem to be the proper trigger, in my experience. Moral deliberation stems from the realization that what you do affects someone else. Getting milk for my coffee doesn’t come up for moral scrutiny until I consider that this may have an effect on someone/thing else’s well-being; in this case the cow or the milk-maid or the farm owner. Seems more to be the case that we hesitate to act, or reflect on our actions, when we consider their impact on the lives of other agents. Right-ness or wrong-ness doesn’t seem to come into play until we identify that our action has a consequence for some other agent.
Thanks for the clarification, Dave; though now (of course) I’ve got other questions. For instance, it strikes me as phenomenologically inaccurate to say: “When I’m prompted by the thought of “wrongness,” it seems its source is indeed my desire to be able to justify what I’m about to do to others, a desire whose ongoing satisfaction has suddenly been threatened.” If I’m prompted by the thought of wrongness to deliberate further, this isn’t because I want the ability to justify my action to others, but because I want it to be justified simpliciter, a justification which may or may not be provided by a contractualist principle. (From my normal experiences and from the Euthyphro Dilemma post from a couple of weeks ago, I want to say that it won’t be contractualist, but that’s not important here). Do you experience the moral phenomena differently, or maybe I’m misreading you here?
Also, you say “my desire to be able to justify my actions to others can be satisfied only if the grounds of said justification are accepted by all similarly inclined others.” Fair enough, but that seems to allow an alternative to your way of securing contractualism: can’t you be prompted by rightness and I be prompted by wrongness, yet both of us accept the reasons that justify our actions, rather than the psychological elements that prompt our acceptance of those reasons, and thereby come to agreement on all relevant matters? If so, then maybe Scanlon shouldn’t place much emphasis on the issue of the psychological prompt.
David writes: “my desire to be able to justify my actions to others can be satisfied only if the grounds of said justification are accepted by all similarly inclined others.” But why must the ‘others’ be similarly inclined? If I understand what the ‘others’ will count as an acceptable justification then, even if I myself have very different views, I might well be in a position to offer them a justification they will accept even when it is not a justification I myself would accept. What triggers moral deliberation on this sort of view, then, is not the thought that I might do something wrong; it is, rather, the thought that I might do something that others perceive as wrong.
But surely this is all wrongheaded: I am and should be more worried about doing something wrong than about doing something others might see as wrong, and so the foundational desire is not a “desire to be able to justify my actions to others”; it is simply a desire to be able to justify my actions or, more simply still, a desire to act justifiably. The ‘to others,’ in other words, is either superfluous (the idea of justification includes the idea that a reasonable justification will be accepted by reasonable others) or downright misleading (if it moves the spotlight away from the justification itself, and focuses it on the aim of satisfying the demands of ‘others’.) Since the reference to and role of the ‘others’ is, I take it, the quintessence of contractualism, I guess what I’m saying is that I’m pretty skeptical of the whole contractualist enterprise.
Like David, I am very sympathetic to the idea that moral deliberation is driven mostly (though not only) by consideration of what’s wrong. And I too have a sense that this is a significant thing – something that has important lessons for us about the nature of morality. Josh’s question is just what these lessons are.
David’s suggestion is that one central lesson is that moral deliberation arises in the context of trying to justify one’s actions to others. I share Josh’s skepticism about this, but in any case I want to offer – hesitantly – another one.
There are two traditional models of moral deliberation and justification. The first is that an action remains unjustified unless it is justified by correct moral deliberation. The second is that it has no “justification value,” as it were, unless justified by correct moral deliberation. But if David’s thesis about moral deliberation is on the right track, it may be that an action is *justified* unless moral deliberation empties or annuls its justification.
Uriah’s suggestion is interesting: “an action is *justified* unless moral deliberation empties or annuls its justification.” But then isn’t the actual psychological prompt for the deliberation somewhat irrelevant, so that we’d have to gloss it that an action’s justificatory status can be annulled by a deliberation that should take place? Otherwise (i.e., if we just required that an act’s justification can be overridden only by the agent’s actual deliberations), the Nazi could just say, “Well, I wasn’t prompted to deliberate about the mass murders I committed, so it must have been justified.” Furthermore, wouldn’t we want to say that the deliberation must not only be prompted according to some normative (rather than psychologically descriptive) standard, but also that it must take place correctly? Otherwise the Nazi could say, “Well, I deliberated when I was supposed to, but I came to the conclusion that my mass murders were permissible.” If this is right, then doesn’t the correctness of the deliberation and the reasons the deliberation is correct, rather than the fact of deliberation itself, explain an act’s status as justified? (Okay, I think I’ve just reverted to the argument from the Euthyphro Dilemma post, but maybe it’s relevant here.)
That’s a good point, Josh. My colleague Michael Gill and I have been kicking around this unusual model of moral justification for a while now, and one of the big problems (perhaps the biggest) we found ourselves facing is that of the Nazi unprompted to deliberate. (I actually think we used the exact same example in our conversations! But then again, who doesn’t use the Nazis for these purposes…) We’re slowly crafting a strategy for dealing with it, but it’s not in shape just yet…
I like Uriah’s move, but my earlier comments glossed over a crucial bit of the Scanlonian formulation that seems to have led to the Nazi counterexample. What contractualists are (or should be) after is not that our actions be *justified* to all similarly inclined others, but rather that they be *justifiable* to all *reasonable* others on grounds they could not reasonably reject. So the default as we live our lives would be that our actions are indeed so justifiable until we run across a situation that gives us pause — we’re unsure whether or not what we’re considering would indeed be justifiable — and we’re spurred to moral deliberation, triggered by the desire in question. The Nazi who thus isn’t spurred to do so has, then, a defective character or reasons-responsive set of dispositions. Now I grant that the two mentions of “reasonableness” in the formulation are loaded and controversial, but that’s not the issue here. Rather, it’s simply whether or not the trigger of moral deliberation is the thought of potential *wrongness*, which I continue to think it is.
And to respond briefly to Troy’s point, the “others” part of the formulation is crucial (and not superfluous or misleading), simply because those “others” are supposed to be a restricted set of somewhat idealized, reasonable people. It’s not just any old justification that counts, nor is it any old group of people to whom justification is owed. It’s a specific set of reasonable others with whom I construct moral principles.
Dave, I understand that contractualism wants the parties to whom agents would justify their actions to be reasonable, but if the issue is just a simple phenomenological question of when we’re prompted to deliberate, then the Nazi’s lack of being prompted to justify his actions to all reasonable others is very relevant to showing that the phenomenology of deliberative prompting doesn’t really seem to do any work towards generating the contractualist principle. The only way that generation could happen is if you restricted your phenomenological test subjects to non-Nazis to begin with, which seems slightly presumptive.
And, similarly, while the part about justifying one’s actions to others is of course what contractualism wants to get at, what Troy (as I’m reading him) and I were suggesting is that the phenomenology doesn’t bear this out in our cases. What we experience is wanting our actions to be justified simpliciter, in which case the “to others” is misleading or superfluous phenomenologically (at least in our cases). Isn’t it a problem for the possible rationale for contractualism, rather than a problem for the phenomena, if contractualism is supposed to draw support from the phenomena here and it turns out that no such support is available?
I think the Nazi *is* prompted to moral deliberation by his desire to be able to justify his actions to others. His mistake is a factual one, viz., thinking that those he’s victimizing aren’t part of the group to whom he owes justification. But the moral phenomenology would be the same in his case, triggered by the same sort of desire. So I don’t think the Nazi case undermines this general point. The prompt and the triggering desire are the same, and they are also what generate support for the contractualist principle (on the story I’ve already told). What differs are the sorts of specific cases in which the Nazi is moved to deliberate, but that’s not what’s at issue here.
As for the issue of justification (to others), I may try to address that in another full post.
Dave, if I understand correctly, you’re suggesting that a certain claim about what typically prompts or triggers moral deliberation might be used as a premise in an argument for a contractualist position in ethics (in particular, a position such as Scanlon’s). If so, I’d like to hear more about how such an argument would go. On the face of it, there appears to be a significant hurdle to jump in the form of the “naturalistic fallacy”. Perhaps it *is* the case that people are prompted to moral deliberation by their desire to be able to justify their actions to others. But surely we don’t want to infer from this that people *ought* to act only in such ways as can be justified to others.
(As to the question of what prompts my own moral deliberation, I find it very hard to say.)
Very briefly, the argument I have in mind is a kind of transcendental argument: what renders the phenomena in question (the prompting of deliberation by thoughts of wrongness, along with the triggering source of moral motivation) *possible* is something like a belief in the contractualist account of wrongness. Notice, then, that these are two descriptive claims (what renders our moral desire possible is our *belief* in the contractualist claim about wrongness). I’m not really claiming any normative payoff (yet), so no naturalistic fallacy has been committed. Of course, I’m also not claiming that this is the move Scanlon himself makes. It’s just the move I find most plausible for contractualists to make. I think the most interesting stuff in ethics is actually descriptive: trying to figure out just how we are as moral agents, and then seeing if we can find out the conditions that either justify those states or at least render them possible. (This is a thoroughly Humean methodology.)