I am intellectually persuaded by the arguments for Consequentialism. However, like most people in that situation, by my own lights I fail to live up to the demands of that moral theory by a wide margin. And again, like most in my situation I suspect, this is a source of disquiet but not persistent hand-wringing. But there is another moral view one might attribute to me. It is more deontological in tone. And this other moral code is connected much more directly to emotional reactions such as guilt and moralized anger. If others cheat in a business deal or steal (except in desperation) and I am close enough to the situation, I will likely have an engaged moral reaction to such a person. I will speak badly of them, refuse to hang with them, and think poorly of them. Yet the decently well-off person who fails to contribute much money to an effective charity does not elicit such reactions in me to a similar degree. Similarly, while I myself regularly fail to be governed by consequentialist morality in my actions or my emotional reactions to my or others’ actions, I am quite effectively governed in both my actions and my emotions by this other moral view. My conscience, let’s call it, effectively keeps me from doing a wide range of things such as lying, cheating, stealing, hurting and so on. In most cases I simply would not dream of doing such things, and if I did somehow do some such thing (or even fear that I did) I would likely feel really bad about it. Such governance in deed and emotion would, if I believed in commonsense (more deontological) morality, pass for tolerable moral motivation.
We are used to wondering if a moral judgment necessarily motivates. This is the debate about judgment internalism. I don’t do research on this question, but for my money those who argue against judgment internalism, such as Svavarsdottir, are winning. But they win partly by positing the coherence of an amoralist—someone who makes sincere moral judgments but sees no reason to live their life by moral standards. I however will just ask you to take on faith that I am not an amoralist. Since the usual way the judgment internalist debate goes is to wonder whether a person necessarily has at least some (possibly quite small) motivation to act in conformity to their moral judgments, I likely pass this test anyway. But a stronger test seems appropriate for the role that morality plays in the life of someone who is not an amoralist. It strikes me as plausible to say that, in the life of a person who is stipulated not to be an amoralist, a case could be made that the person’s moral view is the one that governs, in the right way, their behavior and emotional life rather than the moral view that they argue for in journal articles. By that standard I am not a consequentialist, and I have yet to meet many who would count as consequentialists by that standard. Let me try out the hypothesis that a person’s moral view is the one that governs their actions and emotions rather than the one that governs what they say and write in intellectual circumstances. If that is right, there are very few consequentialists out there. I don’t think this shows that consequentialism is defeated as a moral view, only that it is much harder to count as believing such a theory than we may have thought and harder to genuinely persuade someone of the truth of a moral proposition than we tend to suppose. (I should mention that several years ago Steve Wall, Dan Jacobson, and I talked about these issues and, it seemed to me, were all initially drawn to the conclusion I outline here. Some of the ideas presented here are perhaps as much theirs as mine.)
Dave,
I don’t really have an argument against your position; I think it just comes down to how we understand what a belief (or a judgement, or a view) is. But I, for one, like to keep these concepts intellectualized. On your view, it will turn out that no “practically inapplicable” views have proponents. No one is really an error theorist, a Cartesian skeptic, a solipsist (probably), etc. But it seems like each of these views might be true, and thus that we should not, by conceptual fiat, rule out the possibility of coming to hold them.
Nevertheless, I agree that space should be made for talking about belief-like (judgement-like, view-like) attitudes or mental states that play the deliberative role beliefs usually play. I like to call them ‘beliars’. They are, obviously, quite common among philosophers who hold counter-intuitive views. But I think they are found elsewhere, as well. For example, I have certain OCD-like tendencies. One manifestation is that I feel compulsions to do silly things, like avoid cracks on the sidewalk. But, phenomenologically, these compulsions do not present like mere urges or even desires. They often present as belief-like, or intuition-like. There is a sense that failing to crack-walk might have dire consequences. But even when I act on this beliar, it still seems to me (phenomenologically) that I am doing something different than when I deliberate using a proper belief. A committed consequentialist might say the same about his day-to-day deontological moral behavior.
Hi David,
I posted something related to this at prosblogion a few weeks ago. Here’s a strange fact about this sort of case. I’m pretty sure that any avowed consequentialist will also admit that, for sure, he will in the near future fail to fulfill a consequentialist requirement to perform an action A such that (i) his commitment to consequentialism won’t have lessened (ii) he has no justification for failing to perform A and (iii) he has no mitigating reason for failing to perform A. If all that’s true, then I don’t think you’re genuinely a consequentialist. It amounts to asserting that I’m a consequentialist but my commitment to consequentialism is not so strong that I do not foresee my own unmitigated non-consequentialist behavior. The problem of course generalizes to other moral views.
Mike,
I will admit that, for sure, I will in the near future go to bed later than I believe I ought to, though (i) my commitment to the idea that going to bed earlier would be best for me will not have lessened, (ii) I have no justification for failing to go to bed earlier and (iii) I have no mitigating reason for failing to go to bed earlier. I don’t think this means that I don’t really believe going to bed earlier would be best for me. It just means that I’m suffering from weakness of the will. We all recognize that immediate desires can lead us to act irrationally. But surely deontological attitudes can be just as powerful motivators as desires. So why can’t the consequentialist just appeal to weakness of the will as I have?
Maybe you in-between believe in consequentialism:
http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/ActBel.htm
I think David F. is right to say that what we want to say here depends in part on how we understand what a belief is. On a widely accepted view in phil mind, being a belief with a certain content depends on its functional role, i.e. what stimuli cause it and what it causes–where the latter includes both other beliefs and (together with appropriate desires) actions.
So, here’s one way to think of the puzzle. Suppose functionalism is true and that acceptance of a moral theory is a matter of having a belief in the theory’s truth. On those assumptions, it’s easy to see how someone could be an amoral consequentialist who never does as consequentialism requires; she would be someone who has no desire to do what she regards as right. The puzzling feature of the case Sobel (and Wall and Jacobson) are interested in is that it involves someone who does have a desire to do what’s right. Appropriate belief/desire combinations need to be compatible with some weakness of will (as in DF’s not going to bed early case). But if we think belief is given by its functional role and that there is an appropriate desire, we’re going to have to posit an incredibly high degree of weakness of will in their case to explain the behavior pattern and hold onto the belief, and that’s hard to square with functionalism. Some folks think functionalism isn’t the truth about all our mental states (most famously the phenomenal ones). But it’s pretty widely accepted for beliefs.
I don’t think this means that I don’t really believe going to bed earlier would be best for me. It just means that I’m suffering from weakness of the will.
Dave (and Dave), Weakness of will is a mitigating reason, so it’s not the sort of case I’m describing. You will fail to live up to consequentialist standards even when it is not a case of weakness of will. There will be cases in which you won’t even try to live up to those standards. It’s not true that every time a consequentialist fails to live up to the consequentialist standard he is suffering from weakness of will. Sometimes he just can’t be bothered. Here’s a useful example. I believe that it would be best overall if I get off the sofa now and mow the lawn, but I’m not doing it. My failing to do so is not a matter of weakness of will; I’d have no difficulty doing it if I were sufficiently committed to my principles. It’s a matter of not having that sort of commitment to consequentialism. I think that’s true for every consequentialist (non-consequentialists, too).
No doubt what it takes to have a belief in an ethical theory will have something to do with what it takes to have a belief generally. But part of my interest in these questions stems from the thought that how connected belief in ethics is to action and emotion (at least in the moralist—that is, in the person who is not an amoralist) has to do with the topic of the belief. I think it makes sense to think that whether someone really loves somebody is more connected to action and emotional responses than whether someone really accepts the truth of relativity. Partly this seems assured because there are so few actions and emotions which are especially fitting with the belief in relativity whereas we can say a lot about what actions and emotions are fitting with being in love with someone. So the “intellectualist” view seems more plausible when the subject is whether I count as believing in relativity and less plausible about whether I am in love with someone. At any rate this seems plausible enough to me that I am not sure Faraci can rightly claim my view commits me to denying the existence of Cartesian skeptics.
A person might dispute this by saying that if I really believed in relativity I would be expected to take bets which offer things I want as a prize if relativity were true, or at least to regret failing to do so. I have arguments elsewhere against direction of fit accounts of the difference between beliefs and desires, and Copp and I there offer arguments against the idea that all desires or beliefs should be expected to show themselves in betting behavior in this way. However, I would need to think more about whether our arguments count as a decent reply to the objection I consider here.
One thing to notice here is that the phenomenon I mean to draw attention to is different from ordinary weakness of will. In the moral case it is not just that I don’t do as I think I ought by the lights of the theory I claim to accept, it is that I also don’t get worked up about failing to do so.
It occurs to me that my most recent post is perhaps guilty of confusing what it takes to be in love with someone with what it takes to believe that one is. I’ll think more about that when I get a moment.
I think that we can explain the fact that David’s moral theory fails to govern his actions or emotions by the fact that his moral theory conflicts with his theory about what he has most reason to do, all things considered. I believe that David generally acts as he believes that he ought to act, all things considered. So his actions are governed by his theory of practical reasons even if they aren’t governed by his moral theory. So we don’t have to appeal to “an incredibly high degree of weakness of will” in his case, as Jan suggests. And I suspect that he generally feels guilty when what he does is both wrong and contrary to what he has most reason to do, but that he doesn’t feel guilty when he does wrong (by the lights of his moral theory) but has most reason to do so (by the lights of his theory of practical reasons). So I don’t see what the puzzle is here. Why should David or anyone else expect his moral theory to govern his actions and emotions when David doesn’t believe that his moral theory should govern his actions and emotions? After all, David has written articles expressing his belief that he doesn’t always have most reason to act as morality demands and that it would be strange to blame someone for acting as he has most reason to act. So should we really expect a moral theory to govern someone’s actions and behavior when he himself doesn’t believe that they ought to? Now, of course, Dave’s question is whether he counts as being a consequentialist if this moral theory doesn’t govern his actions and behavior. And I don’t see why not. We, as opposed to him, may believe that the true moral theory ought to govern people’s actions and behavior because we believe a person can truly be morally required to act only as she has most reason to act. But Dave has expressly said that he doesn’t believe this.
Doug,
Thanks, I wondered about that direction too. But I think I disagree that my subjectivism is playing an exciting role here. First, the phenomenon I mean to be calling attention to is, I believe, rather common for consequentialists whether they are subjectivist or no. Second, I do not think that the moral theory that effectively modifies my actions and emotions is more closely tied to my own theory of reasons than consequentialism. Well, at least that seems clearly true until you bring in legal sanctions–then perhaps the more deontological morality would have some extensional advantage. But the deontological morality effectively governs my interpersonal interactions and emotions where legal sanctions are not at all salient.
Instead, I would hazard a guess that it is more difficult to reorient one’s moralized emotions around an intellectually favored moral theory (when it conflicts sharply with aspects of commonsense) than we suppose. I think the weight of my moral upbringing, together with the social sanctions imposed by my culture in favor of commonsense morality, better explains this than my subjectivism does.
Dave’s question is whether he counts as being a consequentialist if this moral theory doesn’t govern his actions and behavior. And I don’t see why not. We, as opposed to him, may believe that the true moral theory ought to govern people’s actions and behavior because we believe a person can truly be morally required to act only as she has most reason to act. But Dave has expressly said that he doesn’t believe this.
Isn’t the question whether it’s true that he’s a consequentialist, not whether he can consistently believe it? Smith might believe he’s the best arm wrestler in the room, and at the same time lose to everyone in arm wrestling. Do we want to say that he still counts as the best arm wrestler since, by his lights, even someone who loses all the time can be the best?
Brad C: thanks for bringing that to my attention. One of the things I was hoping would happen here is that I would learn that others have already thought about such issues so your pointer is just the sort of thing I was hoping for. Look forward to getting to the Schwitzgebel paper.
Mike: have you written on these issues outside of the blog you mentioned?
I perhaps should say that there are general issues here that I am largely hoping to work around. I have some fear of flying and perhaps allow this to affect my actions (maybe reasonably–irrational fear is bad to experience too). But I think it clear that I count as believing that flying is safe. But if I felt fear when loved ones flew as well and if I urged them not to fly in the same way I sometimes urge myself not to fly, then I would start to think it a more interesting question whether I really believe that flying is safe. In other words, the disconnect between action and emotional responses on the one hand, and belief on the other, in the ethical case here is more significant than in the fear of flying case.
One more random thought: I would assume that the best functionalist story about belief would give some role to both the intellectualized (what do I say in seminar) side of the story and the action/emotion side of the story. So if there were cases where there is no action/emotion side to the story, the intellectualized part would carry the day about what the agent believes. But in the interesting cases where there is a conflict, I am positing that at least in the case of moral beliefs, the action/emotion side of the equation is the more weighty part.
Hi David,
First, the phenomenon I mean to be calling attention to is, I believe, rather common for consequentialists whether they are subjectivist or no.
But isn’t it also rather common for consequentialists (subjectivists or not) to think that they have sufficient reason to act in ways that consequentialism deems immoral? Indeed, I can’t think of any consequentialist who has explicitly claimed or argued that we always have decisive reason, all things considered, to act as consequentialism demands, and yet I can think of many who have explicitly claimed and/or argued that we often lack decisive reason to act as consequentialism demands. So I take it that the phenomenon that I’m talking about (the phenomenon where the consequentialist takes there to be sufficient reason to act immorally) is quite common, and may be just as common as the phenomenon that you’re talking about.
Second, I do not think that the moral theory that effectively modifies my actions and emotions is more closely tied to my own theory of reasons than consequentialism.
Could you give some examples? When it comes to not giving enough money to charity, isn’t that more in accord with your theory of practical reasons than it is with your consequentialist moral theory? On your theory of practical reasons, you do have sufficient reason to give precisely as much money to charity as you do, right? So there’s no weakness of will there, right? But, on consequentialism, the amount that you give is grossly inadequate, right?
Dave,
My interest in the question has more to do with understanding hypocrisy. Does it count as hypocritical that I avow consequentialism and just fail to live up to it? I’m not sure. I think it’s related to your concern which seems to be whether you should keep calling yourself a consequentialist. But I’ve nothing written up.
I think your worry sensible. There is surely something wrong with a moral theory which does not guide the actual behavior of even its most famous advocates and which entails that the best people we know are scarcely better than the worst people we can think of.
To be fair to consequentialism and Singer, I can’t think of any moral theory that has an advocate who meets its demands, and consequentialism says nothing about how to rank persons from best to worst (or how to measure the distance between better and worse persons). Gotta run. I sense that someone’s bad-mouthing consequentialism in some disreputable corner of the internet…
Doug,
Well I think most people working in normative ethics likely lack a view about whether one always has most reason to act as morality requires. I think Sidgwick thought that one would never be irrational to do as C recommends. I don’t think of Mill as having a view on the question. Even if there is some tendency for contemporary C’s to be more likely to think that it can be rational to be immoral, I do not know that this is a majority view among C’s. Do you think it is? Even if it were a majority view, I still think the attitudes I talk about are likely common among those who take no stand on this question or who are rationalists.
It is notoriously hard to say what my view of reasons claims that a specific person has most reason to do. Nonetheless, I see your point that insofar as we are comparing C to a moral theory that lets me do as I like much of the time, C is more likely to conflict with my theory of reasons in a broader range of cases. However, since I claim that some of my concerns are to treat people morally, deontological theories will not scratch that itch (I say). If we compare the different moral views where they both give specific instructions (and the costs to me are otherwise low) such as whether to throw the switch in the trolley case, then I think non-C views will give answers that address my concerns less well in most cases.
Mike,
On the question of hypocrisy, it seems to me at least that to be the ordinary kind of hypocrite the person must think that they have most reason to do as morality commands (and perhaps have a tendency to negatively evaluate others for similarly failing to act according to C—something I doubt is common in those who think of themselves as believing C). Further, I guess I want to say that it would be something bad that I don’t have a name for to change one’s view of what morality requires simply to make it the case that one counts as living up to the morality one believes in.
Tomkow,
I myself think the criticism you offer is not very telling. Notice that I was not offering an argument against C, only saying that perhaps it takes more to count as believing it than many of us are bringing to the table. As I said, I think of myself as persuaded by the arguments I have read that C is the best moral theory out there. That few are living up to it is not a refutation–and if it were then there would be periods in human history that would refute any sane ethical theory. Further, there are whole groups (such as, perhaps, in the Philippines) where people can come to think of themselves as their brother’s keeper. C does not ask something that is incompatible with the human frame–for it obeys ‘ought implies can’.
On the question of hypocrisy, it seems to me at least that to be the ordinary kind of hypocrite the person must think that they have most reason to do as morality commands…
In order to keep the situation you’re describing interesting, at least as I see it, you’ve got to show that the problem is independent of much of what has been said by way of mitigation. Of course there are cases of weakness of will, and probably that explains some failures to follow C, and (I guess) there are cases where ‘overall practical reasons’ recommend against C and that explains some failures to follow C. But apart from those cases (however extensive and mitigating they are), you’ve got the common occurrence that C-agents fail to live up to C and the explanation can’t be chalked up to weak wills, overall practical reasons, etc. And it can’t be chalked up to obvious insincerity in avowing C. The explanations in many cases make the agent look culpable for failing to live up to C. The failure is explained by things like not feeling like doing what is required, or not being in the right mood to act morally, or not feeling particularly generous, etc. It’s an empirical question, but I’m sure there are lots of cases like this where otherwise decent consequentialist agents give a pass to pretty clear violations of C. There seems to be something hypocritical (or hypocritical-like) in both avowing a commitment to C and giving a pass to these sorts of violations of C, even if the agent is not obviously insincere in avowing C. I guess it could be some other moral failing. I’m incidentally not picking on C or C-agents; I think the problem is broader.
Mike,
Maybe the place to start is to wonder what “hypocritical” is adding to our assessment of a person who we already understand as not living up to their own moral view. Obviously the word hypocritical adds a negative evaluation of that. To my ear, one is hypocritical if one has a kind of double standard, judging others by one set of standards yet living by another. Since I think most C’s would not be any more harsh on others for failing to live up to C than they are on themselves, it was not clear that this additional bit is in place.
Additionally, if we include in the setup that the agent not only thinks that morality requires that she φ but also thinks that she has most reason to obey morality in this situation (and it seems to be the belief rather than the fact that is relevant here), and yet she fails to φ, then I am thinking she cannot avoid the charge of weakness of will.
Maybe the place to start is to wonder what “hypocritical” is adding to our assessment of a person who we already understand as not living up to their own moral view.
Isn’t it hypocritical to claim to be a moral vegetarian, say, and to be a carnivore 3 days a week? It’s expressly taking the high moral road, but living on the low road a lot. But it gets much less clear when the demands get higher. For some reason, I don’t think it’s hypocritical when a moral vegetarian chooses not to purchase every item he owns from other avowed moral vegetarians.
Maybe this only makes things more murky. If so, ignore it. I’m wondering whether C’s demands are sufficiently high that when C-agents fail to live up to C all the time, they’re a little like the moral vegetarian who is not bothered at all by making no effort to economically support other moral vegetarians.
Hi David –
Late to the party, so apologies if I’m duplicating.
I think Doug is on to something. I think we should draw a distinction between your “moral theory” and your “what matters most” theory. I’m tempted to think that my emotions, reactions, etc., etc., really reflect my theory of “what matters most”; but since I’m always spouting off about how great consequentialism is, my theory of what matters most does not include (or includes imperfectly) my moral theory. A consequentialist may not realize that their theory of what matters most includes morality only imperfectly, but insofar as non-consequentialism governs her actions and emotions, and she is explicitly committed to consequentialism as a moral view, why not leave this possibility open?
Dale, (and Dave and Doug),
Presumably, what determines what counts as one’s theory of X is static across areas of inquiry. If non-Consequentialism governs Dave’s actions and emotions, then that should be relevant to the determination of both his moral theory and his theory of “what matters most,” or of neither. Of course, we might say that it’s a fallback: If you profess a particular theory of X, then that’s your theory. If you don’t, then we “read” your theory off of your emotions and actions. This move seems suspect to me. But, in any case, so long as there is an avowed Consequentialist who professes to be a Consequentialist “all the way down,” we can raise Dave’s question for her. And it would seem ad hoc to grant Consequentialism as her moral theory but insist that non-Consequentialism is her theory of “what matters most.” What could justify using different criteria for determining “her” theory in each case?
I suspect that such people exist and that their (and even Dave’s) lack of C-action is (as Dave suggested) more about emotional override than theoretical tension. I can cite my own case as some evidence. Commonsense morality is frequently a guide for me–even one that I actively care about. But as a Normative Error Theorist, there’s nothing in my professed theory of “what matters most” (nothing) or my moral theory (nothing) that could clash! So the question (as Dave originally asked) seems to just be whether I count as an Error Theorist at all.
Mike,
I apologize for my breach of etiquette in not replying to you first; I wasn’t quite sure what to say about weakness of the will.
In your lawn-mowing case, what do you mean when you say you’re not sufficiently committed to your principle? Do you mean that you suspect it might not be true or merely that it doesn’t motivate you? If the former, then I’m not sure that most Consequentialists do have that problem. If the latter, then I think that’s what I was thinking of as weakness of the will (perhaps, as you point out, inappropriately). But what we call it doesn’t matter. I profess to believe X, call myself an X-ist, but fail to take X-actions. The question is whether I’m really an X-ist.
One way of reading this question (the way I was) is as asking whether being motivated to take X-actions is relevant to whether I really believe X, under the assumption that whether I’m an X-ist is determined by whether I really believe X. In some cases, though, whether one believes X or professes to be an X-ist isn’t relevant to whether one is an X-ist at all! The examples you’ve been citing (vegetarian, champion arm-wrestler) seem to be of this kind. So I think there are two questions. First, is being a Consequentialist like being a champion arm-wrestler or like being an Atheist (or something else which just depends on what you believe)? If the former, then it seems clear that Dave (and likely everyone else) is not a Consequentialist. If the latter, then we return to the question of whether motivation to act in certain ways matters for belief-ascription.
A consequentialist may not realize that their theory of what matters most includes morality only imperfectly . . .
Maybe I’m alone in finding this puzzling. If your theory of “what matters most” includes morality only imperfectly, isn’t that a bit worrisome already? I still think of moral reasons as (just about definitionally) the weightiest practical reasons one might have, so I don’t get the idea that an agent might be in a position to relegate moral reasons to ‘kinda mattering’. How does one get to do that? Point me to something that might make this (mildly) credible.
FWIW, Like Dale, I think Doug is onto something, though I suspect we need some additional resources to tell the whole story. What makes the story harder to tell is (among other things) that you (David S) are already rather sophisticated and aware of the possible tensions in your own psychology. I’d normally be tempted to say that people such as yourself both believe and disbelieve the same theory, but not in such a way that you are in a position to recognize that. But the fact that you yourself raise the worry makes the last part less plausible about yourself.
Just in case restating my own puzzlement is a fruitful contribution to a conversation . . .
Dave, maybe you believe C but do not alieve it?
(I don’t think it quite fits, really, but something in that neighborhood.)
Hi David, I’m late to the party too but I find this extremely interesting–and it resembles my own experience as well. I think it might be helpful to separate actions and emotions. As many have pointed out, because of weakness of will, virtually no one can say that their actions match up with the ethical theory that they profess to believe in or accept. But it’s more common for emotions, especially more reflective second-order emotions like approval or disapproval or remorse, to reflect our professed theories. Your description makes it sound like consequentialism doesn’t capture your considered feelings or intuitions about what you ought to do.
Mike brought up vegetarianism and it’s a good example. If I believe I ought to be a vegetarian I may nevertheless eat meat on occasion due to weakness of will. But if it doesn’t bother me at all afterwards, if I feel no guilt and even approve of my action upon reflection, then I’m inclined to say that I don’t really believe I ought to be a vegetarian.
Tamler, I clearly need to read your paper that Jamie brought to my attention. I especially agree with what you say in the second paragraph above when there is another set of norms that does govern my actions and emotions and that other set has a sensible claim to count as a moral view. I think it sane to imagine an amoralist who sincerely thinks Theory X is the truth about morality, yet who thinks morality not worth caring about. I am less tempted to say that such a person does not count as believing Theory X than the person who has a rival moral sense that controls their action and emotions in the way we traditionally expect morality to do.
Tamler,
Very sorry, after a cup of coffee I realized I was confusing you with Tamar. I don’t know why I ever try to do anything before coffee (except make coffee).
David, thanks. One point of clarification–the paper Jamie referred you to is by Tamar (Gendler), not Tamler… (Though I wouldn’t mind a J-Phil publication–I hope my tenure committee makes the same mistake.)
No problem, and I agree that the vegetarianism case differs from the amoralist one. I’m assuming the person who claims to believe in vegetarianism cares about morality in general. Once you throw in your lot with the morality game (as the amoralist does not), then I think moral beliefs are more closely tied to emotions. On the other hand, I imagine someone could be a selective amoralist…
Dale,
If I am getting you, I don’t think that explains good parts of what I have in mind. As I see it, it is not just that I have private concerns which, by my lights, are more important than moral concerns and this explains the disconnect between my moral view and my view of what matters (to me). Rather, at least it seems to me, I would feel far more actual guilt about tipping a good waitron poorly than about failing to donate so that people in places with dirty water get Oral Rehydration packets. You might try saying that this is because I care more about not being seen to be a poor tipper or whatever, but I don’t think that is really what is going on. When I feel such emotions (and I really hope and suspect this is common) it has the guise of a moral concern–the pang is the pang of moral failure (not just the vanilla pang of failing to promote my desires).
I’ll try to think more about this as several folks are pushing in this direction. I really hope my experience here is not so specific to me as to be explained by an attraction to subjectivism. If that were the case, I would just be wrong about what I take to be a somewhat common aspect of our moral experience–something that I would have assumed was common to most consequentialists regardless of their other allegiances but broader than that.
I’m not sure whether you are a consequentialist. What should we say about the person who intellectually rejects consequentialism, but on different grounds acts in ways that do in fact maximize good consequences? I’m tempted to say that such a person is not a consequentialist, despite the fact that her acts meet with consequentialist approval. This prompts me to say that you are indeed a consequentialist. But perhaps we should reserve that term for only those whose intellect _and_ whose activity conform to the theory.
I think would-be Christians have similar debates about what it takes to count as a Christian: belief? faith? works? love? Maybe the answer depends upon whether you would maximize good consequences by self-identifying as a consequentialist.
Eric,
That is interesting but I want to hear more about the case. I would think the best case to think about is not just one where the person intellectually rejects C but happens to in fact maximize goodness, but rather the case where she intellectually rejects C yet has both her actions and emotions non-accidentally attuned to whether, in her own mind, she is serving C or not. That is, she tends to feel real guilt (and not just say she is guilty) when she fails to serve what she thinks C requires, and this tendency is quite robust. If that is the case we are considering, what would you say about it?
David,
You seem to be comfortable with the idea that “the best moral theory out there” is one “few are living up to”. Apparently you are willing to cut most people, including yourself, quite a bit of slack when it comes to “living up” to that best moral theory … “weakness of will”, &c.
But I’m guessing that there are differences in how much slack you are prepared to cut people, differences that don’t have a consequentialist justification. Thus I’m guessing you think people who don’t feed their own children are somehow worse than people who don’t feed — via charity — other people’s children?
But worse how?
The discrimination can’t be justified on consequentialist grounds so I take it you don’t think it a moral difference. What kind of difference is it?
David,
I think the case could go in a number of different directions. Here’s just one case: suppose God is a consequentialist, and commands Philip to V, on the grounds that V-ing maximizes good consequences. Suppose that Philip does not self-identify as a consequentialist, but instead as a divine command theorist. Finally, suppose Philip does not believe that God is a consequentialist. Philip Vs simply because God told him to. I don’t think Philip is a consequentialist who has failed to admit this to himself, not even if this takes place again and again. Philip is not a consequentialist at all.
It is trickier to describe a case where Philip does believe that God is a consequentialist, but that the rules for God are different from the rules for man. (Cf. Rawls in “Two Concepts..”)
Hi David,
Would you argue as follows? Doug’s actions and emotions are not governed by his legal positivism conjoined with his views about the content of the law. For instance, Doug knowingly breaks the law on a daily basis in that he often exceeds the posted speed limit and often goes through various stop signs without coming to a complete stop. He has no negative feelings regarding law-breaking of this sort so long as the law-breaker exercises due caution, as he does. Also, Doug used to live in South Carolina at a time when its constitution prohibited interracial marriages. Yet Doug had no negative emotional reactions to those who broke the state’s anti-miscegenation laws. Thus, we have to wonder whether Doug really is a legal positivist who believes that the law requires him to drive no faster than the posted speed limit, that drivers must come to complete stop at stop signs, and that miscegenation was prohibited in South Carolina prior to the amendment of its constitution in 1998.
I assume that you wouldn’t argue in the above fashion. So why would you argue, in like fashion, that you aren’t really a consequentialist who believes that everyone is morally required to maximize the good (impersonally construed) given that consequentialism doesn’t govern your actions and emotions? What is, by your lights, the relevant difference between the law and morality that makes the one but not the other line of reasoning acceptable?
It seems to me that when it comes to whether you believe some proposition (even a normative one) what’s relevant is whether you’re disposed, say, to use it as a premise in your reasoning and to assent to it in certain relevant contexts, not whether you’re disposed to live by the norms it expresses. After all, presumably the criteria for believing should be the same across all propositions (normative or non-normative).
On the other hand, consider Mike whose actions and emotions are not governed by his impartial concern for the well-being of Jane. For instance, Mike slaps Jane in the mouth each time she utters a word he does not like. He knows that does not display impartial concern for Jane. He pokes her with a knife when she fails to ask permission to talk. He knows that does not display impartial concern for Jane. He chokes her each time his dinner is late. He knows that does not display impartial concern for Jane. Indeed, he knows that all of this behavior displays a complete lack of concern for Jane’s well-being. But he does all of this without emotional qualm. He has no negative emotional reactions at all to his own conduct. Thus, do we have to wonder whether Mike really does have the impartial concern for the well-being of Jane? I’d say, yes, we should be wondering that.
David, it strikes me that your position here parallels Strawson’s in Freedom and Resentment. According to Strawson, incompatibilists hold IN THEORY that determinism rules out moral responsibility–and yet when we look at our responsibility-related actions, practices, and sentiments, the truth of global determinism is completely beside the point. So he accuses philosophers who develop their incompatibilist theories of “overintellectualizing the facts.” And then he writes:
“Perhaps the most important factor of all is the prestige of these theoretical studies themselves. That prestige is great, and is apt to make us forget that in philosophy, though it also is a theoretical study, we have to take account of the facts in all their bearings; we are not to suppose that we are required, or permitted, as philosophers, to regard ourselves, as human beings, as detached from the attitudes which, as scientists, we study with detachment.”
Perhaps a lot of this could apply to philosophers who develop theoretical defenses of consequentialism in scholarly journals but whose actions and sentiments reflect core non-consequentialist commitments.
David,
It just occurred to me that you could cut through some of these worries by distinguishing between being a consequentialist (as in title of the post) and believing consequentialism. You (probably) could not be a consequentialist without having (some) impartial concern for others. That’s what it is to be a consequentialist. But (again, probably) you could believe consequentialism without having the relevant concern. That is, you could believe consequentialism without being one. Though, the idea of a purely theoretical consequentialist is pretty creepy.
Hi Mike,
Yes, we certainly have to wonder whether David cares about acting morally as much as he does, say, about his material comfort. As a matter of fact, I think that it’s clear that he doesn’t. So, perhaps, David is a consequentialist who doesn’t care as much about acting morally as he does about some other things. It seems to me that he, nevertheless, counts as a consequentialist (i.e., as someone who believes consequentialism).
Tomkow,
I’m not sure what kind of slack you take me to be cutting people like myself who do not live up to our own moral theory. I think I am acting wrongly, but I don’t get very worked up about that. Perhaps by slack you mean not getting in my and other people’s faces about it?
I am not sure about the claim that there is no good C-based reason to be less happy with the person who does not feed their own child. Such a person is a danger in a wider range of contexts given our established practices. Even if things were OK elsewhere, such a person would still create harm, whereas the person who does not give to charity would not similarly be a harm in such contexts. Also, I am confident it would be counterproductive to express the same degree of anger at the people who fail to give to charity as we do at the people who do not feed their own kids.
Doug,
Keep in mind that I stipulated that we are dealing with someone who is not an amoralist. And keep in mind that I wanted there to be a rival moral theory that we might attribute to the person–a rival moral theory to what they say in seminars which guides their actions and emotions in much the kind of way that we tend to expect a (non-amoralist) person’s moral views to guide their actions and emotions. I don’t see any such parallel in your Doug example.
Tamler,
Thanks, that is a very useful point of comparison. We are likely to see just such a divide between a person’s intellectual take on the question of determinism and her emotional reactions.
Doug,
Reconsider the ‘Mike case’ above and suppose he claims to believe that his wife Jane deserves special, favorable treatment. The case then goes,
. . . consider Mike whose actions and emotions are not governed by his belief that his wife Jane deserves special, favorable treatment. For instance, Mike slaps Jane in the mouth each time she utters a word he does not like. He knows that this does not display special, favorable treatment for Jane. He pokes her with a knife when she fails to ask permission to talk. He knows that this does not display special, favorable treatment for Jane. He chokes her each time his dinner is late. He knows that this does not display special, favorable treatment for Jane. Indeed, he knows that all of this behavior displays a complete lack of concern for Jane’s well-being. But he does all of this without emotional qualm. He has no negative emotional reactions at all to his own conduct. Thus, do we have to wonder whether Mike really does believe that Jane deserves special, favorable treatment? I’d again say, yes, we definitely should be wondering that.
Unless there is some so far unstated explanation for why Mike’s behavior is so wildly inconsistent with his professed beliefs, the best explanation for what he is doing is that he does not actually have the belief he professes to have.
David,
When you say that there is a rival moral theory (a rival to consequentialism) that “guides”/”governs” your actions and emotions, do you mean that you appeal to that theory as a sort of decision procedure or merely that there is some substantive theory which your actions and emotions conform to? If the latter, why do you think that, in my legal case, there can be no rival theory (not even a deeply pluralistic one) with which my actions and emotions conform? If the former, then I’m puzzled by your initial description of your psychology which cited your conscience as your guide. I was assuming that you don’t actually say anything like the following to yourself “lying for the greater good would be wrong, so I won’t do it”. Rather, I assume that you’ve just been inculcated with a disposition to avoid lying and feel bad when you do, not because you believe that it’s wrong or doesn’t accord with some deontological theory’s norms but because you’ve been socialized that way.
And there’s no reason why we can’t assume that in my example I have some motivation to obey the law. It’s just a weak one that’s often overpowered by the motivation that I have to do what’s best in terms of my self-interest.
Mike,
Is Mike disposed to reason on the basis of the proposition that his wife deserves special treatment? Would he bet a large sum of money that that proposition is true? Does he often assert the proposition that his wife deserves special treatment? Etc.
David,
You say “I think I am acting wrongly, but I don’t get very worked up about that” and don’t think it worth “getting in other people’s faces about it”.
Okay. The question then is: is there any morally wrong behavior that you *do* get “worked up” about?
If not, what’s the point of moralizing?
If so — if there are some things that you think “worth getting in people’s faces about” — then it seems to me you really should think about what the difference is between what is worth getting worked up about and what isn’t. You don’t have to call this — as we non-consequentialists do — “thinking about the difference between right and wrong”, but you might find it an interesting exercise nevertheless.
But maybe the problem with us non-consequentialists is that we take all this morality stuff too seriously. Sorry for getting in your face about it.
Doug,
No, I don’t want to stipulate that the person uses the “rival” moral theory as a decision procedure. I did not mean to suggest that your Doug could not have some such rival theory–I was just saying it is crucial to my case that such a theory be understood to be in place. Additionally, it is crucial that Doug be understood to not be an alegalist, as you note in your last paragraph.
So Doug in some sense cares about the law and thinks it reason-giving. And, crucially, it is not just that he fails to do the thing recommended by his avowed legal theory; he feels distinctively legalistic emotional attitudes (whatever they are!) in a way that is shaped by the rival legal norms, and his actions are tolerably shaped as we would expect the actions of someone who accepted, in the normal way, that the rival legal theory was true. If we have all that in place, then do you still think it clear that Doug believes the legal theory that he says he does? Isn’t it at least clear that this person is deeply conflicted in their attitudes towards the law?
Mike,
I find that case persuasive.
Tomkow,
Perhaps if you would go back and read my initial post you would see that much of what interests me here is that I do get “morally worked up” about some things, just not the things my theory tells me to get worked up about.
Isn’t it at least clear that this person is deeply conflicted in their attitudes towards the law?
Yes. Doug is. He thinks that there is some reason to obey the law. And he thinks that there is some reason not to obey the law. So he’s conflicted. But do you think that there’s any reason to doubt that Doug believes that legal positivism is true or that the law requires that he drive at or below the posted speed limit?
Likewise, do you think that there is any reason to doubt that you believe that you’re morally required to maximize the good? Or do you concede that you believe that every agent is morally required to maximize the good, but you’re arguing that being a consequentialist may require more than this belief? Is that the thrust of the post? Are you denying that X is a consequentialist if and only if X believes that an act is morally permissible just when, and because, it maximizes the good?
Doug,
I was trying to make a go of the stronger claim. I take it you find that claim quite implausible. I don’t really feel that you have isolated exactly what it is about the claim that you find so implausible. It is, I admit, a surprising and strong thesis. It would be much weaker to say one is merely not a real C even though one believes C. But I am not sure what that claim really comes to.
Perhaps you are thinking that the person, such as myself, who claims to believe C can be sincere and quite generally whatever a person sincerely claims to believe, she believes. I would have said we can think we believe something yet be wrong about that.
Anyway, if you have the energy, perhaps you could try to explain what it is exactly in the claim that you find not just surprising or controversial, but dead wrong. These analogies, I fear, are just highlighting that we disagree (or that I continue to think maintaining my view sane).
Hi David,
Well, in general, I think that S believes that P if and only if S has the disposition to rely on P as a premise in his or her reasoning, to assert that P is true, to assent to P when asked whether P is true, to wager something on P’s being true when a good betting opportunity (given one’s subjective probabilities) presents itself, etc. On this criterion, I certainly believe that I’m legally required to drive at or below the posted speed limit. The fact that, despite having some motivation to obey this law, I don’t obey this law and I’m not inclined to beat myself up for breaking this law is neither here nor there. For the fact is that just because I’m legally required to obey the law and have some reason to do so doesn’t entail that I have most reason, all things considered, to obey the law. And, in the case of such traffic laws, I have most reason, all things considered, to break many of them. And my actions and emotions are governed (in a robust sense — not the weak sense in which you seem to be saying that your actions are governed by a deontological theory) by my judgments about what I have most reason, all things considered, to do and to feel. They’re not governed by my judgments about what the law requires.
I’m assuming that when it comes to you and the proposition ‘each agent is morally required to maximize the good’, the above criterion is met. So it seems to me that you believe this proposition. The fact that the rule/norm that this proposition expresses doesn’t govern your actions is best explained by the fact that you don’t think that you have most reason, all things considered, to obey this rule/norm — that is, to maximize the good. Thus, the explanation seems to be the same in your case as it is in my legal case.
Doug, I don’t think your criterion is very good, at least if I’m understanding it correctly.
Take a purely descriptive example:
As you will remember, in Philippa Foot’s example she asserts the sentence “I am sitting on a pile of hay” while sitting on an ordinary chair. The point of the example is that she might simply be using the words “pile of hay” in some odd way.
But your criterion will not sort Foot’s example correctly. For instance, the fictional Philippa* would no doubt infer from what she believes a conclusion that she would express with the sentence, “Somebody in this room is sitting on a pile of hay”, and if asked whether what she said is true she would say it was, and if asked whether she would like to wager that she is sitting on a pile of hay she would jump at the chance. And yet Foot’s point is that she would not, in that situation, count as believing that she was sitting on a pile of hay.
That is a purely descriptive example (that is, “pile of hay” is purely descriptive). I think the situation gets even worse for your criterion when the expressions are not descriptive.
Guys,
Good, this now seems to be getting towards the central issues. Doug: I was trying to imagine a person who was split. What they “intellectually” think goes one way and what guides their actions and emotions goes another way. You point to betting behavior as something partly criterial of what the agent counts as believing. That seems sane to me in most cases but it also seems to encroach on the admittedly vague practical vs. theoretical line. That is, if the agent merely judged that it would be good to make this bet or whatever, that would by my lights be clearly on the intellectual part of the line. But if you think it matters that they actually put their money where their mouth is, then I am tempted to say that you are, as I am suggesting, allowing the action/emotion side to be part of what determines what the agent believes.
After speaking against the usefulness of analogies here I came up with one I like. I was thinking we should ask about the case where the person is making judgments about what it makes sense to do which are so divorced from the action/emotion side. If the divorce really is as stark as it could be, I think the Foot/Jamie sort of point compelling that we would not think the speaker was really using “makes sense” in the way we do.
Finally, I was thinking that the traditional judgment internalist is usually thought to be offering a sincerity condition on moral claims. But I think that is a bad way to put it. The person, I think the judgment internalist should allow, might be completely sincere. She need not set off lie-detector machines (she may even be a judgment externalist and so not think that her failure to be motivated counts against her sincerity in making moral claims). Rather I think the thought is better put in terms of whether the agent really believes what she (perhaps sincerely) thinks she believes. Again, I am not siding with the judgment internalist, just pointing out that they are partners in crime in allowing the action/emotion side to play a key role in determining what the agent believes. I think of my thesis as being less strong than judgment internalism.
Hi Jamie,
I take your point. My criterion will need some work. Nevertheless, I don’t think that my driving above the posted speed limit counts in any way against our thinking that I believe that I’m legally required to refrain from driving above the posted speed limit. So, unless someone can point to some disanalogy between my legal case and the case of David’s not doing what would maximize the good, I’m inclined to go with my purported explanation for why David doesn’t maximize the good rather than his explanation that he doesn’t actually believe that he is morally required to maximize the good.
And is David assuming that moral judgments are not descriptive?
Sure, I think that actions or the lack thereof can count, in part, as determiners of whether someone has a belief. For instance, if someone is extremely thirsty and yet refuses to drink the liquid in the glass in front of them, then that’s some evidence that she doesn’t sincerely believe, despite what she may say, both that the glass contains potable water and that drinking it would quench her thirst. And I think that if someone believes that they ought, all things considered, to do X, then we should, absent depression, weakness of the will, and other forms of practical irrationality, expect that person to do X. What I’m questioning, then, is whether we should expect someone who believes that she is required, according to normative realm R, to do X to actually do X when she doesn’t believe that she has most reason, all things considered, to do X. That is, why should we expect normative claims that a subject doesn’t take to have rational authority to govern his or her actions?
Gosh, perhaps we have been talking past each other for a while then and perhaps we don’t disagree as starkly as it seemed. I did not mean to make the claim that you disagree with in the post of 9:56. I am trying to sort out why you attribute that view to me, but I may need some help.
I stipulated that the person we are talking about is not an amoralist. They think morality matters. That of course is compatible with the thought that other things matter too, and some perhaps more than morality. You are right that I did not want to rule that out. But the person thinks that it is normatively relevant what morality says. Given that, how should we best get at what the agent thinks morality says in a situation? I was thinking we look at what actually gets weight in her actions and emotions rather than merely at what she says.
But perhaps I am not getting why you think I am committed to what you say above in which case I will need more help.
Well, Doug, I am assuming that Dave is assuming that moral judgments are not merely descriptive. If they are, then there is no real issue.
I thought that a widely (not universally, since this is philosophy) acknowledged difference between moral obligation, on the one hand, and legal obligation according to positivists, on the other, is that the first is normative (‘robustly normative’, as some people put it) and the second is not. Doesn’t that seem right to you?
Hi David,
I guess that I’m confused.
I thought that you were arguing as follows: If you genuinely believed that you are morally required to maximize the good, then your actions and emotions would reflect this (absent depression, weakness of will, and the like). But your actions and emotions do not reflect this belief — that is, you don’t maximize the good and you don’t feel bad about failing to do so — and you’re not depressed, suffering from weakness of the will, or anything of the sort. Thus, you conclude that you don’t genuinely believe that you are morally required to maximize the good.
Now, you are in fact a person who believes that you do not have most reason, all things considered, to maximize the good. So aren’t you someone who believes that he is required, according to normative realm, R, to do X when you don’t believe that you have most reason, all things considered, to do X? Just substitute ‘morality’ for ‘R’ and ‘what will maximize the good’ for ‘X’.
the person thinks that it is normatively relevant what morality says. Given that, how should we best get at what the agent thinks morality says in a situation? I was thinking we look at what actually gets weight in her actions and emotions rather than merely at what she says.
What constitutes someone’s taking what morality says as being normatively relevant? Is it that he thinks that he has some, sufficient, or decisive reason to do what morality says? If it’s not thinking that there is decisive reason to do what morality says (and I assume that it’s not, given your own views), then why should we expect that what morality says gets weight? It’s only the moral considerations themselves that should get some weight, but, for all that, there may always be better reason, all things considered, to do something other than what morality says you morally ought to do. In which case, we certainly wouldn’t expect you to do what morality says.
Doug,
I am confident I am not tracking the distinction between moral considerations getting weight and what morality says getting weight, but let me try to say something anyway.
I have been assuming that a factor can be treated as normatively relevant by an agent even when she does not think it normatively decisive. She might treat the consideration as getting pro tanto weight in deliberation even if it does not win, by the agent’s lights, in the fight for what it makes most sense to do. We could see this in thinking about counterfactuals or perhaps in the agent’s attitude towards doing what she concludes she has most reason to do. (And I assume all this need not be consciously available to the agent herself.) So I agree that the person I have in mind should not necessarily be expected to do what they think morality requires. Nonetheless, it could be that their deliberation is shaped in characteristic ways by what they think is morally required. I am looking at what it is that does such shaping. That is, what gets weight insofar as morality gets weight in their deliberation.
I would not have put my argument the way you do in the first paragraph above. I have tried to say that I disagree with the standard judgment internalist in that I think that it is perfectly coherent for a person to be an amoralist and yet make sincere moral judgments that have no motivational tendency behind them. So I would reject the claim that all we have to know about me is that I think such and such is morally required and then we know that, absent weakness of will and the like, I will do what I think is morally required. Indeed, I think that not only are we not entitled to that inference, but we cannot even safely infer that the agent has any motivational tendency towards that which they think is morally right. Thus I think the standard judgment internalist claim too strong. But in a way I also think the claim too weak. For in a person who is not an amoralist, we should expect thoughts about what morality requires to provide more than just some, perhaps very small, motivation in favor of doing what is thought morally correct. In the person who is not an amoralist (such as, I flatter myself, I am), I think we should look to what plays something like the characteristic moralish role in the agent’s actions and emotions rather than merely to what that agent asserts in seminars to learn what they really think is right and wrong. And this moralish role will be a bit vague, admittedly, but it will be especially connected to episodes of real-life moralized guilt and anger and will have some tendency to shape (perhaps not decisively) deliberation. If we find a very good fit for this part of the role of an agent’s moral belief, then, I was trying out saying, perhaps that should beat out what the agent says in philosophical contexts to count as the agent’s moral beliefs.
For in a person who is not an amoralist, we should expect thoughts about what morality requires to provide more than just some, perhaps very small, motivation in favor of doing what is thought morally correct.
Why should we expect thoughts about what morality requires to provide more than just some, perhaps very small, motivation in favor of doing what is thought morally correct in someone who, like you, thinks that there is very often sufficient reason, all things considered, to do other than what’s morally required?
I assume that you have some motivation to maximize the good and, thus, that you do so whenever maximizing the good isn’t very costly to yourself. And I assume that you think that you have sufficient reason to refrain from maximizing the good in most of the situations in which you in fact refrain from maximizing the good — situations in which your maximizing the good would involve significant personal costs.
It seems to me that we should expect thoughts about what morality requires to play only a small motivational role in a person who thinks that there is often sufficient reason to act contrary to what morality requires. Thus, it seems to me that your thoughts about what morality requires play precisely as significant a motivational role as we should expect from someone who believes BOTH consequentialism and that there is often sufficient reason to act contrary to what consequentialism holds that you’re morally required to do.
The main idea is, then, that the extent to which S’s thoughts about what she is morally (or legally, prudentially, etiquettely, etc.) required to do will motivate her to act as she thinks that she is morally required to act is a direct function of the extent to which S thinks that she has decisive reason, all things considered, to act as she thinks that she is morally required to act. In your case, I take it, you think that you rarely have decisive reason, all things considered, to act as you think that you’re morally required to act. Thus, it seems reasonable to expect that your thoughts about what you’re morally required to do will rarely motivate you to act as you think that you’re morally required to act. Moreover, you’re not unique in this way. Many act-consequentialists (including you, Peter Singer, Henry Sidgwick, Dale Dorsey, etc.) think that there is often sufficient reason to act contrary to the dictates of consequentialism. Thus, we shouldn’t be surprised that consequentialists are so frequently insufficiently motivated to act as they think that they’re morally required to act. They just don’t think that the dictates of morality have the same rational authority that some of the rest of us do.
I’ll let you have any last words. Thanks for an interesting discussion, but with classes starting tomorrow I’m going to try to refrain from participating in (although I’ll continue reading) any ensuing discussion.
Doug,
Thanks for your thoughts and for your energy in keeping this discussion going. In the end I think we are just interested in different questions. It may be that I should drop the claim that a non-amoralist would necessarily have more than a little motivation to be moral, but I don’t think that change would affect the overall argument much.
That was one heckuva ride…
David,
I’m curious whether there might be any grounds for describing you as an “indirect” consequentialist of some sort, in practice if not in theory. I’m usually not thrilled about that term, because I think it is ambiguous between a two-level act-consequentialist view like Hare’s and a rule-consequentialist view like Hooker’s. In this case, though, I’m happy with the ambiguity, because I’m curious whether your practice might conform to either view. To what extent do consequentialist considerations make a difference to what your conscience keeps you from doing? If in the proverbial cool hour you reflect that your doing X will usually have bad consequences, or that bad consequences would result if people generally felt free to X, would this make it more likely that in future you would have to break through a mass of feeling in order to X, one that you would probably encounter afterwards in the form of remorse if you did?
Well, in theory I am certainly intellectually persuaded by some variant of indirection, at least if that only commits me to the thought that consequentialism is in the first instance offering an account of truth-makers and not decision procedures. But while I find it a very hard question to actually figure out what decision procedure I should have according to my theory, I doubt I am broadly compliant with it. However, it certainly does make a dent here and there. How I direct my charitable contributions (if not how much I direct there) is sensitive to my view, and that I am a veggie is affected by my theory of value, I think. But enough about me: in short, I doubt I can make use of that move to square my theory and practice, but hope springs eternal.
David, I consider myself a consequentialist most of the time, especially when I reply to blogs. For me, the act of thievery has bigger negative consequences than inaction when it comes to charity. And I can give reasons for that.
Also, I don’t see how consequentialism doesn’t have to do with morality. No, it is a moral code; it has everything to do with morality. I just don’t, as a person, subscribe to only one moral code.
Basically, I subscribe both to consequentialism and to “I am a selfish bastard and what I want matters” in cases where consequentialist morality doesn’t make sense to me or demands the best consequences for the whole world or something.
In fact, I am kind of the opposite: my conscience might be the voice that tells me that consequentialism is right and my other morality is wrong.
Basically, the way I see it, you can believe that consequentialism is the best morality for the world, or the best morality full stop, without its being the morality you want to live by.
Hmm. Can one be of the opinion that consequentialism is the best morality without being a consequentialist? (Just as one can believe that vegetarianism should be followed, just not by him.) Does that make such a person a hypocrite, or not, if he admits that he doesn’t want to follow it but believes that the world would be better if the philosophy were followed more?
In my case, for the sake of personal comfort, I don’t really care whether I am the best person for the world that I can be. I do think that we should all make our morality more consequentialist. And with regard to my own morality, I am to some extent just that.
Maybe the problem here is that we are asking too much of persons. Maybe it is incredibly silly to expect an individual to commit 100% to consequentialism, or to not be influenced by it at all. Rather, what we should be looking for is more consequentialism, not people being perfect consequentialist robots.
Just my random thoughts on the matter after reading a lot of comments here.