Eric Schwitzgebel writes:

Here are four things I care intensely about: being a good father, being a good philosopher, being a good teacher, and being a morally good person. It would be lovely if there were never any tradeoffs among these four aims.

Explicitly acknowledging such tradeoffs is unpleasant — sufficiently unpleasant that it’s tempting to try to rationalize them away. It’s distinctly uncomfortable to me, for example, to acknowledge that I would probably be better as a father if I traveled less for work. (I am writing this post from a hotel room in England.) Similarly uncomfortable is the thought that the money I’ll be spending on a family trip to Iceland this summer could probably save a few people from death due to poverty-related causes, if given to the right charity.

Today I’ll share two of my favorite techniques for rationalizing the unpleasantness away. Maybe you’ll find these techniques useful too!

The Happy Coincidence Defense. Consider travel for work. I don’t have to travel around the world, giving talks and meeting people. It’s not part of my job description. No one will fire me if I don’t do it, and some of my colleagues do it considerably less than I do. On the face of it, I seem to be prioritizing my research career at the cost of being a somewhat less good father, teacher, and global moral citizen (given the luxurious use of resources and the pollution of air travel).

The Happy Coincidence Defense says, no, in fact I am not sacrificing these other goals at all! Although I am away from my children, I am a better father for it. I am a role model of career success for them, and I can tell them stories about my travels. I have enriched my life, and then I can mingle that richness into theirs. I am a more globally aware, wiser father! Similarly, although I might cancel a class or two and de-prioritize my background reading and lecture preparation, since research travel improves me as a philosopher, it improves my teaching in the long run. And my philosophical work, isn’t that an important contribution to society? Maybe it’s important enough to morally justify the expense, pollution, and waste: I do more good for the world traveling around discussing philosophy than I could do leading a more modest lifestyle at home, donating more money to charities, and working within my own community.

After enough reflection of this sort, it can come to seem that I am not making any tradeoffs at all among these four things I care intensely about. Instead, I am maximizing them all! This trip to England is the best thing I can do, all things considered, as a philosopher and as a father and as a teacher and as a citizen of the moral community. Yay!

Now that might be true. If so, that would be a happy coincidence. Sometimes there really are such happy coincidences. But the pattern of reasoning is, I think you’ll agree, suspicious. Life is full of tradeoffs among important things. One cannot, realistically, always avoid hard choices. Happy Coincidence reasoning has the odor of rationalization. It seems likely that I am illegitimately convincing myself that something I want to be true really is true.

The-Most-I-Can-Do Sweet Spot. Sometimes people try so hard at something that they end up doing worse as a result. For example, trying too hard to be a good father might turn you into a father who is overbearing, who hovers too much, who doesn’t give his children sufficient distance and independence. Teaching sometimes goes better when you don’t overprepare. And sometimes, maybe, moral idealists push themselves so hard in pursuit of their ideals that they would have been better off pursuing a more moderate, sustainable course. For example, someone moved by the arguments for vegetarianism who immediately attempts the very strictest veganism might be more likely to revert to cheeseburger eating after a few months than someone who sets their sights a bit lower.

The-Most-I-Can-Do Sweet Spot reasoning harnesses these ideas for convenient self-defense: Whatever I’m doing right now is the most I can realistically, sustainably do! Were I to try any harder to be a good father, I would end up being a worse father. Were I to spend any more time reading and writing philosophy than I actually do, I would only exhaust myself. If I gave any more to charity, or sacrificed any more for the well-being of others in my community, then I would… I would… I don’t know, collapse from charity-fatigue? Or seethe so much with resentment at how more awesomely moral I am than everyone else that I’d be grumpy and end up doing some terrible thing?

As with Happy Coincidence reasoning, The-Most-I-Can-Do Sweet Spot reasoning can sometimes be right. Sometimes you really are doing the most you can do about everything you care intensely about. But it would be kind of amazing if this were reliably the case. It wouldn’t be that hard for me to be a somewhat better father, or to give somewhat more to my students — with or without trading off other things. If I reliably think that wherever I happen to be in such matters, that’s the Sweet Spot, I am probably rationalizing.

Having cute names for these patterns of rationalization helps me better spot them as they are happening, I think — both in myself and sometimes, I admit, somewhat uncharitably, also in others.

Rather than think of something clever to say as the kicker for this post, I think I’ll give my family a call.

 

This was originally posted on Eric’s blog, The Splintered Mind.

7 Replies to “The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot” (by Eric Schwitzgebel)

  1. I totally agree that such patterns look suspicious. I guess I think they look most suspicious when they purport to vindicate living exactly as one currently happens to live. That seems to be the sort of case you are focused on in this presentation. Perhaps such patterns of argumentation look less suspicious when they purport to vindicate the claim that, for example, living up to the demands of Consequentialism, while it would require doing much more than one currently does, would not require quitting one’s job and severing one’s connections to focus only on the worst off—that is, making morality more livable even if not comfortable. I see Peter Railton’s “Alienation, Consequentialism, and the Demands of Morality” as claiming to vindicate the latter sort of claim, not the former. He is insistent that most of us are not doing enough as is. None of that is anything like an objection to anything you say. But I would welcome your thoughts about the comparative force of the worry when the argument purports to vindicate the status quo versus when it purports to vindicate something a bit more comfortable than we feared might be demanded of us. (I realize you were considering rationalization beyond just moral rationalization—I just focused on that as it is perhaps the most familiar philosophical example.)

  2. Hi Eric,

    Thanks for the interesting post!

    I think I want to say that you might be offering yourself rationalizations when none are needed.

    Let’s say one’s life takes different “shapes” depending on what roles one emphasizes, and let’s assume that for each role there’s the possibility of breaching role-based obligations and performing supererogatory actions. All researcher and barely any father is one shape, all father and barely any researcher is another shape, equal balance is a third shape, and so on.

    Some shapes require breaching obligations in at least one of one’s roles. Call those “blameworthy shapes.” People whose lives have blameworthy shapes would be justified or warranted in feeling “uncomfortable” with the tradeoffs they’re making. But there are lots of shapes among which one can choose that are non-blameworthy (e.g. time/energy 8 towards researcher paired with time/energy 5 towards father, or vice versa), and so feeling uncomfortable with the tradeoffs made among those shapes would be unjustified/unwarranted. There is no need to offer a rationalization if one’s life has a non-blameworthy shape.

    None of this requires affirming that they all “perfectly coincide” (Bishop Butler style) or that you need to be doing the best you can in each sphere without burnout, and so you don’t need to tell yourself either of these things. And I don’t think there’s anything unpleasant about acknowledging the tradeoffs you make in shaping your life when you see the shapes you’re choosing among as non-blameworthy shapes. (Of course, you might deceive yourself into thinking you’re not breaching any obligations, but that seems to be a more general problem than the one you’re raising here).

  3. Thanks for posting this, David — and thanks for the thoughtful comments, David and Reid!

    David: I agree that the prima facie plausibility of rationalization is greater in the case where one is justifying how one currently happens to be living than when offered as a general theoretical move in favor of showing that, e.g., act consequentialism might not be as demanding or unlivable as it might at first seem. However, I do have some concern about rationalization even in the latter context: It’s easy to see how an ethicist might be motivated to avoid theoretical conclusions favoring a highly demanding or almost unlivable ethics. This motivation could be epistemically justified (e.g., by a metaethical view that implies that ethics cannot be too demanding or unlivable) or it might be epistemically unjustified (e.g., because one wants a “marketable” position, or because one wants not to be held to high moral standards personally, assuming that there are versions of these motivations that do not *epistemically* justify theories that avoid demanding or unlivable moral conclusions). In the latter case, rationalization might have a role to play at the individual level — and possibly at the community level too?

    Reid: I think I agree with all or most of that. One can be clear-eyed about it, in which case maybe one can say: Yes, I’m settling for being a B+ father and a C-minus researcher and a B-minus teacher and a barely passable ethical person. I think that’s hard to acknowledge about oneself — at least it’s hard for me! — harder than dividing up time/energy, as you suggest (since not giving too much time/energy is consistent with Happy Coincidence reasoning). In the ethical domain, I think such a clear-eyed approach typically means “aiming for moral mediocrity” as I describe the phenomenon elsewhere. Since I favor demanding ethical norms — which maybe you do not? — I’m inclined to think that aiming at and achieving moral mediocrity does mean behaving in morally blameworthy ways on a regular basis. (In fact, in this post I’m trying out some ideas for a paper I’m drafting called “Aiming for Moral Mediocrity”.)

  4. Hi Eric,

    What’s the context for understanding moral mediocrity? In the case of morality, is the only relevant comparison class that of humanity at large, such that being morally mediocre is being morally average? If that’s the case, it’s probably not difficult to surpass mediocrity, and once you surpass mediocrity, you’re likely getting into supererogatory territory. On the other hand, suppose the comparison class is not humanity at large and consider the following. Getting an A at Yale requires a higher level of performance than getting an A at a community college. You are, perhaps, receiving a C at Moral Yale, a B+ at Father Harvard, a B at Teacher Stanford, and an A- at Golf Local Community College (since it’s just a hobby of yours, let’s say). Given how difficult it is to gain admission and simultaneously attend all those schools, I’d say that would make you an elite (or at least above mediocre) human being who is rightfully proud of your overall performance; at the very least, rationalization is not called for.

    (Two side notes: i) I probably don’t favor what you’re referring to as “demanding ethical norms,” and ii) I think many of the roles we occupy are ethically significant despite not deriving that significance from ethical principles that apply to us qua moral agents. So it’s inaccurate, from my perspective, to talk of the demands of, say, fatherhood, on the one hand, and ethical demands, on the other).

  5. Anon, April 28: Sorry for missing your comment before! I only noticed it now. I agree about avoiding simple act utilitarianism. I think this kind of tension can arise on a number of demanding moral theories, on which the demands of morality conflict with other important life goals or values.

  6. Reid: I think that mediocrity, as a psychological phenomenon, involves a reference group which is normally your peers or social ingroup. Most people, I think, aim to be about as morally good as the people around them, rather than especially morally better or worse. Normatively, I’m not sure about the proper reference group for mediocrity — maybe it should be something closer to all of humanity, but that also seems too simple.

    On supererogation: I don’t love the concept. It seems to me overused and to lend itself too easily to rationalizing self-justifications. But if we are going to use it, we might think of getting a moral “C” as doing all that is required and nothing supererogatory. Certainly there are people who would be happy enough with that self-conception. “Yes, I’m a moral C — minimal pass! I’ve done exactly no more than the moral minimum not to be blameworthy.” If that’s the way to think of it, I have two thoughts: (a) I personally would rather not think of myself that way; I’d like to aim higher. And (b) there’s something a bit weird about that way of thinking, which I can’t quite put my finger on, which reflects part of the weirdness of overusing the idea of supererogation. (If you don’t see anything weird in that way of thinking, I could try harder to put my finger on it.)

    On demandingness, and also connected to (b): I think almost all of us do small blameworthy things very often — e.g., (i) are slightly inconsiderate of other people, or rude or self-centered, and (ii) judge people in ways that reflect objectionable bias.

  7. Pingback: Facing Up!
