G. A. Cohen has a diagnosis of what he thinks is going wrong with political philosophy, a diagnosis that is supposed to explain why, say, Rawls, Dworkin, and Nagel, or Nozick, Schmidtz, and Gaus, are relatively sanguine about markets, while Cohen thinks markets are inherently repugnant. The problem, in his view, is that philosophers illicitly allow facts about what people are willing to do to act as a constraint on what they ought to do. But while it may be the case that you can't have a duty to X unless you can X, it doesn't follow that you lack a duty to X just because you aren't willing to X.


Cohen thinks that once you stop making this mistake, it's relatively easy to show that socialism is inherently superior to capitalism. Some philosophers have responded to Cohen by challenging his methodological critique. As for me, I've got a book coming out in which I grant Cohen everything he wants to say about the methodology of political philosophy, about human nature, about ought and can, etc., but I argue that even if we grant him all that, a form of capitalism is still morally superior to the best form of socialism.

Personally, I find Cohen's arguments (about can't vs won't, not about socialism) compelling. However, here are some thoughts about how one might challenge them. There may be a pre-existing literature on this point, so sorry if I'm missing that. 

Normally, it seems false to say that just because you aren't willing to do something, you don't have a duty to do it. For instance, suppose I promise my spouse I will clean the bathroom. But now suppose the time has come to clean, and I'd much rather play guitar. I am unwilling to keep the promise. In this case, it seems obvious that "I don't want to" isn't an excuse. It doesn't excuse me from having to keep the promise. Rather, at most it's an explanation for why I'm acting like a jerk.

But now suppose that I was just about to clean the bathroom, when an evil wizard cast a magical spell on me. The spell causes me to have an intense, overwhelming desire not to clean the bathroom, as I promised, but instead to play guitar. The spell is so powerful that I cannot resist it; no human being has, or could be expected to have, the willpower to resist the spell. Accordingly, I don't clean the bathroom and instead break the promise. Here, my default judgment is that I am excused for breaking the promise. The wizard is at fault, not I. (I could be convinced, however, to abandon this judgment, as it may conflict with other, deeper, more important philosophical beliefs.)

Now, let's modify the case a little more. Suppose the wizard didn't just cast a spell on me, but instead created me, from scratch, using his magic. In creating me, he gave me some degree of autonomy, but he also made it so that I would not keep my promise just now. Here, again, I'm inclined to think that I'm excused from keeping the promise. I'm mostly a moral agent, but, thanks to the wizard, I really can't keep the promise. I'm not really an agent when it comes to keeping this particular promise.

Now let's modify the case a little more. Suppose the wizard didn't create me from scratch. Instead, the wizard traveled back in time to the distant past. Suppose, in Star Trek style, he casts a spell that "seeds" the DNA code of primitive organisms on Earth. This directs the organisms to tend to evolve toward human beings like us. However, he purposefully engineers the code such that the beings that evolve will tend to have extremely strong desires not to keep certain promises, to harm one another at various times, etc. That is, he engineers them such that most people will just be unwilling to do certain things that, say, Gerry Cohen, or Peter Singer, or I, think they ought to do. In order for them to do these things, they have to be willing to do them, but the wizard engineered most people such that they cannot be willing to do them. Here again, I'm inclined to think this excuses people from having to do things that would otherwise be their moral duties.

And you can see where this is going. Suppose that there was no evil wizard, but the result of the evolutionary process is exactly the same as in the last paragraph. Does that change anything?

My worry here is that ought implies can, and it may well be that people can't bring themselves to do certain things. Agency isn't all or nothing. Just as some people might compulsively engage in certain behaviors that they cannot control, so many of us might have an equivalent inability to do certain things that morality might otherwise require. It's not just that we are unwilling to do these things, but that we are unable to be willing to do them.

Of course, people are different. Peter Singer is willing to give more to charity (not as much as he says he should, though) than most people, including me. But that doesn't show that everyone could give as much as Singer. Everyone's psychology is a bit different. Perhaps Singer is a few standard deviations to the right of the curve when it comes to psychological ability to give to others. Perhaps some other people quite literally cannot will to give. For them to give 50% of their income to charity is physically impossible, because their brains just don't work that way. You might as well ask them to jump to the moon.

One might say that there is a slippery slope here: once you start talking about psychological can'ts for normal people, you end up tossing out all of morality. But that doesn't seem true. We might think instead that some people have so little psychological freedom that they aren't moral agents at all. Some might instead be complete moral agents, bound by all of morality. And perhaps most of us are somewhere on a bell curve between these two extremes. We psychologically can do some things but can't do others. We aren't willing to do certain things or to avoid doing certain things because our brains won't let us, and holding us responsible for being unwilling to do those things really is like holding us responsible for not being able to jump to the moon.

13 Replies to “Ought and Can; Can’t vs. Won’t”

  1. I think the real worry that motivates Cohen, which I’m not sure you fully answer (then again, blog post…), shows up in an ugly dilemma: either a) whenever we don’t do what morality, in a full-agency world, demands, we fail to do so just because we don’t have the agentic capacity to do so, or b) sometimes when we don’t do what FA morality demands, we have the agentic capacity to do so, but screw it up anyway.
    In world a), we’ve just abolished moral criticism, and perhaps reduced all of normative philosophy to soulcraft to the end of building stronger agency. In world b), we have an even harder problem: how do we tell the lack of agency cases from the lack of (whatever else it is that is necessary to act morally) cases? We’d need to do so in order to properly carry out practices of praise and blame, as well as coercion and punishment, but it’s hard to imagine even what kinds of evidence would count. (Assuming, of course, that the set of insufficient agency actors is broader than those with diagnosable psychological disorders.)

  2. I like what Paul says. Two more things.
    1. Doesn’t Cohen reject the claim that X being required by justice implies we are psychologically capable of bringing about X? One of his concerns is to make judgments about the goodness or badness of human nature, from the standpoint of justice. I guess this is a concern about whether you’ve granted him all his premises in the post.
    2. It seems at least worth considering that the limits on agents’ capacity for altruism are socially responsive. Maybe the wizard imposes some limits on us, but he does so in ways that manifest differently depending on what our social circumstances look like. So we might still have an argument to change those circumstances so as to get more people closer to being “Singer types,” even if wizard-imposed limits on our capacities remain.

  3. Hm, I don’t find any of your examples compelling, frankly – the very first one about the wizard implanting a particularly strong revulsion to bathroom-cleaning would have to be spelled out more, and for all the others I’m just inclined to say that the brainwashing victim has no excuse at all.
    Also, maybe a technicality:
    The fact that someone can’t bring himself to φ does not imply that he can’t φ. (The converse does seem to hold, though.)
    In any case, “ought implies can” is just a dogma. Don’t let it drive the rest of your view.

  4. Jamie is right to point to the fact that being unable to bring oneself to do X does not imply being unable to do X. Harry Frankfurt’s cases of “volitional necessity”, in fact, illustrate how such phenomena might enhance one’s autonomy, as when Luther said, “Here I stand, I can do no other.” The type of necessity at work is neither causal nor logical, but volitional, and it’s a function of some cares the agent has involuntarily come to have. So were the wizard or evolution to create us in such ways that we are volitionally necessitated to refrain from doing things that would betray our cares — i.e., if evolution designed us just the way we are — then I can’t see how we wouldn’t still be the appropriate subjects of praise or blame, say.
    What distinguishes these cases from compulsion cases is that the source of our “inability” in volitional necessity cases is ourselves, i.e., the attitudes flow directly from things that matter to us, whereas (except in willing addiction cases) that’s not true for compulsives.
    A literature point, in case you’re interested: Al Mele and Derk Pereboom construct detailed cases like the ones you’re after in various of their books/articles.

  5. If the claim boils down to “evolution makes us selfish, so we are permitted to be selfish,” you’ve committed the Naturalistic Fallacy.
    In addition, the entire argument from unwillingness can be undone by arguing that norms in favor of agency demand taking responsibility for one’s will by shaping it in favor of morally good acts of willing. If there should be morally good acts, there must be moral agents who strive to be morally good.

  6. While it seems true that once one becomes sensitive to evolutionary and psychological factors the line between ‘can’t’ and ‘won’t’ begins to blur, and the postulation of ‘oughts’ consequently becomes problematic, this seems actually to be the existential mess of our ethical reality, where the muddle of experience is in tension with the regulative ideals of goodness, justice, and excellence. The psychological truths you describe, where some people, some developmental stages, or sometimes simply moments of our lives possess fewer degrees of freedom than others, do not in my view pose a problem for morality but articulate one of its basic elements, i.e. the demand to strive for more self-consciousness, sensitivity, and autonomy. This is our challenge in raising children, our challenge in having a fair legal system, and most importantly in working out the dialectics of our own lives. A lot hangs on the meaning of ‘can’t’ and ‘unwilling’. I do see a danger in collapsing the difference. Difficulties and obstacles are part of moral value. Once we blur the line into a ‘can’t’, moral struggle disappears. Life may be tragic, it may be a struggle, but if keeping a promise is an ‘ought’ it remains so even if one’s deepest will resists it. Unwilling isn’t the same as can’t.

  7. I take it the logic of your argument is supposed to be something like:
    1) we have the intuition in your 3rd case (where the wizard seeds the DNA of primitive humans) that someone’s implanted-by-wizards unwillingness to phi defeats the claim that she has a moral duty to phi.
    2) there’s no relevant difference between this case and the real world case.
    3) so, in the real world, someone’s unwillingness to phi defeats the claim that she has a moral duty to phi.
    Why not, as many have done in response to Pereboom’s 4-case manipulation argument, just run your argument backwards? In the real world, we have the intuition that unwillingness to phi *doesn’t* defeat the claim that one has a moral duty to phi, and, since there’s no relevant difference between the real world and the wizard world (your third case), implanted-by-wizards unwillingness to phi doesn’t defeat the claim that one morally should phi.

  8. I don’t think Jamie’s point is a mere “technicality”: the fact that someone can’t bring himself to φ does not imply that he can’t φ. (I press that point and its relevance for political philosophy here.) In light of that, we’d need some argument for letting “can’t bring himself to φ” block “he ought to φ,” since “ought implies can” does not get engaged. Much of the literature on these questions is about freedom, responsibility, or blame, but the immediate question is whether he can and so ought to φ.
    I lean toward the view that he can do it so long as there is an opportunity and were he to try to do it (as opposed to trying to bring himself to do it) he would (tend to) succeed. (This is a sketch of the general idea, employed by a number of authors. Refinements might well be needed to cover certain deviant cases, etc.) Motivations (installed or otherwise) not to try something are not inabilities to do that thing, since they put nothing in the way of the agent’s succeeding should he try. (Some might want to say it is an inability to try. But there are dangers in treating trying as an action in its own right in that way. And while some say this scenario counts against freedom or responsibility, I take no stand on that here.) So there may yet be lots of requirements on us that we cannot bring ourselves to meet, though we could meet them. I don’t know whether this favors socialism, but it would count against a very common way of arguing against relatively idealistic political theories, as if “ought” implied “can bring himself to.” It makes that look indefensibly conservative.

  9. But note that Cohen also holds that this is true: If you are committed to “you ought to do A”, then you are also committed to “You ought to do A if it’s possible to do A”. I reckon there is one interpretation according to which this fact-insensitivity thesis is true, but then it can be shown to rest on an unappealing form of ethical intuitionism. Or at least that’s what I argue here:
    https://www.academia.edu/5697561/Facts_Principles_and_Politics
    Sorry for the condensed comment; I’m in a frightful rush now, but will try and come back later.

  10. Hi everyone,
    Thanks for all the comments. As I suspected, there’s a vast literature on this topic I haven’t read.
    I’ll post more later, but let me start with Paul, since he was the first to respond:
    I don’t want to abolish moral responsibility altogether. But instead of treating moral responsibility like an on-off switch, treat it like a continuum. There are things I cannot will myself to do–I don’t have the psychological capacity to do so, period, and never will, no matter what I do. There are things that I cannot now will myself to do, but I could will myself to undergo moral training such that I acquire the will to do so. And then there are things I can will myself to do right now.
    Take, say, the Uruk Hai from LoTR. Why are they so cruel? Let’s say (I’m not sure how accurate this is) that that’s just how they were made. They were designed to be evil. Sure, there’s a sense in which a Uruk Hai could be altruistic, if only its mind were different from how it in fact is, but that’s the same as saying a Uruk Hai could lift 10,000 pounds, if only its body were different from how it in fact is. One might say, like the view Estlund leans toward, that the Uruk Hai could do it if there were an opportunity and if it were to try to do it. But what I am asking here is whether there is a class of things, which might vary from person to person, that they are psychologically unable to try to do.
    Imagine a cartoony picture of human action in which, right before you phi, the motive to phi has to get into your motive box and stay there for 1/10th of a second. Otherwise, you don’t phi. Now, picture the motive box as broken. The motive box can hold motives to psi and gamma, but it has big phi-shaped holes at the bottom, such that the motive to phi just slips out. And suppose the agent can’t fix his motive box in any interesting sense.
    Maybe brain damage–a stake through the head–might break the “motive box”, so to speak. Maybe massive abuse suffered as a child might break it. Or maybe being cursed with bad DNA might break it. There’s a worry here that once we start down this route, we can’t blame anyone for anything. But what if it’s that we can praise and blame people for many things, but not all things? (The practice of praise and blame becomes much harder, of course, because we often won’t know if we’re doing it properly.)

  11. I may be missing the main thrust of the argument. Phenomenologically speaking, it seems perhaps a truism that all freedom is conditioned freedom, all choice conditioned and limited more or less by history, finite knowledge, perspective, the body, situations and so on, and therefore all moral judgment of human action, in striving for accuracy and fairness, needs to take such into account. I am not sure what there is to argue about in this regard. The problems and complexity of the practice of praise and blame shouldn’t be a surprise, given that they seem inevitable. The whole mess, however, is predicated on two things: on the one hand, that there is a significant manner in which it makes sense to speak of things being good and bad in themselves [recall the history of debates on natural evil], and on the other, on the assertion of moral evils as dependent on the freedom of possibility to do something [the can]. The former [like your example of the Uruk Hai, or earthquakes, storms, diseases, tigers or whatever] are only morally evil to the extent that we connect them with a god, devil, or human who freely caused them. The latter holds that we are responsible and culpable even for the feelings, motives and acts that may condition our inability to carry out what is moral. So, to live morally is to demand this of ourselves, even while to judge morality [in others] is to take into consideration just these conditions as ameliorating circumstances. To confuse these perspectives [first and second] may lead one to throw out the proverbial baby with the bathwater.

  12. J, you say, “what I [am] asking here is whether there is a class of things, which might vary from person to person, that they are psychologically unable to try to do.” Are you prepared to treat trying as an action, so that questions can be asked about abilities to perform it? There’s a regress there, since then the trying must be tried, and so on. If it’s not like that, but rather just necessary, for some reason, that they will not try (which is different), that’s just the kind of thing determinism does generally. I think my good upbringing necessitates that I won’t steal your car (so long as conditions aren’t too dire). That hardly bears on whether I am able to. It might be easy to steal your car. I just (necessarily, given the past) don’t want to.
