Hi all. Thanks to Doug for letting me invite myself to the party. I think I’ll post a substantive comment in the next few days, but maybe I’ll start with a poll. It seems to me that the following set of claims is inconsistent:

1. The moral point of view is impartial (in other words, when acting/deliberating/etc., from the moral point of view, I am to grant no greater weight to myself or my narrow circle than to anyone else).

2. The demands of impartiality can require agents to suffer horrible fates for the sake of others (if I can’t treat myself as more important than anyone else, it would be illegitimate to save myself from some horrible fate if my suffering that fate would yield more important goods for others, including salvation from horrible fates).

3. Moral reasons are rationally overriding (in other words, whenever I have a moral reason to X and a non-moral reason to NOT-X, the moral reason always defeats the non-moral reason, all things considered).

4. Practical rationality does not require agents to subject themselves to horrible fates.
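
Schematically, and only roughly (with H standing in for some horrible fate, a label of my own), the tension I have in mind runs like this:

\[
\begin{array}{l}
\text{(1) + (2): the moral point of view can demand that I suffer } H \text{ for the sake of others,}\\
\quad\text{and so gives me moral reason to do so;}\\
\text{(3): that moral reason defeats, all things considered, any non-moral reason I have to avoid } H;\\
\text{so practical rationality can require me to suffer } H\text{, which is just what (4) denies.}
\end{array}
\]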

I’ve stated them in a rough-and-ready way, and they may call for some finessing, but as far as I can see, these four claims are inconsistent. But there are a number of ways one could go in avoiding the inconsistency.

Partialists reject (1) – here I’m thinking of David Brink’s recent work on “self-referential altruism,” Samuel Scheffler’s rejection of some features of impartiality, etc. (2) is rejected by, for instance, Garrett Cullity in The Moral Demands of Affluence, Brian Barry in Justice as Impartiality, and others. The overridingness thesis, (3), is rejected by (early) David Brink, by Philippa Foot in “Morality as a System of Hypothetical Imperatives,” and by Roger Crisp in Reasons and the Good and “The Dualism of Practical Reason”; its rejection is also suggested by Peter Singer in Singer and His Critics, in his reply to Frances Kamm. (4) is rejected, for instance, by those who accept (3) together with standard versions of utilitarianism. Catherine Wilson also worries about something like (4) in “On Some Alleged Limits to Moral Endeavor”. Question (if I may): which are you inclined to reject? How come?

13 Replies to “A Question about Rationality and Demandingness”

  1. I definitely want to reject (3). I don’t think that moral reasons are morally overriding, let alone rationally overriding. Indeed, it would be very odd to think that moral reasons are rationally overriding but not morally overriding — it would, for instance, be odd to think that moral reasons rationally trump other reasons and thereby generate rational requirements even when they fail to morally trump other reasons and so fail to generate moral requirements. I don’t think that moral reasons are morally overriding, because I’m intuitively drawn to the idea that there are agent-centered options and supererogatory acts, and I don’t see how there could be either if moral reasons were morally overriding — see my “Are Moral Reasons Morally Overriding?”.
    I’m also inclined to reject either (1) or (2); I’m not sure which. I think that there is a sense in which none of us is any more important than anyone else from the moral point of view, but I don’t think that this entails that we’re morally required to give equal consideration to the interests of each of us.

  2. Here’s another thought: If (1)-(4) are to be an inconsistent set, then shouldn’t (3) read as follows:
    (3′) Moral demands are such that agents are always rationally required to act as they are morally required to act.
    It seems to me that a person could accept (3′) but deny (3). Indeed, I’m such a person. I accept what might be called strong moral rationalism: If S is morally required to do x, then S must also be rationally required to do x. And because I also think that agents are not, typically, rationally required to make extreme sacrifices for the sake of strangers, I conclude, given strong moral rationalism, that agents are not, typically, morally required to make extreme sacrifices for the sake of strangers.

  3. Ah yes. Very good. You’re right. I should have made the distinction between “moral requirements” and “moral reasons” clearer. Thinking out loud here: doesn’t your proposal still leave you having to deny (1) or (2)? Perhaps it depends on what the “moral point of view” involves, i.e., moral reasons or “moral requirements”, as you define them. Let’s say it involves “moral requirements.” Then, I take it, you would have to deny (1) or (2). But if the “moral point of view” is the point of view of moral reasons, then presumably you could keep (1) and (2), no? Because moral reasons wouldn’t necessarily determine moral requirements? Again, I’m thinking with my fingers here.

  4. Welcome Dale — It’s great to have you!
    I guess I’m inclined to reject (1), (3) and (4), and I am not sure whether I understand (2).
    1. I would reject (1) for several reasons — e.g. I believe that each of us has special obligations to their nearest and dearest, and in my view, these special obligations are incompatible with giving one’s nearest and dearest no greater weight than random strangers.
    2. Since I don’t accept (1), I don’t really have to have any view about “the demands of impartiality”. (Indeed, I am not quite sure that I understand what these demands are.)
    3. Like Doug, I reject (3), at least as you formulate it, because not all “moral reasons” are moral requirements. Some moral reasons are supererogatory considerations; other moral reasons are themselves overridden by weightier countervailing moral reasons. So I wouldn’t accept (3). But I would accept the weaker principle that replaces the occurrence of ‘moral reasons’ in (3) with ‘moral requirements’.
    4. I’m also inclined to reject (4). Surely it must be at least possible for practical rationality occasionally to require someone to subject himself to a horrible fate (i.e. an avoidably horrible fate). The only way to avoid this result would be to insist that the reason of self-interest against subjecting oneself to a horrible fate has a kind of infinite weight, guaranteeing that it always defeats all countervailing reasons, no matter how many such countervailing reasons there might be. And that just seems too strong to be credible.

  5. Thanks Ralph. It’s good to be here. You note some additional ways that the claims should be finessed. (2), as you point out, is inelegant. Perhaps it would have been better to say: “If you are required to act impartially, you will occasionally be required to suffer a horrible fate,” or something like that.
    (4), however, could be weakened and still generate an inconsistent set. What about (4′): Practical rationality allows me to place significantly greater importance on my suffering a horrible fate than on others’ suffering horrible fates (perhaps up to some threshold)? Given your comments, that might look more plausible. But it’s still inconsistent with (1)-(3), given that if I’m required to sacrifice myself under the guise of impartiality, I cannot grant any more weight to my horrible fate than to anyone else’s.

  6. I take a roughly Aristotelian point of view on all this, which means I don’t accept a strong distinction between moral and practical reasons, or between acting morally and acting rationally. I also think that “point of view” is not a very helpful philosophical tool. I am inclined to endorse (4) (particularly the weakened 4′) and the amended version of (2), to the effect that if one is being impartial one may have to suffer some horrible fate. But I would reject (1), and (3) doesn’t make a lot of sense on my view. I would reject (1) mainly because I think it’s fine (praiseworthy, in fact) to weigh one’s own good, and the good of those connected with one to greater or lesser degrees, more heavily than the good of complete strangers.

  7. Dale,
    Would you accept this as being the inconsistent set:
    (1*) Agents are morally required to act impartially.
    (2*) If agents are morally required to act impartially, then agents are, other things being equal, morally required to act so as to incur some serious harm if this will prevent some stranger from having to incur a slightly more serious harm.
    (3*) If an agent is morally required to do x, then that agent is also rationally required to do x.
    (4*) Agents are not, other things being equal, rationally required to act so as to incur some serious harm if this will prevent some stranger from having to incur a slightly more serious harm.
    If so, I reject (1*) or (2*). Either agents aren’t morally required to act impartially or acting impartially doesn’t require what (2*) says it does. And I reject one or the other because, as Ralph puts it, “I believe that each of us has special obligations to their nearest and dearest, and in my view, these special obligations are incompatible with giving one’s nearest and dearest no greater weight than random strangers.”

  8. Yeah, that sounds OK to me. Only I think I would want to be more flexible on (2). I could imagine someone holding the view that what matters from the point of view of morality is the harm x itself, rather than whether some other harm is slightly worse or slightly better than x. (In other words, there could be a more serious harm than x that isn’t any more morally serious.) Why not say: If agents are morally required to act impartially, then agents are, other things being equal, morally required to act so as to incur some serious harm if this will prevent a more serious morally relevant harm (this might be an equivalent harm to more people)? (Actually, this isn’t totally ecumenical either. But any plausible view–let’s say–will allow that there exist greater morally relevant harms than one agent’s serious harm.) I was trying to be as ecumenical as possible between different views. But you have successfully captured the gist.

  9. I’m currently playing around with rejecting (3), only because its purported scope is universal. I think that moral concerns may be rationally rejected by an agent whose projects are of extraordinary non-moral value. This means that, for most people, (3) will be true, but that for some, it is false.

  10. Hi Nick –
    Coincidentally, that’s the one I’m playing with rejecting as well. Without any argument whatsoever, it seems to me that any attempt to reject (2) fails. I think the weaker version of (4) is hard to deny. That leaves (3) or (1). And I think that there’s some serious intuition behind (1). Also, you’re right, I think, to point out the strength of (3). There are lots and lots of systems of norms out there, one of which is surely prudence. But if morality always defeats prudence in the rationality sweepstakes (by this I mean that “reasons to comply with moral requirements” defeat “reasons to comply with prudential requirements”), then it appears as though morality has a lexical stranglehold on rationality. The least-weighty moral requirement would defeat any prudential reason, no matter how weighty. Anyway, that just seems really strong to me, and on reflection, implausible. Lexical priorities can be justified in some cases, I think, but fungibility seems a plausible working hypothesis when it comes to systems of norms. Really this is just table-pounding against (3). But for what it’s worth, that’s my intuition.
    Also, one could deny (3) and still accept that we have strong obligations to our friends and neighbors. One would simply suggest that those are obligations of friendship or neighborliness, not morality, that compete with morality for rational attention, and when they get strong enough, outweigh it. Anyway, this is the line I’m pushing in my paper: “Weak Anti-Rationalism and the Demands of Morality”.

  11. Great to have you on Pea-Soup, Dale!
    I was wondering how serious a role (1) plays in generating the inconsistent set you are discussing. Most deontologies would say that you are morally forbidden to do certain things that you might need to do in order to avoid suffering a very bad fate (e.g., stealing some unwilling donor’s organs). I am with you that morality does not seem to need to pander to oneself as much as rationality does. And, if this is right, this would be enough to generate the puzzle for just about any moral theory, not just consequentialism.

  12. Hi David –
    Thanks for the comment. I agree! This puzzle is not just for consequentialists. Here’s another thought. You could significantly weaken (1) and still come up with an inconsistent set. Imagine the following (don’t take the particular wording of this seriously, I just want to get the intuitive thought out there):
    1*: One can be partial, but the maximum weight of one’s own interest in not sacrificing oneself is (impartial badness of self-sacrifice)*(x).
    1.5*: The scope of morality includes x+y persons, and everyone else’s interest in avoiding the same calamity is evaluated impartially. (Basically, the thought here is that the scope of morality is wide enough that one’s own interest in avoiding self-sacrifice is not always great enough to outweigh the interest of every other person within the scope of morality in avoiding the same calamity.)
    So long as y is non-zero, there are possible worlds in which such a view issues the demand that you sacrifice yourself, i.e., when everyone else is going to snuff it if you don’t. I leave y as a variable to be defined in light of (4). If one thinks that it is implausible that rationality would ever require you to sacrifice yourself, then y=1 is already enough to generate the puzzle. But y could be increased depending on the range of circumstances in which it is plausible that rationality would require self-sacrifice. Impartiality is actually the limit case here: x=1, and y=all other persons. (I’ll try to make the arithmetic explicit in a P.S. below.)
    I’m musing here. Anyway, I think you’re right that this could be a problem for many views.
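    P.S. To make the arithmetic explicit, on one gloss of my own: write b for the impartial badness of the horrible fate in question, and read the x+y persons in 1.5* as persons other than me. Then

    \[
    \underbrace{x \cdot b}_{\text{maximum weight of my interest in avoiding the fate}}
    \;<\;
    \underbrace{(x+y) \cdot b}_{\text{combined impartial weight of the others' like interests}}
    \quad \text{whenever } y > 0,
    \]

    so in any world where all x+y others will suffer that fate unless I do, the weakened view still issues the demand that I sacrifice myself.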

  13. I reject (1) for deontological reasons as well as because of duties to those close to us. I accept (2). Moreover, I accept that the demands of a non-impartial morality can require us to suffer horrible fates, e.g., for deontological reasons (one must be willing to die of torture rather than commit murder) or to protect our children. While for Heath (3) doesn’t really make sense, for me (3) is trivial, because I don’t recognize non-moral reasons. And I reject (4), simply because for me practical rationality and morality are the same thing. 🙂
