So I was composing a reply to pressing comments from both Brad Hooker and Doug Portmore on the original post on this topic (which can be found here, as can their comments), when I realized that the reply was going longer than seems appropriate for a comment. So I’ve taken inspiration from Dave and decided just to log it as a full-fledged post. (I see that just last week, Jonathan Weinberg did a similar thing here, over on what I think is another great blog, Experimental Philosophy. Maybe this is becoming – or has already become – a blogospheric norm.) Let me kick it off by thanking both Brad and Doug for the comments…
Regarding Brad’s concerns, I agree, first, that it would be nice to have at least some guidance on the question of how much we should self-sacrifice when the duty of beneficence calls. I also agree that judgment will nevertheless be ineliminable here. (I also think that there’s a presumption against any moral theory that entails otherwise, though that is harder to defend, and I’ll therefore not pursue it here!) So there are two questions: how much guidance does Kantianism give us, and, quite apart from Kantianism, how much guidance is neither too much nor too little? I must confess that I’m not really confident about how to answer either question. I do, however, maintain the point from the original post that even if Kantianism gives us no help here, it isn’t a huge knock on the theory, since it might still be the best theoretical account of what makes right acts right and wrong acts wrong. (Here I probably place less value than Brad on the ability of a moral theory to be applicable.)
I also agree with Brad that a case in which I must satisfy a perfect duty of beneficence-in-times-of-critical-need by either saving one man’s arm or another man’s life shows that “we need more than the distinction between perfect and imperfect duties of beneficence”. So the question here is: how might a Kantian show that we need to help the life-threatened man, rather than the arm-threatened man? There are several routes one might go, but let me just suggest one. We might follow Tom Hill’s take on the Formula of the Kingdom of Ends, such that (on this formulation) the right act is the one that would be endorsed by legislators in Kant’s Kingdom of Ends. These legislators, in turn, might make their policies on all sorts of grounds. Surely they’d be constrained by a respect-for-humanity principle, but in at least some cases where humanity must be traded off against itself, they might take a quasi-consequentialist approach of maximizing well-being, and argue that, ceteris paribus, saving the life clearly produces more well-being than saving the arm. Now, I’m not sure that the legislators should/would reason this way in this case, but the point is that they could, and so Kantianism, in principle at least, can account for our intuitions in Brad’s case. (Note that this would still be a distinctly Kantian theory, since the fundamental right-making feature of the act is not that it maximizes well-being, but that it is endorsable by legislators in the Kingdom of Ends.) However, I do agree with Brad that we in the Kantian camp haven’t come up with a comprehensive answer to all of the difficult questions about beneficence. So there’s still work to do.
As for Doug’s concern, I think the heart of my point still stands. He’s right, of course, that the reason we can’t help everyone is not that we can only help those nearby. But the point of my comment was different (and it was probably obscured by the way I wrote it up in the original). The point was simply that perfect duties of beneficence might be limited in demandingness without losing their status as perfect duties. One way to limit the duty is by limiting it to what can reasonably and non-overdemandingly be expected of the average agent, which would be justified (Hooker-style) by appeal to intuitive plausibility. And one possible way of doing that is to limit our potential beneficiaries to those in the neighborhood. But that’s only one possible limitation, and as Doug, following Unger and Singer, points out, it might not be a very efficient one. So maybe we’d go for some other limit (like, give a non-overdemanding amount of your income to Oxfam, perhaps). Again, the point of my comment was not to figure out the right limits (that’s way too demanding!), but, rather, to point out that there could be limits, and that when there are limits the duty is no less perfect.
The same is true of the contradiction-in-conception approach to limiting perfect duties. Doug is quite right that I wasn’t very specific on this question (and now that I look at it again, the original comment was misleadingly put, too), so hopefully this will go some way to helping out on that front, but I don’t presume to give a full answer here. What I had in mind was that some beneficent actions would cost the agent so much that those actions themselves couldn’t be universalized. For example, it seems hard to conceive of a world in which everyone gives up all of their free time to make free time for others. Thus, such an action would actually be impermissible (note: it would be impermissible to give up all of your free time). And this comports with our intuition that agents should not be overburdened by duty so much that they have no free time for themselves. Again, this is not a complete account, but I hope it goes some way towards better suggesting what I had in mind.