Let’s start off with a rough characterization of what a normative reason (hereafter, simply “reason”) is. The fact that p will be the case if S Xs is a reason for S to X if and only if the fact that p will be the case if S Xs is a consideration that counts in favor of S’s X-ing. Now some such facts are capable of giving rise to S’s being morally required to X, while other such facts are not. This distinction between reasons that are capable of giving rise to moral requirements and reasons that are incapable of doing so is, I think, the best way to characterize the distinction between moral and non-moral reasons. But even if I’m wrong about this, it is, nevertheless, an important distinction to make.
Before exploring whether I’m right about this, let me propose the following formal test for determining whether or not a given reason is a moral reason: A reason for S to X — namely, that p will be the case if S Xs — is a moral reason if and only if it would give rise to S’s being morally required to X in some possible world where (1) S has no reason (of any kind) not to X and (2) S has no reason (of any kind) to perform some other available act that would in any way impede or preclude S from X-ing. (The reason I say “in some possible world” rather than “in all possible worlds” is that it may be that a given reason can only give rise to a moral requirement when taken in conjunction with another reason. In such cases, the two reasons (both moral reasons on this account), taken together, give rise to a moral requirement in any possible world where conditions (1) and (2) are met.)
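Put schematically, the test might be stated as follows (the notation is shorthand I’m introducing here for compactness, not part of the test’s official formulation):

```latex
% TV1, put schematically. r is the reason that p will be the case
% if S Xs, and w ranges over possible worlds.
\begin{align*}
\mathrm{MoralReason}(r,S,X) \iff \exists w\, \big[\, & \text{(1) $S$ has no reason of any kind in $w$ not to $X$} \\
  \wedge\ & \text{(2) $S$ has no reason of any kind in $w$ to do anything that} \\
          & \quad\ \text{would impede or preclude $X$-ing} \\
  \wedge\ & \text{$r$ gives rise in $w$ to $S$'s being morally required to $X$} \,\big]
\end{align*}
```

The existential quantifier over worlds captures the “in some possible world” qualification: a single favorable world where the reason, perhaps in conjunction with other moral reasons, generates the requirement is enough.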
Now, this criterion gets the intuitively correct results in a number of cases. For instance, intuitively, the fact that X-ing will benefit me counts as a non-moral reason for me to X, whereas the fact that X-ing will benefit someone else counts as a moral reason for me to X. And my test gets the right results here. In no possible world where conditions (1) and (2) are met does the fact that X-ing will benefit me give rise to my being morally required to X. The fact that X-ing will benefit me may give rise to a prudential requirement to X, but not a moral requirement to X. Failing to benefit myself may always be foolish and imprudent, and it may even sometimes be wrong (as when others will benefit from my being benefited), but it is never wrong merely because I will lose out if I fail to benefit myself. By contrast, in a possible world where conditions (1) and (2) are met, the fact that X-ing will benefit someone else does give rise to a moral requirement to X. If there is absolutely no reason why I shouldn’t benefit someone else (not even a reason associated with opportunity costs), then surely it would be wrong for me not to do so. At the very least, the fact that X-ing will benefit someone else gives rise to a moral requirement to X when taken together with other such reasons, so that a substantial benefit to others is at stake.
Of course, a number of questions arise: (i) Is this distinction between reasons that are capable and reasons that are incapable of giving rise to moral requirements the one that best maps onto our intuitive understanding of the moral reason/non-moral reason distinction? (ii) Even if the answer is no, is this distinction still an important distinction? And (iii) is the test that I’ve given for determining whether or not a given reason is a moral reason a sufficiently practical one?
In answering (i), we must consider alternative proposals. I’ll consider one such proposal and then leave it to readers to suggest others that they think can do a better job of capturing our intuitive understanding of the moral reason/non-moral reason distinction. The possibility that I’ll consider is to characterize a moral reason as one that is relevant to arriving at a moral judgment about the act in question (that is, a judgment about whether or not it is morally permissible to perform the action). A non-moral reason is, then, a reason that is not relevant to arriving at a moral judgment about the act in question. One problem with this way of characterizing the distinction between moral and non-moral reasons is that reasons that intuitively seem to be non-moral reasons (e.g., the fact that X-ing will benefit me) could on this account turn out to be moral reasons. Another problem with this account of moral reasons is that whereas the question “Is S always morally required to do what S has most moral reason to do?” seems to be an open question, it is, on this account, a closed question. To make the same point a bit differently, whether moral considerations (i.e., moral reasons) always override other types of considerations in determining what we are morally required to do seems to be a substantive and an important issue that this account renders a non-issue. We might wonder, for instance, whether or not prudential considerations can override moral considerations and thereby cause it to be morally permissible to act contrary to what the balance of moral considerations supports doing. More specifically, we might wonder whether it is permissible to break a rather trivial promise (say, a promise to wash the dishes) if one has a significant prudential reason to do so (say, for the sake of going on a date requested at the last minute by someone whom one has long fancied).
(I borrow this example from Sarah Stroud’s “Moral Overridingness and Moral Theory.”) Here, it seems that the issue is whether the prudential consideration in favor of going out on the date overrides the moral consideration in favor of keeping one’s promise.
In answer to (ii), I claim the answer is a resounding yes. The distinction between reasons that are capable of giving rise to moral requirements and reasons that are incapable of giving rise to moral requirements is an important distinction whether or not it is best characterized as the distinction between moral and non-moral reasons, for it is important to ask whether the one type of reason always overrides the other.
In answering (iii), we must consider the following problem, which Jamie Dreier has pointed out to me in a previous comment — see here. The problem is that my test is difficult to apply in certain instances. Take, for instance, the fact that X-ing will produce greater equality. It is incredibly difficult, if not impossible, to imagine a world where conditions (1) and (2) are met. The closest I can come to imagining such a world is the following. Imagine the possible world where I must choose between performing Y or Z. If I perform Y, four people (A-D) will exist on two far-apart planets (p1 and p2). On p1, A and B will both have 99 hedons, and, on p2, C and D will both have 101 hedons. If I perform Z, the same four people (A-D) will exist on the same two far-apart planets (p1 and p2). On p1, A and B will both have 100 hedons, and, on p2, C and D will both have 100 hedons. Unfortunately for my test, it would seem that there is a reason against my performing Z, namely, that the people who will exist on p2 if I do Z will not be as happy as the people who will exist on p2 if I do Y. Thus there seems to be no way to meet conditions (1) and (2). Perhaps, though, the solution is to add the word “net” in front of the word “reason” throughout conditions (1) and (2), where a net reason is a reason that is not offset by some countervailing reason. I’m not sure whether this works. I’m certainly open to other suggestions.
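To make the arithmetic of the two-planet case explicit (my own tabulation of the figures given above):

```latex
% Hedon distributions in the two outcomes:
\begin{align*}
Y:&\quad \underbrace{99 + 99}_{p_1} + \underbrace{101 + 101}_{p_2} = 400 \text{ hedons (unequal)} \\
Z:&\quad \underbrace{100 + 100}_{p_1} + \underbrace{100 + 100}_{p_2} = 400 \text{ hedons (perfectly equal)}
\end{align*}
% Total welfare is the same either way, so Z's only advantage is
% greater equality; but C and D fare worse under Z (100 < 101),
% and that residual reason against Z is what prevents conditions
% (1) and (2) from being met.
```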
I’m now thinking that the following might be a better version of my test (call the original test “TV1” and this revised version “TV2”):
A set of facts (f1, f2, f3,…fn) constitutes a moral reason for S’s X-ing if and only if S is morally required to X whenever the following holds: (1) this set of facts obtains, (2) there are no further considerations that count in favor of S’s X-ing, (3) there is no consideration that counts against S’s X-ing, and (4) S wouldn’t be morally required to X if merely some proper subset of this set of facts obtained. A non-moral reason for S’s X-ing is, then, a set of facts that constitutes a reason, but not a moral reason, for S’s X-ing.
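Put schematically (again in shorthand of my own devising, with F for the set of facts and w ranging over possible worlds), TV2 says:

```latex
% TV2, put schematically. Clauses (1)-(4) track the four
% conditions in the prose statement; F = {f_1, ..., f_n}.
\begin{align*}
\mathrm{MoralReason}(F,S,X) \iff\ & \forall w\, \Big[ \big( \text{(1) every } f_i \in F \text{ obtains in } w \\
  & \quad \wedge\ \text{(2) no further consideration favors $S$'s $X$-ing in } w \\
  & \quad \wedge\ \text{(3) no consideration counts against $S$'s $X$-ing in } w \big) \\
  & \quad \rightarrow\ \text{$S$ is morally required in $w$ to $X$} \Big] \\
  & \wedge\ \text{(4) no proper subset of } F \text{ satisfies the above.}
\end{align*}
```

Note that, unlike TV1, this version quantifies universally over the relevant worlds and builds the minimality requirement (4) into the definition, so that a set of facts doesn’t count as a moral reason merely by containing one.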
I would be interested in which version (TV1 or TV2) people think is better.