The following is a Pareto principle that concerns reasons for action, as opposed to the standard sort of Pareto principle, which concerns preferences over outcomes. I call it

The Pareto Reasons Principle (PRP): If S must either do x or do y, and S has, with respect to one class of reasons, better reason to do x than to do y and, with respect to all other
classes of reasons that make up the remaining complement of reasons, no better
reason to do y than to do x, then S has a decisive reason to do x—that is, S is
rationally required to do x.
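
Schematically (and merely as one way of displaying the principle’s Pareto-like form; the notation here is mine), let C be the set of classes of reasons bearing on S’s choice, and write x ≻_c y for “S has, with respect to class c, better reason to do x than to do y.” Then, given that S must do either x or y, PRP says:

\[ \exists c \in C \,\Bigl[\, x \succ_c y \;\wedge\; \forall c' \in C \setminus \{c\}:\ \neg\,(y \succ_{c'} x) \,\Bigr] \;\Longrightarrow\; \text{S is rationally required to do } x. \]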

Theoretically speaking, PRP seems extremely plausible,
and yet it seems to have counter-intuitive implications in certain particular
cases. To illustrate, consider the following three cases and assume that in
each case I know all the relevant facts:

The Office: I’m selecting which of two offices will be mine: Office A or Office B. I know
that my welfare will be the same whichever I choose, but I also know that Bob would
be slightly better off getting Office A than he would be getting Office B. Furthermore,
I know that Bob will get whichever office I don’t choose. Regardless, I choose
Office A. My choice seems inconsiderate, but not irrational. According to PRP,
though, I had a decisive reason to choose Office B. After all, I had better
altruistic reasons to choose Office B. And, with respect to all other classes
of reasons that make up the remaining complement of reasons (self-interested
reasons seem most salient here), I had no better reason to choose Office A than
to choose Office B.

The Rescue: I
must choose either (a) to save the life of a stranger named Abe or (b) to save the
life of a stranger named Bob, who is, in all relevant respects, exactly like
Abe. There is absolutely no impediment to my saving Bob. To save Abe, however, I
must wade through some thick brush, and this will be moderately uncomfortable.
Because of the impediment in the one case but not the other, I will be slightly
better off if I choose to save Bob rather than Abe. Nevertheless, I save Abe.
Now, with respect to one class of reasons (viz., self-interested reasons), I clearly
have better reason to save Bob. With respect to all other classes of reasons
that make up the remaining complement of reasons (altruistic reasons seem most
salient here), though, I have no better reason to save Bob than to save Abe.
According to PRP, then, I have a decisive reason to save Bob, and, therefore,
my saving Abe was (objectively) irrational. Yet, intuitively speaking, it
doesn’t seem irrational to save Abe.

The Afternoon:
This afternoon, I must choose either to watch TV or to work on my book. Suppose
these are my only choices, and let’s suppose that, self-interestedly speaking,
I would be better off watching TV and that, altruistically speaking, no one
will be affected either way. Thus, with respect to
one class of reasons (viz., self-interested reasons), I clearly have better
reason to watch TV. With respect to all other classes of reasons that make up
the remaining complement of reasons (i.e., altruistic reasons), though, I have
no better reason to watch TV than to work on my book. According to PRP, then,
it would be (objectively) irrational for me to work on my book. Yet, this is
what I choose to do, and, intuitively speaking, it doesn’t seem at all
irrational to do so.

These cases may seem to suggest
that both altruistic reasons and self-interested reasons can fail to be rationally
decisive (and thus fail to generate a rational requirement) even when they
favor an act that is “Pareto superior” to all other available alternatives—here,
I’m using “Pareto superior” in a nonstandard sense to mean “favored by one
class of reasons but not disfavored by any of the other classes of reasons that
make up the remaining complement of reasons.” How might we account for the fact
that both types of reasons can fail to be rationally decisive even when they
support the Pareto superior alternative? There are, at least, two possible
accounts:

Gert’s Account:
Gert draws a distinction between two dimensions of strength and maintains that all
altruistic reasons and certain self-interested reasons (those associated with
gains/benefits as opposed to losses/harms) have no requiring strength but only
justifying strength. (Roughly speaking, a reason has justifying strength to the
extent that it can make it rationally permissible to perform an act that would
otherwise be irrational, and a reason has requiring strength to the extent that
it can make it irrational to refrain from performing an act that it would
otherwise be rationally permissible to refrain from performing.) Thus, the
self-interested reasons that I have in both The Rescue and The Afternoon are
incapable of generating a rational requirement. Likewise, the altruistic
reasons that I have to choose Office B in The Office are incapable of generating
a rational requirement to do so.

Sidgwick’s Account: Sidgwick claims that agents always have sufficient reason to do what they
have best (or tied for best) impartial reasons to do as well as sufficient
reason to do what they have best (or tied for best) self-interested reasons to do.
Since, in both The Office and The Rescue, these two things conflict, I have, in
each case, the rational option to do either. Sidgwick’s Account cannot handle
The Afternoon, however: there, no one else is affected either way and I am better
off watching TV, so working on my book is neither what I have best (or tied for
best) self-interested reason to do nor what I have best (or tied for best)
impartial reason to do, and the account, like PRP, implies that working on my
book is irrational.

I think that there are problems
with both approaches (which I would be happy to get into if asked), but, for
now, let me mention an alternative approach to dealing with this philosophical
problem: Deny that there is a Pareto superior alternative in any of the
above cases. One merit of this approach is that it allows us to maintain PRP,
which is, theoretically speaking, quite intuitive. Here are two ways this
approach might go:

Imperfect Reasons Account: There is a distinction between perfect reasons and
imperfect reasons. The fact that performing a1, an act-token, would increase my
pleasure is a reason to perform a1. Such reasons – that is, reasons to perform
act-tokens – are what I’ll call “perfect reasons.” By contrast, the fact both that
my finishing a certain task by a given deadline would increase my pleasure and that
I must spend half my waking hours over the next week to do so is not a reason
to perform any particular act-token, but rather a reason to spend half my
waking hours over the next week performing a certain kind of act (viz., an act
of the working-on-this-task type). I’ll call reasons of this sort – that is, reasons
to perform acts of a certain type to a certain extent – “imperfect reasons.” Now
consider The Afternoon case. Perhaps the relevant reasons here are not perfect
reasons, as the description suggests, but imperfect reasons. Perhaps what I have is
an imperfect reason to spend a certain amount of my time relaxing as well as an
imperfect reason to spend a certain amount of my time working on my book. If so,
we need to take a broader view than that which is taken in the above
description. The relevant choice is not between spending this afternoon
watching TV and working on my book, but between, say, (i) spending this
afternoon watching TV and the next working on my book and (ii) spending this
afternoon working on my book and the next watching TV. And, although when we
compare the world in which I watch TV to the nearest possible world in which I
instead work on my book, we find that I’m better off watching TV, when we compare
the world in which I choose (i) to the nearest possible world in which I choose
(ii), we find that I’m equally well off either way. Thus, neither (i) nor
(ii) is superior to the other.
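
Put schematically (where, just for illustration, W(o) stands for my level of welfare in the nearest possible world in which I take option o), the suggestion is that

\[ W(\text{TV this afternoon}) > W(\text{book this afternoon}), \qquad\text{and yet}\qquad W(\text{(i)}) = W(\text{(ii)}), \]

so that, once the options are individuated at the level at which the imperfect reasons apply (namely, (i) and (ii)), there is no Pareto superior alternative for PRP to get a grip on.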

The Subjectivist’s Account: The descriptions of the cases all presuppose that
reasons are all value-based, pertaining to what’s worth achieving. But the
subjectivist could argue that, in presupposing this, we’ve overlooked a way of preserving
PRP. Specifically, the subjectivist will argue that I could not have voluntarily acted in the
way that I did in each of the cases unless there was something in my subjective
motivational set that favored the act that I actually performed. This desire,
whim, feeling, or whatever provides me with a reason to perform the act that I
actually did, and this reason is in addition to those mentioned in the above descriptions.
This additional reason tips the scales in favor of the act I actually
performed, such that what I did is what I had decisive reason to do.

Well, I know that I don’t
like Gert’s Account, Sidgwick’s Account, or the Subjectivist’s Account. This
leaves me with the Imperfect Reasons Account, but I’m not entirely happy with it
either. What do others think? Are there other solutions to this
philosophical problem that I’ve overlooked? And is it even a genuine problem?

26 Replies to “The Pareto Reasons Principle”

  1. Doug,
    Interesting post. Let me start with a question–I seem to be missing something here.
    You claim that Gert’s analysis of the issue is this:
    all altruistic reasons and certain self-interested reasons (those associated with gains/benefits as opposed to losses/harms) have no requiring strength but only justifying strength.
    Does Gert mean to imply, then, that if a self-interested reason is associated with losses/harms, then this reason will have requiring strength? If so, then I do not see how his account will help in *The Rescue*. For in that case you are faced with two choices: save one person and suffer no harms, or save another and suffer a harm. So doesn’t Gert’s account imply that you have decisive reason to save the first? (I’m aware that you do not favor this account, but I just didn’t see how it was supposed to help in *Rescue*.)

  2. Doug,
    I think I have an answer to my last question, but I now have a new question (again for Gert’s account).
    My mistake in my last question was to forget that the reason to save either person is an altruistic reason, and so on Gert’s account, saving either person will not generate any decisive reasons.
    However, doesn’t this account imply that it is always irrational to help another if doing so will require a loss or harm on your part? This seems to follow because these sorts of reasons are decisive and altruistic reasons never are. So in the rescue case Gert’s account implies that you can do one of two things rationally: either do nothing and let both men drown, or save the first at no cost to yourself; saving the one that requires a loss, however, still seems to be irrational, and so Gert’s account still seems to conflict with your intuition there. Or am I still missing something here?

  3. Scott,
    Thanks for pointing out the need to clarify things. I didn’t explain the implications of Gert’s account in The Rescue case very well, mainly because I hadn’t thought it through carefully enough. Nevertheless, I think that Gert’s account gets the intuitive result in The Rescue, for what I neglected to mention was that it is only the self-interested reasons that one has to avoid non-trivial harms and/or losses that have requiring strength. I think that in The Rescue, the harm is rather trivial: discomfort. But this brings up an interesting question: What if we modify The Rescue so that the harm involved in saving Abe is non-trivial? I’m not sure what counts as non-trivial, but suppose that the loss of a thumb counts as a non-trivial loss/harm. In that case, Gert’s account would imply that I’m rationally required to save Bob. This sounds about right. Although it is morally permissible to save Abe in this case, it does seem objectively irrational.

  4. Doug, I’m too lazy to work through the different stories about how the (PRP) can be false, but maybe this will be useful. Consider this analog:

    Suppose x is better than y in some way, and in all other ways x is not worse than y. Then x is better than y.

    That principle is definitely not true if ‘better than’ is incomplete. For instance, suppose I can go to hear Branford Marsalis play jazz, or else I can go to hear B. B. King play blues. In respect of music quality, neither is better, though they are (intuitively) not tied, either. In that respect, I think, they are unordered. Now it turns out that the B. B. King concert is 25 seconds closer to my house. That going to B. B. King is better in respect of propinquity, and no worse in respect of music quality, does not, intuitively, mean that it is better.

  5. Jamie,
    Yes, I don’t blame you; the post is too long.
    In any case, I would put this in the Sidgwickian Account category. According to Parfit, Sidgwick held his dualist view because he believed that impartial reasons and self-interested reasons are incomparable. Parfit adopts a Sidgwickian-type view, but rather than holding that impartial reasons and self-interested reasons are incomparable, he holds that they are only “imprecisely comparable.”
    I take it that what you have in mind with your suggestion is that there may be goods that are incomparable or only imprecisely comparable. Is that right?
    The reason that I’m unsatisfied with this solution is that I’m not sure that it can deal with cases like The Afternoon, where the reasons in question are all of the same type and seem comparable.

  6. Let me add: I think that the Sidgwickian Account might be the best account of the first two cases (i.e., The Office and The Rescue). My worry is that it cannot explain all the cases where PRP seems to fail.

  7. Hmm. My problem with this is not so much the principle, as the conditional it begins with: “If S must either do x or do y…”– I just don’t think there are many cases, if any, where this condition holds. And the examples given so far don’t fill me with confidence that there are any such cases, either: e.g. (and most obviously) “This afternoon, I must choose either to watch TV or to work on my book”. What could make it true that I MUST do one or the other? I have the same feeling about The Office case too; and even in The Rescue, which is the most likely of the three to fulfil this condition, I still don’t think it is fulfilled. It is at least possible for me to rescue neither person. So in what sense MUST I rescue one or the other? Better not be a moral must, since (I take it) we’re talking practical reason here, not normative ethics. But better not be a practical-rational must either, since that’s supposed to be the output of this line of thought, not the input.
    The moral I draw is first, that it’s not as easy to regiment the alternatives for choice as we philosophers tend to assume; and second, that doing nothing is an alternative for choice the theoretical and rational (and indeed ethical) significance of which we neglect at our peril.
    In any case I doubt that, if X’s only alternatives are A, B, or C, and he has reason of strength Tiny to do A, reason of strength Tiny + infinitesimal to do B, and reason of strength Tiny + 2 infinitesimals to do C, we’re justified in inferring that he has decisive reason to do C! Surely the facts about his other reasons can’t affect the facts about his reason to do C in this “bootstrapping” fashion?
    Or am I just being a wet blanket?

  8. Timothy,
    Point taken. Let’s assume that, in all three cases, if I do neither x nor y, you’ll shoot me dead. Now revise PRP as follows:
    If S must either do x or do y, or otherwise face some dire consequence that S has an overriding and decisive reason to avoid, and S has, with respect to one class of reasons, better reason to do x than to do y and, with respect to all other classes of reasons that make up the remaining complement of reasons, no better reason to do y than to do x, then S has a decisive reason to do x—that is, S is rationally required to do x.
    Does this address your worry?
    I was just trying to make an already long and complicated post no more so. In any case, I’m not sure I’m seeing what the looming peril is.

  9. Doug,
    Thanks for that. I don’t know that there is a looming peril, other than the collapse of the issue… having people around who are going to shoot me dead if I don’t make a choice is not, after all, terribly common. In the absence of such maniacs, it’s hard to see how we can be rationally obliged to do (e.g.) what we have hardly any reason to do, but more reason to do than any other alternative action. And in the presence of such maniacs, our reasons change, of course. So either way I’m wondering if the issue can be so much as set up.
    On the related problem about the near universal availability of the option of not acting: the presence of this option undermines the plausibility of talk of forced choice between actions. But maybe the principle can be easily tweaked to accommodate this? Make it a principle about options, not actions.
    Anyway, here’s another thought about that Pareto principle. One way of rejecting something like it would be to be a satisficer in practical rationality. You don’t regard yourself as bound to take that unique option that you have most reason to take (or any from the set of tying most-rational actions); simply as bound to do something that is rational enough. I think this is the right view in the theory of rationality. As a matter of observation, we constantly take options on the grounds that they’re merely good enough, and count ourselves and others rational for doing so. As a matter of theory, the rationally best option is at least usually, and I would say always, undiscoverable; so we don’t waste computing energy going after it. We just satisfice.
    Corollary: because this is the right view in the theory of rationality, and because there’s no relevant difference at this point between the theory of rationality and normative ethics, satisficing is the truth in normative ethics too.
    David Sobel was asking for objections to consequentialism the other day… well, here’s one (in the making) to maximising consequentialism…

  10. This may be related to Timothy’s objection, but I’m inclined to suggest that the issue can be explained away. I wonder if the sheer /triviality/ of the cases described affects our intuitions.
    In The Office, our own ambivalence suggests that there’s little difference between the offices, so I think that inclines us to think that Bob’s preference can only be quite weak. In The Rescue, the discomfort of strolling through the brush seems like a tiny reason in comparison to the fact that someone’s life is at stake. Finally, The Afternoon also seems like such a trivial choice that I’m not sure I trust our intuitions to be sensitive enough to judge that you have greater reason to watch TV than to work on the book.
    Due to this sheer triviality, I wonder if we treat the apparently sub-optimal choice as permissible simply because sheer luck is going to outweigh our decision anyway – this is especially clear in the rescue case, where it seems as though facts about the kinds of people Abe and Bob are will be far more important than the slight discomfort you face in saving them.

  11. Alex Gregory makes a good point, I think. Here’s a little tweak: it might be that we regard each of the trivially different options as permissible because we think a person is blameless for not thinking over the alternatives carefully. “Why did you choose A over B?” “I don’t know, it just didn’t seem worth worrying about.”
    I don’t agree with Tim Chappell, though. How many options there are is a matter of how we divide the outcome space in describing the case. We can always divide it into two if we like: today Doug can either watch TV in the afternoon or not watch TV in the afternoon. (The reasons he has to work on his book are reasons not to watch TV in the afternoon.)
    Doug: yes, that’s the sort of thing I had in mind. It’s not clear to me that The Afternoon is a case in which all reasons are obviously comparable. Fine, we can stipulate that you will be better off if you watch TV, but it could be that you will be better off in some respects if you work on the book, and the different respects of welfare support the intuition that the alternatives are not ordered.
    In any case, my intuition about The Afternoon is not strong at all. I’ll let the best theory classify that one for me.

  12. Timothy, Alex, and Jamie: Thanks for the excellent points. One thing that has become clear is that I need better examples. So what about these:
    The Office*: This is like the original example except that Bob would be much better off getting Office A. Let’s suppose that, unlike Office B, Office A comes with hundreds of free books that will be useful to Bob’s research but not mine (books with no re-sale value). Also, let’s assume that the situation is such that unless I pipe up and request Office B, Office A will be assigned to me and Office B will be assigned to Bob.
    The Rescue*: This is like the original example except that all that I have to do to rescue Abe is stand perfectly still over the next minute. If I move during the next minute, I will thereby rescue Bob. (I know Timothy still won’t be happy with this case because it’s so weird.) If I rescue Bob, I’ll receive a reward of $10,000. Nevertheless, I choose to stand perfectly still and rescue Abe.
    Now Gert’s Account and Sidgwick’s Account will still say that there is a rational option in both these cases, whereas PRP says that I’m rationally required to benefit Bob in both cases. Intuitively, Gert and Sidgwick seem to get it right. Or are my intuitions different than those of others’?
    (1) But note that these are not trivial choices. There’s a significant benefit at stake for me in both cases.
    (2) An appeal to satisficing may not help here unless we’re willing to say that each of the options in both cases is good enough. But thanks, Timothy. I will add satisficing as another potential solution.
    (3) I think that these revised examples deal with Timothy’s worry about overlooking certain available options, but correct me if I’m wrong.
    (4) Jamie: I accept your point that in The Afternoon the reasons (although both self-interested) may be self-interested reasons of different types: the reasons that I have to watch TV do seem to be of a different type than the reasons I have to work on my book. But I would hate to say that these types of reasons are incomparable. It seems that I can and do compare these kinds of reasons on a daily basis. It would be bad enough if Sidgwick were right and the cosmos of duty were in chaos in that there was no way to compare partial and impartial reasons. But you’re suggesting there’s even more chaos, that there is no way to compare even many self-interested reasons. In any case, what about a case where I decide to watch one TV show rather than another, the one that I know I’ll enjoy less? I know that I never enjoy these reality shows but I’m drawn to watch them just as I’m drawn to look at the carnage at the scene of a roadside accident. Have I done anything objectively irrational, as PRP implies?

  13. “Or are my intuitions different than those of others’?”
    I have to confess that my intuitions about these modified cases are indeed that you are acting unjustifiably (I think “rational” may be misleadingly loaded) by severely impeding Bob’s research, or by forsaking the oodles of free cash.
    What do others think?

  14. Alex,
    I think that ‘unjustifiable’ is misleadingly loaded, bringing to the table moral connotations. The question, however, is whether what I do in these cases is irrational regardless of whether it is immoral. My intuition is that what I do may be inconsiderate in The Office* and less than optimally prudent in The Rescue* but not irrational. I think that we would hesitate to call my actions irrational.

  15. My intuition is that the actions are indeed prima facie irrational. It’s not a strong intuition, though.
    “But I would hate to say that these types of reasons are incomparable.”
    They are partly comparable. That is, there are some cases in which the reasons do not add up to one action’s being required, the other action’s being required, or the two actions’ being tied. But only some.

  16. Jamie,
    Good point. They could be partly comparable, and that seems fairly plausible. So I’m not sure that there is a problem here after all. Maybe Gert’s Account and Sidgwick’s Account get the intuitively implausible result in The Rescue* and The Office*. And, perhaps, in whatever cases it is intuitive to say that PRP is violated, we should hold that this is so because the relevant reasons are only imprecisely (or partly) comparable and thus accept a kind of Sidgwickian Account.
    But I’m still interested in whether anyone has any strong intuitions about The Office* or The Rescue*.

  17. A tiny response to Jamie: yes, it is indeed always possible to divide all our option space into two. Won’t always be rational, sometimes won’t even be sane… but it’s always possible.

  18. It’s irrational to divide an outcome space into two?
    Man, you’re tough! I know some statisticians who are glad you aren’t their boss.

  19. It’s often irrational to regard a twofold (or some other fold) division of an outcome space as uniquely rational. Is that better?

  20. Unexceptionable!
    But does that fact mean that Doug’s problem cannot be set up?
    I think I’ve misunderstood your initial comment.

    I have read this exchange with great interest, but I must admit that I am confused. I admit that I was confused after reading the first scenario. I thought that with age and reflection things would get clearer, but they seem to get muddier!!!!
    Question: Assume the office scenario is between Tom and Bob. If Tom is a nice person, why would he hurt Bob? For a person who is considerate of others, this issue will not arise; he will certainly give Bob the office that benefits Bob the most, if there is no downside to him. Isn’t that how we differentiate nice people from jerks? And is not Tom being a jerk in his (ongoing) relationship with Bob, which I assume has been mutually beneficial up to now, irrational?
    How else would one describe being irrational other than doing something that has no gain, but only potential harm to the person performing the action? And if Bob finds out that Tom took the office that would have benefitted him, but not harmed Tom had he not taken it, he is going to be mad and/or disappointed, and that will harm Tom (as well as Bob). Now, under PRP, would that not be irrational?

  22. John,
    To answer your first few questions: Yes, clearly, Tom is an inconsiderate jerk.
    But you should assume that Bob has no way of knowing this, for he has no way of finding out that Tom didn’t strongly prefer Office A over Office B. So assume that, self-interestedly speaking, Tom has nothing to lose from choosing Office A. Let’s just stipulate that this is so. In that case, Tom’s act is not an instance of “doing something that has no gain, but only potential harm to the person performing the action.”

  23. Thanks Doug for your response. It helped to clarify what you are doing.
    Questions: Why should I stipulate what you want me to stipulate? It seems to me that Tom must have a supporting reason for choosing to do an action that makes the choice rational. You are asking me to stipulate that he does not have anything to lose re any choice he makes relative to the office he chooses, but that seems to be asking too much of me. But, even if I do stipulate that Bob will never find out, how does this affect how Tom should act? Should not Tom act out of the best reason that he has for acting, namely aiding Bob, even though Bob will never know he did so? If I were to ask Tom why he chose Office A, what reason would he give? If he had no reason, then the action seems to me to be irrational, where irrational means doing something without reason. I take it that PRP supports rationality as doing what you have the best supporting reason to do?
    Doug: Thanks for an interesting topic for discussion.

  24. John,
    I have the intuition that it is never irrational to do what it is in one’s best interests to do as well as the intuition that it is never irrational to do what is tied for best in terms of one’s self-interest. In other words, acting as Rational Egoism prescribes is never irrational, or so it seems to me. (Note, however, that I’m not a Rational Egoist, for I believe that it is also rational to act in accordance with other reasons, e.g., altruistic reasons.)
    Now this case is meant to be a case where things are tied in terms of the agent’s self-interest, and thus choosing Office A is not irrational, or so it seems to me. Now it sounds to me that you just don’t share my intuition. In which case, PRP doesn’t have a counter-intuitive implication in this case. Perhaps, though, it has counter-intuitive implications in one of the other two cases. All I need is one.

  25. Doug
    Then, I believe, the rescue case will not work either. You have self-interested reasons for saving Bob although there is no altruistic difference between saving Bob and Abe. It seems to me that, given what you state above, you need two conditions fulfilled to judge the rationality or irrationality of an action. Doing y is not irrational only if 1) there is no self-interested reason to do x over y and 2) from an altruistic standpoint x and y are of equal value. The 1st condition is not met in the rescue case although the 2nd is. Your intuition that it is not irrational to save Abe would only hold if you focused all the explanatory and justificatory weight on the 2nd condition and disregarded the 1st, but that would be counter-intuitive given your stated criteria that to be irrational there has to be a self-interested reason to do one over the other.
    Or am I missing your point entirely?
    Thanks for your response. I am enjoying and benefitting from this exchange.

  26. John,
    You write,

    It seems to me that, given what you state above, you need two conditions fulfilled to judge the rationality or irrationality of an action. Doing y is not irrational only if 1) there is no self-interested reason to do x over y and 2) from an altruistic standpoint x and y are of equal value.

    What I said was only that my doing y is not irrational if (not: only if) it is what’s best (or tied for best) in terms of my self-interest.
    Regardless, you seem to be missing the point. Either you share my intuition that what I did in at least one of these three cases is rationally permissible (i.e., not irrational) or you don’t. If you don’t, then you should accept PRP. If you do, then PRP has counter-intuitive implications and so we should consider whether and why it might be false, which is what I do in the rest of the post.
