Welcome to our Ethics review forum on Errol Lord’s The Importance of Being Rational (OUP 2018), reviewed by Nathan Howard. Excerpts from the blurb and the review are below, but you can read both in their entirety via OUP’s website and Ethics, respectively. (Though of course, you are welcome to participate in the forum even if you haven’t read either.)

The book abstract:

The Importance of Being Rational systematically defends a novel reasons-based account of rationality. The book’s central thesis is that what it is for one to be rational is to correctly respond to the normative reasons one possesses. The book defends novel views about what it is to possess reasons and what it is to correctly respond to reasons. It is shown that these views not only help to support the book’s main thesis, they also help to resolve several important problems that are independent of rationality. The account of possession provides novel contributions to debates about what determines what we ought to do, and the account of correctly responding to reasons provides novel contributions to debates about causal theories of reacting for reasons. After defending views about possession and correctly responding, it is shown that the account of rationality can solve two difficult problems about rationality. The first is the New Evil Demon problem. The book argues that the account has the resources to show that internal duplicates necessarily have the same rational status. The second problem concerns the ‘normativity’ of rationality. Recently it has been doubted that we ought to be rational. The ultimate conclusion of the book is that the requirements of rationality are the requirements that we ultimately ought to comply with. If this is right, then rationality is of fundamental importance to our deliberative lives.

From the review:

The Importance of Being Rational is wonderfully clear and sensibly organized. One of its major strengths is its synthesis of some of the best insights in epistemology and metaethics concerning rationality. Lord is attracted to discussions in epistemology because those discussions often concern questions about how an agent’s evidence supports their attitudes taken individually, not collectively. Because the type of irrationality to be explained in these cases is that of a single attitude, claims about coherence relations between attitudes offer far less natural explanations of irrationality. Consequently, thinking about rationality as the epistemologist does gets a reasons-based account of rationality off on the right foot. Accordingly, Lord’s account steers closer to the epistemologist’s tack than the metaethicist’s, though Lord’s ambitions outstrip epistemology. As we’ll now see, however, this tack leads to rough water in areas where the epistemic domain is a poor guide to the rest of the normative.

One such place is the difference between creditworthy belief and creditworthy action. Lord’s disjunctive view offers a twofold distinction between ways of reacting for a reason. These two ways map cleanly onto the familiar distinction in epistemology between properly and improperly based beliefs. As a result, Lord’s analysis of the achievement of ex post rational belief is simple and elegant: ex post rational belief is simply belief for a sufficient normative reason.

By contrast, we need at least a threefold distinction between ways of acting for a reason, if we want to maintain the parallel analysis for moral creditworthiness. That’s because there are at least two kinds of normative reasons for action: prudential and moral. Only moral reasons are relevant to moral achievements or what’s morally creditworthy; prudential reasons are relevant to prudential achievements.

As a result, one reason not to go down Lord’s path is that his disjunctive theory of reacting for a reason immediately begins to proliferate once we accept that there are important differences between reasons for action. If we accept Lord’s disjunctive strategy, we must hold that one is morally creditworthy just when one acts for a sufficient moral reason, where the relation named by ‘acts for a sufficient moral reason’ differs from the ones named by ‘acts for a sufficient normative reason’, ‘acts for a sufficient prudential reason’, ‘acts for a motivating reason that is a sufficient moral reason’, and so on. In sum, while a twofold distinction in reacting for a reason perhaps suffices in epistemology, it does not suffice in the broader normative domain. As a result, Lord’s disjunctive theory of reacting for a reason has a great many disjuncts, not merely the two he mentions.

[…] [Riffing on a well-known case from Kant, Lord describes two grocers who are each moved to return correct change during a transaction by the fact that the change is correct. The first is motivated to do so only prudentially, because he knows that his business will thrive if he gains a reputation for returning correct change. By contrast, the other grocer returns correct change out of purely moral concern. Both grocers do the right thing (return correct change) for a fact that gives a sufficient moral reason (because the change is correct), but only the altruistic grocer is creditworthy.]

Whether we want to accept Lord’s disjunctivism hinges on whether we want to locate the difference between the altruistic and egoistic grocer in their motivating reasons themselves or in how they treat those reasons. At first blush, we should expect a Reasons Firster like Lord to prefer the former approach over the latter. But Lord wishes to serve two masters. In the introduction, Lord avows two foundational commitments: to the Reasons First research programme and to the Knowledge First research programme. As Lord clarifies in a footnote, his commitment to the Knowledge First programme extends only to the relative priority of the concept of knowledge over those of belief and justification, so it does not imply the contradiction that both reasons and knowledge are uniquely first. Nevertheless, it should not surprise us that these two commitments create a subtle tension in Lord’s view, for both knowledge and reasons have been thought to uniquely underlie a wide range of norms.

Here’s a first pass at locating that tension: if we distinguish the moral way of acting for a reason from the prudential way of acting for a reason, reasons needn’t play a crucial role in Lord’s analysis of creditworthiness. We can simply claim that an agent is morally creditworthy just in case their act manifests the right kind of know-how, i.e., knowledge of how to use some fact as a sufficient moral reason. Although one can manifest this knowledge only in the presence of a sufficient moral reason, we don’t need reasons to play the foundational role distinctive of the Reasons First programme in this analysis of creditworthiness; knowledge alone appears to suffice.

But I think there’s a deeper worry with appealing to know-how in order to supplement claims about reasons. According to Lord, agents must satisfy both knowledge conditions with respect to a reason in order to possess it; they must be in a position to know the fact that gives the reason (the epistemic condition) and they must know how to use the fact as the reason that it is (the practical condition). Lord is in good company when he argues that what we should do is partly a function of the facts that we’re in a position to know. But because his practical condition is novel, he is alone in thinking that what we should do is partly a function of the reasons we know how to use. And I think we should resist being convinced here.

If I don’t have a concept of law, then while I can act for motivating reasons that are legal reasons, I cannot act for legal reasons, in Lord’s disjunctive sense of ‘acts for’. For example, I can hardly be said to act for a legal reason in this sense if the reason why I cross streets at intersections is that, when I don’t, I am often harassed by a dislikeable person in a curious blue suit with a gun and a badge. Lord’s theory explains why. Given my ignorance of the law, there is no rational route from the fact that a particular act is jaywalking to my refraining from the act. I simply don’t grasp the legal import of concepts like ILLEGAL jaywalking, so I don’t even possess legal reasons for action, given my ignorance, since I’m not in a position to manifest the know-how necessary for acting for a legal reason.

More generally, when we are deeply ignorant about a normative domain, we don’t know how to use the reasons particular to it. And on Lord’s theory, when we’re ignorant in this way, we don’t possess those reasons. So we’re exempt from obligations that originate in those reasons. But this exemption is too permissive: failing to know how to use a reason doesn’t eliminate its bearing on what we should do. For example, suppose that I am a deeply ignorant amoralist. I find others’ talk of moral requirement thoroughly confusing and confused. I am completely devoid of moral knowledge. I’m not in a position to manifest knowledge of how to use a fact as a moral reason. As a result, deeply ignorant amoralists do not possess moral reasons. Consequently, these amoralists are morally permitted to do anything; they lack moral obligations altogether.

This is a bad result — an amoralist’s ignorance doesn’t exempt them from moral responsibility. Just as ignorance of the law is no defence against it, an amoralist’s ignorance of how to use moral reasons does not grant them carte blanche for a life of theft, lies, and murder. The problem comes from tying possession to moral know-how. Agents can fail to have such know-how but they can’t fail to have moral obligations. So I think we should resist the temptation to tie possession to know-how. Lord’s competing commitments to Reasons First and Knowledge First pull him in the wrong direction here.

This tension takes nothing away from Lord’s lucid, rich, and ingenious account of rationality. Along with Kiesewetter and Wedgwood’s recent contributions, The Importance of Being Rational marks a new moment in debates about the nature of rationality. It is absolutely compulsory reading for epistemologists, ethicists, and meta-ethicists alike.


18 Replies to “Ethics Review Forum: Lord’s ‘The Importance of Being Rational’, reviewed by Howard”

  1. Thanks to Nathan for the thoughtful engagement with my book. In this opening comment, I will reply to the three main criticisms in the abridged review.

    One of the main claims of the third part of the book–wherein I defend a view about correctly responding to reasons, among other things–is that reacting for reasons is disjunctive. I argue that there are two different relations we can stand in with considerations when they move us to react in different ways.

    One way to react for reasons is to react for normative reasons. This is the way in which we need to react in order to correctly respond to possessed normative reasons. On my view, when we react for some normative reason r to X, we manifest knowledge about how to use r as the reason it is to X. This view is intimately related to my view of what it is to possess a normative reason. You possess some normative reason r to X when you are in a position to manifest knowledge about how to use r as the reason it is to X.

    The other way to react for reasons is to merely react for motivating reasons. I argue that we cannot reduce either relation to the other, and thus there are simply two relations. I defend substantive accounts of both relations.

    Nathan concedes that this works well in the epistemic case, but worries that it doesn’t generalize nicely to the practical case. This is because there are a variety of practically normative domains; at the very least, there is morality and prudence. Nathan worries that in order to account for these, we’ll need not just two relations, but three relations. To illustrate, he discusses my riff on Kant’s famous grocer cases. In my version of the cases, both grocers return the correct change on the basis of the same fact–viz., the fact that $9.76 is the correct amount. One grocer, Gerald, does this only because returning the correct change is the best way to maximize profit in the long run. The other, Gary, does this because it is the right thing to do (or because it is fair, or because the customer has autonomy that demands respect etc.).

    Nathan maintains, and I agree, that both Gary and Gerald return the change for normative reasons; Gerald’s reason is a prudential one, and Gary’s is a moral one. Nathan holds that I need to say that they stand in different relations to the common fact they react for in order to account for the prudential nature of Gerald’s achievement and the moral nature of Gary’s achievement.

    This is not so. The fact that $9.76 is the correct change provides two different normative reasons for both Gary and Gerald. One is a prudential reason and one is a moral reason. These reasons are not numerically identical to each other, nor are they identical to the fact that constitutes them–the fact that $9.76 is the correct change. My account of acting for normative reasons easily explains what they are doing. Gary is acting for a moral normative reason because he is manifesting knowledge about how to use the relevant fact as the moral normative reason that it is, whereas Gerald is acting for a prudential normative reason because he is manifesting knowledge about how to use the relevant fact as the prudential normative reason that it is. We don’t need to posit extra reacting-for-reasons relations in order to accommodate these cases. They are simply cases where two different agents act for two different normative reasons. The only feature that is slightly unusual is that both of those reasons are provided by the same fact.

    Nathan seems to anticipate this response. He thinks that it causes problems for my claim that reasons come first. Nathan writes

    if we distinguish the moral way of acting for a reason from the prudential way of acting for a reason, reasons needn’t play a crucial role in Lord’s analysis of creditworthiness. We can simply claim that an agent is morally creditworthy just in case their act manifests the right kind of know-how, i.e., knowledge of how to use some fact as a sufficient moral reason. Although one can manifest this knowledge only in the presence of a sufficient moral reason, we don’t need reasons to play the foundational role distinctive of the Reasons First programme in this analysis of creditworthiness; knowledge alone appears to suffice.

    The idea, then, is that if I try to solve the first problem by appealing to the fact that Gary manifests a different instance of know-how than Gerald, I will have drained the account of its reasons-first credentials. We can do all of the work just by appealing to the know-how.

    I don’t think this is so either. What it is one knows how to do is crucial. One way of putting the central claim of the book is that rational achievements are the manifestations of knowledge about how to use normative reasons. I claim that one is rational only when one manifests that sort of know-how. The presence of the normative reasons is essential. This is why it is right to say that I provide a real definition of (ex post) rationality in terms of normative reasons; the type of know-how is essentially knowledge about how to use normative reasons.

    Even if this works, Nathan raises a third worry about my account’s appeal to know-how. If possession and correctly responding depend upon the possession of know-how, then agents who lack the relevant know-how will fail to possess the relevant reasons. This will mean that those reasons do not get a rational grip on incapacitated agents. Nathan illustrates the phenomenon by writing

    More generally, when we are deeply ignorant about a normative domain, we don’t know how to use the reasons particular to it. And on Lord’s theory, when we’re ignorant in this way, we don’t possess those reasons. So we’re exempt from obligations that originate in those reasons. But this exemption is too permissive: failing to know how to use a reason doesn’t eliminate its bearing on what we should do. For example, suppose that I am a deeply ignorant amoralist. I find others’ talk of moral requirement thoroughly confusing and confused. I am completely devoid of moral knowledge. I’m not in a position to manifest knowledge of how to use a fact as a moral reason. As a result, deeply ignorant amoralists do not possess moral reasons. Consequently, these amoralists are morally permitted to do anything; they lack moral obligations altogether.

    It is of course a prediction of my theory that those who lack the relevant know-how lack the relevant reasons. So, Nathan’s amoralist will not possess the relevant moral reasons. I don’t think this is a particularly bad prediction. For one, Nathan’s last sentence does not follow from the prediction. I have given no theory of moral obligation. I’m sure that the best theory of moral obligation will predict that there is at least one sense in which the amoralist is morally obligated to behave well. (Even if the best theory of moral obligation didn’t make this prediction, it wouldn’t immediately follow that the amoralist is morally permitted to do anything. Complete lack of obligation doesn’t entail complete permission.)

    Still, the amoralist will not possess moral reasons qua normative reasons, and thus those reasons will not contribute to the case for what rationality requires. This is a bummer for those who want to rationally condemn the amoralist. I say this is a perfectly reasonable thing to give up in order to account for the ways in which abilities seem to constrain what we are rationally required to do (see the first few pages of chapter four for a catalog of data). This is made all the more reasonable when it is pointed out that there are still many other ways in which we can condemn the amoralist. We can look at him funny, speak harshly to him, lock him up, and banish him from our social circles.

    In the end, I think the account survives Nathan’s objections. I look forward to seeing what the wider PeaSoup community thinks. Thanks again to Nathan.

  2. Thanks to Daniel for organizing this and to Errol for both the absorbing book and excellent response above.

    I’ll develop the two arguments that Errol singles out for rebuttal. Errol writes above,

    “This is not so. The fact that $9.76 is the correct change provides two different normative reasons for both Gary and Gerald. One is a prudential reason and one is a moral reason. These reasons are not numerically identical to each other, nor are they identical to the fact that constitutes them–the fact that $9.76 is the correct change. My account of acting for normative reasons easily explains what they are doing. […] They are simply cases where two different agents act for two different normative reasons. The only feature that is slightly unusual is that both of those reasons are provided by the same fact.”

    I am attracted to the account of the relationship between reasons and facts that Errol suggests above, where that relationship is not identity (as a great many Ss like Scanlon, Setiya, and Schroeder assume) but one where a single fact can ‘provide’ multiple reasons, where these are numerically distinct from each other and from the fact that provides them.

    However, there’s a tension between such an account and Errol’s disjunctive account of (re)acting for a reason. In short: troublesome cases show that either we need to draw new distinctions in our ‘acts for a reason’ relation, as Errol advocates in the book OR we need to draw new distinctions in reasons, as Errol suggests above. But we don’t need to do both. I’ll illustrate.

    Errol uses cases like El Clasico to motivate the disjunctive account of acting for a reason. Here’s a simple such case: I believe that Garfield is a mammal based on the belief that he’s a cat, but via the deviant inference rule ‘from any belief whatsoever, infer that Garfield is a mammal’. Suppose that Errol competently performs the same inference, correctly inferring that Garfield is a mammal using valid inference rules. The standard account predicts that because [[Garfield is a cat]] is a normative reason, both Errol and I believe for a normative reason. TIOBR does an excellent job of explaining why that’s a bad implication.

    We can avoid this bad implication *either* by making our ‘(re)acts for a reason’ relation more finely grained or by making our reasons themselves more finely grained. Errol defends the former account in TIOBR. But part of what drives the Garfield case’s challenge to the standard account is the assumption that both Errol and I believe for the same reason because we believe for the same fact. Rejecting that assumption, as Errol appears to do in his response, opens another route to solving the puzzles on which his disjunctive account of acting for a reason depends for its motivation. If we’re willing to concede that Gary and Gerald react for numerically distinct reasons, we should say the same about Errol and me. He reacts for a good normative reason when believing that Garfield is a mammal but I react for a merely motivating reason, even though our beliefs are motivated by the same fact. If a single fact can provide multiple, distinct reasons, we’re free to assume that Gerald, Gary, Errol and Nathan all bear precisely the same ‘acts for a reason’ relation to the relevant motivating reasons. However, only some of those motivating reasons are normative reasons or the right ones to produce achievements like moral worth or ex post rationality.

    On to the second point. Errol writes,

    “It is of course a prediction of my theory that those who lack the relevant know-how lack the relevant reasons. So, Nathan’s amoralist will not possess the relevant moral reasons. I don’t think this is a particularly bad prediction. For one, Nathan’s last sentence does not follow from the prediction. I have given no theory of moral obligation. I’m sure that the best theory of moral obligation will predict that there is at least one sense in which the amoralist is morally obligated to behave well.”

    Errol is right to press me here, for I need to say more to make the argument fully satisfactory. But there’s more to say. In the first chapter of TIOBR, Errol sketches the context and motivation for his reasons-based account of rationality. Early challenges to reasons-based accounts were less-than-fully convincing because they failed to take the agent’s perspective into consideration. The main challenge for defending such an account is developing an account of the agent’s perspective. I understand Errol’s account of possession to be just such an account of an agent’s perspective.

    Some ordinary senses of ‘ought’, ‘should’, ‘obligation’, etc. are also tied to the agent’s perspective. For example, suppose that you reasonably believe that giving the patient the yellow pill will cure them but your evidence is misleading and only the green pill will cure them. In one sense, the ‘objective’ sense, you should give the green pill. But in another sense, the ‘subjective’ sense, you should give the yellow pill, since your evidence indicates that that’s the pill that will save the patient.

    I’ll assume, along with many other philosophers, that these two oughts correspond to two different kinds of reasons: objective reasons for the former and subjective or possessed reasons for the latter. Though I’ve impugned his RF credentials, Errol is a fellow Reasons-Firster. So I beg no questions against him in assuming that both of these oughts are analyzed by reasons. Indeed, it’s natural to assume that when Errol talks about possessed reasons, those are the reasons that analyze subjective shoulds like that the physician should give the yellow pill, since that’s the pill that looks like it will save the patient.

    However, these claims are incompatible with Errol’s account of possession as know-how. Suppose that an amoralist is the physician prescribing the pills and that she stands to make a nice profit by selling them on the black market instead of prescribing them. Suppose also that her evidence misleadingly suggests that only the yellow pill will cure the patient when in fact only the green pill will. The scenario entails the following three claims: what she objectively should do is give the green pill. What she should subjectively do is give the yellow pill. What she should rationally do is sell the pills.

    But Errol’s account of possession as know-how predicts something different. Errol’s account of possession implies that amoralists do not possess moral reasons, like the reasons to give the yellow pill. Consequently, Errol’s account predicts that the amoralist should, in the subjective sense, sell the pills instead. This seems like a bad prediction: the amoralist should give the yellow pill if their evidence suggests that it’s the cure, even if it’s rational for them to sell the pills.

    I agree with Errol that there’s one sense of moral obligation, the objective sense, in which the amoralist physician is morally obligated to behave well. This is the sense in which she should, objectively, give the green pill. My claim is rather that there’s also a second way that the amoralist is morally obligated to behave, the subjective sense, which is the sense of possessed reasons. The amoralist should give the yellow pill, in this sense. But the know-how account implies that she should sell the pills instead. So the know-how account of possession is wrong, for it predicts that amoralists possess fewer subjective moral reasons than they in fact possess.

  3. Hi Nathan and Errol (and anyone else reading this),

    I agree with Nathan on two points. The first is that the book is a brilliant book. I think we can all learn a lot by working through the arguments and reflecting on the positions that Errol defends. The second is that I’m concerned about the role that know how plays in the theory. The amoralist-type cases worry me. Granted that the book isn’t intended to offer a theory of moral obligation, it does tell us something interesting about what an agent ought to do, what’s rational to do, etc., and this is tied to something like the abilities of the relevant agent. (If I remember the ins and outs, this plays an important role in motivating the kind of perspectival view about what an agent ought to do that Errol likes.) So isn’t this enough of a worry? Take someone like Rosen’s ancient slaveholder. The problem with this baddie isn’t that he’s ignorant of some important non-normative facts or generally incapable of responding to reasons. The problem is that there are certain kinds of normative considerations that don’t move him (e.g., he doesn’t get that people can’t sell people to people, he doesn’t get the significance of the suffering of the slaves, etc.) and others that do (e.g., he wants to provide for his family). When it comes to one kind of consideration (what’s good for his family) he often manifests the kind of de re responsiveness that might (other things equal) merit praise, and when it comes to another kind of consideration (what it takes to respect the dignity of persons, what it takes to avoid treating slaves cruelly) he manifests de re unresponsiveness. Now suppose that he’s in one of those situations in which it’s just obvious to him that selling some slaves is the only way that he can do something good for his family. (They’re comfortable, mind you, but they want a bigger house to give the kids more space, a new minivan, etc.) It looks like the lack of know how prevents the one set of reasons from entering into the mix of reasons that determines what the agent ought to do. And it looks like the other set is unopposed. Shouldn’t they win on the proposed account?

    I guess the worry could be put like this. If the theory says that our baddie ought to sell these people, it gives the wrong result. And if our baddie is akratic and fails to sell these people, we (or, better, _we_ who know better what the relevant reason-constituting facts require) know that he’s luckily managed to avoid doing what he ought not. Would we criticize him? Perhaps. However, there’s an important notion of ‘ought’ that I would think would be sensitive to reasons that place categorical demands on the agents who are cognizant of them (even if they’re not cognizant of these facts as reasons). And I worry that building the know how conditions in means that the theory on offer is either going to struggle to give us a good account of this kind of ‘ought’ or (worse) end up pressuring us into thinking that we are mistaken when we say that this person shouldn’t sell people to people.

    I think similar problems arise in the epistemic case. I discussed this kind of case with Errol ages ago, but here’s a kind of case. A doctor knows the base rate and knows how reliable a test is, but they don’t grasp the relevance of the base rate to estimates of how likely it is that a patient has a disease given a certain test result. So, they have a body of evidence E and, let’s say, the probability on the evidence that the patient has the disease is actually quite low, but this doctor (like many of us) has a high degree of belief that the patient has the disease. Why? Because they lack the relevant kind of know how–they don’t grasp the relevance of the base rate to the probability that the patient has the disease. Now there’s a question about how confident they ought to be. We might say that they ought to have the mathematically kosher credence in spite of their failure to grasp the significance of the base rate. Or we might say that they ought to be very confident of something that isn’t very probable. I don’t like this second option, but I don’t see how to get the first option to come out as correct.
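    (An aside for readers who want the arithmetic behind the base-rate case: here is a minimal sketch of the Bayes calculation, using hypothetical figures from the familiar Harvard survey version of the case: a base rate of 1 in 1,000, a perfectly sensitive test, and a 5% false-positive rate. The function and the numbers are illustrative assumptions, not from the book or the comment above.)

```python
# Bayes' rule for P(disease | positive test).
# Hypothetical figures: base rate 1/1000, sensitivity 100%, false-positive rate 5%.
def posterior(base_rate, sensitivity, false_positive_rate):
    true_pos = base_rate * sensitivity                  # P(positive and diseased)
    false_pos = (1 - base_rate) * false_positive_rate   # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# The mathematically kosher credence is roughly 0.02, even though many
# respondents in the classic survey answered something near 0.95.
print(round(posterior(0.001, 1.0, 0.05), 3))  # prints 0.02
```

    The sketch only shows that the probability on the doctor’s total evidence is low; whether the doctor ought to match that credence is exactly the question at issue.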

    My general worry might be put something like this. Start with a very competent processor of reasons. Put them into a situation in which they know various facts (f1…fn) and think about how they’d respond given the feasible options. Now imagine less competent versions of this agent while keeping the factual knowledge and options fixed. I think it’s sort of weird that as we mess about with the know how, we get differences in the ordering of options so that what comes out as best supported by the reasons changes from one agent to the next. Depending upon the details of the case, I worry about saying that the less competent agent ought to do something that the competent agent ought not do. The only way that I can see to avoid saying that some agents who are aware of f1…fn ought to X and ought not Y and others ought to Y and ought not X simply because the second agent isn’t as competent as the first is to deny that the agent’s know how has relevance to the ordering of options (from best supported by the reasons on down). (For example, I don’t want to say that we should not sell people to people because we’re better at understanding the moral relevance of things and the ancient slaveholders should sell people to people because they understand that it is important to provide for the family. I don’t want to say that the Harvard math students of the 70s shouldn’t be very confident that the patient has the disease but that the Harvard med students should have been very confident that the patient has a disease when it’s unlikely on their evidence that the patient has it.) But then I worry that we don’t have a good way to bring know how into the picture to say what it is to possess the relevant reasons. (I think it’s good to keep that out, but that’s probably where we disagree.)

  4. This is a great discussion!
    I’m afraid to admit that although I have purchased Errol’s book, and I have read Nathan’s review, I have not yet got round to reading the book. So, I am sure that most if not all of my questions are addressed in the book, in ways that Nathan didn’t have time to explain in his (excellent) review…. But here goes.
    The book seems to defend a conception of rationality as consisting in responding correctly to the reasons that one possesses. There are, as I see it, many reasons to reject this conception.
    First, some of us have strongly *internalist* intuitions about rationality: you and your mental twin who is the victim of the evil demon are thinking equally rationally, even though there are on the face of it big differences between you and them with respect to what reasons you possess, and to what would count as responding correctly to those reasons.
    Of course, Errol is an avowed externalist, and devotes Chapter 7 to responding to this sort of objection. I will be interested to see how this response goes. But I’m willing to bet that I won’t be convinced….
    Secondly, the account seems to imply that the way in which it is rational for you to think at a time t *supervenes* on the facts that you are in a position at t both (i) to know, and (ii) to manifest knowledge of how to use to form the relevant attitude. But this seems to me to be far too limited a supervenience basis for rationality. Consider two agents who are exactly alike with respect to the facts that they are in a position to know, and to manifest knowledge of how to use to form the relevant attitudes. Nonetheless it could be the case that these two agents have significantly different *prior credences*. In that case, I would say that the rational way for them to respond to the evidence that they both have could well be different. So, rationality does not supervene on possessed reasons.
    Thirdly, if these normative reasons must also *explain* the truth about how the agent ought to think and to act, it seems to me much too demanding to claim that the agent must be in a position to know all such facts.
    I say this partly because I believe that there are many different kinds of ‘ought’. Even if we focus on the all-things-considered practical ‘ought’ — the sense of ‘ought’ in which it is *akratic* to act contrary to one’s own judgment about how one in this sense “ought” to act — we can distinguish between a more “objective” and a more “subjective” version of this ‘ought’. (For example, using the more “objective” version of this ‘ought’, we might say, retrospectively, about a past decision that we have made: “Wow, it turns out that we ought not to have done that — although of course we couldn’t have known it at the time.”) The way in which one “ought” to act in this objective sense is evidently not determined by the facts that one is in a position to know at that time — and for Errol, the reasons that one possesses at the time are a subset of the facts that one is at that time in a position to know.
    Even if we focus on the more subjective version of the all-things-considered practical ‘ought’, I doubt that what one in that more subjective sense “ought” to do is fixed by the reasons that one possesses. This is because there are surely other facts about one’s mind — about one’s desires, emotions, dispositions, prior credences, and so on — which are also part of what determines what one “ought” in this subjective sense to do. But the agent is surely not always in a position to know all these facts. So, according to Errol’s theory, these facts do not count as reasons that the agent possesses.
    I’m sure that Errol has thought about these kinds of objections to his conception of rationality. Indeed, there are probably passages in the book where he explicitly addresses them. But I thought I’d raise these concerns anyway. I certainly look forward to studying the book closely!

  5. Hi Nathan,

    Thanks for the replies (to my replies to your replies). Some thoughts:

    When it comes to our grocers, you write

    We can avoid this bad implication *either* by making our ‘(re)acts for a reason’ relation more finely grained or by making our reasons themselves more finely grained. Errol defends the former account in TIOBR. But part of what drives the Garfield case’s challenge to the standard account is the assumption that both Errol and I believe for the same reason because we believe for the same fact. Rejecting that assumption, as Errol appears to do in his response, opens another route to solving the puzzles on which his disjunctive account of acting for a reason depends for its motivation. If we’re willing to concede that Gary and Gerald react for numerically distinct reasons, we should say the same about Errol and me. He reacts for a good normative reason when believing that Garfield is a mammal but I react for a merely motivating reason, even though our beliefs are motivated by the same fact. If a single fact can provide multiple, distinct reasons, we’re free to assume that Gerald, Gary, Errol and Nathan all bear precisely the same ‘acts for a reason’ relation to the relevant motivating reasons. However, only some of those motivating reasons are normative reasons or the right ones to produce achievements like moral worth or ex post rationality.

    I don’t think being more fine grained about the reasons is going to do it. To rehearse the main argument of chapter 6, if you have a univocal view about reacting-for-reasons, then you need a relation that (i) explains why reacting for normative reasons is an achievement and (ii) explains why motivating reasons always make reactions intelligible, even in cases where the agents in question are thoroughly deluded. My claim is that there is no relation that is flexible enough to explain both of these things. Any relation that can explain (ii) will fail to explain (i) and vice versa.

    I just don’t see how allowing singular facts to provide multiple reasons is going to help respond to those arguments. One way to see this is to point out that in the bad inference case, one is simply not reacting for a normative reason, even if the consideration one is reacting to happens to provide a normative reason. So the point I initially made in response to your claims about Gary and Gerald–that what is going on is that Gary is responding to one normative reason and Gerald is responding to another–doesn’t bear at all on the question of whether you and I stand in the same relation to the reasons for which we react in the inference case. It is right, of course, that we each stand in some relation to the same consideration. The fact that singular facts can provide multiple normative reasons doesn’t really help determine whether we both stand in the same relation to that consideration. With Gary and Gerald we were just assuming they did stand in the same relation–the reacting-for-a-normative-reason relation. The trick was whether my account could explain this.

    Another way to put the point is that in order for your line to work, it needs to be that the distinctions in reasons fully explain the achievement involved in reacting for a normative reason. But, again, the kind of distinction I relied on to explain Gary and Gerald can’t be used to distinguish you and me for the simple reason that you don’t believe for a normative reason.

    As for the amoralist, you write

    Suppose that an amoralist is the physician prescribing the pills and that she stands to make a nice profit by selling them on the black market instead of prescribing them. Suppose also that her evidence misleadingly suggests that only the yellow pill will cure the patient when in fact only the green pill will. The scenario entails the following three claims: what she objectively should do is give the green pill. What she should subjectively do is give the yellow pill. What she should rationally do is sell the pills.

    But Errol’s account of possession as know-how predicts something different. Errol’s account of possession implies that amoralists do not possess moral reasons, like the reasons to give the yellow pill. Consequently, Errol’s account predicts that the amoralist should, in the subjective sense, sell the pills instead. This seems like a bad prediction: the amoralist should give the yellow pill if their evidence suggests that it’s the cure, even if it’s rational for them to sell the pills.

    I agree with Errol that there’s one sense of moral obligation, the objective sense, in which the amoralist physician is morally obligated to behave well. This is the sense in which she should, objectively, give the green pill. My claim is rather that there’s also a second way that the amoralist is morally obligated to behave, the subjective sense, which is the sense of possessed reasons. The amoralist should give the yellow pill, in this sense. But the know-how account implies that she should sell the pills instead. So the know-how account of possession is wrong, for it predicts that amoralists possess fewer subjective moral reasons than they in fact possess.

    The first point to make is that you don’t get to just stipulate these claims about what the amoralist subjectively ought to do. I take it most of the literature about subjective obligations has been about determining what an agent’s perspective is–i.e., has been about what most of my book is about. We can say similar things about debates in epistemology about what one’s evidence supports. Given this, the challenges I raise to views that don’t account for the practical condition are challenges for those views. I think there are straightforward counterexamples to the sort of views you are implicitly relying on. That’s not to say that the amoralist doesn’t present a challenge to my view; but it is to say that I’m not pulling the commitments that lead to the challenge out of thin air.

    It sounds like you want the amoralist to be (subjectively) obligated enough to bite the bullet in all the other cases. Fair enough. I do wonder, though, how you avoid the charge that your view is overdemanding. After all, there might be plenty of facts that you know that provide (objectively) excellent reasons to do various things that you simply don’t know how to respond to. For example, you might know that there are 178 days left in 2019 (you do; I just told you). This might be an excellent reason to start your latest paper project; for, as it happens, 178 days is the amount of time it will take to finish that paper this year, and you have good reason to finish it this year. Of course, though, I take it you are unlikely to know how to use that reason because you do not have such fine-grained knowledge about how long it will take you. If you think there is no ability condition on possession (and subjective obligation), doesn’t it turn out you subjectively ought to start the paper today? Doesn’t it follow that you are open to a personal criticism for not starting today, even though you simply can’t be sensitive to the normative reason in question?

  6. Hi Clayton,

    Thanks for the comment.

    And thanks for bringing up a different sort of case. A nice thing about amoralists for my opponents is that it is easy to stipulate that they don’t have the relevant know-how. Real agents will be trickier. I think if we are thinking of realistic versions of the slaveholders, it is quite plausible that they have the relevant know-how. Indeed, ancient slaveholders seem particularly likely to have the relevant know-how given common views about the capacities of slaves. Of course, you can feel free to stipulate a particularly incapacitated slaveholder if you want. You are right that if they completely lack the capacities, my view will say that those reasons do not contribute to what they ought to do.

    I think a similar thing when it comes to the base rate. Of course, it might not be that they possess sufficient reason to be very confident (it’s not clear why that follows from your stipulations), but if they really are incapacitated in the relevant way, they won’t be required to have low confidence by E. I see why you might not like this, but, as I say to Nathan above, I think completely excluding capacities leads to overdemandingness worries. After all, it might be that, through a series of amazing coincidences, the fact that you are in London right now greatly diminishes the likelihood that the USWNT will win on Sunday. So, given what you know, it is not likely they will win. But if you have no way of putting that fact to use in coming to reasonably have low confidence they’ll win, it seems harsh to think you are required to have low confidence on that basis. This seems to be analogous to the base rate case.

  7. Hi Ralph,

    Thanks for the comment. Some thoughts:

    You write,

    First, some of us have strongly *internalist* intuitions about rationality: you and your mental twin who is the victim of the evil demon are thinking equally rationally, even though there are on the face of it big differences between you and them with respect to what reasons you possess, and to what would count as responding correctly to those reasons.

    The main thesis of the book is compatible with internalism, although you are right that I defend an externalist version.

    I agree that my deceived twin possesses different reasons than I do, but I argue in chapter 7 that, despite that, our possessed reasons support the same reactions. I don’t expect to convince you.

    You write next

    Consider two agents who are exactly alike with respect to the facts that they are in a position to know, and to manifest knowledge of how to use to form the relevant attitudes. Nonetheless it could be the case that these two agents have significantly different *prior credences*. In that case, I would say that the rational way for them to respond to the evidence that they both have could well be different. So, rationality does not supervene on possessed reasons.

    What I think about this will depend on your story about their priors and their differences. If both sets of priors are rational, as I’m sure you’re assuming, then I would think there are some possessed reasons that explain why. If you think it is just a brute fact that both sets of priors are rational, I will have to disagree.

    Finally, you say

    Even if we focus on the more subjective version of the all-things-considered practical ‘ought’, I doubt that what one in that more subjective sense “ought” to do is fixed by the reasons that one possesses. This is because there are surely other facts about one’s mind — about one’s desires, emotions, dispositions, prior credences, and so on — which are also part of what determines what one “ought” in this subjective sense to do. But the agent is surely not always in a position to know all these facts. So, according to Errol’s theory, these facts do not count as reasons that the agent possesses.

    Some facts might help determine what one ought to do on my theory even if those facts aren’t possessed reasons. What makes it the case that one ought to X, on my view, is the fact that one possesses decisive reasons to X. What one needs to know how to do is use those reasons to X. Other facts might help determine why those reasons are decisive (e.g., facts about one’s desires, emotions, dispositions, prior credences). My theory does not require one to know about those things.

  8. Re the base rate, and perhaps the slave holder, this leads onto the legal treatment of negligence and what the community of one’s epistemic peers have recently realized about existing practices.

  9. Thanks Errol. I don’t think I have much more to add about the amoralist point, but you’ve given me more to think about, so I’ll discuss the first point instead. But thanks, again, for thinking through the amoralist case with me.

    To sum up, we both want to defend analyses (very very roughly) like that an agent’s act has moral worth just when and because they act for the right reason(s). The grocer case is a challenge to this analysis. You address the grocers case by (1) distinguishing between acting1 for a reason and acting2 for a reason, (2) claiming that the analysis involves only acting1 for a reason, and (3) claiming that only one grocer bears the acts1 relation. An alternative solution is to distinguish instead between the two reasons given by the fact that moves each grocer and claim that only one such reason suffices for moral worth. I proposed saying something similar in the Garfield/El Clasico case, distinguishing the two reasons, one merely motivating and one normative, given by the relevant fact. You argued that because the two reasons in the Garfield/El Clasico case are motivating/normative not normative/normative, as in the grocer’s case, the two cases deserve distinct treatments. I’m not sure I see why that follows: I can see wanting to treat the two cases differently *given* disjunctivism about acting for a reason. But the major source of motivation for disjunctivism on which you draw is its solution for challenges like El Clasico and the grocers. So disjunctivism can’t be non-circularly assumed in arguments against competing solutions, like thinking that facts are not identical to the reasons they give.

    For example, you write, “the point I initially made in response to your claims about Gary and Gerald–that what is going on is that Gary is responding to one normative reason and Gerald is responding to another–doesn’t bear at all on the question of whether you and I stand in the same relation to the reasons for which we react in the inference case.” Whether this is so is what’s at issue in our debate — that is, whether disjunctivism about acting for a reason is true. So I take it that you’re not so much providing an argument against the alternative solution we’ve discussed as restating the implications of your view, which is fine and good.

    Rather (and perhaps this is exactly what you meant to imply) the dialectical weight in your argument rests on the first point you make: “if you have a univocal view about reacting-for-reasons, then you need a relation that (i) explains why reacting for normative reasons is an achievement and (ii) explains why motivating reasons always make reactions intelligible, even in cases where the agents in question are thoroughly deluded. My claim is that there is no relation that is flexible enough to explain both of these things.” Bloviating at any length about the view that *I* think can do this is inappropriate in this venue, where your work is the focus. So I won’t do it. But let me say how I think the dialectic stands. I take it that the standard post-Davidsonian account of motivating reasons for action, according to which they’re, very roughly, what we deliberate about when acting, answers (ii). Perhaps I’ve missed your arguments against that account — but I understood your argument against the standard causal account of motivating reasons to rest on the difficulty of solving problems with deviant causal chains. So what’s at stake for the standard account is really whether it satisfies (i); and that, in turn, depends on the nuances of how we understand achievement.

  10. Hi Clayton!

    I think we have similar intuitions about the ‘oughts’ that apply to certain agents, like the ancient slaveholder or the doctor, who lack the know-how to respond to certain reasons. Subjective oughts are the ones that take the agent’s perspective — the reasons they possess — into account. Because subjective oughts appear insensitive to know-how, know-how isn’t a condition on the possession of reasons. Or so I’m (we’re?) tempted to think. Errol’s terminology provides a nice way of stating the difference. Errol distinguishes between the epistemic and practical possession conditions (though, on his view, the latter entails the former). He holds that both conditions figure in one’s perspective. I (we?) think only the epistemic one does.

  11. Hi Nathan,

    Now I think our discussion of the grocers was based on a misunderstanding. I don’t think the grocers motivate disjunctivism when we understand it the way you seem to–as a case where one is responding to the (actually normative) prudential considerations and the other is responding to the (actually normative) moral considerations. Disjunctivism is motivated by the sort of cases involved in El Clasico. Those are cases where each reacts for the same consideration even though only one is sensitive to the actual normative facts.

    In the related but distinct debate about moral worth, the grocers are sometimes used to make the sort of point I make with El Clasico. In that context, the non-virtuous grocer is thought not to be responding to the relevant normative facts–the moral ones–even though they react for a reason that happens to be a moral reason. I noticed this complication and took note of it in footnote 10 of chapter 6.

    I agree that your point would stand if I were using the grocers as you were understanding them to argue for disjunctivism. I wasn’t meaning to do that. In your version, they each react for a normative reason. If you imagine that the non-virtuous grocer is not reacting for a normative reason, then things are different. If, as I mention in n 10, they are actually bad at knowing what maximizes profit, then I think you’d need to appeal to the reacting-for-motivating-reasons relation in order to explain Gerald. There is no relation that each could stand in with their reasons that could explain Gary’s achievement and Gerald’s intelligibility.

  12. Hi Errol,
    Thanks, that’s helpful. If I understand correctly, it seems that you’re open to this kind of possibility: A and B are aware of all and only the same facts: f1…f10. A is perfectly competent when it comes to handling these reasons. In A’s case, f1-f5 favour Xing, f6-f10 favour Ying. Since the case for Xing is stronger than the case for Ying, A ought to X. B is perfectly competent when it comes to working out what f6-f10 favour but lacks the competence to grasp the significance of f1-f5. Because of this, B’s case is one in which the possessed reasons favour Ying and aren’t opposed. B ought to Y.

    To make it concrete, compare one of us to Don (a guy from the 50s who cares about his kids, drinks a bit too much, works for an ad company in NY, etc.) If we had a daughter that wanted to go to uni and enjoyed sailing, we might see that we have reason to put money into a college fund, reason to buy a sail boat, and that we ought (assuming we can do only one) to put money into a college fund. Don knows that his daughter wants to go to uni and enjoys sailing. He doesn’t get what we do about a daughter’s ends and their importance, but he gets that there’s reason to make his kids happy. Given that he doesn’t get that he has reason to promote his daughter’s ends and goals in the way that he ought, say, to promote his son’s, but gets that he has reasons to make his kids happy, he ought to buy his daughter a sail boat. This result, to me, isn’t great. And if Don did unexpectedly put money into a college fund, I don’t think we would criticise him for failing to do what he ought to do.

    I can see that intuitions will divide across such cases. (Rosen might say that he’s blameless for his failings as a father but still ought to promote his daughter’s aims and ends in the same way that he’d promote his son’s, but I don’t really like this, either.) One theoretical question that this raises which I don’t quite know how to answer is this. Suppose Don does what we’d expect and therein does what I think your theory says he ought to (i.e., buy his daughter a boat because that would make her happy rather than invest in her education even though that furthers some ends (and we have to assume that not having this end frustrated doesn’t tip the happiness balance)). And suppose that we do what we agree we ought to (i.e., invest in the education of our children). In what way does Don lack know how? What does he not know how to do? He reliably does what he ought (buy his daughter presents and never invest in her education) and we reliably do what we ought (invest in the education of our children). I think in the epistemic case, it’s sort of clear what this lack of know how might amount to (e.g., we can say that the failure to draw conclusions supported by logical inferences is a lack of know how, that failures to proportion beliefs to the evidence are a lack of know how), but it’s less clear in the practical cases what Don’s lack of know how consists in since he is no less good at doing what he ought than we are. (Is there some way in which we do things better? How should we understand that?)

    Hi Nathan,
    I think we agree on the (ir)relevance of know how. One way of putting the worry might be something like this: even if you are a kind of perspectivist, you might want to say that the situations as the agent perceives them generate categorical requirements that the agent ought to respond to even when they lack the ability to, say, discern the force of some of the facts they’re cognisant of (or, perhaps, cannot competently judge that the stronger reasons are stronger). We could put the point this way: to understand why some vices are vices, we might want to say that the deficiency they show explains why they fail to discharge the obligations they’re under (or to do what they ought to do) precisely because they lack a kind of know how. The racists, sexists, etc. struggle to see the relevance of the facts staring them in the face and we don’t want to say that their vices ‘shift’ what they ought to do.

    I might be less comfortable than you when it comes to reasons talk. I think that the subjective oughts are determined by the evidential probabilities or rational degrees of belief and values, norms, etc. I worry that reasons (understood as facts such as the facts that we are in a position to know) are too coarse grained to do the work of determining what we ought to do if, as seems plausible, rational agents do not distribute their credences uniformly across the propositions that correspond to the facts that constitute the ‘possessed’ reasons. (I worry that we can generate three options cases to create difficulties for any reasons-based account of ‘ought’ (provided that we think of reasons as including facts about the situation). And I don’t see a good way to deal with uncertainty in the reasons perspectivist views. I might be missing something about the merits of reasons-based views, but it strikes me that something close to Michael Zimmerman’s (from Living with Uncertainty and ignoring the stuff about evidence of value) view gives us everything that the reasons perspectivists want to say but without some of the difficulties that their view faces because of the inclusion of reasons (understood as facts).)

  13. Dear Errol,

    Thanks for the helpful clarification. I can see now how beginning with El Clasico and then building a theory to deal with it would subsequently lead to treating the grocers as you do. I wonder whether, from a theory-neutral point of view, the default approach is to treat the two cases differently or to treat them in a unified way. But that seems to me to be a broader question than I can deal with adequately here. So I’ll conclude by thanking you again for such a provocative and rich book.

    Dear Clayton.

    Yes, I am aware of worries about whether the weighing explanations (putatively) given by reasons are compatible with those given by credences, roughly speaking. You’re likely aware that Mark Schroeder’s recent paper in Ethics addresses this point, but his solution involves expressivism, a controversial commitment. Other things equal, it would be preferable if there were a second solution that did not involve such a heavyweight commitment. But I’m not aware of one and I certainly haven’t figured one out. So I suppose I’m optimistic that inquiry will identify the dispensable assumption in our theory of reasons responsible for the incompatibility, such that we needn’t adopt expressivism. But I see where you’re coming from.

  14. Hi Nathan,
    The issues concerning uncertainty raise interesting problems for the reasons-centric approaches to ‘ought’. Before I saw Mark’s paper, I discussed some of the problems having to do with uncertainty (in a PPR paper ‘Being More Realistic about Reasons’) arguing, inter alia, that we won’t get rationality and reasons on the same page by applying an epistemic filter on reasons in the way that Dancy, Kiesewetter, and Lord suggest. I think that someone who wants to defend Errol’s view could say that the facts about uncertainty, risk, etc. are among the facts that help to determine what we ought to do but there might be reasons to think this is problematic. (I’m convinced that there are, but discussing them gets into complicated discussions of the cases above in which thinkers commit the base rate fallacy where we have to work out how precisely the known uncertainty bears on what we ought to do, believe, and feel.)

    There’s a different approach that doesn’t seem to lead to expressivism that retains some of Lord’s framework. You were talking about a subjective and objective ‘ought’ above, but let’s modify that talk just a bit and talk about ‘primary’ and ‘secondary’ readings of ‘ought’. (I think talk of an ‘objective ought’ might come with lots of baggage.) We could say that normative reasons that apply to an agent determine what we ought-p to do and then characterise ought-s in terms of the factors that determine what we ought-p to do in the following way — we could try to consequentialise the view (e.g., by representing acting against a reason as a kind of bad outcome or by representing acts in ways that we ought-p not as a bad outcome) and then characterise what we ought-s to do in terms of the ‘badness’ of the possible failures to conform to a reason and the agent’s rational degrees of belief. This would require introducing different readings of ‘ought’ (which people have mixed feelings about) but it would let us say that there’s a sense in which the relevant reasons are normative (i.e., they determine what we ought-p to do) and they connect up to rationality (which might be more closely connected to what we ought-s to do). The rough idea would be that a rational agent would do what she ought-s to do because doing so is the way to deal with uncertainty about the possible reasons that apply to her that determine what she ought-p to do. She might never know with certainty which reasons place demands upon her, but provided that she knows which ones might and how ‘good/bad’ it would be to conform/fail to conform, she could rationally respond accordingly. This is inspired by some recent work on decision-theory for non-consequentialists (e.g., Seth Lazar, Chad Lee-Stronach, and Kristian Olsen) and I think it might be a way of capturing parts of Errol’s view but it might require modifying parts of it, too.

    This won’t give us everything that Mark’s view is designed to give us (e.g., it won’t vindicate his intuitions about the more than three envelope cases), but I’m not certain that we should want a view that says what his view says about such cases. It’s an interesting question whether such views undermine some of the motivation for introducing the epistemic filter. Once you start thinking about the importance of the probability that you’ll act against some reason, it’s less clear why you’d think that the only facts that are normative are ones that pass through some epistemic filter, but it seems like a promising way for someone who likes reasons and is worried about uncertainty to go. (I think it works really nicely in the epistemic case because of how it handles preface cases.)

  15. Hi Clayton,

    Yes, I agree that there are going to be cases like this.

    If I understand correctly, it seems that you’re open to this kind of possibility: A and B are aware of all and only the same facts: f1…f10. A is perfectly competent when it comes to handling these reasons. In A’s case, f1-f5 favour Xing, f6-f10 favour Ying. Since the case for Xing is stronger than the case for Ying, A ought to X. B is perfectly competent when it comes to working out what f6-f10 favour but lacks the competence to grasp the significance of f1-f5. Because of this, B’s case is one in which the possessed reasons favour Ying and aren’t opposed. B ought to Y.

    As for your particular case: I think you are going to have to make Don really unusual in order to say that he lacks the relevant know-how. I think the actual (actually fictional?) Don clearly has the relevant know-how. Of course, other parts of his psychology might prevent him from manifesting it, but he is not completely insensitive to the normative importance of education for regular humans.

    Now, of course, you can stipulate that Don doesn’t have the know-how, and is thus completely blind when it comes to the normative importance of education for his daughter. He will thus be seriously incapacitated. My theory would then make the predictions you note. Of course, it might also be that Don is culpable for his incapacitation. So we could criticize him on those grounds.

    In this case, what would he not know how to do? He wouldn’t know how to save for his daughter’s education in virtue of the fact that, say, the fact that her intellectual development is important is a reason to save. He would lack a particular disposition that has as its triggering condition certain normative features of certain facts and has as its output a particular action. As I said, I think realistic versions of Don do have this disposition, but if we want to stipulate he doesn’t, this is what he lacks.

    There are lots of bad things we can say about fully incapacitated Don, and thus lots of ways in which we’re better off than him. He is blind to part of the normative world. We can see that part of the normative world. He also fails to do what he objectively ought to do much more often than we do (at least when it comes to how we treat our daughters). He does a lot more morally objectionable things than we do, etc. He is reliable at doing what he deliberatively ought to do only because he is seriously debilitated. That itself is objectively bad, and my theory doesn’t prevent us from saying that.

  16. Thanks, Errol — that’s helpful!
    As for the first half of my third point, I take it that you will just deny that there is a fully “objective” version of the “deliberative practical ‘ought'” of the sort that I was pointing to. If there were such an ‘ought’, then that would be another case where what you “ought” to do cannot be explained by any fact to the effect that (as you would put it) you “possess a decisive reason” in favour of the relevant action or attitude.

  17. Hi Errol,

    Thanks, that’s very helpful. For what it’s worth, I don’t think Don is _that_ weird. I suspect that lots of fathers care about a kind of happiness for their children even if they do not see some of their projects or interests as reason-providing. I would think that the world is still filled with fathers who want their daughters to live lives that are pleasurable even if they don’t get that their projects, education, and the like matter.

    If we stipulate that Don is pretty systematically insensitive to one kind of reason, then what you say seems right (i.e., ‘He wouldn’t know how to save for his daughter’s education in virtue of the fact that, say, the fact that her intellectual development is important is a reason to save’). But then I wonder how many bad things we can say about him.

    a. In his case, it’s not clear that it would be true to say that her potential intellectual development is an important reason _for him_ to save. (Or maybe we can say that it is a reason for him to save, but not one that he possesses? But if it’s still true that such a reason isn’t any reason for him to save, then in criticising him it seems that, in a sense, we’d be criticising him for responding correctly to the relevant reasons.)
    b. In his case, it seems that the theory says the following: he knows how to do what he ought to do when it comes to providing for his daughter, and he manifests this know-how. He could be, according to the theory, just as good at doing what he ought to do when it comes to his daughter as we might hope to be, even if he systematically deprives her of educational opportunities. (I don’t see how we could deny the know-how in this case. It’s consistent with his failing to provide for her education that he does just what he ought to do (providing her with sailboats and luxuries while refusing to put that money towards an education).) And he could believe correctly that he is just as good at doing what he ought as anyone could hope. (He could believe correctly that he has a perfect track record.) So, if Don can know that he does everything that he ought to do when it comes to providing for his daughter, it seems like he’s doing pretty well on one dimension of assessment. (We could at this point wheel in some objective ought to say that he hasn’t done what he objectively ought to have done, but is there any interesting sense in which there is an objective ought that he acts against that is normative?)
