Welcome to what we expect will be a very interesting and productive discussion of Samuel Asarnow’s “Internal Reasons and the Boy Who Cried Wolf.” The paper is published in the most recent issue of Ethics and is available here. Ulf Hlobil has kindly agreed to contribute a critical précis, and it appears immediately below. Please join in the discussion!

Ulf Hlobil writes:

Samuel Asarnow’s “Internal Reasons and the Boy Who Cried Wolf” is an admirably forceful and clear attack on what Asarnow dubs the “if I were you” argument for reasons internalism.  Here, I want to raise two critical questions.  Before I do that, however, a brief summary is in order.

Consider the following case (not in Asarnow):

Sweatshop:  I want to buy new sneakers today.  The sneakers are available in store X and in store Y, and store X is closer to me than store Y.  I falsely believe that store X is closed and store Y is open; in fact, both are open.  Now, the sneakers sold at store X stem from a batch that was produced in a sweatshop; the batch sold at store Y was produced ethically.  Furthermore, it is a moral truth that one ought not to buy sneakers that were produced in sweatshops.  However, I have no goals or values that would speak against buying sneakers that were produced in sweatshops.  My motivation is out of sync with morality.

Question:  Given these facts and supposing that other things are equal, is there (sufficient, undefeated) normative reason for me to go to store X, to store Y, or neither?

Let’s agree that I have reason to do F iff, given the right way to fix outcomes O and information state I, doing F will promote outcomes O if I is accurate.  To answer our question, we have to answer two further questions.

(Q1)     Are the outcomes, O, that are relevant for fixing my reasons determined exclusively by what goes on in my mind, rather than the normative truths?

Let’s say that the relevant facts about outcomes are internal if the answer is “yes” and external otherwise.

(Q2)     Is the information state, I, that is relevant for fixing my reasons determined exclusively by what goes on in my mind, rather than how the world really is?

Let’s say that the relevant information is subjective if the answer is “yes” and objective otherwise.  Now, there are four ways to answer these two questions:  (1) no / no; i.e. objective externalism.  (2) no / yes; i.e. subjective externalism.  (3) yes / yes; i.e. subjective internalism.  And, finally, (4) yes / no; i.e. objective internalism.  Asarnow identifies this last option with reasons internalism.  Reasons internalism is the view that “[f]acts about what there is normative reason for an agent to do are objective internalist facts. That is, they are facts about which actions promote a set of outcomes determined by A’s mind, given a body of information representing how the world really is” (p. 39).

According to reasons internalism, in Sweatshop, there is normative reason for me to go to store X.  After all, store X is conveniently close, and (unbeknownst to me) it is open.  Options 1-3 all agree that I don’t have sufficient reason to go to store X, though they disagree about why.  Both kinds of externalism (options 1 and 2) yield this result because here the relevant outcome is that I buy sneakers that were produced ethically.  Subjective internalism (option 3) yields this result because the relevant information state includes that store X is closed.

Reasons internalism is best understood, according to Asarnow, as a claim about the reasons (let’s call them “deliberative reasons”) that determine what one ought to do, in the sense of the so-called “deliberative ought.”  So, while all options, 1-4, may succeed in defining a coherent notion of normative reason, the crucial question is which of these notions is the one that answers the question “What should I do?”, as we usually raise this question in deliberation.

Asarnow Against the “If I Were You” Argument

With this framing in place, Asarnow looks at what he calls the “if I were you” argument for reasons internalism.  The idea behind the “if I were you” argument is this:  We can take up an agent’s deliberative perspective and consider, given that perspective, what we should do in their situation.  Call that “if I were you” thinking.  Since “if I were you” thinking adopts the perspective of the agent’s deliberation, what matters from within this perspective are exactly the agent’s deliberative reasons.  Now:

(P1)     A suggestion that an agent, A, does F is appropriate in “if I were you” thinking iff there are normative (deliberative) reasons for A to do F.

(P2)     A suggestion that an agent, A, does F is appropriate in “if I were you” thinking iff, given what the world is really like and A’s goals and values, it would make sense for A to do F.

(C1)     Therefore, there are normative (deliberative) reasons for A to do F iff, given what the world is really like and A’s goals and values, it would make sense for A to do F.

C1 is a formulation of reasons internalism, i.e., objective internalism.  A lot is packed into the notion of “making sense” in P2.  The idea is that the suggestion that A does F must make suitable contact with the agent’s rational capacities and her take on her situation.  The agent must be able to appreciate the force of the suggestion without this requiring a conversion experience or the like.  I will leave it at that here.

Asarnow rejects P2 and holds instead that a suggestion that A does F is appropriate in “if I were you” thinking iff, given A’s beliefs and A’s goals and values, it would make sense for A to do F.  Thus, according to Asarnow, suggestions in “if I were you” thinking are constrained by the reasons recognized by subjective internalists.  Here is where Aesop’s The Boy Who Cried Wolf appears:

Boy:  A shepherd’s boy who cried “wolf” too often in the past sees a wolf and cries “wolf.”  Past experience with the boy is such that the villagers, upon hearing the boy, now rationally believe that there is no wolf around.

Question:  Do the villagers have normative reason to check on their flock?

If we accept P1, this question reduces to:  Is the suggestion that the villagers check on their flock appropriate in “if I were you” thinking?  In response to that question, the reasons internalist must hold (on pain of rejecting P2) that what matters is just whether or not there actually is a wolf around.  What the villagers believe and what it would be rational for them to believe is irrelevant.  Asarnow argues, however, that considerations parallel to those that lead internalists to want the outcomes to be fixed by the agent’s mental states apply here as well.  E.g., the villagers cannot appreciate the force of the suggestion by good reasoning alone, i.e., there is no sound deliberative route to the belief that there is a wolf around.  Hence, if those considerations are cogent, then we should let the information state be fixed by the agent’s mental states.

Why Not Subjective Internalism?

Asarnow’s strategy is to force the advocate of the “if I were you” argument into subjective internalism.  And he thinks that this will put considerable pressure on reasons internalists, for two reasons:  First, in Williams’s famous example, the agent has normative reasons not to drink the gasoline that she believes is gin. Conversely, that her glass contains gin cannot be a normative reason for her to drink its content because reasons must be facts.  These verdicts are incompatible with subjective internalism.  Second, accepting subjective internalism would reduce the debate about reasons internalism to a merely verbal debate.  For, disputes between objective externalists and subjective internalists are best understood as merely verbal, says Asarnow (p. 49).  They mean different things by “normative reason.”  And Asarnow thinks that subjective internalist reasons are not plausible candidates for being deliberative reasons.

That brings me to my first question:  Why are subjective internalist reasons poor candidates for being deliberative reasons?  In Williams’s example, the reason that is the “right kind of thing to close deliberation” – to use Schroeder’s (2011, p. 9) gloss on the deliberative “ought” – is the (false) consideration that the glass contains gin.  The agent is not morally culpable or criticizable or deliberating badly if she drinks what is in the glass on the grounds that it contains gin.  That brings us to the second point.  What matters in good deliberation as well as for matters of moral culpability and criticism is what the agent believes (or rationally believes), not the facts.  Hence, if the deliberative reasons that we are interested in are the reasons that determine the quality of deliberation, moral culpability, and criticizability, then the points Asarnow mentions don’t do anything to show that deliberative reasons are not subjective internalist reasons; or so it seems to me.

I admit that subjective internalism will probably seem unattractive to many reasons internalists.  That doesn’t imply, however, that it is false that deliberative reasons are subjective internalist reasons.

Is the Internalist’s Notion of Reasons Useless?

My second question is this:  Asarnow admits that the objective internalist notion of a normative reason is coherent.  (Thanks to Asarnow for clarifying this in private correspondence.)  He suggests that the objective externalist’s notion of reason is useful because it tracks, at least roughly, what would be the optimal course of action.  The subjective internalist’s notion of reason is useful because it tracks, at least roughly, what it would be rational for the agent to do.  It would be rational for me to go to store Y because I believe that store X is closed.  And it would be the optimal course of action, too, because the sneakers sold in store Y are ethically produced.  Asarnow seems to think that there is no use for the objective internalist’s notion of reason, and especially no use that comes close to the work done by the notion of deliberative reasons.  But why?

It seems to me that the objective internalist’s notion of a normative reason is useful in certain contexts of advice, and that this is what the “if I were you” argument is trying to bring out.  If you know that store X is open, it would make sense for you to give me advice by saying:  “I think you should go to store Y because the batch of sneakers sold at store X was produced under unethical conditions.  As I know that you really don’t care about that kind of thing, however, I guess you should go to store X because it is actually open, contrary to what you believe.”  Of course, we also sometimes give advice without holding the agent’s goals and values fixed; as when you tell me that I should care more about the production conditions of the goods I buy.  There is, however, also the kind of advice where I offer you my best thinking on how to realize your goals and values, given (what I take to be) my knowledge of the descriptive facts.  I can, e.g., give you advice on where to go if you are into vegan food, collecting stamps, or bird watching, even though I couldn’t care less about these things.  This kind of advice seems to be constrained in much the way that objective internalists suggest “if I were you” thinking is constrained.  If you are, e.g., a villager who hears the boy cry “wolf” and I know that there is actually a wolf around, I might say:  “I personally think that sheep are overrated and wolves deserve a good meal, but given that you care so much about your sheep, I suggest that you believe the boy today and check on your flock.”

Now, is this context of advice a context where we appeal to deliberative reasons?  For what it’s worth, Mark Schroeder tells us that it is a hallmark of the deliberative “ought” that “it matters directly for advice” (Schroeder 2011, p. 9).  I am not sure that I have a firm enough grip on the idea of deliberative reasons to know whether this applies to them too.  In any event, I cannot find anything in Asarnow’s paper that rules this out.  The three responses that Asarnow considers and dismisses at the end of his paper, e.g., all try to establish that suggestions in “if I were you” thinking must be evaluated in light of objective internalist reasons.  He might be right that this evaluation is not obligatory, but that doesn’t imply that we cannot choose to adopt it or that, if we adopt it, the reasons we talk about are not (some kind of) deliberative reasons.

Schroeder, M. (2011). Ought, Agents, and Actions. Philosophical Review 120 (1):1-41.


13 Replies to “Samuel Asarnow: ‘Internal Reasons and the Boy Who Cried Wolf.’ Précis by Ulf Hlobil”

  1. Hi Sam,

    Great paper! I wonder if I could pick up on Ulf’s last question about advice and press you on it a little more. It seems that advice in the “if I were you” mode that involves correcting for the agent’s relevant false beliefs, with the aim of helping the agent pursue what they want or value, is possible. (I don’t take you to disagree on this point.) It also seems like a useful sort of thing to be able to do, and something we do fairly often with friends and family: not sharing each other’s particular projects, and often not even all the relevant values (“what do they see in X?”), we might nonetheless offer each other relevant information about how to pursue our projects or realize our values, when we notice each other reasoning on the basis of some false descriptive information. (We might also offer each other relevant information about what a desired end is really like; on this, see Sobel’s (2009) “Subjectivism and Idealization,” in _Ethics_.) Moreover, it seems that agents can often fluidly take such advice into account in deliberation, updating their beliefs; and intuitively, offering advice while also offering such new information doesn’t simply speak past the agent’s deliberative question “what should I do?” To the contrary, *if* the agent is deliberating about how to get what they want, or about how to realize particular values they hold–so that their “what should I do?” question is best interpreted as “what should I do in order to realize my ends and values?”–then it looks like their particular aim in deliberation is best served by advice that corrects for errors in information that is relevant to realizing their ends or values. That is, “objective internalist” advice looks most relevant to the agent’s deliberative question, assuming that that question is best understood as “what should I do in order to realize my ends and values?”

    Now suppose the internalist can also argue that the agent’s deliberative question is always, at least tacitly, “what should I do in order to realize my ends and values?” Then the above considerations would seem to provide some support for the idea that advice in the “if I were you” mode, where that is understood to be advice that is addressed to the agent’s deliberative perspective / question, would be objective internalist advice. (Or at least, internalist advice that is *more* objective than simply taking the agent’s beliefs as they come.)

    Would you agree with this? If not, why not? Or do you perhaps worry that the suggestion I made about the agent’s deliberative question is too controversial for the “if I were you” style argument to usefully lean on?

    I also had a slightly different question about how objective internalists could defend their preferred disambiguation of “if I were you” thinking. You note, in section IIIA (pp.48-49), some drawbacks of subjective internalism about reasons. You say that objective internalism is “substantially more plausible” with regard to its extension (p.48), as illustrated by e.g. Williams’s famous “gin and petrol” case (p.49). And you note that reasons seem to be facts or true propositions, which subjective internalism can’t account for. My question is simply: Why don’t these constitute (some) principled reasons for objective internalists to prefer their disambiguation of the kind of “if I were you thinking” that they claim is relevant to normative reasons?

    Thanks in advance for any responses. Again, a great paper.


  2. Samuel Asarnow writes:

    Thanks so much, Ulf, for reading my paper so carefully, for the insightful and concise précis, and for two excellent questions. I’ll answer your second question in this comment, and then post an answer to your first question later today.

    Let me begin with one piece of background about how I think about reasons concepts and ought concepts. My view is in one way pluralistic and in another way chauvinistic. I’m pluralistic because I think there are indefinitely many coherent reasons concepts (and ought concepts): legal reasons, strictly moral reasons, subjective internal reasons, and so on. Officially: for any triple of an information set, an outcome set, and a promotion function, we can define a coherent reasons concept (and a coherent ought concept). I’m happy to say each picks out a real property.

    I’m chauvinistic, however, because I think only some of these reasons properties matter very much, from the standpoint of practical philosophy. For example, I think subjective internalist reasons are important because I think they track something like instrumental rationality, which I think is important. (I argue for that in a paper in progress.) By contrast, legal reasons matter for lawyers, but they don’t per se matter in practical philosophy (at least, not in general).

    More importantly, I think there is a unique reasons concept that is particularly important in practical philosophy. Following some others, I call it the “deliberative” concept of a reason. (I don’t especially like that name–it might be better to call it the “all-things-considered” sense or, following John Broome’s “Linguistic Turn…” paper, the only truly “normative” sense). This is the sense of reasons/oughts: (a) for which the enkrasia requirement and judgment internalism are supposed to be true, (b) which is at stake when we ask whether instrumental rationality is normative; (c) and which is at stake in the debate about whether morality is universal (i.e., whether every adult human has normative reasons to do what morality requires of them).

    I’m going to label this idea with a name, in case anyone else wants to talk about it:

    *Normative Chauvinism*: There is a sense of “reason” and “ought” that is uniquely important in practical normative theorizing; it’s the one at stake in debates about enkrasia, judgment internalism, the normativity of rationality, and the universality of morality.

    Let me add that I’m not completely confident Normative Chauvinism is true, though I sort of don’t know how to do practical philosophy without appealing to it.

    I take the internal vs external reasons debate to be a debate about reasons in the deliberative sense. I take that to be so because it’s supposed to be in part a debate about whether morality is universal.

    I’m now in a position to answer Ulf’s second question, which was whether I think objective internalist reasons are useless. I think they are *not* useless, though that’s not clear in the paper. I actually agree with Ulf’s important point that there is a *kind* of advice-giving where it makes sense to reason on the basis of someone’s goals and values—even when you think their normative and evaluative judgments are false. I have some sympathy for desire-satisfaction theories of well-being, and so I’m tempted to say that objective internalist reasons track something like what promotes a person’s well-being (though that may be wrong…don’t hold me to it).

    My main point in this paper is that reasons in the deliberative sense (i.e., all-things-considered/normative reasons) are objective externalist reasons, not objective internalist ones. I take it that Williams and Manne (and other Reasons Internalists) would disagree with this. Their view is that *deliberative* reasons are objective internalist reasons, since otherwise their view would not threaten the universality of morality. And, to be frank, that’s what really worries me (and many others) about Reasons Internalism.

  3. Thank you so much for the clarifications, Sam. Great paper! Like you, I am also a normative chauvinist of some sort. Unfortunately, I am not sure about enkrasia, judgment internalism, or the normativity of structural rationality. But I agree that there is a privileged sense of “reason” that matters for morality and the universality of morality. Also, insofar as Reasons Internalists want to threaten the universality of morality, I emphatically reject the view.

    I admit that the kind of advice that Hille and I were talking about is not the kind of advice that is constrained by moral truths. So, I agree that objective internal reasons are not the reasons that matter for morality. Hence, I think we are in agreement regarding the second issue I raised.

    Let me try to clarify a bit how I see the connection between this and the first issue. When I say that I think morality is universal, I mean roughly the following (I am wondering whether you mean the same): No one could ever have good (sufficient, undefeated) reasons to do something immoral. And I am inclined to understand that as equivalent to: Necessarily for all F, S, and C, if doing F is morally impermissible for S in her situation C, then S couldn’t have come to do F by way of good practical reasoning in C. If that is correct, then the privileged sense of “reason” is the sense in which S has (sufficient, undefeated) reasons to do F in C iff S could come to do F by way of good practical reasoning in C. Hence, our overarching question becomes: Does (a) information that S doesn’t possess in C or (b) goals and values that S doesn’t possess in C make a difference to whether S can come to do F by way of good practical reasoning in C? I am inclined to say that information that S doesn’t possess doesn’t matter. Hence, I believe that the privileged sense of “reason” is subjective, whether or not it is internalist or externalist. If that is right, I have trouble seeing how forcing the objective internalist to subjective internalism undermines her claim about the privileged sense of “reason.” Of course, there may be excellent reason to say that the privileged sense of “reason” is subjective externalist and not subjective internalist. But it is difficult to see how the argument regarding the boy who cried wolf etc. could bear on that issue. So, to sum up, I am on board with the idea that objective internalist reasons are not the privileged reasons in which we are interested. But I am wondering how much beyond that we can get from your arguments.

  4. Hi Hille,

    Thanks for your very helpful and clearly put questions. For your first question, you write:

    “Now suppose the internalist can also argue that the agent’s deliberative question is always, at least tacitly, “what should I do in order to realize my ends and values?” Then the above considerations would seem to provide some support for the idea that advice in the “if I were you” mode, where that is understood to be advice that is addressed to the agent’s deliberative perspective / question, would be objective internalist advice. […] Would you agree with this?”

    My reaction is to say that there are two ways to understand the question “what should I do in order to realize my ends and values?”. On one of those readings of the question, I agree that it’s the question relative to deliberative reasons, but I think it points to objective externalism. On the other reading, I think it points to subjective internalism.

    Here’s why. On one reading, the question holds fixed one’s descriptive beliefs (which actions, given my beliefs, promote my ends and values?). On the other reading, the question does not hold those fixed (which actions will, in fact, promote my ends and values?) The former reading points toward subjective internalism. The latter reading, however, points toward objective externalism. That’s because (in my view) it makes no sense for someone to care about whether their descriptive beliefs are true or false without caring about whether their evaluative and normative beliefs are true or false. So on the latter reading, the question really is: which actions will, in fact, promote my ends and what is really valuable?

    If Reasons Internalists want to hold that the deliberative question you highlight is one where the agent doesn’t care about the truth of their normative and evaluative beliefs, they either need to accept some kind of nonfactualism about normativity (e.g., old-fashioned non-cognitivism) or a view according to which the truthmakers for an agent’s normative beliefs are facts about her own mind (e.g., indexical relativism). Yet Reasons Internalists have not typically taken themselves to be committed to either of those views.

    I have to think some more about your second question — it’s a good one and I’m not quite sure what I want to say.

  5. Hi Ulf,

    I seem to have fixed my commenting problem, so I can now post my reply to your first question in the original comments. I think along the way I’ll say some things that will sort of reply to your follow-up comment.

    You write:

    “Why are subjective internalist reasons poor candidates for being deliberative reasons? […] [I]f the deliberative reasons that we are interested in are the reasons that determine the quality of deliberation, moral culpability, and criticizability, then the points Asarnow mentions don’t do anything to show that deliberative reasons are not subjective internalist reasons; or so it seems to me.”

    This is a terrific question.

    In response, I would want to draw a sharp line between, on the one hand, whether someone did what they have all-things-considered reason to do (i.e., normative reasons, deliberative reasons, or whatever), and, on the other, whether someone deliberated well, whether they are morally culpable, and whether they are morally blameworthy. I don’t think there’s any *direct* connection between all-things-considered reasons and the other things you mention. (Because of that, my borrowed terminology of “deliberative” reasons was probably a mistake, since it invited a misunderstanding.) I think beliefs about all-things-considered reasons figure into the enkrasia norm and obey judgment internalism, but I don’t think that facts about a-t-c reasons are directly related to the other things you mention. I think that all three of those things are subjective, in being relative to what an agent believes.

    My official view, not described in the paper, is that the “prima facie good reasoning” relation is what’s primitive. Then we use that relation to build out both an objective externalist reasons concept (roughly: what would be good reasoning, if you believed all of the descriptive and normative/evaluative facts), and a subjective internalist reasons concept (roughly: what you might decide to do, given your mind as it is). The latter (which I discuss in more depth in an unpublished manuscript) lets us define a notion of quality of deliberation. My personal view is that moral culpability and moral blameworthiness are much more complicated, and we’re not going to get a theory of them directly from a theory of reasons, nor should we expect to.

    I do worry a little bit that the all-things-considered sense of reason/ought should be in at least some way subjective, because of three envelope / miner’s paradox-type cases. I don’t have an official view about this, but my tendency is to think that handling those cases requires appeal not to the agent’s beliefs as they are, but to the descriptive facts “filtered” by the agent’s ignorance (i.e., a subset of the facts, excluding facts the agent knows she doesn’t know). I’m not sure how to tell that story, though.

  6. Fantastic paper Sam, and very interesting discussion.

    I have a thought about a possible further reason why it might make sense for advisors in ‘if I were you’ mode to take account of an advisee’s goals and values but not her beliefs. Suppose we accept a kind of pessimism about normative testimony: bare testimony that there’s reason to F can’t (or typically doesn’t) make it rational to accept that there’s reason to F. This view generates a restriction on when it makes sense to advise someone that there’s reason to F. Since the advisor’s mere say-so can’t be enough, the conclusion that there’s reason to F must be accessible through reasoning from the advisee’s goals and values.

    By contrast, of course, bare testimony as to a descriptive claim can make it rational for an advisee to accept that claim – even if, independently of the testimony, there’s no rational route from the advisee’s beliefs to that claim. Thus it can make sense to advise people on the basis of what one takes to be the facts, rather than on the basis of what is independently rationally accessible from an advisee’s beliefs.

    I wonder then whether this kind of pessimism might be able to vindicate an asymmetry in how advisors in ‘if I were you’ mode should treat normative and descriptive facts. Of course, pessimism is a controversial view. Nonetheless, I’d be interested to hear any thoughts on whether this might be a way for the internalist to go.

  7. Hi Jonathan, I really like your point about normative testimony pessimism. Doesn’t it raise the question, though, why “if I were you” thinking is constrained by what we can impart via testimony, rather than, say, what we can impart by giving explanations (or whatever else transmits understanding)?

    Thanks Sam for your answer to my first question; that is very helpful. I am wondering why you go to Miners and Three Envelope Cases for support for the idea that the privileged reasons are subjective. Background: I agree that culpability and blameworthiness might be tricky and should be set aside. Since I want to stay away from enkrasia and judgment internalism, I think that leaves us with the relevance for morality (and its universality) as the mark of the privileged reasons on which we can agree. Now, it seems to me that we can see in cases that are much simpler than Miners or Three Envelope Cases that what morality requires of me varies with my (perhaps: rational) beliefs. E.g., if you are sick and I am a doctor, then whether morality requires that I give you penicillin depends on whether I (rationally) believe that you are allergic to penicillin. It does not depend on whether you are actually allergic to penicillin, or so it seems to me. Perhaps I am off track here, but it might clarify things for me to understand why you think we have to go to Miners and Three Envelopes to feel the need to say that the privileged sense of “reason” is subjective.

  8. Hi Jonathan,

    Thanks for taking the time to weigh in! I like your point about normative testimony. (And your follow up, Ulf.) I hadn’t thought of it in quite that way, and I’ll have to think more about this connection.

    My first inclination is to think that Reasons Internalists would have to adopt a *very strong* type of pessimism in order to pursue this strategy (much stronger than Alison Hills’ pessimism, if I understand it correctly). Specifically, it looks to me like salvaging the “If I Were You” argument would require something like the idea that it is almost never *epistemically rational* to change *any* of our normative or evaluative beliefs on the basis of testimony. I don’t think Hills believes that, and I don’t think it’s a super mainstream view, though please correct me if I’m wrong.

    Ulf’s point is also a good one: Reasons Internalists would need to combine this strong pessimism about evaluative and normative testimony with a strong optimism about descriptive testimony, according to which it can quite generally be epistemically rational to change any of our descriptive beliefs on the basis of testimony. This is the kind of asymmetrical view of evaluative and descriptive matters that I argue is hard to motivate (in Section IV.B of the paper).

    More broadly, I think we can see appeal to this kind of pessimism as a nice example of how Reasons Internalists can respond to my objection by adopting a highly controversial epistemological commitment. That’s in the spirit of the main project of my paper, which is to argue against the idea that Reasons Internalism can be motivated just by reflection on relatively uncontroversial intuitions about reasons. Instead, I think motivating Reasons Internalism would actually require appeal to heavy duty commitments in metaethics, epistemology, or some other area.

  9. Hi Hille,

    Thanks again for weighing in. I think I have an answer to your second question, which I punted on above.

    If I understand your second question, you’re imagining a Reasons Internalist who sees that the “If I Were You” argument really supports subjective internalism, but who sees the drawbacks of subjective internalism, and so proposes objective internalism as a kind of compromise. It goes some way toward capturing the relation between reasons talk and “if I were you” thinking, but it doesn’t have the problematic consequences of subjective internalism.

    I had not thought of this suggestion, and I can imagine some philosophers being tempted to argue this way. I think my reply is structurally analogous to my reply to your first question. These philosophers will have to explain whether they allow us to correct an agent’s evaluative and normative beliefs, or not. If they don’t, then they need to explain why not. That’s hard to do, unless they appeal to old-fashioned non-cognitivism or indexical relativism. They’ll also have to explain why even reasons-ascriptions that involve normative and evaluative predicates appear to be factive, since their view would deny that they are.

    If they do allow us to correct normative and evaluative beliefs, then their view has a very substantially different extension than that of the Reasons Internalism familiar from Williams and Manne, and I think it no longer deserves the name Reasons Internalism. (It doesn’t contradict the universality of morality, for example.) I quite like the resulting view, however — indeed, this is basically the view I defend in my 2016 Ethics paper and my 2017 PPR paper. Very, very roughly, I claim that what someone has reason to do depends on what they could come to decide to do on the basis of their desires, if they had true descriptive and normative/evaluative beliefs.

  10. Hi Ulf,

    Thanks for pushing me on this issue in your comment from 4:19. What you say is very clarifying. I clearly remember having thought in the past that I needed to treat ignorance (e.g. the miners paradox) differently from false belief, by employing the filtering mechanism. However, I can’t for the life of me remember why I thought that. It might have had something to do with my commitment to the enkrasia norm. Right now, it seems to me that you’re right and I ought to treat them the same way.

    Specifically, here’s how I treat false belief cases. In those cases, my inclination is to distinguish a subjective externalist from an objective externalist (moral) ought, and to argue that there is a distinctively important concept of moral permissibility that’s linked to the objective externalist ought. So the doctor in your case acts impermissibly, though they do what they subjectively ought to do, and so act blamelessly. (Well, they act blamelessly as long as they meet some other criteria as well.)

    Unless I can figure out why I thought I had to treat ignorance cases differently, that’s how I would treat them as well.

    It now occurs to me that there is a complication here (noted also by Mark Schroeder, in his “Means-ends coherence, stringency…” paper) about the subjective ought. In the miners paradox case, for example, the agent believes (correctly) that it is not the case that they objectively ought to block neither shaft. Yet people like me are committed to saying that they subjectively ought to block neither shaft. So whatever theory I have about how an agent’s mental states determine what she subjectively ought to do, it had better be possible that someone subjectively ought to do something they believe they objectively ought not to do. I wouldn’t be surprised if this ends up making things a little bit tricky when fleshing out one’s theory of the subjective ought / subjective reasons.

  11. Hi Sam,

    Thanks for your response. I definitely agree with your point that these further commitments still weaken the argument, and thus further reinforce the general lessons of your paper. However, I wonder whether a version of the argument might get by with weaker assumptions than you suggest.

    First, I take it that the pessimism required only concerns claims about reasons which are either nonderivative or which derive from reasons (or goals, or values) which the advisee doesn’t accept. For if the claimed reason derives from a reason (goal, value) which the advisee accepts, then there will be a rational route from the advisee’s attitudes, perhaps with further descriptive premises, to accepting that claim. But this seems like the kind of case where pessimism is fairly attractive. It certainly seems to me like there’s something pretty odd about changing one’s final values merely on the basis of reliable testimony. Pessimism captures that.

    Second, I’m not sure exactly how much optimism about descriptive testimony is needed. Of course, there are plenty of cases in which it’s not rational to accept descriptive testimony. One kind of case is where you (rationally) take the speaker to be unreliable. Perhaps we can put those cases aside though, since ‘if I were you’ discussion presumably requires a certain amount of trust in the speaker. Another kind of case is where you generally trust the speaker but have independent grounds for doubting a specific claim they make (e.g. because you have good conflicting evidence). That kind of case does look like a problem for the attempt to motivate objective internalism, because it means some descriptive facts won’t be appropriate to appeal to in ‘if I were you’ thinking.

    However, maybe there’s another option. Consider ‘semi-objective internalism’ (or “perspectivist” internalism): the view that reasons depend on the agent’s goals and values together with the epistemically accessible facts. (This is a version of the view you mention in discussion with Ulf, in connection with mineshafts etc.) Maybe this view could be motivated by the ‘if I were you’ argument: the idea would be that what’s appropriate in ‘if I were you’ thinking depends on the agent’s goals and values together with what the world is like, insofar as it is epistemically accessible to the agent. I don’t think this view would be undermined by the point that it’s not always rational to accept descriptive testimony, since that might just be a way for facts to be epistemically inaccessible. And insofar as this kind of view is more plausible than a purely subjective view, it at least weakens the costs which the argument takes on.

  12. Hi Jonathan,

    Very helpful comments — thank you. I’m especially on board with your comments about “semi-objective internalism.” That’s a Reasons Internalism-adjacent view that I can imagine someone trying to motivate by an “if I were you”-type argument. I don’t think anything I say in the paper undermines that view. It does seem to me that, in some cases, semi-objective internalism has a very substantially different extension than objective internalism. So I wouldn’t want to understate how much of a departure from objective internalism it is. (At one point, an earlier draft of this paper had a discussion of this view, but it ended up getting cut somewhere between 2015 and 2019…)

    Your point about what kind of pessimism about evaluative testimony is required is also well-taken. That being said, I don’t have a very good understanding of when it is rational for people to change their final values, and I suspect that no one else does, either (though I could be wrong). Interestingly, most mainstream metaethical views are compatible with the rejection of this pessimism. And while I agree that there are cases where altering one’s final evaluative beliefs on the basis of testimony seems odd, it’s not clear to me that all such cases will seem odd, or in what sense they seem odd. (Perhaps someone who does that lacks an important kind of understanding, but must they be irrational?)

    Because of that, I still think reasons internalists should be wary of leaning on the claim that it is *always* epistemically irrational to alter one’s beliefs about final value on the basis of testimony. That’s a controversial and perhaps not well-motivated epistemological commitment that reasons internalism has not typically been thought to require.
