Welcome to our NDPR Forum on Benjamin Kiesewetter’s book The Normativity of Rationality (OUP 2017), recently reviewed by Alex Worsnip at NDPR. Please feel free to comment on any aspect of the book, the review, or the discussion below.

From the book blurb: “Sometimes our intentions and beliefs exhibit a structure that proves us to be irrational. The Normativity of Rationality is concerned with the question of whether we ought to avoid such irrationality. Benjamin Kiesewetter defends the normativity of rationality by presenting a new solution to the problems that arise from the common assumption that we ought to be rational. The argument touches upon many other topics in the theory of normativity, such as the form and the content of rational requirements, the preconditions of criticism, and the function of reasons in deliberation and advice.

Drawing on an extensive and careful assessment of the problems discussed in the literature, Kiesewetter provides a detailed defence of a reason-response conception of rationality, a novel, evidence-relative account of reasons, and an explanation of structural irrationality in terms of these accounts.”

From Worsnip’s review: “Benjamin Kiesewetter’s book is a sophisticated and extremely thorough examination of the question of whether rationality is normative. This question may seem to many non-specialist readers ill-formulated, and to make sense of it one has to understand what the parties to this debate mean by ‘normative’ and ‘rationality’. ‘Normative’ is used not in a weak sense that contrasts with ‘descriptive’, but rather in a stronger sense whereby rationality is normative just if there are reasons to be rational. Meanwhile, ‘rationality’, at least for several prominent parties to the debate, is used to refer to what is sometimes called structural rationality, where to be structurally rational is to have attitudes that are not jointly incoherent. So, for these writers, the question ‘is rationality normative?’ comes roughly to the at least somewhat less odd-sounding ‘do we have reasons to be coherent?’

“Kiesewetter follows these writers in their usage of ‘normative’, but not in their usage of ‘rationality’. For Kiesewetter, to be rational is to correctly respond to one’s reasons. That puts us back into a position of oddness, for the question ‘is rationality normative?’ now comes to something like ‘do we have reasons to do (believe, intend, etc) what our reasons favor doing?’ The question answers itself.

“Given that, how are we to make sense of Kiesewetter’s project? One answer, germane to his own presentation, is that it is a substantive project to show that rationality is not (merely) a matter of coherence, but instead consists in responding to reasons — and is, thus, normative. But this glosses over the fact that, at least for some prominent participants in the debate about the normativity of rationality, the usage of ‘rationality’ to refer to coherence alone had the status of a terminological stipulation.[1] Such philosophers acknowledged that one can use ‘rationality’ in a more capacious sense, to refer to reasons-responsiveness — call it “substantive”, as opposed to “structural” rationality — and that substantive rationality is, of course, normative. Their interest was in whether there is reason to be coherent in particular.

“But there is a better way to reframe Kiesewetter’s project, and to show its interest and importance. On this framing, the positive part of the project has two elements. The first is to give a detailed account of what substantive rationality consists in. The slogan that (substantive) rationality concerns responding to reasons is easy, but it turns out to be surprisingly hard to make precise. Ethicists often distinguish “objective” and “subjective” reasons, where the former are relative to all of the facts (no matter how inaccessible), and the latter are relative to one’s beliefs. Now, there’s plausibly no recognizable sense of the term ‘rationality’ that requires us to respond to our objective reasons. If the glass in front of me appears to contain gin and tonic, but — undetectably and contrary to appearances — actually contains petrol, then in drinking it I fail to do what I have most objective reason to do, and yet there is no good sense in which my act is irrational. But if we say that rationality requires us only to respond to our subjective reasons — where subjective reasons are relativized to our beliefs — then we have retreated to a notion of rationality that is arguably no more demanding than that of structural rationality, or coherence. For plausibly, coherence already requires us to (intend to) do what we believe ourselves to have most reason to do — and more besides. So, if these are our only options, there seems to be no notion that deserves both the label ‘substantive’ and the label ‘rationality’.

“Kiesewetter’s excellent point, however, is that these are not our only options. There is an intermediate, evidence-relative notion of a reason. Substantive rationality consists in responding to our evidence-relative reasons. Consider a different version of the aforementioned case whereby my evidence strongly suggests that what is in front of me is petrol, and yet I go on obstinately believing that it is gin and tonic. In this case, though my act of drinking would still not be a failure to respond to my “subjective” reasons understood in a purely belief-relative sense, it is a failure to respond to my evidence-relative reasons, and this grounds the fact that my act is substantively irrational — even though I may display no incoherence. Though one could certainly quibble with Kiesewetter’s particular version of the evidence-relative view, I will not do so here. In my opinion, he is clearly right that it is the evidence-relative notion of a reason that we want for an account of substantive rationality.”

24 Replies to “NDPR Forum: The Normativity of Rationality”

  1. Let me start by thanking Alex Worsnip for engaging with my book, and Dave Shoemaker for setting up and inviting me to this forum!

    Alex does a great job in describing my project of explaining structural irrationality in terms of substantive requirements of rationality, i.e. requirements to respond to available reasons, and so there is no need to summarize it here. As far as I can see, he poses three challenges for my view. The first pertains to cases in which attitudes are irrational in combination, even though they are individually permitted by substantive requirements of rationality (“permissive cases”). The second concerns cases in which attitudes are irrational in combination, even though they are individually required by substantive requirements of rationality (“conflict cases”). The third has to do with my general account of irrationality. I will take up these issues in turn.

    (1) Permissive Cases

    As Alex remarks, I offer “no fully general response to the challenge from permissive cases”, by which he means (I take it) a response that applies to both the practical and the epistemic domain. The reason for this is simple: I do not believe that there are relevantly permissive cases in the epistemic domain. Regarding outright belief, I hold that once one has epistemic justification for believing p, and once one seriously attends to the question of whether p is the case, one is no longer permitted to suspend judgment on p or believe not-p. I don’t know why Alex says that I “tacitly assume the absence of permissive cases”, since I make that assumption quite explicit (185). The reason why I do not discuss “the more full-throated permissivist view”, according to which even in cases in which one attends to p, each belief and suspension of judgment can be permitted, is that I have nothing substantial to add to the excellent arguments that others, especially Roger White (2005), have provided against this kind of permissivism (cf. 181). But I realize that the rejection of “full-throated” epistemic permissivism is a substantial commitment of my theory that some will see as a cost.

    In the book, I am concerned with rational norms for outright belief, not for partial beliefs or credences, but it certainly seems worthwhile to ask whether the general sort of explanation of structural irrationality that I offer can be applied to partial belief. Alex’s case does not convince me that it can’t. Alex suggests that substantive rationality might permit us to assign p a credence of 0.64, while at the same time permitting us to assign p a credence of 0.65 and thus to assign not-p a credence of 0.35. I deny that this is possible. Suppose there is a precise number we can assign to p’s evidential probability, say 0.64. In such a case, substantive rationality does not permit you to assign p a precise credence that is higher or lower than 0.64 (it may, however, permit you to assign no precise credence at all). That is, if you assign p a credence of 0.65, you are less than fully rational (note that it does not follow that you are irrational, given my account of irrationality discussed below). Given that substantive epistemic rationality is a matter of responding correctly to evidence (as Alex does not seem to deny), and given that p’s evidential probability is 0.64, it must be more rational to have a credence of 0.64 than to have a credence of 0.65, and this seems to imply that you are less than fully rational if you have a credence of 0.65. Alternatively, suppose that the evidential probability is imprecise. All that we can say is that it is in the range of 0.64–0.65, or that it is roughly two thirds, or something of that sort. In that case, it also seems to me impermissible to adopt a precise credence of 0.65. If you are going to adopt a credence, it shouldn’t be more precise than the probability that your evidence yields.
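
    To make the standard I am appealing to explicit (this schematic formulation goes beyond anything explicit in the book, so take it only as a first pass): let Pr(p) be p’s probability on one’s total evidence. (i) If Pr(p) has a precise value x, then one is fully rational only if one assigns p the credence x or assigns no precise credence at all. (ii) If Pr(p) is determinate only up to a range, say 0.64–0.65, then no precise credence is permitted at all; at most, one may adopt a correspondingly imprecise credal state.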

    Finally, what about Alex’s worry that intransitive preferences are irrational even though each of a set of such preferences might be substantively rational? Again, the book does not discuss rational requirements on preferences, partly because I think that the notion of a ‘preference’ is ambiguous. If preferring a over b involves taking a to be better than b, then it’s not clear to me that each of a set of intransitive preferences could be substantively rational. If preferences amount to mere wants or desires, it’s not clear to me why intransitive preferences should be irrational (here I agree with Parfit 2011, 128).

    (2) Conflict Cases

    Can substantive rationality require attitudes that are irrational in combination? I do accept that substantive rationality can require attitudes that are incoherent in combination, but I deny that these are cases of irrationality. In the preface paradox, for example, I have incoherent beliefs each of which is supported by the evidence: I believe of every claim in my book that it is true, but I also believe that the conjunction of these claims is unlikely to be true. Intuitively, I am rational if I conscientiously form all of these beliefs on the basis of sufficient evidence. I take it to be a virtue of my theory that it accommodates and explains this intuition.
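
    A toy calculation brings out how this can happen (the particular numbers are mine, purely for illustration). Suppose the book contains 300 claims that are, for simplicity, probabilistically independent, each with a probability of 0.99 on my evidence. Then each individual belief is very well supported, yet the probability that all 300 claims are true is 0.99^300, which is roughly 0.05. So the very same evidence that sufficiently supports each claim also sufficiently supports believing that the conjunction is false.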

    I am more skeptical that substantive rationality can require the kind of incoherence involved in practical or doxastic akrasia, as Alex has argued elsewhere (Worsnip 2018), but for reasons of space I have to put aside this question here (see 250–4 for a brief discussion). Instead, I will focus on Alex’s claim that cases of unalterable attitudes provide examples of substantively required irrationality. Consider Setiya’s smoker, who is unable to give up her intention to smoke and her belief that a necessary means for smoking is buying cigarettes (Setiya 2007). Let us assume that Setiya’s smoker is required, by standards of substantive rationality, not to intend to buy cigarettes. And let us assume that, since ‘ought’ implies ‘can’, substantive rationality does not require her to give up the intention to smoke or her means/end-belief. Doesn’t this mean that she can become substantively rational only by becoming structurally irrational?

    If Setiya’s smoker is substantively rational, she will end up having incoherent attitudes. But note that her incoherence is not due to a failure of her rational capacities but to pathological compulsion. She can reflectively accept her incoherence as the result of her being as rational as she possibly can be, a form of incoherence that seems immune to legitimate criticism. For these reasons, in my view this case is best understood as a case of rational incoherence.

    Is this analysis of Setiya’s example consistent with Setiya’s own argument against the wide-scope account, according to which a wide-scope instrumental principle licenses unacceptable bootstrapping in cases of unalterable attitudes? Arguably, if there is a rational requirement demanding means/end-coherence as such, then the incoherence of Setiya’s smoker is due to a failure of rational capacities and not to pathological compulsion. For if there is such a structural requirement, then intending to buy cigarettes is a sufficient and necessary means for satisfying it, and intending to buy cigarettes is something that the smoker is able to do. Accordingly, the structuralist view will deem Setiya’s smoker irrational, and will (implausibly) entail that she has reason to intend to buy cigarettes just because she has an unalterable intention to smoke. There is no inconsistency in holding that Setiya’s smoker isn’t irrational, while holding that the structuralist view entails that she is and hence licenses unacceptable bootstrapping.

    (3) Irrationality

    According to my account of irrationality, in order to be irrational, it is not enough to have an attitude that fails to be (substantively) rational; rather, one’s attitudes must guarantee such a failure. This might be (i) because a given set of one’s attitudes is such that not all of these attitudes can be sufficiently supported by reasons (this is structural irrationality), or (ii) because a single attitude is such that it cannot be supported by sufficient reasons (e.g. Parfit’s famous indifference to pain on Tuesdays; this is substantive irrationality). One great advantage of this view is that it gives us a unified theory of both structural and substantive irrationality; another is that it avoids any appeal to structural requirements of rationality.
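
    Put compactly (glossing over some details): a set of attitudes, possibly a singleton, is irrational just in case there is no possible situation in which all of its members are sufficiently supported by available reasons. An attitude that merely happens to lack sufficient support in the actual situation fails to be rational without thereby being irrational.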

    I don’t know why Alex claims that “Kiesewetter should want major failures to respond to one’s reasons to come out as irrational”. For one, even if all cases of irrationality were of kind (i), this would be compatible with all other important claims in the book: the criticizability of irrationality, the denial of structural requirements of rationality, the normativity of rationality. For another, even if all cases of irrationality were of kind (i), this wouldn’t mean that “the pure coherentist about (ir)rationality turns out to be half right”, for my account of structural irrationality is given in terms of substantive requirements of rationality, and thus puts substantive rationality first.

    More importantly, however, my claim that there are non-structural cases of irrationality does not mean that I want “major failures to respond to one’s reasons to come out as irrational”. I don’t think that irrationality is a matter of the severity of the failure to respond to reasons, but rather a matter of its modal robustness. For example, I agree with the common point that even severely immoral intentions need not be irrational (the mistake, in my view, is to conclude from this that moral requirements are not rational requirements or that immoral action can be fully rational). For the same reason, it is not true that I was “trying to avoid” the result that some “slight failures to respond to reasons will count as irrational” (although I did want to avoid the result that all slight failures to respond to reasons count as irrational). Thus, I do not object to Alex’s suggested modal interpretation of “guaranteeing a violation of reasons”, since I wasn’t trying to avoid the implication that Alex thinks I was trying to avoid. Alex seems to think that this implication is a problem for my view, but as far as I can see he did not provide any reason for thinking so.

    References

    Parfit, Derek. 2011. On What Matters. Vol. 1. Oxford: Oxford University Press.
    Setiya, Kieran. 2007. “Cognitivism about Instrumental Reason.” Ethics 117 (4): 649–673.
    White, Roger. 2005. “Epistemic Permissiveness.” Philosophical Perspectives 19 (1): 445–459.
    Worsnip, Alex. 2018. “The Conflict of Evidence and Coherence.” Philosophy and Phenomenological Research 96 (1): 3–44.

  2. I’d like to raise a somewhat fundamental question for both Benjamin and Alex. It concerns what exactly available reasons are, on Benjamin’s picture.

    As Alex presents things, “There is an intermediate, evidence-relative notion of a reason” that is central to Benjamin’s view, and which is intermediate between objective and subjective reasons, where subjective reasons are relative to the subject’s beliefs. But I don’t see how the evidence-relative notion is different from the subjective notion.

    Here is why: Alex’s presentation of the point makes it sound like maybe there are the reasons, and then additionally there is the independent evidence-framework against which it is determined what counts as a reason. But that, I take it, is not how we should understand Benjamin’s position. Rather, one’s reasons (practical and epistemic both) are, or are constituted by, one’s evidence, so that the very reason one has to phi may (or will?) be a piece of evidence bearing on whether one should phi. (Is this correct, Benjamin?)

    If this is the right interpretation, what are one’s reasons relative to? It seems that they have to be relative to the other reasons or evidence the subject has. This gets us to the question: When does one HAVE evidence or a reason? To this Benjamin’s answer (p. 162 of the book) is that the evidence a subject has is (a) facts or occurrences and (b) what the subject believes or internal facts about her experience. Putting this all together, it seems that a subject’s reasons are relative to her true beliefs or conscious mental states. But this, to me, sounds awfully close to the subjective claim that reasons are belief-relative.

  3. Hi Eva,
    Thanks for your question! I’m not sure that there is such a thing as *the* subjective notion of a reason, but many people would identify subjective reasons with what Parfit calls “apparent reasons”, roughly: reasons one would have if one’s beliefs were true. Such reasons are constituted by contents of beliefs, no matter whether these beliefs are true. So this is a non-factive notion of reason. More generally, one might take subjective notions of reasons to be non-factive notions, according to which reasons are provided by contents of actual (or counterfactual) beliefs (compare my definition of “subjective perspectivism” on p. 198). In contrast, the notion of an available reason that I use is factive: p is an available reason only if p is the case (or – in case p is not a proposition – only if p occurs). That seems to me a crucial difference between a subjective notion and the “evidence-relative” notion that I have in mind.

    Two relevant clarificatory points:
    – I do not claim that evidence or reasons can be constituted by “what the subject believes”. That would seem to be a subjective conception.
    – When I say that in order for something to be a reason or available reason, it needs to be part of the agent’s evidence, I do not thereby make any assumption about the relation between a reason to phi and the specific evidence that an agent has for the proposition that she ought to phi. This relation seems to me an entirely separate issue.

  4. I would also like to add my thanks to Dave for setting this discussion up, and to Benjamin for his illuminating reply to my review! In this first reply (which was so long that PEA Soup made me split it into two posts!), I will discuss the first and second issues that Benjamin discusses in his reply, treating them together. I will discuss the third issue in a subsequent (hopefully at least somewhat briefer) reply.

    As I say in the review, there are two main components to the positive project of Benjamin’s book: his account of (what I’m calling) substantive rationality in terms of responding to evidence-relative reasons, and his account of structural rationality in terms of the avoidance of combinations of attitudes that are such that at least one of them is guaranteed to fail to be substantively rational. Dave excerpts my description of the former in his introduction to this discussion, but Benjamin focuses on my discussion of the latter (which is natural, since that is the focus of my concerns about Benjamin’s view). So for those who haven’t read the whole review, I’ll provide a little background about the latter part of the view here.

    To recap, the label ‘structural irrationality’ is usually used to refer to the irrationality involved in having particular combinations of attitudes that seem to be irrational in virtue of their joint incoherence, or failure to fit together well – inconsistent beliefs, means-ends incoherent intentions, akratic attitudes, and so on. One feature that seems to be distinctive of structural irrationality is that one can tell that a set of attitudes is structurally irrational without having to determine whether the individual attitudes are supported by one’s reasons. For example, we can tell that the set {believing p, believing not-p} is structurally irrational without knowing anything about whether one’s reasons support believing p (or believing not-p); indeed, we can tell this even without knowing what proposition ‘p’ stands for!

    This might seem to rule out an analysis of structural rationality in terms of substantive (ir)rationality (the latter of which is, Benjamin and I agree, concerned with reasons-responsiveness). But as Benjamin’s view shows, that is too hasty. On his account of structural irrationality, a set of attitudes is structurally irrational when the combination is such that it is impossible for each of the attitudes to be sufficiently supported by (evidence-relative) reasons, that is, to be substantively rational. In my review I raised challenges to this view from two kinds of case, permissive cases and conflict cases: these are (putative) cases where there is a structurally irrational set of attitudes, and yet it is still possible for each of these attitudes to be substantively rational.

    Benjamin’s response to this challenge, as I understand it, is to hold that at least many of these putative cases should in fact be understood as ones where the relevant set of attitudes is admittedly *incoherent*, and yet not actually structurally *irrational*. In this way he can, it seems, preserve the view that structural irrationality proper always carries with it a guarantee of substantive irrationality. But I think this way of going also comes with significant costs, which I’ll now elaborate on.

    Let’s contrast two different ways of understanding what Benjamin is trying to do in his account of structural irrationality. On one way of understanding it, we begin with paradigmatic cases of incoherence that are, intuitively, structurally irrational: inconsistent beliefs, means-end incoherent intentions, akrasia, and so on – and we take on board the assumption that such combinations of attitudes are always irrational, taking that as a datum to be explained. Then, the account of structural irrationality in terms of substantive irrationality attempts to explain this datum, by claiming that these combinations of attitudes can never all be supported by one’s substantive reasons. On a second way of understanding it, we do not assume that these paradigmatically incoherent patterns of attitudes are always irrational. To be sure, it’s a good thing if the account can show that such patterns are typically irrational. But ultimately, if it turns out that an incoherent set of attitudes is not actually such that it is guaranteed that one of the attitudes is not substantively rational, we can conclude on this basis that the set of attitudes, though incoherent, is not irrational after all.

    By arguing that putative permissive and conflict cases are in fact instances of incoherence without irrationality, Benjamin seems to be plumping for something closer to the second approach than the first. Benjamin deploys this response in at least four kinds of case. In chapter 10 of his book, he deploys it for cases of means-end incoherence where pursuing the relevant end is permissible but not required. In this thread, he also deploys it for preface-type cases of inconsistent beliefs, for intransitive preferences between permissible options, and for “unalterable ends” cases such as Setiya’s smoker. In each case, he suggests, there is incoherence without irrationality. Now, it seems to me, with the possible exception of the preface cases (see my 2016 AJP paper), that each of these cases is one where, pretheoretically, the combination of attitudes involved is in a good sense (structurally) irrational. Of course, if we already accept that if a combination of attitudes does not deliver a guarantee of a failure of substantive rationality, then it is not irrational at all, then we will conclude that these cases are not instances of irrationality after all. But evidently, the strategy we are now pursuing is not to vindicate the intuitive thought that the kinds of incoherent patterns usually identified with structural irrationality are, indeed, always irrational. The goalposts have shifted.

    This second way of understanding Benjamin’s project makes it considerably less ambitious. It would be a remarkable and significant result if all or nearly all cases that pretheoretically qualify as structurally irrational turned out to involve a guarantee of imperfect substantive rationality. If the claim is just that such cases may or may not involve a guarantee of imperfect substantive rationality, and only count as genuinely irrational if they do, we give up the ambition of vindicating this sort of result. We can still say that all cases that involve genuine (as opposed to apparent) structural irrationality involve a guarantee of imperfect substantive rationality. But this is just a trivial consequence of the insistence on counting any case that does not involve such a guarantee as merely apparently, and not genuinely, irrational. That is, it follows from already accepting Benjamin’s view, but does not provide independent support for it.

    Benjamin might protest here that he doesn’t share my intuitions that the relevant cases do pretheoretically qualify as (structurally) irrational, and that he still takes himself to be vindicating the view that all cases that are pretheoretically (structurally) irrational involve a guarantee of imperfect substantive rationality. Perhaps we will be reduced to a standoff of intuitions here. But it is worth making at least one remark about the dialectic. Benjamin admits that these cases are cases of incoherence, but not that they are cases of irrationality. But on my view, to say that states are (genuinely) incoherent just is already to say that they are (structurally) irrational. The property of incoherence and that of structural irrationality are one and the same thing. In acknowledging that these states are incoherent, Benjamin is (I think) acknowledging that there’s at least a sense in which they don’t “fit together” right – and, I would say, that just is to at least tacitly acknowledge their structural irrationality. (Of course, in the permissive and conflict cases, there’s by hypothesis another sense in which the attitudes aren’t irrational – they aren’t substantively irrational. But our sense that this is so shouldn’t preclude our acknowledging the structural irrationality that is present.) If Benjamin thinks, as he seems to, that attributions of incoherence don’t necessarily amount to attributions of irrationality in any sense, I’d like to hear more about what he thinks attributions of incoherence do come to, and in virtue of what states count as incoherent, on his view.

    (continued in next post)

  5. (continued from previous post)

    A further worry that I have about Benjamin’s way of responding to the challenges from permissive and conflict cases is this. As I’ve already said, it seems to be distinctive of structural irrationality that one can tell that a set of attitudes is structurally irrational without having to determine whether the individual attitudes are supported by the relevant reasons. In his book, Benjamin seems to acknowledge this feature of structural irrationality. But now consider Benjamin’s contention, in response to the challenge from permissive and conflict cases, that means-ends incoherence is not always irrational. For example, it’s not irrational in Setiya’s example of a person who has an unalterable intention to smoke. It now seems that, on Benjamin’s view, we cannot tell just by looking at the relevant pattern of attitudes – intending to Φ, believing that Ψ-ing is necessary for Φ-ing, not intending to Ψ – that there is structural irrationality involved. Moreover, we can’t even do this when we substitute determinate actions in for the ‘Φ’s and the ‘Ψ’s – intending to smoke, believing that buying cigarettes is necessary for smoking, and not intending to buy cigarettes. Rather, whether this is structurally irrational now turns on further details – namely, whether the end is unalterable, and – crucially – whether one’s reasons support pursuing it. (After all, presumably Benjamin will say that failing to take the necessary means to an unalterable end that one has excellent reasons to pursue *is* structurally irrational.)

    Benjamin’s view, then, seems to be that if the end is unalterable and the end is one that one’s reasons decisively tell against pursuing, then being means-ends incoherent is not structurally irrational, but that if either of these conditions isn’t met, it is structurally irrational. This means that Benjamin cannot, after all, accept the claim that one can determine which attitudes are structurally irrational without having to determine whether the individual attitudes are supported by the relevant reasons. This seems to me a cost, since that view about structural rationality is very appealing. Intuitively, whether the pattern of attitudes involved in means-ends incoherence is structurally irrational or not should not turn on whether the relevant end is something that one has good reasons to pursue.

    A final and related worry, which I obliquely alluded to in fn. 4 of my review, is this. Suppose we admit, as we want to, that Setiya’s smoker, though incoherent, isn’t irrational, either structurally or substantively. Benjamin still wants to say that most instances of means-ends incoherence are structurally irrational. But recall that on Benjamin’s view, attitudes are only structurally irrational if it is guaranteed that at least one of the attitudes is not substantively rational. But if there are some cases of means-ends incoherence without substantive irrationality – such as Setiya’s smoker – then in what sense does means-ends incoherence guarantee substantive irrationality? And if it doesn’t, how can Benjamin maintain (given his view) that any instance of means-ends incoherence is irrational?

    Again, the problem persists even if we switch from talk of means-ends incoherence in general to a specific combination of attitudes toward determinate ends and means. For example, consider a more ordinary version of the means-ends incoherent smoker, who intends to smoke, believes that to do this she must buy cigarettes, yet doesn’t intend to buy cigarettes – but whose intention to smoke is, unlike Setiya’s smoker’s, not unalterable. Presumably Benjamin will want to say that this person is structurally irrational. But how does his view deliver that result? It isn’t true that she has attitudes such that the presence of those attitudes guarantees substantive irrationality, since she has a counterpart (Setiya’s smoker, whose intention is unalterable) who has all the same attitudes and isn’t substantively irrational. Perhaps Benjamin could try saying that unalterably intending and alterably intending are different attitudes, but this does not seem correct to me. So I worry that once it’s conceded that some incoherent agents aren’t structurally irrational, it will quickly follow, given Benjamin’s own “guarantee” account, that many more incoherent agents aren’t structurally irrational either – including in cases where the intuitive verdict of irrationality is clear and decisive. This bridges nicely into Benjamin’s general account of irrationality (the third issue he discusses), which I will take up in a separate reply later.

  6. Benjamin, thank you for your helpful clarifications (and I hope that I’ll get the terminology straight in what follows).

    I just wanted to add that my question is coming from a point similar to Conor McHugh’s concerns in his review in Mind. Possessed evidence on your picture is not just facts that are believed, but also internal facts or facts about experience (or internal occurrences). Moreover, you appeal to the backup view to make your view about available reasons fit with internalism about rationality (p. 173), which seems to commit you to the claim that subjects in bad cases have reasons of exactly the same strength as their duplicates in good cases, even though only the latter have believed truths as the evidence that fixes their available reasons. Finally, the subjects in bad cases have these equally strong reasons in virtue of their internal states. So overall, it seems that the believed truths qua truths (= part of the had evidence) don’t really do any work when it comes to fixing what reasons a subject has, but that all the work is done by the subject’s internal states. And this gets me to my worry from the last post that this seems to be basically a subjective or belief-relative view of reasons.

  7. Hi All,

    I’ll be travelling the next few days but I wanted to participate to the best of my abilities. I’ve enjoyed Benjamin’s book considerably and learned quite a lot from it. I think that it changes the landscape considerably, and I have to admit that I was one of those people who were too easily convinced that reasons and rationality require different things from an agent. I wanted to raise two questions about rationality in this picture.

    From above, “On his account of structural irrationality, a set of attitudes is structurally irrational when the combination is such that it is impossible for each of the attitudes to be sufficiently supported by (evidence-relative) reasons, that is, to be substantively rational”.

    I’m curious about cases of normative ignorance/mistake/uncertainty. Huck Finn, let’s say, is aware of the non-moral facts. It’s unclear how much he grasps of their normative significance, however. Suppose that someone has an even more tenuous grasp of the normative significance of these facts and decides that he ought to turn Jim in. In being aware of these non-normative facts and intentionally acting this way, it seems that this combination of attitudes guarantees that the agent acts against decisive reason where the relevant reasons are all constituted by/provided by facts that he knows. Now, I didn’t think that this was supposed to be an instance of irrationality on Kiesewetter’s view (neither substantive nor structural), but it’s not clear why not. Are we supposed to say that such facts don’t provide the relevant agent with reasons because the agent doesn’t grasp their significance, or are we supposed to say that this is an instance of irrationality because of the necessary mismatch between what the reasons require and what the agent did?

    I’m also sort of curious about a tricky case in which a structural requirement and a substantive requirement seem to come apart:

    Enkrasia: A ought not both believe she ought to X and do other than X;
    Evidentialism: If A is actively considering whether p and has sufficient evidence to believe p, she ought to believe p; If A lacks sufficient evidence to believe p, she ought not believe p.

    I’m pretty confident that the following thing is possible–there is some set of propositions {p1, …, pn} such that A has sufficient evidence to believe that she has sufficient evidence to believe each proposition in this set even though she knows that there is one member of this set such that she does not have sufficient evidence to believe this proposition. On the one hand, it seems that if, say, A has sufficient evidence to believe that she has sufficient evidence to believe p1 and she knows Evidentialism, she has sufficient evidence to believe that she ought to believe p1. And we can then use Enkrasia to rule out that she ought not believe p1. And we can do this for p2 through pn (for some suitably large n). So it would seem that A should be permitted if not required to believe each. But we also have it that there is one that A ought not believe.
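
    Spelled out schematically (filling in the intermediate steps as I understand them), the clash might run as follows:
    (1) For each pi, A has sufficient evidence that she has sufficient evidence to believe pi. [stipulation]
    (2) A knows Evidentialism; so, for each pi, A has sufficient evidence that she ought to believe pi. [from (1)]
    (3) So, by Evidentialism applied one level up, for each pi, A ought to believe that she ought to believe pi. [from (2)]
    (4) By Enkrasia, A ought not both believe that she ought to believe pi and fail to believe pi; so it seems A is permitted if not required to believe each pi. [from (3)]
    (5) But A knows that there is some pi for which she lacks sufficient evidence; so, by Evidentialism, there is some pi that she ought not believe, and this clashes with (4). [stipulation]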

    I can see fiddling around with Enkrasia and Evidentialism to avoid this kind of clash, but I haven’t found a suitable fiddling and so I wonder if it’s not best to think that these claims characterise very different notions. I can imagine someone trying to rule this case out by appealing to some sort of levels principle (e.g., one that says that the first-order epistemic oughts are luminous or some view that says that we must have sufficient evidence when we have sufficient evidence to believe that we do), but I think that the costs of these approaches are pretty prohibitive (for reasons that Williamson and Dorst have discussed in some new work on higher-order evidence).

    My vague worry is that this kind of case and the case that precedes it raise some tricky issues about the objectivity of the support relation that holds between the agent’s evidence and/or available reasons and the options that the agent considers. And my worry is that if we tinker with the support relation to make it less objective, we either get one kind of undesirable result (e.g., agents are justified if not required to do things that seem to go against our best judgments about what their reasons require) or we are forced to distinguish between things like responding correctly/rightly to reasons and responding rationally, and that might rob rationality of its normative role.

    That’s all very hand-wavy, but if someone has some constructive suggestions, that would be great.

  8. I now want to take up the third issue Benjamin discusses in his post, that of his general account of irrationality. As Benjamin says, his view is that one is only irrational if one’s attitudes guarantee that one is imperfectly rational, where this is understood modally: there’s no possible world in which those attitudes are perfectly rational. By contrast, I prefer a simpler view that says that to the extent that one fails to be perfectly rational, one is thereby irrational to that same extent. (So, if one deviates from perfect rationality only slightly, one is only slightly irrational; if one deviates from perfect rationality severely, one is severely irrational.) No doubt we sometimes talk of ‘irrationality’ in a more “on-off” way, instead of in this degreed way – and when we do so, it will be unnatural to describe minor deviations from perfect rationality as ‘irrational’ simpliciter. That is consistent with my view; ‘irrational’ in the on-off sense can just refer to irrationality in the degreed sense above some threshold (where the relevant threshold plausibly, as with many other such terms, varies with conversational context).

    Let me bring out what I consider to be the advantages of my view over Benjamin’s on this point. One is simply that there are many deviations from perfect (substantive) rationality that are very naturally described as irrational, but that lack the modal guarantee that Kiesewetter’s view requires. For example, consider how natural it is to describe beliefs that fly utterly in the face of the evidence – the belief that the Earth is flat, or that climate change is a hoax, or that fairies live at the bottom of the garden – as irrational. (More precisely: it’s natural to describe such beliefs as irrational when they’re held by people who possess decisive evidence against them, as most people do in the Western world in 2018.) Such beliefs are gross failures to respond to one’s evidential reasons, and as such are (on my view) substantively irrational. But it isn’t true that there’s no possible world in which such beliefs are sufficiently supported by one’s reasons: we can easily imagine bodies of evidence that would support any of these beliefs. So they don’t count as irrational, on Kiesewetter’s view.

    When presenting my work on structural irrationality, I’ve sometimes found it hard to get epistemologists to take the notion of structural irrationality seriously, precisely because these kinds of beliefs don’t, in and of themselves, involve any structural irrationality. Epistemologists tend to think that any notion of irrationality that can’t call these beliefs irrational isn’t worth taking seriously. But at least I can readily allow that these beliefs are irrational in a different sense – they are substantively irrational. Kiesewetter’s account, by contrast, can’t even say that they are irrational in any sense. This seems to me a substantial cost. Part of the problem with a purely structuralist view, whereby the only requirements of rationality are structural, is that it can’t accommodate the simple inclination to call gross failures of reasons-responsiveness (especially gross failures of responsiveness to epistemic reasons) irrational. It seems to me odd that Kiesewetter, who so forcefully rejects that structuralist view, should then embrace an account of irrationality that saddles his own view with the same result.

    As I’ve noted in some previous work (see my 2016 Phil Quarterly paper, though I would now revise some of what I say there), and as Benjamin also alludes to in his reply, our intuitions are far less clear-cut when it comes to gross failures to respond to practical, and especially moral, reasons. Many philosophers want to deny that failures to do what is morally right count as irrational. Kiesewetter’s view (partially; see the next paragraph) accommodates this, as he notes. But my own view is that this impulse is a mistake. It should certainly be readily admitted that failures to do what is morally right do not involve any structural irrationality. Indeed, some influential deniers of the irrationality of immorality (such as Williams and Foot) seem to assume a structuralist account of (ir)rationality – and if all they mean to say is that immorality is not structurally rational, they are surely correct. But I think it should still be maintained that, at least insofar as acting immorally involves a failure to respond to evidence-relative moral reasons that are decisive over one’s other evidence-relative reasons, it does involve substantive irrationality. We are inclined to describe other failures to respond to reasons (viz. epistemic and prudential reasons) as (substantively) irrational, and I cannot ultimately see a principled reason why moral reasons should be treated differently. I do realize that many people assume that the cases are different, but I don’t think this can be philosophically vindicated. As such, I think that it is ultimately just a bias against morality; part of a more general mistaken tendency to view moral normativity as somehow more puzzling or problematic than other kinds of normativity.

    It’s also worth noting that if one is attracted to the view that failures to be moral are never ipso facto instances of irrationality, Kiesewetter’s view doesn’t straightforwardly deliver this result. Plausibly, some instances of immorality are going to come out as irrational on his view, while others won’t. Specifically, if there are acts that we have decisive moral reasons to refrain from in every possible world, they will be irrational on Kiesewetter’s view, but acts that could be sufficiently supported by one’s reasons, in some possible world, will not be irrational.

    This problem, in my opinion, is only intensified if the modal robustness of the wrongness of an action doesn’t track its severity. Suppose, to fix ideas, that some form of agent-neutral consequentialism is true, and that moral reasons are overriding. Suppose further that in the actual world, starting a nuclear war would have devastatingly terrible consequences (and that one’s own evidence indicates this). Doing so is thus a severe failure of reasons-responsiveness. Nevertheless, there’s some possible world in which starting a nuclear war has net beneficial consequences – so the act is, on Kiesewetter’s view, not irrational. By contrast, consider the action of choosing one’s own relative’s slightly lesser happiness over some stranger’s slightly greater happiness. This is only a very slight failure of reasons-responsiveness, but (if agent-neutral consequentialism is true, and moral reasons are overriding), then it is a failure of reasons-responsiveness in every possible world – and so the act is, on Kiesewetter’s view, irrational. This combination of verdicts – that starting the nuclear war involves no irrationality but that the minor favoring of one’s relative over a stranger does involve irrationality – seems to me perverse.

    Of course, if agent-neutral consequentialism is false, or moral reasons aren’t overriding, the example will need to be adjusted. But the general point that the example illustrates is that it does not seem right to make the (substantive) irrationality of an act or attitude turn on the modal robustness of its reasons-unresponsiveness rather than the extent of its reasons-unresponsiveness. My own view, in contrast to Kiesewetter’s, makes it turn on the latter. As such, I think, it gets the above case right: starting the nuclear war is much more irrational than slightly favoring one’s relative over a stranger (and, if we’re talking in on-off, threshold-y terms, we might just describe the former as ‘irrational’, and the latter as ‘not irrational’).

    One might try to deal with the problem by describing the action of starting a nuclear war at a finer level of grain than I did above. Perhaps the relevant description of the action is not ‘starting a nuclear war’ but rather ‘starting a nuclear war without the presence of such-and-such conditions’, where those conditions are the exceptional circumstances that would make such an action justified. But I think this only points toward another disadvantage of the account in terms of modal robustness, which is that it delivers different results about which actions are irrational depending on the level of detail at which they are described: an act described one way might be wrong in all possible worlds, but described another way, it might be wrong only in some possible worlds. This is roughly analogous to the famous problem for the first formulation of Kant’s categorical imperative, that it yields different results about which actions are immoral depending on the level of specificity at which the relevant maxim is described. So once we go in for an account of irrationality in terms of modal robustness of reasons-unresponsiveness, we have to come up with a principled and general account of the level of detail at which to describe actions. The simpler account of irrationality in terms of extent of reasons-unresponsiveness, by contrast, avoids having to get into this thorny problem.

  9. I should also add, finally, that there is an interaction between my reply to Benjamin’s first and second issues, and my reply to his third. The view that some instances of (particular patterns) of incoherence involve no deviation from perfect rationality, plus the view that irrationality requires a *guarantee* of imperfect rationality, are what *together* threaten to yield the result that *no* instances of those patterns of incoherence involve irrationality. So I think the two views sit especially badly together.

  10. Last comment (apologies): a correction to my post from 10:32am: when I wrote “if all they mean to say is that immorality is not structurally rational, they are surely correct”, I meant to say “if all they mean to say is that immorality is not [necessarily] structurally *irrational*, they are surely correct”.

  11. I’m pleased to follow this interesting discussion of Benjamin’s fantastic book. I have some questions about connections he draws there between reasons, evidence, and oughts.
    #1
    Consider the following principle (p. 175, my labelling):
    PR⇒EPR If p provides a reason to φ, then evidence for p also provides a reason to φ.
    Benjamin appeals to PR⇒EPR in an effort to show that the conception of rationality he defends – roughly, as a matter of correctly responding to available reasons – is compatible with internalism about rationality. It also figures in his defence of the Evidence Principle (discussed below).
    Benjamin anticipates and responds to one objection to PR⇒EPR based on the thought that reasons for acting are good-making features. However, he acknowledges that “more needs to be said for a proper defence” (p. 180). This is really just an invitation to Benjamin to say a bit more on this issue.
    PR⇒EPR is not to be confused with a (partial) analysis of reasons in terms of evidence of the sort advanced by Judith Jarvis Thomson in Normativity (2008) and Stephen Kearns and Daniel Star in “Reasons as Evidence”, Oxford Studies in Metaethics (2009). Benjamin explicitly rejects such analyses by appeal to some familiar objections from the literature (p. 187). I wonder, however, whether some of the objections to evidence-based analyses of reasons are also objections to PR⇒EPR.
    Suppose, for example, that Kelly promised to go to the cinema and this is a reason for her to do so. Kelly knows that, normally, if she can go to the cinema, she promised to do so. She also knows that she can go to the cinema. So, that Kelly can go is evidence she has that she promised to go. Given PR⇒EPR, that Kelly can go to the cinema is a reason for her to do so. This, one might think, is the wrong verdict. That Kelly can go to the cinema is not a reason for her to go but an enabler – a condition that enables the fact that she promised to go to be a reason for doing so.
    In view of this, one might object to PR⇒EPR on the grounds that it entails that in some cases enablers on reasons for φing are reasons for φing. (Compare the objections to evidence-based accounts of reasons in John Brunero’s “Reasons and Evidence One Ought”, Ethics (2009), and Guy Fletcher’s “A Millian Objection to Reasons as Evidence”, Utilitas (2013). Compare also Benjamin’s remarks on p. 187.)
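    To regiment the worry:
    (1) That Kelly promised to go provides a reason for her to go to the cinema. [premise]
    (2) That Kelly can go is evidence, for her, that she promised to go. [premise]
    (3) If p provides a reason to φ, then evidence for p also provides a reason to φ. [PR⇒EPR]
    (4) So, that Kelly can go provides a reason for her to go. [from (1)–(3)]
    The complaint is that (4) is false: that she can go is a mere enabler, not a reason.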
    #2
    Consider next Benjamin’s Evidence Principle (p. 185, my labelling):
    EP If A has sufficient evidence that she herself ought to φ, and A can φ, then A has decisive available reason to φ.
    This principle, as Benjamin stresses, plays a “crucial role” in his account of structural irrationality.
    Benjamin notes that cases of normative testimony might seem to be counterexamples to EP (pp. 188-189). Suppose that Betty tells Jim that he ought to clap his hands. Betty is a close friend and reliable advisor. In this case, one might think, Jim has sufficient evidence that he ought to clap his hands. He can do so. Given EP, it follows that he has decisive available reason to clap his hands. However, one might think, Jim does not possess any reason to clap his hands. Betty’s testimony might be evidence that Jim ought to clap, but it is no part of what makes it the case that he ought to do so. (Compare John Broome’s objection to evidence-based accounts of reasons in his “Replies” in Ethics, 2008).
    Benjamin discusses a similar example (p. 189). Adapting his response to this case, the suggestion is that Betty’s testimony is sufficient evidence that Jim ought to clap only if it (also) provides sufficient evidence of facts that make it the case that Jim ought to clap, say, that he promised to do so, or that it will benefit Betty to do so, or whatever. In that case, given PR⇒EPR, Betty’s testimony is indeed a reason Jim has for clapping, since it is evidence of (other) reasons for clapping.
    But let’s stipulate that Betty’s testimony provides no indication whatsoever of any features of the situation which might make it the case that Jim ought to clap. In this version of the case, we cannot defend EP by appeal to PR⇒EPR.
    Perhaps Benjamin will say that, if Betty’s testimony provides no evidence of facts which explain why Jim ought to clap, it is insufficient evidence that he ought to clap. But is this plausible? Again, Betty is a trusted friend and reliable advisor. Moreover, in general, sufficient evidence that p need not be evidence of why p, sufficient or otherwise.
    Benjamin might instead offer the following line of thought. Betty’s testimony is evidence that Jim ought to clap. Suppose that utilitarianism is true a priori. So, that Jim ought to clap is evidence that clapping maximises utility. So, Betty’s testimony is indeed evidence of why Jim ought to clap. So, given PR⇒EPR, it is a reason Jim has for clapping. Generalising, the suggestion is that subjects always have sufficient a priori evidence for true normative theories. Relative to this background, (reliable) normative testimony always provides evidence of facts which explain why subjects ought to act. All I’ll say here is that this is a strong commitment to take on.

    For what it’s worth, I’m sympathetic to evidence-based accounts of reasons (cf. my “Right in Some Respects” in Philosophical Studies, 2017). I think a proponent of (at least some versions of) such accounts has answers to the objections from enablers and from testimony. But I’m more interested now in how versions of those objections apply to the principles Benjamin invokes and what he might have to say in response.

  12. Thanks Eva, I see. On my view, it is quite important to distinguish between what reasons an agent has and whether the agent has reasons, or has decisive reason, for a response.
    I’m an externalist about reasons (whether R is a reason for A may depend on factors that make no difference to A’s internal state), but I sympathize with internalism about decisive reasons, which is compatible with externalism about reasons if one assumes something like the backup view. Thus, I disagree with your interpretation that “believed truths qua truths don’t really do any work when it comes to fixing what reasons a subject has”. Since I’m an externalist about “what reasons a subject has”, external facts make a difference here. For example, whether that it’s raining is among your reasons really depends on whether it’s raining. However, it would be correct to say that external facts don’t do any work when it comes to fixing whether the agent has (decisive) reason for an attitude (at least when we focus on the synchronic case) – this is trivially entailed by the kind of internalism I sympathize with.

    Given this, you might now ask: What’s the difference between my view of decisive reasons and a belief-relative view about decisive reasons? Well, a belief-relative view would presumably hold that whether one has decisive reasons for an attitude supervenes on the contents of one’s beliefs (including false or unjustified beliefs), while I would rather say that it supervenes on one’s phenomenal state. These views differ in their verdicts about what we ought to believe/intend/etc. It also seems to me relevant that in contrast to a belief-relative view, my view does not assume that false propositions or unjustified states determine what we have decisive reason to believe/intend/etc.

    Finally, you might want to know whether my view on decisive reasons is “subjective”. That depends on what you mean by “subjective”. If you mean “belief-relative” or “non-factive”, then no. If you mean “internalist” or “accepting supervenience on the mental”, then yes.

    Hope that helps!

  13. Thanks for getting on board, Clayton!

    Regarding the first issue, there seem to be two separate questions. One is whether someone with impaired abilities of grasping the normative significance of certain potential reasons can nevertheless be said to *have* these reasons or be subject to them as available reasons, and thus on my view can be said to fail to be rational when violating them. The second question is whether someone who knows the relevant non-normative facts in Huck’s circumstances and intends to turn Jim in despite having decisive available reason not to, is on my view irrational.

    With respect to the second question, I would have thought that, from what you’ve told us about the case, it does not follow that the person is irrational: she intends against her decisive reasons, but there are possible scenarios in which the intention to turn Jim in would be supported by sufficient reasons. For example, there is a possible scenario in which someone knows all the non-normative facts that are relevant in Huck’s case, but also knows that by turning Jim in she would save Jim’s life, and in that scenario she would have sufficient reason to intend to turn Jim in. Thus, the intention fails to be rational, but is not irrational – which seems to me the intuitive thing to say if one wants to make a distinction between being less than rational and being irrational.

    With respect to the first question, I am indeed inclined to believe that R is an available reason for A to φ only if A is able to grasp the favoring relation between R and φ-ing. And this does indeed entail that if A is unable to grasp the favoring relation, someone with a better grasp of the favoring relation cannot truly say that R is a reason available to A even if A knows R – all that this person could truly say is that R is a *potential reason* for A, i.e. a fact that would be a reason for A if A could grasp its normative significance.

    Regarding the second issue, I do in fact embrace in the book the principle that sufficient evidence for sufficient evidence for p entails sufficient evidence for p, and I also provide some reasons for accepting it (250–3). But I’m aware that it is a very controversial principle, and this is definitely an issue I have to think longer and harder about. Please let me know what work of Williamson and Dorst you are referring to.

  14. Hi Daniel, Good to hear from you, and thanks for your questions!

    Re #1, I found your counterexample to principle PR⇒EPR convincing and was startled to be told that I embrace that principle in the book. But I think there is a misunderstanding. Your counterexample relies on an understanding of “p provides a reason” according to which “p provides a reason iff p is a reason”. But in the relevant passage, I meant to use “provide” in a looser sense. In your case, I would agree that the fact that Kelly can go to the cinema is not a reason for her to go, even though it provides a reason for her to go. It provides a reason by making it likely that she has promised to go to the cinema, which in turn is, in my view, a reason to go to the cinema (i.e. the reason for Kelly to go to the cinema is not that she can go, but that she probably has promised to go, or that she would risk breaking a promise if she didn’t go).

    Re #2: Yes, the case in which pure testimony that one ought to φ (without any evidence of why one ought to φ) is supposed to provide sufficient evidence for believing that one ought to φ seems to be the hardest case for the evidence principle. I’m not yet convinced that I need to accept that such a case is possible. You are right, of course, that in general sufficient evidence that p need not be evidence why p. But in this case, ‘p’ is special: it is a proposition about the deliberative ‘ought’, the truth conditions of which depend on Jim’s epistemic circumstances and perhaps also, as I’m inclined to believe, on Jim’s capacity to grasp the normative significance of the relevant reasons (see also the exchange with Clayton). It’s difficult to see how Betty could speak truly (and thus difficult to see why she should be trusted) if Jim has no independent evidence of any reasons whatsoever for clapping.

    Supposing, however, that such a case is possible, couldn’t we say that Betty’s testimony that Jim ought to φ raises the expected value of φ-ing, and that Jim’s available reason for clapping in this case is the fact that clapping is expectably good or best?

  15. Many thanks to Alex for raising all of these challenging objections, which help me to clarify and (hopefully) improve my view of these issues! I fear that I will have to write another book in order to adequately respond to all of these points. But let me try to say at least a bit about them. (I should add that I’m leaving for holidays tomorrow and may not be able to respond to further comments in a timely manner – I will, however, try to do my best.)

    Let me start by saying that, in my opinion, everyone should agree that there are cases of rational incoherence. Consider Broome’s Goldbach conjecture case:

    “Suppose you believe the axioms of arithmetic, and you also believe the Goldbach conjecture. The axioms together with the conjecture may constitute a set of logically inconsistent propositions; mathematicians have not yet worked out whether or not this is so. If it is so, you have inconsistent beliefs, but we should not say on that basis that you are not rational. Despite a lot of devoted work, no one has yet discovered a counterexample to the Goldbach conjecture. That suggests you may at present rationally believe it, even if it is actually false.” (Broome 2013, Rationality Through Reasoning, 154–5)

    Thus, you might have inconsistent beliefs (and thus incoherent attitudes) without being irrational in the ordinary sense of that term. It follows that the property of incoherence cannot be identified with the property of irrationality, or the property of a kind of irrationality, as long as we are dealing with the ordinary sense of ‘irrationality’. One might of course stipulate a theoretical notion of ‘structural irrationality’, according to which every incoherence by definition amounts to irrationality. However, I am not concerned with such a stipulative notion, but with the ordinary notion of ‘irrationality’.

    Alex wants to know what incoherence, in my view, amounts to if it cannot be identified with a kind of irrationality. I have to concede that I do not have a full-blown conception of coherence, but the point I just made will be true for any conception of coherence according to which coherence requires logical consistency. One way to approach the question of what coherence requires above and beyond consistency would be to start from the idea that attitudes have correctness conditions, and that sets of attitudes are incoherent when they cannot all be correct. But as far as I can see, I might also adopt a psychological notion along the lines of the one that Alex suggests elsewhere, according to which attitudes are incoherent when agents are disposed to give up at least one of them under conditions of full transparency. On either notion, it seems to be an open question whether it is irrational (in an ordinary, non-stipulated sense) to exhibit a certain instance of incoherence. It often is, but as Broome’s Goldbach case or the preface case illustrate, it need not be.

    On what grounds then can we decide whether a given case of incoherence is a case of irrationality? One way, which corresponds to Alex’s first interpretation of my response, is to reflect on our intuitions about the case without having a particular theory of irrationality in mind. Another, more revisionary way, which corresponds to Alex’s second interpretation, is to decide this on the basis of a theory that is independently motivated. While I agree that the non-revisionary approach is preferable, I don’t think there is anything in principle wrong with pursuing the second approach under certain conditions. If we have to choose between a theory of irrationality that captures all pretheoretical intuitions about particular cases, but at the cost of being unable to explain, for example, what different cases of irrationality have in common and why irrationality is criticisable, and a theory that entails that some of our first-order intuitions about irrationality are overgeneralizations and need to be revised, the latter might be preferable.

    However, when I said that preface beliefs or intransitive preferences aren’t irrational, I took myself to be pursuing the non-revisionary approach. I believe that pretheoretical reflection alone suggests that these attitudes aren’t irrational. One way to get to this conclusion is by asking yourself whether there is any good sense in which people are criticisable for having such attitudes; another one is by asking whether they have made any mistake in exercising their rational capacities. Using this method of pretheoretical reflection, I can also plausibly conclude that Setiya’s smoker isn’t irrational (although this might already involve a refinement of intuition, because, as I said in the book, the boundaries between irrationality and certain kinds of compulsion aren’t sharp in ordinary language).

    In sum, by rejecting the claim that these cases of incoherence are irrational, I do not give up on the ambition of vindicating our pretheoretical intuitions about irrationality – I do maintain that my account captures at least nearly all cases that should be qualified as irrational after pretheoretical reflection.

    ***

    In his second post, Alex objects that my treatment of Setiya’s smoker gives up on the idea that “one can tell that a set of attitudes is structurally irrational without having to determine whether the individual attitudes are supported by the relevant reasons”. I don’t think that I’m forced to give up on this idea. Generally, I think that ascriptions of rationality and irrationality are capacity-sensitive, and for this reason one can never tell whether a person is irrational just by looking at her attitudes. One also needs to have a look at her capacities, and so the fact that one of the attitudes involved is unalterable is relevant. What is distinctive about structural irrationality is that one need not know which individual attitude is insufficiently supported by reasons in order to know that a person is irrational. Alex points out that I couldn’t maintain this claim if I wanted to say that Setiya’s smoker would be structurally irrational if she had decisive reasons to pursue her unalterable end, for this would mean that whether she is structurally irrational would depend on her reasons for the end. This is an interesting point, which I need to think about a bit longer. For now, I’m not sure why we should think that it would be a fundamental problem if a theory entailed that in such a case, the agent is not irrational due to her incoherence, but merely fails to be rational for not intending an action she has decisive reasons to intend.

    Alex also worries that allowing for cases of rational incoherence undermines the general idea that attitudes are irrational when they guarantee a reason violation. If Setiya’s smoker isn’t irrational, Alex asks, how could the ordinary incoherent smoker, who has the same attitudes, be irrational? More generally, if there are exceptions to the claim that means/end-incoherence is irrational, how can an account according to which a set of attitudes is irrational if it *guarantees* a violation of decisive reasons allow for non-exceptional cases of irrational means/end-incoherence to begin with?

    This is a fair question, and I think I could have been clearer in the book about the implications of exceptional cases for my general theory of irrationality. As far as Setiya’s smoker is concerned, the answer has to do with the capacity-sensitivity of rationality and irrationality. As I said above, having certain attitudes is never sufficient for being irrational; one also needs to have the capacity to alter them in the light of reasons. Thus, what guarantees a violation of decisive reasons is, strictly speaking, not that one has incoherent attitudes of a certain sort, but that one has *alterable* incoherent attitudes of a certain sort, i.e. attitudes with regard to which one is capable of changing one’s mind in the light of reasons. Thus, refining the proposal I made on pp. 236–9, I would now say that having *alterable* attitude-states that guarantee a violation of decisive reasons is a necessary and sufficient condition of having irrational attitudes. This explains why the ordinary incoherent smoker can be irrational even if Setiya’s smoker isn’t.

    With regard to other exceptional cases, my answer is contained in the general account of instrumental irrationality that I propose on pp. 288–91. Roughly, one’s (alterable) attitudes guarantee a violation of decisive reasons if one intends to φ, does not intend to ψ, and has attitudes relative to which the expected costs of intending to φ while refraining from ψ-ing are higher than the expected costs of making a decision either for ψ-ing or for giving up the intention to φ. This proposal is designed to distinguish the non-exceptional cases of means/end-incoherence from the exceptional ones and explains why the former can be irrational, while the latter aren’t.
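
    Just to make the shape of this proposal vivid, here is a toy calculation (a rough sketch in Python; all the numbers are stipulated purely for illustration and are not taken from the book):

        def expected_cost(outcomes):
            # outcomes: a list of (probability, cost) pairs
            return sum(p * c for p, c in outcomes)

        # Ordinary means/end-incoherence: remaining in the incoherent
        # state is expectably costlier than resolving the conflict in
        # either direction, so the condition for irrationality is met.
        incoherent  = expected_cost([(0.8, 10.0), (0.2, 4.0)])  # 8.8
        take_means  = expected_cost([(1.0, 6.0)])               # 6.0
        give_up_end = expected_cost([(1.0, 5.0)])               # 5.0
        print(incoherent > min(take_means, give_up_end))        # True

        # Exceptional case: remaining incoherent is expectably the
        # cheapest option, so the condition is not met.
        incoherent  = expected_cost([(1.0, 2.0)])
        take_means  = expected_cost([(1.0, 9.0)])
        give_up_end = expected_cost([(1.0, 7.0)])
        print(incoherent > min(take_means, give_up_end))        # False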

    ***

    I will take up Alex’s more general worries about my account of irrationality (raised in his third comment) in another post, hopefully before long!

  16. Hi Benjamin,

    Thanks for your response. I just wanted to say two things to follow up.

    1. On normative ignorance and structural irrationality
    There were two related things that I was interested in. In raising the question, I was assuming something like this:

    Counterparts: If A and A’ are epistemic counterparts (i.e., non-factive mental duplicates with the same capacities), there would be a decisive (available) reason for A not to X iff there would be a decisive (available) reason for A’ not to X.

    One idea behind the question was something like this—maybe someone with different mental states and/or aware of different facts could, on your view, have lacked decisive reason not to hand in Jim, but it seemed that, given the mental profile I was envisaging, this subject would have had no epistemic counterpart who could act on the intention to turn Jim in without thereby acting against a decisive reason. (At least, not if the original subject acted on this intention and thereby acted against decisive reason.) I raised the case because I thought it pointed to an interesting difference between cases where (a) acting against decisive reason supervenes upon the agent’s mental states and (b) the irrationality seems to be due to clashes, tensions, etc. among the mental states, as I thought that an agent acting in normative ignorance against a decisive reason would be a case of (a) but not (b).

    There’s a second issue here as to whether a kind of inability to grasp (what I’d think of as) the normative significance of the known facts has any bearing on what the available facts give the agent reason to do. One worry (which I pressed in a paper criticising internalists who accept Enkrasia and similar bridge principles) might be put like this. Suppose Duck knows that Jim is an escaped slave and doesn’t understand (what I’d think of as) the normative significance of this. Suppose Duck also knows that there’s a reward for identifying runaway slaves that Duck could collect and that Duck could use this money to support his family. (Maybe it’s an economic need that isn’t massive but is large enough that, if there’s a way to see to it that it’s taken care of and there’s no reason not to do it, it’s something that Duck ought to do.) I’m not really inclined to say that this second reason is switched on and the first one isn’t if that means that Duck ought to turn Jim in. I certainly don’t want to blame Duck for *not* turning Jim in if he doesn’t, but on some views this is weird, because these views seem to predict that Duck might know that he ought to turn Jim in, and it is weird to think that we shouldn’t blame someone for failing to do what they know they ought to do (if, say, they aren’t under duress and the case is simply a case of akrasia).

    Maybe I could put the second point like this. I think this is a sensible principle:

    If A knows that she ought to X and A’s failure to X is a clear case of akrasia, it is proper to blame/criticise A.

    Putting this to use, I might say this. If Duck doesn’t turn Jim in but believes that he ought to, criticism/blame isn’t appropriate. So, I don’t think we want a view on which Duck can know that he ought to turn Jim in. If a view says that what Duck ought to do is determined by the available reasons and his ability to grasp their significance, the view implies that the available reasons don’t provide Duck with a reason not to turn Jim in but only some moderately strong reason to turn him in. Because of this, it seems prima facie plausible that such views say that Duck can know that he ought to turn Jim in. So, I think such views cannot give us the verdicts we want—that it’s not true that Duck ought to turn Jim in and that it’s not appropriate for us to blame/criticize Duck.

    2. Enkrasia and Evidentialism
    On the issue of sufficient evidence and Enkrasia… This is a kind of update on an argument from my 2012 book—an argument that is supposed to show that (given some assumptions that you probably aren’t happy with) Enkrasia supports externalism about justification, ought, etc. (in part because it supports the view that a kind of false normative proposition cannot be justifiably believed). What I like about this new little puzzle is that the surprising(ly problematic) features of Enkrasia don’t require any particularly externalist assumptions about reasons, grounds of duty, etc.

    It seems that Enkrasia supports what I’ll call (following Titelbaum) Fixed-Points. For any X that A can do, either A ought to X, A ought not X, or neither. Suppose that A is in the second situation—A ought not X. In such a situation, A cannot (without violating Enkrasia) believe that she ought to X and do other than X. In such a situation, A cannot (satisfying Enkrasia and meeting her other responsibilities) believe that she ought to X and X. So, if Enkrasia is correct (i.e., A ought not: believe that she ought to X and do other than X), this is, too:

    Fixed Points: If A ought not X, A ought not believe that A ought to X.
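
    To see the derivation at a glance, here is a crude brute-force check (a Python sketch of my own; the two “violates” flags just encode the two constraints mentioned above):

        # Given that A ought not X, enumerate A's options and flag the
        # constraints each option violates.
        for believes_ought_X in (True, False):
            for does_X in (True, False):
                violates_enkrasia = believes_ought_X and not does_X
                violates_ought_not_X = does_X
                ok = not (violates_enkrasia or violates_ought_not_X)
                print(believes_ought_X, does_X, ok)

        # No option with believes_ought_X == True comes out ok: the belief
        # forces A either to violate Enkrasia or to X against the ought.
        # That is the Fixed Points claim.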

    Fixed-Points isn’t surprising on some views of justification (i.e., a view like mine that says that for all p, A ought not believe p if ~p), but moderately surprising on standard views of justification that allow for false, justified beliefs. A typical motivation for these non-factive views of justification appeals to something like the evidentialist view mentioned above (i.e., A ought to believe p when A is actively considering whether p and A has sufficient evidence that p). Unfortunately, it seems that the failure of evidence of evidence principles suggests that Enkrasia and Evidentialism might be in tension.

    One thing you get from the Dorst and Williamson articles is a way of thinking about evidence of evidence principles using some tools from epistemic logic. In Dorst’s model, you get this cool little principle:

    If the probability that (the probability of p is at least b) is at least a, then the probability of p is at least a × b.

    Let these probabilities be probabilities on the agent’s evidence at a time. If the models vindicate only something this weak, and there’s some threshold probability that p must have on the evidence for a thinker to have sufficient evidence to believe it, we should expect that a thinker’s evidence might provide sufficient evidence that the thinker has sufficient evidence to believe each member of some large set of propositions {p1, p2, …, pn}, even if the thinker knows that there is one member of this set for which the evidence doesn’t provide sufficient support.
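
    Here is the arithmetic with toy numbers (a Python sketch; the 0.9 threshold and the 0.95 higher-order probability are stipulated purely for illustration):

        THRESHOLD = 0.9  # stipulated bar for "sufficient evidence to believe"

        # Suppose that, for each p_i, the evidence makes it 0.95 probable
        # that the evidential probability of p_i is at least 0.9.
        a, b = 0.95, THRESHOLD

        # Higher-order: sufficient evidence that there is sufficient
        # evidence for p_i.
        print(a >= THRESHOLD)      # True

        # First-order: Dorst's principle only guarantees P(p_i) >= a * b.
        print(a * b)               # 0.855
        print(a * b >= THRESHOLD)  # False -- p_i itself may fall short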

    Suppose a thinker has this kind of evidence. Using Evidentialism and Enkrasia, we can derive that each proposition that is a member of the set is such that the thinker may believe it. Using Evidentialism and the description of the evidence just given, we can derive that there is at least one that the thinker ought not believe. (NB: this is all an extension of stuff that Dorst and Williamson say in their papers. It’s non-trivial, but not implausible.) So, given a certain model of how first-order and higher-order evidential probabilities relate and some assumptions about the need to cross thresholds to have justification to believe, it seems we should predict clashes between Evidentialism and Enkrasia.

    That might make it hard to believe in certain kinds of ‘down’ principles (e.g., if you justifiably believe that you justifiably believe p, you justifiably believe p), but as Dorst notes, this really depends upon how you think of justified belief. I think it should be identified with knowledge, so the down principle is a trivial consequence of this factive account of justification. If, however, you have a non-factive view on which justification requires some strong degree of support on the evidence, we should expect there to be counterexamples to the down principles. It’s helpful to think about the converse of the JJ principle:

    Converse JJ: If you lack justification to believe p, you have sufficient justification to believe that you lack it.

    This kind of principle might, given the threshold picture of justification, commit us to something like a kind of luminosity or lustrousness principle about evidential support relations of the kind that Williamson would be sceptical of. It seems that scepticism about such principles might extend to scepticism about evidence of evidence principles of the kind that you’d have to accept if you combined Evidentialism with Enkrasia. Of course, the models that you get in Dorst and the assumptions operative in Williamson’s discussion are among the things in the world that aren’t luminous! Still, I think the issues that they raise are interesting, particularly given how popular Enkrasia is in some circles. (And if anyone wants to join my band of Enkrasia-loving radical externalists, they’re free to do so!) The Dorst and Williamson papers are really interesting (and incredibly difficult for someone like me with little formal training), but if you’re interested, have a look:

    http://media.philosophy.ox.ac.uk/docs/people/williamson/evidenceofevidencenew.pdf
    https://www.kevindorst.com/uploads/8/8/1/7/88177244/17.12_eagu.pdf

    It’s an interesting question how Evidentialism would have to be understood to make it compatible with Enkrasia, if I’m right about what’s possible given their work on evidence of evidence principles. As with the Jim case, it might be that the right thing to say here is that support relations have to be understood in ways sensitive to the kinds of abilities an individual thinker has – maybe this helps block the idea that there can be propositions such that a thinker lacks sufficient evidence to believe them whilst having sufficient evidence to believe that they have sufficient evidence to believe them. That would seem to require cutting some ties between evidential support relations and evidential probabilities, but that’s a route one could go.

  17. Hi again Benjamin! Thanks for your replies. A couple of responses, one quick and the other longer.

    First, on the point about the relationship between incoherence and irrationality. I agree with you (as I hinted in my previous reply) that inconsistent beliefs are not necessarily irrational. But I also deny that inconsistent beliefs are even necessarily incoherent, and so I don’t think of this as a counterexample to the claim that all incoherence is irrational, or indeed to the stronger claim that incoherence just is a kind of irrationality, such that any considerations that show attitudes not to be irrational thereby show them not to be incoherent. For what it’s worth, I also think that those inconsistent beliefs that aren’t irrational also fail to meet the (counterfactual) psychological definition of incoherence that I defend elsewhere and that you refer to. In the preface case, it is possible to sustain beliefs in each of the claims in the book plus the preface claim even under conditions of full transparency – on my account, this means that the set of beliefs is not incoherent.

    Second, the issues relating to Setiya’s smoker case and the more general issues it raises. We have four instances of means-ends incoherence as we vary two variables: alterable vs unalterable, and bad end vs good end:

    a) Alterable, bad end (e.g. alterably intends to smoke, believes buying cigarettes is a necessary means to smoking, does not intend to buy cigarettes)
    b) Unalterable, bad end (unalterably intends to smoke, believes buying cigarettes is a necessary means to smoking, does not intend to buy cigarettes) (Setiya’s original case)
    c) Alterable, good end (e.g. alterably intends not to smoke, believes staying away from the bar is a necessary means to not smoking, does not intend to stay away from the bar)
    d) Unalterable, good end (e.g. unalterably intends not to smoke, believes staying away from the bar is a necessary means to not smoking, does not intend to stay away from the bar)

    Uncontroversially, cases (a) and (c) involve structural irrationality. Benjamin’s view holds that case (b) is (though incoherent) not an instance of structural irrationality. Given that verdict, plus Benjamin’s commitment to the view (which I share) that whether attitudes are structurally irrational is independent of whether there are good reasons for them, he is also committed to saying that case (d) is not an instance of structural irrationality. But this claim about case (d) seems to me unmotivated. One thing that brings that out is a comparison with case (c). Case (c) uncontroversially involves structural irrationality. But what about case (d) makes it less structurally irrational than case (c)? The mere fact that the (good) end is unalterable does not seem, in and of itself, to make an intuitive difference.

    While I don’t find there to be an intuitive difference between case (d) and case (c) so far as rationality is concerned, I do acknowledge a kind of intuitive difference between case (b) and case (a). In case (b), there is something to be said in favor of means-end incoherence, namely this: given that one can’t change one’s end, being means-end incoherent is the best one can do to avoid the bad consequences of smoking. That’s not so in case (a), since one can just give up the end itself in that case. (But, note, nor is it so in case (d): being means-end incoherent is not the best one can do to avoid realizing a bad end in case (d), since by hypothesis the end is not bad!)

    My own view can accommodate this difference by saying that case (b), unlike all the other cases, is one where it’s substantively rational to be means-ends incoherent. After all, the “something to be said” in favor of means-end incoherence in case (b) is something substantive; it turns on the fact that smoking is bad and is something one has reason to avoid doing. But, in my opinion, it would be wrong to conclude from this that case (b) doesn’t involve any structural irrationality. Indeed, we can sum up the considerations I’ve given here into an argument for this conclusion:

    1) The attitudes in case (c) are structurally irrational
    2) If the attitudes in case (c) are structurally irrational, the attitudes in case (d) are structurally irrational
    3) If the attitudes in case (d) are structurally irrational, the attitudes in case (b) are structurally irrational
    Therefore,
    4) The attitudes in case (b) are structurally irrational

    Premise (1) is uncontroversial, since case (c) is just garden-variety means-end incoherence. Premise (3) follows from the claim, accepted by Benjamin, that whether structural irrationality is present shouldn’t turn on whether the attitudes involved are supported by substantive reasons. That leaves only premise (2) for Benjamin to resist, but as I’ve already suggested, this is undermotivated.

    My own view preserves the simple thought that all means-ends incoherence (and therefore, all of cases (a)-(d)) is structurally irrational. But it holds that in some odd cases (like case (b)), there are substantive reasons to be structurally irrational. This seems to me to get all the intuitions right. But it does entail a conflict between substantive and structural rationality, which creates a problem for Benjamin’s account of structural rationality.

    By the way, it’s worth noting that the reasons to be means-end incoherent in case (b) are ones that it might be quite difficult to respond to. If one really does unalterably intend to smoke, and one is conscious of this intention, and conscious of one’s belief that to smoke one must buy cigarettes, then it may be hard to avoid also intending to buy cigarettes. Plausibly, if one deliberately and consciously refrains from intending to buy cigarettes, in full awareness of the belief that doing so is necessary for smoking, then one has ipso facto given up one’s intention to smoke (which was, by hypothesis, supposed to be impossible in the case at hand). Even more plausibly, if one refrains from intending to buy cigarettes specifically in order to avoid smoking – as might be necessary for this refraining to count as a response to the relevant reasons – one has ipso facto given up one’s intention to smoke. If this is right, the unalterability of the intention to smoke may entail an inability to refrain from intending to buy cigarettes – or at least an inability to reflectively and consciously refrain from intending to buy cigarettes, on the basis of the relevant reasons. It may well be, then, that any deliberate attempt to get oneself to be means-end incoherent (or structurally irrational generally) will have to involve some measure of self-deception or concealment of one’s own mental states from oneself. Again, this fits in with the account of incoherence I give in terms of what can be psychologically sustained under conditions of full transparency. And for me, it reinforces how the incoherent states in case (b) are, for all that there is to substantively recommend them, structurally irrational. The fact that it takes such self-deception to get oneself into such a state reflects the irrationality of the state.

  18. Thanks to Clayton and Alex for their new posts. What follows is a response to the general worries that Alex raised in his third post about the view that irrationality is a matter of the modal robustness, rather than the severity, of a reasons violation. It might take a while until I am able to respond to more recent and further posts, as I’m without internet for the next couple of days.

    To begin with, it’s worth pointing out that I could in principle concede quite a lot of what Alex says and integrate it into the general picture that I defend. For one, it seems to me that I could just accept that there is a graded notion of ‘irrationality’, according to which one is irrational to the degree to which one fails to be rational, in addition to the on/off notion in terms of modal robustness of a failure to be rational. The existence of a graded notion in terms of severity does not seem to conflict with my project. For another, it seems to me that I could even renounce my conception of irrationality and accept the graded conception that Alex prefers when it comes to substantive irrationality as the general theory of irrationality. If there were no difference between failing to be fully rational and being irrational, this would in effect make some things easier for me. It would turn out that some forms of structural irrationality are relatively weak forms of irrationality, but the distinctiveness of structural cases of irrationality might still be explained by reference to the fact that these are cases in which the irrationality is guaranteed by the structure of one’s attitudes.

    Still, the idea that there is a non-graded notion of irrationality that has to do with the modal robustness of the reasons violation seems to me worth pursuing, so let me make a couple of points in response to Alex’s worries:

    According to Alex, empirical beliefs, such as the belief that climate change is a hoax, can be good examples of irrationality, even though they do not qualify as irrational on my view. In response, I first want to point out that resistance to the idea that such beliefs aren’t irrational might well come from the presupposition that if such beliefs aren’t irrational, then they are rational. But of course, I accept (in contrast to some proponents of structural rationality norms) that these beliefs involve severe failures of rationality. Once this is on the table, it might seem less clear that such beliefs must qualify as irrational.

    However, I also share the intuition that beliefs of the sort at issue can be irrational, and would like a theory of irrationality to capture this intuition. My suspicion is that the intuition in question is due to background assumptions about how this belief is based rather than how it is supported by reasons. When we think of people with such beliefs, we do not typically think of them as seriously trying to get the evidential support relations right but then failing to do so (at least, if I think of them this way, I don’t see the pressure to think of them as irrational – rather than as failing to be fully rational). We implicitly assume that the beliefs are based on ideology, wishful thinking, or the like. Now suppose, as seems plausible in my view, that there are substantial norms of rationality that prohibit, for example, basing beliefs on desires. It would follow that beliefs that are based in this way are necessarily and not only contingently prohibited by substantial rationality norms. It thus seems to me that I can agree that such beliefs can be irrational, although I would give a different diagnosis of why this is so. According to Alex, the beliefs are irrational because there are strong epistemic reasons against them. In my view, that only shows that these beliefs fail to be rational. That they are irrational has to do with the fact that they are based in ways that could not be permitted by substantial norms of rationality. The views differ in cases where the relevant beliefs are based only on evidence, even though this evidence happens to be insufficient. In such cases, I find the verdict of my view more plausible.

    Alex next highlights some implications of my view when it is combined with moral rationalism and certain moral theories. For example, on the assumption of act-utilitarianism and moral rationalism, my account implies that intending to start a nuclear war is not irrational, while a minor favoring of a relative over a stranger is irrational. Alex thinks that this result is “perverse”. It is clear that this must seem perverse if one presupposes that irrationality is a matter of the severity of a violation of reasons. But if one does not presuppose this, and grants that this implication is compatible with the intention to start a nuclear war amounting to a much more significant failure of rationality, the implication does not strike me as problematic.

    Finally, Alex objects that on my account whether an action is irrational will depend on the description of the action, so that I face the challenge of coming up with a general account of the level of detail at which to describe actions. But I think that if actions can be irrational at all, they can be irrational only in virtue of being intentional, and only in virtue of the fact that the relevant intention is irrational. Thus, the relevant description will always be the description under which the action is intended.

  19. Thanks, Benjamin!

    First: for my own part, I find it natural to describe beliefs that grossly go against the evidence as irrational, even when they are based on mistaken evaluations of the evidence rather than simply on desires. I think it is a mistake to suppose that evidential support relations are obvious to everyone, such that deviations from what the evidence supports must be explained by a failure to even try to respond to evidence, rather than by mistaken evaluations of what the evidence supports. Many conspiracy theorists and others with paradigmatically irrational beliefs have complex rationalizations of why they take themselves to have evidence for their beliefs. I don’t deny that desires may play a role in generating their beliefs, but they do so *via* mistaken beliefs about what the evidence supports, so that the beliefs aren’t just *directly* based in desires alone. (See Ziva Kunda, “The Case for Motivated Reasoning”, for a very helpful survey of relevant empirical work here.)

    Secondly, on the issue about starting a nuclear war (where the evidence makes it obvious that this will have disastrous consequences) vs. minor favoring of a family member over a stranger. It’s certainly right that:

    1) If we presuppose an account of irrationality as (graded) failure to be perfectly rational, then it will seem wrong to describe the minor favoring of a family member as irrational while describing starting a nuclear war as rational.

    It’s also true that:

    2) If we presuppose an account of irrationality in terms of the modal robustness of the failure to be perfectly rational, then it will seem OK to describe the minor favoring of a family member as irrational while describing starting a nuclear war as rational.

    However, when we use such cases to try to test the intuitiveness of these two theories of irrationality, clearly this can only work if we try not to presuppose either theory, and try instead to just ask what is pretheoretically intuitive. To the extent that I think I can do this, it does seem pretheoretically unintuitive to me to describe the minor favoring of a family member as irrational while describing starting a nuclear war as rational. That speaks against the modal robustness theory and in favor of the graded failure to be perfectly rational theory. Of course, it’s possible that I’m just in the grips of the latter, and am failing in my attempt not to presuppose it. But this is always a danger when we use a case to try to test two theories that yield conflicting results about that case. We’d just have to see what ordinary folks who have no prior commitments to either theory would say, I guess.

    Finally, I think the solution to the ‘level of detail of description’ problem in terms of the description under which the action is intended is promising. However, it still has drawbacks. It presupposes that there is a fact of the matter about the exact description under which our actions are intended. In the stranger/family member case, it is natural *both* to say that I intend to help Jessica *and* to say that I intend to give Jessica’s well-being priority over the stranger’s. Yet while the first act is not guaranteed to be a failure to respond to my reasons, the latter (assuming act-utilitarianism and moral overridingness) is. Maybe there is a fancy solution to this problem, but I still think an account on which it never even arises has the upper hand.

  20. Dear Clayton,

    Thanks for your clarification regarding the issue of Enkrasia and Evidentialism; this is very helpful.

    Regarding the issue of normative ignorance, I’m still not sure I understand the connection between this and the issue of structural irrationality. I agree that “the agent acting in normative ignorance against a decisive reason would be a case of (a) but not (b)”. Perhaps you assume that if the decisive reason violation supervenes on the mind, then it is guaranteed by one’s attitudes and thus irrational on my account? This would be a misunderstanding because by “attitudes” I mean judgment-sensitive attitudes, while the mental supervenience base includes all sorts of other mental states.

    Regarding the issue of capacity-dependence, I can see that there are interesting challenges for the view that inability to detect normative significance has bearing on what reasons the agent has. Your case is a good example to think about. I’m inclined to think, however, that a person is indeed criticizable if she decides against the balance of those reasons the normative significance of which she is capable of grasping. Thus, provided that Duck’s lack of understanding is due to inability, I would maintain that he is criticizable. I might still join you in refraining from criticizing him, but for reasons that are independent of his criticizability. Your claim that “if Duck doesn’t turn Jim in but believes that he ought to, criticism/blame isn’t appropriate” seems to me in conflict with the plausible assumption that akrasia is criticizable. Finally, I have the worry that a view according to which an agent’s reasons do not depend on this agent’s ability to detect normative significance can be mineshafted. Suppose that A knows that p, A knows that she is incapable of detecting the normative significance of p, and A knows the following: If reasons/wrongness/rightness are independent of the ability to detect normative significance, then (if p is normatively significant, then option 1 is right, option 2 is a major wrong, and option 3 is a minor wrong; while if p isn’t normatively significant, then option 1 is a major wrong, option 2 is right, and option 3 is a minor wrong). It looks like the only morally responsible option for A is to go for 3 – yet if reasons are independent of the ability to detect normative significance, this option is known to be wrong. You can now run the misguidance argument (207–8) against this kind of view.
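
    To make the structure of this case explicit, here is a toy expected-wrongness calculation (a Python sketch; the 50/50 credence and the badness weights are of course stipulated):

        P_SIGNIFICANT = 0.5  # A's credence that p is normatively significant

        # Moral badness of each option: (if p is significant, if it isn't)
        badness = {
            "option 1": (0.0, 10.0),  # right / major wrong
            "option 2": (10.0, 0.0),  # major wrong / right
            "option 3": (1.0, 1.0),   # minor wrong either way
        }

        for option, (if_sig, if_not) in badness.items():
            expected = P_SIGNIFICANT * if_sig + (1 - P_SIGNIFICANT) * if_not
            print(option, expected)

        # Option 3 minimises expected wrongness (1.0 versus 5.0 for the
        # others), yet on the ability-independent view it is known to be
        # (minorly) wrong -- which is what invites the misguidance argument.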

  21. Dear Alex,

    Here are some responses to your most recent posts.

    Regarding the question of what incoherence is, I’m not sure it makes sense to ask for *the* correct notion of incoherence. It seems to me clear that there is a legitimate notion of coherence that requires logical consistency. Surely, there is a sense in which inconsistent beliefs don’t “fit together right”, to use your own words. For this notion of coherence, it is a substantial question whether it’s always irrational to be incoherent, and we both agree that the answer to this question is “no”. I’m fine with there being other notions of coherence – possibly there is also a notion according to which all forms of incoherence are necessarily structurally irrational. All that matters for what I have said in my initial response to your review is that there is a useful, legitimate notion of ‘coherence’ that does not entail that incoherence is necessarily irrational.

    For what it’s worth, I have some doubts that your psychologistic account gets the result that you want. In particular, I worry that it’s going to be difficult on your account to defend the assumption that preface beliefs aren’t irrational, while akratic attitudes are. In one of your posts, you say that it is possible to sustain preface beliefs under conditions of full transparency, and that this means that they are not incoherent on your account. But as you note in the article that defends your account of incoherence, it is also possible to sustain akratic attitudes under conditions of full transparency. Your move in the article is to weaken the account and say that akratic attitudes are incoherent to some degree because agents must at least be somewhat disposed to give up one of their attitudes under conditions of transparency. But it seems to me that the same can be said about preface beliefs, and perhaps about all other cases of inconsistency in belief that one is aware of. So I’m not convinced that your account is in a position to explain why akrasia is irrational while preface beliefs aren’t. My account, in contrast, can explain this: akratic attitudes guarantee a violation of decisive reasons, while preface beliefs might all be supported by sufficient reasons.

    But even if your account were able to get the extension of structural irrationality right, it would still seem to me that your view lacks much of the explanatory power that an account along the lines of the one I suggested has in virtue of being able to explain (a) what structural and substantive forms of irrationality have in common and (b) why irrationality is criticizable. On your view, structural and substantive irrationality are simply two entirely different properties that have virtually nothing to do with each other. And on your view, the property of structural irrationality is just some kind of psychological property we might have no reason to avoid having. It is mysterious why ascribing this property to someone is a form of criticism.

    These explanatory virtues seem to me much more important than the costs that are involved in revising some judgments of irrationality in some very peculiar cases in which agents have ends that they literally cannot abandon. Consider the three relevant variants of Setiya’s smoker again:

    b) Unalterable, bad end (unalterably intends to smoke, believes buying cigarettes is a necessary means to smoking, does not intend to buy cigarettes) (Setiya’s original case)
    c) Alterable, good end (e.g. alterably intends not to smoke, believes staying away from the bar is a necessary means to not smoking, does not intend to stay away from the bar)
    d) Unalterable, good end (e.g. unalterably intends not to smoke, believes staying away from the bar is a necessary means to not smoking, does not intend to stay away from the bar)

    My view seems committed to saying that in (d) the agent is “not rational” instead of (structurally) irrational, and thus to rejecting premise (2) of your argument, according to which (d) is a case of structural irrationality if (c) is. This, you say, is “unmotivated”. I disagree. For one, I have already offered pretheoretical considerations in favor of saying that (b) is not a case of irrationality ([b] involves no failure of rational capacities and no criticizability). We both agree that (b) is a case of structural irrationality if (d) is a case of structural irrationality (this is premise 3 of your argument). It follows that (d) is not a case of structural irrationality (note that I reached this conclusion without presupposing my account of irrationality). For another, as I already indicated in an earlier post, it does not strike me as unmotivated to revise some first-order judgments about marginal cases like this on the basis of a general theory, if the theory has great virtues that other theories lack. This might be the most reasonable outcome of reflective equilibrium, all things considered.

    Finally, some points on the question of what my view entails about believing conspiracy theories and intending nuclear wars. First of all, you are misunderstanding my view when you say that it describes “starting a nuclear war as rational”. On my view, irrationality is not the contrary of rationality, but a specific kind of failure to be rational. Hence, when I say that intending to start a nuclear war is not (as such) irrational, it does not follow that it is rational; it might still involve a huge rational failure.

    This is, in effect, exactly the kind of misunderstanding that I was suspecting behind your reaction that it is “perverse” for a rationalist utilitarian to describe minor favoring of a relative, but not starting a nuclear war, as irrational. You are presupposing a conception of irrationality according to which every violation of reasons amounts to irrationality, and ‘not irrational’ thus entails ‘rational’. I realize that this assumption is quite common in certain branches of philosophy (although it is much less common in moral philosophy than in epistemology), and that on this assumption the implication of my view seems highly questionable. I also don’t object to this usage of ‘irrationality’, which is well-defined and useful. My worry is that it is not the ordinary notion of ‘irrationality’, and in any case, my view rejects the assumption. In arguing against my view, one should thus avoid presupposing it.

    If, over and above the intuition that it is *not rational* to intend a nuclear war (according to the rationalist utilitarian), there is pressure to vindicate the idea that it is *irrational* to do so, I suspect that this is due to the background assumption that the relevant agent believes that starting a nuclear war will tend to produce more pain than pleasure. An agent with this intention/belief combination can be described as irrational by a rationalist utilitarian, according to my account. If we think of the agent as believing (perhaps falsely and unjustifiedly) that the nuclear war will tend to produce more pleasure than pain, in contrast, it is not clear to me why the rationalist utilitarian should insist that this agent is irrational (rather than just getting things wrong and thereby failing to be rational).

    With respect to irrationality in belief, I agree that the beliefs in question (such as the belief that climate change is a hoax) need not be based directly on desires in order to count as irrational. What I’m saying is that if the failure of conformity with standards of epistemic rationality is *merely* a matter of getting the evidential support relations wrong, then this failure of rationality does not amount to irrationality. I maintain that in the cases that really support the intuition that a belief is irrational, more than this kind of failure is involved: the belief is motivated in a way that is generally prohibited by substantive basing norms of rationality. This proposal is compatible with what you say about conspiracy theories and similar phenomena, in which the relevant beliefs are partly based on motivated false assumptions about evidential support relations, because substantive basing norms may well require us not to base beliefs on beliefs about evidential support relations that are themselves based on desires. My hypothesis was that the irrationality of certain individual empirical beliefs can be traced back to basing failures that are prohibited by substantive basing norms. Cases of irrational beliefs that are based on motivated beliefs about evidential support relations do not as such provide counterexamples, because they involve basing failures as well.
