Welcome to our NDPR Forum on Ralph Wedgwood’s The Value of Rationality (OUP 2017), reviewed by Ali Hasan. Please feel free to comment on any aspect of the book, the review, or the discussion below!
From the book blurb:
Ralph Wedgwood gives a general account of the concept of rationality. The Value of Rationality is designed as the first instalment of a trilogy – to be followed by accounts of the requirements of rationality that apply specifically to beliefs and choices. The central claim of the book is that rationality is a normative concept. This claim is defended against some recent objections. Normative concepts are to be explained in terms of values (not in terms of ‘ought’ or reasons). Rationality is itself a value: rational thinking is in a certain way better than irrational thinking. Specifically, rationality is an internalist concept: what it is rational for you to think now depends solely on what is now present in your mind. Nonetheless, rationality has an external goal – the goal of thinking correctly, or getting things right in one’s thinking. The connection between thinking rationally and thinking correctly is probabilistic: if your thinking is irrational, that is in effect bad news about your thinking’s degree of correctness. This account of rationality explains how we should set about giving a theory of what it is for beliefs and choices to be rational. Wedgwood thus unifies practical and theoretical rationality, and reveals the connections between formal accounts of rationality (such as those of formal epistemologists and decision theorists) and the more metaethics-inspired recent discussions of the normativity of rationality. He does so partly by drawing on recent work in the semantics of normative and modal terms (including deontic modals like ‘ought’).
From Hasan’s review:
Ralph Wedgwood’s book is the first installment of a trilogy, to be followed by The Rationality of Belief and The Rationality of Choice. It is a rich volume that offers an ambitious, general theory of rationality and its value. On Wedgwood’s view, rationality is a matter of coherence, in a broad sense that includes not only relations between beliefs or credences, but also between other sorts of mental states or mental events, e.g., between beliefs and sensory experiences. Rationality is a value. It is distinct from other sorts of values or norms at least in that it is an internal standard — rationality supervenes on what is in the mind — and a constitutive standard: “all thinkers have at least some disposition to conform to the most basic requirements of rationality, simply in virtue of their counting as thinkers at all” (202). As an internal standard, rationality is in the service of an external “aim” or standard of correctness. Wedgwood takes there to be a single, core normative concept of rationality, which can be applied to different “ways of thinking”: e.g., to beliefs and judgments in the theoretical domain, and to intentions and choices in the practical domain. For beliefs and judgments, correctness is a matter of truth or accuracy; for intentions and choices, it is a matter of the “practicable good.” The plan for the second and third installments is to elaborate on and defend this view of rationality in the theoretical and practical spheres respectively, but much of the groundwork and main contours are already present in the first book.
There is much to like about the volume. It promises to provide something of a holy grail in the philosophy of rationality: a unified account of rationality, applicable to both the practical and the theoretical domains, that preserves and explains internalist intuitions while also accepting a strong connection to the external “aim” of truth or the good. It brings a number of different debates that have largely — though not entirely — been pursued independently into a conversation with each other, including debates in metaethics, traditional epistemology, and formal philosophy (formal epistemology and decision theory). It investigates the interconnected debates on rationality in a clear and careful way, and develops a sophisticated account with important implications for each of these areas. The book can be a challenging read, largely in a good way. It is guaranteed to make you think hard about fundamental questions and puzzles regarding the nature of rationality. Readers familiar with one of the above areas or debates but not others might wish for a bit more guidance and background here and there. But there is something for everyone interested in rationality in this rich book, and many chapters can be read independently, depending on one’s interest.
[…]
Wedgwood says that rationality, while supervening on the internal, has an external “aim”: correctness (Chapter 9). That’s why rationality matters and is not just a “pretty pattern” in the mind. Its value is not “free standing” but depends on its relation to correctness. If some way of thinking is rational, that is “good news” about correctness: it “tells” us that our ways of thinking can be expected to do well in terms of securing these external goods. This all sounds very intuitive, but we have to be careful here, for it is not to be taken literally. To say that rational ways of thinking “aim” at correctness is just to say that there is an essential probabilistic connection between these ways of thinking and correctness. One’s current mental states and events give us “news” about the correctness of some particular way of thinking, but this just means (roughly) that our mental states and mental events determine a space of possible worlds, and a probability measure on this space of possible worlds.
How exactly is this space of possible worlds and probability distribution determined? Wedgwood provides only a sketch of the account here. “The intuitive idea is that these internal mental states and events have some connection to the truth (including the truth about the external world) that are essential to these mental states and events,” where the connection is “somehow built into the constitutive essence of those mental states and events” (223). The space of possible worlds for the agent — the epistemically possible worlds — will include only worlds that can be built out of propositions one can think using concepts one has. Conceptual truths, and truths about one’s own current mental states, built out of concepts one possesses must be true in every world in this space, and so have probability 1; the (ideally) rational state to have is full confidence in these truths. Some propositions are true in most possible worlds, and some are true in more worlds than others. For example, most of the epistemically possible worlds in which one has the perceptual experience as of seeing a tree are worlds in which one is seeing a tree, and are not, say, demon worlds. It is less intuitive that such synthetic, merely probabilistic connections would be constitutive of experiences, concepts, and/or beliefs that we have, and there are some significant challenges to the approach, but the details will have to await the second volume.
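To fix ideas, the picture might be put schematically as follows (the notation here is mine, not Wedgwood’s). Let W be the space of epistemically possible worlds for the thinker, and let P be the probability measure on W that her current mental states and events determine. Then, on a rough rendering of the above:

P(c) = 1, for every conceptual truth c and every truth about the thinker’s own current mental states;

P(I am seeing a tree | I have an experience as of seeing a tree) is high,

since most of the probability weight falls on worlds in which the experience is veridical.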
As already mentioned, on Wedgwood’s view the value of rationality is not “free standing” but depends on the value of its “aim”: correctness. This makes good, intuitive sense when it comes to practical rationality. But why should we think truth is a value? If we do not literally always aim at truth or have it as a goal, why should we think it has value at all? Unfortunately, hardly anything is said about this in the first book, which is surprising given the importance of this controversial claim for the account. Wedgwood does at one point appeal to the absurdity of Moorean questions like: “I agree that p is the correct proposition for me to believe, but why should I believe it?” (231) But the oddness here has multiple potential explanations. For one, if I earnestly agree that p is true, then it seems I can’t help believing it, and saying I should believe what I can’t help believing is odd, violating the principle that “ought” implies “can do otherwise.” Or the oddness might reflect the normativity of rationality rather than of truth, for it is a constraint of rationality that one believe what one takes to be true.
The other concerns I want to raise have to do with the internalist-externalist debate and Wedgwood’s related discussion of “guidance.” According to Wedgwood, the access internalist claims that rational belief requires that the agent have access to all the facts that determine one’s rationality, or all that rationality supervenes on, where access is understood in terms of being in a position to know these facts. Wedgwood argues that this leads to a vicious regress (166-7), and I think he is exactly right about this. However, although he identifies Fumerton and BonJour as access internalists (179, n. 18), Fumerton explicitly rejects the view so characterized, giving an argument very similar to Wedgwood’s own (Fumerton 2001 and 1995, 81). BonJour (2003) states that justification requires access to reasons to think one’s beliefs are true, and although he sometimes characterizes the view in apparently stronger ways, it is, at the very least, controversial to claim that he requires access to all that determines one’s justification. Moreover, for both, the fundamental kind of access is understood in terms of acquaintance or direct, conceptually unmediated awareness, and not in terms of being in a position to know. Such a view requires a form of access, is not vulnerable to the regress problem raised by Wedgwood, can accept that rationality supervenes on the mental, and provides an explanation of why certain mental states are epistemically relevant: they constitute our access to facts that make true, or make probable, what we believe.
For Wedgwood, rationality supervenes on the mental because facts about what is rational must be capable of “directly guiding” one’s thinking, and only what is in the mind can guide one’s thinking. But what is meant by “guidance”? Initially, Wedgwood says that what is essential to each normative concept is a certain guiding or regulative role it plays in one’s reasoning. For example, one might be guided explicitly by the belief that A is better than B (all things considered) by preferring A over B. More commonly, one might be guided implicitly, by a concept. The only example Wedgwood gives here is that one might be disposed to prefer A over B when one has evidence that supports A’s being better than B, without having the explicit belief (47). He later returns to the question of what direct guidance is. He considers the possibility that “some devious neuroscientist . . . manipulated your brain so that you would form a belief in whatever proposition you considered at that time, regardless of whether it was a logical truth or not” (182-3). Wedgwood says “it is a fluke” or “accident” that you believe what is (abstractly or propositionally) rational; guidance requires that it be “no accident that you form this belief in a situation in which it is rational for you to do so” (183). For it to be no accident that you form the belief in a situation in which it is rational for you to do so, your belief must manifest a general disposition of the right sort — a disposition, applying to some range of situations, to believe the proposition that is rational in the situation. Any reference to being guided explicitly by what you think is rational, or implicitly by evidence, disappears. At a fundamental level, it seems there is only a disposition that is (roughly) likely to be reliable in most or all possible worlds.
Given this view of what guidance involves, it is natural to consider apparent counterexamples to the sufficiency of reliabilist conditions, like BonJour’s Norman the clairvoyant (1980) and Lehrer’s Truetemp (1990), tailored to fit the mentalist constraints. Thus, consider Truelog, who has been fitted with a chip that gives him a highly reliable disposition to form very precise beliefs about logical matters, and Truepref, who has a highly reliable disposition to form very precise beliefs about his own preferences. Suppose they have no defeaters for their beliefs. Intuitively, Truelog and Truepref are no better justified than Lehrer’s Truetemp. It is not by accident that they get things right, but from their perspective it is no different from an accident: it is just “dumb luck” if they get things right. Wedgwood does not consider such cases, perhaps because he takes the access internalist view that can accommodate the relevant intuitions here to lead to a vicious regress, but we saw above that not all access internalisms are so vulnerable.
I am deeply grateful to Ali Hasan for his wonderfully careful, perceptive, and generous review of my book. I am delighted to have this opportunity to continue the discussion on PEA Soup.
As Niko Kolodny once memorably asked: “Why be rational?” At least when the word ‘rational’ is understood in the sense that is most common in philosophy, the question has a vaguely trivial air to it – rather like asking, “Why should one think as one should think?”
But how can that be true? Many of the theories of rationality defended by formal epistemologists and decision theorists, or by prominent recent authors on the topic (such as John Broome in Rationality through Reasoning), make it quite unclear whether “rationality” is a normative concept at all… So, how can the concept of “rationality” be a normative concept – a concept of the way in which one should think?
This is the problem that I aim to solve in my book. My solution, in broad outline, involves the following three theses:
1. The concept of “rationality” implies that rationality is a value – indeed, rationality is a virtue of thought. Rational thinking is a kind of good thinking, while irrationality is a kind of bad thinking. In general, the more irrational your thinking is, the worse – in a certain respect – it is.
2. Rationality is an internal value: The degree to which your thinking exemplifies the virtue of rationality depends purely on the internal mental states and mental events that are present in your mind at the relevant times.
3. To put it metaphorically, rational thinking has an “aim” – namely, thinking correctly (e.g., believing the truth, or choosing a course of action that is both feasible and choiceworthy).
As Hasan notes, the book is designed as the first instalment of a trilogy – to be followed by sequels dealing with rational belief and rational choice respectively. So, the focus in this book is on general features of the concept of rationality – features that apply equally to rational belief and to rational choice.
There is currently a flurry of writing on this general topic: My book was closely followed by Benjamin Kiesewetter’s The Normativity of Rationality and Errol Lord’s The Importance of Being Rational. One key difference between my book and these other writings is that most of them take notions that can be expressed by terms like ‘reasons’ or ‘ought’ as fundamental. My approach is different: I take values – ways of being good or bad, better or worse – to be the fundamental notions in this domain.
As Hasan notes, I do not take the idea that rationality “aims” at correctness to be literally true. I interpret this talk of an “aim” as a metaphor.
So, what is the cash value of this metaphor? My proposal is that “correctness” is also an evaluative concept. Correctness, according to this proposal, comes in degrees: Some mental attitudes and pieces of reasoning are only slightly incorrect, while others are much more badly incorrect.
Correctness is in a sense the primary dimension of assessment for mental attitudes and pieces of reasoning. Rationality is in a sense a secondary dimension: to say that it is “derivative” from correctness might be misleading – it might encourage the thought that rationality is not a distinctive evaluative standard in its own right – but rationality is not a fully independent standard. Instead, rationality must be understood in terms of its relations to correctness.
The general picture that I propose is this: If you are thinking irrationally, then your thinking is giving you bad news about how correct it is; and the more irrational your thinking is, the worse the news that it is giving you about how correct it is.
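Put semi-formally (this is only a gloss on the picture, not its official formulation): where P is the probability measure that your current mental states determine, and T and T′ are ways of thinking that are available to you,

E_P[degree of correctness of T] < E_P[degree of correctness of T′] whenever T is more irrational than T′.

That is, conditional on what is now present in your mind, the more irrational a way of thinking is, the lower its expected degree of correctness.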
There are many questions that can be raised about this picture. But I shall now turn to the questions that Hasan has raised about my book.
His first question concerns the value of correct belief:
However, as I said in my previous book The Nature of Normativity (pp. 157f.), we must distinguish between truth and correctness. Truth – at least in the sense in which the term is most commonly used in philosophy – is a property of items like propositions or sentence-types (relative to a context). On reflection, it seems clear there is nothing bad about false propositions or false sentences as such. It is extremely useful for us to consider false propositions – to explore their consequences, etc. Even the utterance of a false sentence can be fine – if the false sentence is uttered as the antecedent of a conditional, or in a question within the scope of ‘Is it the case that …?’
What is bad is not the false proposition itself, but believing the false proposition. – “Bad in what way?”, Hasan might ask. – Well, a belief in a false proposition is bad insofar as it is incorrect. There need not be any further badness or disvalue in having an incorrect belief. My claim is that “incorrectness” is itself the concept of a kind of flaw or defect.
In general, I have a more abstract notion of “value” than Hasan seems to appreciate. A value is something that can be expressed by an evaluative concept. The relevant evaluative concepts are distinguished by a certain kind of conceptual role that they essentially play in our thought:
a. These evaluative concepts rank alternative states of affairs;
b. This ranking plays a reasoning-guiding role, guiding us towards realizing the higher-ranked states of affairs, rather than the lower-ranked states of affairs.
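Compressed into a schema (nothing here is meant to go beyond (a) and (b)): a concept C is evaluative just in case C induces a ranking ≥_C over the relevant alternative states of affairs, and any thinker who possesses C is thereby at least disposed to be guided by her estimates of this ranking towards realizing the ≥_C-higher-ranked alternatives.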
The concept of “correctness” counts as evaluative by this test. It ranks the state of affairs of your having belief as your only attitude towards p as either more or less incorrect than the state of affairs of your having disbelief as your only attitude towards p; and your estimates of this ranking play a role in guiding your reasoning – specifically, in guiding the way in which you form and revise your beliefs.
In this way, the concept of “correctness” – as much when it is applied to beliefs as when it is applied to choices – counts as an evaluative concept.
“Correctness” differs from “rationality” in the following way. “Correctness” is an external evaluative concept (whether a mental state is correct typically depends on the external world). “Rationality” is an internal evaluative concept (whether a mental state is rational depends solely on its relations to what is going on inside the thinker’s mind).
In Chapter 7, I explain why we need the concept of an “internal” virtue of this sort: as I argue, such an internal virtue can “directly guide” our thinking – while external virtues can only guide our thinking indirectly, through our being guided by evidence of what the requirements of these external virtues are.
What is it for you to be “directly guided” by the requirements of rationality? As I unpack this notion, you have a disposition that is directly triggered by the fact that you are rationally required to φ; in manifesting this disposition, you directly respond to this trigger by φ-ing. So, this disposition is directly responsive to the fact that you are rationally required to φ – it does not respond to your knowing about this requirement (or even to your having the rather mysterious “conceptually unmediated awareness” of what grounds the requirement that Hasan suggests in his review).
When you manifest a rational disposition of this kind, I suggested, that is what it is for you to be thinking rationally. If your believing a proposition is the manifestation of such a rational disposition, then, as epistemologists say, your belief is not just propositionally but doxastically justified.
Hasan’s second question is whether this claim is vulnerable to counterexamples:
But is this “intuitive”? Why not say that Truelog’s chip has enhanced his logical acumen, giving him a kind of Ramanujan-like insight into logical truths? Why not say that Truepref’s chip enhances his power of introspecting his preferences?
The key issue, I believe, is whether the operations of this chip are integrated into the general processes whereby the thinker responds appropriately to rational requirements. If they are, then the chip is an artificial enhancement of the thinker’s rationality. If the operations of the chip function as an intrusion into the general processes of rational thought, then – but only then – Hasan’s verdict on the case is intuitive.
But of course, it is only in the former case that the operations of the chip count as the thinker’s manifesting a rational disposition according to my theory. So, as I see things, my view is not obviously vulnerable to counterexamples of this sort.
Hi there,
I have two questions for Ralph, one about his response to Hasan’s review and another, more general one about the book and how it relates to the Kolodny/Broome debate.
The first question concerns Hasan’s worry about the value of true or correct belief. As I understand Hasan, he wonders whether it is by itself valuable or good to have correct beliefs, in the ordinary sense of ‘good’, in which it means something like ‘desirable’. You reply that you have a “more abstract notion of ‘value’ than Hasan seems to appreciate”, and then you outline a conception of evaluative concepts which suggests that correctness is an evaluative notion. I would like to understand this reply better. Is your view that having correct beliefs is good in the ordinary sense, or do you merely hold that having correct beliefs is good in some other, more abstract or theoretical sense? If the former, I’m not sure I understand why one would need to appeal to a “more abstract notion of ‘value’” in defending the axiological assumption that it’s good when beliefs are true or correct. If the latter, maybe you could say a bit more about the relation between the ordinary notion and the abstract notion that you use.
My second question concerns the relation between your project and the debate between Kolodny, Broome, and others about the normativity of rationality. Kolodny and Broome seem to be concerned with a pretheoretical notion of rationality, which is anchored in ordinary judgments about what is rational or irrational (cf. Kolodny 2005, 515). In contrast, you seem to be mainly concerned with a theoretical notion: you qualify your central thesis (that rationality is normative) by saying that “when the term ‘rational’ is used in such branches of intellectual inquiry as formal epistemology and the theory of rational choice, it expresses a normative concept” (196). As is well-known, in such theories the term ‘rational’ often is used in highly idealized ways that do not necessarily reflect ordinary judgments of rationality (for example when the theories assume that rational agents must be logically omniscient). It seems to follow that your notion of rationality differs from Kolodny’s and Broome’s. At the same time, you clearly aim at answering Kolodny’s challenge and reject his skeptical conclusion. This is just an invitation to say a bit more about how your notion of rationality relates to Kolodny’s and Broome’s. Do you assume that your notion incorporates the ordinary one that Kolodny is talking about?
Thank you so much, Benjamin, for those extremely illuminating questions!
In my view, terms like ‘good’ and ‘rational’ are context-sensitive: there is a family of more-or-less closely related concepts, and the context in which these terms are used determines which of these concepts is expressed by the term in the context. In your question, you ask whether I am saying that correct beliefs are “good” in “the ordinary sense”. But in my view, there is no such thing as the ordinary sense of ‘good’. There are many such senses!
Still, every ordinary sense of ‘good’ expresses what I call a “value” – in the somewhat abstract sense of “value” that I described in my comment.
Now, admittedly, the terms ‘good’ and ‘bad’ are not used very often in English to describe beliefs. But the terms ‘right’ and ‘wrong’ are very frequently used to describe beliefs; and in my view, ‘right’ and ‘wrong’ are also evaluative terms.
In English, we say ‘do the right thing’. But in French the very same concept is expressed by saying ‘faire la bonne chose’, and in Italian by ‘fare la cosa giusta’ – and surely it’s indisputable that the French word ‘bon’ and the Italian word ‘giusto’ normally function as evaluative terms. (‘Right’ and ‘wrong’ have some fascinating features that differentiate them from other evaluative terms. But these special features don’t matter for my purposes.)
So, in my view, the perfectly ordinary senses of ‘correct’ and ‘incorrect’ that I am using in my account of rationality are just as much evaluative concepts as those expressed by ordinary uses of ‘right’ and ‘wrong’ in English, ‘bon’ in French, and ‘giusto’ in Italian.
Now, the term ‘rational’ is much less common in ordinary language than ‘good’ or ‘right’ or ‘wrong’. So I shall take your second question as inquiring about the relation between what e.g. John Broome means by ‘rational’ and what e.g. the formal epistemologists and decision theorists mean by ‘rational’.
In my account, I say that rationality comes in degrees. What is most fundamental, then, is the concept of one way of thinking’s being more rational (or less irrational) than another. When we say, simply, that a way of thinking is “rational” without further qualification, we are saying that it is at least as rational as the contextually salient standard (whatever that may be).
So, this is my hypothesis about how to answer your question:
1. When Broome says that a certain way of thinking is “not irrational”, he has a much less demanding standard in mind than the formal epistemologists do. He means that this way of thinking is no more irrational than this forgiving, easy-to-achieve standard permits.
2. When the formal epistemologists say that a certain way of thinking is “irrational”, they have a much more demanding standard in mind. They mean that this way of thinking is at least a bit more irrational than the standard of perfect rationality – which is the most rational sort of thinking that it is logically possible to exemplify.
This hypothesis makes the two ways of talking about “rationality” different from each other, but also closely related. Broome and the formal epistemologists are genuinely contributing to the same topic, even if they use the terminology a bit differently!
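In schematic terms (again, the notation is only a gloss on this hypothesis): in a context c, ‘this way of thinking T is rational’ is true just in case r(T) ≥ s_c, where r(T) is T’s degree of rationality and s_c is the standard that c makes salient. Broome’s contexts fix a low, forgiving value of s_c; the formal theorists’ contexts fix s_c at the maximal degree of rationality that it is logically possible to exemplify.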
Thanks Ralph, this helps me to understand your view better. I still wonder, however, what you have to say to someone who disagrees with you about the axiological assumption that there is something in itself good about having correct beliefs. Is this person conceptually confused, according to your view? On the face of it, she seems to have a perfectly intelligible view (even if she were in fact mistaken).
I also had two further questions about your reply to Hasan, if I may. You say that concepts are evaluative if they (a) rank alternative states of affairs, and (b) this ranking plays a reasoning-guiding role. My first question is how you avoid the result that merely descriptive concepts that constitute what are sometimes called good-making features, such as ‘pleasurable’, come out as evaluative concepts. Those concepts seem to pass the test as well, but they don’t seem to be evaluative. Second, criterion (b) seems in need of some qualification: just because some crazy person is in fact guided by some kind of ranking of states of affairs in terms of some comparative concept, it doesn’t seem to follow that this is an evaluative concept, let alone a positive one. For example, someone might rank alternative states of affairs along the dimension of population size and then be guided in such a way that she always intends to realize the state with the bigger population. Surely, we don’t want to say that therefore having a bigger population size is a good thing. So how do we qualify criterion (b)? (One option would be to qualify (b) normatively, by saying that the ranking ought to be playing a guiding role, or something of that sort. But I take it that this would be in tension with your view that evaluative concepts are the fundamental normative concepts.)
Thanks again, Benjamin! Yes, someone who rejects my view is misinterpreting the concepts that they are themselves using. ‘Correct’ and ‘right’ are evaluative terms in English (as is ‘richtig’ in German, etc.). It’s just a conceptual mistake to deny that.
However, just because someone makes a claim like “There’s nothing bad about having incorrect beliefs” or “There’s nothing good about having correct beliefs”, it doesn’t follow that they are really rejecting my view. Given that ‘good’ and ‘bad’ are such context-sensitive terms, there are certainly quite a number of true readings of those claims. (E.g. there certainly need not be anything financially disadvantageous or morally despicable about having an incorrect belief!)
As for your questions about my comment on Hasan, my response to your first question is that it is not essential to the concept “pleasurable” that it should play a reasoning-guiding role (Antisthenes the Cynic is supposed to have said, “I would rather go insane than experience pleasure!”). By contrast, it is essential to the concept of what is “correct” that it guides our reasoning: if a thinker had no tendency to avoid having beliefs that she judged to be “incorrect”, then that thinker would not count as even possessing the concept “correct” at all.
The same point answers your second question as well. When I said that the concept’s playing a “reasoning-guiding role” was part of the concept’s “conceptual role”, I meant the concept’s essential conceptual role – the role that the concept must be at least disposed to play in a thinker’s cognitive life if the thinker is to count as possessing the concept at all. (I’m sorry for not explaining this in more detail. I was presupposing the view of concepts that I defended in my first book, The Nature of Normativity.)
Thanks Ralph, this is again very helpful for me. One more question (and then I’ll be quiet). Can you say something about the use of ‘correct’ in “This is a correct chess move” or “This is the correct way to tie a knot”? Is it also a conceptual mistake to deny that it’s good to make correct moves in chess or to tie knots correctly, or do we express an entirely different concept if we use the term ‘correct’ in such contexts? If the latter, shouldn’t there be some kind of connection between correct beliefs and correct chess moves or correct ways of tying knots?
Sorry, I forgot to type in my name. This was me again, obviously.
Thanks, Ralph, for your very thoughtful response to my review, and to Benjamin Kiesewetter for following up on my question about the value of correctness. I had very similar concerns and the exchange has been illuminating so far.
I am still a bit puzzled, however, about the value of rationality. As I mentioned in the review, you say in the book that while you do accept that the standards of rationality are constitutive of the types of states to which these standards apply, you deny that this can explain why rationality matters; the “constitutivist approach” to normativity is not adequate (pp. 207-8). Why, then, should the fact that some concept plays an essential guiding role explain why thinking in ways that are so guided matters? Is your answer to “Why does rationality matter?” some kind of constitutivist approach after all?
Sorry — the above post is mine obviously. I forgot to type my name in as well. 😀
Thanks, again, for another very illuminating question, Anonymous / Benjamin!
My general story about ‘correct’ / ‘right’ and ‘incorrect’ / ‘wrong’ is that in every context these terms presuppose some scale of value, and ‘correct’ and ‘right’ basically mean optimal (the top of the scale), while ‘incorrect’ and ‘wrong’ mean suboptimal (inferior to the top of the scale).
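Schematically (a gloss, with v standing for the scale of value that the context presupposes, and y ranging over the contextually salient alternatives): ‘x is correct’ (‘right’) says that v(x) = max_y v(y), while ‘x is incorrect’ (‘wrong’) says that v(x) < max_y v(y).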
There is also a strong tendency to apply these terms only to a domain of alternatives in which there is a unique “right” alternative, and a significant difference in value between the “right” alternative and those that are “wrong”. This encourages the assumption that all that matters, according to the value that is presupposed in the context, is hitting on the “right” alternative and avoiding those that are “wrong”.
One difference between ‘right’ and ‘correct’ in English is that the use of ‘correct’ often seems to imply that there is something technical – something that can be demonstrated on the basis of some kind of specialized expertise – about whether or not an alternative is “correct”. (This is a special feature of the word ‘correct’ in English: in many other languages, there is no analogue of this.)
This means that ‘correct’ is going to be an especially appropriate term in contexts where the presupposed value is that of compliance with a set of social rules S — and particularly when the social rules in question are either purely instrumental (as with rules for tying knots in the most effective way) or else fundamentally conventional (as with the rules of etiquette or the rules of a game like chess).
Is this really a “value”? Well, I think so, yes. Compliance with social rules is good for whatever goal those rules are designed to serve. Our concept of what is “right according to these rules” also counts as an evaluative concept on my theory: it ranks alternatives, and is essentially reasoning-guiding.
Thanks, Ali, for your astute question!
There is a difference in philosophy between a suasive argument, which is a way of showing that its conclusion is true, and an explanatory argument, which provides an explanation of its conclusion.
By my lights, the essentially reasoning-guiding role of the concept of “rationality” is enough to show that rationality really is a value. However, we still need more of an explanation of how there can be a value like this. As I say on p. 200:
My point on pp. 207f. is that the constitutive nature of rationality cannot provide this sort of explanation, not that it does not help to establish that rationality is in fact a genuine value.
Thanks, Ralph! That does help clarify things for me.
I had one last question having to do with defeaters in deductive reasoning. Suppose that I have a highly reliable disposition to reason correctly, forming beliefs in line with truths of logic (perhaps like my case of Truelog but where, if you like, the chip’s work is fully integrated into my general reasoning processes). Though I am not sure of this, in the book you seem to take this as sufficient for my belief to be (highly) rational. But suppose that I also have very good, but misleading, empirical evidence to think that I’m not very reliable when it comes to such judgments. (E.g., perhaps, as Christensen (2010) imagines, I know that I have taken a drug that has an 80% chance of messing with my thinking.) Suppose I’m actually still highly reliable, and my reasoning in a particular case is a manifestation of this reliable disposition. Would the misleading empirical evidence I have not defeat the rationality of my belief in logical truths? And would there be a difference in the response to this case depending on whether the ideal/formal or non-ideal standard is used in the context? I should add that, as I am thinking of the case, the evidence I have is not (or not merely) that I am not “justified” or “rational”; the evidence can be put in terms of something like reliability rather than such epistemic terms. I know that you have discussed this in earlier work, though I don’t recall a direct discussion in the book (perhaps I missed it). Thanks!
Thanks, again, Ali! That’s another terrific question.
I do very much plan to take up the challenge that you raise in my next book, The Rationality of Belief.
The view that I’m going to defend is a version of probabilism: the system of levels of confidence (or “credences”) that it is ideally rational for a thinker to have at any time t must in a sense match whatever counts as the “rational probability” function for the thinker at that time – that is, the probability function that rationally should be guiding her at that time. And I will argue that logical truths will always have probability 1 according to this probability function.
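In outline (a bare schema; the details must wait for that book): it is ideally rational for the thinker to have the credence function cr_t at time t only if cr_t(p) = P_t(p) for every proposition p in its domain, where P_t is the rational probability function for her at t; and P_t(τ) = 1 for every logical truth τ.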
It follows that our propositional justification for logical truths is indefeasible. We may not be able to believe these truths in a doxastically justified manner, but nothing can remove their propositional justification.
So, I try to explain away the appearance of defeaters here. Yes, any normal rational thinker’s confidence in these logical truths will wobble a bit in the face of this misleading empirical evidence. But this is because the normal rational thinker is not perfectly rational. The propositional justification that the thinker has for the logical truth is as strong as propositional justification can get – probability 1 in the relevant probability function. Nothing can undermine that. But we’re imperfect creatures, and so we can’t expect ever to have doxastically justified credences that match the credences that we have propositional justification for!