I am pleased that PEA Soup will feature an exchange on Hanno Sauer's book Moral Judgments as Educated Intuitions. Regina Rini reviewed this book in the most recent issue of Ethics. You can find an open access version of that review here.
Now we hear from Sauer in reply. And of course, as always, all are welcome to join in the discussion, ask clarificatory questions, press concerns, etc. Looking forward to a fruitful and thoughtful exchange. Here now is Sauer:
Reason in Nature? A Response to Rini
I tend to be relaxed when people engage with my work. Still, book reviews make me nervous. That one paper you wrote may be flawed – even embarrassing, a dead end. But a whole book? It would be deeply unpleasant to find out that people thought years of your toil had been worthless. When I heard that a review of my Moral Judgments as Educated Intuitions was about to appear in Ethics, I got even more nervous. And the news that Regina Rini was its author really made me worry.
Not because I was afraid that my views would be misrepresented – and indeed, Rini’s review is a model of merciless charity and constructive criticism for which I am tremendously grateful – but because I knew that she would cut to the core of the issue and, rather than get lost in this or that nitpicky complaint, expose the crucial gaps and fundamentally jarring points in my argument.
Rini thinks that I am “too much a rationalist to be a naturalist, and too much a naturalist to be a rationalist”. This tension perfectly characterizes the thrust of my project, which was to take a look at the available empirical evidence regarding the etiology of moral judgment, and to go as rationalist as possible in interpreting it. In doing so, I tried to develop a response to the evidence suggesting a tight link between emotion and moral judgment that takes this evidence more or less at face value. The response is supposed to work even if all the evidence is solid. Actually, since I started working on the topic, a good deal of the data, in particular on incidental affect and moral cognition, essentially fell apart. But it remains plausible that emotions are, in one form or another, deeply involved in moral judgment. The question is how, and what the answer to it means.
Rini would like to see the tension between rationalism and naturalism resolved in favor of the naturalist side. Indeed, Rini thinks the move I am making is a “legerdemain” and a “sleight of hand”. I embrace the former characterization, but want to resist the latter. A trick is a trick; but a trick is legitimate when it actually works, rather than creating the mere illusion of working.
Here is one of my tricks: apparent moral judgments only count as genuine ones when there is a sufficient degree of rationalist contamination. This can be accomplished, I think, in a number of ways: firstly, if the judgment came about in a way that we could reflectively endorse if we could look behind the curtain of its genesis; or secondly, in cases where we couldn't so endorse it, if the subject would either give it up or provide further reasons for holding on to it. I think Rini underestimates the latter two conditions. What I mean to suggest is that an apparent moral judgment counts as a genuine one even if it did not in fact come about properly, as long as the judging subject reacts appropriately to this information, either by suspending or fortifying her judgment.
The problem with this, Rini argues, is that this account "impl[ies] that we almost never make moral judgments. If I knew the complete causal history of each one of my (apparent) moral judgments—every last turn in the highly contingent, bumptious story of how I came to be disposed to react to certain things in certain ways—and I were ideally rational, could I reflectively endorse all or even most of them? I doubt it. There's just too much randomness in there, too much sensitivity to morally arbitrary features of my upbringing and experience."
My response is part bullet-biting, part disagreement: I do think that we make genuine moral judgments much less often than we think, perhaps even only rarely. In my forthcoming Moral Thinking, Fast and Slow, I endorse this line even more explicitly. What I there refer to as rationalist pessimism is the view that the influence of reason on our moral judgments is real, but rare. A normative twist to this, which I also explore, is a form of elitism about moral judgment. Making proper moral judgments is formidably difficult precisely in virtue of the rationalist demands I articulate, so most people should refrain from it most of the time.
It seems clear to me that Rini is right to suggest that the origin of most, perhaps even all, of our moral judgments is "contingent [and] bumptious". But a process can be haphazard and fragmented without being therefore "morally arbitrary". At least some of the moral judgments we make pass the test. Inevitably, many won't. But this is a far cry from Rini's suspicion that an "[un]edited encounter[] with empirical reality" would reveal "my entire psychological history as one absurd accident, not something fit for rational endorsement". My implicit methodological Hegelianism immunizes me against this angst: if we look at our psychology with reason, it looks back at us with reason.
Many thanks to David Sobel and PEA Soup for hosting this debate and the participants below for their comments.
Thanks for these comments, Hanno. I'm looking forward to the discussion! I think a good place to start is my worry about how many potential moral judgments are disqualified by your rationalism. You point out that sometimes a judgment is saved even when it has an un-endorsable aetiology, if "the subject would either give it up or provide further reasons for holding on to it."
It looks to me like all of the following now count as moral judgments:
– Judgments with a causal aetiology we can reflectively endorse
– Judgments we would give up upon discovering a bad aetiology
– Judgments we would provide further reasons for upon discovering a bad aetiology
I think a huge amount of work is now being done by the last condition, and its significance turns upon how we understand 'providing' reasons. Does this just mean going through the motions of offering reasons (like Haidt claims his dumbfounded subjects do)? Or does it mean providing actual, legitimate, vindicating reasons?
If the former, then I think this account turns out not very rationalist. Virtually all potential moral judgments will pass this test, since people usually at least try to offer (flimsy, ad hoc, rationalizing) reasons when challenged on aetiology. If that’s all it takes, then I don’t think the account disqualifies many judgments, in which case I’m not sure it does any theoretically interesting work.
If the latter, then this is a seriously rationalist account, but it acquires some of the same problems as other rationalist views. For example, consider what is sometimes called the Reinhold/Sidgwick problem for Kant's theory. Briefly: Kant seems to say that one cannot autonomously choose to do evil, but if moral responsibility requires autonomy then it seems we cannot ever be morally responsible for doing evil. (See Courtney Fugate's 2015 paper in Euro J Phil for a helpful discussion.) There's something similar going on here. If I discover a bad aetiology behind my favorite moral judgment and I proceed to offer bad/non-vindicating reasons to go on holding it, then I don't count as having an incorrect moral judgment. Instead, I count as not making a moral judgment at all.
Doesn’t that seem weird? It looks now like a domain-identity question (is this judgment a moral judgment?) turns not (simply) on the subject matter of the judgment, but instead on normative assessment of the aetiology and justification of the judgment. I suppose we could make this move, but then how do we distinguish genuine-but-incorrect moral judgments from not-even-moral-judgments-at-all? Don’t those two categories collapse into one another?
Another way to put the same worry: I think naturalism needs to have somewhat relaxed domain-identity conditions. Otherwise it’s hard to get naturalistic inquiry started at all, since normative disagreements within the target domain (which ones are the vindicating moral reasons?) will ramify through to divergently identifying the inductive base for empirical study (i.e. if we disagree about which are the right moral reasons, we’ll end up studying different sets of genuinely-moral judgments). It’s not clear how we could get a scientific moral psychology going in that case (assuming science requires collective inquiry over a broadly shared set of observations).
I have some other worries about rationalist pessimism, but for the moment this is probably enough. What do you think?
Thanks for the follow-up! Two points up front: (1) I think this worry is entirely legitimate, and (2) I think that the "Kantian" way — the one that is vulnerable to the Reinhold/Sidgwick problem — is to be avoided.
As you mentioned in your review, I am interested in giving a metaethical account of what moral judgments are, rather than a substantive account of which moral judgments are good ones. But then tying the second-order question to a kind of reasoning-condition, as I do, makes this whole distinction a bit awkward to maneuver.
In general, I believe that moral judgment is a practice people engage in, and therefore can be (and frequently is) done badly, or in a way that means that people have completely misunderstood the game they think they’re playing. So there is *some* distinction here between proper moral judgments and pseudo-moral judgments, and that’s the distinction I am trying to capture, preferably in a way that will defuse the anti-rationalist flavor of the emotionist challenge. And the suggestion I make is to view our emotionally charged (or perhaps even grounded) moral beliefs as links in the default/challenge/response chain I describe in the first part of the book.
An account of the nature of moral judgment, and certainly one that aspires to be empirically convincing, should not entail that moral judgments are either justified by really, really good reasons or aren't moral judgments at all. So what I want to say is that judgments that have a fart-spray etiology aren't really moral judgments, at least if people would simply shrug off this information about the origin of their beliefs as if it didn't mean anything, or didn't require some sort of reactive correction. And the ones who do insist on their beliefs whilst refusing to offer any meaningful justification for their reactions of disapproval have therefore opted out.
But then if people do offer reasons in response to filthy-desk-style undermining challenges, I think those reasons can be pretty silly and they would still count as making genuine, though badly justified, moral judgments.
You think that in that case, my account ends up not being “very rationalist” — and I think that’s true. It’s only supposed to be moderately rationalist! But this is a lame rejoinder, so let me offer something more helpful: I think the reasons many people offer a lot of the time are pretty flimsy, but they are not flimsy in the sense of being morally irrelevant, they are flimsy in the sense of being not very well thought through, and they often don’t even apply to the case at hand (think dumbfounding again).
How many candidates for moral judgments are thereby disqualified? Many are, I think, though I am not sure how many. The rest may be a matter of degree, such that those who engage in justificatory practices more patiently, or more eagerly, or both, will count as making more moral judgment-like moral judgments than those who do not. And only those who do all that well — which I don't think are very many, thus leading to rationalist pessimism — count as making not just genuine, but also good moral judgments.
Many thanks to Regina Rini and Hanno Sauer for starting the debate about Hanno's book. I take it that we all agree that (apparent as well as genuine) moral judgment almost always consists in post hoc rationalization rather than step-by-step reasoning leading to a moral conclusion. I would like to point to one thing I find particularly interesting about Sauer's position. His book is entitled "Moral Judgments as Educated Intuitions" for a reason. As Rini makes clear in her excellent review, Sauer thinks that Haidt and many others are wrong because they are only interested in what people are actually capable of stating as reasons for their moral intuitions when pressed by psychologists in weird situations. Sauer argues that the test subjects' intuitions are often "educated" in various ways such that the reasons speaking for those moral intuitions are actually deeply engrained in the very intuitions. This view is a perfectly natural one to hold. After all, this is how humans develop. We use instructions to learn to do things, and then we forget the manual.
However, there are several problems with this view when it comes to ethics. Let me just mention two of these. One, what exactly is the difference between “educated” intuitions and what Richard Brandt—discussing the role of moral intuitions in reflective equilibrium—called “moral prejudices”? After all, prejudices are often “learned” or “educated.” Similarly discussing reflective equilibrium, Peter Singer worried that moral intuitions are “likely to derive from discarded religious systems, from warped views of sex and bodily functions, or from customs necessary for the survival of the group in social and economic circumstances that now lie in the distant past?” What is it that sets “educated” intuitions apart from those dubious moral intuitions?
Two, if we accept that moral intuitions are educated, and that this education engrains (good) moral reasons, does this have implications for who is to be considered a moral expert, i.e. for whose moral intuitions are “better educated”? I assume that moral philosophers usually have excellent moral education. However, having attended many ethics workshops and conferences, and having debated ethical issues hundreds of times with my peers, I doubt that they (and I), normally, have “better educated” intuitions, let alone better reasons, for their moral beliefs than laypersons do. Moral philosophers rarely change their views when their peers provide arguments against these views. They are much more likely to hold that the others just don’t fully understand their position. But if this is true, then moral intuitions do not seem to be “educated.” Rather, they seem to be prejudiced, just as Brandt suspected.
Thanks for getting such an interesting discussion going, Hanno and Regina! While I’m a rationalist about moral judgment as well, I want to briefly raise two worries for Hanno’s view.
(1) Genuinely Moral Judgment:
Like Regina, I worry that Hanno's constraints on moral judgment are too onerous. The "rationalist pessimism" Hanno endorses in response strikes me as not just biting the bullet but choking on it. It's one thing to claim that we rarely make *good* moral judgments; quite another to say we rarely make moral judgments at all. Quite the opposite, I'd say. The empirical evidence suggests that people constantly evaluate each other's behavior and rationalize their own via motivated reasoning and other forms of rationalization. If that's not moral (or more broadly, normative) judgment, I worry Hanno risks stipulating away a very interesting and pervasive aspect of human psychology that sure seems normative.
(2) Limits of Emotions:
Here's a different worry, and one where I may come off as even more rationalist. Hanno maintains that "emotions are, in one form or another, deeply involved in moral judgment," despite acknowledging that much of the key evidence "essentially fell apart." I don't see any reason to make such concessions. Not only do we lack evidence that incidental emotions substantially influence moral judgment; we have positive evidence that emotions are largely effects of moral judgment.
For example, one is less inclined to feel compassion for a person suffering if one judges that person to be blameworthy for that suffering. We have experimental evidence documenting this (e.g. Hector Betancourt's work), but just think about how differently liberals and conservatives *feel* about the suffering of Syrian refugees, not to mention the poor in their own country. Their emotional reactions (or lack thereof) are elicited by their prior moral judgments.
Similarly, we tend to feel more disgusted by certain people and practices that we regard as immoral. Paul Rozin’s work on “moralization” suggests that we’ve become more disgusted by smoking after deeming it a morally dubious habit and industry. And vegetarians are more likely to become disgusted by meat if they change their diet primarily for moral reasons, rather than for the health benefits.
Now, we needn't deny that emotions *can* sometimes influence moral judgment. But I believe the picture that's emerging from the sciences is that emotions influence moral cognition just like they influence other forms of cognition: by affecting attention, which influences reasoning (often unconscious inference). In other words, emotions influence moral judgment only by influencing our rational capacities. We don't get support for the distinctively sentimentalist claim that emotions play a special, foundational role in generating distinctively *moral* judgment.
In short, even as naturalists, I think we can avoid elitism *and* embrace hardcore rationalism!
Thanks for your question, Norbert! I think you point to a very real problem here. Let me clarify one thing first: what we should all agree on, I think, is that *reasoning* is almost always post hoc. But one of the main points of the book is that due to the migration of System II processes into System I, not all post hoc reasoning is rationalization. In fact the concept of education is supposed to help draw the distinction between proper post hoc reasoning and mere rationalization/confabulation. I think that may have been what you meant anyway. I just wanted to emphasize it again.
To your main question: on a descriptive level there is, I think, simply no difference between educated intuitions and learned prejudices. Learned prejudices are educated intuitions, because the learning mechanisms that shape our moral intuitions pick up on whatever is out there. It's GIGO all the way down! This is a point that has also recently been raised against others who have pursued a "moral learning" strategy, such as Railton or Nichols/Kumar/Lopez/Ayars/Chan. It all depends on what information is available in your environment. This information can be morally relevant but it can also be morally irrelevant or pernicious. In the latter case, your educated intuitions will be shaped by bad reasoning.
This is why I think moral progress is so important, which is the topic that I currently devote most attention to. One thing that happens over the course of moral progress is that more good moral information becomes available for moralizers to draw on in their learning. Improved moral concepts and more nuanced principles get deployed by the fellow moralizers that we receive our instructions from.
The Catch-22, of course, is that moral progress only happens when people’s moral judgments improve. I think that’s just the deep shit we are in, so to speak, and it means that moral progress, though possible, is very fragile and improbable, and regress always waits around the corner.
On your second question: I am the last person to extol the virtues of professional ethicists. However, while you are right that ethicists rarely change their minds in response to arguments, I think this fact is almost irrelevant. The education I describe just doesn't really happen at the level of the individual, or more precisely: only in barely visible ways. That's the argument of chapter 3: moral reasoning is adversarial. One reasoner is a biased rationalizer, but *two* reasoners form an interlocking structure that — in the good case at least — creates epistemic externalities a third (or fourth or …) reasoner can use to push the general frontier of moral knowledge further.
Hi Josh! Thanks for your two excellent points.
On (1): what I call rationalist pessimism is a descriptive view that says that it really happens that processes of reasoning — which I now think consist in an interaction between intuitive Type I, algorithmic Type II, and reflective Type III processing — influence our moral beliefs (that’s the rationalist part), but that this happens not as often as we think and/or hope (that’s the pessimist part). Elitism about moral judgment is a related but separate idea. It’s a normative consequence that I consider, but the desirability or feasibility of which I remain skeptical about myself.
On (2): you accuse me, in so many words, of being only a softcore rationalist — outrageous! As another softcore rationalist once said: I guarantee you, there's no problem. Depressing references aside, I agree with you that the evidence for what I call the sufficiency thesis is in dire straits. Incidental affect seems to have an effect on moral judgment only in the way you describe (directing attention etc.) or none at all.
I don't know why, but I find the claim that emotions are typically caused by moral judgment, rather than the other way around, difficult to accept. For one thing, I still think that the evidence regarding how emotional impairments (of fear, guilt, empathy — the usual stuff) lead to moral impairments shows at least something. Secondly, incidental disgust may not have a large influence on moral judgment (actually, if I'm right, disgust causes either pseudo-moral judgments or, if it operates in a reason-responsive way, genuine ones; see above), but it does seem to be influential at the trait level (disgust sensitivity, for instance). Third, the general functional profile of emotions such as empathy (high false negative rate) or disgust (high false positive rate) has a lot of explanatory power when it comes to people's moral beliefs.
But as I argue in the book, rationalists only need to be intimidated by this if they buy into the Platonism-for-the-people myth that emotions are this brutish, non-rational thing, which I don’t think they have to.
What do you think is the ultimate source of moral beliefs? And how can it be as dispassionate as beliefs about mundane facts are?
Thanks for the replies, Hanno! Some quick thoughts to help further the discussion:
On (1): Right, so I’m resisting your rationalist pessimism. But what say you about the pervasive role of normative/evaluative judgments in human life? Just think of the Knobe effect and related phenomena. Tacit moral evaluations seem to influence many other judgments we make. Or consider motivated reasoning. The evidence suggests that people guide their behavior by implicit evaluations of their options. They don’t just succumb to temptation by brute force. They rationalize (ante hoc, not post hoc) their choices in terms of implicit normative/evaluative beliefs. For example, we lie or cheat by (often unconsciously) thinking that we deserve more, that we did our due diligence, etc. If these aren’t moral judgments, then there’s a lot you’re leaving out, right?
On (2): Psychopaths and the like do suggest that a lack of guilt, empathy, and so on *early in development* leads to *some* problems with moral judgment. But I think the evidence is often overblown and ignores the deficits such individuals have with inference, learning, and attention (points Heidi Maibom has made well). Indeed, I think these cases reveal the difficulty of maintaining a sharp division between reason and emotion. But I think that’s a boon for hardcore rationalism. Emotions aid moral judgment by affecting rational capacities, such as inference, comprehension, and recognition. On this picture, emotions aren’t accorded any special role in *moral* judgment. They’re accorded the same role they play in any other form of judgment—quickly directing attention and the like.
The work on trait-disgust is suggestive but likely involves integral, not incidental, disgust. I’m happy to admit that emotions influence moral judgment when they affect one’s processing of moral reasons (whether or not they’re good reasons). But again I don’t think this accords a special role for emotions in distinctively moral judgment. Disgust might focus one’s attention on the heinousness of a sleazy politician’s scandal and cause one to condemn it, but the emotion’s influence on moral judgment is *via* reason, and in a way that we see outside of the moral domain.
So what’s the ultimate source of moral beliefs? Fair question! But, as a hardcore rationalist, I don’t think there is much of a special answer here. What’s the ultimate source of beliefs about geography? It’s the ordinary forming of beliefs (through inference, categorization, recognition, etc.) about geographical topics. Emotions might affect this process by directing attention and so on, but it’s not as though they are foundational or required for distinctively geographical judgment. The answer is the same for ethics, on my view. The ultimate source of moral beliefs is the ordinary forming of beliefs (through inference, categorization, recognition, etc.) about moral topics. Emotions might affect this process by directing attention and so on, but it’s not as though they are foundational or required for distinctively moral judgment.
Now, this isn’t the caricature of extreme rationalism, in which the typical person forms moral beliefs through *conscious* deliberation. But I think it’s pretty hardcore!
I do want to agree that we make moral judgments all the time. (I am not so sure about some of the examples you give, though; the Knobe effect, for instance, doesn't seem to be driven by moral evaluations as such — but never mind.) My divide-and-conquer move is merely supposed to (conveniently!) rule out those cases in which affective processes without any rational connection cause certain evaluative attitudes as cases of genuine moral judgment, regardless of how often that actually happens.
I also agree that the rational deficits of psychopathic individuals are routinely underemphasized, and go quite a long way in explaining many of their moral (and general agential) deficits. But the very marginal role you concede to the emotions (directing attention etc. when making moral judgments) seems too limited to me. Basically, I agree with the sentimentalist that if we didn't have feelings (and I don't here mean pain and suffering), there would be no such thing as morality. (Please don't tell anyone I said that.) Don't get me wrong: I think that's compatible with rationalism, and I also think that without rational capacities, there would be no such thing as morality.
I am intrigued by your geography analogy. Does your rationalism come with realist commitments?
Hi folks! Yes, thanks to Regina and Hanno for starting such a stimulating discussion here! There is clearly much to say, but I want to jump in with two questions that I have been struggling with in my own work and that I see coming up here:
1. I'm still wondering: what makes a view rationalist? I think this picks up on some of the points raised by both Regina and Josh about whether Hanno's view is really rationalist, and if so to what degree. I hear in the literature, including Hanno's book (which I'm just finishing!), a few different ways of answering the question. One answer concerns process: a rationalist view holds that moral judgments are (or should be) in large part guided by reflective/deliberative/reasoned (e.g. System 2) processes. Another answer concerns the nature or status of the role of reasoning/reflection in guiding moral judgments, and this is what I hear Josh suggesting: a rationalist holds that reasoning plays a special role in guiding moral judgments. A third would make a claim about the content of moral judgments: a rationalist holds that at least some moral judgments can be appropriately deemed rational (e.g. "more correct, more justified" [Sauer 2017, 129]). I take this point to be in contrast to Haidt-like, or emotivist, claims that moral judgments are affective, or socially motivated, reflexes that are not reason-tracking.
I have some follow up questions for each of the views, but I might pause here to give folks a chance to weigh in with thoughts about my question and the suggestions I’ve given as common answers.
2. Second, I'm wondering: what is the difference between "intuitions," "judgments," "evaluations," and "appraisals"? This is coming up for me as I listen to this back and forth between Josh and Hanno about the role of emotion in moral judgment formation (and I think it also piggy-backs on Norbert's question about "educated intuitions" vs. "moral prejudices"). Hanno suggests that moral judgments are largely shaped by emotions (a claim to which I am sympathetic). But Josh replies that, actually, emotions are largely shaped by moral judgments. This seems to produce a conflict: it cannot be true both that emotions primarily influence moral judgments and that moral judgments primarily influence emotions. I wonder, though, if intuitions can influence emotions, which can in turn influence moral judgments. Is there a necessary tension there?
Again, I’ll stop there with the general invitation to give a bit more of a lay of the land with the hopes of clearing up some of the disagreement!
Cheers!
Hi Asia, thanks for your questions! I really appreciate your interest in my book, and I am glad you liked the discussion so far.
1. I think there is no fact of the matter here. The way I see it, part of what's happening in coming up with a rationalist theory of moral judgment is articulating what it means for a theory to be rationalist. Then we look back at the theory, and see whether we would still find ourselves inclined to label it "rationalist". But you see that there is a great deal of unclarity about that, some of which is taken care of by introducing distinctions between softer and harder versions of rationalism and the like. I mean, it's not like there is nothing we can say about the issue: a view according to which moral judgments are always based on conscious inferences from general moral principles with no emotional involvement at all would perhaps strike us as clearly rationalist, whereas the view that moral judgments are always based on gut reactions of affective (dis)approval which are thoroughly impervious to reasoning would perhaps be classified as anti-rationalist. I think my view is at least somewhat rationalist in that I think that reasoning plays an important, and indeed essential, role in guiding moral judgment, even though I think the empirical evidence suggests that it does so in surprising ways, namely by educating our automatic intuitions and by shaping them further through socially exchanged challenges and responses which feed back into them.
2. I am not sure whether I know a good answer here apart from (trivial) explanations of how I, personally, happen to use these words (intuition etc.). I think everyone has to admit that the evidence regarding the direction of causation (moral judgment -> emotion vs. emotion -> moral judgment) is at least somewhat confusing and can be taken to suggest both. It could also be “patchy”, in the sense that both things happen on different occasions: sometimes, our moral judgment that foreigners don’t belong here will amplify our disgust, reduce our empathy, and make us angry; sometimes, our contagious suffering with animals will make us think that factory farming is wrong. I think either way, all of this stuff is embedded in (actually generationally extended) chains of reasoning that leave their mark on our moral beliefs, regardless of what proximally causes moral judgments most of the time.
Let me know if I completely failed to answer your questions! And sorry if I did …
Hi, Asia! Great issues to raise. For what it’s worth, I think I agree with Hanno. Ultimately not much hangs on the terms, but some uses certainly seem more clear and helpful than others for discussing certain debates. I think it just depends on what one’s interests are as a theorist.
Hanno, as for realism, I try to remain neutral on all that myself. I drew an analogy with geographical judgments, which might seem objective. But we could just as easily draw an analogy to inferences about topics that are clearly subjective (e.g. which kinds of ice cream I like). I think inference is inference whatever it’s inference about.
Thanks for the replies, Hanno and Josh! I agree, Hanno, that it is tricky to get a good grasp on the kinds of views that lie between firm rationalism and firm intuitionism. I think I’m interested in views that are right at the tipping point, or midpoint of the spectrum, so I appreciate yours! I do think, though, contra your suggestion, Josh, that there are stakes in the use of the terms. Given the long, and not coincidentally gendered, history of the reason/emotion hierarchy, I think there are social and political stakes involved in which view (rationalism, sentimentalism, intuitionism, etc.) is widely adopted and endorsed. (I’m currently working on unpacking those stakes as part of a book project.) Thank you both for weighing in on my question!