Join us to discuss Peter Railton's new Ethics article, "The Affective Dog and Its Rational Tale: Intuition and Attunement"! The article is available open access here. Bryce Huebner kicks off the discussion with a critical précis below the fold:

Peter Railton has done an exemplary job of integrating philosophical insight with data from the affective and cognitive sciences. His new paper is long and fairly programmatic, but it is also thick with ideas that will interest philosophers and scientists alike. Those who know his work will encounter a broadly familiar view. But the details of this paper are novel, and taking them seriously has the potential to transform the way we approach questions about moral cognition and practical agency.

The paper begins with a description of a pro bono defense attorney who has systematically dismantled the prosecution’s case and built an air-tight defense for her client. In a sense, everything has gone exactly as planned. But as she begins to recite her carefully constructed summation, her words feel hollow, and both the jury and her client start to check out. Something has gone wrong, but she isn't sure what it is or how to fix it. She pauses, and though she is mildly panicked and uncertain about what to do, she proceeds without reflecting too much on what she should say next—and as a result, she launches into an emotionally charged and compelling articulation of why the jury must acquit her client. The words come to her, at the right time, in the right way.

This scenario is meant to highlight a contrast between explicitly planned behavior and behavior driven by (largely tacit) practical competence. Railton has long argued that practically competent agents are able to make intelligent and accurate decisions without relying on conscious deliberation. Here, he has two aims: 1) to redeem this claim in the coin of context-sensitive, spontaneous, and affective mechanisms; and 2) to show that our moral intuitions are often reliable because these mechanisms "inform thought and action in flexible, experience-based, statistically sophisticated, and representationally complex ways—grounding us in, and attuning us to, reality" (p.41).

Railton’s story about moral intuition begins from the recognition that biological cognition requires sifting through and prioritizing a massive amount of potentially relevant information, doing so in a way that is sensitive to the difference between better and worse options, and updating our assumptions about what is better as our options change. Of course, only a small fraction of the information we encounter and process is relevant to our current and ongoing concerns, and its importance always depends on our current situation as well as on what else has happened recently. For most biologically significant purposes, conscious and deliberate thinking is too slow and computationally expensive to do the job that is required. So like all other animals, we often rely on affective systems that are sensitive to the distribution and value of rewards, the probability of gains and losses, and subjective estimates of risk and uncertainty. These mechanisms compute ‘predictions’ about what the world is like, and they motivate thought and behavior in line with these predictions; but they also update future predictions in ways that minimize discrepancies between ‘predicted’ and actual outcomes. Over time, where the structure of the world is fairly stable, these ‘predictions’ will yield accurate representations of the world. By way of error-driven learning, we become attuned to the distribution and value of the risks, rewards, and opportunities we are likely to encounter. Having built a neurally plausible model of these affective mechanisms, Railton appeals to data suggesting that we spontaneously engage with and attempt to understand the mental lives of others. And he argues that we rely on imaginative simulations (constrained by the affective systems just described) to plan for future action and test our options before we act.
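
The error-driven learning described here corresponds to the simplest discrepancy-minimizing update rule in the reinforcement-learning literature. The sketch below is illustrative only, not anything from Railton's paper; the learning rate and the stream of outcomes are invented:

```python
# A minimal delta-rule learner: keep a running 'prediction' of reward and
# nudge it toward each actual outcome. The prediction error plays the role
# of the 'teaching signal' in the story above.

def update(prediction, outcome, learning_rate=0.1):
    prediction_error = outcome - prediction              # actual vs. 'predicted'
    return prediction + learning_rate * prediction_error

prediction = 0.0
outcomes = [1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0]      # a hypothetical, fairly stable world
for outcome in outcomes:
    prediction = update(prediction, outcome)

print(round(prediction, 3))  # drifts toward the world's actual reward rate (0.75)
```

Where the structure of the world stays stable, repeated updates of this kind converge on an accurate estimate of it; on this picture, that convergence is what attunement amounts to.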

This brings us to the most novel part of the paper. Moral psychology experiments have uncovered a network of mentalizing systems, counterfactual modeling systems, and affective systems, which together seem to allow us to imaginatively engage with morally significant situations. Railton uses this fact to argue that the results of well-known experiments, which are supposed to show that our intuitions are unreliable, only reveal that these intuitions were produced in situations that deviate from the world to which they are attuned. For example, using an analogue to Joshua Knobe’s case of the chairman who doesn’t care about the environment, Railton argues that statistical learning systems are likely to track troubling and anti-social character traits, predicting that problematic actions are intentional because they are congruent with those troubling traits (in line with Chandra Sripada’s work on the ‘deep self’). He also addresses trolley cases and Haidt’s work on consensual incest (I don’t have the space to address his reinterpretations in detail, but see §§18-20 of the paper). In each case, he tries to show that behavior in such experiments reveals the operation of a well-tuned and relatively accurate moral sense. This is a surprising hypothesis, but I think he’s pretty close to right.

In slightly different ways, Fiery Cushman, Molly Crockett, Steven Quartz, and I have developed similar sorts of arguments. There is a great deal of evidence that moral cognition relies on a network of affective or evaluative systems that are attuned to the distribution of risks and rewards we encounter in the world. Understood correctly, these data do seem to reveal that moral ‘intuitions’ are produced by flexible and statistically sophisticated mechanisms that are sensitive to the regularities of our world. Nonetheless, I remain less optimistic about the operation of these systems than Railton seems to be. As he notes in passing (p.41), there's a dark side to the recognition that moral intuitions are produced by error-driven, discrepancy-minimizing learning algorithms.

Error-driven learning mechanisms seem to attune us to social norms and regularities. Specifically, there is evidence that the affective systems Railton discusses treat conformity with norms as intrinsically rewarding, and deviance from norms as errors to be corrected (Klucharev et al. 2009, 2011; Huebner forthcoming). This is important because we live in a world that's thick with structural racism, sexism, ableism, trans*phobia, and xenophobia. As we watch TV and film, read novels and blogs, and walk around familiar and unfamiliar neighborhoods, we are bombarded with a constant stream of 'evidence' that supports (or at the very least fails to contradict) our exclusionary biases. Railton is right that we are attuned to the world in which we live, and that practical competence and moral intuition are subserved by statistical learning systems that adjust their behavior when, and only when, things do not go as expected. But as I’ve argued in my own recent work, this is part of what makes it possible for our biases to become calcified in the practices that we rely on to do academic philosophy, to navigate interpersonal interactions, to make medical decisions, and more. Where we are attuned to biased practices, our ongoing behavior helps to entrench those practices, leading to more robustly biased structures to which our future attitudes will become attuned. Railton does argue that we could use a process like wide reflective equilibrium to weed out our problematic intuitions—but we should be apprehensive about the viability of this suggestion. When the vast majority of our intuitions are attuned to a messed-up world, we are likely to rely on problematic assumptions about what's right and what's wrong, as well as about what counts as evidence for and against our reflective hypotheses; and this problem will be even more robust where our biases have become calcified in the norms and practices that we rely on in reasoning about what to do next.
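
The worry can be made concrete with a toy simulation (mine, not Huebner's; all the numbers are invented): the very same discrepancy-minimizing rule that attunes a learner to real regularities will attune it just as faithfully to a biased stream of social 'evidence'.

```python
# Toy illustration of bias entrenchment: an error-driven learner exposed to
# a biased stream of social 'evidence' becomes attuned to the bias; nothing
# in the update rule itself can flag that the world it tracks is unjust.
import random

random.seed(0)
expectation = 0.5   # initial estimate of 'what people around here do'
bias = 0.8          # hypothetical: 80% of observed behavior fits the biased norm

for _ in range(5000):
    observed = 1.0 if random.random() < bias else 0.0
    expectation += 0.05 * (observed - expectation)  # conformity rewarded, deviance 'corrected'

print(round(expectation, 2))  # ~0.8: faithfully attuned to the biased world
```

And since the learner's own conforming behavior becomes part of the 'evidence' stream that other learners observe, the bias compounds across agents: this is the calcification described above.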

We need to find some way of getting our affective systems attuned to morally preferable values. As Railton notes, some people may have better attuned moral intuitions, and if so we would do well to cultivate skills that allow us to reliably find and rely upon these experts. But judgments about moral expertise, too, will depend on our assumptions and biases, which will be filtered through potentially distorted attunements. Even if our attunements are not distorted, however, we will need some way of figuring out that this is the case. As Marx famously notes, the educators must themselves also be educated—this is why I've been arguing that since social cognition depends on affective systems, ethics must be understood as revolutionary practice.

Huebner, B. (forthcoming). Implicit bias, reinforcement learning, and scaffolded moral cognition.

Klucharev, V., Hytönen, K., Rijpkema, M., Smidts, A., and Fernández, G. (2009). Reinforcement learning signal predicts social conformity. Neuron, 61, 140–151.

Klucharev, V., Munneke, M., Smidts, A., and Fernández, G. (2011). Downregulation of the posterior medial frontal cortex prevents social conformity. Journal of Neuroscience, 31, 11934–11940.

Thanks to Michael Brownstein and Eric Mandelbaum for helpful discussion on this post.

19 Replies to “Ethics discussions at PEA Soup: Peter Railton’s “The Affective Dog and Its Rational Tale: Intuition and Attunement,” with a critical précis by Bryce Huebner”

  1. There is much to like in Railton’s impressive and wide-ranging piece. His responses to psychologists and experimental philosophers, which seem to me to be somewhat independent of his theoretical framework, are particularly insightful. On a theoretical level, I agree with him that intuitions are affective and that they often provide defeasible justification for moral beliefs. But I have worries about two related points: what exactly intuitions are, and why they provide justification.
    So, what are intuitions? In what Railton calls the observational sense, intuitions are, roughly, spontaneous and compelling non-doxastic appearances that can directly guide action. This corresponds to the recently popular view of intuitions as quasi-perceptual seemings, a kind of experience that someone may have. Importantly, what distinguishes intuitions from straightforward perceptions is their subject matter: intuitions concern things we can’t perceive, such as something’s being “good or bad, appropriate or inappropriate… reasonable or excessive, beautiful or ugly, and so on”. The reason why we can’t perceive these properties is plausibly that they don’t stand in the appropriate causal relation to our experiences. As I said, something like this is now a common view, and I’ve endorsed it myself (see ‘A Humean Theory of Moral Intuition’, or HTMI for short).
    What is distinctive, and in my view problematic, about Railton’s view is that he relates intuitions as appearances to the notions of tacit competence and preconditions of conceptual thought. The latter of these, I think, can be quickly dismissed as a model for thinking about moral intuition. Moral intuitions, and philosophical intuitions in general, do have conceptual content: I have the intuition that it is wrong to push the fat man, for example. Whatever exactly a Kantian Anschauung is, it isn’t a propositionally contentful appearance. But this matters little, as what Railton calls the ‘classical model of intuition’ does little work in the argument. The notion of tacit competence, in contrast, is central to his account and response to critics of affective intuition.
    According to the tacit-competence-based model of intuition, intuitions can be manifestations of an underlying grasp of rules or generalizable capacities, and more broadly manifestations of a skill. Such competence is tacit, since one cannot, and need not be able to, articulate the underlying principles. Clearly, the sense that something is a good move in chess or that a sentence is ungrammatical or that the audience isn’t with us often fits this picture – our System 1 has been trained to give us reliable guidance about certain subject matters, often in the form of an affective response.
    There are two main reasons why I think this is a bad model for understanding the authority of moral intuitions. The first is that tacit-competence-based ‘intuitions’ (or TC-intuitions) are (at least in principle) dispensable. Deep Blue doesn’t need chess intuitions, and a linguist can articulate a principle for why a sentence is ungrammatical. An autistic person can in principle figure out how the jury feels while lacking the relevant intuitive mind-reading skills. TC-intuitions are just a convenient heuristic or shortcut. This is not (in general) the case for moral intuitions. We don’t have independent access to basic moral truths – that’s why we’re so interested in the epistemology of intuitions.
    The second key disanalogy between moral intuitions and TC-intuitions has to do with the acquisition of tacit competence. Skills are typically acquired by way of a feedback loop: we try something, which results either in a success or failure signal, and consequently modification of behavior. Or, as Railton puts it in the language of affective neuroscience:

    The firing rates and interaction patterns in these subsystems are updated through experience via “discrepancy-reduction” learning processes that continuously generate expectations, compare these expectations with actual outcomes, and use this information to produce a neural “teaching signal” that guides forward revision of expectations.

    To borrow (and simplify) Railton’s famous example, if you have the wrong kind of drink when thirsty, you’ll feel bad, and will try something different the next time, and keep doing so until you hit on something that does the job, which you’ll select again in the future. Through repeated experience, some things come to feel ‘right’ or ‘wrong’ – not morally right or wrong, mind you (see below), but rather the thing to do or the thing to choose, in more neutral language.
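    Schematically, the feedback loop described here is just trial-and-error value learning. A minimal sketch (my gloss, with invented payoffs and parameters, not anything from the paper):

    ```python
    # Try options, register the success/failure signal, and come to
    # re-select whatever 'does the job'. Payoffs are hypothetical.
    import random

    random.seed(1)
    payoffs = {"seawater": 0.0, "soda": 0.4, "water": 1.0}
    felt_value = {drink: 0.0 for drink in payoffs}

    for _ in range(300):
        if random.random() < 0.1:                 # occasionally try something different
            drink = random.choice(list(payoffs))
        else:                                     # otherwise do what 'feels right'
            drink = max(felt_value, key=felt_value.get)
        felt_value[drink] += 0.1 * (payoffs[drink] - felt_value[drink])

    print(max(felt_value, key=felt_value.get))    # 'water' comes to feel like the thing to choose
    ```
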
    My claim is that in the case of moral intuitions, there is no right sort of feedback, because we’re not in causal commerce with the normative properties (recall the point about the difference between intuition and perception). Suppose it seems to you that civilians hiding in a UN school deserve to be bombed, because they provide at least moral support for enemy soldiers. Sadly, acting on this intuition won’t result in unambiguous negative feedback – someone might of course be indignant with you, but they might be indignant with you even if you did the right thing, were they prejudiced or partial. If your sense of the best chess move is bad, you’ll lose a lot of games, but if your moral sense is off the rails, you might even end up winning more than losing. (And, as Bryce points out in his comment, I think, there’s a good chance that you’ll get positive feedback from your similarly biased peers.)
    A further reason to question the link is that TC-intuitions at least typically have a different phenomenology from moral intuitions. Think of the compellingness of moral intuitions: there’s no room for question in our sense that knowingly shelling a school full of refugee children is morally wrong. We feel that the case is closed. The sense that something is the right gift for a friend (to use another of Railton’s examples) isn’t like that. Nor is the flash of liking or disliking that Haidt talks about. (For my positive view, see HTMI.)
    If affective moral intuitions aren’t based on tacit competence in Railton’s sense – if they’re deeply disanalogous with social or linguistic or chess intuitions – does it follow that the sceptics are right instead? I don’t think so. This is not the place to defend an alternative view. But briefly, on a Humean picture, moral competence (if we want to keep the term) is a matter of adopting the ‘common point of view’ when one reacts to something with moral sentiments. Such competence is not a mere shortcut, nor can it be acquired by trial-and-error implicit learning mechanisms that track mind-independent moral facts.

  2. What an excellent article indeed! Philosophers and experimentalists will no doubt profit from reading it closely.
    I want to raise a general worry for the picture and how it’s motivated, though. The project is generally to discuss intuition, moral or otherwise. To my mind, the paper provides a strong case against those who think intuitions are always inflexible heuristics with “little understanding of logic and statistics.” But I take it there are also conclusions about moral intuition and moral judgment in particular. One of the ideas seems to be that the (broad) affective system plays a very important role in moral intuition and thus moral judgment (insofar as judgment is informed by intuition).
    Now, as noted in the piece, the affective system is something we share with many other animals, yet moral judgment is not. And much of the empirical evidence cited involves nonhuman animals. Much of the evidence regarding the affective system, and the paper’s focus generally, is on action, not judgment. Many aspects of action we share with other animals, as well as likes, dislikes, preferences, and various emotions. But this hasn’t got us to moral judgment, I’d think. I wonder then about how the affective system can illuminate what is distinctive about human moral judgment.
    What’s distinctive about moral judgment is certainly a contentious issue. (Mikhail, in his contribution to the Ethics symposium, offers part of an intriguing answer.) But I’d think many would agree moral judgment involves assigning a deontic (permissible/wrong), not merely evaluative (good/bad), status to the actions of others (including distant strangers).
    My worry is that the affective system is only crucial for a kind of evaluative assessment, which can inform even mere preferences—again, something we share with animals incapable of moral judgment. I’d think the heart of moral assessment takes the states closely tied to the affective system—our preferences, likes/dislikes, etc.—as input. But then perhaps what is distinctive of moral judgment can’t be elucidated by the affective system itself.
    Now, part of the point of the paper is that the affective system is flexible and can be trained by acquiring information about our circumstances and so on. So perhaps in humans it just develops the capacity for (deontic) moral assessment, given our complex social interactions. However, I’d think it’s not just the environment that makes the difference. We also have equipment in the brain that has given rise to moral cognition. (It’s presumably the kind of equipment that allows us to overcome the biases Bryce highlights, for example.) So such training must involve more than the affective system, and the key to understanding moral judgment would be in such distinct systems or processes.
    A solution might be to emphasize how broadly we need to characterize the affective system in order to account for the kinds of cognition distinctive of moral judgment. But then it looks like we’re going too far, failing to draw fruitful scientific divisions in the mind. This doesn’t mean the broad category of intuitions can’t still elucidate moral judgment, but we would have to jettison the idea that intuition is “alive and well, and living in the affective system” (859).

  3. I thought the article was great, too, and Bryce’s précis very interesting. I share his worries about relying on wide reflective equilibrium as a way of identifying and rejecting the problematic intuitions. We are certainly tracking something in the ways Peter describes, but, for reasons Antti has developed in his post, it is hard to see how what we are tracking in the moral cases could be mind-independent moral facts. For these reasons I was hesitant to accept Peter’s analysis of what was going on in some of the empirical research on moral intuitions that he discusses, particularly in the moral dumbfounding cases developed by Jonathan Haidt et al.
    The case Peter discusses is that of Julie & Mark (section 10): “Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that, was it OK for them to make love?”
    Haidt et al. discovered that people did not think it was OK, and that they had difficulty saying why once it was pointed out to them that the harms they thought would follow from incest had been ruled out in the scenario. People experience a flash of disgust at the thought of incest, and this underlies their reaction, which is resistant to evidence that the act was not harmful. This is the “moral dumbfounding,” and it seems to indicate that our affective system, or System I, is not very flexible and is impervious to the sorts of things that characterize System II — such as considerations of costs and benefits. Peter wants to challenge this, and part of the challenge is his take on the significance of the moral dumbfounding case.
    Peter is completely correct to note that the case itself is not well presented: it overlooks the fact that one thing that may be influencing people is that, whatever actually happened in the case of Julie and Mark, in the real world what they did was incredibly risky. They ran a significant risk of generating great harm, even if none actually materialized. And certainly this will influence people’s intuitions, even if they aren’t very good at articulating that worry. Haidt, however, could reformulate the case so that this is factored into the description: Julie and Mark are extremely unusual, and they were fully and reasonably aware that no risk of harm existed, etc. Even so, we would probably get a similar dumbfounding result. Of course, it could still be the case that what is influencing the reaction to the case is the fact that our intuitions are sensitive to what is normal, and abnormal cases will throw us off. But this is consistent with how System I is viewed — as working fairly well under normal circumstances, but not so well in novel ones.
    This is not to say that I think that there are no other problems with these cases. There have been some excellent articles written recently, one by Dan Jacobson, and another by Jeanette Kennett, both of which do a thorough job of critiquing Haidt on this issue. For example, one might argue that Julie and Mark are flouting relationship defining norms. However, I think that our affective responses in cases like this — in cases in which disgust is triggered — are not that flexible if what we are looking at is the question of flexibility in a given agent over a relatively short period of time. Over time, as Peter notes, people are capable of changing, and we have striking historical evidence of great changes in affective reactions. But for an individual subject to these responses it will take a good deal of time to bring about that change, which shows that at least some of these affective responses that are characteristic of moral judgment are not that flexible.

  4. Peter:
    I am inclined to endorse just about everything you say. For this reason, my questions are really just invitations for you to clarify and/or elaborate your views.
    *What, if anything, do your observations suggest about the role that intuitions do/should play in the setting of ends? How is our “tacit susceptibility to information about whether things are going well or ill,” given our ends, related to the experience as of being responsive to information about which ends we should have? Is the practical role of the emotions limited to helping an organism respond to “the challenges and opportunities [this organism] faces in meeting its needs and achieving its goals”? Or can we rely on these psychological states to reach conclusions about which goals are really worth achieving? What evidence shows that “well-tuned affective states can be fitting responses to value”?
    I think that these questions are closely related to Bryce’s worry about the way that prejudices can entrench themselves in our intuitions. As he notes, your answer to this worry is to be found, at least in part, in the last sentence of section X. (Since “our intuitions do not bear upon their sleeves the seal of validity,” “our reliance upon them should take place within an overall method of wide rather than narrow reflective equilibrium.”) Are there any rules of thumb you think we should follow in assessing “whatever evidence can be found about why we might have, or find plausible, certain intuitions rather than others”?
    I think that Bryce’s comments are also relevant to your remarks about the case of Julie and Mark. You suggest that in evaluating “the likely outcome” of what Julie and Mark did (its likely “risks, benefits, and costs”), we should consider our own “experience of familial relations.” But aren’t the major costs/harms of incest under the stipulated circumstances likely to be psychological and social? And aren’t these harms (viz., distress, social instability, etc.) parasitic on the very intuitions whose legitimacy is at issue?
    *What is the relationship between knowing that a particular response is called for and knowing how to respond? Does your characterization of “implicit practical knowledge” have any implications for discussions about the relationship between knowing that and knowing how?
    *On your account, what has gone wrong if someone has the sort of “susceptibility to information” you describe, and yet cannot make sense of her own responses? What distinguishes this case from the case in which someone cannot say why her response is the right one, and yet “feels” that it is? What is this feeling, if it is not simply the experience one has when one manifests the relevant susceptibility?
    *I am not sure I understand your analysis of the Boardroom and Goat cases. Is the idea that someone “expresses” ill will toward me when he is indifferent to the harm his action causes me; whereas someone “expresses” no good will toward me when he is indifferent to the good his action causes me? This seems right. But how do these assessments rely on a “model of the agent”? Can’t we make the relevant judgments without having any views about what the agent is generally like? Of course, it’s hard to see why someone would be indifferent to a certain consideration in one case if he were not also indifferent to this consideration in others. But is this point essential to distinguishing between the two Boardroom and the two Goat cases?
    *I am intrigued by the experiments you describe in section XI. I wish I better understood what the science tells us about the mechanism by which an animal’s goals prompt it to explore not-yet-encountered possibilities. (Does having a goal prompt the search for alternative means? Or is this just what having a goal comes to?) This is, I realize, a rather unfocused question. So, I’ll conclude by asking something very specific: When mice discover a new path to food in a maze, how do the scientists rule out the possibility that they are relying on their sense of smell (and so, are not really relying on “self-improved representations” of the maze which they worked out in their sleep)? (Do they blow air in the opposite direction of the critters’ noses?)

  5. Railton’s “The Affective Dog and Its Rational Tale” is an excellent and important essay with which I am in almost entire agreement. Since saying “yes” to everything he asserts wouldn’t aid the discussion much, I’m just going to ask Peter about the scope of the view of moral judgment he is proposing, and in so doing defend the account as I understand it from some of the criticisms raised above.
    The question of scope: Roughly, if a lawyer’s intuition that she is not “connecting” with the jury before her is a non-inferential cognition (or representation) of a psychosocial fact, which has its source in an evolved, adaptive, flexible means for assessing the minds of others, why not say that basic aversions (to bodily harm) and appetites (for food and sexual intercourse) are themselves non-inferential representations of what is bad or good for the animal who experiences them? Indeed, though Peter emphasizes that pain is bad in itself, part of its evolved function is to indicate (or represent) what is bad for the animal who experiences it so the animal in question can coordinate its efforts to heal the wound or avoid the aversive stimulus. (This is why bodily pain has an apparent location, intensity, etc.) The more general picture behind Peter’s work on intuition would seem to be the kind of explanatory realism he defended in earlier work on the metaphysics of value: we have the basic soma, emotions and discursive evaluations we do because they provided us with a better guide to what is good or bad for us than did other soma, emotions and discursive evaluations. (The rightness and wrongness of policies and institutions are the community level correlates of these facts.) Doesn’t this fall out of describing the system 1 processes and mechanisms as adaptations? Of course, the reliability of moral intuition and other basic (or non-inferential) representations of value is highly imperfect and must be corrected with reflection and reasoning, but this is also true of more theoretical (or less value-laden) experiences, intuitions and judgments: paradigm cases of sensory perception are largely reliable but must be corrected with reasoning and reflection if they’re to provide us with knowledge on scientific matters that bear little connection to our most basic needs. Is this right? Does Railton’s view of the adaptive history and consequent reliability of intuition rely on his realist metaphysics of value?
    Three criticisms:
    Criticism 1: The sense that Railton’s picture is a realist one seems to drive some of Antti Kauppinen’s worries above. I take it that the worry is that we cannot evaluate our intuitions for accuracy, truth or reliability, and that we would be able to evaluate them in this way if realism were a correct metaphysics. I agree with Kauppinen that when we evaluate the reliability of our moral intuitions we must appeal to soma, emotions, evaluations and other intuitions; we don’t have “independent” access to an “independent” moral reality that we can use as a check. But this is also true in the theoretical realm. To regurgitate a familiar example: we don’t have independent access to the colors we can use to evaluate the reliability of our color experiences and judgments. When we look back on the evolution of color vision and argue that we have the basic color experiences we do because they provided us with a better guide to the colors of things than did other possible sensory systems (i.e. the random mutations that did not persist) we are employing a set of color concepts and judgments derived from those very experiences. But at this level of inquiry, all reasonable epistemologies are coherentist. This does not provide a cogent argument against moral realism unless it’s being used to argue against a realist conception of all secondary properties and all aspects of the manifest image not reflected in the scientific image (or “fundamental physics” as people like to say). But then you’re cooking in van Inwagen’s kitchen. If you want to talk about particles arranged tablewise, we need to have a different discussion. In either event, I don’t see an argument against realism about value in particular.
    Criticism 2: So I guess I also disagree with the claim that “In the case of moral intuitions, there is no right sort of feedback, because we’re not in causal commerce with the normative properties.” If your moral sense is seriously “off the rails” and you fail to understand the consequences your actions have for the lives of others, or you understand these consequences but see nothing wrong with hurting people when it will advance your selfish interests, you will typically get negative psycho-social feedback. People will shout at you, refuse to be your friend, or even spend resources to hold you in captivity. Of course, if you’re powerful enough you can suppress these negative reactions or ignore them. But the powerful can equally resist the negative feedback we typically experience when we act on false value-neutral beliefs. The touchy king only hears what he wants to hear.
    Criticism 3: It also seems to me that Railton is right that there is a great deal of tacit competence we employ when formulating those moral intuitions we express in judgment and a great deal more tacit competence we employ when interacting with one another that never gets expressed in assertions or explicit judgments. (There are real points of disanalogy between moral and grammatical competence, but this isn’t one of them.) It’s an important task for moral philosophers (perhaps the most important task for moral philosophers) to articulate the tacit principles we assume when judging and evaluating each other in order to critically evaluate the principles they uncover. (Again, I agree that we can’t evaluate the principles we tacitly assume when forming our moral judgments by putting all our moral beliefs and intuitions to the side to then see whether the principles correctly represent the moral realm. Instead, we can and should use everything we know to separate the reliable heuristics and valid rules of evaluation from the unreliable heuristics and objectionable assumptions.) But mightn’t the moral competence Peter brings to light really just amount to social intelligence or skill at manipulating others? This gets to Sarah Buss’s worries that we can have intuitive knowledge of the means to our ends provided to us by an evolved, adaptive set of affective mechanisms, but the selection of ends is set by nature, enculturation, a radical existentialist choice or something similarly non-cognitive. This was Ryle’s view in The Concept of Mind, where he says that “moral knowledge” or “wisdom” really just refers to expertise at negotiating inter-personal relationships and so rejects Aristotle’s claim that there is a kind of wisdom (phronesis) that requires using this expertise for ends we judge just or good upon reflection. I wonder what Peter thinks about this. My own sense is that the kind of explanatory realism he defends in earlier work provides a coherent response to skepticism about phronesis. I admit that realism is no more forced on us than is Ryle’s skepticism. But that’s because neither view is forced on us by the data. (This is metaphysics after all!) In fact, I think a literal (some would say naïve) interpretation of evolutionary psychology and sociology supports the form of value realism that seems to be lurking behind the scenes here. We are scared of tigers and not bunnies because the former really are (were) dangerous (for us) and the latter not. (Those who do not have these affective dispositions are not well adapted.) We care about our children and the legislation and enforcement of laws that might provide them with an “organically” stable and peaceful society (i.e. a society whose stability does not rest upon widespread ignorance and violent suppression of dissent) because our children flourishing in this way is better than the alternative. (Those who do not have these goals are not well adapted.) Perhaps Peter gets off the metaphysics train before it arrives at this radically Hegelian form of moral realism. But if so, I would like to know where he disembarks.

  6. So many thoughtful and probing comments!–Many thanks Antti, Josh, Julia, Sarah, and Aaron. I’ll need to reflect a bit, and then gladly join in. Best, P

  7. This is a fascinating paper, and I’m very much in agreement with most of its central points. The synthesis of cognitive science and moral theory is extremely helpful and a great model for people writing in this area. (For those interested, another very good paper making a similar argument is Hanno Sauer, 2012, ‘Educated Intuitions: Automaticity and Rationality in Moral Judgment’, Philosophical Explorations 15(3): 255-275.)
    I’d like to add three somewhat disjoint comments about the paper. The first two are meant to be supportive of Railton’s position, in light of some challenges given above. The last is more critical, or at least more querying.
    First: What is the experience base for acquiring our implicit moral competence? Antti Kauppinen, above, raises the worry that in the moral domain (unlike in, say, chess) we do not get the right sort of feedback in training up our implicit moral cognition. That is, Antti says, we do not get “unambiguous negative feedback” when we make a moral mistake. But I am not certain this is right. Think of the sort of feedback young children get – from parents, teachers, peers – when they make moral mistakes. For elementary moral violations (e.g. hitting someone for no good reason, taking others’ things without asking, etc.) the feedback is indeed unambiguously negative. It’s plausible to think that our moral intuitions about complex and novel situations (such as bombing refugees) result from an implicit cognition properly trained up on a wide range of much simpler situations bearing certain similarity relations. So: we needn’t have been given unambiguous training on the complex situations themselves in order to have properly trained up intuitions about them.
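    One way to picture this (a toy sketch; the features, cases, and numbers are invented, not from the paper or the empirical literature): verdicts trained only on simple cases can extend to a novel case by similarity.

    ```python
    # Toy similarity-based generalization: feedback received only on simple
    # cases is extended to a novel case via its nearest neighbors.
    # Invented features: [harm caused, consent given, provocation]
    simple_cases = [
        ([1.0, 0.0, 0.0], "wrong"),  # hitting someone for no good reason
        ([1.0, 0.0, 0.8], "wrong"),  # hitting back, still punished
        ([0.0, 1.0, 0.0], "ok"),     # borrowing with permission
        ([1.0, 1.0, 0.0], "ok"),     # rough play both parties agreed to
    ]

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def verdict(novel, k=3):
        nearest = sorted(simple_cases, key=lambda case: distance(case[0], novel))[:k]
        labels = [label for _, label in nearest]
        return max(set(labels), key=labels.count)

    # A novel, more complex case: serious harm, little consent, mild provocation
    print(verdict([0.9, 0.1, 0.3]))  # "wrong", extrapolated from the simple cases
    ```
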
    Now I’m aware that this leaves out part of Antti’s challenge, having to do with the lack of “causal commerce with the normative properties”. But this is only a challenge if we suppose that there are causally efficacious moral properties. I can see why, given Railton’s other views, this is an interesting question to put to him (and Aaron Zimmerman expands the point). But I don’t see why someone holding the views Railton defends *in this particular paper* must claim that there are causally efficacious moral properties. As I understand it, the point of the paper is to defend against a simplistic conception of moral intuition as “automatic” (in the mindless conditioning sense). To do this, it is only necessary to show that there is a way in which our moral cognition can be implicit yet “spontaneous” and sophisticated in the way that Railton suggests. For this purpose, we needn’t assume that there are causally efficacious moral properties. We need only assume that morality is a social system comprised (in part) of non-arbitrary and broadly coherent rules – and that one can acquire greater or lesser competency in applying these rules. The thesis is then that our implicit moral competence applies the rules in a “spontaneous” (fluid, sophisticated) manner rather than an “automatic” (rigid, conditioned) way. And this can be explained, as in the last paragraph, by suggesting that our implicit moral competence is built up from unambiguous negative (and positive) feedback from others about decisions on a wide range of relatively simple moral choices.
    Second: How likely is it that our implicit moral competence contains mere prejudices of those who surround us? As Bryce points out (and as reinforced in comments by Antti and by Sarah Buss) if our moral competence results from anything like a statistical learning process, then there is nothing to stop us from internalizing harmful prejudices when our training comes from a prejudiced society. This isn’t (in itself) an objection to Railton’s account of moral cognition, so much as a reason to be very worried about our actual moral practice if Railton’s account is correct.
    Railton addresses a related problem briefly, by pointing to wide reflective equilibrium as a means of subjecting our intuitions to scrutiny. But (as Anthony Appiah wrote in Experiments in Ethics) when reflective equilibrium becomes wide enough, it just looks like a redescription of the problem of how to decide what to believe rather than any solution to it. How do we know whether or not to trust a particular intuition? If we have some independent means of assessing our moral intuitions, why not just use that independent means to make moral decisions, rather than using intuitions at all?
    I’d suggest that Railton’s account has an answer already built into it. Our grounds for distrusting some parts of implicit moral competence come from within implicit moral competence. To reprise Bryce’s example: I know to distrust my prejudiced cognition (once I become aware that it is prejudiced) because I find it intuitively objectionable to behave in a prejudiced way. I can intuitively disapprove of my own intuitions, to the extent that these reflect prejudice. In effect, what I am doing in reflective assessment is allowing myself to become aware of inconsistencies within my implicit moral competence, and trying to work out how best to resolve these inconsistencies. And (though this is super-speculative) it may even be that working out this resolution also relies on implicit moral competence – that is, I don’t need any independent means of resolution. It is the very basic elements of moral learning (share with others; don’t hurt people for no good reason), on which we have received early and continuous unambiguous feedback, that we fall back upon when more complex intuitions are in conflict. And (I think) these basic elements tell in favor of aiming to rid ourselves of prejudice.
    Third: What does the implicit competence model do to our conception of ourselves as moral agents? There is a tradition in contemporary ethics, associated with people like Christine Korsgaard and David Velleman (and plausibly traced at least to Kant) that sees making moral decisions as constitutive of our agency, or even constitutive of our selves. As Korsgaard has it: “we are responsible because we have a form of identity that is constituted by our chosen actions. We are responsible for our actions not because they are our products but because they are us, because we are what we do” (Self-Constitution, p. 130).
    In light of the implicit competence model, this raises the question of who (or what) chooses our actions. We like to believe that it is the conscious self – the ‘I’ that perceives and ruminates and experiences regret or pride – that makes choices. But the implicit competence model suggests that very many of our moral choices are in fact driven by principles of action to which we do not have conscious access. We might confabulate principled grounds for our actions – we might even correctly reconstruct implicit considerations after the fact – but in the moment of decision it is not usually true that the conscious self has access to the myriad factors driving the “felt” correct decision.
    To some philosophers this is unlikely to be very troubling (I don’t imagine Aristotle would be all that bothered, for one). But I do think it is a serious problem. To the extent that we regard human beings as morally distinct from animals, complex natural systems, or sophisticated machines, it is at least partly because of the distinctiveness of conscious human experience, and in particular the role of conscious deliberation in regulating our actions. There is something it is like to be a human, and our recognition of that something-it-is-like in others is critical for holding them morally accountable. (Strawson’s examples of entities from which we withhold the reactive attitudes are mostly those that lack conscious deliberation, or those in whom conscious experience has come unhinged from either reality or bodily action.)
    If the implicit competence model is right, then many if not all of our morally-valenced judgments and actions do not actually belong to us-qua-conscious-agents. Our bodies (including our verbal apparatus) execute spontaneously fluid actions, and our decisions reflect affectively-attuned intuitions – but the motive force for either is opaque to our conscious selves. The conscious self turns out to be mostly morally epiphenomenal. Could that be correct? If it is, how much of our moral self-conception must we revise?

  8. As I hear Railton’s wonderfully informative and compelling story about the role of the affect system in human life, one moral seems to be that we would be wise to treat our intuitions as sensitivities worth listening to—sensitivities that are unlikely to be sounding alarms when there is nothing around that could reasonably be taken to be a threat. On this model we should treat our intuitions as needing interpretation. What is the good point, exactly, in being not ok with sexual relationships between siblings? Our intuition that there is something not ok here, but no obvious harm to anyone, should lead us to wonder what the good point of our moral alarm was in such a case rather than to think we have a generally useful alarm that is sometimes glitchy. This attitude towards our intuitions could be contrasted with the idea that we should treat an intuition that p as pro tanto evidence that p. This latter use of intuition, it now might seem, risks slipping in unnoticed an interpretation of the point of the alarm. Assuming that the intuition that there is something wrong in the sibling sex case is itself completely indifferent to harm since all harm has been hypothesized away, for example, looks in retrospect like an insensitive interpretation of the good point of the alarm rather than the result of a glitchy mechanism. [I too, like Julia, was going to mention Dan Jacobson’s convincing work on just this sort of thing.]
    All that seems very compelling to me. But like Sarah and others I want to hear more about the type of ends of the agent that determine what counts as a reward and how socially malleable these are. If what counts as the relevant ends and rewards are too socially malleable (or too responsive to idiosyncratic ends), then the impressive updating and learning of our affective system would seem quite as capable of taking us away from whatever moral reality there is as it is capable of moving us towards it. I am going to guess that you would accept that an affective system such as ours (of the sort that other rational agents might have had) is in principle quite capable of nudging us away from moral reality but that contingent features of our evolutionary history shaped our affective system in ways that made it have, at least when well-deployed, a non-accidental tendency of nudging us towards moral reality. And I am assuming that you would want to cash out “well-deployed” in a way that does not just build in that conclusion on the cheap. Likely I should pause there and give you a chance to correct assumptions I have made.

  9. Thanks to Aaron and Regina for challenging my challenge to Railton (I’ve only met him briefly, so I don’t feel comfortable calling him “Peter” yet!). Let me try a few quick responses.
    First, on independent access. For a clear case of the kind of contrast I had in mind, consider a mechanic who acquires tacit competence that enables her to TC-intuit what is wrong with an engine on the basis of the sound it makes (I know some people like this). (Some might prefer to talk about perception here, but I don’t think it matters a great deal.) She might not be able to articulate her reasons beyond “Well, it just sounds like the mixture’s too rich”. But in principle, at least, there’s no need for anyone to have any such TC-intuitions. There’s a fact of the matter that can be discovered by observation and reasoning. There’s intuition-independent access to the facts, and TC-intuitions owe whatever authority they have to approximating the standard set by other methods.
    This still seems to me significantly disanalogous with the case of moral intuition. I’m not a skeptic about moral intuition, or the possibility of calibrating intuitions on a holistic basis (which doesn’t allow for dispensing with intuitions altogether). To be sure, I haven’t made the case that what I say is true of all TC-intuitions. But it certainly holds for the sort of TC-intuitions that are most prominently studied by empirical psychology, such as the TC-intuitions of nurses, firefighters, and investors. And it’s true of Railton’s lawyer’s TC-intuition that she’ll convince her audience more effectively if she shows emotion rather than cool argument. So: if all TC-intuitions are in principle dispensable, but moral intuitions aren’t (as a whole, even if individual intuitions are), then moral intuitions aren’t TC-intuitions.
    Second, on feedback. If your moral sense does not conform with the moral sense of those around you, you will indeed most likely get negative feedback from those around you. But the feedback is only a sign of conformity or nonconformity, not of right or wrong. There’s surely a vast gap between the two! This contrasts with the feedback a nurse gets if her TC-intuition tells her that a baby is well and she isn’t. When the baby’s temperature rises and she keeps crying, the nurse gets information that helps her recalibrate and correct her sense of when a baby is unwell. Disagreement with others, in contrast, doesn’t signal that I was mistaken to begin with. (This is also a worry I have with Regina’s suggestion that we think of morality as a social system of rules – I don’t deny that we can acquire TC-competence with social morality (in the same way as we do with etiquette), but I take it that Railton’s view is more ambitious.)

  10. Reply to Bryce’s summary
    First let me thank Bryce for his thoughtful précis of “Affective Dog”. The paper is, as he notes, programmatic, urging a more or less systematic view of the role of affect in moral thought and action, and trying to connect this with long-standing philosophical thinking about intuition and intuitions more generally. He points out that I’ve been urging consideration of the nature and role of non-deliberative processes and competencies in rationality for some time—indeed, as Aaron observes, I have posited tacit evaluative learning by individuals and groups through experience-based feedback ever since my earliest papers. What’s different is that here I am trying, as Bryce puts it, (1) to “redeem” these ideas by pointing to recent work in psychology and neuroscience, and (2) to argue that this work affords a natural and potentially vindicatory interpretation of intuition and intuitions. But it’s all, of course, very preliminary.
    Let me start with a few responses to Antti’s first post and to Josh.
    Antti doubts whether I have given a plausible account of moral intuitions—in effect challenging (1) by first challenging (2). In “Affective Dog” I try to give an observational characterization of intuition and intuitions, and then an underlying model in terms of tacit competencies. In the observational characterization, however, I’m not sure that intuitions are, as he puts it, “quasi-perceptual seemings, a kind of experience”, mostly because I’m not sure what a quasi-perceptual seeming is. One of the things that strikes me as especially interesting about recent work on brain architecture is that it affords a way of understanding how evaluation enters continuously into ordinary perception—the perceptual stream passes through affective areas that keep track of evaluative information, and thus is encoded with value by the time it reaches declarative cognition. For this to occur it needn’t be that the evaluative information is presented to the conscious mind as a perceptual feature of experience akin to a secondary quality—it is enough if the information shapes the attitude of the individual toward the perceptual content. For example, in ordinary perception of a scene, s, what emerges spontaneously from the affective coding of the perceptual stream might be (some degree of) confidence that s or trust in s, i.e., the “non-inferential” perceptual belief spoken of by philosophers. Think of the attitude as having the familiar form attitude[object], where the object is the perceptual content (whether this is propositional or not). This is, for example, the sort of attitude one can lose (and so not form the non-inferential perceptual belief that s, given a perceptual appearance as of s, or form such a belief only with low credence) when one’s affective system tacitly detects something anomalous in one’s world or internal state. On this picture, the attitude, credence or distrust, say, is not yet a judgment that the scene my perception presents me with is credible, but rather an antecedent “sense” or “feeling” with respect to s, one that typically yields such a judgment should I reflect on the question.
    So Antti is quite correct in saying that, on my view, intuitions involve unobservables like credibility, goodness, or appropriateness. But this does not mean that intuitions are distinguished, in my view, by having a special “subject matter: … things we can’t perceive”, as he suggests. An intuition that some state of affairs is good or inappropriate works in just this way. Think of the intuition as a sense of “good that s” or a feeling of “inappropriate that s”.
    Intuitions can thus have conceptual content—the affective system stores information in terms of projectable categories and associated expectations—even though the nature of an intuition is the nature of an attitude or feeling toward an object, not a judgment about or concept of an object. As Kant and Aristotle urge, intuitions must be capable of connecting experience with concepts, so what emerges from intuition must have introduced concepts, non-deliberatively. This relates to the question of phenomenology, as Antti notes. So I don’t exactly intuit “that it is wrong to push the fat man”, but rather have a spontaneous sense that pushing this man is somehow inappropriate or unacceptable.
    Linguists have long used something like this idea of intuition as an attitude toward an object. A given sentence might be given a “*” if it seems anomalous or unacceptable to a native speaker—but arriving at the judgment that it is ungrammatical (say, that the fault is syntactic rather than semantic or pragmatic) is a further step. Consider Winston Churchill’s famous one-line refutation of grammatical orthodoxy:
    *Ending a sentence with a preposition is something up with which we should not put.
    Churchill is counting on the fact that native speakers find this sentence anomalous at best. But it falls to the grammarian to judge what sort of fault it is. If a native speaker is unsure what to think of this sentence, the descriptive linguist will record not a “*” but a “?”. My intuitions about moral cases often are “compelling”, though not, as Antti puts it, such that I “feel that the case is closed” on the question of rightness or wrongness.
    About the process of intuiting, though, we might be in closer agreement than it appears at first. As Antti describes his view, Hume attributes to us a moral competence that is a matter of “adopting the ‘common point of view’” and reacting with “sentiment”. I think the tacit competence involved in morality has, at its core, a capacity for spontaneous empathic simulation of situations from multiple viewpoints—something that I think is actually going on continuously, tacitly, as we navigate the social world. And because the simulations are being run on our affective systems, they are more than passive models—instead, they spontaneously shape our attention, attitudes, motivations, and behaviors. Learning from trial-and-error experience is part of the development of these expectation-based models of situations, but dispositions to feel sentiments are equally central. And the developmental literature suggests that some of these empathic dispositions—to be distressed by others’ distress, for example, or (in Kiley Hamlin’s suggestive research) to prefer those who help others over those who harm them—are present in normal infants from very early on. It seems to me that these are responses to value-constituting facts that are independent of how we conceive them. Does this seem to be at all in the spirit of your view of intuition, Antti?
    I am very pleased that Josh finds “Affective Dog” provides a strong case against the view that intuitions are merely heuristic-based with “little understanding of logic and statistics”. At the same time, he’s not convinced that I’ve done much to illuminate the special character of moral intuitions, since the kind of affectively-grounded evaluative responses I attribute to humans are characteristic of animals as well. Indeed, much of the evidence I cite is from research on animals. Yet animals, we think, do not have moral intuitions. So the difference must lie elsewhere. For example, he points out that moral assessment involves deontic as well as evaluative categories, and thus goes beyond the sorts of affective mechanisms I describe, even if it takes such states as “input”.
    Josh is certainly right that I do little in this piece to demarcate the moral. Moreover, since I’m claiming that intuitions need not be concept-based, I cannot appeal to the involvement of moral concepts to do the distinguishing work.
My sense, reading the primate literature, is that we aren’t yet in a good position to say whether chimps or bonobos or macaques possess concepts or assess situations in ways we should distinguish as moral—at least some ethologists seem to think that primates have a grasp of concepts. And some think they have distinctive responses to situations of unfairness, for example (de Waal, if I understand his view). Even so, however, there would remain the question of what makes a response a response to unfairness as such, since affective responses like dispreference don’t suffice.
    Perhaps what we should be looking for in trying to distinguish the moral is an integrated set of phenomena—ability to discriminate situations in morally-relevant ways, disposition to respond to morally-relevant features with certain feelings such as guilt or resentment or benevolence, ability to understand permissions and prohibitions as well as benefits and harms, ability to reflect critically on one’s first-order responses, etc.—which could be present to a greater or lesser degree in a given individual or species.
Humans, at least as we find many of them in adulthood today, seem to have a lot of this stuff, and it seems to function in them in something like the integrated way we would hope to find in order to attribute bona fide moral cognition and agency. But I wouldn’t want to say that individuals with only a smaller portion of these abilities aren’t having moral responses to situations. Empathy, for example, equips the individual for impersonal sensitivity to benefits and burdens, and, if the individual has some disposition toward benevolence or care, for being influenced affectively by such considerations in morally-appropriate ways. A capacity for causal modeling of situations enables individuals to form projectable expectations. Quite young children have these capacities, and they might lie behind their reliability in distinguishing conventional vs. non-conventional rules—Nucci and Turiel have found that children as young as three appear to attend to whether the rule concerns something authority-independent, generalizable, and serious. Suppose that this discriminatory ability is driven by an empathic modeling of the kind described above—is that enough to create an entering wedge for the introduction of deontic moral judgment, once the more general concepts of permissible or prohibited are available?
Our civilization has developed the conceptual wherewithal to distinguish much more finely, and developed meta-representational abilities enable us, along with a great deal of social learning, to apply these concepts reflectively to our own thought, feeling, and action. On my view, as on the classical view of intuition in Aristotle or Kant, the role of intuition is to enable such higher-order conceptual capacities to engage with actual experience, with appropriate effects on thought, feeling, and action. Even reflection draws continuously on intuition to mediate its inferential transitions—that is the regress problem.
So Josh is right that I haven’t given an account of what would make for a bona fide moral agent, and that we cannot locate the whole of the difference in the individual’s affective responses. My hope was to give a plausible account of how intuition could play the complex roles demanded of it—attuning the individual (to paraphrase Aristotle) in the right way, at the right time, toward the right things, with the right attitude, etc.—in order to make something like bona fide moral agency possible. This seems, to me at least, not the picture of intuition that is commonest in the best-known “dual process” models of the mind (e.g., in Haidt, Greene, Kahneman, Bargh, and others), even though I do think there is significant empirical evidence for it—even taking a very disciplined view of what constitutes the broad affective system.
Thanks to both Antti and Josh for responses that have forced me to think harder about what I am trying to say—and why.

  11. I’m not quite sure where Peter stands on these sorts of issues, but I’m at least a bit troubled by the way that some people are trying to get traction on the view by appealing to a System-1 vs System-2 distinction.
One of the most interesting aspects of the recent work in computational neuroscience – which lurks behind Peter’s picture throughout this paper – is that it provides a strategy for replacing this sketch of a theory with something more biologically and computationally rigorous. I think there is good reason for Peter to avoid the details of the relevant models, as people are often scared off by the dynamical equations you need to make them work. But there has been quite a bit of good recent work on the algorithms that are operative in the production of moral judgments. I think that this work really helps to get to an answer to the sort of question that Josh is asking above. For those who are interested in how that picture works, in broad detail, I would strongly recommend Molly Crockett’s recent paper in Trends in Cognitive Sciences. It provides a nice and readable account of how to integrate affective representations with more distinctively moral representations, and it does so without going into the math.
I’ll do my best to sketch what I see as the most important part of the theory (and Peter has just hinted at it above). The affective and valuational systems that we share with other animals are always operating in parallel with systems that are dedicated to things like modeling counterfactuals, generating decision trees, and other stuff like that. Some of these systems are traditional Pavlovian learning systems. Some of them are learning systems that build up associations between actions and outcomes. But with any complex decision, multiple values will be computed in parallel, and they will be aggregated to generate an action plan or a decision. At least in human decision-making, it’s rarely the affective system working alone in the production of goal-directed behavior (though simple disgust reactions and simple aversions may be implemented exclusively by these systems). This being the case, propositional representations (if such there be) will always be infused with representations of value. As Peter noted above, our tacit competence with morality is unlikely to be implemented exclusively by the affective system, but every morally relevant hunch that we have is likely to be inherently affective and inherently valuational. (If you want to see how I think this works in the case of moral cognition, see the closing sections of my “Do Emotions Play a Constitutive Role in Moral Cognition?”).
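    To give a feel for how such parallel valuation-and-aggregation schemes are typically formalized, here is a minimal Python sketch. To be clear: the three system names, the weights, and the softmax choice rule are my own illustrative assumptions, not the particular algorithms in Crockett’s paper.

```python
import math
import random

def pavlovian_value(action, innate_cues):
    """Hard-wired appetitive or aversive response to stimulus features."""
    return innate_cues.get(action, 0.0)

def model_free_value(action, cached_q):
    """Habitual value cached from past action-outcome reinforcement."""
    return cached_q.get(action, 0.0)

def model_based_value(action, transition_model, outcome_utility):
    """Prospective value: weigh simulated outcomes by their probability."""
    return sum(p * outcome_utility[o] for o, p in transition_model[action].items())

def decide(actions, weights, innate_cues, cached_q, transition_model,
           outcome_utility, temperature=1.0):
    """Aggregate the parallel valuations, then choose an action by softmax."""
    values = {a: weights["pavlovian"] * pavlovian_value(a, innate_cues)
                 + weights["model_free"] * model_free_value(a, cached_q)
                 + weights["model_based"] * model_based_value(
                       a, transition_model, outcome_utility)
              for a in actions}
    softmax = {a: math.exp(v / temperature) for a, v in values.items()}
    choice = random.choices(list(softmax), weights=list(softmax.values()))[0]
    return choice, values
```

    The point of the sketch is just the architecture: no single system settles the decision, and the “hunch” the agent experiences corresponds to the aggregated value, not to any one system’s output.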
    So, now, I guess what I want to ask is whether the kind of tacit competence that Peter is talking about is really dispensable in principle, as Antti suggests? Maybe there is some sense in which it is, if we are talking about an industrial strength brand of metaphysical dispensability. But in general, apes like us are just going to be stuck with the systems that we have, and we are just going to be stuck with the judgments and motivations they produce. In thinking about ethics, I have hardline Spinozist and naturalistic leanings, and I think we need to start by understanding the patterns of causes that we find in the world around us, before we start to speculate about what is and is not possible. I guess that’s a minority position in philosophy, and probably more so in discussions of ethics and practical reasoning. But if the emerging consensus on these issues is in the right ballpark, I think that we are going to have to do quite a bit of re-thinking in the ethical domain.

  12. I’m very pleased to see Aaron, Regina (please feel free to call me Peter), David, and Bryce sticking up for some of the points made and processes of learning described in “Affective Dog”. Thanks, too, for the way you folks have sharpened the discussion of questions about reliability, non-instrumental learning, and the nature of value and agency. I’ll try to say a bit that might help.
    First, to the very central question of reliability and socialization. Bryce and others are right that statistical learning systems pick up on the patterns they experience—if these embody inequality, discrimination, xenophobia, etc., then this will be reflected in the attitudes acquired. Accommodating one’s own expectations and behaviors to existing social patterns of expectation and behavior will typically be rewarded in various ways, making us very suggestible creatures indeed. We all remember junior high school!
    But think a bit more about junior high school and like human social environments. Despite all the institutional reward structure and peer pressure, they don’t succeed in producing clone-like, well-behaved replicas that internalize predominant values and conform to predominant social norms—just ask any Vice Principal. Instead, they often exhibit the emergence of a multiplicity of different clusters of individuals, values, goals, aspirations, relationships, styles—the social nucleation of different ways of life, including some that directly challenge predominant norms. If we look at humans over time, the striking story is one of change, not stasis, even in very basic forms of social organization.
    Evolutionary types will emphasize here the importance of generating diversity within a population. For humans in particular, this takes social and cultural as well as genetic forms. Human adaptability to the most diverse environments, thanks to persistent technological and social innovation and an unusually high capacity for social learning, permits the rapid spread of changes in thought and practice that produce various kinds of benefit. We’re beginning to get some insight into how such a capacity for innovation arises neurologically—which might be rooted in the exploitation/exploration trade-off inherent in foraging generally.
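    The exploration/exploitation trade-off has a standard toy formalization in the bandit and foraging literatures; here is a minimal epsilon-greedy sketch in Python (the patch names and parameters are illustrative assumptions of mine, not anything from the neurological work alluded to).

```python
import random

def epsilon_greedy_forager(payoffs, epsilon=0.1, trials=1000):
    """Balance exploiting the best-known patch against exploring the others.

    payoffs: dict mapping each foraging option to a zero-argument function
    that returns a (noisy) reward when that option is sampled.
    """
    estimates = {option: 0.0 for option in payoffs}
    counts = {option: 0 for option in payoffs}
    for _ in range(trials):
        if random.random() < epsilon:        # explore: sample a random option
            choice = random.choice(list(payoffs))
        else:                                 # exploit: take the current best
            choice = max(estimates, key=estimates.get)
        reward = payoffs[choice]()
        counts[choice] += 1
        # Incremental mean: nudge the estimate toward the observed reward.
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
    return estimates

# For example, two "patches" with different average yields:
# epsilon_greedy_forager({"meadow": lambda: random.gauss(1.0, 0.5),
#                         "forest": lambda: random.gauss(1.5, 0.5)})
```

    Even this crude rule exhibits the relevant property: a learner following it never collapses entirely onto the currently best-looking option, which is what keeps innovation available.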
What’s exciting to me about the growing knowledge of implicit learning mechanisms is that it enables us to see how “bottom up” unlearning of prevalent values and norms is possible. I tried in the paper to give an example of this in the case of gay marriage in the US—once a large number of gay individuals courageously innovated by making their identities known, the lived experience of the rest of the population shifted in ways such that empathic learning and a remodeling of expectations (What are gay individuals and relationships like, and what would it be for there to be higher social acceptance and recognition?) could rapidly shift attitudes. Even attitudes thought to be anchored in “ancient”, basic, spontaneous disgust responses. (Does this help at all, Bryce, Antti, Julia, Sarah, Regina, and David? Does it clarify whether “Affective Dog” depends upon my background value realism, Aaron?)
    Reflective equilibrium is a critical counterweight to statistical learning, but it is important to see how new reflective equilibria typically emerge historically and socially thanks to initiatives taken “on the ground” by oppressed peoples, which change the experience and shift the sense of possibility of the rest of the population. The source of such initiatives can itself be allied to experiential learning—the discrepancy between what an oppressed individual is told to value and want, or constrained to do, by the dominant culture, and the unsatisfactoriness of her actual, lived experience. That’s roughly the same mechanism as the emergence of diverse values and ways of living in junior high school, if my memory is right. I believe, though it would take a lot more argumentation and evidence to say this with any confidence, that such processes can involve the exploration and acquisition of new intrinsic aims and values (“experiments in living” in the broad sense) as well as more effective instrumentalities toward existing aims (“experiments in living” in the narrow sense). Again, if my memory is right. (Does this help at all, Sarah and David?)
Experiential learning makes the acquisition of implicit bias a huge problem. It seems that virtually everyone in the population, even those in oppressed groups, picks up some attitudes of this kind. The empirical literature on overcoming implicit bias, however, emphasizes “bottom up” learning–the importance of lived experience in activities involving joint participation and joint goals. So experiential unlearning can be effective, if sufficient variation in practice exists to give it some evidence to work with.
    This is all by way of expressing agreement with Bryce about the importance in moral learning of revolutionizing practice.
    An adequate epistemology of intuition would look into the processes by which intuitions are acquired or produced. For example, in the trolley problem, my purely speculative suggestion is that differential empathy plays a role in generating the Switch/Footbridge asymmetry. We can test this by constructing scenarios in which empathy is more equalized (Bus) and seeing whether intuitions shift, or by looking for empirical evidence about the workings of empathy from social psychology (such as the work that suggests that perceived social distance affects trolley judgments), or neuroscience (such as work that suggests how various factors influence empathic processing—e.g., the studies suggesting that pictures of homeless individuals did not trigger facial recognition centers). As with perception in general, the more we know about the processes involved, the better we can assess what they are responding to.
    This is part of the answer to Antti’s worries about how reliability could be gauged, connected to Aaron’s and Regina’s helpful responses. We need not go in for the kind of value realism I advocate (which does indeed work hard to show how value properties can be causally effective) in order to sort this out. It is enough if we can say something definite about the natural properties that are morally-relevant, or value-constituting. That is, we can look to the supervenience basis.
    Assume that we’re on safe ground saying that effects on well-being are morally relevant, other things equal. So is impartial assessment, other things equal. If we can trace a given asymmetry in intuitive responses to an asymmetric empathic simulation of the effects on well-being (Footbridge, according to my speculation), then this gives us some reason to question the authority of the intuition in question.
Next step, think in a Humean spirit about things like behavioral dispositions, motivations, etc., that are more or less conducive to cooperation, reciprocity, beneficence, empathic imagination, sympathetic or caring responses, etc. We’re probably on safe ground saying that these are morally relevant, other things equal. So if intuitions in cases like Mark and Julie or Footbridge or Boardroom reflect the accurate tacit modeling of social situations in part in terms of such dispositions, motivational tendencies, etc. (tracking recklessness with others’ well-being, or unfeelingness and anti-sociality, etc.), then this gives them greater authority. Other things equal.
    Is intuition therefore dispensable in principle? Perhaps at the level of theoretical moral justification, but not at the level at which we endorse, appreciate, value, respect, debate, question, etc., theoretical principles or justifications, and place some over others. That is, not at the level of actual moral deliberation and practice. Once we realize that even when agents self-consciously engage in deliberation, even about high-order theoretical matters in ethics or science, they are continuously relying upon intuitive, affective, non-inferential, evaluative processes (again, the regress problem), then there is no question of our doing without intuition. As Kant said of intuition and deliberation, “Neither of these qualities or faculties is preferable to the other” when we consider the world from the standpoint of the practical or theoretical agent.
Agent? All this “bottom-up” talk, Railton! “Where’s the agent?”, Antti would ask. Why aren’t we inferential machines? There’s much to say here, but our picture of agency needs beefing up. Choice cannot make for meaningful agency if it is unguided and doesn’t reflect what matters to the agent. And unguided choice cannot of itself generate what matters to us. So: What is it for the affective, non-inferential, evaluative processes described in “Affective Dog” to shape my deliberation? It is for my credences, my values, my concerns, my hopes, my commitments, my fears, my desires, my doubts, my empathy, my relationships, my sense of identity, my discontents, my enjoyments, etc., to shape how I think and what I decide. Even when I am deciding about whether to question any element of what I believe, value, commit to, … .
My very much underdeveloped suggestion is: this is me doing the reflecting, thinking, and deciding. We humans are different from the familiar sorts of inferential machines in that, even if they have meta-representational (“reflective”) and selectional (“choice”) capacities, they do not have values, concerns, hopes, confidence, doubts, enjoyments, etc. All these sources of meaningful choice and life are absent. A machine can have these things (we are, after all, biological machines of some sort)—if we add to meta-representational and selectional capacities an engaged affective system. As Aristotle and Kant held, and Liz Anderson argued more recently, it is affective attitudes or sentiments that are the fitting, appreciative representations of value and disvalue. (This seconds Aaron’s point about representation.)
    Once again, thanks to all.

  13. Thanks for your thorough response, Peter! And apologies for being slow to respond in turn – I’m on a family vacation. I do think that our views are close to each other. My challenges are an attempt to put my finger on just where the difference might lie. (It would be nice if more traditional moral intuitionists who don’t regard intuitions as affective would jump in – but I guess they’re all on summer vacation.) One point on which it now seems to me we’re in agreement is the kind of content that affective intuitions have. What I say in my Humean intuition paper is that the content of the emotional appearances is perhaps best thought of as preconceptual. Although possibly below the level of articulation, the experience differentiates between different ways things might be, and is, I claim, sufficiently closely related to the corresponding proposition (such as thoughts of the form X is wrong) to provide defeasible justification for belief in it. I think something similar is true of perception – the content of the experience only approximates the propositional content of a corresponding belief, but may nevertheless rationalize it. I find it plausible that our conceptual repertoire in part shapes the content of the experience in both cases, which is another reason I resist calling it nonconceptual. (But what I said earlier was too strong.)
    I also think that our idea of what is involved in exercising moral competence is very similar. I like very much what you say about simulation and how it works (I’ve defended a similar picture in a different context in an earlier paper. I think now that I was really talking about moral competence, although I put the conclusion in different terms.) I suppose my objection, such as it is, is to understanding such competence on the model of tacit competence with empirical matters. I’m sure Bryce is right that our kind of ape cannot do without tacit competence. But being a redneck traditionalist reared on raw Wittgenstein, Kripke, and Husserl, I think thinking about mere possibilities can be revealing. That’s why if my earlier conjecture about in principle dispensability is correct (and it may not be), it suggests there’s something special about moral (and more generally evaluative and normative) intuitions.
In this context, it may also be worth noting that a lot of the experiences we describe as something feeling like the right thing to do are experiences of instrumental effectiveness – strictly speaking, they’re experiences of something being (maximally?) conducive to a goal one has. The content could be more precisely explicated in terms of “this will convince the jury” or “this will maximize the profit” or “this will lead to checkmate”. Such experiences are certainly not evaluative in the way that moral intuitions are, and are in my view best not described as intuitions about value in the first place. (This may relate to Josh’s earlier point.) They are, in contrast, very plausibly potential manifestations of tacit competence. So maybe this is another, related line of resistance – tacit competence concerns instrumental effectiveness, but moral competence has to do with the choice of ends themselves.
    Finally, the issue of trial-and-error learning. I agree that it is very plausible that change in attitudes towards gay marriage has happened roughly in the way you describe (although consistency-based arguments from universalist norms people already embrace may also have contributed to it). But I don’t quite see a moral error signal in the picture. As you describe it, many people started out with prejudices regarding what gay people and relationships are like, and how important social recognition is for them, and exposure to such individuals together with some degree of empathic identification helped correct them. But that’s to say people implicitly learned some empirical facts, which resulted in a change in moral intuitions that in part depended on false empirical assumptions. I don’t doubt that trial-and-error learning of this sort – which may happen entirely below the level of consciousness – is possible, but it falls short of implicit moral learning. Could we learn in a similar way that everyone’s well-being matters equally (for example)? That’s the key question I have for the approach.
    Again, thanks for your engagement, Peter, and thanks to Hille for inviting me to participate in the discussion! I realize I didn’t get to address all the points already made, but I may not be able to jump back in before next week.

  14. Thanks, Antti, for these thoughtful replies. (I suspect a lot of folks are on family vacation, so I’m doubly grateful for those who contributed to the discussion.)
    Let me try to say something briefly in response to your three concerns, all of which strike me as getting at central issues.
On in principle dispensability, we need to ask: dispensability for what? Kant has the picture that we can have a theoretical representation that an act is wrong simply by applying the test of the categorical imperative. Equally, though, we could have a theoretical representation that an act is wrong by applying the test of utility. Neither of these theoretical cognitions, however, tells us which, if either, merits our respect and following in practice. One can make the same point about theoretical reason. No axiom or set of rules tells us which axioms to believe or which rules to follow in our belief-guiding reasoning. In both cases, there is a normative element not captured in the content of representations or rules. As I read Wittgenstein, this is his point in discussing the “normativity” of logic in PI. And here is one place where the regress problem comes to the surface–we can’t supply an answer by supplying another axiom or rule. To get a mental state with normative *force* we need something other than a bare representation–even a representation of the form, “This act’s maxim would not be consistent with the norms of a community of rational beings”, or a rule of the form “The maxim of an act is not consistent with the norms of a community of rational beings –> The act is wrong”. Indeed, it won’t suffice even to add to such a representation or rule “action-guidingness” in the form of a mental program that leads to the execution of a maxim only after it passes the test. Only a disposition to follow the rule that originates from an *appreciation* of the value of humanity or a *respect* for one’s autonomy and that of others will do. What sort of mental state is apt for embodying such appreciation or respect? Kant tells us: an affective state that is attuned to this value, the “moral feeling”. So, for moral existence and understanding, affect and intuition are indeed indispensable–but not for determining whether a given act is permitted or not. According to Kant (or the utilitarian) this is something determinable by a fully articulable, dispassionately determinable standard. What Wittgenstein calls in the case of logic a “model” or “measuring stick”. So, yes, there is something distinctive about *normative regulation* of thought or action; and yes, this does involve intuition and affect (credence in the theoretical case). But whether pushing the man off the footbridge is permissible need not remain opaque to deliberative thought and articulation–we can spell out the grounds in terms that are independent of whether we accept the standard. That’s how such theories differ from certain “moral sense” theories. Does this help?
    About instrumental effectiveness vs. value–an assessment of instrumental effectiveness is incomplete even as a guide for rational instrumental action if it ignores direct costs, opportunity costs, relative effectiveness, beneficial or baleful side effects, etc. These features have valence, magnitude, urgency, importance, etc., and all that is evaluative. So when the neuroscientists analyze how actions are assessed and selected from a set of potential actions, they look for processes with the formal characteristics of value functions.
Finally, about error-based moral learning. Here Antti touches on a question that has, I think, been of concern to many of you. We can see how error-based learning can pick up factual information, or even instrumental evaluative information, but how could it ever learn that everyone’s well-being matters equally? We learn via empathy as well as first-person experience—e.g., learning, from the pain another suffers after reaching out casually to pluck a wild rose, not to do so ourselves. When empathy simulates this pain, it provides us with an error signal for casual plucking of wild roses. Similarly, when we see the pain in another’s eyes when we make a casual hurtful remark, we get an error signal for being glib about something that matters to someone else. Over time, what we learn is that the world is full of centers of feeling that work like our own mental life–pains and pleasures that differ in magnitude, acuteness, duration, depth, etc. And we learn that our actions, even unintentionally, can affect these centers in the same ways our center is affected. This gives us something like Nagel’s picture of a space of objective reasons, with no particular center. And it gives us Hume’s picture of the space of sentiment, corrected for perspectival bias. Equal mattering across individuals. Like the equal reality of objects near and far, and their actual vs. apparent sizes. Such an acquired “centerless” representational capacity is important not just for morality, but for all manner of practical activity. Of course, we have to begin such learning processes with a lot of equipment: causal modeling and expectation formation, empathy, imagination, hypothetical reasoning, generalization, vicarious projection, etc. But with this equipment, and experience, we can be shown the error of purely perspectival thinking, and cotton on to the reality upon which morality and science supervene.
    This is of course an overly simple, overly optimistic account–there’s a lot of competing information, noise, ideological distortion, etc. But the message can get through to some degree. The humane movement for animals, for example, gets its purchase this way. Not mere universal projection of principles–why to animals but not to plants or rocks? Learning and appreciating the error in thinking that our effects on animals don’t matter any more than our effects on plants or rocks.
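    To fix ideas about the error signal in the previous two paragraphs: formally, it works like an ordinary prediction-error (delta-rule) update in which the “outcome” is an empathically simulated feeling rather than a first-person one. A minimal sketch, with all quantities my own illustrative assumptions:

```python
def update_expectation(expected, observed, learning_rate=0.2):
    """Delta-rule update: shift the expectation by a fraction of the error."""
    error = observed - expected          # the (here, empathic) error signal
    return expected + learning_rate * error

# Expected affective impact of a casual hurtful remark: initially ~neutral.
expected = 0.0
# Repeatedly simulating the other's pain (coded as -1.0) drives the
# expectation down, and with it the value assigned to such remarks.
for _ in range(10):
    expected = update_expectation(expected, observed=-1.0)
print(round(expected, 3))  # approaches -1.0
```

    Nothing in the update rule cares whose pain supplies the error term; that indifference is one way of glossing the move from perspectival to “centerless” evaluation.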

  15. These are very helpful clarifications, Peter! I like the way you characterize the difference between Kantian and Humean positions on the indispensability of feeling, and have nothing further to add on that score. As the discussion is winding down, I feel almost like I’m imposing in adding a few brief responses – I’m not in any way expecting a further rejoinder.
On sense of instrumental effectiveness – yes, what I said was at least misleading. I’m sure sensitivity to all those considerations can feed into the sense that something is the thing to do. I guess there’s still a line of resistance open, though. It’s that at any time, we’ll have a number of goals. The lawyer wants to convince the jury, but also not to lose future clients, go bankrupt, or cause pain to other people. Having acquired competence in her job, she’ll be more or less attuned to the likely effects of her potential actions on the satisfaction of these desires. So some particular way of proceeding will appear as the one likeliest to convince the jury without offending future clients etc. – as the one expected to maximize overall desire-satisfaction. If this is right, the sense of aptness that issues from exercising tacit competence is still fundamentally instrumental. This might not be an accidental feature of an implicitly learnable skill.
    Finally, a few more words about empathic error signals. If I expected that no one would be hurt by my remark, and I empathize with the person who is stung by it, I do indeed get an error signal. But what I have trouble with is still whether the signal indicates a moral error. If I didn’t already think that other people’s feelings matter, how could I learn *that* by way of learning that someone unexpectedly feels bad? That is, even if it’s true (and it probably is) that we learn from empathy that “the world is full of centers of feeling that work like our own mental life–pains and pleasures that differ in magnitude, acuteness, duration, depth, etc.” and that our actions affect them, isn’t that still different from learning that it’s bad or wrong to cause pain (etc.) to those others, equally real though they may be? Or, to put it differently again, it is indeed a factual error to suppose that my subjective perspective is the only one (or that things only matter to me), but it’s a different kind of error to suppose that only my perspective matters or is a source of reasons for me (or anyone). But perhaps I fail to grasp something crucial here (it wouldn’t be the first time!).

  16. Sorry I haven’t been as active in this discussion as I would have liked. (I have been travelling and occupied with family, as I’m sure is the case for many others as well, as Peter guessed.) I’ve found Peter’s responses to the above comments illuminating. And I really agree with almost all of it. But I do still have some remaining questions about Peter’s responses to Bryce and Josh.
First, Bryce. There is, I think, an inherent conservatism in any theory of moral judgment that depicts folk intuition in a largely positive light. (Hence my reference to Hegel. I was thinking of F. H. Bradley too.) Radical ethicists who argue for, e.g., the moral permissibility of infanticide, or just “bite the bullet” when defending the radical consequences of simplistic forms of utilitarianism, often argue in favor of debunking explanations of particular-case moral intuitions. (“There’s nothing wrong with late-term abortion and so there’s nothing seriously wrong with early infanticide. You just think otherwise because babies are so cute and you have a biologically evolved aversion to the destruction of cute things.”) Singer often takes this kind of line. The most common strategy of response to it is to challenge the credentials of the a priori intuitions on which the critics of “common sense” morality rely. If the intuition that it’s wrong to kill an unwanted infant is suspect, what gives you such confidence in the principle of utility? The kind of revolutionary ethics that Bryce advocates has premises or ungrounded assumptions of its own. Or if everything is defended with reference to something else normative/evaluative, the justification this kind of ethicist offers will be explicitly coherentist and so lose its argumentative force when directed toward someone who insists on retaining the “common sense” scheme, which (we can conjecture) is just as internally coherent on the disputed claims.
This connects to my desire to have Peter more fully spell out how evaluative cognition relates to the value-neutral biological and sociological properties of the people, communities and institutions we evaluate. As I interpret them, the Hegelians try to argue for the reliability of folk moral cognition by depicting evaluation in general (and moral evaluation in particular) as the product of learning. Pain, disgust, instinctive fear etc. are supposed to give us a ton of pre-theoretical knowledge about our own physiologies: e.g. about what breaks them and what fixes them. The sense that a distribution is unfair (Esau is mad that Isaac gave everything to Jacob) or aversion to a hinderer and preference for a helper (as in Hamlin’s experiments mentioned above) are supposed to give us similar knowledge about our own families, communities, etc. Peter seems reluctant to say that his favorable depiction of folk moral intuition (and social cognition more generally) relies on this naturalist version of value realism on which evaluation is a kind of knowledge acquired through learning. But I’m still not sure. If our basic appetites and aversions do not provide us with knowledge of what is good and bad for us, and our basic social cognitions do not do something similar, why not go with Singer and try to bracket all such reactions so as to gain some a priori top-down intuition of true general moral principles that seem right to “reason” even if they directly contradict many of our single-case intuitions? Why not take his advice to reject the project of aiming at reflective equilibrium altogether? And if one thinks Singer’s epistemology is just as bad as one that gives strong weight to intuition, why not then become a moral skeptic if one can? To answer these questions, I suspect Peter must at some point try to vindicate intuition in the way I described, even if the metaphysics of value he invokes when describing intuition as reliable is not “forced on us” by the data and is instead a way of defending the rational coherence of a relatively conservative ethical world view (one that rejects infanticide, etc. on intuitive grounds).
    On Josh: I think Josh is right that more needs to be done by Peter to distinguish the various forms of intuitions at play in Peter’s article. Again, I do not agree with the criticism made above that moral intuition is not the product of tacit knowledge. But I do think (with many others of course) that if we are going to distinguish moral evaluation from other forms of evaluation we are going to have to appeal to relatively high-level properties that are probably only present in apes or similarly neurologically complex mammals. Of course, people “moralize” with their dogs; they blame them for their misdeeds etc. This is true too of parents frustrated with very young children. I suspect that criteria like authority-independence, knowingly causing harm, seriousness etc. are features that we think upon reflection must be present to warrant such blame. They are therefore dependent for their manifestation on second-order critical evaluation of first-order critical practices (like praise and blame). On this account, meta-cognition and explicitly linguistic practices (like argument, disagreement, punishment, and pleas for mercy) will play a role in distinguishing moral cognition from other forms of evaluation. Judging something immoral (or even experiencing it as such) will thus differ from judging it bad or even wrong in explanatorily significant ways.

  17. Thanks, Peter, for your detailed responses to our challenges. Like others, I’ve been traveling in the last week and am only now catching up on the discussion. If you don’t mind, I’d like to press you on my third point, regarding the relation between moral intuitions and conscious agency. I am interested in focusing on this because I agree with you on nearly everything else, yet this point is where I find most trouble in my own approach to these issues, and I’m curious what you will say.
    Let me start with an observation about the dialectical context around moral intuition. A lot of empirically-motivated attacks on moral intuition seem to run together two claims:
    (1) moral intuition results from cognitive processes that are simplistic, heuristic-driven, and/or otherwise primitive (roughly what you have called “automatic”)
    (2) moral intuition results from a cognitive process that is opaque to conscious introspection.
    Haidt’s early work, for instance, seems to be making both claims at once: the point of “moral dumbfounding” was that people’s moral judgments are driven by fairly primitive motives *and* that they are unaware of this influence. Similarly, Greene often criticizes deontological intuitions *both* because he thinks they are driven by primitive processes and because he thinks deontological moral philosophy is a “rationalization” of impulses we don’t consciously understand.
    Your paper gives a very compelling rebuttal to claim (1), and I’m in near complete agreement with what you say about this. But I think you leave claim (2) unscathed – and I think that claim (2) is at least as much a threat to the philosophical use of moral intuition as was claim (1). So let me explain why, and ask if you disagree.
    I’ll use a fanciful scenario to motivate my worry. Suppose that scientists discover that a certain wavelength of light has interesting properties. Like ultraviolet light, this wavelength is not consciously perceived by humans. Let’s call it “uberviolet light”. What’s interesting about uberviolet light is that it *does* interact with the rods and cones in human eyes. If I place an object that emits uberviolet light in front of you, electrochemical signals transduced in your optic nerve will differ from signals transduced when you are looking at an object that *appears* identical to you, but is not emitting uberviolet light. Further, neuroimaging shows different activation in the occipital lobe for viewing of uberviolet and non-uberviolet light, though you don’t consciously perceive any difference.
    That’s the background story. Now imagine we discover something further: you tend to like objects that emit uberviolet light. You don’t know this, of course, because you can’t consciously perceive uberviolet light. But if we put you in a laboratory setting and ask you to choose among various identical-looking objects, you will almost always prefer the ones that emit uberviolet light. In fact, we find that (like the subjects in Nisbett and Wilson’s famous paper) you confabulate reasons to prefer the uberviolet-emitting objects. And someone with sufficient data about you can show that this affects you even outside the lab. Someone following you around with an uberviolet light detector will observe you stopping to look at uberviolet objects in storefronts, or fixating on uberviolet light sources when you believe you are just staring off into space.
    We need to add a further stipulation here to make this thought experiment parallel your discussion of moral intuition. The way that uberviolet light affects your choices cannot be primitive, or “automatic” in the sense you’ve used that term. It’s not a blunt reflex reaction, where uberviolet light falling on your retina immediately triggers behavioral effects. Instead, it is something that you have *acquired* – something your brain picked up through statistical learning. Psychologists hypothesize that in childhood you may have had certain familiar objects (toys, food) that incidentally emitted uberviolet light. Your brain learned to associate the processing of uberviolet light with pleasure, and so you have developed a fondness for it. Importantly, this preference is sophisticated and adaptive. For instance, let’s say, you once read about a remote island in the South Pacific and though you’ve never even seen a picture of the place, you’ve ever since really wanted to visit it. As it turns out, the article you read mentioned the island’s many shoreline caverns. You know (tacitly) that peregrine falcons love shoreline caverns. And peregrine falcon feathers reflect uberviolet light. (Not that you consciously know *this* fact, nor do you consciously care much about peregrine falcons or even remember the shoreline caverns being mentioned in the article.) It seems then that your brain is pursuing an inferential, adaptive, and goal-directed process by motivating you to travel to the island, though you yourself are completely unaware of the goal.
I’ll add one last twist to the story, then come to my point. Suppose that some psychologist now wants to deliver interesting news about one of your hobbies, something that you are passionate about. Let’s imagine that it is classic cars – you really love looking at, sitting in, and listening to American cars from the 1950s and 60s. You go to classic car shows every summer and spend a lot of your money tuning up a ’68 Camaro parked in your driveway. Now, says this psychologist, here is an interesting fact. A lot of American car bodies from the 1950s and 60s were made with an alloy that incidentally reflects uberviolet light. But a few were made of a non-uberviolet alloy. And here’s the interesting bit. This psychologist followed you around at the latest classic car show with an uberviolet detector and she can document that you spent nearly all your time admiring the uberviolet cars, while ignoring the non-uberviolet cars that are more or less identical in all other respects. The psychologist says: I hypothesize that all the things you mention about your love of classic cars – their colors, their shapes, their purring engines – all of that is confabulation. Really you just go to be around a lot of uberviolet light.
That’s the story. Now here is the point. In this story, your brain has acquired a fairly sophisticated and adaptive form of behavioral motivation through statistical learning. Your motivation to go to certain places or look at certain objects is attuned to the likely presence of a particular property. It is not “automatic” in the mindless or purely conditioned sense. But it is still opaque to conscious introspection. You’ve never had any idea that many of the things you perceive as worth pursuing are unified by their reflection of uberviolet light. You could not possibly know how this factor has affected your deliberations.
    At this point we could ask several different questions. I am interested right now in this one: if we are trying to attribute agential-valuing properties to you, do we attribute them on the basis of what the conscious you purports to value, or on what psychology reveals to actually be driving your behavior? Do we still say that you dream of going to that South Pacific Island? Do we still say that you love classic cars? Or do we say that, really, what you dream of and love is uberviolet light?
    The latter seems, to me, just wrong – behaviorism gone mad. But some contemporary psychologists do talk this way (e.g. Timothy Wilson in Strangers to Ourselves). More importantly, it is at least consistent with what you say in your paper. Distinguishing between “automatic” and “spontaneous” processes is useful, but both are still opaque to conscious introspection. “Spontaneous” processes aren’t stupid – but they also aren’t what we normally think of as constituting human agency.
    This is all troublesome enough when we are talking about hobbies. But I think it gets much worse when we are talking about morality. I don’t want to write a lot more here, so I can’t motivate the particular problem fully. But it is related to a background conception of moral choice as playing a fundamental role in constituting human agents (something like Korsgaard’s view). Roughly, the claim is that we just aren’t agents at all unless we act in accordance with values we consciously evaluate and reflectively endorse. But if our moral intuitions are driven by processes opaque to introspection, like the effects of uberviolet light in my story, and if our attempts at conscious reflection are likely nothing more than confabulation or (at best) hypotheses about our own psychologies… well, it looks like this model of agency is just empirically untenable. The conscious self is morally epiphenomenal.
    (Some worries sort of like these appear in a paper by Jeanette Kennett and Cordelia Fine called “Will the Real Moral Judgment Please Stand Up?” and another paper by John Doris called “Skepticism About Persons”. I am working on my own paper on the issue right now.)
    I’ve gone on more than long enough for a blog comment! I hope it’s not too late in the discussion to ask for your thoughts on this point. I’m wondering if your “spontaneous” distinction can do more work than I’m giving it credit for? Or if you even agree that we should worry whether this model of conscious moral agency can be made empirically tenable?

  18. Thanks, Antti, Aaron, and Regina, for these comments—they really get at central problems, and help very much to focus the discussion. I particularly appreciate your willingness to keep pushing the discussion forward in this time of travel and family vacation. I’ll try to do justice to your questions, even if only briefly.
    Facts and values. Antti points out that instrumental learning still falls short of something like moral learning—even if it is properly said to be evaluative or normative (e.g., it involves not only noticing cause-and-effect relations, but also the attribution and balancing of decision weights). Instrumental evaluation, one might think, is always bounded by one’s goals, and thus there will be a fact about whether a given course of action conduces more or less well to one’s goals, relative to alternatives, which can in principle be learned by error-reduction feedback. This presupposes goals, taken as given, and does not provide an example of learning via feedback what goals to seek, or what really matters.
    Aaron also wants to hear more about evaluative learning, as I understand it, and its relation to what he calls learning of value-neutral biological and sociological properties. Earlier commentators had similar questions. And Regina has a neat thought experiment that raises questions about what is actually being tracked by our evaluative responses. So I’d better become more explicit about the sort of learning processes I am positing.
    Start off with an analogy with ordinary perception and causal learning. Here’s the challenge: what emerges from experience is not strictly given in experience, namely, some degree of confidence that certain relations are projectable. All our experience is of particular episodes in the past, and yet we emerge not only with beliefs about what will be the case in the future, but also beliefs about how well justified or reliable these beliefs are, and corresponding dispositions to rely upon what we believe in inference and action. What makes this possible, and what does intuition have to do with it? Thanks no doubt to a long evolutionary history, humans seem endowed with various perceptual faculties and spontaneous dispositions to form beliefs on the basis of inputs from these faculties. These beliefs always outstrip our evidence, so what accounts for our sense that they are more or less justified? Or that some evidence is stronger than another? Or our degree of willingness to rely upon them? These attitudes have normative content. Why say that acquiring these attitudes through experience is learning, rather than just change in attitude? Let’s go in steps.
Step 1. To borrow Regina’s example: imagine that the actual mechanism underlying these changes in attitude is simply a linear response to the amount of uberviolet light our retina receives—we’ll acquire a spontaneous degree of belief that a certain state of affairs is projectable, or a non-deliberative (“intuitive”) sense of being justified in our belief, just insofar as our perception of that very state is the result of uberviolet radiation. It isn’t that uberviolet light is produced by some process nomically connected with projectability—nor that we are carrying out a kind of induction (if we get a good dose of uberviolet light from an S-ish situation, we spontaneously believe that S-ish situations are projectable, regardless of past experience of S-ish situations). By my lights, this would not count as bona fide causal learning, even if it were indistinguishable to the individual believer from such learning, and even if it resulted in some beliefs about projectability that happen to be true.
    Step 2. Suppose instead that the spontaneous degrees of belief about projectability or non-deliberative (“intuitive”) sense of conviction we acquire were simply the result of a fixed set of biologically-determined dispositions to believe what we are told, combined with our actual cultural conditioning, with no mechanism for detecting or revising in light of anomalies (e.g., failure of a given projection to be borne out in experience, or failures of a supposedly justified belief-forming scheme to result in beliefs borne out in experience). In other words, no mechanism of discrepancy-based revision of the “priors” given by biology and culture. This, too, would not count for me as bona fide causal learning, even if it happened to result in beliefs that are true. Keep the biology and cultural conditioning constant and change the rest of the environment in random ways, and the beliefs that come out would be the same. Our perceptual belief formation and higher-order epistemic attitudes would not be tracking the content they represent.
Step 3. But now suppose that our projection of color properties from past samples to future circumstances is the result of experiential interaction with the *color-constituting but “colorless” causal properties* of these samples, and with *justification-conferring but non-normative facts* about belief-forming systems. And suppose that the particular “priors” given by our in-born dispositions to form beliefs spontaneously in response to experience or cultural instruction were subject to discrepancy-reduction learning on the basis of actual sensory input. In such a case, other things equal, the relative dependency upon the initial “priors” in what we believe will tend to wash out with increasing experience, and first-order beliefs about color will tend to approximate more closely actual color facts, while the higher-order beliefs about justification or justified methods will tend to approximate more closely actual facts about reliability. Then the fact that the first-order beliefs are responses to color-constituting but “colorless” causal features (like spectral emission) that depend upon our particular nature and culture (and how our visual system differs from that of animals capable only of gray-scale vision, and how our capacity to make color discriminations is partially shaped by the availability of a given color vocabulary) does not, to my mind, disqualify them as examples of genuine learning. Similarly, the dependence of our higher-order beliefs upon experience of justification-conferring but non-normative facts (like the instrumental reliability of certain belief-forming practices) or culture (like the availability of a certain normative vocabulary) does not, to my mind, disqualify such higher-order beliefs as genuine learning about when one is or is not justified in one’s beliefs. Color need not be entirely “in the world” independently of the features of perceivers, and justification might not be a “fact of immediate experience”, and yet our experience of the world can result in knowledge about color, about when color attributions are justified, and so on.
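    The “washing out” of priors in Step 3 is just the familiar behavior of discrepancy-reduction or Bayesian updating. Here is a minimal Beta-Bernoulli sketch (the two priors and the “true rate” are illustrative assumptions of mine):

```python
def beta_posterior_mean(prior_a, prior_b, successes, failures):
    """Posterior mean of a Beta-Bernoulli model: prior counts plus data counts."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Two very different in-born or culturally given priors about how often
# S-ish situations are followed by outcome O (true rate here: 0.7).
optimist = (8.0, 2.0)    # prior mean 0.8
pessimist = (2.0, 8.0)   # prior mean 0.2

for n in (0, 10, 100, 1000):
    s = int(0.7 * n)                 # idealized observations at the true rate
    f = n - s
    print(n,
          round(beta_posterior_mean(*optimist, s, f), 3),
          round(beta_posterior_mean(*pessimist, s, f), 3))
# With increasing experience both posterior means converge toward 0.7:
# the relative dependence on the initial priors washes out.
```

    The same qualitative behavior holds for the delta-rule learning the affective literature appeals to; the Beta-Bernoulli form is chosen only because the role of the prior is explicit in it.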
Step 4. Suppose we have other spontaneous attitude-forming dispositions—like Hamlin’s infants spontaneously preferring helpers to harmers, or Turiel’s young children spontaneously distinguishing some norms as moral vs. conventional, based upon such factors as seriousness, involvement of harm, etc. And alongside these we have other non-deliberative attitude-forming dispositions—like spontaneously losing confidence in methods or sources that result in disappointed expectations, or spontaneously empathically simulating the experiences and attitudes of others, or of ourselves at projected future times, where these simulations themselves accrue greater or lesser trust based upon whether they result in expectations that are borne out experientially. A child’s spontaneous formation of a negative attitude toward any individual who causes harm, resulting in expectations of future harming behavior, will, as psychologists have shown, tend over time to become more discriminating—e.g., integrating with causal information to distinguish intentional from unintentional harm. Similarly if an infant begins (as Bryce’s sources suggest) with a spontaneous preference for those who behave conventionally, but then this fails to predict other preferred outcomes such as helping, refraining from harm, showing understanding and affection, etc. Slowly, and certainly with the help of learning moral concepts, infants begin to map the world in moral as well as conventional, or personal, or prudential, etc. ways. As Hume pointed out, our ability to understand and enjoy literature and drama depends upon this kind of spontaneous evaluative attitude formation which, as he puts it, is corrected over time by experience for the distorting effects of personal interests, relations, etc. In understanding and appreciating literature and drama, we exhibit a capacity for concern that is quite “disinterested” in terms of personal gains or losses. In these ways, we become more reliable judges of the *impartial value* of traits of character, practices, actions, etc. This does not suffice to make us *behave* morally, but it engages our positive sentiments in ways that can be mobilized for moral behavior—e.g., impartial resentment of injustices, impartial approval of individuals who act well morally (and not merely conventionally), etc.
    So, although these processes of learning depend upon non-experiential “priors” (as all learning processes do) and are mediated by responsiveness to *moral value-constituting but “non-evaluative” or “non-normative” features* of individuals, practices, etc., and even though an important role is played by culturally-transmitted information and the acquisition of moral concepts, this strikes me as a case of bona fide learning of moral features.
    Step 5. Now take the question of value realism. If one were (as I am) a value realist, one could meaningfully characterize such processes as *shaped by evaluative or moral properties* in a causal manner that makes for genuine tracking, and opens the way for a reliabilist epistemology. But suppose one is instead a constructivist. Then one will speak of coherence and convergence, say, but will not offer a realist, tracking explanation of the convergence. This raises the specter of “ungrounded” or “epiphenomenal” cognition, such that the initial “priors”, for example, cannot be characterized as those that equip us for learning certain attitude-independent, causally-effective features of the world (that is, as analogous to the way, in ordinary perception, various “priors” that are “built-into” our perceptual system and belief-formation by evolution equip us for learning about independent features of the world). Of course, if one is a constructivist, one thinks that there is no need to demonstrate such “groundedness” in the moral case. Moral truth is constituted by coherence, perhaps, so reaching coherence in an appropriate way *is* moral learning.
    Step 6. Introduce fitting attitudes, so that one can speak of affective attitudes as in fact the appropriate form of representation of evaluative features. Then the realist can even speak in terms of truth and correspondence without a bad conscience. But this is another story.
    Hope this helps.
    Intuition and conservatism. Here I’ll be very brief. As I see it (and as I interpret figures like Aristotle and Kant), there is no cognition without intuition. Even “top down” cognition drawing the consequences of general principles relies upon intuited inferential connections. If there is an inherent conservatism in intuition, this will be inherited by cognition in general. That seems to be the case—inquiry needs to be inertial, and this cannot be put on a demonstrable basis, so intuition has to do it. But we know that inquiry can also be revolutionary, and it cannot do that without the operation of intuition, for example, in giving us confidence in the premises or the methods deployed, or in undermining confidence in one’s preconvictions.
    This is going on in Singer, I believe. He relies upon intuition just as much as stick-in-the-mud moralists do—your intuitive response to the child in the pond, your intuitive sense that the pain and suffering of another, however distant, are akin to your own or the child’s, your intuitive sense that mere distance cannot be morally significant, or that benefits and costs matter, etc. What he is doing, as revolutionaries throughout history have done, is drawing upon impartial processes like empathy to expand the moral circle. So we feel the force of his arguments, and do not merely see their logical form.
    Intuition and agency. A large and difficult question, Regina! I have to run now, but promise to get back to it.
    Again, humble thanks to you all for helping me think these questions through. I hope I’ve managed something you can see as bona fide learning!

  19. About valuation and agency (finally!). How much of a threat to our ordinary notions of valuing and agency is it that our affective responses, however sophisticated they might be, are often unconscious, so that we lack direct insight into their sources or their role in shaping our attention, thought, motivation, or action? This lack of insight can be greater or lesser, since unconscious affect typically influences conscious experience and choice in various ways. In principle, however, there is nothing to prevent it from being total, as in Regina’s very effective example of attunement to the presence of uberviolet light. One result can be that the individual, seeking to understand and rationalize her own behavior, comes up with a story about what she is doing and why that is more or less pure confabulation. This can make matters worse than mere lack of insight, since it means that the individual isn’t aware of how little she knows about her own motivations, choices, and actions. Moreover, she is developing a story that satisfies norms of rational narration, and this can lead in turn to a systematic misreading of the evidence furnished by her own choices. All this can put awareness of her actual motives, or of her own state of ignorance about them, further out of psychic reach.
    If psychotherapy in its more classic forms is right, then this is not an unusual condition—lack of understanding of our own motives and loss of contact with our own feelings are typically present to some degree in almost all of us.
    So what can be said about such cases? Regina suggests that the affective response to uberviolet light in her example is “fairly sophisticated and adaptive”, since it tracks the presence of uberviolet light in the environment with high accuracy and good predictability. In another sense, however, it seems to me to lack both sophistication and adaptiveness.
    It is not adaptive, since there’s nothing about the presence or absence of uberviolet light that is related to her needs, the needs of those around her, or her other concerns—it appears to be a “stand-alone” response that pre-empts other considerations. Take her example of hobbies. These, we think, take place in portions of our lives where we have significant freedom from need, a kind of psychic free play. But of course the uberviolet effect would extend, as Regina suggests, to all areas of life. For example, were she or her child ill, and she were seeking medical care, it seems she would go for whichever doctor, clinic, herbalist, or nostrum emits the most uberviolet light. Similarly for her choice of relationship partners, career path, or retirement plan. Moreover, as her circumstances change and different requirements emerge, this *inclination*, as I’ll call it, is unaffected in its operation.
    Indeed, it is difficult even to call this inclination a goal, since goal-pursuit requires instrumental cognition and motivation, so that a given means can become attractive even if it lacks the inclining feature. We see this in the case of pain aversion in children. Pain aversion makes them want to be rid of a toothache, but it also makes them not want to visit the dentist who could remove it. Maturation of our evaluative and agential capacities is needed before the individual can be moved to pursue a means that is, in many respects, the very opposite of what she is inclined to do.
    This idea of maturation helps us see the difference between an inclination and a value, as well, whether the value is conscious or unconscious. A “stand-alone” inclination contrasts with an attitude that coheres with, and whose operation is mediated by, a wide constellation of other attitudes and behaviors. When we value an end, we also feel shame, disappointment, or guilt when we cannot achieve it or when we let mere inclinations pull us away from its pursuit—some measure of focused self-condemnation. We also feel some measure of self-approval when we make progress toward attaining it, and pride when we achieve it. All these feelings can be conscious or unconscious, like the value itself, so there is a difference between an unconscious value and an unconscious inclination or goal. Unconscious or conscious values also manifest themselves in other ways. We tend to simulate or day-dream scenarios in which the aim is attained, and to draw motivational force from such mere imaginings. We form conscious or unconscious intentions whose content includes the value, and seek over time to hold ourselves to such intentions even in the face of difficulty. So the psychic structure of a value is complex, and involves a host of inter-related cognitive and affective attitudes. It need not, however, involve any conscious judgment, choice, or endorsement—indeed, it is typically because we value something that we are led to consciously choose or endorse it, or to make certain judgments or avowals. In such cases, I’ve tried to argue, the value operates to guide thought, feeling, and action *intuitively* rather than deliberatively, as in the case of the trial lawyer—though of course once self-consciousness has developed, as it does when she reflects upon her behavior, deliberative use can follow. If we accept the classical view of intuition, then this sort of pre-judgmental valuation is essential for genuine judgmental valuation, not a threat to it.
    But what about agency? Isn’t an individual who is in the dark about her values diminished in agency? We might need to distinguish two things: the *exercise* of agency and the *scope* of agency. The trial lawyer is clearly exercising agency, though until she becomes aware of the values underlying her conduct the scope of this agency is restricted. Spontaneous action can flow from unconscious goals, values, and intentions, including complex action that involves the simulation and evaluation of alternative futures. But there are important limitations if a creature is capable only of intuitive as opposed to deliberative agency. Statistical learning and simulation-based action selection have tremendous strengths, but they tend to learn and imagine incrementally, and within the set of categories already established. Moreover, speech is a conscious process. For us to articulate and share our thoughts and feelings they must be more than implicit. Humans seem to owe their exceptional problem-solving skills and adaptiveness in comparison to other primates to their capacity to innovate conceptually and share information—to develop and communicate culture.
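    A toy sketch may help fix the point about incrementality (again, my illustration, with made-up options and payoffs, not anything from the paper): a simple epsilon-greedy learner can tune its estimates of the options it already represents, but it has no way to introduce a new category of option, however rewarding that option would be.

```python
# Toy sketch (hypothetical options and payoffs): incremental statistical
# learning is confined to the categories it starts with. This epsilon-greedy
# learner optimizes over its given options A and B, but never "invents" C.

import random

values = {"A": 0.0, "B": 0.0}   # estimates for the established categories only
counts = {"A": 0, "B": 0}
true_payoff = {"A": 0.3, "B": 0.5, "C": 0.9}  # C exists, outside the repertoire

for _ in range(1000):
    if random.random() < 0.1:
        choice = random.choice(list(values))      # explore, but only known options
    else:
        choice = max(values, key=values.get)      # exploit current estimates
    reward = 1 if random.random() < true_payoff[choice] else 0
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]  # running mean

print(values)  # converges toward A ~ 0.3, B ~ 0.5; C is never discovered
```

    Conceptual innovation, on this picture, is precisely what adds "C" to the menu.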
    That’s the functional story for conscious agency. But what about value and meaning? If valuing is a matter of a constellation of psychic attitudes and dispositions, then when conscious recognition and articulation of beliefs and values comes onto the scene it is possible for individuals to develop, rethink, and integrate goals into yet more complete psychic structures of cognition, counter-factual reasoning, approval, planning, choice, and self-understanding. A new value can emerge and galvanize a wide sweep of our capacities to make its practical expression possible—becoming an ideal that guides us even in ways never before attempted. The same processes, moreover, make possible the rejection of values, even very entrenched values, which are at odds with our other beliefs and concerns, or which we find we cannot defend to others or, ultimately, ourselves. (I’ve tried to sketch in earlier replies some of the role of learning in these sorts of change. Psychotherapy, as I understand it, seeks to be another route to such learning.)
    All this is lacking in the case of a stand-alone, unknown inclination like the preference for uberviolet light emitters. But I would not locate the problem here in the opacity of the preference, or its unconscious character. Which forms of music do we find aesthetically valuable? This might be grounded at base in contingent and, from the universe’s perspective, arbitrary features of the human sensory system and brain. We might not have a good explanation of these features, though if Tymoczko is right (this is a caricature of his view), the answer lies in various conditional probabilities of tone transitions. These could be just as arbitrary as a preference for emitters of uberviolet light. But the structures of attitudes and practices that have built up around these basic preferences make our attitude toward music aesthetic and evaluative, not merely an inclination. And they extend the scope of our musical agency to the creation of new and previously unimagined works, rather than merely the passive enjoyment of what happens to strike our ears.
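    For what it is worth, here is a minimal sketch of the kind of model that caricature gestures at (my gloss, not Tymoczko's actual formalism): melodic expectation as first-order conditional probabilities of tone transitions, estimated from a corpus. The corpus and tone names are invented for illustration.

```python
# Toy sketch (invented corpus, my gloss on the "tone transition" idea):
# estimate P(next tone | current tone) by counting transitions in melodies.

from collections import Counter, defaultdict

corpus = ["C D E C", "C D E G", "E D C D", "G E D C"]  # hypothetical melodies

transitions = defaultdict(Counter)
for melody in corpus:
    tones = melody.split()
    for a, b in zip(tones, tones[1:]):
        transitions[a][b] += 1

def p(next_tone, given):
    """P(next_tone | given): relative frequency of the observed transition."""
    total = sum(transitions[given].values())
    return transitions[given][next_tone] / total if total else 0.0

print(p("E", "D"))  # how expected is E after D, on this tiny corpus?
```

    However arbitrary such conditional probabilities may be at base, the point above stands: it is the surrounding structure of attitudes and practices, not the statistics themselves, that makes the musical case evaluative.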
    Does this go any way toward addressing your concerns?
