Join us to discuss Peter Railton's new Ethics article, "The Affective Dog and Its Rational Tale: Intuition and Attunement"! The article is available open access here. Bryce Huebner kicks off the discussion with a critical précis below the fold:
Peter Railton has done an exemplary job of integrating philosophical insight with data from the affective and cognitive sciences. His new paper is long and fairly programmatic, but it is also thick with ideas that will be of interest to philosophers and scientists alike. Those who know his work will encounter a broadly familiar view. But the details in this paper are novel, and taking them seriously has the potential to transform the way that we approach questions about moral cognition and practical agency.
The paper begins with a description of a pro bono defense attorney, who has systematically dismantled the prosecution’s case and built an air-tight defense for her client. In a sense, everything has gone exactly as planned. But as she begins to recite her carefully constructed summation, her words feel hollow, and both the jury and her client start to check out. Something has gone wrong, but she isn't sure what it is or how to fix it. She pauses, and though she is mildly panicked and uncertain about what to do, she proceeds without reflecting too much on what she should say next. As a result, she launches into an emotionally charged and compelling articulation of why the jury must acquit her client. The words come to her, at the right time, in the right way.
This scenario is meant to highlight a contrast between explicitly planned behavior and behavior driven by (largely tacit) practical competence. Railton has long argued that practically competent agents are able to make intelligent and accurate decisions without relying on conscious deliberation. Here, he has two aims: 1) to redeem this claim in the coin of context-sensitive, spontaneous, and affective mechanisms; and 2) to show that our moral intuitions are often reliable because these mechanisms "inform thought and action in flexible, experience-based, statistically sophisticated, and representationally complex ways—grounding us in, and attuning us to, reality" (p.41).
Railton’s story about moral intuition begins from the recognition that biological cognition requires sifting through and prioritizing a massive amount of potentially relevant information, doing so in a way that is sensitive to the difference between better and worse options, and updating our assumptions about what is better as our options change. Of course, only a small fraction of the information we encounter and process is relevant to our current and ongoing concerns, and its importance always depends on our current situation, as well as on what else has happened recently. For most biologically significant purposes, conscious and deliberate thinking is too slow and computationally expensive to do the job that is required. So, like all other animals, we often rely on affective systems that are sensitive to the distribution and value of rewards, the probability of gains and losses, and subjective estimates of risk and uncertainty. These mechanisms compute ‘predictions’ about what the world is like, and they motivate thought and behavior in line with these predictions; but they also update future predictions in ways that minimize discrepancies between ‘predicted’ and actual outcomes. Over time, where the structure of the world is fairly stable, these ‘predictions’ will yield accurate representations of the world. By way of error-driven learning, we become attuned to the distribution and value of the risks, rewards, and opportunities we are likely to encounter. After developing a neurally plausible model of these affective mechanisms, Railton appeals to data suggesting that we spontaneously engage with and attempt to understand the mental lives of others. And he argues that we rely on imaginative simulations (which are constrained by the affective systems discussed above) to plan for future action and test our options before we act.
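To make the idea of discrepancy-minimizing learning concrete, here is a minimal sketch of the sort of delta-rule (Rescorla–Wagner-style) update that this picture invokes. Everything below is illustrative: the function names, learning rate, and payoff probability are my own assumptions, not anything taken from Railton's paper.

```python
import random

def update_value(estimate, outcome, alpha=0.1):
    """Delta-rule update: shift the estimate toward the observed outcome
    in proportion to the prediction error."""
    prediction_error = outcome - estimate       # discrepancy between 'predicted' and actual
    return estimate + alpha * prediction_error  # minimize future discrepancies

# A toy world in which an option pays off 70% of the time.
value = 0.0
for _ in range(10000):
    outcome = 1.0 if random.random() < 0.7 else 0.0
    value = update_value(value, outcome)

print(round(value, 2))  # typically close to 0.7: the estimate has become 'attuned' to the world
```

The point of the toy example is just this: a learner of this kind never consults an explicit model and never deliberates, yet where the structure of the world is stable, repeatedly correcting prediction errors drives its estimates toward the true statistics of its environment.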
This brings us to the most novel part of the paper. Moral psychology experiments have uncovered a network of mentalizing systems, counterfactual modeling systems, and affective systems, which together seem to allow us to imaginatively engage with morally significant situations. Railton uses this fact to argue that the results of well-known experiments, which are supposed to show that our intuitions are unreliable, only reveal that these intuitions are produced in situations that deviate from the world to which they are attuned. For example, using an analogue of Joshua Knobe’s case of the chairman who doesn’t care about the environment, Railton argues that statistical learning systems are likely to track troubling and anti-social character traits, predicting that problematic actions are intentional because they are congruent with those troubling traits (in line with Chandra Sripada’s work on the ‘deep self’). He also addresses trolley cases and Haidt’s work on consensual incest (I don’t have the space to address his reinterpretations in detail, but see §§18–20 of the paper). In each case, he tries to show that behavior in such experiments reveals the operation of a well-tuned and relatively accurate moral sense. This is a surprising hypothesis, but I think he’s pretty close to right.
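One way to see how trait-tracking could generate the Knobe asymmetry is with a deliberately crude Bayesian sketch. Nothing below comes from Railton or Sripada; the hypothesis space, the numbers, and the function are my own assumptions, meant only to show how congruence with a known anti-social trait can raise the posterior probability that an outcome was brought about intentionally.

```python
def posterior_intentional(prior, lik_if_intentional, lik_if_side_effect):
    """Bayes' rule over two hypotheses: the outcome was intended,
    or it was a mere side effect of some other goal."""
    numerator = lik_if_intentional * prior
    denominator = numerator + lik_if_side_effect * (1 - prior)
    return numerator / denominator

# Toy numbers (purely illustrative). The chairman's stated indifference to
# the environment makes 'intended harm' a live hypothesis, while 'intended
# help' is incongruent with his character and so gets a much lower prior.
harm_case = posterior_intentional(prior=0.5, lik_if_intentional=0.9, lik_if_side_effect=0.5)
help_case = posterior_intentional(prior=0.1, lik_if_intentional=0.9, lik_if_side_effect=0.5)
print(round(harm_case, 2), round(help_case, 2))  # 0.64 vs. 0.17: harm reads as intentional
```

On this toy picture, the asymmetry between the harm and help cases is not a performance error; it reflects a prior that has been shaped, quite reasonably, by evidence about the chairman's character.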
In slightly different ways, Fiery Cushman, Molly Crockett, Steven Quartz, and I have developed similar sorts of arguments. There is a great deal of evidence that moral cognition relies on a network of affective or evaluative systems that are attuned to the distribution of risks and rewards that we encounter in the world. Understood correctly, these data do seem to reveal that moral ‘intuitions’ are produced by flexible and statistically sophisticated mechanisms, which are sensitive to the regularities of our world. Nonetheless, I remain less optimistic about the operation of these systems than Railton seems to be. As he notes in passing (p.41), there's a dark side to the recognition that moral intuitions are produced by error-driven, discrepancy-minimizing learning algorithms.
Error-driven learning mechanisms seem to attune us to social norms and regularities. Specifically, there is evidence that the affective systems Railton discusses treat conformity with norms as intrinsically rewarding, and deviance from norms as errors to be corrected (Klucharev et al. 2009, 2011; Huebner forthcoming). This is important because we live in a world that's thick with structural racism, sexism, ableism, trans*phobia, and xenophobia. As we watch TV and film, read novels and blogs, and walk around familiar and unfamiliar neighborhoods, we are bombarded with a constant stream of 'evidence' that supports (or at the very least fails to contradict) our exclusionary biases. Railton is right that we are attuned to the world in which we live, and that practical competence and moral intuition are subserved by statistical learning systems that adjust their behavior when, and only when, things do not go as expected. But as I’ve argued in my own recent work, this is part of what makes it possible for our biases to become calcified in the practices that we rely on to do academic philosophy, to navigate interpersonal interactions, to make medical decisions, and more. Where we are attuned to biased practices, our ongoing behavior helps to entrench problematic practices, leading to more robustly biased structures to which our future attitudes will become attuned. Railton does argue that we could use a process like wide reflective equilibrium to weed out our problematic intuitions, but we should be apprehensive about the viability of this suggestion. When the vast majority of our intuitions are attuned to a messed up world, we are likely to rely on problematic assumptions about what's right and what's wrong, as well as about what counts as evidence for and against our reflective hypotheses; and this problem will be even more robust where our biases have become calcified in the norms and practices that we rely on in reasoning about what to do next.
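To see why this worries me, consider a toy variant of the earlier delta-rule sketch. Again, everything here is my own illustrative assumption; the crucial (and simplistic) stipulation is that the learning signal is observed conformity rather than moral accuracy.

```python
import random

def update(estimate, signal, alpha=0.1):
    # Same delta rule as before: shift the estimate toward the observed signal.
    return estimate + alpha * (signal - estimate)

# Stipulation: the 'reward' is social conformity, not moral accuracy. If 80%
# of the behavior we observe conforms to a biased norm, a discrepancy-
# minimizing learner will come to expect (and, by hypothesis, to enforce)
# that norm, regardless of whether the norm is defensible.
expected_conformity = 0.5
for _ in range(10000):
    observed = 1.0 if random.random() < 0.8 else 0.0
    expected_conformity = update(expected_conformity, observed)

print(round(expected_conformity, 2))  # typically near 0.8: attunement tracks prevalence, not rightness
```

The mechanism here is exactly the one that makes Railton's attunement story plausible; in a world thick with biased 'evidence', it converges on the biases just as reliably as it converges on anything else.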
We need to find some way of getting our affective systems attuned to morally preferable values. As Railton notes, some people may have better-attuned moral intuitions, and if so, we would do well to cultivate skills that allow us to reliably find and rely upon these experts. But judgments about moral expertise, too, will depend on our assumptions and biases, which will be filtered through potentially distorted attunements. And even if our attunements are not distorted, we will need some way of figuring out that this is the case. As Marx famously notes, the educators must themselves be educated; this is why I've been arguing that since social cognition depends on affective systems, ethics must be understood as revolutionary practice.
Huebner, B. (forthcoming). Implicit bias, reinforcement learning, and scaffolded moral cognition.
Klucharev, V., Hytönen, K., Rijpkema, M., Smidts, A., and Fernández, G. (2009). Reinforcement learning signal predicts social conformity. Neuron, 61, 140–151.
Klucharev, V., Munneke, M., Smidts, A., and Fernández, G. (2011). Downregulation of the posterior medial frontal cortex prevents social conformity. Journal of Neuroscience, 31, 11934–11940.
*Thanks to Michael Brownstein and Eric Mandelbaum for helpful discussion on this post.