Hi everyone! Thanks very much for the opportunity to discuss our work-in-progress, “‘I Love Women’: The Conceptual Inadequacy of ‘Implicit Bias.’”
Tests for implicit bias, in particular the Implicit Association Test (IAT), have recently come under scrutiny. Two meta-analyses, by Oswald et al. (2013) and Forscher et al. (2017) (the latter recently discussed in the Chronicle of Higher Education), have concluded that measurements of “implicit bias” do not reliably predict biased behavior.
In our paper, we offer a different critique of implicit bias testing, one which philosophers and other humanistic thinkers might be well-suited to address. We argue that the dominant implicit bias tests assume crude and implausible conceptions of explicit prejudice, leaving open the possibility that the morally bad and wrong actions supposedly best explained by something interestingly implicit are instead best explained by non-obvious but nonetheless explicit prejudice.[i]
The results of implicit bias testing are supposed to be surprising. They purport to show that even good, self-aware people who reject prejudice or hold “strong egalitarian beliefs” harbor biases that are significantly unavailable to introspection and that best explain their performance of uncontrolled prejudiced actions.[ii] (Consider how underwhelming the research would be if it turned out that “implicitly biased” subjects included, for example, white supremacists lying about their explicit attitudes, or particularly un-self-aware bigots.)
More specifically, in order for implicit bias tests to live up to their billing, they must demonstrate that:
People who possess a good level of self-awareness and good moral beliefs and feelings:
1. Form certain conceptual associations (like black/danger, white/good, woman/home), and
2. Perform biased actions that are best explained by those conceptual associations.
We argue that the research in empirical psychology on implicit bias does not, and without a fundamental shift in focus could not, establish this startling thesis.
To see why, consider the ways in which the dominant implicit bias tests control for subjects’ explicit biases. The most common method of explicit bias assessment involves a self-report of “temperature” on a 1-10 scale. The race version of the IAT, for example, asks participants “How warm or cold do you feel toward Black people?” and “How warm or cold do you feel toward White people?” Participants are then asked which of a set of statements, ranging from “I strongly/moderately/slightly prefer White people to Black people” to “I strongly/moderately/slightly prefer Black people to White people,” or “I like White and Black people equally,” best describes them. This is the extent of the test for explicit racial bias in the race IAT. The methods the other leading tests for implicit bias use to determine subjects’ explicit biases are equally unsophisticated.[iii] In fact, none of the other dominant tests (the Sorting Paired Features Task, the Affect Misattribution Procedure, and the Go/No-Go Association Task) goes beyond some version of the “feeling thermometer” and very basic self-reports of participants’ preferences.
These crude methods of evaluating explicit bias are insufficient to detect, and so control for, the prejudices of agents who disavow what one might think of as blatant racism and sexism, but who are nevertheless biased in subtler, but not necessarily unconscious, ways. It is easy enough to imagine, for example, that a man who believes that women are goddesses who should be worshipped, but who lack natural aptitude in math, might report feeling the same “temperature” toward both men and women, or that a participant who explicitly harbored racist stereotypes about black athleticism, strength, and sexual ardor, and white intellectual superiority, might report having no “preference for,” or difference in “warmth” toward, members of one racial group over the other.
Explicitly racist or sexist beliefs that don’t compromise feelings of “warmth” are only part of the problem; the temperature scale is also too crude to evaluate the moral quality of subjects’ feelings. Imagine, for example, a white person who feels afraid of black men when walking in public and is resentful of black men who seek political or corporate power, but who also feels a patronizing sense of compassion and a desire to do his duty to save black souls when he does outreach work for his church group. What is he to mark on the temperature scale when asked how he feels toward black people – lukewarm? Imagine he sincerely marks “5/10.” He is then asked to report how warmly he feels toward whites. Thinking to himself, “Well, people are a mix of good and bad,” he marks “5/10.” He then performs an implicit bias test, and learns that, in a computer simulation, he shoots unarmed black men at a higher rate than armed white men. Those who accept the standard interpretation of implicit bias tests would conclude that the discrepancy between his “egalitarian” feelings and his non-egalitarian behavior reflects an implicit bias. Is this the right interpretation? Clearly not: the attitudes that best explain his behavior need not be unconscious, and the suggestion that this man is “a good person with egalitarian feelings and beliefs” is highly misleading.
The point is not just a purely negative one about the current state of psychological testing. The heart of our objection is that successfully testing for implicit racism or sexism would require an understanding of non-implicit racism or sexism. Achieving this understanding is notoriously difficult; it is a task that calls for serious moral philosophical and theoretical work. Explicit sexism and racism operate in non-obvious and evolving ways, and a good theory can reveal the significance of phenomena that one might otherwise have thought of as benign. Our behavior and attitudes should be subject to interpretation in the light of theoretical insights.[iv]
At this point, one might conclude that while empirical tests for implicit bias are not showing exactly what they purport to show, they are nevertheless excellent tools for demonstrating the ways in which racism and sexism operate in subtler ways than one might think. And perhaps interventions based on the assumptions of the implicit bias literature reduce biased behavior.[v] So, what is the problem?
One problem is that the focus on implicit bias as a novel form of prejudice has spawned a new model of thinking about moral improvement. This new model encourages skepticism about the methods of interaction consistent with what Strawson called the “reactive” or “participant” stance (education, reasoning, and the feeling and expression of blaming attitudes). As alternatives to engaging in rational persuasion, psychologists and policymakers have recommended various strategies (which we call “life hacks” in a nod to the popular genre of internet self-help articles) for manipulating one’s own conceptual associations (for example, pressing a button labeled “NO!” when one sees stereotype-consistent images, e.g., a black face paired with the word “athletic,” or “YES!” when one sees stereotype-inconsistent images, e.g., a white face paired with the word “athletic”).
Effective or not, life-hacking to reduce bias comes with at least two costs that we do not think have been fully appreciated. First, if we are right to suspect that many automatic associations and actions express non-implicit biases, then life-hacking will not address the root of the problem it aims to solve. Yes, it may succeed in de-programming, but it will not inform, educate, or persuade, and, as a result, the scope of its success is bound to be limited. It won’t help people to understand racism, or actually convince them of the arguments in favor of anti-racism. It won’t encourage someone to think about why he tends to find the anger of women amusing, to reconsider throwing a “Conquistabros and Navahoes” party, or to think about the significance of being condescended to because of one’s gender or race.
Second, if much of the bias expressed by rapid association and action can in principle be addressed by rational argument, engagement with richer understandings of prejudice, and other means consistent with standard practices of holding a person responsible, then the life-hack model does a disservice to both victims and perpetrators. By embracing it, we encourage victims to (mis)understand the expression of prejudice as the result of unfortunate unconscious associations, rather than subtle, but non-implicit, patterns of attitudes and beliefs. And we deny perpetrators agency, manipulating them rather than treating them as moral agents we can reason with, persuade, educate, and blame.
Brownstein, Michael. “Implicit Bias and Race.” In The Routledge Companion to the Philosophy of Race, edited by Linda Alcoff, Luvell Anderson, and Paul Taylor. Routledge, forthcoming.
Forscher, P. S., C. K. Lai, J. Axt, C. R. Ebersole, M. Herman, P. G. Devine, and B. A. Nosek. “A Meta-Analysis of Change in Implicit Bias.” Preprint, July 1, 2017. Retrieved from psyarxiv.com/dv8tu
Lai, C. K., K. M. Hoffman, and B. A. Nosek. “Reducing Implicit Prejudice.” Social and Personality Psychology Compass 7 (2013): 315–330.
Levy, Neil. “Consciousness, Implicit Attitudes, and Moral Responsibility.” Noûs 48, no. 1 (2014): 21–40.
Oswald, Frederick L., Gregory Mitchell, Hart Blanton, and James Jaccard. “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies.” Journal of Personality and Social Psychology 105, no. 2 (2013): 171–192.
Strawson, P. F. “Freedom and Resentment.” In Freedom and Resentment and Other Essays. London: Methuen & Co., 2008.
[i] For simplicity, we focus on implicit bias tests for racism and sexism.
[ii] There are varying interpretations among both psychologists and philosophers of what it means for an attitude to be “implicit.” The standard interpretation is that these attitudes are “unconscious” and “uncontrollable,” though Fazio and Olson (2003) suggest that they are better understood as automatic. Philosophers Michael Brownstein (forthcoming) and Neil Levy (2014) suggest that “implicit” attitudes are “arational” (or at least, not paradigmatically “rational”); we don’t address those interpretations directly here, but in our paper we show how our arguments apply to them as well.
[iii] For a discussion of three different measures of explicit bias that have been used in implicit bias testing, see Oswald et al. (2013). Strikingly, they conclude that even these measures of explicit bias (which include more sophisticated tests of prejudice, such as McConahay’s “Modern Racism Scale”) do not predict the performance of biased actions! They also note the difficulty of determining what counts as a measurable “biased action,” another area that is ripe for philosophical investigation.
[iv] Think of the various interpretations one could have of the following phenomena, given one’s moral-theoretical commitments: (1) Sean Spicer claiming that Jews were brought to “Holocaust Centers”; (2) the statement: “The success of Asian-Americans shows that racism is not a significant factor in limiting the progress of African-Americans”; (3) remarking while watching the NBA Finals that basketball is “a surprisingly elegant sport”; (4) a failure to remember the names of the two Latino men in one’s class; (5) using the same vocabulary to describe the NFL Combine and the Kentucky Derby; (6) “Bro’s before ho’s”; (7) college party themes: “Conquistabros and Navahoes,” “King Tuts and Egyptian Sluts,” “MLKeg Day”; (8) bragging at a work happy hour: “I majored in business, but I became a real expert in women’s studies in college, if you know what I mean…”; (9) making fun of a man for the way he’s eating a banana.
[v] In fact, as Lai et al. (2013) conclude, there is little evidence that these strategies are effective in creating long-term change.