Hi everyone! Thanks very much for the opportunity to discuss our work-in-progress, “‘I Love Women’: The Conceptual Inadequacy of ‘Implicit Bias.’”

Tests for implicit bias, in particular the Implicit Association Test (IAT), have recently come under scrutiny. Two different meta-analyses, by Oswald et al. (2013) and Forscher et al. (2017) (the latter recently discussed in the Chronicle of Higher Education), have concluded that measurements of “implicit bias” do not reliably predict biased behavior.

In our paper, we offer a different critique of implicit bias testing, one which philosophers and other humanistic thinkers might be well-suited to address. We argue that the dominant implicit bias tests assume crude and implausible conceptions of explicit prejudice, leaving open the possibility that the morally bad and wrong actions supposedly best explained by something interestingly implicit are instead best explained by non-obvious but nonetheless explicit prejudice.[i]

The results of implicit bias testing are supposed to be surprising. They purport to show that even good, self-aware people who reject prejudice or hold “strong egalitarian beliefs” harbor biases that are significantly unavailable to introspection and that best explain their performance of uncontrolled prejudiced actions.[ii] (Consider how underwhelming the research would be if it turned out that “implicitly biased” subjects included, for example, white supremacists lying about their explicit attitudes, or particularly un-self-aware bigots.)

More specifically, in order for implicit bias tests to live up to their billing, they must demonstrate that:

People who possess a good level of self-awareness and good moral beliefs and feelings:

1.    Form certain conceptual associations (like black/danger, white/good, woman/home), and

2.    Perform biased actions that are best explained by those conceptual associations.

We argue that the research in empirical psychology on implicit bias does not, and without a fundamental shift in focus could not, establish this startling thesis.

To see why, consider the ways in which the dominant implicit bias tests control for subjects’ explicit biases. The most common method of explicit bias assessment involves a self-report of “temperature” on a 1-10 scale. The race version of the IAT, for example, asks participants “How warm or cold do you feel toward Black people?” and “How warm or cold do you feel toward White people?” Participants are then asked which of a set of statements, ranging from “I strongly/moderately/slightly prefer White people to Black people” to “I strongly/moderately/slightly prefer Black people to White people,” or “I like White and Black people equally,” best describes them. This is the extent of the test for explicit racial bias in the race IAT. The methods the other leading tests for implicit bias use to determine subjects’ explicit biases are equally unsophisticated.[iii] In fact, none of the other dominant tests (the Sorting Paired Features Task, the Affect Misattribution Procedure, and the Go/No-Go Association Task) goes beyond some version of the “feeling thermometer” and very basic self-reports of participants’ preferences.

These crude methods of evaluating explicit bias are insufficient to detect, and so control for, the prejudices of agents who disavow what one might think of as blatant racism and sexism, but who are nevertheless biased in subtler, though not necessarily unconscious, ways. It is easy enough to imagine, for example, that a man who believes that women are goddesses who should be worshipped, but who lack natural aptitude in math, might report feeling the same “temperature” toward both men and women, or that a participant who explicitly harbored racist stereotypes about black athleticism, strength, and sexual ardor, and white intellectual superiority, might report having no “preference for,” or feeling no difference in “warmth” toward, members of one racial group over the other.

Explicitly racist or sexist beliefs that don’t compromise feelings of “warmth” are only part of the problem; the temperature scale is too crude to evaluate the moral quality of subjects’ feelings. Imagine, for example, a white person who feels afraid of black men when walking in public and is resentful of black men who seek political or corporate power, but who also feels a patronizing sense of compassion and a desire to do his duty to save black souls when he does outreach work for his church group. What is he to mark on the temperature scale when asked how he feels toward black people – lukewarm? Imagine he sincerely marks “5/10.” He is then asked to report how warmly he feels toward whites. Thinking to himself, “Well, people are a mix of good and bad,” he marks “5/10.” He then performs an implicit bias test, and learns that, in a computer simulation, he shoots unarmed black men at a higher rate than armed white men. Those who accept the standard interpretation of implicit bias tests would conclude that the discrepancy between his “egalitarian” feelings and his non-egalitarian behavior reflects an implicit bias. Is this the right interpretation? Clearly not: the attitudes that best explain his behavior need not be unconscious, and the suggestion that this man is “a good person with egalitarian feelings and beliefs” is highly misleading.

The point is not just a purely negative one about the current state of psychological testing. The heart of our objection is that successfully testing for implicit racism or sexism would require an understanding of non-implicit racism or sexism. Achieving this understanding is notoriously difficult; it is a task that calls for serious moral philosophical and theoretical work. Explicit sexism and racism operate in non-obvious and evolving ways, and a good theory can reveal the significance of phenomena that one might otherwise have thought of as benign. Our behavior and attitudes should be subject to interpretation in the light of theoretical insights.[iv]

At this point, one might conclude that while empirical tests for implicit bias are not showing exactly what they purport to show, they are nevertheless excellent tools for demonstrating the ways in which racism and sexism operate in subtler ways than one might think. And perhaps interventions based on the assumptions of the implicit bias literature reduce biased behavior.[v] So, what is the problem?

One problem is that the focus on implicit bias as a novel form of prejudice has spawned a new model of thinking about moral improvement. This new model encourages a skepticism of the methods of interaction consistent with what Strawson called the “reactive,” or the “participant” stance (education, reasoning, and the feeling and expression of blaming attitudes). As alternatives to engaging in rational persuasion, psychologists and policymakers have recommended various strategies (which we call “life hacks” in a nod to the popular genre of internet self-help articles) for manipulating one’s own conceptual associations (for example, pressing a button labelled “NO!” when one sees stereotype-consistent images, e.g., of a black face paired with the word “athletic,” or “YES!” when one sees stereotype-inconsistent images, e.g., of a white face paired with the word “athletic”).

Effective or not, life-hacking to reduce bias comes with at least two costs that we do not think have been fully appreciated. First, if we are right to suspect that many automatic associations and actions express non-implicit biases, then life-hacking will not address the root of the problem it aims to solve. Yes, it may succeed in de-programming, but it will not inform, educate, or persuade, and, as a result, the scope of its success is bound to be limited. It won’t help people to understand racism, or actually convince them of the arguments in favor of anti-racism. It won’t encourage someone to think about why he tends to find the anger of women amusing, to reconsider throwing a “Conquistabros and Navahoes” party, or to think about the significance of being condescended to because of one’s gender or race.

Second, if much of the bias expressed by rapid association and action can in principle be addressed by rational argument, engagement with richer understandings of prejudice, and other means consistent with standard practices of holding a person responsible, then the life-hack model does a disservice to both victims and perpetrators. By embracing it, we encourage victims to (mis)understand the expression of prejudice as the result of unfortunate unconscious associations, rather than subtle, but non-implicit, patterns of attitudes and beliefs. And we deny perpetrators agency, manipulating them rather than treating them as moral agents we can reason with, persuade, educate, and blame.

— Vida Yao and Samuel Reis-Dennis

 

Works Cited

Brownstein, Michael. “Implicit Bias and Race.” In The Routledge Companion to the Philosophy of Race, edited by Linda Alcoff, Luvell Anderson, and Paul Taylor. Routledge, forthcoming.

Forscher, P. S., Lai, C. K., Axt, J., Ebersole, C. R., Herman, M., Devine, P. G., & Nosek, B. A. (2017, July 1). “A Meta-Analysis of Change in Implicit Bias.” Retrieved from psyarxiv.com/dv8tu

Lai, C. K., Hoffman, K. M., & Nosek, B. A. (2013). “Reducing implicit prejudice.” Social and Personality Psychology Compass, 7, 315-330.

Levy, Neil. “Consciousness, Implicit Attitudes, and Moral Responsibility.” Noûs 48, no. 1 (2014): 21–40.

Oswald, Frederick L., Gregory Mitchell, Hart Blanton, and James Jaccard. “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies.” Journal of Personality and Social Psychology 105, no. 2 (2013): 171-92.

Strawson, P.F. “Freedom and Resentment.” In Freedom and Resentment and Other Essays. London: Methuen & Co., 2008.

[i] For simplicity, we focus on implicit bias tests for racism and sexism.

[ii] There are varying interpretations among both psychologists and philosophers of what it means for an attitude to be “implicit.” The standard interpretation is that these attitudes are “unconscious” and “uncontrollable,” though Fazio and Olson (2003) suggest that they are better understood as automatic. Philosophers Michael Brownstein (forthcoming) and Neil Levy (2014) suggest that “implicit” attitudes are “arational” (or at least, not paradigmatically “rational”); we don’t address that interpretation directly here, but in our paper, we show how our arguments apply to those interpretations as well.

[iii] For a discussion of three different measures of explicit bias that have been used in implicit bias testing, see Oswald et al. (2013). Strikingly, they conclude that even these measures of explicit bias (which include more sophisticated tests of prejudice, such as McConahay’s “Modern Racism Scale”) do not predict the performance of biased actions! They also note the difficulty of determining what counts as a measurable “biased action,” another area that is ripe for philosophical investigation.

[iv] Think of the various interpretations one could have of the following phenomena, given one’s moral-theoretical commitments: (1) Sean Spicer claiming that Jews were brought to “Holocaust Centers”; (2) the statement: “The success of Asian-Americans shows that racism is not a significant factor in limiting the progress of African-Americans”; (3) remarking while watching the NBA Finals that basketball is “a surprisingly elegant sport”;  (4) a failure to remember the names of the two Latino men in one’s class; (5) using the same vocabulary to describe the NFL Combine and the Kentucky Derby; (6) “Bro’s before ho’s”; (7) college party themes: “Conquistabros and Navahoes,” “King Tuts and Egyptian Sluts,” “MLKeg Day”; (8) bragging at a work happy hour: “I majored in business, but I became a real expert in women’s studies in college, if you know what I mean…”; (9) making fun of a man for the way he’s eating a banana.

[v] In fact, as Lai et al. (2013) conclude, there is little evidence that these strategies are effective in creating long-term change.

 

8 Replies to “‘I Love Women’: The Conceptual Inadequacy of ‘Implicit Bias’” (by Yao and Reis-Dennis)

  1. Kent Lee et al. [2017; DOI:10.1037/emo0000347] claim “it is well-known that discrete emotional experiences such as fear, disgust, anger, resentment, or sympathy play an important role in shaping inter-group behavior”. No-one would claim that these states are inaccessible to introspection; the suggestion is rather that they are not easily changed by persuasion and education, while blame merely causes additional resentment. The Lee paper tests a constructionist hypothesis. Ponsi et al. [2017; DOI:10.1098/rspb.2017.0908] demonstrate effects of subliminal affective priming on the categorization of neutral faces into Italians (ingroup) and Romanians (outgroup).

  2. Great article and very important correction!

    However, I think you hint at overcorrecting in some places. (This is somewhat of a reach as I’ll be attacking not explicit conclusions, but the antecedents of conditionals you pose. Nevertheless, I hope it’s at least good fodder for discussion. This comment is partially based on the longer article I found on Vida’s website.)

    Biases identified as “implicit biases” are often not *as implicit* as advertised (for instance, “unconscious” may often need to be downgraded to something like “manifested as a conscious, but vague feeling with non-obvious causes/significance”). However, I think you may jump too quickly from (a) not [fully] implicit to (b) [fully] explicit.

    For instance, you state: “…if much of the bias expressed by rapid association and action can in principle be addressed by rational argument, engagement with richer understandings of prejudice, and other means consistent with standard practices of holding a person responsible, then the life-hack model does a disservice to both victims and perpetrators.”

    (I’m not sure why “in principle” matters?) Is there reason to think that in actuality, the antecedent might be true?

    The reasoning motivating the antecedent seems to be something like:
    (1) The biases-in-question are not [fully] implicit
    (2) Therefore, they are [fully] explicit
    (3) [Fully] explicit biases are the sort of biases that can be addressed by rational argument, etc.
    (4) Therefore, it is plausible that biases-in-question could be addressed by rational argument, etc.

    But the move from (1) to (2) falls apart if implicit/explicit is a matter of degree.

    Hope this isn’t straw-manning.

    Thanks!
    -Mark

  3. Hi David and Mark,

    Thank you for your comments and questions!

    First, to David’s points:
    We’re actually fairly optimistic about the power of education, reasoning, and blaming to rationally influence states such as fear, disgust, anger, resentment, and sympathy. On a theory of the emotions that we find plausible, all of the states you mention tend to reflect the judgments/perceptions of the people feeling them (resentment, for example, reflects, roughly, the judgment that one has been wronged). As such, persuading someone that the judgments/perceptions underlying an emotional response (or lack of response) are off-base could, and should, influence that response. Persuading someone that he has been wronged, for instance, might make him feel resentful; educating someone about the suffering of others might arouse feelings of sympathy in him, and so on.

    We’re also optimistic about the power of well-expressed blame to prompt reflection rather than increased resentment. Of course, in many cases, blame can be divisive, but it need not be. When expressed in the context of a certain kind of relationship, in the right way, and at the right time, we think it can be the best way to draw wrongdoers back into the moral fold. And its goodness for this purpose goes beyond its effects on the blamed agent; there is value in standing up for oneself and in standing up for others, even if taking such a stand prompts resentment rather than remorse.

    For Mark:
    We don’t mean to argue that all bias expressed by rapid association is fully explicit, just that some (even much) of it may be. We’re suggesting that the best explanation of one’s “failure” of an implicit bias test will, in many cases, be one’s explicit biases. In those cases, the biases would plausibly be best addressed by rational argument. Of course, in some cases, the best explanation may be the test-taker’s implicit biases. In those cases, perhaps non-rational means would be best. In the middle, there are cases in which someone “fails” an implicit bias test and it isn’t clear what the best explanation is. Perhaps the person is genuinely committed to mostly good values, but has been known to do some of the things we imagine in our example-filled footnote, such as making fun of a man for the way he eats a banana, or helping to organize a “King Tuts and Egyptian Sluts” party on campus. Concluding, on the basis of very basic survey responses, that such a person’s test “failure” is best explained by implicit bias would be too hasty. Figuring out what to say about these kinds of cases, and figuring out which of these situations one is in, will require both subtle observation/measurement and a good theory of explicit prejudice, which is why we think philosophical/theoretical work about the nature of racism, sexism, and so on should have an important role to play in implicit bias scholarship.

    Thanks again to both of you for taking the time to read and comment.

    -Sam and Vida

  4. Dear Sam and Vida. Thank you for your reply. My own thinking is influenced by other domains of automatic bias, where judgements are affected by non-conscious psychological processes: orchestral jobs v. sex, salary v. a person’s height or attractiveness, severity of parole board decisions v. sentencing before or after lunch or early or late in the session, severity of judicial sentencing v. youthfulness of appearance, scoring of musical ability v. order of presentation. It seems to me that in these examples, the exhortatory/educational approach can only go so far, as opposed to institutional/administrative “nudging”. It is hard work being reflective all the time, and any given episode can be hard to consciously analyse, e.g. comparing a short and a tall job applicant.

    The example of police training removing a tendency to shoot dark-skinned figures more often than light-skinned figures in simulated (anxiety-provoking) encounters is another example of a “hack” operating at a nonconscious level (even though the choice of practice task is obviously rational in nature).

  5. Hi David,

    Those are interesting cases. Some of the actions/judgments you mention do seem most plausibly explained by the agents’ implicit attitudes (differences in sentencing before and after lunch, for example). For others, things aren’t as clear. For example, it’s not obvious that, say, a tendency to select unqualified men over qualified women for orchestral jobs doesn’t express a (perhaps somewhat subtle) explicit attitude. (And this is true even if the selection process is automatic.)

    So, we would argue that upon learning that a police officer tends to shoot dark-skinned figures more readily than light-skinned figures, we should first figure out whether that tendency expresses an implicit or explicit bias. The answer will be relevant to how it would be appropriate to respond. Even if “life-hack”-style trainings worked to reduce a biased behavior, that would not be enough to show that the biases that explained that behavior were implicit. In the paper, we discuss an example involving a person who loves Walt Whitman’s poetry being electroshocked until the mere sight of “Leaves of Grass” makes her shudder in horror. The fact that the shock therapy succeeds in getting her to avoid Whitman, we argue, does not show that her prior tendency to seek out his work was driven by implicit attitudes or associations. Of course, if a “life-hack”-style training is effective in getting police officers to make better snap decisions, that’s a very strong, and perhaps decisive, reason to use it! Our points are 1) that the effectiveness of a “life-hack” doesn’t show the underlying attitudes it aims to change are implicit, and, 2) that using “life-hacks,” especially to correct for attitudes that are non-implicit, comes at a moral cost.

    Best,
    Sam and Vida

  6. Hi Sam and Vida. The male bias in orchestra recruitment had explicit expression by many conductors: “it is my experience that female players are less technically proficient”. I would regard objective demonstration that one’s perceptions are faulty by blind audition (resumes etc) or consciousness raising exercises (“blue eyes v. brown eyes”) as a kind of hack.

    The Lee paper I cite actually found “evidence that conceptualization of negative affect toward Black Americans as sympathy, rather than fear, mitigates the relationship between negative affect and fear of Black Americans on self-report and perceptual measures, and reduces racial bias on a psychophysiological measure” – straddling both sides of the street. They freely move between “implicit bias measures” and “implicit bias” in that paper. Cheers, David.

  7. I’m sorry to leave a comment after reading your article very quickly; I hope I didn’t miss anything crucial. It looks like a great article to me, engaging with some very important things, but I feel like it confounds implicit and explicit in a few places, though I very much take your point that that’s not a clear dichotomy. This is quite a lot of what Dan Dennett has been arguing for years.

    I want to draw your attention to our article in Science showing that word embeddings (computer representations of semantics that are basically just made by counting words) also have implicit bias, and that these biases also seem to correlate with who gets callbacks in resume studies. I also wrote a blog post, “We Didn’t Prove Prejudice Is True,” clarifying our results because of some of the feedback we got.

    The Science article is here: http://science.sciencemag.org/content/356/6334/183
    An open-access version is here: http://opus.bath.ac.uk/55288/
    My blog post is here: https://joanna-bryson.blogspot.co.uk/2017/04/we-didnt-prove-prejudice-is-true-role.html

  8. Hi Joanna; thanks for your comment! In response to the conclusion that “computer representations of semantics… also have implicit bias, and that these also seem to correlate with who gets call backs on resume studies,” we’d want to suggest that, given this result, we still need to determine the best explanation for this correlation before concluding that the mere existence of these semantic relationships best explains the prejudiced behavior. That many people hold non-obvious, but nevertheless non-implicit, sexist attitudes, is also a possible explanation of the CV selection patterns. Let us know if you’re interested in reading the paper!
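The embedding-bias result discussed in the last two comments can be illustrated with a small sketch. This is not the authors’ code, and the word vectors below are invented for illustration; it only shows the shape of the association measure that such studies use (bias read off as a difference in mean cosine similarity between a target word and two sets of attribute words).

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word_vec, attr_a, attr_b):
    """Mean cosine similarity to attribute set A minus mean similarity to set B."""
    sim_a = sum(cosine(word_vec, v) for v in attr_a) / len(attr_a)
    sim_b = sum(cosine(word_vec, v) for v in attr_b) / len(attr_b)
    return sim_a - sim_b

# Hypothetical 2-d embeddings, chosen so the toy example exhibits an effect;
# real studies use high-dimensional vectors learned from large text corpora.
career = [(0.9, 0.1), (0.8, 0.2)]   # stand-ins for words like "career", "office"
family = [(0.1, 0.9), (0.2, 0.8)]   # stand-ins for words like "family", "home"
he = (0.85, 0.15)
she = (0.2, 0.9)

print(association(he, career, family))   # positive: "he" sits closer to career terms
print(association(she, career, family))  # negative: "she" sits closer to family terms
```

Note that, as the reply above stresses, a nonzero association score of this kind records a statistical regularity in how words co-occur; it does not by itself settle whether the behavior it correlates with is best explained by implicit or by non-obvious explicit attitudes.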
