Study reveals gender and minority bias in clerkship evaluation language.
What’s in a name? What’s in an adjective? Can seemingly innocuous descriptors in a third-year clerkship evaluation shape a medical student’s future?
Signs point to yes, according to a study co-authored by Rebekah Gardner, MD, associate professor of medicine, and published in May in the Journal of General Internal Medicine.
The work began on a hunch, after Gardner, who chairs the internal medicine residency selection committee, and Urmimala Sarkar, MD, MPH, a professor at the University of California, San Francisco School of Medicine, compared observations about med student evaluations.
“Both of us had noticed quite a difference between [dean’s] letters that were describing female versus male medical students,” says Gardner, who each year reads 50 to 100 of these letters, also called Medical Student Performance Evaluations (MSPEs). “It’s really striking when you start to notice it.”
The conversation sparked a research project that analyzed almost 90,000 third-year core clinical rotation clerkship evaluations from UCSF and Brown. The authors examined descriptions of underrepresented minorities (URMs) in addition to women. Computer algorithms sifted through the text and picked out common adjectives, weighting them for importance based on how often they were applied.
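The paper's code isn't described in detail here, but the approach the article summarizes — picking out common adjectives and weighting them by how often they appear — can be sketched roughly as follows. The adjective lexicon and sample evaluation texts below are invented placeholders; the actual study presumably used part-of-speech tagging over the full vocabulary rather than a hand-picked word list.

```python
from collections import Counter
import re

# Hypothetical mini-lexicon of descriptors drawn from the article;
# a real pipeline would tag adjectives automatically.
ADJECTIVES = {"lovely", "cheerful", "fabulous", "scientific",
              "pleasant", "open", "nice", "knowledgeable"}

def adjective_weights(evaluations):
    """Count lexicon adjectives across a list of evaluation texts and
    weight each one by its share of all adjective mentions."""
    counts = Counter()
    for text in evaluations:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word in ADJECTIVES:
                counts[word] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {adj: n / total for adj, n in counts.items()}

# Toy comparison between two invented groups of evaluations
group_a = adjective_weights(["A lovely, cheerful student.",
                             "Fabulous bedside manner."])
group_b = adjective_weights(["A scientific, knowledgeable student.",
                             "Very knowledgeable on rounds."])
print(group_a)
print(group_b)
```

Comparing the resulting weight dictionaries across demographic groups is what surfaces the kind of skew the study reports: personality descriptors concentrated in one group's evaluations, competency descriptors in another's.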
The study found that women were more frequently labeled with personality traits like “lovely,” “cheerful,” and “fabulous,” while male counterparts received descriptors for competency-related behaviors, like “scientific.” URMs, meanwhile, were often described as “pleasant,” “open,” and “nice,” and less likely than non-URM students to be called “knowledgeable.”
Gardner, who also teaches medical students during their third-year clerkships, knows that clerkship evaluators frequently write the same students’ residency recommendation letters, referring back to clerkship evaluations for reminders—sometimes verbatim. “So you may see echoes of the same verbiage,” Gardner says. Because MSPEs are an important part of residency applications, language can create a ripple effect: residency opportunities impact future fellowship chances, and so on.
“What seems like a really tiny thing can have reverberations moving forward through that person’s career, particularly if [they] are already coming from a disadvantaged background,” Gardner says.
The study results weren’t unexpected, Gardner says, though one finding “surprised her more, but wasn’t totally surprising: the URM data. That’s been less looked at before.” And, she adds, those results were “certainly very problematic.”
Gardner notes that implicit or unconscious bias can occur when writing evaluations, especially when they are so numerous and frequent that they become routine. Reviewers become “less intentional” about word choice, she says, “and less thoughtful about the implications of those words.” Reviewers often lead hurried, busy lives, “so you shortcut. And when our brains take shortcuts, that doesn’t serve disadvantaged groups very well.”
Brown and its residency programs regularly offer training for faculty and hospital staff to address unconscious bias, and Gardner’s department is participating in a University of Wisconsin workshop initiative called the Bias Reduction in Internal Medicine program.
Gardner plans further study of effective bias reduction. “I think it would be helpful to have more data-driven approaches as opposed to just sort of using intuitive approaches,” she says. “It’s an area that we need more work on.”