The Journal of Things We Like (Lots)
Khiara M. Bridges, Race in the Machine: Racial Disparities in Health and Medical AI, 110 Va. L. Rev. 243 (2024).

Artificial intelligence (AI) is moving increasingly rapidly into health care (as indeed into everything else). But it has problems there (as indeed everywhere else!). What’s to be done, in particular, about the deeply embedded biases along racial and other lines that permeate the whole world of health and, as such, are likely to be encoded in AI?

Khiara Bridges gives an answer that seems mild but carries roots of revolution. In Race in the Machine: Racial Disparities in Health and Medical AI, she argues that informed consent is a key lever to pull in fighting these racial disparities. But not because informed consent—at present, mostly a formality, a begrudging nod to autonomy—will fix the problem in its current state. Instead, Bridges argues, informed consent, beefed up and focused on conveying the brutal truth about encoded racial disparities, can form the foundation for revolutionary social changes in health care, health, and beyond. Curious? Read on!

The first half of the article comprises Parts I-III. These parts don’t break much new ground, but they do an excellent job bringing together the literature, often with a host of data and examples, to make their cases—each of which is prerequisite to the piece’s second half. Part I covers the landscape of health and health-care bias. Part II does AI. And Part III brings the first two together to describe bias in medical AI.

To elaborate a bit: Part I traces the causes of different health outcomes for marginalized groups (e.g., substandard housing, poverty, persistent stress caused by racism) and the different treatment of marginalized groups by the medical system (e.g., doctors offering different treatments to Black patients than to White patients). It’s replete with infuriating examples, most reflecting the endemic bias experienced by Black patients in America and many referring to Black maternal health care (the subject of a prior Bridges opus). Part II provides a basic primer on AI (probably skippable by AI-conversant readers, but otherwise a helpful foundation), a handy overview of AI in medicine (again, skippable if familiar, though that describes far fewer folks), and a discussion of the potential uses of medical AI in prenatal care—this last a synthesis of technologies and the medical literature that is both novel and insightful. In Part III, discussing bias in medical AI, Bridges explores pre-mapped territory, but she walks it carefully and thoughtfully and shines new light on it, including through trenchant examples from prenatal care. She lays out different sources of bias: design choices, inadequate data, data that accurately reflect inequitable systems, and the pernicious encoding of race (often deliberate).
After this recounting, Bridges emphasizes an underappreciated point: the problem isn’t just that AI will encode existing biases—it’s that it will wrap them in “the veneer of objectivity,” leaving minorities ultimately worse off, because they suffer the same injustice but this time it’s coded as the machine being impartially correct (what Ifeoma Ajunwa dubs “data objectivity”).

The second half (Part IV) is the heart of the article; it starts out smart and interesting and reasonable and winds up smart and interesting and audacious. In a good way!

Starting with the new and smart but not socks-removing: Bridges presents convincing evidence that people of color may well not want to have medical AI involved in their care. She recounts studies of “algorithmic aversion,” where folks are, well, averse to using or trusting algorithms—and demonstrates how this is likely to be particularly forceful for Black patients and medical AI. It’s not just the legacy of Tuskegee, she recounts; it’s the pervasive and ongoing evidence of bias, inequality, and inequity in the health systems of today that compromises trustworthiness. (Indeed, AI may worsen this dynamic.)

So what’s the intervention? Bridges argues for informed consent—telling all patients, but especially those of color, not only that AI is being used in their care, but that the AI is very likely to be biased, based on the deep multimodal biases embedded in health. Why? Well, for starters, there’s the classic story that informed consent respects autonomy, and patients (especially of color) would want to know, so physicians should tell them. And that’s likely enough.

But there’s more, and here’s where it gets radical. Bridges draws on literature grounding informed consent in the Nuremberg Trials and recasting it “as a rebuke of Nazi ‘medicine,’ eugenics, anti-Semitism, racism, and white supremacy.” (P. 316.) This “rebelliousness” underlying informed consent, Bridges forcefully posits, can be grounds for a broader social revolution—if we tell patients, truly and meaningfully, about the inequities embedded in the system and informing their care, that may plant the seeds for making society actually better—a goal vastly preferable to the Faustian outcome where somehow-improved algorithms paper over unaddressed social inequities.

It’s a provocative, fascinating, and persuasive argument. (I’m primed to resist—I’m part of the benighted crew that’s argued that informed consent probably isn’t specially needed for medical AI—but I’m far less complacent than I used to be.)

And of course Bridges’ argument stretches beyond AI in medicine, to reach medical care generally and indeed areas outside medicine. Bridges mentions this, and if it’s not deeply explored in the piece, that may be because the piece is already just shy of a hundred pages. But one can see the radical, beguiling arguments reaching forward. And that’s a gratifying uneasiness with which to leave this challenging and excellent piece.

A postscript: It’d be a shame to review the article without mentioning Bridges’ engrossing prose. Some sesquipedalian sentences are pure pleasure to peruse: “These defiant, revolutionary origins have been expunged from the perfunctory form that the informed consent process has taken at present.” (P. 250.) Some paragraphs are short and pungent; in others, Bridges deploys an avalanche of data and studies. One crushing paragraph in the introduction offers a relentless and irresistible litany of “for example”s, each a drumbeat pounding home her point about embedded disparity. It’s a pleasure to move through the weighty arguments with Bridges’ writing carrying you along.

Cite as: Nicholson Price, Can Informed Consent Solve AI Bias?, JOTWELL (May 7, 2024) (reviewing Khiara M. Bridges, Race in the Machine: Racial Disparities in Health and Medical AI, 110 Va. L. Rev. 243 (2024)), https://cyber.jotwell.com/can-informed-consent-solve-ai-bias/.