A health care algorithm makes black patients substantially less likely than their white counterparts to receive important medical treatment. The flaw, which affects millions of patients, was revealed this week in research published in the journal Science.
The study does not name the makers of the algorithm, but Ziad Obermeyer, an acting associate professor at the University of California, Berkeley, who worked on the study, says “almost every large health care system” is using it, as are institutions like insurers. Similar algorithms are produced by several other companies as well. “This is a systematic feature of the way pretty much everyone in the space approaches this problem,” he says.
“This is a systematic feature”
The algorithm is used by health care providers to screen patients for “high-risk care management” intervention. Under this system, patients who have especially complex medical needs are automatically flagged by the algorithm. Once selected, they may receive additional care resources, like more attention from doctors. As the researchers note, the system is widely used around the United States, and for good reason. Extra benefits like dedicated nurses and more primary care appointments are costly for health care providers. The algorithm is used to predict which patients will benefit the most from extra assistance, allowing providers to focus their limited time and resources where they are most needed.
To make that prediction, the algorithm relies on data about how much it costs a care provider to treat a patient. In theory, this could act as a proxy for how sick a patient is. But by studying a dataset of patients, the authors of the Science study show that, because of unequal access to health care, much less is spent treating black patients than similarly sick white patients. The algorithm doesn’t account for this discrepancy, leading to a startlingly large racial bias against treatment for black patients.
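The mechanism can be sketched with a toy simulation (this is an illustration of the general idea, not the study’s actual model; the group labels, spending multiplier, and thresholds are all invented assumptions). If two groups are equally sick but one has systematically lower spending, a fixed cutoff on predicted cost flags that group far less often:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: two groups with identical underlying illness.
group = rng.integers(0, 2, n)          # 0 = white, 1 = black (toy labels)
illness = rng.normal(50, 10, n)        # true health need, same distribution

# Assumption mirroring the study's finding: less is spent on black
# patients at the same level of sickness (unequal access to care).
spending = illness * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 2, n)

# A system that flags the top 10% by cost will score equally sick
# black patients lower, so fewer cross the "high-risk" threshold.
threshold = np.quantile(spending, 0.9)
flag_rate_white = np.mean(spending[group == 0] > threshold)
flag_rate_black = np.mean(spending[group == 1] > threshold)
print(flag_rate_white, flag_rate_black)
```

With these made-up numbers, nearly all of the flagged patients come from the group with higher spending, even though sickness is identical by construction.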
“Cost is a reasonable proxy for health, but it’s a biased one”
The effect was drastic. Currently, 17.7 percent of black patients receive the additional attention, the researchers found. If the disparity were remedied, that number would skyrocket to 46.5 percent.
“Cost is a reasonable proxy for health, but it’s a biased one, and that choice is actually what introduces bias into the algorithm,” Obermeyer says.
Historical racial inequalities are reflected in how much a society spends on black and white patients. Patients may have to take time off work for treatment, for example. Since black patients disproportionately live in poverty, it may be harder for them, on average, to call out for the day and take a cut in pay. “There are just a million ways in which poverty makes it difficult to access health care,” Obermeyer says. Other disparities, like bias in how doctors treat patients, may also contribute to the gap.
This is a classic example of algorithmic bias in action. Researchers have often pointed out that a biased data source produces biased results in automated systems. The good news, Obermeyer says, is that there are ways to curb the problem in the system.
“That bias is fixable, not with new data, not with a new, fancier kind of neural network, but actually just by changing the thing that the algorithm is supposed to predict,” he says. The researchers found that by focusing on only a subset of specific costs, like trips to the emergency room, they were able to lower the bias. An algorithm that directly predicts health outcomes, rather than costs, also improved the system.
“With that careful attention to how we train algorithms,” Obermeyer says, “we can get a lot of their benefits, but minimize the risk of bias.”