
A new system can measure the hidden bias in otherwise secret algorithms



A powerful tool for algorithmic transparency


Researchers at Carnegie Mellon University have developed a new system for detecting bias in otherwise opaque algorithms. In a paper presented today at the IEEE Symposium on Security and Privacy, the researchers laid out a new method for assessing the impact of an algorithm's various inputs, potentially providing a crucial tool for corporations or governments that want to prove a given algorithm isn't inadvertently discriminatory. "These measures provide a foundation for the design of transparency reports that accompany system decisions," the paper reads, "and for testing tools useful for internal and external oversight."

Called "Quantitative Input Influence," or QII, the system would test a given algorithm through a range of different inputs. Based on that data, the QII system could then effectively estimate which inputs or sets of inputs had the greatest causal effect on a given outcome. In the case of a credit score algorithm, the result might tell you that 80 percent of the variation in your credit score was the result of a specific outstanding bill, providing crucial insight into an otherwise opaque process. The same tools could also be used to test whether an algorithm is biased against a specific class of participants.
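The core idea can be sketched in a few lines of code. This is an illustrative simplification, not the researchers' actual implementation: to estimate one input's influence, you repeatedly replace that input's value with one drawn from another individual in the dataset and measure how often the outcome flips. All function and field names here are hypothetical.

```python
import random

def unary_influence(model, dataset, feature, samples=1000):
    """Estimate how much `feature` influences `model`'s decisions.

    `model` maps a dict of features to a 0/1 outcome; `dataset` is a
    list of such dicts. We intervene on the feature by swapping in a
    value sampled from another individual, then count how often the
    decision changes. (A rough sketch of the QII intuition only.)
    """
    changed = 0
    for _ in range(samples):
        person = random.choice(dataset)
        baseline = model(person)
        # Break the link between this feature and the outcome by
        # substituting a randomly drawn value for it.
        intervened = dict(person)
        intervened[feature] = random.choice(dataset)[feature]
        if model(intervened) != baseline:
            changed += 1
    return changed / samples
```

In a toy credit-scoring model that keys entirely on an outstanding bill, this estimator would report high influence for the bill and near-zero influence for an irrelevant input like a zip code. The paper's full method extends this to sets of inputs and to correlated features, which a one-at-a-time swap like this cannot untangle on its own.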


The research comes after a number of high-profile accusations of algorithmic bias, most notably a Gizmodo report in which a former Facebook employee accused the company of suppressing conservative news in the Trending Topics box. In that case, the alleged bias was largely a result of intervention by human editors. Facebook responded by shifting Trending Topics to a more automated system.

Algorithmic bias is also a serious issue in the court system. Earlier this week, ProPublica unearthed evidence of racial bias in a "risk assessment" algorithm, which is commonly used by judges in the sentencing process to estimate the risk of a convicted defendant committing similar crimes in the near future. Researchers compared estimates made by the algorithm to the observed rate of repeat offending during the same period. The results showed that the program tended to overestimate the risk of criminal activity for black defendants, even though race itself was not an input.
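The kind of check involved is simple to state: among people who did not go on to reoffend, compare the rate at which each group was flagged high-risk. A minimal sketch, with illustrative field names rather than any real dataset's schema:

```python
def false_positive_rate(records, group):
    """Fraction of non-reoffenders in `group` who were nonetheless
    flagged as high-risk. Each record is a dict with 'group',
    'predicted_high_risk' (0/1), and 'reoffended' (0/1) keys
    (hypothetical field names for illustration)."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(r["predicted_high_risk"] for r in non_reoffenders)
    return flagged / len(non_reoffenders)
```

A large gap in this rate between two groups is the sort of disparity the ProPublica analysis surfaced, and it can appear even when group membership is never fed to the algorithm, because other inputs can act as proxies for it.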

For years, researchers and nonprofits have called for greater transparency in algorithms, though they have lacked both the power and the technology to enforce those ideals. In 2014, Princeton professor and FTC chief technologist Ed Felten argued algorithms could be even more accountable than human decision-making if corporate interests did not obscure the various inputs at work. "When people complain that algorithms aren't transparent, the real problem is usually that someone is keeping the algorithm or its input data secret," Felten wrote. "Non-transparency is a choice they are making."