
FTC warns it could crack down on biased AI

‘If you don’t hold yourself accountable, the FTC may do it for you’

Photo: Felix Lipov / Shutterstock

The US Federal Trade Commission has warned companies that using biased artificial intelligence may break consumer protection laws. A new blog post notes that AI tools can reflect “troubling” racial and gender biases. If those tools are applied in areas like housing or employment, falsely advertised as unbiased, or trained on data gathered deceptively, the agency says it could intervene.

“In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver,” writes FTC attorney Elisa Jillson — particularly when promising decisions that don’t reflect racial or gender bias. “The result may be deception, discrimination — and an FTC law enforcement action.”

As Protocol points out, FTC chair Rebecca Slaughter recently called algorithm-based bias “an economic justice issue.” Slaughter and Jillson both note that companies making biased and unfair AI-powered decisions could be prosecuted under the Equal Credit Opportunity Act or the Fair Credit Reporting Act, and that unfair and deceptive practices could also fall under Section 5 of the FTC Act.

“It’s important to hold yourself accountable for your algorithm’s performance. Our recommendations for transparency and independence can help you do just that. But keep in mind that if you don’t hold yourself accountable, the FTC may do it for you,” writes Jillson.

Artificial intelligence holds the potential to mitigate human bias in processes like hiring, but it can also reproduce or exaggerate that bias, particularly if it’s trained on data that reflects it. Facial recognition, for instance, produces less accurate results for Black subjects — potentially encouraging false identifications and arrests when police use it. In 2019, researchers found that a popular health care algorithm made Black patients less likely to receive important medical care, reflecting preexisting disparities in the system. Automated gender recognition tech can use simplistic methods that misclassify transgender or nonbinary people. And automated processes — which are frequently proprietary and secret — can create “black boxes” where it’s difficult to understand or challenge faulty results.

The European Union recently indicated that it may take a stronger stance on some AI applications, potentially banning its use for “indiscriminate surveillance” and social credit scores. With these latest statements, the FTC has signaled that it’s interested in cracking down on specific, harmful uses.

But the agency is still in the early days of doing so, and critics have questioned whether it can meaningfully enforce its rules against major tech companies. In a Senate hearing statement today, FTC Commissioner Rohit Chopra complained that “time and time again, when large firms flagrantly violate the law, the FTC is unwilling to pursue meaningful accountability measures,” urging Congress and his fellow commissioners to “turn the page on the FTC’s perceived powerlessness.” In the world of AI, that could mean scrutinizing companies like Facebook, Amazon, Microsoft, and Google — all of which have invested significant resources in powerful systems.