Facebook is apologizing for an incident in which its AI mislabeled a video of Black men with a “primates” label, calling it an “unacceptable error” that it was investigating to prevent from happening again. As reported by the New York Times, users who watched a June 27th video posted by the UK tabloid Daily Mail received an auto-prompt asking whether they wanted to “keep seeing videos about Primates.”
Facebook disabled the entire topic recommendation feature as soon as it realized what was happening, a spokesperson said in an email to The Verge on Saturday.
“This was clearly an unacceptable error,” the spokesperson said. The company is investigating the cause to prevent the behavior from happening again, the spokesperson added. “As we have said, while we have made improvements to our AI we know it’s not perfect and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations.”
The incident is just the latest example of artificial intelligence tools showing gender or racial bias, with facial recognition tools in particular shown to misidentify people of color at higher rates. In 2015, Google apologized after its Photos app tagged photos of Black people as “gorillas.” Last year, Facebook said it was studying whether its AI-trained algorithms, including those used by Instagram, which Facebook owns, were racially biased.
In April, the US Federal Trade Commission warned that AI tools that have demonstrated “troubling” racial and gender biases may be in violation of consumer protection laws if they’re used in decision-making for credit, housing, or employment. “Hold yourself accountable — or be ready for the FTC to do it for you,” FTC privacy attorney Elisa Jillson wrote in a post on the agency’s website.