
AI is an excuse for Facebook to keep messing up



Artificial intelligence is not a solution for shortsightedness and lack of transparency


Over roughly 10 hours of hearings spread across two days, Mark Zuckerberg dodged question after question by citing the power of artificial intelligence.

Moderating hate speech? AI will fix it. Terrorist content and recruitment? AI again. Fake accounts? AI. Russian misinformation? AI. Racially discriminatory ads? AI. Security? AI.

It’s not even entirely clear what Zuckerberg means by “AI” here. He repeatedly brought up how Facebook’s detection systems automatically take down 99 percent of “terrorist content” before anyone flags it. In 2017, Facebook announced that it was “experimenting” with AI to detect language that “might be advocating for terrorism,” presumably a deep learning technique. It’s not clear that deep learning is actually part of Facebook’s automated system. (We emailed Facebook for clarification and have not yet heard back.) But we do know that AI is still in its infancy when it comes to understanding language. As The Verge’s James Vincent concludes from his reporting, AI is not up to snuff on the nuances of human language, and that’s before you get to the edge cases where even humans disagree. In fact, AI might never be capable of dealing with certain categories of content, like fake news.


Beyond that, the kinds of content Zuckerberg focused on in the hearings were images and videos. From what we know about Facebook’s automated system, at its core it’s a search mechanism across a shared database of hashes. If a video of a beheading goes up that has previously been identified as terrorist content in the database, whether by Facebook or one of its partners, it’ll be automatically recognized and taken down. “It’s hard to differentiate between that and the earliest days of the Google search engine, from the technological perspective,” says Ryan Calo, law professor and a director of the Tech Policy Lab at the University of Washington. “If that was AI, then this is AI.”
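To make that concrete, here’s a minimal Python sketch of what a hash-lookup takedown system looks like. It’s illustrative only: the function names and database contents are invented, and real matching systems use perceptual hashes (in the spirit of Microsoft’s PhotoDNA) that survive cropping and re-encoding, rather than the exact cryptographic hash used here.

```python
import hashlib

# Hypothetical shared database of fingerprints for content already
# identified as terrorist material by Facebook or its partners. In
# production this would hold perceptual hashes shared across
# companies; a plain set of SHA-256 digests stands in here.
KNOWN_BAD_HASHES = {
    # SHA-256 of the bytes b"test", standing in for a flagged video
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(upload: bytes) -> str:
    """Fingerprint an upload. SHA-256 only matches exact copies, but
    it shows the lookup structure."""
    return hashlib.sha256(upload).hexdigest()

def should_auto_remove(upload: bytes) -> bool:
    # The whole "AI": hash the upload, then check set membership.
    return fingerprint(upload) in KNOWN_BAD_HASHES

print(should_auto_remove(b"test"))         # True: exact re-upload is caught
print(should_auto_remove(b"new footage"))  # False: novel content sails through
```

The asymmetry is the point: matching previously catalogued content is a lookup problem, not an understanding problem. A system like this can remove 99 percent of known material and still say nothing about catching content no one has seen before.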

That’s what’s so nice about AI as an excuse: artificial intelligence is a broad umbrella term that can cover automation of all varieties, machine learning, or, more narrowly, deep learning. It’s not necessarily wrong to call Facebook’s automatic takedown system AI. But say “artificial intelligence” in front of a body of lawmakers, and they’ll start imagining AlphaGo or, more fancifully, Skynet and C-3PO taking down the terrorist beheading videos before anyone sees them. None of them are imagining Google search.

The invocation of AI is a dodge deployed on a group of laypeople who, for the most part, regrettably swallowed it whole. The one exception might have been Sen. Gary Peters (D-MI), who followed up with a question about AI transparency: “But you also know that artificial intelligence is not without its risk, and that you have to be very transparent about how those algorithms are constructed.” Zuckerberg’s response was to acknowledge that it was a “really important” question and that Facebook had a whole AI ethics team working on the issue.

“I don’t think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems that people don’t understand how they’re making decisions,” said Zuckerberg.

Zuckerberg said again and again in the hearings that he was confident Facebook would have, in five to 10 years, AI systems sophisticated enough to handle even linguistic nuance. Give us five to 10 years, and we’ll have this all figured out.


But the point isn’t just that Facebook has failed to scale its content moderation. It’s failed to anticipate entire categories of bad behavior: intentional misinformation campaigns conducted by nation-states, the spread of false reports (whether by nation-states or mere profiteers), and data leaks like the Cambridge Analytica scandal. It’s failed to be transparent about its moderation decisions even when those decisions are driven by human intelligence. It’s failed to reckon with its growing importance in the media ecosystem, it’s failed to safeguard users’ privacy, it’s failed to anticipate its role in the Myanmar genocide, and it may even have failed to safeguard American democracy.

Artificial intelligence cannot solve the problem of not knowing what the hell you’re doing and not really caring one way or the other. It’s not a solution for shortsightedness and lack of transparency. It’s an excuse that deflects from the question itself: whether and how to regulate Facebook.

In fact, advances in AI suggest that the law should change to keep up with the technology, not that a hands-off approach is warranted.

Artificial intelligence is just a new tool, one that can be used for good or bad purposes and one that comes with new dangers and downsides of its own. We already know that although machine learning has huge potential, data sets with ingrained biases will produce biased results: garbage in, garbage out. Software used to predict recidivism in defendants produces racist outcomes, and more sophisticated AI techniques will merely make those decisions more opaque. That kind of opacity is a huge problem when machine learning is deployed with the purest of intentions. It’s an even bigger problem when machine learning is deployed to better target consumers with advertisements, a practice that, even without machine learning, allowed Target to figure out a teenage girl was pregnant before her parents knew.
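To see “garbage in, garbage out” in action, consider a small simulation. Everything in it is hypothetical: two groups break the rules at exactly the same underlying rate, but one is policed twice as heavily, so its offenses end up in the training labels twice as often. Any model fit to those labels, from a frequency table to a deep network, will score that group as roughly twice the risk.

```python
import random

random.seed(0)

# Hypothetical setup: groups "A" and "B" reoffend at the same true
# rate, but group "A" is policed twice as heavily, so its reoffenses
# are recorded as training labels twice as often. The labels are
# biased; the underlying behavior is not.
TRUE_REOFFENSE_RATE = 0.3
DETECTION_RATE = {"A": 0.8, "B": 0.4}

def simulate(n=100_000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        reoffended = random.random() < TRUE_REOFFENSE_RATE
        # A reoffense only becomes a positive label if it's detected.
        label = int(reoffended and random.random() < DETECTION_RATE[group])
        data.append((group, label))
    return data

def fit_group_rates(data):
    # The simplest possible "risk model": predicted risk is the
    # observed label frequency per group. Fancier models recover the
    # same statistic, just less legibly.
    totals = {"A": 0, "B": 0}
    positives = {"A": 0, "B": 0}
    for group, label in data:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

print(fit_group_rates(simulate()))
# Roughly {'A': 0.24, 'B': 0.12}: same behavior, double the "risk score."
```

A more sophisticated model trained on the same labels learns the same skewed statistic; it just becomes harder to see where the number came from, which is exactly the opacity problem described above.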


“Simultaneously the claim is that AI changes everything — it changes the way we do everything, it’s a game-changer — but nothing should change,” says Ryan Calo. “One of these things can’t be right. Either it’s all hype and we shouldn’t overreact to it, or it represents a legitimate sea change. It’s really specious to argue that the very reason we should get out of AI’s way is that it’s so transformative.”

If it weren’t already clear that “wait and see what technological wonders we come up with” is just a stall, Facebook’s approach to privacy makes it obvious that the company is more than willing to stall forever. At one point in Wednesday’s hearing before the House Committee on Energy and Commerce, Zuckerberg said in response to a question about privacy, “I think we’ll figure out what the social norms are and the rules that we want to put in place. Then, five years from now, we’ll come back, and we’ll have learned more things. And either that’ll just be that social norms have evolved and the company’s practices have evolved, or we’ll put rules in place.”

Five years? We’ll wait five years to figure out user privacy? It’s been 14 years since Facebook’s founding. There are people of voting age who don’t remember a time before Facebook. Facebook was first criticized for its privacy failures in 2006, when it launched News Feed without telling users what it would look like or how their privacy settings would affect what their friends saw. In 2007, it launched Beacon, which injected information about users’ purchases into the News Feed, a decision that resulted in a class action lawsuit that settled for $9.5 million. The FTC put Facebook under a consent decree in 2011 over its privacy failures, a decree the company may now be violating because of the Cambridge Analytica scandal.


By citing the AI excuse, Mark Zuckerberg is simply getting ready to stumble from one ethical swamp right into another. He didn’t know what he was doing when he created Facebook, and to be fair, no one did. When Facebook launched, it dove headfirst into a brave new world. No one knew that the cost of connecting people all over the world for ad revenue was eventually going to be Cambridge Analytica.

But the clues were there all along: privacy advocates repeatedly warned against the aggressive and indiscriminate collection of data, others opined on the creepiness of ad targeting, and experts raised concerns about the effects of social networks on elections.

Give Facebook five to 10 years to fix its problems, and in five to 10 years, Mark Zuckerberg will be testifying before Congress yet again on the unintended consequences of its use of artificial intelligence.