Facebook’s racist ad problems were baked in from the start

How Silicon Valley’s most lucrative product lends itself to unforeseen failures

Mark Zuckerberg. Photo by Nick Statt / The Verge

Facebook and Google, two of the world’s biggest and most influential companies, pride themselves on their ad businesses. These operations generate tens of billions of dollars per year, thanks in part to letting advertisers target even the most obscure microcommunities using unprecedented sets of data. As the revelations of last week evidenced, however, that ability is a double-edged sword, one that has come back to haunt these ad-supported giants.

ProPublica discovered last Thursday that Facebook’s ad tools could target racists and anti-Semites using the very information those users self-report. That initial report kicked off a series of experiments conducted by news organizations that found that Google’s search engine would not only let you place ads next to search results for hateful rhetoric, but its automated processes would even suggest similar, equally hateful search terms to sell ads against. Twitter was also caught up in the controversy, when its filtering mechanisms failed to prevent ads from targeting “Nazi” and the n-word, an issue the company inexplicably attributed to “a bug we have now fixed.” This week, Instagram converted a journalist’s post about a violent threat she received into an ad that it then served to the journalist’s contacts.

In the most thorough response to the ongoing debacle, Facebook COO Sheryl Sandberg said Tuesday that the issue was the result of a failure on the company’s part. “We never intended or anticipated this functionality being used this way — and that is on us,” Sandberg wrote. “And we did not find it ourselves — and that is also on us.” Sandberg said that, as someone who is Jewish, the ability to target ads based on an affinity for Hitler made her “disgusted and disappointed.” In an attempt to rectify that oversight, Facebook is now increasing human moderation for its automated processes; improving enforcement of its ad guidelines to prevent targeting that uses attacks on race, ethnicity, gender, and religious affiliation; and creating a more robust user reporting mechanism to cut down on abuses.

But the outrage and indignation that an executive like Sandberg displays, though likely genuine, also feels superficial. The success of her company was built on its propensity and eagerness to perform exactly the type of function that was revealed last week. In other words, the embarrassment isn’t a sign that the platform’s ad system is broken, but the exact opposite. It’s evidence that it’s working as it was designed. Facebook, Google, and others have developed automated systems that blindly vacuum up, and then monetize, such a wealth of data that the events of last week were almost inevitable.

“These kinds of controversies will keep happening because the scale and expectations around how many employees are needed to oversee the content or ad programs is teeny compared to the number of ads being served,” says Kendra Albert, a lawyer and fellow at Harvard Law School’s Cyberlaw Clinic. “I think it’s true that often these companies could not have reached the scale that they reached without automating things that traditionally had a human in the loop.”

It’s not only that these ad systems are governed by algorithms, the software that is increasingly guided by artificial intelligence tools that automate systems in ways even their creators do not fully comprehend. It’s also that, because of their breadth and poor oversight, Facebook and Google have become unvarnished reflections of how humans behave on the internet. Containing and serving that entire spectrum of our interests, no matter how vile, is a feature, not a bug. These companies’ products are measured by their user growth because that is their utility to advertisers. That racism and bigotry would have a place within Silicon Valley’s set of ad-targeting options feels indicative of the industry’s growth-at-all-costs mindset.

“Let’s be clear: what Facebook is doing now won’t have any effect on your ability to target anti-Semites on Facebook. You just won’t be able to type in ‘anti-Semite’ and do it that way,” says Eli Pariser, author of The Filter Bubble and founder of the viral media site Upworthy. “The inference-based targeting, which is how most targeting works, makes it almost impossible to stop those groups from doing so.” You can still, of course, target visitors of certain Facebook pages, as well as any number of subtle signifiers that speak volumes about a user’s politics, tastes, and attitudes.

Pariser thinks this controversy is a useful public education moment, because it offers a vivid demonstration of how Facebook’s system operates. But, as part of the larger conversation around the social network’s role in society and its responsibilities to regulate user behavior, we’re still largely in uncharted territory. “I don’t envy them,” Pariser says of Facebook’s role.

Neither does Albert, who says the company is stuck between a rock and a hard place. “Unless companies are thinking really proactively about how their platforms are going to be abused, you’re going to keep seeing instances where organizations will find ways that these targeting mechanisms will be used in ways that the company didn’t intend or that has really negative results,” Albert says.

Sandberg said as much in her response, noting how Facebook “never intended or anticipated” ads that targeted “Hitler did nothing wrong,” but still effectively gave anyone the tools to do so. But it seems that platform companies like Facebook, Google, and Twitter keep finding themselves in these positions — be it for hosting ISIS propaganda or accidentally demonetizing inoffensive YouTube videos or censoring historic war photography — because it’s easier to build and deploy a piece of technology before, not after, thinking through all its implications.

Albert says that when new technology arrives on the scene, society is often forced to rethink previously unregulated behavior. This change often occurs after the fact, when we discover something is amiss. “The speed at which this tech is rolled out to the public can make it hard for society to keep up,” Albert adds. “When you’re trying to build as big as possible or as fast as possible, it’s easy for folks who are more skeptical or concerned have issues they’re raising left by the wayside, not out of maliciousness but because, ‘Oh, we have to meet this ship date.’”

Revelations such as those last week are bound to come up again, and there are likely few, if any, concrete solutions available to weed them out in a way that makes everyone happy. But the onus is on tech companies like Facebook and Google to improve. Both companies grew at an astronomical pace through the novel combination of unprecedented reach and data collection, cemented through market dominance, with the low overhead of a largely automated system. The failure to anticipate these edge cases is a symptom of their insatiable quest for growth mixed with a lack of meaningful human oversight.

Reckoning with its ad sales model is just one of the hard facts that Facebook is waking up to. (Google, its search engine less of a lightning rod for controversy, is remaining more tight-lipped.) The broader issue, one this ad controversy illustrates, is Facebook’s inability to grapple with the power and influence it’s amassed, and how vulnerable that influence is to bad actors eager to exploit it.

CEO Mark Zuckerberg’s outlook has shifted from last November, when he denied that Facebook had any influence on the US election. Now, with evidence that a pro-Russian propaganda group bought thousands of dollars of political Facebook ads, and a growing realization regarding Facebook’s unprecedented role in society, Zuckerberg can no longer ignore the situation. In a detailed Facebook Live video on Thursday, the Facebook chief said the company will improve transparency around political advertising and plans to put more resources toward protecting election integrity. While distinct from the issue of hateful ad targeting, it’s another acknowledgment that Facebook has failed its users by designing a platform that fosters, instead of prevents, such manipulation.

“You wake up one morning and you’re mayor of a city, and maybe you never wanted to be a mayor and people are asking, ‘Why does the water run here and not there,’ and ‘What are we going to do about trash pickup,’” Pariser says. “I don’t know that Facebook set out to have that role, but by virtue of being the place where the city was built, it’s now got some responsibility to sort those things out.”