
What Facebook should do about its Kenosha problem


After moderators failed to remove a post that incited violence, the company should release a public report


Photo by Amelia Holowaty Krales / The Verge

Today let’s talk about the controversy around a militia organizing on Facebook, the violence that followed, and where that leaves the company heading into this week’s planned visit by the president to Kenosha, WI, threatening to stoke more unrest.

Kenosha police shot Jacob Blake seven times in the back last week, leading to protests in the city. Two people were killed and a third was injured in a shooting during one of the protests, and a 17-year-old has been charged in connection with the shootings.

The afternoon before the shootings, a 3,000-member Facebook group calling itself the Kenosha Guard had advertised an event on Facebook encouraging an armed response to the unrest. It was taken down after the shooting. My colleague Russell Brandom broke the news at The Verge:

In a post Tuesday afternoon, the Kenosha Guard Facebook group encouraged an armed response to the ongoing unrest. “Any patriots willing to take up arms and defend our city tonight from the evil thugs?” the post reads. “No doubt they are currently planning on the next part of the city to burn tonight.”

Facebook said it had not found any digital link between the accused shooter and the Kenosha Guard. Which is to say: his Facebook account did not follow the Kenosha Guard page, and he had not been “invited” to the event. Did the shooter see the post, though? No one at Facebook could tell me today when I asked.

At the same time, Brandom reported that users had flagged the group multiple times for violating the company’s policies against militias — but it was nonetheless found to be “non-violating,” in content moderator parlance. Why? That’s still under investigation inside Facebook, a source familiar with the subject told me.

The basic, implicit bargain we have struck with social networks in the United States sounds something like this. Platforms agree to remove hate speech, incitements to violence, and other terrible posts, and as long as they do so in a timely fashion they can continue to operate. This bargain has many flaws — it’s more of a gentleman’s agreement than a law, and platforms break it in spirit and letter all the time. (This is one of the main reasons that both major-party candidates for president say they want to get rid of Section 230, the part of the law that enables the current bargain.) But it’s the status quo and has been for a long time.

The best way to understand the controversy around the Kenosha Guard page is that Facebook broke this implicit bargain. The reason is that Facebook users had done their part — and as Ryan Mac reported at BuzzFeed, they had arguably done more than their part (emphasis mine):

The event associated with the Kenosha Guard page, however, was flagged to Facebook at least 455 times after its creation, according to an internal report viewed by BuzzFeed News, and had been cleared by four moderators, all of whom deemed it “non-violating.” The page and event were eventually removed from the platform on Wednesday — several hours after the shooting.

“To put that number into perspective, it made up 66% of all event reports that day,” one Facebook worker wrote in the internal “Violence and Incitement Working Group” to illustrate the number of complaints the company had received about the event.

Ultimately, CEO Mark Zuckerberg publicly posted a portion of his weekly Q&A with employees and said the incident had been an “operational mistake.”

There are a few things to say about this.

The first is that, strange as it may seem, the Kenosha Guard’s page might not have been found to be in violation of Facebook’s policies at all had the company not changed them quite recently. On August 19th, Facebook banned “US-based militia groups” as part of an effort that made bigger headlines for removing a bunch of QAnon groups. That’s the policy under which the page was removed. It’s possible moderators could have elected to take it down for inciting violence, but it isn’t guaranteed.

One question coming out of the Kenosha incident is whether Facebook is attempting to remove these militia groups proactively or whether it’s relying on user reports instead. A source told me that for the most part, it’s going to be the former. Facebook has better insights into the growth and operations of pages like this on its network than average users do, I’m told. And user reports aren’t always a great signal — often people will mass-report benign posts for malicious reasons.

That may be one reason the Kenosha Guard page wasn’t caught sooner — Facebook is generally less sensitive to a spike in user reports than it is to a spike in views and growth. The Kenosha Guard page wasn’t getting a lot of either, at least not in Facebook terms, I’m told.

That doesn’t explain why the moderators who reviewed the page didn’t take action, though — which leads me to the second thing worth saying about the Kenosha incident.

When Facebook’s policies change — which they do frequently — it often takes time for those policies to be understood and effectively enforced by the company’s roughly 15,000 outsourced content moderators. One of the conclusions I came to after spending last year reporting on the lives of Facebook’s content moderators in America is that they often lack the context needed to enforce the policies accurately, and that supplemental resources from Facebook and its third-party vendors are often missing or contain errors themselves.

Moderators also generally give users wide latitude to discuss anything even faintly political, even when posts seem obviously violating on their face, a former Facebook moderator told me Sunday.

“We would get examples like ‘shoot immigrants,’ ‘shoot at immigrants,’ and variations of this,” the moderator said. “People would defend leaving stuff like that up because ‘you aren’t saying you’re going to physically hit them necessarily, they can just be talking about using guns to defend the border/property.’”

The moderator continued: “Essentially, in Facebook’s moderator population, they have tons of people who see no problem with things like ‘bring all your guns.’”

Officially, moderators are not supposed to have any leeway in how they enforce Facebook policies. But in practice, of course they do — there’s a lot of gray area in those policies; even well-written policies still require judgment calls; and only a fraction of moderators’ decisions are ever audited to ensure fidelity to the written policy.

Add to all that the fact that a majority of Facebook’s moderators are located in gun-friendly states like Texas, and you begin to understand why the Kenosha Guard page may not have come down immediately.

So what to do about all this?

Facebook is continuing to roll out its ban on militias, and it seems likely that a few months from now it will be more effective at rooting out violent social movements on the network than it is today. The big question, of course, is to what extent that can happen before the election and its immediate aftermath, when tensions will be at their highest. Several reports last week found that Facebook still has a lot of work to do on that front.

Another thing the company could consider is publishing a public report about the incident. The questions now under investigation — whether the alleged shooter saw the page in question, why moderators initially dismissed reports, and how Facebook will handle similar reports in the future — are all subjects of legitimate public interest. Facebook led the way in publishing quarterly “transparency reports” about its enforcement actions — the company could earn some much-needed goodwill by publishing occasional public reports about its high-profile missteps, too.

The Ratio

Today in news that could affect public perception of the big tech platforms.

⬆️ Trending up: Facebook is teaming up with academics across the country to determine whether the platform is influencing the 2020 election, although the results won’t be public until the election is over. Once users opt in to be part of the study, a research team will split them into groups and begin tinkering with their News Feeds and ad experiences. This is excellent news. (Issie Lapowsky / Protocol)

🔽 Trending down: Apple refused to waive its 30 percent fee on a Facebook tool that would let influencers and businesses host paid events as a way to offset revenue lost during the COVID-19 pandemic. Apple also rejected Facebook’s attempt to alert users that some of their money would go toward this fee. (Katie Paul and Stephen Nellis / Reuters)

🔽 Trending down: Google declined to remove ads containing “blatant disinformation” about mail-in voting. The ads, sponsored by the shadowy group Protect My Vote, falsely suggest there is a meaningful difference between mail-in voting and casting an absentee ballot. (Isaac Stanley-Becker / The Washington Post)

🔽 Trending down: Militia groups are continuing to show up on Facebook despite the company’s recent ban on those that call for violence on its platform. Many are openly advocating for violence against protesters. (Shirin Ghaffary / Recode)

Hotspots

San Quentin prison is now the largest COVID-19 outbreak in the country — a disaster that stemmed from a decision the California Department of Corrections and Rehabilitation made in late May to move men away from a prison in Chino, CA, that was having an outbreak of its own. At the time, San Quentin had no known cases of COVID-19. Within a month, more than a third of people there had the virus. By August, 24 inmates were dead.

America’s failure to stop the virus from spreading in prisons is a key piece of its failure to contain the virus at large. From March through the beginning of June, the number of COVID-19 cases in US prisons grew at a rate of around 8 percent per day, compared to 3 percent in the general population. Of the 20 largest disease clusters in the country, 19 are in prisons or jails.

At San Quentin, the outbreak spurred a slew of conspiracy theories among the inmates and staff. Speaking to The Verge on contraband cellphones, men said they believe the virus was unleashed on purpose to kill off the prison population.

“The governor said they weren’t going to execute people on death row anymore. So they sent the virus here to do what? To kill off people on death row,” one inmate told The Verge. “They cost more money than anyone else here. So people like me are getting swept up in the process.” — Zoe Schiffer and Nicole Wetsman

Governing

TikTok has reportedly chosen a bidder for its US, New Zealand and Australian businesses, and it could announce the deal as soon as Tuesday. (A lot of folks are skeptical about the timing being so fast, though.) Here are Steve Kovach and Alex Sherman at CNBC:

Microsoft, in partnership with Walmart, and Oracle are the two top contenders. The sale price is expected to be in the range of $20 billion to $30 billion, CNBC reported last week.

However, even though TikTok has selected a bidder, the deal could be slowed or derailed by the Chinese government, which updated its technology export list on Friday to include artificial intelligence technology used by TikTok. TikTok’s Chinese parent company, Bytedance, said over the weekend that it would need a license from the Chinese government before it can sell to a U.S. company.

Walmart emerged as a surprise contender last week, saying the social media app would augment its e-commerce efforts.

China announced new restrictions on artificial-intelligence technology exports that could complicate the sale of TikTok’s US operations. The new restrictions cover text analysis, content recommendation, speech modeling, and voice recognition. These technologies can’t be exported without a license from local commerce authorities. (Eva Xiao and Liza Lin / The Wall Street Journal)

Microsoft’s influence in Washington could give it a powerful advantage against other tech giants in its bid for TikTok. While the company was once a cautionary tale of an arrogant tech company caught off-guard by government scrutiny, it has built deep ties with lawmakers. (Karen Weise and David McCabe / The New York Times)

The rise of social commerce in China could help explain why Walmart is interested in buying TikTok. There, buying stuff on social media platforms is a massive driver of new business. (Sherisse Pham / CNN)

ByteDance told TikTok employees to draw up a contingency plan in case the app has to shut down in the US. Trump has ordered ByteDance to divest TikTok in the United States, which it is currently trying to do. (Echo Wang and Greg Roumeliotis / Reuters)

TikTok is thriving in Southeast Asia as it implements a strategy of quickly launching non-political products and promising governments that content will be highly policed in accordance with local laws. Finally some good news for this app! (Fanny Potkin / Reuters)

Los Angeles city attorney Mike Feuer charged TikTok creators Bryce Hall and Blake Gray over a series of parties allegedly thrown in violation of public health restrictions. “If you have a combined 19 million followers on TikTok, and in the middle of a public health crisis, you should be modeling great behavior and best practices rather than brazenly violating the law,” Feuer said. (Julia Alexander / The Verge)

Trump’s “silent majority” only seems silent because we’re not looking at conservative Facebook feeds, this piece argues. In the alternate universe of conservative Facebook, Trump’s response to COVID-19 has been effective, Joe Biden is barely capable of forming sentences, and Black Lives Matter is a dangerous group of violent looters. (None of these things are true! Just underlining that one more time.) (Kevin Roose / The New York Times)

The Trump campaign is urging people to request their ballots with a flood of Facebook ads, even as the president spreads misinformation about vote-by-mail fraud. The ads also double as a way to collect data from potential voters. (Issie Lapowsky / Protocol)

Facebook quietly removed the “multicultural affinity” categories on its ad platform, ending the ability of advertisers to target users by race. It was a huge reversal for Facebook, which had defended its racial ad categories for years. (Julia Angwin / The Markup)

Facebook has a responsibility to support free speech and democracy in Thailand, argues the person who set up the Facebook Group that was recently blocked in the country at the request of the Thai government. Thailand has a law that prohibits criticism of the royal family, which Facebook was forced to comply with, though the company is now suing the Thai government. (Pavin Chachavalpongpun / The Washington Post)

Facebook has been allowing advertisers to target users in mainland China, although the social network has been blocked there since 2009. Facebook said this is not a mistake, adding: “there are various technical ways a very small fraction of people in China may be able to access Facebook and see ads.” (Sarah Frier / Bloomberg)

The Facebook executive at the center of a political storm in India previously posted about her support for the Hindu nationalist party and disparaged its main rival in an employee-only Facebook group. Some staff say the posts conflict with the company’s pledge to remain neutral in elections around the world. (Jeff Horwitz and Newley Purnell / The Wall Street Journal)

Mark Zuckerberg said Apple has a “unique stranglehold” on what goes on the iPhone, adding that the App Store blocks innovation and competition and allows Apple to charge “monopoly rents.” The remarks came during a Facebook all-hands meeting last week. (Pranav Dixit and Ryan Mac / BuzzFeed)

Apple suspended Epic Games’ developer account on Friday. The account does not include the Unreal Engine used by third-party developers, which keeps the move in line with the temporary restraining order a judge issued earlier last week. (Todd Haselton / CNBC)

Apple’s new App Store appeals process is live. Now, developers can challenge Apple over whether their app is in fact violating one of its guidelines. Can’t wait to see whether anyone actually wins an appeal here! (Nick Statt / The Verge)

Twitter blocked three accounts associated with a spam operation that pushed a viral message claiming to be from a Black Lives Matter protester who was switching to vote Republican. The fake accounts’ posts received tens of thousands of shares over the past month. (Ben Collins / NBC)

Twitter placed a “manipulated media” label on a tweet from Rep. Steve Scalise (R-LA), which showed a video of activist Ady Barkan, who has ALS and speaks through voice assistance. The video was edited to change a question Barkan asked Joe Biden. (Kim Lyons / The Verge)

Twitter launched a search prompt to guide people to visit vote.gov for accurate information on how to register to vote. Accurate information on Twitter — we love to see it! (Twitter)

The White House is searching for a replacement for Federal Trade Commission Chair Joe Simons, a Republican who has publicly resisted Trump’s efforts to crack down on social media platforms. The FTC would play a crucial role in the president’s efforts to combat what he alleges is anti-conservative bias at companies like Twitter. (Leah Nylen, Betsy Woodruff Swan, John Hendel and Daniel Lippman / Politico)

Contact tracing is failing in the US in part because Americans don’t trust the government enough to give up their contacts or follow quarantine orders. About half of the people whom contact tracers call don’t even answer the phone. (Olga Khazan / The Atlantic)

As the novel coronavirus spread from China to the rest of the world, the Chinese government cracked down on how information related to the disease spread on WeChat. Between January and May this year, more than 2,000 keywords related to the pandemic were suppressed on the platform, which has more than 1 billion users in the country. (Louise Matsakis / Wired)

Repeated internet shutdowns in Belarus have prompted a spike in VPN usage and reliance on a private Telegram channel as people try to get around government censorship. (Aliide Naylor / Gizmodo)

Google and Facebook abandoned plans for an undersea cable between the US and Hong Kong after the Trump administration said Beijing might use the link to collect data on Americans. The companies submitted a revised proposal that includes links to Taiwan and the Philippines. (Todd Shields / Bloomberg)

Ed Markey stans are leveraging the mechanics of fandom to keep him in the Senate. Markey is currently facing a heated primary against Joseph P. Kennedy III, who’s been buoyed by his family legend and support from party power brokers like House Speaker Nancy Pelosi (D-CA). (Makena Kelly / The Verge)

Industry

Facebook is making aspects of its content recommendation system public for the first time. In Facebook’s Help Center and Instagram’s Help Center, the company details how the platforms’ algorithms filter out content, accounts, Pages, Groups and Events from their recommendations. Sarah Perez at TechCrunch explains:

The company says Facebook’s existing guidelines have been in place since 2016 under a strategy it references as “remove, reduce, and inform.” This strategy focuses on removing content that violates Facebook’s Community Standards, reducing the spread of problematic content that does not violate its standards, and informing people with additional information so they can choose what to click, read or share, Facebook explains.

The Recommendation Guidelines typically fall under Facebook’s efforts in the “reduce” area, and are designed to maintain a higher standard than Facebook’s Community Standards, because they push users to follow new accounts, groups, Pages and the like.

Facebook is testing out a new feature that would link your Facebook account to your news subscription. This would allow you to read a paywalled article on Facebook without having to log in again. It would also indicate to Facebook that you want to see more articles from that publisher. (Anthony Ha / TechCrunch)

The number of pages eligible to monetize their videos through Facebook’s in-stream ads program has leapt by more than 30 percent in the past month. The growth has made ad buyers nervous; they say the platform is becoming less safe for brands. (Max Willens / Digiday)

Instagram scams are evolving alongside the tech platforms, as fraudsters find new ways to get into our wallets. Ultimately, the scams could tell us more about ourselves than the scammers. (Zoe Schiffer / The Verge)

TikTok creators will soon be able to sell merchandise directly to fans in the app. Creator commerce platform Teespring is set to roll out the integration soon. (Julia Alexander / The Verge)

Vine co-founder Rus Yusupov has advice for TikTok on how to stay on top. It includes a focus on premium content and monetization, which the app already seems to be doing. (Rus Yusupov / CNN)

Zoom’s revenue has more than quadrupled from last year. Revenue grew 355 percent on an annualized basis in the second fiscal quarter. (Jordan Novet / CNBC)

Explicit deepfake videos featuring female celebrities, actresses and musicians are being uploaded to the world’s biggest porn sites every month, and racking up millions of views. Porn companies aren’t doing much to stop them. (Matt Burgess / Wired)


Talk to us

Send us tips, comments, questions, and whatever you were thinking about sending Ryan Mac: casey@theverge.com and zoe@theverge.com.