Facebook has doubled bullying and harassment takedowns since last year

It’s ramping up moderation after a COVID-19 dip last year

Someone using the Facebook app on a phone. Photo by Amelia Holowaty Krales / The Verge

On Thursday, Facebook released a new moderation transparency report showing a marked uptick in bullying and harassment enforcement, which peaked at 6.3 million total takedowns in the fourth quarter of 2020. That’s up from 3.5 million pieces the previous quarter and 2.8 million in the fourth quarter of 2019. The company said much of the change is due to improvements in the automated systems that analyze Facebook and Instagram comments.

Facebook’s latest transparency report covers October to December 2020, a period that includes the US presidential election. During that time, the main Facebook network removed more harassment, organized hate and hate speech, and suicide and self-harm content. Instagram saw significant jumps in bullying and self-harm removals. The company says its numbers were shaped by two factors: more human review capacity and improvements in artificial intelligence, especially for non-English posts.

The company also indicates it will lean on automation to address a growing amount of video and audio on its platforms, including a rumored Clubhouse competitor. “We’re investing in technology across all the different sorts of ways that people share,” said CTO Mike Schroepfer on a call with reporters. “We understand audio, video, we understand the content around those things, who shared it, and build a broader picture of what’s happening there.” Facebook hasn’t confirmed the existence of a Clubhouse-like audio platform, but “I think there’s a lot we’re doing here that can apply to these different formats, and we obviously look at how the products are changing and invest ahead of those changes to make sure we have the technological tools we need,” he said.

Facebook pushed some moderation teams back into offices in early October; although it said in November that most moderators worked remotely, it has also said that some sensitive content can’t be reviewed from home. Now, the company says increased review capacity has helped Facebook and Instagram remove more suicide and self-injury posts. Facebook removed 2.5 million pieces of violating content, compared to 1.3 million pieces the preceding quarter, and Instagram removed 3.4 million pieces, up from 1.3 million. That’s comparable to pre-pandemic levels for Facebook, and it’s a significant absolute increase for Instagram.

Facebook bullying and harassment takedowns between Q3 2018 and Q4 2020.

Meanwhile, Facebook attributes other increases to AI-powered moderation. It removed 6.3 million pieces of bullying and harassing content on Facebook, for instance, nearly double the previous quarter’s total. On Instagram, it removed 5 million pieces of content, up from 2.6 million pieces the previous quarter and 1.5 million pieces at the end of 2019. Those increases stem from technology that better analyzes comments in the context of the accompanying post.

Non-English language moderation has been a historic weak point for Facebook, and the company says it has improved AI language detection in Arabic, Spanish, and Portuguese, fueling a hate speech takedown increase from 22.1 million to 26.9 million pieces. That’s not as big as the jump Facebook saw in late 2019, however, when it made what it described as dramatic improvements to its automated detection.

Facebook hate speech takedowns between Q4 2017 and Q4 2020.

Facebook says it’s changed its News Feed in ways that reduce the amount of hate speech and violent content people see. A survey of hate speech prevalence in the third quarter found that users saw an average of 10 to 11 pieces of hate speech for every 10,000 pieces of content viewed; in the fourth quarter, that dropped to seven or eight pieces. The company said it was still formulating responses to some suggestions from the Facebook Oversight Board, which released its first decisions last month.
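For readers keeping score at home, the prevalence figures above are simple ratios: views of violating content as a share of all content views. A minimal sketch of that arithmetic (the function name is ours, not Facebook’s):

```python
def prevalence_percent(violating_views: int, total_views: int) -> float:
    """Share of content views that contained violating content, as a percentage."""
    return violating_views / total_views * 100

# Q3 2020: roughly 10 to 11 views of hate speech per 10,000 content views
q3 = round(prevalence_percent(10, 10_000), 4)  # 0.1 percent
# Q4 2020: roughly 7 to 8 per 10,000
q4 = round(prevalence_percent(7, 10_000), 4)   # 0.07 percent
```

In other words, the reported drop took hate speech from about 0.10–0.11 percent of content views to about 0.07–0.08 percent.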

As it did last quarter, Facebook suggested lawmakers could use its transparency report as the model for a legal framework. Facebook has supported changes to Section 230 of the Communications Decency Act, a broad liability shield that has come under fire from critics of social media. “We think that regulation would be a very good thing,” said Monika Bickert, VP of content policy.

However, Facebook has not backed a specific legislative proposal — including the SAFE TECH Act, a sweeping rollback proposed in Congress last week. “We remain committed to having this dialogue with everybody in the United States who is working on finding a way forward with regulation,” said Bickert. “We’ve obviously seen a number of proposals in this area, and we’ve seen different focuses from different people on the Hill in terms of what they want to pursue, and we want to make sure that we are part of all those conversations.”