A sustained push to reduce abuse on Twitter has shown promising results, the company said, citing internal data suggesting it is taking action against significantly more hostile accounts while reducing the amount of vitriol seen by users. The majority of disciplined accounts do not commit a second infraction, Twitter said, suggesting it has had some success in reshaping user behavior. But Twitter did not share the hard numbers behind its data, and users continue to post accounts of the company appearing to ignore physical threats.
In a meeting with reporters, Twitter executives laid out the following results of their anti-abuse efforts. The company:
- is taking action against 10 times more accounts this year than it did last year, amounting to “thousands more” every day.
- has discovered twice as many accounts created by users who were previously banned.
- saw a 25 percent decline in abuse reports linked to accounts that were disciplined by making their tweets visible only to their followers for a period of time.
- found that 65 percent of accounts that were disciplined have not offended a second time.
- saw a 40 percent reduction in blocks received from accounts that recently received mentions from an account that doesn’t follow them. (This suggests that abusive replies from non-followers were never surfaced to the user, thanks to improved muting controls, Twitter says.)
Twitter declined to release the raw data behind any of its observations, although it will consider doing so in the future, said Del Harvey, the company’s vice president of trust and safety. (Among other things, Twitter is concerned that releasing the data could subject it to new requests from governments and law enforcement agencies, a spokeswoman said.)
Twitter’s announcements follow a series of product changes designed to reduce abuse, after years of neglect. Over the past year or so, the company has begun collapsing abusive or low-quality tweets in replies, created more notification filters to block mentions from new and unverified accounts, and built a separate inbox for direct messages from accounts that a user does not follow. It also redesigned its reporting tools in an effort to make them easier to use.
One of Twitter’s most promising new anti-abuse tools appears to be its technology for temporarily limiting a user’s audience. This is sometimes called a “shadow ban,” but is meaningfully different: users who receive shadow bans in forums believe they are posting publicly, but their messages are visible only to themselves. Twitter’s disciplinary tool lets users continue to post, but their tweets are visible only to users that follow them.
“It’s critically important that people can come to Twitter and talk about what’s happening without worrying about feeling safe,” said Ed Ho, Twitter’s general manager for consumer product and engineering. Only in the past year has Twitter’s product team worked consistently with its trust and safety teams, executives said. “There wasn’t the partnership we really needed on the product and engineering side,” Harvey said.
Twitter’s announcements today will likely come as cold comfort to the users who have been terrorized on the platform and had their abuse reports fall on deaf ears. A BuzzFeed report on Wednesday found that Twitter had consistently ignored clear examples of credible threats unless celebrities reported them or journalists inquired about them. The company continues to receive criticism for tolerating anti-Semitic and Nazi speech on the platform, and has faced calls from some quarters to ban President Donald Trump, who critics argue has incited his followers to harassment and violence.
Twitter said it will continue investing in anti-abuse tools, sharing more data along the way. “It’s something we’re going to have to keep working on,” Harvey said.