Twitter says it did everything it could to fight YouTube shooting hoaxes

Photo by Michele Doying / The Verge

Twitter has defended its handling of misinformation and abuse in the aftermath of this week’s shooting at YouTube’s headquarters. In a blog post titled “Serving the Public Conversation During Breaking Events,” trust and safety VP Del Harvey laid out how Twitter tried to provide “credible and authentic” information about the attack, even as some users spread hoaxes about the shooter’s identity.

Harvey writes that Twitter doesn’t have a system for verifying information accuracy, reiterating Twitter’s stance that it isn’t an “arbiter of truth.” However, it watches for deliberate misinformation that violates rules against harassment, hate speech, spam, or violent threats. Harvey says that after the shooting, Twitter “suspended hundreds of accounts for harassing others or purposely manipulating conversations about the event” and implemented automated systems to stop suspended users from making new accounts.

Harvey also says Twitter tried to promote credible information by publishing Twitter Moments about the shooting as early as 10 minutes after tweets started coming in.

After the shooting, BuzzFeed argued that Twitter was losing its usefulness as a source of credible news, counting 25 different people whom hoaxers falsely claimed were the shooter — including a BuzzFeed journalist who was debunking the hoaxes. A hacker also briefly took over the account of a YouTube employee who had tweeted about the shooting, spreading fake information through it. CEO Jack Dorsey said after the shooting that Twitter was “tracking, learning, and taking action” against misinformation, and “working diligently on product solutions to help.”

The post doesn’t outline any concrete changes, but Harvey says Twitter is “continuing to explore and invest in” possible solutions. These include making it harder for people to evade suspensions, improving Twitter’s ability to identify automated accounts, and having team members respond more quickly to “ensure a human review element continues to be present” in evaluations.

Twitter is right that some of the problems seen this week — like users hacking accounts or posting pictures of specific people to incite harassment — can be handled by enforcing existing rules. And the improvements Harvey describes would help the platform overall, not just its usefulness during tragedies. At the same time, it’s tough to crack down on misinformation while explicitly refusing to ban it, and that is the balance Twitter is trying to strike here.