
Twitter launches new process for reporting COVID misinformation


The White House had pressured platforms to do more

Illustration by Alex Castro / The Verge

On Tuesday, Twitter announced that it will begin testing a new reporting feature for users to flag tweets containing possible misinformation.

Starting today, users will be able to report misinformation through the same process used for harassment and other harmful content, via the dropdown menu at the top right of every tweet. Users will be prompted to select whether the misleading tweet is political, health-related, or falls into another category. The politics category includes more specific forms of misinformation, like content related to elections. The health category will also include an option for users to flag COVID-19-specific misinformation.

The new feature will be available on Tuesday for most users in the US, Australia, and South Korea. Twitter said it expects to run the experiment for a few months before deciding whether to roll it out to additional markets.

Twitter said that not every report will be reviewed while the platform tests the feature. But the data gathered during the test will help the company determine how to expand the feature over the coming weeks. The test could also help identify tweets containing misinformation that have the potential to go viral.

Last month, the Biden administration took a stronger stance against misinformation as new variants of COVID-19 continued to spread. President Biden told reporters in July that social media platforms like Facebook were “killing people” with vaccine misinformation.

The statement followed a coordinated campaign from the White House pressuring platforms to more aggressively remove coronavirus misinformation. The US Surgeon General’s office published a report outlining new ways platforms could counter health misinformation. The report called for “clear consequences for accounts that repeatedly violate” a platform’s rules and for companies like Facebook and Twitter to redesign their algorithms to “avoid amplifying” false information.

Sen. Amy Klobuchar (D-MN) also introduced a bill earlier this year that would strip Facebook and other social media platforms of their Section 230 liability shield if they amplified harmful health misinformation.