Twitter locks account encouraging coronavirus ‘chickenpox parties’

It bans content that “goes directly against guidance” from experts

Medical staff check a phone for information at a coronavirus testing area on Long Island. Photo by J. Conrad Williams, Jr./Newsday RM via Getty Images

On Wednesday, Twitter briefly locked conservative site The Federalist’s account for suggesting people deliberately expose themselves to the novel coronavirus. The Federalist promoted the medically unsound idea of “chickenpox parties” to infect young, healthy people with the virus under controlled quarantine.

The tweet was removed for violating the social media platform’s policies, and a Twitter spokesperson tells The Verge that “the account was temporarily locked for violating the Twitter Rules regarding COVID-19.”

Twitter bans coronavirus-related content that “goes directly against guidance from authoritative sources of global and local public health information.” That includes tweets promoting ineffective or counterproductive treatments, denying the effectiveness of measures like social distancing, or contradicting known public health facts.

The Federalist was tweeting an article in which an Oregon physician urged readers to “seriously consider a somewhat unconventional approach” to the pandemic. But “unconventional” is a bit of a euphemism. The hospital system is overloaded even without deliberate infections, and unlike with chickenpox, we don’t know how long COVID-19 immunity lasts. In other words, hosting a coronavirus “chickenpox party” is a very bad idea.

The coronavirus pandemic has led to a global lockdown and thousands of deaths, as well as economic chaos. America has the third-highest number of confirmed cases, after China and Italy. Congress is attempting to mitigate the economic harm with a stimulus package.

President Donald Trump has chronically minimized the risk of coronavirus infection and made falsely rosy claims about new treatments and vaccines, recently alarming experts by suggesting social distancing restrictions end by Easter Sunday. Other Republicans have either downplayed the threat or argued that some Americans should accept a heightened risk of death to let the country leave lockdown. Social media platforms have to decide when these statements could have a negative effect on the larger pandemic response, sometimes drawing ire in the process.

Earlier this week, blogging platform Medium removed an article from technologist and former Mitt Romney campaign team member Aaron Ginn. Ginn claimed that the COVID-19 response was being driven by “hysteria” or a “mob-like fear.” A Medium spokesperson told The Verge that Ginn’s essay violated rules against “controversial, suspect, and extreme content,” which cover distorted or pseudoscientific arguments that could have serious social repercussions.

“Every day, we are removing coronavirus-related posts that violate our rules,” the spokesperson said.

Twitter also slapped a warning on the article when it was later reposted elsewhere, telling readers who clicked the link that it was “potentially harmful or associated with a violation of Twitter’s Terms of Service.”

Ginn’s Medium article didn’t fit the stereotype of social media misinformation posts, which often incorporate alarmist exaggerations, blatantly made-up facts, or miracle cure scams. But critics like University of Washington biology professor Carl Bergstrom cited logical leaps that painted a misleading — yet widely cited — portrait of the pandemic. The Wall Street Journal’s editorial board, however, slammed Medium’s decision and urged platforms not to “require conformity with the judgment of expert institutions, even as many of those institutions themselves woefully misjudged the situation months or weeks ago.”

Facebook also recently published guidance for COVID-19 hoaxes and misinformation, drawing a line around content that could “contribute to imminent physical harm.” That includes statements like saying that social distancing doesn’t work — something Facebook says it recently started taking down. It doesn’t include more abstract claims like “conspiracy theories about the origin of the virus,” which aren’t considered immediately harmful, but can be de-ranked and flagged with a warning label, like other false information on the platform.