This week, representatives from Google, Facebook, and Twitter are appearing before House and Senate subcommittees to answer for their role in Russian manipulation during the 2016 election, and so far, the questioning has been brutal. Facebook has taken the bulk of the heat, being publicly called out by members of Congress for missing a wave of Russian activity until months after the election.
But one of the most interesting parts of yesterday’s proceedings actually came after the big companies had left the room, and a national security researcher named Clint Watts took the floor. Watts is one of the most respected figures in the nascent field of social media manipulation — and when it came time to diagnose the root of Russia’s platform meddling, he put much of the blame on the decision to allow anonymous accounts. As long as Russian operatives can get on Twitter and Facebook without identifying themselves, Watts argued, foreign actors will be able to quietly influence our politics:
“With features like account anonymity, unlimited audience access, low cost technology tools, plausible deniability – social media provides Russia an unprecedented opportunity to execute their dark arts of manipulation and subversion…Today, anonymous sites rife with conspiracy theories, such as 4Chan and Reddit, offer unlimited options for placement of digital forgeries that drive Kremlin narratives. Graphics, memes & documents litter these discussion boards providing ammunition for Kremlin narratives and kompromat. Anonymous posts of the Kremlin’s design or those generated by the target audiences power smear campaigns and falsehoods that tarnish confidence in America and trust in democratic institutions.”
The point is clear enough: if you’re fighting Russian interference on social media, anonymity is a big problem. In some ways, it’s the original sin, creating space for that first lie that lets trolls enter the conversation unnoticed. “Account anonymity in public provides some benefits to society, but social media companies must work immediately to confirm real humans operate accounts,” Watts told the committee. “The negative effects of social bots far outweigh any benefits.” It’s a common insight among bot-hunters, and one that’s become particularly popular amid this week’s hearings.
Thomas Rid expressed a similar idea this morning, making the case that Twitter had been more useful for Russian active measures than Facebook. “[Twitter’s] openness,” Rid writes, “particularly the openness for deletion, anonymity, and automation, has made the platform easy to exploit.” The Digital Forensics Research Lab, a longtime hub for bot-watchers, opened the hearing with four questions for social media companies, including a tricky one: What is the limit of anonymity on social media?
In each case, the writers stop short of asking for an outright ban on anonymous accounts. But measures like Facebook’s real name policy have been cast as useful tools in the fight against Russian influence — and often tools in need of stricter enforcement. The online pseudonym was once a guiding light of internet culture, a crucial protection for whistleblowers and communities with a legitimate fear of being exposed. Now, it’s increasingly seen as a threat. Worse, it seems ever more likely that platforms will respond to concerns about Russia by tightening restrictions on online anonymity, driving webgoers to live more and more of their online lives under legal names.
Taking on the issue in the earlier panel, Facebook general counsel Colin Stretch cast the entire problem as one of identity. “It wasn’t so much the content,” Stretch told the committee. “The real problem with what we saw was its lack of authenticity.” It’s a useful line for Facebook, particularly as the criticism broadens from paid election ads (the target of the only recent bill to tackle the issue) to Russian posting in general. Reining in organic posts is much trickier than reining in ad spending, and it’s hard to imagine doing it without tighter identity controls.
Even though the idea has come to prominence on a wave of anger towards Facebook, Zuckerberg would probably suffer the least damage from a crackdown on anonymity. Facebook already has a real name policy, and they could easily tighten enforcement to include ID checks without altering their core product. There are plenty of services like NextDoor with stricter identity policies, and they don’t pose any significant technical problems. The problem is social. We’re used to anonymity on the internet, particularly on the services where it’s still available. It’s hard to know what an anonymity backlash would mean for services like Twitter, Reddit, and 4chan — all of which are named in Watts’ testimony as playing a role in Russian disinformation.
In the background, there’s an even harder question: is anonymity still worth saving? It’s foundational to many people’s idea of the internet, but amid widespread online harassment and Facebook itself, it’s come to mean less and less. Even without Russian influence campaigns, the web’s anonymous spaces are largely associated with the ugliest parts of humanity. (4chan is a prime example.) With new pressure from Congress, bot analysts, and the public, online anonymity may not have any defenders left. In the face of that, Twitter, Reddit, and others might decide a real name policy is a small price to pay for forestalling federal regulation.
Maybe all that sounds like a straw man. I hope it is. The most likely path forward is still that Congress does nothing — or, failing that, sticks to the FEC regulations laid out in Warner-Klobuchar — and Russian disinformation continues to be a hazard of digital life. It’s a hard problem, and at this point, it’s reasonable to not trust Congress or Facebook management to solve it. Still, it’s worth considering what we want platforms to do about fake posts. Do we want stronger identity checks before a person is allowed to post online? Do we want an algorithm sniffing out activity that looks like an influence campaign, with all the inevitable false positives such a system would bring? Do we want intelligence services to actively collaborate with Facebook in sniffing out those campaigns? All those ideas make me nervous, but they don’t seem as implausible as they would have a year ago, or even a week ago.