Election Day is here, and in the next few days or weeks, we’ll know who won — but for lots of people, tonight isn’t just about choosing the next president. It’s also a stress test for online platforms and a measure of how carefully they can handle information when the stakes are this high.
By now, we know what failure could look like. In one nightmare scenario, a candidate (likely Trump) could preemptively declare victory before the votes are counted. In another, a fast-spreading rumor could cause serious offline unrest — like a viral hoax or misleading video that encourages vigilante violence. Since election night might not end with a clear winner, sites could be dealing with these threats for days.
The biggest platforms have laid out a playbook for stopping false information, but no matter how well it works, some people — like Wall Street Journal reporter Joanna Stern — recommend logging off altogether. There are countless ways that the internet could make tonight’s election worse, and only a few ways to make it better.
As we judge how social media handled the 2020 presidential election, though, we need a standard for success as well as failure. What would a good election night look like online? As nebulous as that standard is, there are three key things we want to see.
Platforms should keep facts up front with big accounts
Big social media platforms are collectively moderating billions of accounts. But on election night and the days that follow, there are two key tasks: stopping high-profile users from breaking sites’ rules, and spreading accurate facts as fast as (or faster than) false claims.
Platforms have developed safeguards against false claims of victory. Twitter, Facebook, and Instagram are all using a banner to warn users that results are still being counted, while YouTube will offer a fact-check panel and streams from authoritative sources. Facebook and Google are temporarily banning political ads after the election to prevent misinformation. Yesterday, Facebook and Twitter labeled (and in Twitter’s case, restricted) a misleading Trump tweet decrying a Supreme Court decision on mail-in voting.
The banners and warnings are a little bit general — Twitter warns that experts “may not have called the race,” for example, rather than calling out specific errors. But they’re a start, if they’re added quickly and comprehensively.
Some recent research suggests misinformation is often driven by traditional media, politicians, and other “elite” actors. Trump, among other things, massively amplifies conspiracy theories by retweeting small accounts that espouse them. During election night, plenty of accounts will probably post false and potentially rule-breaking claims. But the mere fact that you can surface those claims with a search query isn’t in itself a failure. The key question is whether sites step in to fact-check (or delete) false stories coming from big accounts — no matter how powerful their owners are.
Users should look for local citizen journalism — but share it carefully
Social media can bypass traditional media in bad ways, like spreading false information or misleading, emotionally charged stories. But it can also provide fast, unfiltered, hyper-local news. Many public officials share rapid status updates on their Facebook or Twitter feeds, including corrections to misinformation. If there’s a problem at a specific polling location, social media can offer detailed firsthand reports and focus public attention on it. This is the positive promise of social media — and maybe we’ll see it on display in the coming days.
Of course, this requires people to be careful with what they’re sharing. Doing research and seeking context is more vital than ever. Was an official-looking tweet posted by a credible (and ideally verified) account? Is a newsworthy-seeming picture or video actually new, or is it older material being reposted? Does a post include replies and comments that offer conflicting information? This goes double for any story that perfectly confirms your preexisting assumptions.
For help specifically navigating tonight’s minefield, disinformation expert Jane Lytvynenko has a running list of false and misleading election posts. The Election Integrity Project is also keeping an Election Day live blog and Twitter feed.
Social media should look beyond Facebook and Twitter
There’s certainly a strong argument for looking away from some social media platforms on election night — especially Facebook and Twitter, where news feeds are less social experiences than information firehoses. But those feeds aren’t the only way to engage with other people online.
Sometimes you want to share the nail-biting experience of seeing polls close and results roll in. The pandemic has nixed this year’s election night watch parties, but digital spaces are there to fill the gap. As The Washington Post outlines, people are using Twitch and Zoom to gather with friends. Here at The Verge, many of us will be commiserating with each other on Slack. You may be doing the same on a group chat or Discord server.
If you’re looking for an information firehose, Reddit’s r/news moderators have laid out a plan for stopping false stories on the forum, including answers to some of the election’s most contentious questions. Reddit has obviously faced its own misinformation problems in the past. But r/news is small enough to be managed by a team of humans who can make nuanced calls, rather than moderating hundreds of millions of people with a complicated rule set.
Smaller spaces pose their own challenges. It’s easy to spread misinformation in small groups among friends, and there won’t be public moderators to debunk it. Potentially violent groups can organize on platforms like the encrypted messaging service Telegram. But they’re just as much a part of “social media” as larger services. And tonight will offer a test of their strengths and weaknesses — along with those of America’s biggest web platforms.