How WhatsApp is undermining Facebook’s war on election interference

Encryption and viral sharing mechanics are a dangerous combination

About a month ago, the New York Times revealed the existence of a conference room inside of Facebook devoted to fighting election interference. But this was no ordinary conference room ... this was a war room. Then under construction, the war room promised to provide a hub for all of Facebook’s efforts to fight election interference around the world. At the time, I mentioned that I would be amenable to visiting the war room, should an opportunity ever present itself.

Reader, it did.

On Wednesday morning, I joined a couple dozen other reporters at Facebook headquarters in Menlo Park, and after an introductory briefing about the purpose of the war room, got to poke my head inside:

On one hand, the war room is just one of many conference rooms in MPK 20, the company’s Menlo Park, CA headquarters. But it’s larger than average, and it has been stuffed with people and electronic equipment. There are desks for 24 people, and the room is ringed with 17 screens, each of which highlights a different stream of information Facebook is monitoring.

Employees look for suspicious spikes in spam and hate speech, in some cases using custom software built for the purpose. They look for efforts at voter suppression, such as falsely telling people that lines are long or that the election has been delayed. (The team recently uncovered one such hoax claiming that the Brazilian election date had been delayed a day due to protests, and swiftly removed the offending posts.)

I hoped that some sort of election-related drama might present itself during my actual visit to the war room — a hot piece of fake news blowing up on everyone’s screens simultaneously, say — but none did. For a war room, it was peaceful. Everyone who was not staring at a screen spoke in hushed tones, though it’s possible that they just didn’t want me to overhear them talking about the war.

In any case, we all wrote up our stories, and some outlets that weren’t invited to the war room hee-hawed at us, presenting our coverage of the conference room as a massive win for Facebook’s public-relations department. Personally, I thought it was worthwhile to see the room in person, ask a handful of questions, and tell readers what Facebook is doing there. (In short: bringing team leaders together in close proximity to increase the speed of decision-making during critical times.)

As I noted in my story, the war room was covered in American and Brazilian flags, to reflect the two most imminent global elections. But if things have been relatively quiet on the election-interference front in America, in Brazil the situation is quite serious.

The problem, of course, is WhatsApp. As we were admiring the flags, Brazilian newspaper Folha published an investigation showing that media companies are buying large batches of phone numbers and blasting them with anti-leftist propaganda on the encrypted messaging app. While it’s often discussed as a chat app, WhatsApp has message-forwarding mechanics that strip away the identity of the sender and allow messages to spread virally with little accountability.

Here’s BuzzFeed’s Ryan Broderick on the scheme:

Media firms that supported far-right frontrunner Jair Bolsonaro used Bolsonaro’s supporter database, as well as third-party databases of phone numbers. Some of these agencies were even offering a breakdown of location and income level. The firms then used a service called “mass shooting” to transmit thousands of messages.

Folha alleges that some of these firms purchased contracts for up to 12 million reais ($3.2 million USD). Not only is this an abuse of WhatsApp, it is illegal to do this in Brazil. Companies are forbidden from donating to political campaigns, and they are not allowed to procure a candidate’s supporter database.

While it’s impossible to know — seemingly even for WhatsApp’s moderators — what’s going on inside a private conversation or group, it is possible to monitor public groups. A WhatsApp monitor built by local fact-checking group Eleições Sem Fake shows that the platform is just as full of misinformation as Facebook.

What makes the scheme insidious is that it’s not clear that any of the many screens in Facebook’s war room are capable of capturing the activity Folha described. Misinformation is spreading virally on a platform that almost no one, Facebook included, can see inside.

There are good ideas floating around for how Facebook could make life harder on WhatsApp propaganda artists. In an op-ed published in the Times this week, Brazilian researchers Cristina Tardáguila, Fabrício Benevenuto and Pablo Ortellado offered three ideas: reduce the number of chats a message can be forwarded to from 20 to five, which Facebook has already done in India; dramatically lower the number of people a user can send a single message to, from the current limit of 256; and limit the size of new groups created in the weeks leading up to an election, in the hope that it will stop new viral misinformation mobs from forming.

I’ve come around to the idea that an app should be able to have end-to-end encryption, or viral sharing mechanics, but not both. If mobs are going to organize in democratic elections, it generally ought to be in plain sight, where we can see who’s holding the megaphone. I won’t make fun of Facebook’s war room, however theatrical its presentation, because I think there’s value in disparate teams sitting shoulder to shoulder and sharing their knowledge. But I suspect that those teams will conclude that their colleagues at WhatsApp, through their willful inaction, are undermining their efforts.

Democracy

Facebook ‘Delighted’ With War Room Response to Brazil Election

Sarah Frier and David Biller contrast Facebook executives’ comments that they are “delighted” with how quickly they have been able to tackle misinformation with what Brazilians are saying. (They’re saying that the problem is WhatsApp.)

Pablo Ortellado, a professor of public policy at the University of Sao Paulo who has studied fake news, said Facebook has made good strides, but isn’t addressing the full scale of the problem. And he thinks the company’s efforts still won’t be enough to tame WhatsApp, where Facebook doesn’t have visibility into exactly what’s being shared.

“All the malicious stuff of the campaigns went through WhatsApp, that’s the problem,” he said in an interview. “Really, that was one of the disasters of this election.”

Brazil Election Court Boosts Fake-News Fight With Runoff Looming

Elsewhere, Biller writes about what Brazil’s top electoral court is doing to rein in fake news:

The court known as TSE launched an official website to debunk social media posts challenging the vote’s legitimacy, and has held two video conferences with executives from California-based messaging app WhatsApp, widely used in Brazil. TSE President Rosa Weber was also scheduled to address the issue in a Wednesday meeting with representatives for the candidates, far-right front-runner Jair Bolsonaro and leftist Fernando Haddad, according to the court’s press office.

Facebook Finds Hack Was Done by Spammers, Not Foreign State

Robert McMillan and Deepa Seetharaman reported late Wednesday that Facebook has tentatively concluded its big data breach was not the work of a foreign government:

Facebook Inc. believes that the hackers who gained access to the private information of 30 million of its users were spammers looking to make money through deceptive advertising, according to people familiar with the company’s internal investigation.

The preliminary findings suggest that the hackers weren’t affiliated with a nation-state, the people said.

Facebook labels African-American, Hispanic, Mexican ads as political

Jessica Guynn finds dozens of Facebook ads mentioning Hispanic Heritage Month or the word “Mexican” that get flagged as “political” and are blocked (because the page administrator has not registered as a political advertiser). I’m hearing lots of anecdotes like these myself:

Dozens of advertisements removed from Facebook for being political ahead of the November midterm elections did not appear to express any political view, a USA TODAY analysis showed. The Facebook ads from businesses, universities, nonprofits and other organizations did seem to have something in common: They mentioned “African-American,” “Latino,” “Hispanic,” “Mexican,” “women,” “LGBT” or were written in Spanish.

Even offers of free delivery from Chipotle Mexican Grill were mislabeled as political until an inquiry from USA TODAY. Laurie Schalow, the restaurant chain’s chief communications officer, said Facebook “corrected the error” after being alerted.

In Facebook’s Effort to Fight Fake News, Human Fact-Checkers Struggle to Keep Up

Georgia Wells and Lukas I. Alpert report that Facebook partner FactCheck.org manages to debunk less than one story per day. (Elsewhere in the story, LikeWar author Peter W. Singer tells the Journal that bringing on fact checkers to Facebook “is like bringing a spoon to clean out a pig farm.”)

Out of Factcheck’s full-time staff of eight people, two focus specifically on Facebook. On average, they debunk less than one Facebook post a day. Some of the other third-party groups reported similar volumes. None of the organizations said they had received special instructions from Facebook ahead of the midterms, or perceived a sense of heightened urgency.

ABC News, which was part of the fact-checking effort when it began early last year, has dropped out. “We did a review, and we couldn’t tell if it was really making any difference; so we decided to reallocate the resources,” said a person familiar with ABC’s decision.

The Twitter problem: Republicans and Democrats polarize more when they read each other

Are we becoming more polarized because we only hang out online with people just like us? Or is it because we are so often exposed online to our political opposition? Ezra Klein reports on a paper (which I mentioned here when it first came out) that found the latter explanation more compelling:

“Republican” is an identity. “Democrat” is an identity. When you log on to Twitter and read someone attacking the people you admire, the people you ally with, the people you see as your group, you become defensive of your side and angry at the critics.

One problem in all this is that most political media isn’t designed for persuasion. Some is, of course — Ross Douthat’s column at the New York Times is a soft conservative trying to persuade a liberal audience, for instance — but most opinionated political media is written for the side that already agrees with the author. Similarly, most partisan elected officials are tweeting to their supporters, who follow them and fundraise for them, rather than to their critics, who don’t.

Who’s Winning the Social Media Midterms?

Kevin Roose and Keith Collins analyzed the number of interactions on every Facebook and Instagram post for hundreds of candidates in the midterm elections. They found that Democrats have a lead in engagement in their House races — and a deficit in their Senate races:

Together, the data amounts to a revealing picture of how those candidates’ messages are resonating with a digital audience, and how social media activity both mirrors and departs from more traditional polling methods.

It also shows that Democrats often dominate the conversation on Instagram, but Republican candidates are finding their biggest audiences on Facebook, the largest and most influential social network.

Inside the race to hack-proof the Democratic Party

Eric Geller looks at what the Democratic National Committee is doing to prevent another 2016-style hack:

The Democratic National Committee has spent 14 months staffing up with tech talent from Silicon Valley, training staff to spot suspicious emails and giving the FBI someone to talk to if it spots signs of hackers targeting the party.

The first concrete sign of success may come in a few weeks, if the Democrats make it through the November midterm elections unscathed. But Raffi Krikorian, the DNC’s chief technology officer, is already pointing to one significant accomplishment — what he calls a massive overhaul of digital security at the committee and its sister organizations.

Elsewhere

Twitter Won’t Suspend Louis Farrakhan For His Tweet Comparing Jews To Insects

In an extremely Twitter decision, Twitter has decided you can compare Jews to insects until new rules go into effect later this year.

Why Can’t Instagram Get Anybody to Care About IGTV?

Madison Malone Kircher is the latest to note the slow start of IGTV. (She also talks to a bunch of teens about why they haven’t been using it.)

It appears, though, that the very creators Instagram promised users would anchor IGTV haven’t even bothered to put in a good-faith effort to get those views. Lauren Riihimaki a.k.a. LaurDIY has just two IGTV videos posted to her channel. One is a three minute “Target haul” — haul is vlogger talk for a video showing off everything you bought on a shopping trip and describing it in excruciating detail — and the other is a 56-second stop-motion clip of Riihimaki walking around Los Angeles. Both were posted on June 20, the same day IGTV launched. JiffPom the dog doesn’t have any IGTV videos posted at present. If creators aren’t creating on the platform, then it only makes sense that users aren’t, well, using it. There’s nothing to watch.

A Botnet Used By Russian Trolls Is Still Sitting Dormant On Twitter, And It Promoted Taco Bell And Coachella

Twitter released new information about Russian trolls on the platform this week, and Jane Lytvynenko writes about a botnet sleeper cell hiding in plain sight:

Jonathon Morgan, the CEO of New Knowledge, a security company that monitors social media misinformation and online influence operations, told BuzzFeed News that his software identified strange phrases used by some of the IRA accounts released by Twitter. They appeared to be more promotional than political.

“Free Lunch for a Year at Taco Bell,” some accounts posted. “FREE pass to the 2013 Coachella Music Festival!” others tweeted.

Shane Dawson’s Jake Paul series is really about YouTube’s broken heart

Patricia Hernandez watches Shane Dawson’s series about Jake Paul and finds that it’s really about YouTubers wrestling with the platform’s incentives:

This is not a series about cracking Jake Paul or rehabilitating him — not really. This is Shane Dawson staring into the abyss, knowing full well that he’ll find something familiar lurking in the shadows. Maybe all YouTubers do, on some level. “No matter what somebody thinks about your videos or whatever, everyone can agree how much work it is … having to come up with the craziest shit every time,” Shane says to Jake Paul at one point during the series.

Jake agrees, saying that he feels like he has to top himself with every subsequent upload. “I think that’s where a lot of madness and craziness comes in,” Paul says. This sentiment — the idea that you have to keep going, that the next thing always has to be bigger and better — is at the heart of why so many YouTubers end up feeling burned out.

Did I Make a Mistake Selling Del.icio.us to Yahoo?

Del.icio.us CEO Joshua Schachter reflects on selling his once-beloved social bookmarking site to Yahoo.

Once we were acquired, Yahoo helped us on the tech side, but not as much as it said it would. I think this is common for acquisitions. Before you’re acquired, you’re an important visionary. Afterward, you’re a crazy person who just wants to burn money.

Any decision was an endless discussion. I remember once, we had to present to a senior vice-president. We had a 105-slide deck prepared, and we didn’t get past the second slide because they rattled on about one fucking slide. It was a miserable environment.

Launches

Sidestepping App Stores, Facebook Lite and Groups get Instant Games

Facebook is putting games in more places using HTML5, Josh Constine reports. For me, games on Facebook began and ended with Scrabulous. Scrabulous was really fun! It’s sad that it was completely illegal.

Facebook is bringing back MTV’s The Real World

Nothing has really broken through on Facebook Watch just yet, but The Real World is at least a recognizable name that might get millennials to stop scrolling and watch a few seconds.

Takes

Computational Propaganda

Renee DiResta writes a long, sober, unsettling piece about how we got to such a low point of trust in our information ecosystem:

These technologies will continue to evolve: disinformation campaign content will soon include manufactured video and audio. We can see it coming but are not equipped to prevent it. What will happen when video makes us distrust what we see with our own eyes? If democracy is predicated on an informed citizenry, then the increasing pervasiveness of computational propaganda is a fundamental problem. Through a series of unintended consequences, algorithms have inadvertently become the invisible rulers that control the destinies of millions. Now we have to decide what we are going to do about that.

Twitter’s Misguided Barriers for Researchers

Kara Alaimo says Twitter’s efforts to prevent misuse of the platform are making it too hard for researchers who study that misuse:

Justin Littman, a software developer at Stanford University Libraries, explained that researchers with advanced software skills used to be able to sign up to write their own software and get the data they needed from Twitter’s Application Programming Interface (API). In July, the company announced that researchers would have to first request access to the API for such projects in order to prevent malicious use of the application. The company also limits how data sets obtained this way can be shared. Users who sign up to use the developer platform also have to agree to Twitter’s policies. One of these imposes restrictions on studying a range of subjects, including political beliefs and racial and religious topics.

It’s great that Twitter is trying to prevent its data from being used for nefarious purposes, such as interfering in elections. But it’s disturbing that the company requires researchers using their API to agree to policies that include restrictions on studying topics like identity politics. A Twitter spokesperson pointed out that access to the basic API is free, but Littman noted that it only provides historical tweets going back a few days. That’s unhelpful for many academics whose studies have longer time frames.

And finally ...

Facebook Apologizes for Showing Parenting Ads to Bereaved Mother

There are two basic views of tech platforms that are funded by advertising. One is that the rise of machine learning and improved data collection techniques has created a dystopian panopticon in which all of us have become helpless pawns of late capitalism. The other view is that the rise of machine learning and improved data collection techniques has created a dystopian panopticon in which the artificial intelligence isn’t yet working right, causing people to needlessly suffer while tech platforms fine-tune their ad targeting.

Here is an exhibit for the latter view:

Anna England-Kerr said that after sharing the news on the social network, she continued to see ads for cots, baby blankets and bottles, and more recently IVF treatments, despite changing settings on Facebook that should have blocked such appearances.

“The onus should really be on Facebook to fix this, and not on bereaved parents to remove themselves from social spaces that help them deal with their grief,” England-Kerr said in an interview.

*Nods.*

Talk to me

Send me tips, comments, questions, and your viral WhatsApp threads: casey@theverge.com.