Today, three shorter items to carry us into the weekend.
One, Facebook has hired a new head of global policy and communications to replace Elliot Schrage. It’s Nick Clegg, the former deputy prime minister of the United Kingdom. Clegg is also a former member of the European Parliament and a onetime European Commission trade negotiator, so he knows his way around the institutions that have been punishing tech companies for anticompetitive behavior — most notably Google, which received a $5 billion fine for issues involving Android. With Facebook currently in the crosshairs of European regulators over a wide range of issues, Clegg brings a perspective and clout that the company has previously lacked.
British people have a proud tradition of loathing their elected leaders, and they eagerly traded zingers about Clegg on Friday morning, many of which are funny only if you have a solid grasp of British politics. (It helps to know that Clegg presided over a collapse in support for his party, the Liberal Democrats, and that the party abandoned a pledge to oppose tuition increases for students. The Guardian has a helpful mini-profile embedded in his op-ed about taking the new job.)
Clegg is a former journalist and a centrist, and unlike Schrage, he has a large Twitter following. Is he what Facebook needs for the role? A global head of policy and communications needs to be very good at two things: knowing people, and arguing. By that measure, Clegg would seem to fit the bill. In any case, he deserves a chance. Here’s what he said in the Guardian:
I remain a stubborn optimist about the progressive potential to society of technological innovation. It can transform how we work, play and build relationships. It can help to protect our environment and keep our streets safe. It will fundamentally change how we teach our children at school and at home. It is transforming healthcare and transport. If the tech industry can work sensibly with governments, regulators, parliaments and civic society around the world, I believe we can enhance the benefits of technology while diminishing the often unintended downsides.
Of course, managing those unintended downsides will probably represent the bulk of Clegg’s time at Facebook. He’ll have his work cut out for him.
Two, the new head of WhatsApp made his first public comments about an issue of any significance. Chris Daniels, who took over the messaging app during Facebook’s big org-chart shuffle in May, posted to the company blog on Thursday to explain how Facebook is trying to prevent WhatsApp from being misused in Brazil. (This was also the subject of my column yesterday; Daniels’ note hadn’t been posted by press time.)
Anyone hoping to better understand Daniels’ product philosophy will be disappointed by his charmless and notably defensive blog post, which includes the full complement of October 2018 Facebook talking points: misinformation didn’t start with us; most people don’t use WhatsApp to spread misinformation; a global platform will inevitably host both the good and the bad. He also adopts Facebook’s unfortunate tendency to speak about world-scale problems in percentages.
Today, over 90 percent of messages sent on WhatsApp in Brazil are individual, one-on-one conversations. The majority of groups include just six people — a conversation so private and personal that it would fit in your living room.
(You can stop over 90 percent of asteroids from crashing into your planet and still have a major problem on your hands.)
Nowhere in Daniels’ post does he acknowledge some of the unique ways in which his popular app, with its potent combination of encryption and viral sharing mechanics, has created new and extremely difficult problems for Brazil. (A far-right, anti-democratic climate change skeptic is now poised to win, after his backers funded a fake news campaign on WhatsApp.) Instead, Daniels lists six steps the company has taken to reduce its level of harm, before saying “it will take all of us” to solve the problem.
In the meantime, it’s not clear that Daniels even understands what the problem is. He comes across as a colonial governor telling a restless public that the crown is taking their concerns very seriously. Brazil deserves better. So does WhatsApp.
Three, the media had a weeklong fight over whether Facebook intentionally misled them about how interested people really were in watching video, prompting publishers to lay off writers in an ultimately fruitless “pivot to video” that impoverished journalists and journalism. The spark was a lawsuit I mentioned here earlier in the week, in which advertisers said a metrics reporting error — which Facebook acknowledged in 2016 — had been known within the company for a year before it was disclosed.
At issue is how Facebook reported video views. Here’s Suzanne Vranica with a concise explanation:
For two years, Facebook had counted only video views that lasted more than three seconds when calculating its “average duration of video viewed” metric. Video views of under three seconds weren’t factored in, thereby inflating the average length of a view.
Facebook replaced the metric with “average watch time,” which reflects video views of any duration.
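(To make the distortion concrete, here’s a minimal sketch with made-up durations for a single video; it illustrates the two calculations, and is not Facebook’s actual code or data.)

# Hypothetical view durations for one video, in seconds
durations = [1, 1, 2, 2, 45, 60]

# Old metric: "average duration of video viewed" ignored views of 3 seconds or less
counted = [d for d in durations if d > 3]
old_average = sum(counted) / len(counted)      # (45 + 60) / 2 = 52.5 seconds

# New metric: "average watch time" counts every view, however short
new_average = sum(durations) / len(durations)  # 111 / 6 = 18.5 seconds

print(old_average, new_average)  # 52.5 vs. 18.5: same video, very different story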
The metric may have been overstated. But as the linchpin of a theory that publishers pivoted to video on a false pretext, it’s pretty flimsy. As Laura Hazard Owen notes, much more important was the way Facebook talked about video, with Mark Zuckerberg himself predicting that video would soon become the dominant form of communication on the platform.
Much of the conversation has concluded that people did not want to watch news-oriented video. This conversation tends to omit the existence of YouTube, on which people do watch quite a lot of news-oriented video. (May I please recommend to you the Vox channel, with 1.1 billion views and a successful Netflix show, or Verge Science, which reached more than half a million subscribers in under a year.)
In 2016, traditional publishers were still having trouble cracking YouTube. But they were willing to take a flier on Facebook, because more than 1 billion people were looking at it every day, and Facebook had turned the knobs on video all the way up. Importantly, some publishers appeared to be succeeding with a video strategy:
In September, Tasty’s main Facebook page was the third-biggest video account on Facebook with nearly 1.7 billion video views, according to Tubular Labs. Viewership per video is also staggering: During the last three months, Tasty’s Facebook videos have averaged 22.8 million video views in the first 30 days alone. That’s better than BuzzFeed’s main Facebook page and the separate BuzzFeed Food account, which averaged 4.7 million views and 1.1 million views per video in the same timeframe.
Overall, Tasty now accounts for 37 percent of BuzzFeed’s video views, according to Tubular. This is all the more remarkable considering BuzzFeed only started Tasty in July 2015.
There were three problems with Facebook video. One, Facebook never figured out a good way for publishers to make money from their videos. Publishers assumed that some kind of pre- or mid- or post-roll advertising would offer a return on their investment, but it never did. Two, Facebook had a product problem. The News Feed is meant for rapid, near-mindless scrolling; video is meant for intentional, lean-back viewing. A handful of formats, most notably Tasty’s, thrived in the News Feed. But most died — which is why Facebook is now shunting video over to its Watch tab, and even there nothing has really broken out of the pack.
Finally, in the aftermath of the 2016 election, Facebook ratcheted down the amount of publisher content in the feed, in the hopes that seeing more of our friends and family would discourage us from sharing viral memes and destroying democracy. Video will still play a major role in Facebook’s future, but it’s likely to look more like the video you see in Instagram stories and less like those square videos with text captions posted over B-roll.
There’s a valid critique of Facebook in there somewhere. But much of the anger feels, to me, misplaced. Journalists would have benefited if Facebook had done a better job predicting the future. But publishers could have done a better job predicting the future, too.
Democracy
Here’s our first real piece of evidence that Russia is actively interfering in the current midterm elections in the United States. Adam Goldman reports:
Russians working for a close ally of President Vladimir V. Putin engaged in an elaborate campaign of “information warfare” to interfere with the midterm elections, federal prosecutors said on Friday in unsealing a criminal complaint against one of them.
The woman, Elena Alekseevna Khusyaynova, 44, of St. Petersburg, was involved in an effort “to spread distrust toward candidates for U.S. political office and the U.S. political system,” prosecutors said.
Craig Timberg, Tony Romm, and Brian Fung examine the propaganda in Russia’s US midterm election campaign, drawing on the criminal complaint unsealed above.
The late Sen. John McCain was “an old geezer.” House Speaker Paul Ryan is “a complete and absolute nobody.” And the investigation into possible collusion between President Trump’s campaign and Russia is a “witch hunt” led by “an establishment puppet.”
Name the subject, and Russian disinformation operatives had a playbook on how to pass themselves off as politically active Americans as they secretly sought to manipulate U.S. voters online – on both the right and the left – with incendiary phrases, glib putdowns and appeals to pre-existing political biases. And the same tactics honed during the 2016 presidential election carried over into the run-up to the 2018 midterm congressional vote.
Mike Isaac and Kevin Roose examine the state of disinformation in Brazil ahead of the election:
“People entered this election with a sense of hyperpolarization,” said Roberta Braga, an associate director at the Adrienne Arsht Latin America Center at the Atlantic Council, a Washington-based foreign policy think tank. “There is a lot of distrust in politics and politicians and political establishments in general.”
Trumpian name-calling is now a feature of many state and local elections, Kevin Roose reports.
Tessa Lyons cites new research showing that the volume of fake news shared on Facebook has declined by more than 50 percent:
First, Allcott, Gentzkow and Yu published a study on misinformation on Facebook and Twitter (PDF). The researchers began by compiling a list of 570 sites that had been identified as false news sources in previous studies and online lists. They then measured the volume of Facebook engagements (shares, comments and reactions) and Twitter shares for all stories from these 570 sites published between January 2015 and July 2018. The researchers found that on Facebook, interactions with these false news sites declined by more than half after the 2016 election, suggesting that “efforts by Facebook following the 2016 election to limit the diffusion of misinformation may have had a meaningful impact.”
Last week, a University of Michigan study on misinformation (PDF) had similar findings about the effectiveness of our work. The Michigan team compiled a list of sites that commonly share misinformation by looking at judgments made by two external organizations, Media Bias/Fact Check and Open Sources.
On Thursday, Twitter suspended a network of suspected bots that had spent the past week pushing pro-Saudi Arabia talking points about the disappearance of journalist Jamal Khashoggi.
Days after the reported murder of Jamal Khashoggi, misinformation is everywhere, report Daniel Funke and Alexios Mantzarlis:
Saudi media outlets reported a conspiracy theory that Khashoggi’s fiancée is fake in an apparent effort to discredit Turkish and American intelligence. Reuters fell for a fake news story about the firing of a Saudi consul general. Some accounts are promoting a nonsensical video from a guy who wears a strainer on his head. And the Saudi government itself has threatened anyone who spreads “fake news” online with lengthy prison terms and heavy fines.
Elsewhere
Issie Lapowsky talks to recently departed Facebook engineer Brian Amerige, who had accused the company of a “political monoculture that’s intolerant of different views.” But he’s leery of becoming a poster boy for Republicans complaining about “bias.”
“I have every confidence that they take these issues really, really seriously, and they’ve treated me with a lot of respect,” Amerige says. “They’re pretty intimately involved.”
Last week, Amerige left Facebook over disagreements about the company’s platform-wide hate speech policy, which he describes as “dangerous and impractical” for a platform that promotes openness. But he had spent the two months before that working closely with Facebook’s human resources team on ways to foster what he calls “political diversity.” One initiative Amerige says they discussed was an updated employee speech policy that would draw a distinction between attacking people’s ideas (which would be permitted) and attacking their character (which would be prohibited). He’s unsure whether Facebook plans to implement the ideas.
Speaking of departed employees, PRI’s The World talks to ex-Googler Vijay Boyapati, who quit in 2007 over the company’s decision to enter the Chinese market.
When I was there, I thought it was morally wrong for two reasons: One was that there had been no internal debate about it in terms of Google News — the product I’d worked on. And so I wanted to bring that up because I thought it was the wrong move for Google. If a journalist does have the courage to write about something controversial and Google was asked to censor them. And as someone who’d worked on the product, you’d have the knowledge that someone’s voice had been silenced by something that you built. And that makes me deeply uncomfortable.
Facebook is launching a new series of blog posts in which they describe how they found fake news and determined it to be false. In episode one, learn if a Saudi Arabian man actually spit in a woman’s face.
Speaking of fake news, Geoffrey Fowler got taken in by a video that showed a commercial plane appearing to do a barrel roll during landing:
The photorealism of Tsirbas’s clip played a big role in making the fake story go viral. And that makes it typical: Misinformation featuring manipulated photos and videos is among the most likely to go viral, Facebook’s Lyons said. Sometimes, like in this case, it employs shots from real news reports to make it seem just credible enough. “The really crazy things tend to get less distribution than the things that hit the sweet spot where they could be believable,” Lyons said.
Even after decades of Photoshop and CG films, most of us are still not very good at challenging the authenticity of images — or telling the real from the fake. That includes me: In an online test made by software maker Autodesk called Fake or Foto, I correctly identified the authenticity of just 22 percent of their images. (You can test yourself here.)
Launches
YouTube has finally rolled out a mini-player for browser users; the mobile app has had one for quite some time. It lets you keep watching a video while browsing for something new at the same time.
Big day for little YouTube updates! In addition to the one above, and this one, which is just what it says on the tin, you can also now buy concert tickets on Eventbrite from music video pages.
Takes
The creator of The Wire talks to the creator of the law holding that as an online conversation grows longer, the probability that it will eventually include a comparison to Hitler approaches 1. Simon hits hard on his pet issue, which is that he should be able to call a Nazi anything he wants to:
The last thing that Twitter should be doing is policing decorum, or trying to leech hostility from the platform. Why? Because the appropriate response to overt racism, to anti-Semitism, to libel, to organized disinformation campaigns is not to politely reason with such in long threads of fact-sharing. All that does is lend a fundamental credence to the worst kind of speech—which, grievously, seems to be the paradigm that Twitter prefers at present. It’s a paradigm that offers two basic choices: Ignore the deplorati—which allows the dishonesty or cruelty to stand in public view and acquire the veneer of credibility by doing so. Or worse, engage in some measure of serious disputation with all manner of horseshit, which also grants trash the veneer of credibility.
In 1935, the reply to Streicher or Goebbels quoting The Protocols of the Elders of Zion and asserting that Jews drink the blood of baptized Christian babies is not to begin arguing that “no, Jews do not drink Christian baby blood” and deliver a long explanation of The Protocols as a czarist forgery in chapter and verse. The correct response is to call Julius Streicher a submoronic piece of shit, marking him as such for the rest of the sentient, and move on to some more meaningful exchange of ideas. So it is with Twitter.
And finally ...
I’ll be in New York City on Thursday to speak at this conference about content moderation on big platforms. If you see me, please say hello!
Talk to me
Send me tips, questions, comments, and fun ideas for what I should do in New York City next week: casey@theverge.com.