In December of last year, conservative publisher The Weekly Standard became an approved fact-checker at Facebook. It is the only partisan outlet to be approved as a Facebook fact-checker in the United States, and the decision to include it in the partnership alongside the Associated Press, Politifact, Snopes, and Factcheck.org drew some criticism when it occurred, as Sam Levin reported at the time in the Guardian:
“I’m really disheartened and disturbed by this,” said Angelo Carusone, president of Media Matters for America, a progressive watchdog group that published numerous criticisms of the Weekly Standard after the partnership was first rumored in October. “They have described themselves as an opinion magazine. They are supposed to be thought leaders.”
The inclusion of a conservative outlet alongside nonpartisan news organizations set the stage for a conflict when, inevitably, that outlet weighed in on a controversial partisan issue. That day arrived today.
On Sunday the liberal publisher ThinkProgress published an article by Ian Millhiser entitled “Brett Kavanaugh said he would kill Roe v. Wade last week and almost no one noticed.” In the piece, Millhiser takes a sentence from Kavanaugh’s testimony to Sen. Ted Cruz and extrapolates what he thinks it means, in light of statements Kavanaugh has made previously about abortion rights.
At no point in Kavanaugh’s testimony did the judge say he “would kill Roe v. Wade,” as the headline says. He did make statements that suggest he is likely to vote to overturn Roe v. Wade, as Millhiser’s article explains. Say you’re the fact-checker. Is ThinkProgress’ story true or false?
The Weekly Standard said it was the latter, slapping the piece with a “false” label, which Facebook has said typically reduces an article’s reach by 80 percent. Millhiser objected to the rating in a new piece for ThinkProgress today:
The article in question, which this reporter wrote, pointed out that, when you read a statement Kavanaugh made during his confirmation hearing alongside a statement he made in 2017, it becomes clear he is communicating that he opposes Roe v. Wade. Our article is factually accurate and The Weekly Standard’s allegation against us is wrong.
Millhiser defends his use of the word “said” in the headline by citing its dictionary definition:
According to Merriam-Webster’s dictionary, the verb “say” or “said” can mean to “indicate,” “show,” or “communicate” an idea. Our argument is that Kavanaugh indicated, showed, or communicated his intention to overrule Roe when he endorsed the Glucksberg test after saying that Glucksberg is inconsistent with Roe.
Of these words, “indicate” strikes me as much more honest than “said,” a verb commonly understood to mean saying something out loud. So why didn’t Millhiser use “indicate” in his headline instead?
You don’t have to be an expert in publishing to know that “Brett Kavanaugh indicated he would kill Roe v. Wade last week and almost no one noticed” is a much weaker headline than the one Millhiser used. Most readers will only ever see the headline, of course. And so he fudged it to “said.” In adhering to one law of Facebook — use the headline that will generate the maximum possible outrage, and therefore clicks — he arguably violated another (stick to the truth).
Millhiser cops to his publication’s thirst for eyeballs in his piece today:
It’s no secret that the digital news business is driven by clicks. A news site that brings in many readers will also bring in a great deal of ad revenue, and this money can be used to hire reporters and to continue the outlet’s work. An outlet that loses a significant portion of its readership may have to lay off reporters or could even go under.
At its peak, Facebook provided as much as 40 percent of ThinkProgress’ traffic. Facebook recently changed its algorithm in ways that reduced the amount of traffic it sent to most news outlets, but it still accounts for between 10 and 15 percent of our readers. The difference between keeping those readers and losing them could decide whether we can hire more reporters who will continue to report on subjects that the Weekly Standard may have ideological disagreements about.
And yet I can’t help but feel, reading all of this, that the particular dispute we are looking at is not ideological in nature. It’s semantic. And fact-checking is a vocation that often finds itself mired in petty word-based disputes like this.
“The real problem is not ‘Is Facebook censoring progressives,’” tweets Alexios Mantzarlis, director of the International Fact-Checking Network at Poynter, which approved The Weekly Standard’s entry into its ranks, “but ‘Should Facebook ask fact-checking partners to flag stories based on headlines?’ and ‘How literally?’ We know a lot of fakes travel off of a headline alone. Not acting on those opens a pretty big loophole.”
He goes on: “It seems to me that we’re stuck in a lethal loop where instead of improving a platform through data-based accountability and measured interactions we are making the conversation around Facebook’s actions to fight misinformation a 100% U.S.-centric partisan battle.”
Joshua Benton, who runs Harvard’s Nieman Lab, says the Weekly Standard ought to have developed a longer record as a fact-checker before it was allowed to weigh in on Facebook disputes. (It had barely begun to check facts before it became a Facebook partner.) He also suggests, smartly I think, that Facebook should not partner with organizations that select only a narrow slice of subjects — say, articles that challenge conservative talking points — to “fact-check.”
Instead, Facebook may take an even more democratic approach. Daniel Funke reported Tuesday on a new test from its subsidiary CrowdTangle, which makes software that lets you analyze how content spreads around the web:
CrowdTangle announced in a blog post that it’s testing a feature that allows users to report potentially false news stories within the platform’s Facebook dashboards. That test builds upon the company’s existing mechanism for reporting potential misinformation at the post-level as a regular user.
“We know media professionals who use CrowdTangle have a sense of the type of content being circulated that is false or misleading, especially outside the United States. Many also have an understanding of the active ecosystem of websites that generate false news,” Jesse Evans wrote in the post. “We want to give our partners the ability to quickly and easily report false news right where they are, inside CrowdTangle.”
Facebook says it won’t take action based on user reports, and instead only wants to see whether CrowdTangle reports constitute a useful signal in identifying bad actors. But the experiment bears watching.
Ultimately, I’m with Mantzarlis. If an article is basically factually correct, but has a headline that is basically factually wrong, fact-checkers ought to take action — or what else are they for? Some people think the “false label” ought to be reserved only for moonbat headlines about the Pope being a lizard person, but it’s hard for me to see how that meaningfully improves our news ecosystem. The Weekly Standard is using the system as it was designed — and how that design should be improved is a different question.
If you’re curious, you can still see the ThinkProgress post that caused all the controversy. It’s there, uncensored and without any angry labels, on the publication’s Facebook page. It got 727 shares and more than 1,000 reactions, way above average for ThinkProgress. The vast majority of people who ever would have seen it had done so long before the Weekly Standard weighed in. True or false, Millhiser’s article was a hit.
That’s a feature of the current system, too.
File under “I’ll believe it when I see it”:
President Trump is expected to sign an executive order as soon as Wednesday that would authorize sanctions against foreigners who attempt to interfere in American elections, according to three people familiar with the matter.
Suzanne Nossel examines the moral compromises Google would make by re-entering China with its censored search engine:
Google’s compliance with Chinese censorship directives will also have an unavoidable, distorting impact on online discourse in the world’s most populous country, obscuring the truth, reifying government-sanctioned orthodoxies, denying history, and furthering the repression of persecuted groups. Chinese government organs are estimated to issue thousands of separate censorship directives annually, charging all companies with compliance under threat of severe sanction or shutdown. Discussion of the Tiananmen Square protests, Taiwan’s independence, and the rights of Tibetans is forbidden, and those who violate the strictures face harsh punishment. Beyond those three top taboo topics, Google may be required to deny its users vital information about health and safety threats when such information casts a negative light on the state, including vaccinations, pollution, and disease controls.
Those who use Google to search for information on human rights violations—including the pervasive, forced detainment of hundreds of thousands of ethnic minority residents of China’s Xinjiang region—will find only whitewashed accounts that provide cover for the government’s abusive campaigns. Articles or posts questioning China’s frequent use of forced confessions will be banned, helping to shield this brutal practice from scrutiny. Other topics certain to be off-limits include the rights of other ethnic minorities; the mistreatment and premature deaths of Chinese political prisoners; politically motivated charges and show trials of activists, human rights lawyers, and independent scholars; and extrajudicial renderings of Chinese and foreign citizens throughout Asia. Whereas Google has positioned itself as a champion of the #MeToo movement, it will be required to censor that and related hashtags in China, denying survivors of sexual assault and abuse a desperately needed voice.
Joe Bernstein writes about the end of r/milliondollarextreme:
Before it was shuttered, the /r/milliondollarextreme subreddit had more than 43,000 subscribers, making it one of the more active Reddit communities where white supremacist and white nationalist content was shared. An archived version of the page preserved in Google cache shows the subreddit on Monday featured posts mocking nonwhite people, vilifying transgender people, and claiming Jews are trying to normalize pedophilia.
“As of September 10th, r/milliondollarextreme and associated subreddits have been banned for violating our violent content rules,” a Reddit spokesperson told BuzzFeed News. “We are very clear in our site terms of service that posting content that incites violence will get users and communities banned from Reddit.”
The media loves embedding Russian troll tweets! Although they are not labeled as such, report Alex Hern, Pamela Duncan and Ella Creamer:
In June the US Congress released details of 1,000 accounts that Twitter believes were run by the Internet Research Agency (IRA), a state-backed misinformation operation based in St Petersburg, adding to more than 2,000 accounts the company had already identified.
The accounts were cited in news stories by the British press more than 20 times. Adding to the 80 citations the Guardian uncovered in November 2017, Russian propaganda ended up being published by the British press more than 100 times.
Here’s a good piece from Jeremy W. Peters and Sapna Maheshwari about how first-time candidates are finding success making unconventional videos about themselves, many of which have gone viral and helped with fundraising:
The wave of female, minority and outsider candidates that is breaking cultural barriers and toppling incumbents in the Democratic Party is also sweeping aside a longstanding norm in campaigns: That the public image of politicians — especially women — should be upbeat, uncontroversial and utterly conventional.
For many of these Democrats who were running against better-financed rivals, the breakthrough moment came after they got personal in relatively low-cost videos that went viral, reaching millions of people. Using documentary-style storytelling, which can last for several minutes, candidates have found a successful alternative to the traditional model of raising huge sums of money that get spent on expensive, 30-second television commercials.
Alex Jones has lost his online platforms, but his online store hasn’t been touched, Craig Timberg reports:
Digital storefronts on these and other platforms funnel traffic to a website where transactions take place, Infowarsstore.com, that has not been affected by the industry crackdown on Jones. It had 1.15 million visits in August, up 55 percent compared with June, the last month before the recent controversy started, according to SimilarWeb, an analytics tool.
Kim Hart has the key bullet point from this new study by Common Sense Media:
In 2012, 68% said their go-to social site was Facebook. That number fell to 15% in 2018, with Snapchat and Instagram the new favorites.
Facebook’s first foray into prestige TV is a half-hour drama about a woman coping with the loss of her husband, Steven Zeitchik reports:
The Silicon Valley giant unveiled its first high-end series, “Sorry for Your Loss,” on Saturday at the Toronto International Film Festival. And it turns out that the show — about a young woman coping with the sudden death of her husband — is at once highly traditional yet very particular to the platform.
“In some ways this comes in a long line of shows and movies about loss,” said Kit Steinkellner, the creator of the series, which will stream on Facebook Watch, Facebook’s streaming platform. “But I also like the idea of media meeting message. Facebook is a place where I hear about most deaths, most births, most marriages. It made sense for them to do a series about these life events.”
Do you hate opening the Safari browser once a year? You’re in luck: you can watch tomorrow’s iPhone event on Twitter, right alongside all the jokes about dongles. (I’ll be liveblogging along with my Verge colleagues Nilay Patel and Dieter Bohn, and I invite you to follow along with us.)
Being a Twitch streamer seems extremely fraught:
Guy Beahm, best known as the boisterous PlayerUnknown’s Battlegrounds streamer Dr. Disrespect, abruptly stopped his Twitch broadcast today following what he says was an attempted shooting at his house.
News of this apparent shooting spread primarily due to Beahm’s live stream, where he can be seen playing Call of Duty only to leave his station following an unknown noise. He breaks character and says, “someone shot at our house.”
Rosetta is a new AI tool that analyzes text inside of memes, which ought to come in handy for content moderation.
Users of Facebook’s low-data alternative app now have access to its tools for seeking help after a calamity, Mallory Locklear reports:
The feature, which evolved out of Safety Check, helps users connect in the event of a crisis, allowing them to share updates, communicate with others and find or provide help.
Donald Trump’s campaign manager, Brad Parscale, rounds up a bunch of times conservatives got mad at Google and stamps his feet about the company’s influence. (Content advisory: this take is infuriating.)
Google’s broad and pervasive role in the lives of almost every American today cannot be overstated. More than 90 percent of all online searches are conducted through Google or YouTube. The media giant’s video-sharing site has 1 billion active users a month, many of whom go there to learn and share conservative ideas only to find their quest for knowledge subverted by faceless ideologues.
Google is clearly manipulating and controlling the political narrative in favor of Democrats and the left, and silencing conservatives and Republicans. A company with such power and influence cannot simply be allowed to play the biased gatekeeper of political discourse.
Alexis Madrigal takes on the New Yorker profile of Zuckerberg, and the lede is good:
Mark Zuckerberg is impossible to profile. He’s a narrative anti-catalyst, who takes all the elements of a fantastic story, and renders them lifeless, probably on purpose. The latest New Yorker contains about 14,000 exceedingly well-crafted words about Zuckerberg, and yet, not once do we catch a glimpse of the man outside his carefully managed cocoon of self-awareness. When there is a reporter around, he’s never thinking aloud, or hanging around with his friends, or talking shit. He is never in the heat of the moment. He is the anti–Elon Musk.
Lauren Oyler’s discursive attempt to answer the question “why does anybody tweet about politics, or anything, really?” will be familiar to many users:
The movement to abolish ICE was supplemented in no small part by the diligent tweets of one guy, but I am not that guy. Are we all supposed to be that guy? That’s sort of the message I get. I suspect the main thing for most people is that it (tweeting) makes them feel, briefly, like they’re helping, but beyond politics, it usually makes me feel worse. If people “engage” I can’t stop checking back to see who and how many; if they don’t, I feel like I’ve made some grave error of judgment that will soon trickle down into my professional and social prospects. My self-deprecating commentary—“nothing more embarrassing than being complimented on your Twitter thread”—never quite manages to ironize itself out of what it is: a plea for attention among infinite other pleas for attention. The “connection” we were promised is not so different from a broadcast: I make up a character and play it for ratings. It’s amazing that a tech company can make me—me!—divide my self-worth into endless discrete moments and distribute them among people I’ve never met and on average don’t think are very smart. I’m not being sarcastic: it really is amazing.
And finally ...
Sometimes when I need something dumb and funny to end the newsletter, I visit Mashable to see what dumb thing is going on in social media. Today I am delighted to tell you that there is a dog who speaks Japanese. His name is Beni. Thank you Beni — and thank you Mashable for never letting me down!
Talk to me
Send me tips, questions, corrections, and fact checks: email@example.com.