Two weeks ago, CNN’s Oliver Darcy put a question to Facebook executives during an event in New York: how can Facebook say it’s serious about fighting misinformation while also allowing the notorious conspiracy site Infowars to host a page there, where it has collected nearly 1 million followers? The tension Darcy identified wasn’t new. But it crystallized a contradiction at the heart of Facebook’s efforts to clean up its platform. And we’ve been talking about it ever since.
Late Thursday night, Facebook took its first enforcement action against Jones since the current debate started. The company removed four videos that were posted to the pages of Alex Jones, Infowars’ founder and chief personality, and Infowars itself. Jones, who had violated Facebook’s community guidelines before, received a 30-day ban. Infowars’ page got off with a warning, although Facebook took the unusual step of saying the page is “close” to being unpublished.
The move came a day after YouTube issued a strike against Jones’ channel, after removing four videos itself. (Facebook won’t say which videos it removed, but the rationale it used to remove them — attacks on someone based on their religious affiliation or gender identity, and showing physical violence — suggests they are the same ones YouTube removed.)
These posts were removed for hate speech and violence, not misinformation. It’s likely Facebook would have removed them even without the extra attention on Infowars. But Jones’ behavior in the wake of recent enforcement actions shows how easily bad actors can skirt rules that were designed in the belief that most users will generally stick to them.
YouTube, for example, has a “three-strikes” policy: post three bad videos and your channel gets banned. But there’s a huge loophole, and Jones exploited it. As I reported earlier this week, users must log in to YouTube and view the strike against them before it gets counted. And if they posted multiple offending videos before they logged in, those offending videos are “bundled” into a single strike.
The idea is that the disciplinary process should educate a first-time offender. If someone posted three videos that violated copyright, for example, they might not understand what they did until YouTube notifies them. Better to give them a second chance, the thinking goes, than to ban their account instantly for three simultaneous violations.
Similarly, YouTube allows strikes to expire three months after they are issued. The idea is to give users a chance to rehabilitate themselves after they make a mistake. But viewed through the lens of Infowars, the policy begins to look like a free pass to post hate speech every 90 days or so.
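Taken together, the bundling and expiry rules described above can be modeled in a few lines. This is a hypothetical sketch of the policy as reported, not YouTube’s actual implementation; the class and method names are mine.

```python
from datetime import datetime, timedelta

STRIKE_LIFETIME = timedelta(days=90)  # strikes expire roughly three months after issuance

class Channel:
    """Toy model of the reported strike policy."""

    def __init__(self):
        self.pending_violations = 0  # removed videos the user hasn't seen yet
        self.strikes = []            # timestamps of acknowledged strikes

    def flag_video(self):
        # Each removed video adds a pending violation.
        self.pending_violations += 1

    def log_in(self, now):
        # All pending violations are "bundled" into a single strike
        # the next time the user logs in and views their account status.
        if self.pending_violations:
            self.strikes.append(now)
            self.pending_violations = 0

    def active_strikes(self, now):
        # Strikes older than 90 days no longer count.
        return [t for t in self.strikes if now - t < STRIKE_LIFETIME]

    def is_banned(self, now):
        return len(self.active_strikes(now)) >= 3

# Four videos removed before the user logs in -> one strike, no ban.
ch = Channel()
for _ in range(4):
    ch.flag_video()
now = datetime(2018, 7, 27)
ch.log_in(now)
```

Under this model, any number of simultaneous violations collapses into one strike, and waiting out the 90-day window resets the count — which is exactly the loophole the column describes.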
Jones has proven himself adept at evading platforms’ well-intentioned policies. The YouTube strike came with a ban on using the platform’s live-streaming features for 90 days — so Jones simply began appearing on the live streams of his associates, such as Ron Gibson. Here’s Sean Hollister at CNET:
YouTube is removing these streams and revoking livestreaming access to channels that host them, but it hasn’t stopped Infowars yet. Though YouTube shut down a livestream at Ron Gibson’s primary YouTube channel, he merely set up a second YouTube channel and is pointing people there.
Meanwhile, Facebook’s profile-specific discipline similarly ignores Jones’ ability to roam across pages. Jones is banned from accessing his personal profile, but he still gets to appear on his daily live show, which is broadcast on Infowars and “The Alex Jones Channel.” The solution to being banned from one profile is simply to broadcast yourself from another one.
There were good reasons for tech platforms to set up disciplinary policies that strived to forgive their users. But given how easily they can be gamed, they would appear to be ripe for reconsideration.
Britain’s Fake News Inquiry Says Facebook And Google’s Algorithms Should Be Audited By UK Regulators
An interim report from the House of Commons Digital, Culture, Media and Sport Committee, which leaked ahead of a planned Sunday release, calls for much stricter scrutiny of tech platforms like Facebook. Proposals include giving the government oversight of ranking algorithms, requiring online publications to be “accurate and impartial,” and making platforms liable for “harmful and illegal content.” All of that would be a big deal; this one bears watching.
In the first half of the year, Facebook received 1,704 complaints under a new German law that bans online hate speech, Reuters reports. The company removed 262 posts during that period, it said in a German-language blog post.
Paul Packer, the chairman of the U.S. Commission to Preserve America’s Heritage Abroad, wrote a letter to Zuckerberg calling his comments about Holocaust deniers “dangerous” and the company’s policies “inexcusable.”
Shortly after I sent off yesterday’s newsletter, Twitter posted a message about the “shadow ban” controversy:
We do not shadow ban. You are always able to see the tweets from accounts you follow (although you may have to do more work to find them, like go directly to their profile). And we certainly don’t shadow ban based on political viewpoints or ideology.
Trump’s tweets could come back to haunt him in the Mueller investigation.
Twitter lost 1 million users in the past quarter, the company said today as part of its earnings report, though at least some of that seems to be tied to efforts to remove bad actors from the platform. This is a good thing, but the stock tumbled anyway.
Interesting nugget about Instagram monetization from Paresh Dave:
Instagram and Facebook users see about the same number of ads, but Instagram ad prices are half of what Facebook charges because of the limited number of advertisers vying for spots on Instagram, four ad buyers said.
Hard Questions: Who Reviews Objectionable Content on Facebook — And Is the Company Doing Enough to Support Them?
Facebook says it takes good care of its content moderators:
All content reviewers — whether full-time employees, contractors, or those employed by partner companies — have access to mental health resources, including trained professionals onsite for both individual and group counseling. And all reviewers have full health care benefits.
We also pay attention to the environment where our reviewers work. There’s a misconception that content reviewers work in dark basements, lit only by the glow of their computer screens. At least for Facebook, that couldn’t be further from the truth. Content review offices look a lot like other Facebook offices. And because these teams deal with such serious issues, the environment they work in and support around them is important to their well-being.
Stop calling Twitter’s search ranking features a “bug,” says Brian Feldman.
It’s not a bug. We need to be clear about this — the issue here is not a bug, glitch, error, or whatever other synonym you can conjure up. Calling this a “bug” implies an outcome contrary to what should be expected by the code, and implies Twitter made a mistake. This is not what we normally think of when it comes to the sorting algorithms that power Twitter, or Facebook’s News Feed, or Google’s search engine, or YouTube’s recommendation system. These are programs designed to anticipate what a user wants based on a myriad number of signals and behaviors, and if the results they serve up are imperfect to a few users, that doesn’t mean the software is buggy. The results might not be politically helpful to a company, or they might be unpredictable. But they’re not a mistake.
Josh Constine argues that if Facebook wants to make more money from Instagram, it’s going to have to stop letting you skip ads. Gee thanks a lot Josh!
Talk to me
Questions? Comments? Weekend plans? email@example.com