Last summer, during tech platforms’ long period of indecision over what to do about Alex Jones, I wondered why their community standards so often seemed to favor the bully over his victims. Algorithms built by YouTube, Facebook, and Twitter all worked to recruit new followers for Jones, but when some of those followers then forced parents of the victims of the Sandy Hook Elementary shooting to go into hiding, the platforms offered no support.
That’s why I was heartened on Tuesday to learn that YouTube is changing its policies to make victims of Sandy Hook, 9/11, and other historical tragedies a protected class. That means YouTubers will no longer be able to upload videos denying that these historical events happened. That reduces the likelihood that followers of conspiracy peddlers like Jones will be recruited into his hoax and begin harassing victims.
The move is part of a broader expansion of the service’s hate speech policies, which will also prohibit videos promoting Nazism and other supremacist ideologies for the first time. It will likely result in the removal of thousands of channels — some of which are trying to document white supremacy for journalistic or academic purposes — and I wrote about it in some detail today for The Verge.
But if you’ve been following platform news today, YouTube’s expanded hate speech policy isn’t likely the news you’ve been reading about. Rather, you’ve likely been reading about Steven Crowder and Carlos Maza — and YouTube’s clumsy efforts to address a controversy that calls into question its commitment to enforcing its policies against harassment.
(If you’re familiar with the rough timeline here, feel free to skip ahead. But I do want to walk through the twists and turns here, because collectively they illustrate the difficulty even the most venerable social networks have in creating and enforcing their community standards.)
At issue is a series of response videos that Crowder, a conservative pundit, has made to Maza’s popular YouTube series “Strikethrough.” Maza’s videos often analyze cases of journalistic malpractice among conservative cable news hosts, and for the past two years Crowder has responded with a series of racist and homophobic slurs. (Until today he also sold “Socialism is for F*gs” T-shirts on a page linked to his YouTube channel, in which the vowel is replaced with an image of a fig that isn’t fooling anyone.)
Maza created a supercut of the harassment he has received from Crowder, drawing mainstream media attention. On Tuesday night, after investigating Maza’s claims, YouTube posted a curt four-part reply on Twitter saying that “while we found language that was clearly hurtful, the videos as posted don’t violate our policies.”
Outrage followed, in part because YouTube did not present a public rationale for its decision. It did, however, provide one on background to journalists, which Gizmodo helpfully published. The gist: while Crowder did use offensive language, the overall thrust of his videos was to respond to Maza’s media criticism, and so YouTube considered it fair game. To YouTube, this appeared to be a case of creators badmouthing one another, which happens all the time. That Maza has been doxxed and threatened appears not to have figured into the discussion.
On Wednesday, everything got a little weirder. In a set of follow-up tweets, YouTube told Maza that it had demonetized Crowder’s channel, pending the removal of a link to his anti-gay T-shirt. This led to more outrage — the homophobia is fine, but the merch sales are a step too far? — until YouTube posted another follow-up, saying Crowder would have to address other unspecified “issues” with his account to get it back in good standing.
All of which offers little consolation to Maza. But if you squint — and are willing to give a monolithic corporation the benefit of the doubt, in a spirit of extreme generosity — you can intuit the existence of a group of frustrated YouTube employees gradually pushing their employer to do the slightly less bad thing. In fact, as Caroline O’Donovan reported in BuzzFeed, more than 100 Googlers have signed a petition demanding that the company remove Pride branding from its social media accounts. (June, after all, is Pride month.)
A lot of chatter on Twitter (and in Vox Media Slack rooms) today has argued that this is an easy call. A schoolyard bully has been calling a guy names for years, on a platform that purports to ban “content that makes hurtful and negative personal comments/videos about another person.” If repeatedly calling someone a “lispy queer” is not a hurtful and negative personal comment, what is?
At the same time, it’s clear that in this case at least, YouTube does not believe it can enforce that particular rule against this particular creator. So what else might the company do?
That brings me back to Alex Jones.
One, Jones, like Crowder, was typically careful not to make explicit calls for his followers to harass people. But they did so anyway, and the result — that parents of murdered children can no longer regularly visit their children’s graves — shocked our conscience.
Two, the effect of Jones’ harassment grew as his channel grew — something YouTube helped him do by recruiting new subscribers for him through recommendations. As Jones’ base grew into the millions, his videos carried with them an ever-growing risk of inspiring real-world violence.
Three, YouTube (and other platforms) made inconsistent, sometimes baffling statements about Jones before banning him. It left open the question of precisely when YouTube would enforce its standards against harassment, in ways that continue to haunt the service today.
So, here’s what I’d like to see YouTube do.
One, the definition of incitement should change to hold creators accountable when they dog whistle at their followers in a way that inspires harassment. Crowder has gone into overdrive making videos about his impending ban from YouTube at Maza’s hands, warning his millions of subscribers that corporate media is about to crush them and stifle all free speech. It’s no wonder that harassment of Maza has only increased since then, proliferating across social networks.
But imagine if YouTube held creators accountable for what their followers did, whatever borderline language they used in an effort to get them to do it. Telling a creator that you’re suspending them because their followers harassed and doxxed someone they made a video about might encourage better creator behavior all around. Two things I like about this approach: one, it emphasizes getting the outcomes right, rather than the rules. And two, outcomes are generally easier to evaluate than intent — you either were doxxed, or you weren’t.
Two, YouTube should explicitly apply different standards to creators with millions of subscribers. A random vlogger with 100 followers railing against a corporate YouTube channel is barely of passing interest to me. But a creator with millions of followers has great power — and Spider-Man taught us long ago what ought to come with that. YouTube already holds its biggest creators to a different standard with its partner program, which describes the (opaque, byzantine, ever-changing) rules by which they are allowed to earn revenue. I see no reason why stars with enormous followings shouldn’t be held to a higher standard than the creative proletariat.
Finally, YouTube and its top executives need to start having all of these arguments with us in public. At the moment, it is a company terrified of its own user base — the company’s blog post today announcing a hate speech ban was unsigned, and reporters were asked to withhold the names of the executives they interviewed about it.
I fully support this, from a security standpoint. But it has proven awful from a communications standpoint. The company’s terse tweeted responses to impassioned pleas from its creators have left everyone mystified about what, if anything, the company stands for, aside from minimizing bad press. At least the background statement it provided to Gizmodo about Crowder offered something resembling a rationale. Here’s hoping the company starts putting those background statements on the record — because until YouTube starts standing behind its decisions with arguments rather than platitudes, we are doomed to spend eternity talking past each other.
I’m under no illusion that these policy changes would eliminate YouTube’s harassment problem. But I do think they would go a long way toward rooting its policies in durable principles. And principles are something that YouTube lately has had far too much trouble demonstrating.
Pushback
I made a dumb mistake in the email version of yesterday’s newsletter, in which I said that “Sign in with Apple” is mandatory for all developers. I should have added that it’s mandatory for all developers who enable third-party logins. So if you offer your own login tool but not a similar one from Google, Facebook, Twitter, or Snap, you’re in the clear.
But if you do enable third party logins, not only do you have to include Apple’s, but you have to put the Apple button on top. Hilarious detail from Stephen Nellis that broke after I put the newsletter to bed Tuesday:
The move to give Apple prime placement is significant because users often select the default or top option on apps. And Apple will require apps to offer its button if they want to offer options to login with Facebook or Google. […]
Apple’s suggestion to developers to place its login button above rival buttons is part of its “Human Interface Guidelines,” which are not formal requirements to pass App Store review. But many developers believe that following them is the surest way to gain approval.
Democracy
Teddy Schleifer says Washington’s newfound interest in antitrust stems partially from a surge in public support for regulation:
Take the Harris Poll, which surveys Americans on the corporate reputations of various companies. Google’s reputation fell 13 spots in Harris’ most recent poll issued in 2019, one of the most precipitous declines in the survey. One of the few companies that saw more reputational damage? Facebook, which fell 43 slots on the 100-company list to be about as popular — or unpopular, depending on how you look at it — as other scandal-plagued companies like Wells Fargo and the Trump Organization.
And when you dive into other survey research, you see that critiques of Google and Facebook are surprisingly bipartisan.
Jason Del Rey finds that the Federal Trade Commission is gathering information from Amazon’s competitors ahead of a potential antitrust case:
As of 2018, Amazon boasts more than 100 million Prime members worldwide.
But the FTC has shown interest in the question of whether this bundling of services allows Amazon to unfairly undercut competitors. One line of thinking: If Amazon doesn’t need to profit directly from the annual Prime membership fee — and it’s not clear that it does — how are competitors who have to make money from an individual service supposed to compete on price?
Nick Wingfield and Christopher Stern consider whether the threat of antitrust action will cause big tech companies to temper their behavior:
Several antitrust attorneys told The Information they wouldn’t be surprised to see big tech companies modify some of their conduct ahead of antitrust lawsuits, with the goal of softening some of their more controversial business practices. Some could put the kibosh on large acquisitions, while others may be less likely to impose technical restrictions on outside companies seeking to connect with their technologies.
Sigal Samuel gets expert opinion on what we should demand from companies building artificial intelligence:
We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
Transparency is the No. 1 concern on people’s minds, judging by the responses I received. “We’re not even fully aware of when an algorithm is being used to make decisions for us or about us,” Hosanagar told me.
An investigation into police officers’ Facebook posts finds a litany of racism and other disturbing content. Emily Hoerner and Rick Tulsky report:
Police officers saying bigoted and racist things online has been an issue since the beginning of social media. The behavior was especially scrutinized after the Black Lives Matter movement blasted into the national conversation — and that scrutiny has continued even after that movement began grappling with its future. What was never really captured was the scope of problematic online posts from police officers.
But a new review of police behavior on Facebook documents the systemic nature of the conduct across several departments. The Plain View Project, launched by Philadelphia lawyer Emily Baker-White, examined the accounts of about 2,900 officers from eight departments across the country and an additional 600 retired officers from those same departments. She compiled posts that represented troubling conduct in a database that is replete with racist imagery and memes, and in some cases long, vitriolic exchanges involving multiple officers.
Elsewhere
Blind is an anonymous social network, and I hate anonymous social networks, largely because they inevitably lead to bullying and harassment. Tesla has apparently had enough of the app, Melanie Ehrenkranz reports:
“Tesla is the only company that is blocking its own employees from accessing or signing up on Blind,” a Blind spokesperson told Gizmodo in an email. “We found out about this issue through emails from our users, saying they were unable to receive verification emails from us and from posts on the public channel, where already verified Tesla employees raised the issue. Then we looked into the verification rates and we could confirm that Tesla is preventing employees from accessing Blind.”
Margaret Sullivan reports that publishers have gained bipartisan support for a measure that would grant them a safe harbor exemption from antitrust laws in order to negotiate collectively against Google and Facebook:
Sen. John Neely Kennedy (R-LA) sees the two tech giants — the duopoly as they’re often called — as far too powerful in their damaging control of the news kingdom.
“They’ve pitted themselves against newspapers in a David-and-Goliath battle in which newspapers don’t have a stone to throw much less a slingshot to put it in,” Kennedy said in a statement.
Taylor Lorenz reports that the teens are making friends by sharing unsolicited images using an iOS feature called AirDrop:
Anyone who has accidentally left their AirDrop settings open around a group of teens is likely familiar with the deluge of memes, selfies, and notes that arrives so quickly it can often freeze your phone. “Another day another group of french teens trying to AirDrop me memes on the subway,” one woman tweeted. “In a crowd of teens and they keep trying to AirDrop me memes!!!” said another. One young Twitter user joked that she was going to a music festival last weekend “just to AirDrop.”
Launches
Starting today, you’ll be able to share your screen via Skype on a supported Android or iOS device.
Facebook now enables subscriptions for all publishers through its Instant Articles product. Points for effort, but I don’t know a single major US publisher who believes they can build a sustainable business using Instant Articles.
Takes
Three threads worth reading today. Click to expand!
And finally ...
When I was in middle school, people would invite you to join the “pen15 club,” and then write “pen15” on your arm, and then later in the day your classmates would start laughing because you had the word “penis” on your arm. From this true story came the outstanding Hulu series PEN15, and also now Facebook is maybe moving into a building in New York called Penn15?
Facebook had already committed to a different building, but now the developer of Penn15 is advertising Facebook’s presence in its brochures. Whoever becomes a tenant here, I just hope Penn15 will appear prominently on the badges they have to wear around the building.
Talk to me
Send me tips, comments, questions, and a cleverly worded dog whistle that will offend me without endangering monetization on your YouTube channel: casey@theverge.com.