Last August, as it inched toward banning Alex Jones from its platform, Twitter invited the New York Times to sit in on a meeting about why it was taking so long. It would later emerge that Jones had already violated the company’s rules at least seven times, but CEO Jack Dorsey still hesitated to pull the trigger. By the meeting’s end, Dorsey had instructed his underlings to create a new policy banning “dehumanizing speech.”
The underlings spent the next year trying to figure out what that meant.
A sweeping draft policy was posted in September. Today, the company unveiled the finished product: an update to its rules on hateful conduct narrowly banning speech that dehumanizes others on the basis of religion. It is no longer kosher to call people maggots, or vermin, or viruses, for keeping kosher. Any existing tweet that breaks the rule will have to be deleted if it gets reported — which has already tripped up Louis Farrakhan — and tweeting dehumanizing anti-religious sentiment in the future could lead to account suspensions or even outright bans.
All of this was a somewhat unexpected outcome: the original Times story had not even mentioned religion. In a new piece, the Times’ Kate Conger says Twitter ultimately decided that religion was the easiest place to start in implementing the policy:
“While we have started with religion, our intention has always been and continues to be an expansion to all protected categories,” Jerrel Peterson, Twitter’s head of safety policy, said in an interview. “We just want to be methodical.”
The scaling back of Twitter’s efforts to define dehumanizing speech illustrates the company’s challenges as it sorts through what to allow on its platform. While the new guidelines help it draw starker lines around what it will and will not tolerate, it took Twitter nearly a year to put together the rules — and even then they are just a fraction of the policy that it originally said it intended to create.
That’s all fine as far as it goes, and yet you can still read it and think — really? Twitter banned saying “Jews are vermin” on a Tuesday in 2019? Even for a company that is notorious for moving at a geologic pace, today’s update feels overdue.
It also feels redundant.
Read the Twitter rules and you’ll see that they already ban “inciting fear about a protected category,” using the example “all [religious group] are terrorists.” They also ban “hateful imagery,” including swastikas. And yet as most Twitter users will tell you, vicious anti-Semites and open Nazis still appear in the timeline all too often — to the point that Jack Dorsey spent much of his winter podcast tour taking questions about Nazis’ durable presence on the service. (Twitter says the key change here is that the rules previously applied only to tweets targeted at individuals. So you could tweet “Protestants are scum” but not “Casey’s Protestant scum.”)
New policies will always be needed to account for the ever-evolving nature of human speech and shifting cultural norms. But they will never be sufficient to the task of keeping users feeling safe. Far more important is that the policies are actually applied.
The Times story does include comments from Twitter about how it will train its force of content moderators to apply the new rules. And the company has begun reporting high-level data about its enforcement activities, giving us a sense of the scale of the problem that Twitter faces.
The most recent such report found that Twitter users reported 11 million unique accounts between July and December 2018, up 19 percent from the previous reporting period. And yet Twitter took action against just 250,806 accounts — which was down 4 percent from the previous period.
The data doesn’t get any more granular than that, so it’s impossible to judge the efficacy of Twitter moderation from the report. But the numbers suggest that Twitter users’ frustration with the product greatly exceeds moderators’ willingness — or ability — to do anything about it. Viewed that way, Twitter doesn’t have a problem writing policies — it has a problem acting on them.
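To make the gap concrete, here's the back-of-the-envelope math behind that last claim. (Note the two figures aren't a strict funnel — an account actioned in the period needn't be one of the 11 million reported in it — but the ratio still illustrates the scale mismatch.)

```python
# Figures from Twitter's July-December 2018 transparency report,
# as cited above.
reported_accounts = 11_000_000   # unique accounts reported by users
actioned_accounts = 250_806      # accounts Twitter took action against

# Share of reported accounts that saw any enforcement action
action_rate = actioned_accounts / reported_accounts
print(f"{action_rate:.1%}")  # roughly 2.3%
```

In other words, for every hundred accounts users flagged, Twitter acted on about two.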
Citing a former assistant US attorney who went on the record, Michael Isikoff reports that Russian trolls originated the hoax that former DNC staffer Seth Rich was murdered to cover up corruption in Hillary Clinton’s campaign. Fox News ran wild with the story, and the fallout for Rich’s family was brutal.
In the summer of 2016, Russian intelligence agents secretly planted a fake report claiming that Democratic National Committee staffer Seth Rich was gunned down by a squad of assassins working for Hillary Clinton, giving rise to a notorious conspiracy theory that captivated conservative activists and was later promoted from inside President Trump’s White House, a Yahoo News investigation has found.
Russia’s foreign intelligence service, known as the SVR, first circulated a phony “bulletin” — disguised to read as a real intelligence report — about the alleged murder of the former DNC staffer on July 13, 2016, according to the U.S. federal prosecutor who was in charge of the Rich case. That was just three days after Rich, 27, was killed in what police believed was a botched robbery while walking home to his group house in the Bloomingdale neighborhood of Washington, D.C., about 30 blocks north of the Capitol.
Colin Lecher reports:
President Trump violated the First Amendment by blocking his critics on Twitter, a federal appeals court ruled today, shutting down the White House’s request to overturn a lower court’s decision.
In the latest case of an employee uprising at a big tech company, Amazon warehouse workers plan to strike to protest their low wages on Prime Day next week. Josh Eidelson and Spencer Soper report:
Workers at a Shakopee, Minnesota, fulfillment center plan a six-hour work stoppage July 15, the first day of Prime Day. Amazon started the event five years ago, using deep discounts on televisions, toys and clothes to attract and retain Prime members, who pay subscription fees in exchange for free shipping and other perks.
“Amazon is going to be telling one story about itself, which is they can ship a Kindle to your house in one day, isn’t that wonderful,” said William Stolz, one of the Shakopee employees organizing the strike. “We want to take the opportunity to talk about what it takes to make that work happen and put pressure on Amazon to protect us and provide safe, reliable jobs.”
Ben Brody reports that the chairman of the Federal Trade Commission is asking about potentially requiring YouTube to disable ads for children:
During a July 1 call, Chairman Joseph Simons and fellow Republican Commissioner Noah Phillips suggested the world’s largest video site wouldn’t need to move all children’s content to a separate platform as advocates have proposed, according to the person. Instead, individual channels could disable advertising to bring the site into line with a U.S. law’s ban on collecting information on children under age 13 without parental permission.
The FTC is investigating Google’s YouTube for potential violations of the Children’s Online Privacy Protection Act. The heads of two kids groups who had previously filed complaints against the site participated in the conversation, the person said.
Interesting note in Amy Harmon’s piece on the debate that Detroit is currently having over facial recognition and surveillance: no one even knows why facial recognition algorithms are racist. (“Facial recognition software marketed by Amazon misidentified darker-skinned women as men 31 percent of the time.”)
It is not clear why facial recognition algorithms perform differently on different racial groups, researchers say. One reason may be that the algorithms, which learn to recognize patterns in faces by looking at large numbers of them, are not being trained on a diverse enough array of photographs.
But Kevin Bowyer, a Notre Dame computer scientist, said that was not the case for a study he recently published. Nor is it certain that skin tone is the culprit: Facial structure, hairstyles and other factors may contribute.
Mark Sullivan interviews Noah Feldman, who helped to come up with the idea of Facebook’s forthcoming independent oversight board for content moderation.
FELDMAN: For example, think of the public discussion about whether Mark was correct when he hinted that he thought that the Holocaust denial shouldn’t be taken down as hate speech. And then a lot of people were angry and said, “How dare you say that?” The whole point, the point that Mark gets, is that Mark shouldn’t decide that! It shouldn’t be up to Mark. That is a genuinely hard balancing decision and it will be made in the future by this board. That’s a good example of the kind of hard content decision of what are the borders of hate speech. That’s sort of the architectural situation.
Then there’ll also be some situations where Facebook may have set a community standard that isn’t really consistent with its own values. And in those cases, I would envision the board actually saying to Facebook: “Listen, your community standard is wrong; it’s not consistent with the values that you articulated, and so you have to change it.”
Adam Minter explores why there hasn’t been a popular revolt against social-credit systems in China:
It’s chilling, dystopian — and likely to be quite popular. Chinese have already embraced a whole range of private and government systems that gather, aggregate and distribute records of digital and offline behavior. Depicted outside of China as a creepy digital panopticon, this network of so-called social-credit systems is seen within China as a means to generate something the country sorely lacks: trust. For that, perpetual surveillance and the loss of privacy are a small price to pay.
As in many developing countries, the fact is that China’s economic growth has outpaced its ability to create and police institutions that promote trust between citizens and businesses. For example, a decade after Chinese milk producers were revealed to be adulterating infant formula, Chinese parents still shun the country’s dairy industry and distrust of food producers remains almost universal. Meanwhile, China remains the counterfeiting capital of the world. Some of its most recognizable companies — including Alibaba Group Holding Ltd., Tencent Holdings Ltd., and Pinduoduo Inc. — are known as thriving markets for fakes, thereby undermining the credibility of Chinese e-commerce in general.
Facebook is setting harder goals around diversity, Kurt Wagner reports:
“We envision a company where in five years, at least fifty-percent of our workforce is made up of women, people who are Black, Hispanic, Native American, Pacific Islanders, people with two or more ethnicities, people with disabilities and veterans,” Maxine Williams, Facebook’s chief diversity officer, wrote in a blog post Tuesday.
Facebook released the new targets alongside its annual diversity report, which details the ethnic and gender breakdown of its workforce. Williams said that reaching 50% underrepresented employees in the U.S. was both a “stretch” and “ambitious.” About 43% of Facebook’s U.S. workers are currently from underrepresented groups.
Alex Stamos talks to Victoria Kwan about the disinformation landscape:
STAMOS: At some point, you start to realise it’s mostly scammers. This is the truth on the internet: there are tens of thousands of people whose entire job it is to push spam on Facebook. It’s their career. There are hundreds of times more people doing that than there are working in professional disinformation campaigns for governments. So they have to fundamentally accept that the sexiest explanation is usually not true.
This is something that companies go through, too. They’ll hire new analysts, and they jump to wild conclusions. ‘I found a Chinese IP, maybe it’s MSS [Ministry of State Security].’ It’s probably not MSS; it’s probably unpatched Windows bugs in China. This is also why you do the red-teaming, and why you have disinterested parties whose job it is to question the conclusions.
“Mark Zuckerberg’s family office says that there was no evidence to substantiate allegations of misconduct against Liam Booth, but he’s leaving anyway,” Rob Price reports.
Angel Au-Yeung’s in-depth look at the company behind Bumble finds “a corporate headquarters that more than a dozen former employees allege is toxic, especially for women.”
“While serving as the company’s CMO, I was told to act pretty for investors and make job candidates ‘horny’ to work for Badoo,” Jessica Powell, Badoo’s chief marketing officer from 2011 to 2012, says in an email. “I was once even asked to give a designer candidate a massage.” She says she refused to do so, adding that “female employees were routinely discussed in terms of their appearance.”
“When female staff spoke up, their concerns were ignored or minimized,” she adds, decrying a “misogynistic atmosphere.”
GitHub will no longer host new versions of an app that created synthetic porn of women, Joseph Cox reports:
“We do not proactively monitor user-generated content, but we do actively investigate abuse reports. In this case, we disabled the project because we found it to be in violation of our acceptable use policy,” a GitHub spokesperson told Motherboard in a statement. “We do not condone using GitHub for posting sexually obscene content and prohibit such conduct in our Terms of Service and Community Guidelines.”
We tend to talk a lot about evil bots around here, so I appreciated Alexandria Symonds’ charming profile of this extremely good and funny Twitter bot, which as it turns out is operated by a 24-year-old Googler:
The bot is a computer program that scrapes The Times’s website hourly for new articles and compares them against a memory bank of words the paper has previously used. The bot then tweets the words that appear to be new. On a typical day, it posts a handful of tweets, comprising neologisms, scientific terms, words in foreign languages and the occasional typo.
On June 28, its tweets read: zendale, zombiecorn, biofocals, parasexualized, dobok, doors’ll, gaytriarchy. (That last one was by far the most popular.)
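The core word-diffing step the Times describes can be sketched in a few lines. (This is my own illustrative version, not the bot's actual code: the function name and the tiny in-memory "memory bank" are invented, and the real bot also scrapes nytimes.com hourly and posts its finds via the Twitter API.)

```python
import re

def find_new_words(article_text, known_words):
    """Return alphabetically sorted words from article_text that are
    not yet in known_words, then add all tokens to the memory bank."""
    # Lowercase alphabetic tokens, allowing internal apostrophes/hyphens
    # (so "doors'll" survives tokenization intact)
    tokens = set(re.findall(r"[a-z]+(?:['-][a-z]+)*", article_text.lower()))
    new_words = sorted(tokens - known_words)
    known_words.update(tokens)
    return new_words

# A toy memory bank standing in for every word the paper has printed
memory = {"the", "a", "in", "new", "words", "and"}
print(find_new_words("The new words: zombiecorn and gaytriarchy!", memory))
print(find_new_words("gaytriarchy appears again", memory))
```

The second call illustrates the "memory" half of the design: once a word has been seen, it never counts as new again.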
Three and a half years after shutting down its similar Creative Labs division, Facebook looks to be relaunching it with a worse name, Chaim Gartenberg reports:
Facebook is launching a new brand of experimental apps for consumers, developed under the “NPE Team, from Facebook” label. (NPE stands for “new product experimentation.”) The team will be developing new apps for iOS, Android, and the web, with a specific focus on consumer services, which is similar to Microsoft’s Garage group.
In a blog post announcing the new team, the company notes that it “decided to use this separate brand name to help set the appropriate expectations with users that NPE Team apps will change very rapidly and will be shut down if we learn that they’re not useful to people.”
Facebook is going to start taking a cut of creator revenue. But it also has some new goodies to pass out:
Ahead of VidCon, Facebook has announced a slew of monetization options for its creators, which include more paid groups, ad placement options, and packs of Stars that viewers can buy and send as tips during live streams. Facebook has been trying to lure video creators away from competitors like YouTube and Patreon with monetization features like Fan Subscriptions, a $4.99-a-month digital tip jar that gets fans exclusive content, which opened up to more creators earlier this year. The features announced today are meant to add more ways for creators to make money from the platform and customize fans’ experience when they visit their Facebook pages.
Jake Kastrenakes on a creator-friendly move YouTube is announcing ahead of VidCon this week:
Owners of copyrighted content — like a record label or a movie studio — will now have to say exactly where in a video their copyrighted material appears, which they didn’t have to do in the past when manually reporting infringement. That’ll allow creators to easily verify whether or not a claim is legitimate and to then edit out the content if they don’t want to deal with the repercussions, like losing revenue or having the video taken down.
Until now, copyright owners didn’t have to say where infringing content appeared when making a manual claim. That’s been the source of much frustration for creators, who would find themselves searching through lengthy videos to determine exactly what part was even at issue. The lack of detail made it hard to dispute the claims, and it meant that if a creator tried to edit potentially infringing content out, they’d have to wait and see if the copyright owner agreed that the problem was resolved before the claim would be let go.
Darius Kazemi has a fun new project in which he teaches you how to create and host your own social network. Let me know if any of you try this!
Brian Feldman takes down this absurd complaint in Vice — “a countercultural publication that does spon for Bank of America” — from a man whose Twitter account was banned after he sent death threats to the Mr. Peanut brand account.
Here’s what’s not a good prank: straightforwardly tweeting that you’ll put a bullet in someone’s brain. That’s only funny if you think contextless threats of violence indistinguishable from real online harassment are funny. Some people do though, and if that’s the case for you, I would like to wish you the best of luck starting high school in the fall and urge you not to put off your summer reading assignment until the last minute.
And finally ...
With facial recognition surveillance systems being deployed across America, citizens are justifiably concerned — and looking for solutions. Fortunately, you can fool many systems simply by joining a notorious music/wrestling/Faygo fandom known as the Insane Clown Posse. Ming Lee Newcomb reports:
It turns out that Juggalo face makeup cannot be accurately read by many facial recognition technologies. Most common programs identify areas of contrast — like those around the eyes, nose, and chin — and then compare those points to images within a database. The black bands frequently used in Juggalo makeup obscure the mouth and cover the chin, totally redefining a person’s key features.
As Twitter user @tahkion points out (via Yahoo!), the black-on-white face paint tricks most facial recognition into incorrectly reading a person’s jawline and, presumably, eye area.
You are going to look so good as a clown!
Talk to me
Send me tips, comments, questions, and photos of you as a Juggalo: email@example.com.