Does Twitter have a secret weapon for silencing trolls?

A British lawmaker complained of abuse. Suddenly, the abuse stopped.

Parliament via Flickr

Luciana Berger, a member of British Parliament, has been receiving a stream of anti-Semitic abuse on Twitter. It only escalated after a man was jailed for tweeting her a picture with a Star of David superimposed on her forehead and the text "Hitler was Right." But over the last few weeks, the abuse began to disappear. Her harassers hadn’t gone away, and Twitter wasn't removing abusive tweets after the fact, as it sometimes does, or suspending accounts as reports came in. Instead, the abuse was being blocked by what seems to be an entirely new anti-abuse filter.

For a while, at least, Berger didn’t receive any tweets containing anti-Semitic slurs, including relatively innocuous words like "rat." If an account attempted to @-mention her in a tweet containing certain slurs, it would receive an error message, and the tweet would not go through. Frustrated by their inability to tweet at Berger, the harassers began finding novel ways to defeat the filter, like inserting dashes between the letters of slurs or embedding the words in images to slip past the text filter. One white supremacist site documented various ways to evade Twitter’s censorship, urging others to "keep this rolling, no matter what."
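Twitter has not said how the filter works, but the behavior harassers described (a send-time error whenever a tweet both @-mentioned Berger and contained a blacklisted term) is consistent with a simple keyword check. The sketch below is purely illustrative: the handle, the term list, and the function names are assumptions, not anything Twitter has confirmed. It also shows why inserting dashes was enough to defeat word-level matching.

    # Hypothetical sketch of a per-recipient keyword filter: tweets that @-mention
    # a protected account and contain a blacklisted term are rejected at send time.
    # All names and lists here are assumptions; Twitter has not published details.
    import re

    PROTECTED_MENTIONS = {"@lucianaberger"}   # hypothetical protected handle
    BLACKLISTED_TERMS = {"rat"}               # illustrative term only

    def should_block(tweet_text: str) -> bool:
        """Return True if the tweet mentions a protected account and contains a blacklisted term."""
        words = [w.strip(".,!?").lower() for w in tweet_text.split()]
        if not any(w in PROTECTED_MENTIONS for w in words):
            return False
        lowered = tweet_text.lower()
        return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in BLACKLISTED_TERMS)

    # A word-boundary match is trivially evaded, which is exactly what happened:
    print(should_block("@lucianaberger rat"))    # True  -> tweet rejected with an error
    print(should_block("@lucianaberger r-a-t"))  # False -> dashes slip past the filter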

In recent months, Twitter has come under fire for the proliferation of harassment on its platform—in particular, gendered harassment. (According to the Pew Research Center, women online are more at risk from extreme forms of harassment like "physical threats, stalking, and sexual abuse.") Twitter first implemented the ability to report abuse in 2013, in response to the flood of harassment received by feminist activist Caroline Criado-Perez. The recent surge in harassment has again resulted in calls for Twitter to "fix" its harassment problem, whether by reducing anonymity or by creating better blocking tools that could mass-block harassing accounts or pre-emptively block recently created accounts that tweet at you. (The Blockbot, Block Together, and GG Autoblocker are all third-party attempts to achieve the latter.) Last week, the nonprofit Women, Action, & the Media announced a partnership with Twitter to specifically track and address gendered harassment.

While some may welcome the mechanism deployed against Berger’s trolls as a step in the right direction, the move is troubling to free speech advocates. Many of the proposals to deal with online abuse clash with Twitter’s once-vaunted stance as "the free speech wing of the free speech party," but this particular instance seems less like an attempt to navigate between free speech and user safety, and more like a case of exceptionalism for a politician whose abuse has made headlines in the United Kingdom. The filter, which Twitter has not discussed publicly, does not appear to be intended as a universal fix for the harassment experienced by less prominent users on the platform, such as the women targeted by Gamergate.

Before the filter was activated, Luciana Berger and her fellow MP John Mann had announced plans to visit Twitter’s European headquarters to talk to higher-ups about the abuse. Parliament is currently discussing more punitive laws against online trolling, including a demand from Mann for a way to ban miscreants from "specific parts of social media or, if necessary, to the Internet as a whole."

In a letter to Berger that is quoted in part here, Twitter’s head of global safety outreach framed efforts over the past year as including architectural solutions to harassment. "Our strategy has been to create multiple layers of defense, involving both technical infrastructure and human review, because abusive users often are highly motivated and creative about subverting anti-abuse mechanisms." The letter goes on to describe known mechanisms, like the use of "signals and reports from Twitter users to prioritize the review of abusive content," and hitherto unknown mechanisms like "mandatory phone number verification for accounts that indicate engagement in abusive activity." However, the letter says nothing about a selective filter for specific words. To achieve that result, the company appears to have used an entirely new tool outside of its usual arsenal.

A source familiar with the incident told us, "Things were used that were definitely abnormal."

A former engineer at Twitter, speaking on the condition of anonymity, agreed, saying, "There’s no system expressly designed to censor communication between individuals. … It’s not normal, what they’re doing."

He and another former Twitter employee speculated that the censorship might have been repurposed from anti-spam tools—in particular, BotMaker, which is described here in an engineering blog post by Twitter. BotMaker can, according to Twitter, "deny any Tweets" that match certain conditions. A tweet that runs afoul of BotMaker is simply prevented from being sent; an error message pops up instead. The system is, according to a source, "really open-ended" and is frequently edited by contractors under a wide range of conditions to fight spam effectively.
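Twitter’s engineering post describes BotMaker as evaluating rules against events in real time and denying any tweet that matches; the rule language itself is internal and not public. What follows is a rough sketch of that idea only: the Rule structure, field names, and example condition are assumptions meant to show how an open-ended, contractor-editable deny rule could block a tweet before it is ever written.

    # Rough sketch of a BotMaker-style rule check: open-ended predicates are
    # evaluated before a tweet is written, and a matching "deny" rule stops it.
    # The structure, field names, and example rule are assumptions for
    # illustration; Twitter's actual rule language is not public.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Rule:
        name: str
        matches: Callable[[Dict], bool]   # predicate over a tweet event
        action: str                       # e.g. "deny"

    def evaluate(tweet_event: Dict, rules: List[Rule]) -> str:
        """Return 'deny' if any deny rule matches the tweet event, otherwise 'allow'."""
        for rule in rules:
            if rule.action == "deny" and rule.matches(tweet_event):
                return "deny"
        return "allow"

    # A hypothetical rule resembling the filter described in this article:
    rules = [
        Rule(
            name="block_slurs_at_protected_account",
            matches=lambda e: "@lucianaberger" in e["mentions"] and "rat" in e["text"].lower(),
            action="deny",
        ),
    ]

    print(evaluate({"mentions": ["@lucianaberger"], "text": "you rat"}, rules))  # deny
    print(evaluate({"mentions": ["@someoneelse"], "text": "hello"}, rules))      # allow

Because the predicates are arbitrary, the same machinery built to deny spam could just as easily deny a slur aimed at one account, which is why the former employees suspect BotMaker was repurposed here.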

When asked whether a new tool had been used, or BotMaker repurposed, a Twitter spokesperson replied: "We regularly refine and review our spam tools to identify serial accounts and reduce targeted abuse. Individual users and coordinated campaigns sometimes report abusive content as spam and accounts may be flagged mistakenly in those situations."

"Things were used that were definitely abnormal"

It’s not clear whether this filter is still in place. (I attempted to test it with "rat," the only word I was willing to try to tweet, and my tweet did go through. The filter may have been removed, the word "rat" may have been taken off the blacklist, or the filter may only have applied to recently created accounts.)

It’s hard to shed a tear for a few missing slurs, but the way they were censored is deeply alarming to free speech activists like Eva Galperin of the Electronic Frontier Foundation. "Even white supremacists are entitled to free speech when it’s not in violation of the terms of service. Just deciding you’re going to censor someone’s speech because you don’t like the potential political ramifications for your company is deeply unethical. The big point here is that someone on the abuse team was worried about the ramifications for Twitter. That’s the part that’s particularly gross."

What’s worrisome to free speech advocacy groups like the EFF about this incident is how quietly it happened. Others may see the bigger problem as the fact that it appears to have been done for the benefit of a single, high-profile user rather than to fix Twitter’s larger harassment issues. The selective censorship doesn’t seem to reflect a change in Twitter’s abuse policies or in how it handles abuse directed at the average user; aside from a vague public statement by Twitter that elides the specific details of the unprecedented move, and a few mostly unread complaints by white supremacists, the entire thing could have gone unnoticed.

Eva Galperin thinks incidents like these could be kept in check by transparency reports documenting the application of the terms of service, similar to how Twitter already puts out transparency reports for government requests and DMCA notices. But while a transparency report might offer users better information about how and why their tweets are removed, some still worry about the free-speech ramifications of what transpired. One source familiar with the matter said that the tools Twitter is testing "are extremely aggressive and could be preventing political speech down the road." He added, "Are these systems going to be used whenever politicians are upset about something?"
