What role do major institutions play in the promotion of extremism? Two days into this week, we’ve already gotten two important looks at the issue.
On Monday I told you about a report from danah boyd about the media’s role in amplifying “digital martyrs” like Alex Jones. (I pasted the wrong link into the newsletter yesterday — come on, Newton! — and so if you haven’t read it yet, there you go.)
Today comes a report from Rebecca Lewis looking at another kind of amplification: the closely linked network of conservative YouTube personalities who collaborate in videos and advance an extremist ideology. (Both reports, incidentally, come from the New York-based nonprofit Data & Society.)
Lewis set out to understand how YouTube in particular has become a thriving hub of far-right content. Starting with a handful of well-known conservative personalities, she began tracking their appearances on one another’s channels. When another personality popped up on one of these channels, she began charting that person’s path through YouTube as well. Eventually, she had watched hundreds of hours of video from 65 influencers across more than 80 channels.
After mapping the network, Lewis reports three findings.
- These influencers built an alternative media network by emphasizing their relatability, “authenticity,” and accessibility to their fans. They portray themselves as social underdogs, outcasts, and victims, giving them a countercultural ethos that can be attractive to younger viewers.
- The influencers have effectively promoted themselves using tactics including “ideological testimonials,” in which they recount their conversion from wayward leftists into right-thinking conservatives; search engine optimization, in which they use keywords common in more neutral and liberal-oriented videos to attract viewers; and “strategic controversy,” which is to say stunts.
- The influencers encourage people to adopt a more radical set of views over time by first encouraging them to reject all non-ideological media, and then introducing them to extremist figures who offer alternative worldviews.
Lewis notes that she is not the first scholar to examine radicalization on YouTube; she cites Zeynep Tufekci’s New York Times piece and ex-YouTube employee Guillaume Chaslot’s work on the subject. Where she differs from her predecessors is in moving away from the now-standard critique that YouTube’s core problem is technological in nature. Previous work has focused on how quickly recommendation algorithms push viewers to extremist content; Lewis says the problem lies in the content itself. She writes:
While these articles identify a real problem, they treat radicalization as a fundamentally technical problem. What the section below showcases is that radicalization on YouTube is also a fundamentally social problem. Thus, even if YouTube altered or fully removed its content recommendation algorithms, the AIN [the “Alternative Influence Network” of creators Lewis mapped] would still provide a pathway for radicalization.
Lewis’s proposed solution is that YouTube should develop a strict value-based code of behavior, actively monitor the content of influencers’ videos, and discipline violators accordingly:
There is an undercurrent to this report that is worth making explicit: in many ways, YouTube is built to incentivize the behavior of these political influencers. YouTube monetizes influence for everyone, regardless of how harmful their belief systems are. The platform, and its parent company, have allowed racist, misogynist, and harassing content to remain online – and in many cases, to generate advertising revenue – as long as it does not explicitly include slurs. YouTube also profits directly from features like Super Chat, which often incentivizes “shocking” content. In other words, the type of content and engagement created by the AIN fits neatly into YouTube’s business model.
The website similarly seeks policies that offer it protection for hosting user-generated content while simultaneously facing minimal liability for what those users say. This report has shown how these attempts at objectivity are being exploited by users who fundamentally reject objectivity as a valid stance. As a result, platforms like YouTube have an imperative to govern content and behavior for explicit values, such as the rejection of content that promotes white supremacy, regardless of whether it includes slurs.
It seems fair to assume that YouTube would reject this notion out of hand. (The criticism would start with “it doesn’t scale” and go from there.) But there are certainly smaller steps YouTube could take in the meantime. Lewis notes the glee with which one conservative provocateur received his plaque for attracting 1 million subscribers; surely, she writes, the company could choose to withhold trophies from people arguing against equality or targeting harassment at others.
In the meantime, I hope YouTube employees will at least read this report, if only to understand how some of its most influential users are exploiting its viral mechanics to promote white supremacy and other noxious views.
If, like me, you spend a lot of time looking around America and wonder what is going on, exactly, you’ll want to read Anne Applebaum’s long, discursive essay on how “the illiberal state” has made similar inroads in Poland, where she lives, and in Hungary. The essay’s overall effect is to remind you that people everywhere are basically the same, and in ways that threaten democracy. She concludes:
In truth, the argument about who gets to rule is never over, particularly in an era when people have rejected aristocracy, and no longer believe that leadership is inherited at birth or that the ruling class is endorsed by God. Some of us, in Europe and North America, have settled on the idea that various forms of democratic and economic competition are the fairest alternative to inherited or ordained power.
But we should not have been surprised—I should not have been surprised—when the principles of meritocracy and competition were challenged. Democracy and free markets can produce unsatisfying outcomes, after all, especially when badly regulated, or when nobody trusts the regulators, or when people are entering the contest from very different starting points. Sooner or later, the losers of the competition were always going to challenge the value of the competition itself.
With moderation very much in the news, Bertelsmann has agreed to merge the part of its business that offers content moderation services for Facebook and other companies with a competitor, Sara Germano reports:
Bertelsmann’s Arvato customer relations management division runs moderation centers in Germany and elsewhere, where workers pore over content on Facebook that has been flagged as objectionable. The task has taken on a higher profile as governments increasingly demand moderation of online content.
But the unit hasn’t been growing as quickly as Bertelsmann had hoped, and the company said in January it was considering options for the business. On Tuesday, it announced a merger between the unit and the customer relations business of Morocco-based Saham Group to form a new company in which both firms will retain a 50% stake.
Now here’s a way that social networks can benefit democracy. As Chaim Gartenberg reports, Instagram will put ads in users’ feeds and in Stories with links to help users register to vote.
To provide accurate voting information, Instagram is partnering with TurboVote, which promises “up-to-date information on how to register, how to update their registration, how to look up their state’s voting rules and more.”
Additionally, Instagram is planning to offer “I Voted” story stickers on Election Day. In addition to letting you brag to all your friends about how good you are at doing your civic duty, it will also link to Get to the Polls to help others find their polling location.
Wired turned 25 — happy birthday, Wired! — and talked to tech-world luminaries about whatever said luminaries would agree to make time for. Mark Zuckerberg chose to talk about immigration. Honestly the photo caption is more interesting than the immigration stuff, which we have heard from Zuckerberg before:
“During the photo shoot, Mark’s dog, Beast, stayed by photographer Michelle Groskopf’s side the entire time … until she asked Mark to sit in a chair in his sunroom. At that point, Beast leapt across the room onto Mark’s lap. He responded with an ‘oof!’ and we all laughed.”
Here’s a big new lawsuit against Facebook from the ACLU and the Communication Workers of America alleging that Facebook’s ad platform enables gender-based discrimination. My colleague Jake Kastrenakes:
The American Civil Liberties Union is filing charges against Facebook for allegedly running discriminatory job ads that appeared only to men, something that is illegal under the Civil Rights Act. The ACLU says that Facebook’s platform allowed 10 employers, including a software developer and a police department, to run ads that excluded women and non-binary users, and it says the social network should be held liable for creating the tools to offer these allegedly discriminatory ads.
The complaint is being filed with the United States Equal Employment Opportunity Commission, a federal agency that oversees charges of workplace discrimination. It’s filed on behalf of three women who say they were discriminated against, but the complaint also hopes to cover “millions” of women who were excluded from seeing job ads by Facebook and various employers.
My read of this Journal story is basically that Facebook wanted access to financial data for use in building chatbots, which no one wound up using. I don’t think there’s much more to it than that.
Aaron Tilley and Sarah Kuranda report that Facebook wants to build its own augmented-reality chips, likely to kickstart its inevitable (and probably already in development?) AR headset:
It isn’t clear whether Facebook will eventually release its own AR headset using the custom chips it is developing. If such a product emerges, it could be years away from being released since Facebook Reality Labs—previously known as Oculus Research—is typically focused on long-term projects. Facebook is also investing in chip development for artificial intelligence and data center purposes, as Bloomberg has reported.
On stage at Code Commerce, Kurt Wagner asked Instagram business lead Vishal Shah if he was building a shopping app, as I reported earlier this month. He didn’t say no! (The answer is yes.)
Some people make videos where they whisper, crinkle up paper, and make other tiny, stimulating noises for enthusiastic audiences. Amid some sort of sex panic, PayPal is banning these creators for life, Violet Blue reports:
This past week, nonsexual ASMR video creators Sharon DuBois (ASMR Glow), Scottish Murmurs, Creative Cal, and Rose ASMR have been permanently banned from PayPal and had their funds frozen for 180 days. Like with YouTube’s July censorship sweep, the women create videos of sound effects and have been expelled from the payment utility under alleged violations of the company’s sexual content policy prohibitions. ASMR community websites are now warning all creators to avoid PayPal. Engadget reached out to PayPal regarding the banning of ASMR video creators, the 8chan sex-harassment campaign and how PayPal plans to protect users from this type of abuse — but we did not hear back before publication time.
Here’s a good reminder that the bulk of misinformation is still financially motivated. (Also, like, wow to all of this.)
Ashley’s thread went viral over the weekend, with more than 330,000 likes and 77,000 retweets. But none of these images are actually of her. They belong to a cam model who actually specializes in feederism, according to the model’s blog, which Motherboard reviewed.
The “Ashley” account was suspended less than two hours after I contacted Twitter to ask whether this account violates the platform’s impersonation rules.
Oculus co-founder Palmer Luckey tells Wired that he once tried to build, um, this:
A bypass for my peripheral nervous system. Rather than waiting a few hundred milliseconds for a signal to travel from my brain to my extremities, I tried to capture it closer to the source and relay it electronically. If you could do this with all of your limbs, not just one finger or one arm, you could potentially have superhuman reflexes without doing a bunch of crazy work on, let’s say, exoskeletons or predictive analytics.
Jack Dorsey tells Lauren Goode how much he likes ProPublica, and it is very charming. (ProPublica is fantastic.)
Here’s a surprise from late Monday: Twitter is working on a way to let you switch back easily from a ranked to an unranked feed. Both have their uses — ranked is great for catching up; unranked is great for living in the moment — and so I’m delighted to see Twitter accommodating both as first-class citizens in the app.
If you see misinformation online and decide you would like to report it to The New York Times instead of me, here’s how. (Please at least CC me, though.)
HQ has a new game coming in October. Also: it has a new CEO! Also, it has generated more than $10 million in advertising revenue, which I found pretty impressive! Maybe there’s some life in the old viral phenomenon yet.
Creators with more than 50,000 subscribers will be able to sell viewers a $4.99 monthly membership in exchange for exclusive perks, Megan Farrokhmanesh reports. Previously, the feature required at least 100,000 subscribers.
Finally, an API to let you hunt down influencers and — we can only hope — subdue them.
Did you know that social platforms are most often used on phones? Well, there are a couple of new ones!
Charlie Warzel reflects on the danah boyd report and encourages reporters to employ “defensive journalism” when writing about extremists:
What I take from her is not that we journalists completely lost the plot (though we could do without being so sensitive!), but that there’s room for so much more sophistication in our work and what happens after we hit publish.
This notion reminds me a bit of defensive driving. Though the term ‘defensive journalism’ sounds ridiculous, I think this is a helpful way to think of reporting in the era of the platforms and the information war. Like defensive driving, defensive journalism isn’t about aggression, it’s about staying vigilant and anticipating how others might ignore or break the rules. It’s a heightened sense of awareness and skepticism (that should be very familiar to journalists) that doesn’t just keep you safe, but everyone else on the road, too.
Hot on the heels of news about the Twitter timeline, Jason Kottke suggests that Twitter create “smart accounts” — personalized collections of tweets that you can follow or unfollow. So you could follow a smart account that includes “likes from friends,” for example, “trends,” or “popular threads.” I love this idea.
As noted above, in an incredible self-own, I mis-pasted the link to yesterday’s lead item! Incredibly, only one of you told me about this. Anyway, thank you Roger McNamee! And the rest of you, really do read danah boyd’s talk.
In Friday’s newsletter I included an item about Facebook declining to remove a post that seemingly called for violence against a politician in the Philippines. A spokeswoman wrote me to say that it later decided to remove the post.
And finally ...
Say you are Elon Musk. You’re having a terrible week for lots of reasons, including a possible criminal probe of your tweets. But you also have this other Twitter problem, which is that people impersonate your Twitter account to try to scam people into buying them cryptocurrency. And so you reach out to the creator of joke-cryptocurrency Dogecoin to get some sort of script that … prevents this from happening? Somehow? Details are scarce. But as David Canellis notes:
The scambots are so prevalent that Twitter was forced to add a new rule: changing your name to Elon Musk will get you banned from the platform.
Ironically, just a few months ago, Musk joked about the prevalence of scambots on Twitter – and how impressed he is by the people behind them. It seems they no longer amuse him.
I would venture to say there are a number of things from Elon Musk’s recent past that no longer amuse him!
Talk to me
Send me tips, questions, comments, corrections, and radicalizing videos: firstname.lastname@example.org.