In discussing my story last week on the secret lives of Facebook’s content moderators in America, interviewers often ask me how the job can make workers more susceptible to conspiracy theories. In my interviews with workers at a content moderation site in Phoenix, I heard over and over again how the work environment was full of people who had come to believe the fringe views that they were reviewing. As one of them put it to me, regarding the aftermath of the Parkland shootings:
“People really started to believe these posts they were supposed to be moderating,” she says. “They were saying, ‘Oh gosh, they weren’t really there. Look at this CNN video of David Hogg — he’s too old to be in school.’ People started Googling things instead of doing their jobs and looking into conspiracy theories about them. We were like, ‘Guys, no, this is the crazy stuff we’re supposed to be moderating. What are you doing?’”
Now journalists are beginning to investigate the mechanisms by which this change in views can happen. Today in The Verge, my colleague Mary Beth Griggs talks to Mike Wood, a psychologist at the University of Winchester who studies the spread of conspiracy theories. Wood says that existing research does not assess the effect of repeated exposure to conspiracy views on people’s beliefs. But research does show that people become more susceptible to fringe views when they are experiencing stress, he says:
Conspiracy theories do associate with stress. Basically, there’s been some research that’s showed that when people undergo a stressful life event — something like death of a family member, divorce, major disruption to their lives — conspiracy theories are more likely in that circumstance. So there is some indication that psychological stress can put people in this place where they’re looking around for new answers or they’re possibly trying to come to grips with the world in a new way.
We’ve got other research showing that when someone doesn’t feel in control of their life or in control of what’s happening to them, conspiracy theories seem more plausible, and that might have been what’s happening with these people. I’m not sure what their subjective psychological experience was at the time, but there is some data that suggests that can happen.
As I document in my piece, work as a content moderator is highly stressful. Workers’ time is managed down to the second, Facebook’s instructions about how to moderate individual posts can vary on an hourly basis, and making just a few mistakes can be enough to put a worker’s job at risk. Given that level of duress, it’s fair to wonder whether the job itself is a factor in workers’ likelihood of coming to believe conspiracy theories.
Meanwhile in OneZero, the new tech publication from Medium, Erin Schumaker talks to experts who speak to the power of repeated exposure to warp the human mind.
“The more often you see it, the more familiar something is, and the more familiar something is, the more believable it is,” says Jeff Hancock, communication professor and founding director of the Stanford Social Media Lab.
Conspiracy content is engineered to be persuasive. People accept these theories because they help make sense of a world that feels random; even if they seem far-fetched to the rest of us, they can offer some sense of comfort or security. And seeing those theories repeatedly pop up in your Facebook news feed “starts to undermine the sense that they are fringe,” says James Grimmelmann, a professor at Cornell Law School who studies internet law and social networks.
What to do? The obvious first step is research. Facebook told me it plans to conduct a survey of contractors’ “resiliency” in coming months that will allow the company to better understand its workers’ mental health. A question or two about conspiracy content could help Facebook begin to understand how widespread the issue is.
Second, Facebook could develop training materials that prepare workers for the possibility they will find themselves influenced by conspiracies. It should be disclosed to prospective workers as a possible effect of doing the job, and counselors should be encouraged to discuss the issue with workers during their regular interactions.
Finally, Facebook could create a knowledge base of known conspiracy theories for moderators to review as they go about their work.
One reason the company began hiring Americans is for what it calls “cultural context” — the idea that Americans will already be familiar with public figures, local slang, and other regional idiosyncrasies. But in practice, many workers lack that cultural context. One moderator told me she was embarrassed to mistakenly remove a video from the conservative provocateurs Diamond and Silk. She didn’t recognize them as public figures, and saw them only as two women who appeared to be bullying someone. (That someone turned out to be Ted Cruz, a US senator, the moderator told me.)
While it would be impossible to lay out all relevant cultural context for moderators, giving workers some sort of guide to popular conspiracy theories seems like a positive next step. A worker who has been prepared for the possibility that she may find herself exposed to and persuaded by anti-Semitic conspiracies would likely be better equipped to handle them.
And who knows? Resources developed to support moderators may prove to be useful elsewhere. After all, before any piece of conspiracy content reached a reviewer’s desk, it was lurking somewhere on Facebook, nudging someone further toward the fringe.
The Trauma Floor
In the wake of a disturbing virtual sexual assault in Roblox, a virtual community for children, the company is ramping up moderation efforts, Dean Takahashi reports:
Bhaumik hired Laura Higgins, an online safety expert in the United Kingdom, as Roblox’s first director of digital civility in January. Higgins, who was the online safety operations manager at the South West Grid for Learning, has been advising Roblox for more than 18 months. The company has more than 600 human moderators to patrol the content and behavior on Roblox, and it is dedicating more resources to the task of changing online behavior.
But it’s hard to stay ahead of the problem, Bhaumik said. Kids are smart and they come up with ingenious ways to get around the rules.
Patricia Sullivan digs into Arlington County’s agreement with Amazon, which appears designed to minimize public input:
Amazon will receive staggered payouts for occupying an increasing amount of office space near National Airport, and get at least two days’ notice when someone requests information it provides to Arlington County, under an incentives agreement the County Board will vote on March 16.
The $23 million agreement, which was provided to The Washington Post in advance of its public release later today, spells out for the first time how many square feet the online retail giant must fill each year in order to get money from the county’s hotel tax.
Samarth Bansal and Kiran Garimella survey the misinformation-filled world of WhatsApp’s closed political groups:
From outright falsehoods and partially-true misleading narratives to bigotry and hate, everything is circulated in the closed encrypted world of India’s most popular messaging application, with over 230 million active users. According to a 2017 Lokniti-CSDS Mood of the Nation (MOTN) survey, around one-sixth of WhatsApp users in India said they were members of a group started by a political leader or party.
Not all political discussion on WhatsApp is about consuming fake news. Our dataset — over a million messages collected from politically-motivated WhatsApp groups between 1st August and 4th December 2018 — has all kinds of information: long crafted text messages, infographics, political memes, and news videos. The focus of this analysis is restricted to the study of images—36% of all messages; 9% were videos.
In December, the Canadian government passed a law requiring tech platforms to create a registry of all political ads, and created penalties for failing to comply that include fines and jail time. Google now says it will ban political advertising in Canada rather than comply, reports Tom Cardoso:
“We’ve come to the decision that the best way for Google to comply with the Elections Act in the 2019 election cycle is actually to stop accepting elections ads as defined in the legislation,” said Colin McKay, Google Canada’s head of public policy and government relations. “It is painful for us.” […]
Aside from the ad registry requirement, Google also expressed concerns about how it would detect ads of a partisan nature, which may not specifically mention a candidate or party by name. “The challenge for us is that that definition is extremely broad,” Mr. McKay said.
Karen Weise, Manny Fernandez and John Eligon examine Amazon’s brutish negotiating tactics with governments around the country:
Virtually all of America’s largest businesses drive a hard bargain with governments, angling for benefits and financial incentives. Amazon, though, often plays politics with a distinctive message: Give us what we want, or we’ll leave and take our jobs elsewhere. The tactics help Amazon squeeze as much as possible out of politicians.
“They are just as cutthroat as can be,” said Alex Pearlstein, vice president at Market Street Services, which helps cities, including those with Amazon warehouses, attract employers.
Michael M. Grynbaum reports that big tech companies are no longer welcome at the annual gathering of conservatives. A warning: reading the following excerpt may cause your eyes to roll back so hard they fall out of your head.
At last week’s gathering here in a suburb of Washington, Silicon Valley’s only obvious presence was on the lips of exercised right-wing critics who whipped up the crowd by denouncing the American tech industry as an authoritarian hegemony intent on censoring their cause.
“Facebook, Google and Twitter are pushing a left-wing social agenda while marshaling their marketing power to shut conservative voices out of the marketplace,” said Senator Josh Hawley, a Missouri Republican, during a featured session with the ominous title “Blocked: This Panel Has Been Removed for Conservative Content.”
Spencer Soper and Rebecca Greenfield have a new tale of employee activism at Amazon. These folks had me at “Momazonians.”
Amazon has long resisted pleas to provide a backup day care benefit for employees, even though other tech companies have offered the perk for years. Now a group comprising hundreds of working moms is waging a campaign to persuade founder and Chief Executive Officer Jeff Bezos that providing help for parents dealing with flu outbreaks, school closures and other emergencies is not simply humane but good for the company, too.
Ben Collins has a classic freedom of reach case with this tale of a Qanon book riding the Amazon recommendation algorithm to a high place on the bestseller list:
A book that pushes the conspiracy theory Qanon climbed within the top 75 of all books sold on Amazon in recent days, pushed by Amazon’s algorithmically generated recommendations page.
“QAnon: An Invitation to the Great Awakening,” which has no stated author, ranked at No. 56 at press time, was featured in the algorithmically generated “Hot new releases” section on Amazon’s books landing page. The book makes a variety of outlandish claims without evidence, including that prominent Democrats murder and eat children and that the U.S. government created both AIDS and the movie Monsters, Inc.
Speaking of freedom-of-reach cases, Julia Carrie Wong reports that Amazon’s charity program is benefiting anti-vaccine zealots:
The AmazonSmile fundraising program – through which Amazon donates 0.5% of the purchase price of a shopper’s Amazon transactions to an organization of their choice – is promoted on the websites of four prominent anti-vaccine organizations: National Vaccine Information Center (NVIC), Physicians for Informed Consent, Learn the Risk, and Age of Autism.
Numerous other anti-vaccine organizations, including American Citizens for Health Choice (ACHC), National Health Freedom Coalition, Michigan for Vaccine Choice, Texans for Vaccine Freedom, A Voice for Choice and the Informed Consent Action Network are also listed by Amazon as eligible for the donations.
Nick Statt talks to Los Angeles writer and comedian Miel Bredouw about how Barstool Sports stole her content, filed a bogus counterclaim to her DMCA takedown notice, and got away with it. Twitter has no answer for this kind of behavior, and it ought to. (Incidentally, Miel’s podcast with Demi Adejuyigbe, Punch Up the Jam, is spectacular.)
Eventually, when Bredouw refused to respond, Barstool Sports filed its counter-notice, telling Twitter that Bredouw’s copyright strike wasn’t legitimate and that Barstool had “a good faith belief that the material should not have been removed.” And Twitter apparently bought it, according to a message she received from Twitter indicating the video would go back online after 10 days — unless she sought out a lawyer and took Barstool to court.
Twitter’s current policy states that when it receives a counter-notice, the case is no longer in the company’s hands; it’s been elevated to a legal matter that only lawyers and courts are equipped to handle. But Bredouw claims that the status quo effectively allows sites like Barstool to steal creators’ work and bully them into not fighting back. “This is not the first time this has happened to me where a large account has stolen a piece of content and I filed a DMCA and they filed a counter-notice,” Bredouw says. “There’s just this glaring loophole when a counter-DMCA is filed where you have to get a court order.”
Microsoft recently pivoted Hololens to the enterprise, and now Janko Roettgers reports that Oculus will soon be sold to businesses as well:
Facebook is looking to launch enterprise edition versions of its Oculus Go and Oculus Quest virtual reality (VR) headsets this year, according to a job listing published late last week. The listing spells out broader plans to bring augmented and virtual reality technology to the enterprise, starting with dedicated enterprise versions of the two headsets.
“Starting with VR, we are building an Oculus Go and Oculus Quest Enterprise edition expected to launch in 2019,” it reads in part. “Are you excited about how VR and AR can change the future of work? Join us to make it a reality.”
Facebook has a new effort to improve digital literacy in Asia:
Today we’re launching We Think Digital, an online education portal with interactive tutorials aimed at helping people think critically and share thoughtfully online. We designed the program in partnership with experts from across Asia Pacific, and aim to train 1 million people across 8 countries in Asia Pacific by 2020, with our resources available in 6 different languages.
Asia Pacific has a fast-growing internet population, with more than 2.21 billion people now online and 203 million new people joining in the past year, according to We Are Social’s 2019 Digital Trends Report. We Think Digital has been designed for new and existing internet users of all ages to develop the skills they need to safely enjoy digital technology, including critical thinking and empathy.
Russell Brandom says alarmism over deepfakes is not justified:
Even with deepfake filters’ limitations, they could be enough to scare political trolls away from the tactic. Uploading an algorithmically doctored video is likely to attract attention from automated filters, while conventional film editing and obvious lies won’t. Why take the risk?
It’s also not clear how useful deepfakes are for this kind of troll campaign. Most operations we’ve seen so far have been more about muddying the water than producing convincing evidence for a claim. In 2016, one of the starkest examples of fake news was the Facebook-fueled report that Pope Francis had endorsed Donald Trump. It was widely shared and completely false, the perfect example of fake news run amok. But the fake story offered no real evidence for the claim, just a cursory article on an otherwise unknown website. It wasn’t damaging because it was convincing; people just wanted to believe it. If you already think that Donald Trump is leading America toward the path of Christ, it won’t take much to convince you that the Pope thinks so, too. If you’re skeptical, a doctored video of a papal address probably won’t change your mind.
How should employees at Microsoft, Amazon, or other platforms think about the military contracts that their bosses are pursuing? Kevin Roose offers some caution rooted in history:
Dow was not known as a defense contractor — in fact, until its Pentagon contract, the business was best known for making industrial chemicals and household plastics like Saran Wrap.
But over the next few years, as Americans began seeing gruesome images of South Vietnamese children with horrific napalm burns, the antiwar movement set its sights on the company.
And finally ...
“Delete your account” is a fun thing you can say to anyone at any time, and I encourage you to do so. JetBlue is basically asking its Instagram followers to do just that, with a twist. Rather than deleting your entire account, you simply have to delete (or archive) all your photos, then post a single, JetBlue-promoting photo to your account, using the airline’s template.
On one hand, turning Instagram into a spam wonderland to promote an airline feels like a bad outcome. On the other hand — free flights. The choice is yours.
Talk to me
Send me tips, comments, questions, and strategies for ignoring conspiracy theories: firstname.lastname@example.org.