Hey, do you read The Interface regularly? We’re doing a short survey and would love your feedback on how we could make the newsletter even better. It would mean a lot to me if you took a few minutes to fill it out. Thanks!
Today, two stories about Facebook cracking down on bad guys.
It has been nearly six months since Alex Jones lost his infowar, getting banned by every major platform after a long career of bullying behavior. In the time since, his ability to attract new followers has been radically diminished — but not for lack of trying on his part.
Facebook has prevented him and his associates from creating new pages similar to the four that it banned last year over violations involving bullying, hate speech, and graphic violence. But the policy had a loophole that allowed administrators of existing pages to repurpose old pages — until today. Here’s me in The Verge:
Previously, Facebook would prevent administrators of banned pages from creating similar pages in the future. But the company found that some administrators have attempted to evade enforcement by repurposing pages that they had created before their bans in an effort to rebuild their online communities.
Today’s move marks the first time Facebook has removed pages in line with the updated policy. The company did not disclose all of the ways in which the freshly banned Jones pages resembled old pages, but said that they used similar titles. Jones is the creator of Infowars, which was kicked off platforms including Apple Podcasts, Twitter, and PayPal in addition to Facebook last summer.
It was the first major enforcement action Facebook undertook today. Pranav Dixit covers the second in BuzzFeed:
Facebook has banned four insurgent groups who have been fighting against Myanmar’s military from using its platform, according to a company blog post published Tuesday.
The banned groups include the Arakan Army (AA), the Myanmar National Democratic Alliance Army (MNDA), the Kachin Independence Army, and the Ta’ang National Liberation Army. Facebook said that all “praise, support, and representation” related to these groups will also be removed from the platform as soon as the company becomes aware of it.
Dixit reports that the MNDA has been blamed for at least 30 deaths, while the AA killed 13 police officers last month. Facebook says the removals did not come in response to a request from the Burmese government, which itself has been the recipient of multiple bans from Facebook.
On Twitter, journalist Kayleigh E. Long, who has written about Myanmar, worried that banning these “ethnic armed organizations” would silence legitimate political speech. “In this fell swoop, they’ll arguably be silencing civil society voices,” she wrote. “Like, are they going to ban every EAO in the world? Every military? Will they update blacklist as ceasefires are made & abrogated? Are activists from ethnic minorities engaged in armed struggle silenced? The implications of this could be sweeping and can’t have been thought through.”
But in a blog post, Facebook argued that it was trying to do the thing that critics have long asked it to: prevent its services from being used to incite violence:
“There is clear evidence that these organizations have been responsible for attacks against civilians and have engaged in violence in Myanmar, and we want to prevent them from using our services to further inflame tensions on the ground.”
In any case, the trend is clear around the world: Facebook is enforcing its policies around coordinated inauthentic behavior and incitement to violence in more places, and more publicly, than it has to date. On Thursday, the company reported removing such content in both Indonesia (207 pages, 800 Facebook accounts, 546 groups, 208 Instagram accounts) and Iran (262 pages, 356 Facebook accounts, three groups, 162 Instagram accounts).
On January 17th, it announced a new takedown of coordinated information operations in Russia. On January 10th, it was a takedown of a politically motivated spam network in the Philippines.
Of course, this rash of takedowns partly speaks to the scale of Facebook’s self-inflicted challenge. State-level actors are misusing its services around the world, sometimes to great effect. That the company has identified more of these actors is not necessarily a cause for celebration.
By now, “progress” has become one of Mark Zuckerberg’s most oft-repeated talking points. “We’ve made real progress on these issues and built some of the most advanced systems in the world to address them,” he wrote in his 15th anniversary post Monday. And: “It’s critical we continue making progress on these questions.”
One of Facebook’s key challenges is that it’s simultaneously working on hard problems across so many dimensions that it’s difficult to quantify what “progress” really looks like. Fighting information operations doesn’t reduce neatly to a handful of metrics that you can nudge up or down over time. It’s impossible for me to identify the goal posts that, if Facebook could only kick the ball through, would lead most of its critics to agree that the platform had been “fixed.”
And it’s for that reason I think the uptick in enforcement actions is notable. “Progress,” to Facebook, will look like a lot of things. But one of the things it probably looks like is more banning of state-level actors, in more countries, and making it harder for them to ever come back.
Ime Archibong, Facebook’s vice president of product partnerships, will meet with House and Senate staffers this week, David McCabe reports.
Archibong will brief staff members who work for members of the Senate Judiciary and Commerce committees — both of which will play a key role in the debate over a national privacy law — on Tuesday. Expect the company’s critics to take an interest: Sen. Ed Markey’s (D-Mass.) office confirmed his staff will attend the briefing.
Snopes is out, but a new fact checker has been added to Facebook’s roster. It’s called Lead Stories, Sara Fischer reports.
Zusha Elinson reports that devices made to make gun ownership safer, such as safety locks, have been caught up in bans intended to prevent gun sales on Facebook and Google:
Digital marketing giants including Facebook and Alphabet Inc.’s Google often reject marketing from companies like Zore, the maker of a high-tech, quick-release gun lock, due to policies intended to forbid ads for the firearms themselves, manufacturers of the devices say.
“It really blocked all the ways we wanted to get people,” said Eytan Morgenstern, Zore’s director of communications. “We’re selling to gun owners and they have to understand why it works.”
Peter Geoghegan writes about mysterious new spending on pro-Brexit Facebook ads:
A single pro-Brexit group with almost no public presence spent almost £50,000 on Facebook. Britain’s Future – which does not declare its funders and has no published address – is running hundreds of very localised targeted ads pushing for ‘no deal’.
Politicians and campaigners have called for greater transparency of political advertising. Labour MP Ben Bradshaw said: “We have no idea who these people are or where their money comes from. It shows again how unfit for purpose the rules are that govern online campaigning and the use of data.”
Pavel Polityuk reports on what Russians do when it isn’t election season in the United States:
Serhiy Demedyuk told Reuters the attackers were using virus-infected greeting cards, shopping invitations, offers for software updates and other malicious “phishing” material intended to steal passwords and personal information.
Ten weeks before the elections, hackers were also buying personal details of election officials, Demedyuk said, paying in cryptocurrency on the dark web, part of the internet accessible only through certain software and typically used anonymously.
Snap reported earnings today. User numbers were flat, sales revenue was up, and cash burn was down (to a mere $192 million in losses). The stock popped on the news.
This is basically two (good!) Reed Albergotti stories about Facebook stacked on top of one another — one about the notion of an ad-free Facebook subsidized by users, and one about why Facebook outsources content moderation. In short: to maintain industry-leading profit margins:
By implementing this philosophy, Facebook was able to create an advertising business with margins similar to Google’s. Even as Facebook’s hiring soared, its profits rose faster, according to regulatory filings. Around the time of the IPO, Ms. Sandberg once said in an internal meeting that she wanted to turn Facebook into the most profitable company in history, according to a person in the meeting.
Mr. Zuckerberg, meanwhile, has a strong competitive instinct and relished the idea of outshining Google, according to people who know him.
Julie Beck writes about what she sees as Facebook’s central feature: its unnatural preservation of weak social ties:
The social network is 15 years old this Monday, and in taking stock of the effects of its decade and a half of existence on people’s social lives, this is what stands out the most: Facebook is where friendships go to never quite die.
The site has created an entirely new category of relationship, one that simply couldn’t have existed for most of human history—the vestigial friendship. It’s the one you’ve evolved out of, the one that would normally have faded out of your life, but which, thanks to Facebook, is instead still hanging around. Having access to this diffuse network of people you once knew can be pleasant—a curio cabinet of memories—or annoying, if those good memories get spoiled by an old friend’s new posts, or helpful, if you need to poll a large group for information. But it is, above all, new and unusual.
Emily Dreyfuss talks to teenagers who have never known a world without Facebook. It’s remarkable how for non-Facebook-using teens, the social network is mostly a threat to be managed:
These children are aware of Facebook from the youngest of ages, and as they grow up, it becomes something they have to actively negotiate with their parents. Rather than the classic 21st-century worry about what kids are doing online, Facebook use requires a sort of role reversal. For the families who spoke to WIRED, it’s often the teens asking the parents to limit what they post, or how much time they spend on the site. While a few teens mentioned broader privacy issues and the impact Facebook is having on society, most focused on more immediate concerns—what their parents were posting about them.
Jordyn, a 19-year-old who has had her own Facebook account for a few years, recently discovered with horror that her mom had posted a photo of her from middle school “after a particularly traumatic haircut.” “I was super mad about it for sure. I had super curly hair and had to get it cut very short after I burnt some off!” she told WIRED over Twitter DM. “She teased me about it but took it down pretty quickly, so I didn’t mind much. I personally chose any photo she posted of me for almost a month after that.”
Sam Harris’ podcast is the latest stop on the Jack Dorsey Is Talking Podcast Tour 2019. Highlights: editable tweets may take different forms depending on whether you are correcting a typo or trying to clarify an older tweet. Dorsey also says the like button is empty and destructive! (He additionally recorded a YouTube conversation with Gad Saad, who is completely unknown to me.)
Are we talking enough about Reddit’s rebound? It seems to have tamped down on the worst parts of its community while growing ad revenue and just generally being a vibrant place for people to hang out online. And investors have noticed, Josh Constine reports:
As more people seek esoteric community and off-kilter entertainment online, Reddit continues to grow its link-sharing forums. Indeed, 330 million monthly active users now frequent its 150,000 Subreddits. That warrants the boost to its valuation, which previously reached $1.8 billion when it raised $200 million in July 2017. As of then, Reddit’s majority stake was still held by publisher Conde Nast, which bought in back in 2006 just a year after the site launched. Reddit had raised $250 million previously, so the new round will push it to $400 million to $550 million in total funding.
Kashmir Hill wraps up her excellent series on attempting to live beyond the reach of the five biggest tech companies with a week without Apple. (She’s planning a final week in which she attempts to block all five at once and I am worried about her safety!!!)
An EDM DJ staged a concert inside a video game on Saturday and as many as 10 million people may have been watching it simultaneously. The sourcing here is a little shaky, but if there was any doubt that Fortnite is a social space like any other that I write about here, there you go.
Months after TechCrunch caught Facebook deleting all of Mark Zuckerberg’s messages without telling the recipients — or deleting their messages — a neutered version of that same feature is now available to consumers.
Sarah Perez reports that YouTube is expanding its test of an Explore page, which is literally just the Instagram Explore page but for YouTube.
This year’s additions to our emoji language include 59 new objects, and are focused on inclusion.
Brendan Nyhan says we shouldn’t worry too much about fake news:
More than two years later, we can now evaluate these claims. And it turns out that many of the initial conclusions that observers reached about the scope of fake news consumption, and its effects on our politics, were exaggerated or incorrect. Relatively few people consumed this form of content directly during the 2016 campaign, and even fewer did so before the 2018 election. Fake news consumption is concentrated among a narrow subset of Americans with the most conservative news diets. And, most notably, no credible evidence exists that exposure to fake news changed the outcome of the 2016 election. […]
Many important concerns about online misinformation still remain, including the influence of the fake news audience, the difficulty of countering fake news at scale, the dangers of Facebook’s size, and the threat of YouTube-based radicalization. But none of these questions can be adequately addressed without creating a reality-based debate that puts fake news in context as just one of the many sources of misinformation in our politics.
Brian Feldman says that the meme account started by Elliott Tebele, which pioneered the art of stealing and then monetizing other people’s jokes on Instagram, is an indictment of its host platform:
As far as I can tell, however, Instagram has never punished a major meme account for sharing content that was not “authentic.” During the holiday season late last year, many meme accounts were purged, but Instagram told The Atlantic that the purge was to punish people for selling or trading their accounts, not for content theft.
It has been more than three years since the Fat Jew was first called out for systematically stealing jokes. In that time, some meme accounts have instituted policies of self-regulation in which they promised to credit those they rip off, even if they don’t get permission to repost others’ work. Those policies still fall short of Instagram’s own. The policy that Tebele announced over the weekend — only posting things he gets permission to put up beforehand — is literally Instagram’s own official policy. Yet Instagram has, as far as I can tell, never taken substantial action against meme accounts like @fuckjerry.
And finally ...
Here’s a fun detail in Alexis Madrigal’s piece about the earliest days of TheFacebook, when it existed only at Harvard, and a young Mark Zuckerberg asked a dean, Harry Lewis, if he could depict Lewis as the central node in the baby network:
“I had a very interesting reaction,” Lewis told me recently. “I told him, ‘It’s all public information, but there is somehow a point at which aggregation of too much public information begins to feel like an invasion of privacy.’ So ‘invasion of privacy’ was actually in the very first email that I wrote to Mark Zuckerberg in 2004 in response to the first glimpse of the prototype.”
Talk to me
Send me tips, comments, questions, and secret Alex Jones pages: email@example.com.