Criticism of Facebook comes from all quarters, and at this point it’s rare to discover an opinion about the company that has never previously been voiced. My favorite takes these days tend to come from people who once worked at the company — recent experiences on the ground can lead to more original thinking.
Eugene Wei, who served as head of video at Facebook’s Oculus division from 2015 to 2017, is one of the most original thinkers on social networks. In a spectacular new blog post, he offers us several useful ways to describe how a company like Facebook functions in its users’ everyday lives — and in so doing, helps us to predict the future.
Wei’s full post approaches 20,000 words, and is worth reading in full. I want to draw out a few key elements to consider today.
First, Wei gives us a new way of thinking about social apps like Facebook, Instagram, and TikTok. Their original value lay not in their utility, he writes, but in their ability to increase the social status of their users. One reason young people gravitate toward these apps is because they can rapidly generate social capital for people who don’t have any — and one reason older people shun them is that they already have all the social capital they need.
Collectively, then, these are what Wei cheekily calls “status-as-a-service” businesses. They don’t give away status: to the contrary, they require the user to perform some creative task that others find difficult. But if they present the right task — what Wei, borrowing from the language of cryptocurrencies, calls the “proof of work” — they can unlock an impressive store of value.
Thus, one way to think about the health of a social network is how effectively it helps its users generate status. Perform this calculation and you understand why something like TikTok or Musical.ly feels hot right now, and Facebook seems cold:
Status isn’t worth much if there’s no skill and effort required to mine it. It’s not that a social network that makes it easy for lots of users to perform well can’t be a useful one, but competition for relative status still motivates humans. Recall our first tenet: humans are status-seeking monkeys. Status is a relative ladder. By definition, if everyone can achieve a certain type of status, it’s no status at all, it’s a participation trophy.
Musical.ly created a hurdle for gaining followers and status that wasn’t easily cleared by many people. However, for some, especially teens, and especially girls, it was a status game at which they were particularly suited to win. And so they flocked there, because, according to my second tenet, people look for the most efficient ways to accumulate the most social capital.
So what does a cold social network do? Well, it might do any number of the things we’ve seen Facebook do over the past several years. It would invest more in Instagram, which still reliably generates social capital for its users. It would invest more in utilities like Messenger or payments, which are less beloved but bring users back day after day no matter how cool the network seems. And it would add various entertainment products, in the hopes that they would become similarly sticky.
That will all probably work well enough, for a time. But Wei makes a convincing case that the next generation of social networks will intuit various lessons about social capital from the generation of services that included Facebook, YouTube, and Twitter — and build quite different products as a result. It remains unclear whether Facebook will simply absorb these ideas, as it did with Snapchat stories, or whether a stiffer challenge could emerge.
It’s possible that one already has. In Wei’s view, TikTok marks an important break with social networking’s past. And it has to do with how social capital is accumulated. In the previous era, most of the spoils went to the platform’s earliest adopters — as with Bitcoin, mining value gets harder as the platform ages. TikTok, on the other hand, is set up to promote videos regardless of who made them or how many followers they might have:
If you are new to TikTok and have just uploaded a great video, the selection algorithm promises to distribute your post much more quickly than if you were sharing it on a network that relies on the size of your following, which most people have to build up over a long period of time. Conversely, if you come up with one great video but the rest of your work is mediocre, you can’t count on continued distribution on TikTok since your followers live mostly in a feed driven by the TikTok algorithm, not their follow graph.
The result is a feedback loop that is much more tightly wound than that of other social networks, both in the positive and negative direction. Theoretically, if the algorithm is accurate, the content in your feed should correlate most closely to the quality of the work and its alignment with your personal interests rather than drawing from the work of accounts you follow. At a time when Bytedance is spending tens (hundreds?) of millions of marketing dollars in a bid to acquire users in international markets, the rapid ROI on new creators’ work is a helpful quality in ensuring they stick around.
Wei closes by suggesting that acknowledging that social networks are status businesses may even help them mitigate (or at least better focus on) some of their externalities, including the fact that many of the people seeking status there are extremely bad actors. At the very least, it could give social networks something positive to rally around, beyond the usual sloganeering around connecting the world. One reason I find myself rooting so hard for a company like Patreon is that it exists primarily to turn users’ social capital into money. It’s an idea YouTube could have had first, and didn’t. (And it’s an idea that Facebook had over the weekend, but worse.)
In the meantime, I suspect the ideas in this post will help inspire the next generation of founders working on social apps, or at the very least serve as a reference point for them as they build. It’s easy to look at Facebook’s scale and assume the social game has already been won, forever. But Wei notes that Facebook board member Marc Andreessen, in Elad Gil’s new book, has a word of caution there:
The problem with network effects is they unwind just as fast. And so they’re great while they last, but when they reverse, they reverse viciously. Go ask the MySpace guys how their network effect is going.
The Trauma Floor
My colleague James Vincent explores why we can’t outsource the job of content moderation to artificial intelligence:
The problem with trying to get machines to understand this sort of content, says Robyn Caplan, an affiliate researcher at the nonprofit Data & Society, is that it is essentially asking them to understand human culture — a phenomenon too fluid and subtle to be described in simple, machine-readable rules.
“[This content] tends to involve context that is specific to the speaker,” Caplan tells The Verge. “That means things like power dynamics, race relations, political dynamics, economic dynamics.” Since these platforms operate globally, varying cultural norms need to be taken into account too, she says, as well as different legal regimes.
Eye-popping stat in this piece on content moderation in India from Munsif Vengattil and Paresh Dave:
Job postings and salary pay-slips seen by Reuters showed annual compensation at Genpact for an entry-level Facebook Arabic language content reviewer was 100,000 Indian rupees ($1,404) annually, or just over $6 a day.
Medium’s head of legal Alex Feerst interviewed a bunch of people who have worked in content moderation about their experiences. (Hopefully he will soon launch a regular column on the subject and call it The Feerst Amendment.)
I talked to about 15 trust and safety employees who work or have worked full-time at companies including YouTube, Facebook, Twitter, Reddit, Pinterest, Google, Automattic, Slack, Tumblr, Airbnb, Etsy, Quora, Internet Archive, and Medium. Every day, they make decisions that deeply affect our lives—and theirs.
Simon van Zuylen-Wood writes a breezy piece about hanging out with Facebook’s policy team:
Tall and thin, with long strawberry-blond hair, Bickert sits behind a laptop decorated with a TEENAGE MUTANT NINJA TURTLES sticker. She speaks neither in guarded corporatese nor in the faux-altruistic argot particular to Silicon Valley. As a relative newcomer to the tech industry, she regards Facebook with the ambivalence of a normal person, telling me she’s relieved her two teenage daughters are “not all about sharing everything” on social media. When I cite Facebook’s stated mission to make the world a “more open and connected place,” she literally rolls her eyes. “It’s a company. It’s a business,” she says. “Like, I am not, I guess, apologetic about the reality that we have to answer to advertisers or the public or regulators.” To her, Facebook is neither utopian nor dystopian. It’s just massively influential and, for the moment, not going anywhere.
Ryan Broderick writes about his time moderating comments for BuzzFeed:
The hardest days, though, were when we’d get attacked by another online community. The tactic is called “astroturfing,” and usually a community like Reddit or 4chan, or the neo-Nazi message board Stormfront, would flood our comment sections with gore, pornography, and hate speech. If this sort of thing happened overnight — which it usually did — I’d end up working through lunch to clean things up. After days like that, I’d usually spend my nights silently staring off into space, not because I was particularly traumatized, but because there’s really only so much vitriol and toxicity a person can absorb before it all stops meaning anything.
Those astroturf days are what every day is like now.
Ryan Gallagher reports that a group of Googlers has found evidence that the company continues to work on a controversial plan to re-enter China.
Carole Cadwalladr and Duncan Campbell unearth Facebook’s 2013 lobbying strategy in Europe, which included “[using] chief operating officer Sheryl Sandberg’s feminist memoir Lean In to ‘bond’ with female European commissioners it viewed as hostile.”
Adi Robertson checks in with the literal platonic ideal of an Interface story:
Facebook has applied to patent a system where people could comment on laws that might affect them, then have that feedback worked into a formal political proposal, creating a way for people to “meaningfully engage in civil discourse” online. The concept would build on Facebook’s earlier attempts at promoting civic engagement, and it sounds similar to other, existing crowdsourced democracy tools. But Facebook’s vast scale could put tremendous weight behind any kind of private political forum.
The patent, titled “Providing digital forums to enhance civic engagement,” covers a localized and politics-focused form of social networking. The system would identify a proposed new law or amendment, then use existing social networking data to find and invite people “having a predicted interest in the proposed law.”
New York’s Department of Financial Services is digging in on a report in the Journal about how Facebook shares data with other app developers, Sam Schechner reports:
One letter, addressed to Facebook Chief Executive Mark Zuckerberg, requests information about all companies that have sent Facebook data about mobile application users via software provided by the social-media giant in the last three years, the person said. It also asked the company to provide the categories of data that were shared and a list of all New York state residents whose data were included, the person added.
The European Commission is not satisfied with progress made by Facebook, Twitter, and Google when it comes to fighting disinformation, Colin Lecher reports:
“Platforms have not provided enough details showing that new policies and tools are being deployed in a timely manner and with sufficient resources across all EU Member States,” the statement said. “The reports provide too little information on the actual results of the measures already taken.”
Pranav Dixit and Nishita Jha have the scary tale of how misinformation on social media has complicated the ongoing conflict between India and Pakistan:
On Wednesday, shortly after Pakistan shot down two Indian warplanes and captured an Indian pilot, a Facebook page called Pak Army that uses “@ArmyPakistanOfficial” as its name (spoofing the official page of the army) showed a video of a bloody aircraft pilot lying on the ground, claiming he was the captured pilot. Indian fact-checking services, including ones that are Facebook’s official fact-checking partners in India, revealed that the man in the video was actually a pilot who was injured in an airshow in India earlier this month. The video had more than 735,000 views on Facebook, and has been shared more than 32,000 times. It also continues to live on Twitter, where it went viral after being shared by a prominent Pakistani political analyst, and had more than 477,000 views at the time of this writing.
Abdi Latif Dahir reports that Uganda’s social media tax has caused people to use social media way less:
In the three months following the introduction of the levy in July 2018, there was a noted decline in the number of internet users, total revenues collected, as well as mobile money transactions. In a series of tweets, the Uganda Communications Commission noted internet subscription declined by more than 2.5 million users, while the sum of taxpayers from over-the-top (OTT) media services decreased by more than 1.2 million users. The value of mobile money transactions also fell by 4.5 trillion Ugandan shillings ($1.2 billion).
Katherine Bindley performs a great little stunt to demonstrate the lack of true privacy options on social networks:
I tested my suspicion by downloading the What to Expect pregnancy app. I didn’t so much as share an email address, yet in less than 12 hours, I got a maternity-wear ad in my Instagram feed. I’m not pregnant, nor otherwise in a target market for maternity-wear. When I tried to retrace the pathway, discussing the issue with the app’s publisher, its data partners, the advertiser and Facebook itself—dozens of emails and phone calls—not one would draw a connection between the two events. Often, they suggested I ask one of the other parties.
Here’s a positive development: Facebook is taking legal action against companies that are selling fake accounts.
Zack Whittaker writes about a controversy that broke over the weekend involving the way Facebook uses phone numbers provided for two-factor authentication in ways that reduce user privacy:
Last year, Facebook was forced to admit that after months of pestering its users to switch on two-factor by signing up their phone number, it was also using those phone numbers to target users with ads. But some users are finding out just now that Facebook’s default setting allows everyone — with or without an account — to look up a user profile based off the same phone number previously added to their account.
The recent hubbub began today after a tweet by Jeremy Burge blew up, criticizing Facebook’s collection and use of phone numbers, which he likened to “a unique ID that is used to link your identity across every platform on the internet.”
Josh Constine reports that Facebook understated the number of young people who used its controversial market-research app, which led to the company’s enterprise certificate being canceled by Apple:
In the response from Facebook’s VP of US public policy Kevin Martin, the company admits that (emphasis ours) “At the time we ended the Facebook Research App on Apple’s iOS platform, less than 5 percent of the people sharing data with us through this program were teens. Analysis shows that number is about 18 percent when you look at the complete lifetime of the program, and also add people who had become inactive and uninstalled the app.” So 18 percent of research testers were teens. It was only less than 5 percent when Facebook got caught.
Ashley Carman talks to the creators making Instagram filters through a new developer program:
Beauty3000 comes from Johanna Jaskowska, a designer who participates in an Instagram beta program that allows people to create custom face filters and spread them to their followers. Instagram announced the program last May and expanded the closed beta in October, but the effects only recently seem to have taken hold.
Influencers have been posing with them in Stories, like the model Teddy Quinlivan and musicians iLoveMakonnen and Rosalía, and filter creators say their followings are growing because of it. Snapchat’s puppy dog filter may have started the face filter trend and become a meme unto itself, but now Instagram’s creators are moving filter design forward with a less cutesy look and more of a futuristic art kid vibe, often covered in gloss.
As if being named Meghan Trainor isn’t hard enough these days!
Meghan Trainor, 45, has been an exhibiting artist and performer for more than 15 years. She’s worked in 3D printing, brain-computer interface, robotics, and medieval technology. Drone metal outfit Earth is her favorite band and, this week, she was locked out of her social media accounts for “impersonating” Meghan Trainor—that is, herself.
Meghan Trainor, 25, is a pop singer-songwriter probably best known for her single “All About That Bass,” which dropped five years ago. When that song was released, “everyone I ever met in my life decided to send me that video,” Trainor the artist told Gizmodo. “I was very aware that single happened.” She said she wrote off the other Meghan as a one-hit wonder and had a sense of humor about it, but after a few years, she realized that Trainor the singer wasn’t fading into obscurity. “But it never occurred to me that it would impact my ability to do my job.”
Pinterest merchants can now convert their entire catalogs into shoppable pins, in a bit of stat-juicing before the IPO.
Here’s a helpful guide from our friends at The Information to all the governmental investigations against the big tech platforms. An immediate bookmark for me.
John Herrman has a great essay on what the hysteria around Momo is really about: our inchoate dread of what the internet is doing to us:
Momo is what happens when the grown-ups start writing copypasta of their own, about their own biggest fears: what their kids are doing on the internet, and what the internet is doing to their kids.
Designer Ana Noemi has a thoughtful reflection on my piece about content moderation:
Facebook could make structural platform shifts which would reduce the likelihood of disturbing content showing up in the first place. They could create different corners of the site where users go specifically to engage in certain activities (share their latest accomplishment, post cooking photos), rather than everyone swimming in the same amorphous soup. They could go back to affiliations with offline institutions, like universities, and make your experience within these tribes be the default experience of the site. Or they could get more selective about who they accept money from, or whom they allow to be targeted for ads. But I’m sure any one of these moves would damage their revenues at numbers that would boggle our minds. Facebook’s ambition for scale, and their need to maintain it now that they have it, is working against creating healthier experiences.
Like the Radium Girls, Facebook moderators are coming into daily contact with a barely understood new form of technology so that others may profit. As we begin to see the second-order effects and human costs of these practices and incentive systems, now is a good time for scale to be questioned as an inherent good in the business of the internet.
Adam Sokol writes about six years moderating comments on a conservative news website:
Before working as a moderator, I never would have known how many comments on a story about Africanized bees it would take before they started taking a racist turn. Now, having done the job, I know that that’s a trick question, because the answer is: immediately. It will happen on the first comment and keep on going until the last one.
And finally ...
In a world where you get paid when people pay attention to you, people will do anything to stand out. That explains the surging popularity of face tattoos, Kaitlyn Tiffany writes in a beautifully discursive piece for Vox:
Now face tattoos are “happening” again, a testament to Lil Wayne’s legacy and to the enduring sardonic energy of Gucci Mane’s choice, in 2011, to cover half of his face with an ice cream cone. This time, we’re looking at a new wave of inked-up kids tightly associated with the DIY music platform SoundCloud, a place where kids raised on emo and pop-punk and Lil Wayne are gathering to make mumbly rap about hating their lives. These are musicians who get famous so fast, as the Ringer’s Lindsay Zoladz put it, “we can sometimes watch their face tattoos accumulate in real time, like a fast-motion video of a wall being graffitied.” Translated to Instagram, where they all have significant presences, their faces become low-cost advertisements for careers that haven’t yet taken shape.
If you have ideas for a good face tattoo for me, I’m all ears.
Talk to me
Send tips, comments, questions, and status: email@example.com.