
What Instagram really learned from hiding like counts


One-size-fits-all solutions are making us miserable



In April 2019, amid growing questions about the effects of social networks on mental health, Instagram announced it would test a feed without likes. The person posting an image on the network would still see how many people had sent it a heart, but the total number of hearts would remain invisible to the public.

“It’s about young people,” Instagram chief Adam Mosseri said that November, just ahead of the test arriving in the United States. “The idea is to try and depressurize Instagram, make it less of a competition, give people more space to focus on connecting with people that they love, things that inspire them. But it’s really focused on young people.”

After more than two years of testing, today Instagram announced what it found: removing likes doesn’t seem to meaningfully depressurize Instagram, for young people or anyone else, and so likes will remain publicly viewable by default. But all users will now get the ability to switch them off if they like, either for their whole feed or on a per-post basis.

“What we heard from people and experts was that not seeing like counts was beneficial for some, and annoying to others, particularly because people use like counts to get a sense for what’s trending or popular, so we’re giving you the choice,” the company said in a blog post.


At first blush, this move feels like a remarkable anticlimax. The company invested more than two years in testing these changes, with Mosseri himself telling Wired he spent “a lot of time on this personally” as the company began the project. For a moment, it seemed as if Instagram might be on the verge of a fundamental transformation — away from an influencer-driven social media reality show toward something more intimate and humane.

In 2019, this no-public-metrics, friends-first approach had been perfected by Instagram’s forever rival, Snapchat. And the idea of stripping out likes, view counts, follower counts, and other popularity scoreboards gained traction in some circles — the artist Ben Grosser’s Demetricator project implemented the idea in a series of browser extensions, to positive reviews.

So what happened at Instagram?

“It turned out that it didn’t actually change nearly as much about … how people felt, or how much they used the experience as we thought it would,” Mosseri said in a briefing with reporters this week. “But it did end up being pretty polarizing. Some people really liked it, and some people really didn’t.”

On that last point, he added: “You can check out some of my @-mentions on Twitter.”

While Instagram ran its tests, a growing number of studies found only limited evidence linking the use of smartphones or social networks to changes in mental health, The New York Times reported last year. Just this month, a 30-year study of teenagers and technology from Oxford University reached a similar finding.

Note that this research doesn’t say social networks are good for teenagers, or anyone else; it says only that they don’t move the needle very much on mental health. Assuming that’s true, it stands to reason that changes to the user interface of individual apps would also have a limited effect.

At the same time, I wouldn’t write off this experiment as a failure. Rather, I think it highlights a lesson that social networks are often too reluctant to learn: rigid, one-size-fits-all platform policies are making people miserable.

Think of the vocal minority of Instagram users who would like to view their feed chronologically, for example. Or the Facebook users who want to pay to turn off ads. Or look at all the impossible questions related to speech that are decided at a platform level, when they would better be resolved at a personal one.

Last month, Intel was roasted online after showing off Bleep, an experimental AI tool for censoring voice chat during multiplayer online video games. If you’ve ever played an online shooter, chances are you haven’t gone a full afternoon without being subjected to a barrage of racist, misogynist, and homophobic speech. (Usually from a 12-year-old.) Rather than censor all of it, though, Intel said it would put the choice in users’ hands. Here’s Ana Diaz at Polygon:

The screenshot depicts the user settings for the software and shows a sliding scale where people can choose between “none, some, most, or all” of categories of hate speech like “racism and xenophobia” or “misogyny.” There’s also a toggle for the N-word.

An “all racism” toggle is understandably upsetting, even if hearing all of it is effectively the default in most in-game chat today, and the screenshot generated plenty of worthwhile memes and jokes. Intel explained that it built settings like these to account for the fact that people might accept hearing language from friends that they won’t from strangers.

But the basic idea of sliders for speech issues is a good one, I think. Some issues, particularly related to non-sexual nudity, vary so widely across cultures that forcing one global standard on them — as is the norm today — seems ludicrous. Letting users build their own experience, from whether their like counts are visible to whether breastfeeding photos appear in their feed, feels like the clear solution.
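To make that concrete, here is a minimal sketch, in TypeScript, of what per-user content preferences might look like as a data structure. Everything in it is hypothetical: neither Intel nor Instagram has published a schema like this, and the category names simply mirror the Polygon description above.

    // Hypothetical per-user content preferences, for illustration only.
    // Neither Intel's Bleep nor Instagram exposes a schema like this.

    type FilterLevel = "none" | "some" | "most" | "all";

    interface ContentPreferences {
      // Per-category sliders, echoing the screenshot's none/some/most/all
      // scale; the category names follow the Polygon report quoted above.
      speechFilters: {
        racismAndXenophobia: FilterLevel;
        misogyny: FilterLevel;
      };
      // Per-account toggles of the kind Instagram is now shipping.
      showLikeCounts: boolean;
      showBreastfeedingPhotos: boolean;
    }

    // Defaults do the real policy work: most people never change them,
    // so whatever a platform picks here is what most users experience.
    const defaults: ContentPreferences = {
      speechFilters: {
        racismAndXenophobia: "none",
        misogyny: "none",
      },
      showLikeCounts: true,
      showBreastfeedingPhotos: false,
    };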


There are some obvious limits here. Tech platforms can’t ask users to make an unlimited number of decisions, since doing so would add too much complexity to the product. Companies will still have to draw hard lines around tricky issues, including hate speech and misinformation. And introducing choices won’t change the fact that, as in all software, most people will simply stick with the defaults.

All that said, expanded user choice is clearly in the interest of both people and platforms. People can get software that maps more closely to their cultures and preferences. And platforms can offload a series of impossible-to-solve riddles from their policy teams to an eager user base.

Beyond today’s announcement, there are already signs that this future is arriving. Reddit offered an early glimpse with its policy of setting a hard “floor” of rules for the platform while letting individual subreddits raise the “ceiling” by introducing additional rules. Twitter CEO Jack Dorsey has forecast a world in which users will be able to choose from different feed ranking algorithms.
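To picture what Dorsey is describing, here is a hypothetical TypeScript sketch in which the ranking algorithm is just a function the user selects; nothing here reflects Twitter’s or Instagram’s actual systems.

    // Hypothetical user-selectable feed ranking, for illustration only.

    interface Post {
      id: string;
      postedAt: number; // Unix epoch milliseconds
      likeCount: number;
    }

    type RankingAlgorithm = (posts: Post[]) => Post[];

    // Chronological: the feed a vocal minority of Instagram users asks for.
    const chronological: RankingAlgorithm = (posts) =>
      [...posts].sort((a, b) => b.postedAt - a.postedAt);

    // Engagement-ranked: a crude stand-in for today's default feeds.
    const mostLiked: RankingAlgorithm = (posts) =>
      [...posts].sort((a, b) => b.likeCount - a.likeCount);

    const algorithms: Record<string, RankingAlgorithm> = {
      chronological,
      mostLiked,
    };

    // The chosen algorithm becomes just another account setting,
    // with a platform-picked default for everyone who never chooses.
    function buildFeed(posts: Post[], choice: string): Post[] {
      const rank = algorithms[choice] ?? chronological;
      return rank(posts);
    }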

With his decision on likes, Mosseri is moving in the same direction.

“It ended up being that the clearest path forward was something that we already believe in, which is giving people choice,” he said this week. “I think it’s something that we should do more of.”


This column was co-published with Platformer, a daily newsletter about Big Tech and democracy.