Eleven months ago, Facebook published what I then called the most extraordinary blog post in its history: an acknowledgement that, in some cases, using social media can make you feel worse about yourself. The post was based on a survey of recent academic research on the platform, which found that certain forms of mindless thumb-scrolling could be alienating.
Facebook’s proposed solution was not necessarily to use social media less, but rather to use it differently. What followed over the next year was a series of steps designed to get people to use Facebook more “actively” — increasing the number of comments, while decreasing the number of stories and videos from professional publishers in the News Feed.
Little follow-up research on the subject has been published to date. But over the weekend I read a study with something new to say on the matter: “No more FOMO: Limiting social media decreases loneliness and depression.” It’s to be published in the December edition of the Journal of Social and Clinical Psychology. Science Daily tells us how it worked:
Each of 143 participants completed a survey to determine mood and well-being at the study’s start, plus shared shots of their iPhone battery screens to offer a week’s worth of baseline social-media data. Participants were then randomly assigned to a control group, which had users maintain their typical social-media behavior, or an experimental group that limited time on Facebook, Snapchat, and Instagram to 10 minutes per platform per day.
For the next three weeks, participants shared iPhone battery screenshots to give the researchers weekly tallies for each individual. With those data in hand, Hunt then looked at seven outcome measures including fear of missing out, anxiety, depression, and loneliness.
Participants who reduced their time on social sites saw a statistically significant decrease in depression and loneliness, according to the study. The control group did not report an improvement.
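To make the design concrete: this is a standard between-groups comparison of how each group's scores changed. Here's a minimal sketch of that kind of analysis in Python, using Welch's t statistic on invented score changes — these numbers are illustrative only, not the study's actual data.

```python
# Hypothetical sketch of the between-groups comparison the study describes:
# compare the mean change in a depression score for the usage-limited group
# against the control group. All numbers below are invented for illustration.
import math
import statistics

# Negative values = improvement (the score dropped over the three weeks).
limited_group = [-4.0, -3.5, -5.0, -2.0, -4.5, -3.0, -3.5, -4.0]
control_group = [-0.5, 0.0, -1.0, 0.5, -0.5, 0.0, -1.5, 0.5]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(limited_group, control_group)
print(round(t, 2))  # strongly negative: the limited group improved more
```

A large-magnitude t statistic is what would let the researchers call the difference between groups statistically significant (the real study, of course, used proper inferential tests on real participant data).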
The authors present this as a milestone. The study concludes:
The results from our experiment strongly suggest that limiting social media usage does have a direct and positive impact on subjective well-being over time, especially with respect to decreasing loneliness and depression. That is, ours is the first study to establish a clear causal link between decreasing social media use, and improvements in loneliness and depression. It is ironic, but perhaps not surprising, that reducing social media, which promised to help us connect with others, actually helps people feel less lonely and depressed.
The study’s lead author, psychologist Melissa G. Hunt, told Science Daily that she did not recommend that people stop using social media. But limits can be helpful, she said.
“Facebook did not participate in this study, but our teams are working to better understand the research about technology and well-being,” a spokeswoman told me. “We want people’s time on Facebook to be meaningful and positive and are building tools with people’s well-being in mind so they can better manage their experience. We are committed to continuing this work to foster safe and supportive communities for everyone.”
On the subject of time spent in its apps, the company has arguably already capitulated. In response to the Time Well Spent movement, the company voluntarily introduced in-app screen time limits earlier this year. (Well, it announced those limits, anyway. They still haven’t shipped, for reasons no one will tell me.) Apple and Google, which control Facebook’s key distribution channels, shipped screen time management features of their own.
While the study finds evidence that social media usage can make us depressed, it doesn’t offer any thoughts on why. Hunt has offered some theories in interviews, primarily the idea that seeing other people’s happiness can create negative comparisons with our own experiences. But if we are to better understand how to manage our relationship with social networks, we need to understand those mechanics much better. To know that social media often makes us feel lonely now seems like a given. Knowing why feels like an important next step.
Update, 6:01 p.m.: This article has been updated to include comment from Facebook.
The president is attempting to undermine confidence in the midterm election by repeatedly tweeting baseless claims of voter fraud. I find it all tremendously alarming. Jane Lytvynenko walks us through it:
Since Thursday, Trump sent seven tweets about the election, baselessly making claims like, “Trying to STEAL two big elections in Florida!” and “Rick Scott was up by 50,000+ votes on Election Day, now they ‘found’ many votes and he is only up 15,000 votes.” Although most of his tweets focused on Broward County in Florida, they echoed the conspiratorial thinking of falsehoods that spread online.
Trying to imagine a logical explanation for this, from Joe Uchill, and coming up short:
French President Emmanuel Macron released an international agreement on cybersecurity principles Monday as part of the Paris Peace Forum. The original signatories included more than 50 nations, 130 private sector groups and 90 charitable groups and universities, but not the United States, Russia or China.
I’m not sure it’s fair to say, as Dustin Volz and Robert McMillan do, that Russia “largely skipped the midterms.” They were running an online influence campaign, and Facebook has continually removed pages related to it. But:
Voting largely came and went without major incident, according to U.S. officials and cybersecurity companies looking for evidence of Russian interference.
Several factors may have reduced Moscow’s impact. Clint Watts, a senior fellow with the Foreign Policy Research Institute, said the diffuse nature of congressional and state races makes them a harder target than a single presidential election.
Facebook’s relationship with law enforcement is growing closer, partly due to the threat of foreign interference in our elections, report Sarah Frier, Selina Wang, and Alyza Sebenius. Feels like this could be a double-edged sword:
Now, communication lines have started to open between Facebook and federal agencies including the Department of Homeland Security and the Federal Bureau of Investigation, according to the company. Facebook also established relationships with state election boards, so it could be alerted to problems as they occurred. Those connections are likely to strengthen ahead of the 2020 presidential election, when foreign interest in election manipulation may be higher. Twitter Inc., too, has strengthened its relationship with federal law enforcement agencies, seeking to protect against foreign influence.
YouTube CEO Susan Wojcicki has more to say about Article 13, the European Union’s proposed copyright directive. In this op-ed, she warns that if passed it would deprive the internet of the music video for “Despacito.”
“This video contains multiple copyrights, ranging from sound recording to publishing rights,” Wojcicki wrote. “Although YouTube has agreements with multiple entities to license and pay for the video, some of the rights holders remain unknown. That uncertainty means we might have to block videos like this to avoid liability under article 13. Multiply that risk with the scale of YouTube, where more than 400 hours of video are uploaded every minute, and the potential liabilities could be so large that no company could take on such a financial risk.”
In a novel collaboration, Facebook has invited French regulators to embed with the company and examine how it moderates content, Tony Romm and James McAuley report:
Under a six-month arrangement announced on Monday, French investigators will monitor Facebook’s policies and tools for stopping posts and photos that attack people on the basis of race, ethnicity, religion, sexuality or gender. From there, aides to French President Emmanuel Macron hope to determine “the necessary regulatory and legislative developments” to fight online hate speech, a government official said.
In recent months, France and its fellow European countries have adopted a much harder line against Facebook and its social-media peers, demanding they police their platforms more aggressively to stop a range of digital ills — including conspiracy theories, fake news and terrorist propaganda. Macron pressed Facebook chief executive Mark Zuckerberg on the issue when the two met in Paris earlier this year.
Almost a year ago, I first wrote about Maria Ressa, who had been profiled beautifully in this Lauren Etter story about how the government of the Philippines was using social media to harass dissenters. Ressa, who runs the news startup the Rappler, now faces a decade in prison on trumped-up tax evasion charges. This would represent a catastrophe for the free press — here’s hoping she and the Rappler can beat the charges.
This story about the firing of Palmer Luckey caused a stir over the weekend. But as best as I can tell, from my own reporting and others’, Luckey’s firing had little to do with his politics and was much more about the fact that he lied to Facebook executives about his role in funding anti-Clinton memes.
Facebook stayed up for me the entire time, but apparently lots of you struggled.
Here is yet another ominous departure from Snap: Nick Bell, who ran Discover. I’ve always felt that Discover underperformed, but it seems particularly bad that Bell would leave just as the company is launching its first slate of original programming.
It’s notable to me that Khan is moving into e-commerce, given Snap’s many moves lately to add commerce to Snapchat.
A new Harris Poll finds that consumers’ trust in Facebook is at a low:
Facebook is the least trustworthy of all major tech companies when it comes to safeguarding user data, according to a new national poll conducted for Fortune, highlighting the major challenges the company faces following a series of recent privacy blunders.
Only 22% of Americans said that they trust Facebook with their personal information, far less than Amazon (49%), Google (41%), Microsoft (40%), and Apple (39%).
Last month, New York wrote about parents protesting Summit Learning, a personalized learning curriculum that has been championed by Facebook. (The company has donated engineers to the project.) Now a New York high school is joining the opposition, Susan Edelman reports:
Brooklyn teens are protesting their high school’s adoption of an online program spawned by Facebook, saying it forces them to stare at computers for hours and “teach ourselves.”
Nearly 100 students walked out of classes at the Secondary School for Journalism in Park Slope last week in revolt against “Summit Learning,” a web-based curriculum designed by Facebook engineers, and bankrolled by CEO Mark Zuckerberg and his wife Priscilla Chan.
Here’s a supremely dark story about the Thousand Oaks shooter posting to Instagram as he was killing people:
“Fact is I had no reason to do it, and I just thought… life is boring so why not,” the gunman wrote on Instagram.
Excellent hate read here from Laura Belgray, who charges clients $950 an hour to write inspirational Instagram copy for them.
Twitter deleted several million suspicious accounts on Friday, Paresh Dave reports.
Twitter’s own account fell by 7.8 million followers in July but gained back 2.36 million by mid-October. It lost 2.4 million on Friday, according to Social Blade.
Today’s most viewed YouTube videos last 20 minutes or longer, writes Emma Grey Ellis — a result of YouTube’s recommendation algorithm promoting longer videos over time:
The simplest explanation for these swelling run times is straightforward business. As a new study from the Pew Research Center demonstrates, YouTube has been quietly shifting its recommendation system to reward lengthy videos. YouTube’s algorithm is famously opaque. So to reverse-engineer it, Pew researchers took more than 170,000 “random walks” through YouTube over a period of six weeks, letting the site’s recommendations be their guide. They ended up watching over 300,000 unique videos, but just two patterns emerged in the recommended videos’ statistics: First, the recommendation algorithm drove viewers toward more and more popular creators. But the researchers also noticed that the recommended videos consistently increased in length: At first, they were nine and a half minutes, then 12, then 15.
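The "random walk" method above is simple enough to sketch in a few lines of Python: start at a video, repeatedly follow a randomly chosen recommendation, and record the length of each video seen. The tiny catalog and recommendation rule below are invented for illustration — this is neither YouTube's actual algorithm nor Pew's data, just a toy model of the drift toward longer videos they describe.

```python
# Toy model of Pew's "random walk" methodology. The catalog here is
# hypothetical: each video id maps to (length in minutes, recommended ids),
# with recommendations skewed toward longer videos to mimic the observed drift.
import random

random.seed(0)  # make the walk reproducible

catalog = {
    "a": (5, ["b", "c"]),
    "b": (9, ["c", "d"]),
    "c": (12, ["d", "e"]),
    "d": (15, ["e", "e"]),
    "e": (20, ["e", "d"]),
}

def random_walk(start, steps):
    """Follow random recommendations, returning the video lengths seen."""
    lengths, current = [], start
    for _ in range(steps):
        length, recs = catalog[current]
        lengths.append(length)
        current = random.choice(recs)  # pick the next video at random
    return lengths

walk = random_walk("a", 6)
print(walk)  # lengths tend to climb as the walk goes deeper
```

Pew's researchers did essentially this at scale — hundreds of thousands of walks — and then looked at the aggregate statistics of what the recommendations surfaced.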
I enjoyed this Q&A with the Times’ Max Fisher about why he turned to covering social media for his excellent newsletter with Amanda Taub, The Interpreter. (Its most recent edition, about what the midterms mean for the health of our democracy, is also highly recommended.)
Even after reporting with Amanda Taub on algorithm-driven violence in Germany and Sri Lanka, I didn’t quite appreciate this until I turned on Facebook push alerts this summer. Right away, virtually every gadget I owned started blowing up with multiple daily alerts urging me to check in on my ex, even if she hadn’t posted anything. I’d stayed away from her page for months specifically to avoid training Facebook to show me her posts. Yet somehow the algorithm had correctly identified this as the thing likeliest to make me click, then followed me across continents to ensure that I did.
It made me think of the old “Terminator” movies, except instead of a killer robot sent to find Sarah Connor, it’s a sophisticated set of programs ruthlessly pursuing our attention. And exploiting our most human frailties to do it.
LinkedIn is really just Facebook in slow motion.
YouTube is launching its once Daydream-exclusive virtual reality app on the Oculus Go mobile VR headset today, Adi Robertson writes, after announcing its plans in September.
YouTube is now available on Portal.
America finally has a pro-democracy movement, David Leonhardt writes:
Last week, ballot initiatives to improve the functioning of democracy fared very well. In Florida — a state divided nearly equally between right and left — more than 64 percent of voters approved restoring the franchise to 1.4 million people with felony convictions. In Colorado, Michigan and Missouri, measures to reduce gerrymandering passed. In Maryland, Michigan and Nevada, measures to simplify voter registration passed. “In red states as well as blue states,” Chiraag Bains of the think tank Demos says, “voters overwhelmingly sent the message: We’re taking our democracy back.”
And finally ...
Work on Twitter began March 21, 2006. Should Twitter release this much-needed feature, I can promise that no one is going to refer to it as a rush job.
Talk to me
Send me tips, comments, questions, and your suggested optimal daily social media usage: email@example.com.