The terror attacks that struck Paris and Beirut last week spread across social media like wildfire, but in profoundly different ways. The bombings in the Lebanese capital killed 43 and injured 239 on Thursday, but reports from outlets ranging from The New York Times to The Economist were largely ignored by Western audiences. A day later, a series of attacks in Paris resulted in the deaths of 129 people, with 433 more injured. Coverage of the Paris attacks spread far and wide, helped along by online readers eager to share information and show support. Hashtags proliferated. French flags were draped over profile pictures. It wasn’t until angry commenters assailed the media with accusations that Beirut was being ignored that the initial bombings reached a wider audience.
It shouldn’t be a mystery why those in Lebanon felt slighted. The West’s attention swung toward recognizable Paris and away from remote Beirut, in the same way it swings away from Syria. Or Baghdad. Or Ankara. It’s a classic media quandary: how can any newsroom make its readers care about issues that feel worlds away? The problem has been radically amplified by social media, where news lives and dies based on its shareability.
What is Facebook's role in creating global narratives?
Facebook is both a utility and an immensely powerful media company. The social network wields more influence than any single news outlet on the planet, serving as both a wire service and a forum for 1.01 billion daily users. That means readers in search of a narrative will often turn to Facebook first. That’s an enormous responsibility, especially as the company acts on its ambition of becoming a global portal to the internet at large. We need to ask ourselves: what should Facebook’s role be in determining the narratives that people follow?
These questions became more pressing after Facebook’s use of Safety Check, a feature that allowed people to see whether their friends and loved ones in Paris were safe. It’s a remarkably simple and effective tool that had already proven its value after disasters like the recent Nepalese earthquake. However, terror is a more politically fraught "human disaster," and deploying Safety Check for the Paris attacks but not for those in Beirut earned swift and vocal criticism.
Explaining the decision to deploy Safety Check, Facebook’s VP of growth Alex Schultz wrote:
"We chose to activate Safety Check in Paris because we observed a lot of activity on Facebook as the events were unfolding. In the middle of a complex, uncertain situation affecting many people, Facebook became a place where people were sharing information and looking to understand the condition of their loved ones."
In other words, Facebook used Facebook like the rest of us: it saw a trend in the News Feed and responded to it. It was a humane, well-intentioned act, but it raises fraught questions that Facebook will have to answer in the future. Activating Safety Check in Paris was a decision — almost an editorial decision — based on its view of an event’s importance. And it had media repercussions: users all over the world saw Safety Check notifications from Paris, along with French flags on profile pictures. No one saw the same come out of Beirut or Baghdad.
When Facebook explained its decision to turn on Safety Check, Schultz said that a feature as simple as Safety Check isn’t useful during ongoing crises like war "because there isn't a clear start or end point and, unfortunately, it's impossible to know when someone is truly ‘safe.’" That puts Facebook in the position of evaluating the severity of conflicts and the rarity, and thus the newsworthiness, of attacks.
Facebook is in the position of evaluating the newsworthiness of terror attacks
Facebook will have to make more of these decisions in the future. The company has for years talked about bringing "the next billion" people online, whether via satellite internet or partnerships with mobile operators. Facebook wants to be foundational to a truly connected, borderless citizenry. Connecting the next billion Facebook users means bringing people in unstable regions online. It’s not there yet, of course. But it's worth noting that Facebook deployed a tool in a way that suggested a troubling ethnocentrism for a company that hopes to be a social utility for the world.
Here’s what needs answering: if Facebook has taken up the task of keeping global citizens informed of their loved ones' safety, what is its next set of responsibilities? Should Safety Check scale to behave differently in different places depending on what Facebook defines internally as safe? Beyond that, how does Facebook determine how its algorithm treats news coming out of regions where it provides connectivity, especially when lives are at stake? Should it step in to ensure all its users are informed of events of global importance? Is it Facebook’s responsibility to make us care about countries like Syria in a way the media hasn’t really been able to? Is it Facebook’s responsibility to ensure all people are treated more equally online?
There are no easy answers to these questions. However, there’s no ignoring the fact that Facebook has the power to make decisions that answer them, one way or another. And people will have to hold Facebook accountable for the decisions it makes.