In the fall of 2015, a new wave of violence hit Israel. It began with a vehicle attack in October, followed by a string of shootings and stabbings. The wave also broke on social media, where some Palestinian groups shared cartoons and memes encouraging violence against Jews, many posted by accounts linked to Hamas. Over the following 15 months, 47 people were killed and 675 wounded in related violence, according to a tally by the country’s Ministry of Foreign Affairs.
Among certain writers, it became known as the Facebook Intifada, a designation favored by Israeli Prime Minister Benjamin Netanyahu. “What we are seeing here is a combination of radical Islam and the internet,” Netanyahu told the Likud Party that October, in a widely quoted speech. “Osama bin Laden meets Mark Zuckerberg. The incitement in the social networks is moving the murders.”
“A combination of radical Islam and the internet... Osama bin Laden meets Mark Zuckerberg.”
Now that association has landed Facebook in federal court. In the US District Court for the Eastern District of New York yesterday, Facebook faced off against plaintiffs who argue the company’s basic services fueled the violence, putting thousands of Israeli lives in danger. If successful, the suit would force Facebook to stop providing services to known terrorists and hand over as much as $1 billion in damages. It’s a major challenge to the usual interpretation of the law, which tends to protect platforms like Facebook from being sued over what their users do. But after a spate of terrorism lawsuits against social networks, and a newfound acceptance of social responsibility from Facebook itself, those legal protections are being put to the test.
In court, Facebook argued the case should be dismissed immediately, appealing to a much-cited clause in the Communications Decency Act that protects providers of interactive computer services from being held liable as publishers of their users’ content. Known as Section 230, the clause is the reason you can’t sue your ISP for unwittingly transmitting a pirated movie, and it has historically shielded platforms like Facebook and Twitter from liability for anything a user does in a public post.
According to Facebook’s defense, there’s no reason those same protections shouldn’t extend to decisions about whether to ban posts or users linked to Hamas. “This case, while it does certainly raise complex and important social issues, as a legal matter is a straightforward application of the CDA,” said Facebook attorney Craig Primis. “Decisions Facebook makes about how to operate its service are just as protected as any individual piece of content.”
“All they have to do is understand their legal obligation.”
But according to the plaintiffs, those protections don’t apply when you’re doing business with terrorists. Opposing counsel pointed to the Anti-Terrorism Act, which forbids providing material support to terrorist groups. As plaintiffs’ counsel Robert Tolchin argued in court, that concept of “material support” could include providing a Facebook account to an individual on the Treasury Department’s list of Specially Designated Nationals.
“I’m not saying Facebook has to look at everybody’s Facebook page and make an editorial decision,” Tolchin told the judge. “All they have to do is understand their legal obligation not to provide services to the people on a list.”
The argument is similar to those made in a number of lawsuits brought against Twitter seeking to hold that company accountable for attacks linked to ISIS. Thus far, all of those cases have been dismissed, but the Facebook case includes a number of new charges focused on the platform’s algorithms, which actively match users with like-minded people and groups. According to the plaintiffs, that includes “introductions between those who incite to murder and mayhem, and those who are interested in committing murder and mayhem.” That has led some legal scholars to see the recent Israeli challenges against Facebook as the strongest social media terrorism lawsuits yet.
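Neither the complaint nor the hearing detailed how those matching algorithms actually work, and Facebook’s real recommendation systems are proprietary. But the general mechanism the plaintiffs describe, suggesting people and groups based on shared interests, can be sketched in a few lines of Python. Everything below is a hypothetical toy, not Facebook’s code:

```python
# Toy illustration only: a bare-bones interest-overlap matcher of the
# general kind the complaint describes. All names and data are made up.

def jaccard(a: set, b: set) -> float:
    """Overlap between two interest sets: intersection size over union size."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def suggest_groups(user_interests: set, groups: dict, k: int = 3) -> list:
    """Rank candidate groups by how closely their topics match the user's."""
    ranked = sorted(groups,
                    key=lambda g: jaccard(user_interests, groups[g]),
                    reverse=True)
    return ranked[:k]

groups = {
    "hiking_club":  {"hiking", "camping", "outdoors"},
    "film_society": {"movies", "criticism", "festivals"},
}
print(suggest_groups({"hiking", "camping", "photography"}, groups))
# -> ['hiking_club', 'film_society']
```

The plaintiffs’ point is that a system like this is indifferent to what the shared interest is; the same overlap logic that surfaces a hiking club can just as easily introduce users who share an interest in violence.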
A fake account named for a senior member of Hamas
Tolchin made the case more aggressively than previous lawyers, going so far as to create a fake terrorist account on Facebook to drive home his point. During arguments, he directed the court to an account an Israeli friend had registered on Monday under the name Mousa Abu Marzook, a senior leader of Hamas in Palestine. The account remained live throughout Wednesday, although it was taken down shortly before press time.
The Treasury Department’s list of designated terrorists and other sanctioned parties is already a major enforcement tool for traditional banks and apps like Venmo, which use the list to block money transfers to terrorist groups. Most financial services companies will block any transaction marked with a term on the list, including seemingly innocuous phrases like “idek” or “blue sky.” So far, courts have not ruled that those measures are necessary for communications services.
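For a sense of what that kind of screening involves, here is a minimal Python sketch of matching text against a local copy of the Specially Designated Nationals list. The file path, the column layout, the similarity threshold, and the reject_transaction handler are all assumptions for illustration; production compliance systems rely on far more elaborate alias and transliteration matching:

```python
# Minimal sketch of list-based screening, the kind of check payment
# services run against the Treasury's SDN list. File path, column
# layout, and threshold are assumptions, not a real compliance spec.
import csv
from difflib import SequenceMatcher

def load_sdn_names(path: str) -> list:
    """Read names from a local CSV export of the SDN list (assumed column 2)."""
    with open(path, newline="", encoding="utf-8") as f:
        return [row[1].lower() for row in csv.reader(f) if len(row) > 1]

def screen(text: str, sdn_names: list, threshold: float = 0.85) -> bool:
    """Flag `text` if it contains, or closely resembles, a listed name."""
    text = text.lower()
    return any(name in text or
               SequenceMatcher(None, name, text).ratio() >= threshold
               for name in sdn_names)

# Inline demo with a single made-up list entry:
print(screen("transfer to mousa abu marzook", ["mousa abu marzook"]))  # True

# Hypothetical usage: block a transfer whose memo line trips the screen.
# names = load_sdn_names("sdn.csv")           # assumed local copy of the list
# if screen("transfer to mousa abu marzook", names):
#     reject_transaction()                    # placeholder handler
```

A check like this explains both sides of the phenomenon the article describes: it catches listed names wherever they appear, and it also produces false positives on innocuous phrases that happen to resemble list entries.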
District Judge Nicholas Garaufis oversaw the proceedings, and seemed particularly interested in more severe moderation tactics Facebook could adopt. “Is it possible to eliminate the accounts of, say, people who live in the seven countries designated by the recent executive order on immigration?” Garaufis asked at one point. “I’m interested in knowing whether Facebook could focus on certain areas and eliminate the ability of those areas to disseminate content.”
“There is no place on Facebook for groups that engage in terrorist activity.”
Even if the court rules against the current motion to dismiss, the plaintiffs will have a long road ahead, as Facebook raised additional questions of jurisdiction and standing that could moot the case in the months to come. But the downside of even a partial defeat for Facebook could be immense, leaving the company’s moderation strategies open to revision by judges and members of Congress. For users, the result could be more aggressive and opaque bans, particularly if you try to form a Facebook group about blue skies.
Reached for comment, a Facebook spokesperson said the company was committed to the safety of its users. “Our community standards make clear that there is no place on Facebook for groups that engage in terrorist activity or for content that expresses support for such activity,” the spokesperson said, “and we take swift action to remove this content when it’s reported to us. We sympathize with the victims of these horrible crimes.”