Following the Hamas terrorist attack on Israel, social media companies are facing increased pressure to crack down on violent content, hate speech, and disinformation.
Some social platforms are already sharing details of their response: Meta is tightening security measures amidst an increase in content violating its rules, while TikTok has also committed to stepping up its moderation in the wake of the attacks.
Despite this, both Meta and TikTok are facing scrutiny from the European Commission over whether their response complies with the Digital Services Act, a set of rules that hold large social media companies accountable for preventing illegal content from being posted to their platforms.
The European Commission is looking into the way X (formerly Twitter) is handling the Israel-Hamas war as well and also sent a letter to YouTube to remind the company of its responsibility to keep illegal content and disinformation off its platform.
Here are all the updates on what social platforms are doing in response to the Israel-Hamas war.
Some blue check “Premium” subscribers on X, formerly Twitter, who are spreading misinformation may be eligible for X’s ads revenue sharing program. That’s the conclusion reached by NewsGuard, a for-profit misinformation watchdog organization, in a report that tracked ads appearing on 30 posts from November 13th to the 22nd. The posts made conspiratorial claims about the Israel-Hamas war and reached a collective 92 million views.
Each of the 10 accounts NewsGuard referenced had over 100,000 followers — one of the metrics it uses to classify them as “misinformation super-spreader” posters.
TikTok is taking action against content promoting the manifesto Osama bin Laden wrote discussing his supposed motivations for the 9/11 terrorist attacks. In a statement on X (formerly Twitter), TikTok says it’s “proactively and aggressively removing this content and investigating how it got onto our platform.”
Dozens of videos about the manifesto, titled “Letter to America,” have surfaced on TikTok over the past several days, with CNN reporting the topic amassed “at least” 14 million views by Thursday. Originally published in 2002, the manifesto criticizes the US government’s presence in the Middle East and support of Israel. However, some creators are now trying to apply that criticism to the US government’s response to the ongoing Israel-Hamas war.
Oct 29: X will demonetize posts corrected by Community Notes.
Elon Musk says the change is meant to “maximize the incentive for accuracy over sensationalism.”
A study earlier this month found X Premium verified users were getting heavy engagement as “superspreaders of misinformation” about the Israel-Hamas war.
Oct 26: The Bellingcat guide to wartime disinformation.
The open source intelligence site has been fact-checking information about the Israel-Hamas war during a seismic shift in social media practices, and it outlines some of the most common tells of a false report, alongside a useful reminder for watching disturbing videos:
Ask yourself if there is a genuine reason you need to view this footage.
Social messaging platform Telegram has blocked channels used by Hamas, but only on Android phones due to violations of Google’s app store guidelines. According to CNBC, two channels — hamas_com and al-Qassam brigades — were cut off for Android users, though other channels the group uses, like Gaza Now, are still accessible.
Telegram blamed the blocks on Google’s app store guidelines, according to reporting in The Jerusalem Post. Users reportedly see an error saying the channels can’t be viewed on “Telegram apps downloaded from the Google Play Store,” implying that the ban doesn’t extend to the app when it’s downloaded from elsewhere or used on another operating system.
Oct 23: Google shuts off live traffic data in Maps and Waze for Israel and Gaza.
Bloomberg reports that the Israeli military asked Google to make the change in response to the war, which Google says it agreed to “for the safety of local communities.” The company also disabled the feature in Ukraine last year.
Israeli site Geektime says live traffic is also disabled in Apple Maps in the region.
The vast majority of viral misinformation about the Israel-Hamas war being posted on X (formerly Twitter) is being pushed by verified users, according to a recent study by NewsGuard — a for-profit organization that rates the trustworthiness of news sites. After analyzing the 250 most-engaged X posts between October 7th and October 14th that promoted incorrect or unverified information relating to the war, researchers at NewsGuard found that verified X accounts were behind 74 percent of those posts.
The 250 posts analyzed within the study promoted one of 10 false or unsubstantiated war narratives identified by NewsGuard, including claims that CNN had staged footage of its news crew under attack in Israel, and videos claiming to show Israeli or Palestinian children in cages. In one week, the 250 posts collectively received 1,349,979 engagements (including likes, reposts, replies, and bookmarks) and were viewed over 100 million times globally. Of those top 250 posts, 186 came from verified blue-check X accounts.
Oct 20: A mega translation blunder at Meta.
Instagram has apologized after its auto-translate feature inexplicably inserted the word ‘terrorist’ into Palestine-related bios. “We fixed a problem that briefly caused inappropriate Arabic translations in some of our products. We sincerely apologise that this happened,” a Meta spokesperson told The Guardian.
I have been paying close attention to Brookings’ political podcast database over the past week and a half to see how discussion of the war is taking shape. This is a highly underrated tool and an essential one if you care about the state of political podcasting. It looks at the top political podcasts on the left and right on Apple Podcasts (i.e., no Rogan) and catalogs what they’re talking about. One feature breaks out the most-discussed topics on each side, and the language used is telling about how conservative podcasters are approaching the topic versus liberal ones.
Immediately after the terrorist attacks on October 7th, conservative podcasters dominated the conversation. While “Israel” was the number one topic for conservative shows, it didn’t crack the top 10 on liberal shows. That has changed as the conflict has escalated into war, with “Israel” becoming the number one topic for both sides in the past week, but you can still see the difference in language and approach.
The European Commission is formally requesting information from Meta and TikTok on how they’re handling illegal content and disinformation related to the war in Israel. The inquiry comes as part of the European Union’s newly enacted Digital Services Act (DSA), which holds large online platforms legally accountable for the content posted to them.
Both platforms have until October 25th to respond to the Commission’s request. From there, the Commission will evaluate their responses and “assess next steps.”
Oct 16: It’s TikTok’s turn to explain how it’s moderating the Israel-Hamas conflict.
First came X, then Meta; now TikTok has put out a blog post on its moderation policies in response to EU commissioner Thierry Breton. The video platform says it has removed over 500,000 videos and 8,000 livestreams in the region since the attacks on October 7th and has also added more Arabic- and Hebrew-speaking moderators to its ranks.
European Commissioner Thierry Breton sent a letter to Alphabet CEO Sundar Pichai reminding him that, as a large online platform under the EU’s Digital Services Act (DSA), the company is obligated to keep illegal content and disinformation about Israel’s war with Hamas from being shared on YouTube.
“Following the terrorist attacks carried out by Hamas against Israel, we are seeing a surge of illegal content and disinformation being disseminated in the EU via certain platforms,” Breton wrote. He added that YouTube has an obligation to protect children and teens in the EU from violent content on the platform, must promptly take action in response to notices from the EU, and must have “proportionate and effective mitigation measures in place” to tackle risks from disinformation.
In the three days following the terrorist attacks carried out by Hamas against Israel on October 7th, Meta says it removed “seven times as many pieces of content on a daily basis” for violating its Dangerous Organizations and Individuals policy in Hebrew and Arabic versus the two months prior. The disclosure came as part of a blog post in which the social media company outlined its moderation efforts during the ongoing war in Israel.
Although it doesn’t mention the EU or its Digital Services Act, Meta’s blog post was published days after European Commissioner Thierry Breton wrote an open letter to Meta reminding the company of its obligations to limit disinformation and illegal content on its platforms. Breton wrote that the Commission is “seeing a surge of illegal content and disinformation being disseminated in the EU via certain platforms” and “urgently” asked Meta CEO Mark Zuckerberg to “ensure that your systems are effective.” The commissioner has also written similar letters to X, the company formerly known as Twitter, as well as TikTok.
The European Union (EU) has formally opened an investigation into X, the platform previously known as Twitter, to ensure it’s complying with the Digital Services Act (DSA) following Hamas’ attack on Israel in early October and subsequent Israeli air assault on Gaza. According to the request, this comes after “indications received by the Commission services of the alleged spreading of illegal content and disinformation, in particular the spreading of terrorist and violent content and hate speech.”
Earlier this week, EU Commissioner Thierry Breton sent a letter to X owner Elon Musk alleging that the platform is “being used to disseminate illegal content and disinformation in the EU.”
X CEO Linda Yaccarino says the social media platform formerly known as Twitter has identified and removed “hundreds” of Hamas-affiliated accounts and has “taken action to remove or label tens of thousands of pieces of content” in the wake of terrorist attacks carried out by Hamas against Israel. Yaccarino’s letter comes in response to concerns raised by EU Commissioner Thierry Breton that X is being used to “disseminate illegal content and disinformation” in possible violation of the EU’s tough new Digital Services Act (DSA).
The back-and-forth between X and the European Union comes as the EU implements the DSA, which imposes obligations on large online platforms to remove illegal content and mitigate risks to public security more generally. In addition to X, Breton has also written to Meta to remind it of its obligations under the DSA.
European Commissioner Thierry Breton warned Meta CEO Mark Zuckerberg Wednesday that failing to remove pro-Hamas content across his platforms could put the company in violation of new EU moderation regulations.
In a letter to Zuckerberg Wednesday, Breton urged Meta “to be very vigilant” in removing illegal terrorist content and hate speech amid the ongoing war in Israel. Breton said that the European Commission had seen “a surge of illegal content and disinformation being disseminated in the EU,” potentially putting social media platforms in violation of its Digital Services Act, or DSA.
This is Platformer, a newsletter on the intersection of Silicon Valley and democracy from Casey Newton and Zoë Schiffer. Sign up here.
Today, after a long weekend of awful terrorist violence in Israel, let’s talk about the shifting landscape for social networks amid the current crisis — and consider the path ahead for Meta’s Threads app.