
Who is responsible for taking down Nazi GIFs?

Moderating animated hate


In the first grainy black-and-white photograph, the men are digging pits while onlookers watch from a ridge behind them. In the foreground, a Nazi soldier looks on, his back to the camera. In the next slide, tiny figures dig around the edge of a vast, chalky bowl in the earth. Then, a street: a man lies motionless on the pavement, blood oozing from his head. Then, the pit again, now filled with bodies. Two soldiers mill among the dead.

More black-and-white photographs follow: women and children awaiting execution; a man kneeling over a trench full of bodies while a soldier takes aim at his head with a pistol; a picture of shoes, trodden into mud. Together, they make up a slideshow GIF, since removed by its host, Giphy, depicting a massacre of Ukrainian Jews by the SS in 1941. Titled and tagged “descendantsnet,” the GIF appeared as a search result for the term “einsatzgruppen” — the death squads of Nazi Germany — on both Facebook Messenger and Twitter’s direct messaging service.

This was just one example among a swathe of offensive GIFs, depicting everything from explicit racism and Nazi imagery to graphic violence and sexual assault, that have been available at various times for users to post on Facebook and Twitter. These animations were accessible on both platforms thanks to a complex supply chain in which it is unclear not only how users can report and scrub disturbing content, but also who is responsible for policing the source databases in the first place.

While the “einsatzgruppen” GIF and many of its ilk have been removed, the mechanisms that allowed them to proliferate remain unchanged. This lack of accountability in moderating GIFs raises further troubling questions about how social media platforms choose to commission, market, and police third-party content. That’s especially true for content that easily lends itself to manipulation and dissemination by trolls and hate groups, whose growth online over the last few years has swelled into a massive existential crisis for social media platforms at large.


Introduced in 1987 as a still-image format and later extended to support silent, looping animation, GIFs have effectively become their own digital language. They’re endlessly mutable reactions that can be deployed in virtually any context, and they often convey messages more efficiently than text in just a few frames. Once strange internet curios, GIFs have become an amusing refuge for many amid an increasingly brutal and confusing news cycle. For others, they’re a useful attack vehicle. You can use GIFs to flirt and network. Nation-states converse in them. When Ukraine suffered a major cyberattack last year, its official reaction was to tweet a GIF of the “This is fine” dog, a meme described by its creator as “halfway between a shrug and complete denial of reality.”

GIFs have become a refuge for many amid an increasingly brutal and confusing news cycle

The incredible popularity of GIFs has also made them a potent commercial proposition. Over the past few years, two startups have come to dominate the GIF landscape: Giphy, founded in 2013, and Tenor (originally called Riffsy), founded a year later. Giphy and Tenor’s exhaustive databases supply Facebook, Twitter, iMessage, Slack, and more with hundreds of thousands of GIFs. Their respective websites also allow anyone to create or upload their own GIFs to these databases. In February, Tenor announced that searches for GIFs on its proprietary keyboard topped 12 billion a month, while Giphy says it has up to 300 million daily active users.

Both firms rely on a mixture of corporate partners, search engine indexing, and ordinary, often anonymous users to upload GIFs, which are in turn piped to Facebook, Twitter, and other clients. Inviting the entire internet to effectively create new content on your platform has its risks. When it comes to GIFs, the most obvious and most easily combated is copyright violation. For its part, Giphy managed to head off a majority of potential Digital Millennium Copyright Act (DMCA) concerns by entering into licensing agreements with major commercial partners.

The other, more chilling danger is that vile and distressing content will end up on both sites, either accidentally indexed by custom web crawlers or deliberately uploaded; the latter has long been a problem for more prominent video platforms like YouTube. To reduce the chance of offensive GIFs filtering onto their sites, both providers prohibit uploads of hate speech and of defamatory, illegal, pornographic, or violent content, and they employ a mixture of human and automated moderation tools to enforce those rules. If an offensive GIF from a database is posted on Facebook or Twitter, reporting tools on both platforms allow users to flag posted clips for removal — though there is no mechanism on either platform to report GIFs that surface from third parties when users search.

Racist and anti-Semitic GIFs originating from Giphy continue to surface. In November 2015, the Times of Israel reported on the existence of a 9/11 conspiracy theory GIF appearing in results for “Israel” in Facebook’s GIF search. In March 2017, an Australian news site discovered that hundreds more clips depicting Hitler and Nazi imagery were available to post from Facebook’s GIF button. (Facebook subsequently blocked GIFs with the keyword “Hitler.”) Then, in July 2017, Forbes reported several cases where explicitly anti-Semitic clips were being returned in searches for the term “Jew,” one of which was “subsequently used to harass a Jewish reporter on Twitter.” Earlier this year, Instagram and Snapchat temporarily suspended GIF functionality on their platforms after a racist animation was discovered in Instagram’s Sticker library as a search result for the term “crime.”

Confusion about who is accountable for moderating GIFs stems from the seamless way they appear to an average user

Any GIF supplied to either Twitter or Facebook has to adhere to each platform’s community standards, which either prohibit or impose an age limit on graphically violent or pornographic content and ban hate speech (including Nazi symbols). But, for the most part, users can only report content after it is posted, not if it appears in a search that draws from a third party. (One flagging option exists for users of the desktop version of Facebook, and while you can technically attempt to report GIF search results this way, the tool is intended to report broken site features.) Everywhere else, users are only able to rely on flagging tools designed for posted content or tracing the animation back to its original host and alerting moderators to its existence there. This creates a bottleneck that likely discourages most casual users from reporting.

When approached about its GIF-flagging mechanisms, Twitter declined to comment. A spokesperson from Facebook, meanwhile, replied that the site has had “several conversations with our partners, including when something that does not meet our community standards is reported to us.” The spokesperson also repeated a statement the company had given to Forbes in July 2017: “GIFs that can be sent on Messenger are supplied by third parties. If we are made aware that a third-party supplier is violating our policies, we reach out to the supplier to ensure they address the issue by removing the violating content.”

“Is that a satisfactory response? I think it’s probably a truthful response in this case, but it doesn’t feel very satisfactory,” says Sarah T. Roberts, an expert in content moderation at UCLA. “It’s an arbitrary kind of difference that most users would have trouble understanding. Think about Mark Zuckerberg’s performance in front of Congress … [those senators and representatives] didn’t even get some of these distinctions.”

In theory, the current model for policing GIFs should work. After all, there are multiple stages at which the appropriateness of a clip is judged: at the upload stage, where the GIF is subject to a mix of human and automated review; against Facebook or Twitter’s community standards; and, ultimately, by the users of those platforms, who should be able to alert the platform, and the provider, if the animation breaks the rules.

But this assumes that the rules governing what is and isn’t permissible are the same between third-party providers and the social platforms they serve. In fact, standards differ widely, even just between Tenor and Giphy. Tenor, for example, prohibits the upload of content that “is violent or threatening or promotes violence,” which is in line with Facebook’s rules on the issue. Giphy, meanwhile, makes an exception for fictional violence, which is why a search for clips from Inglourious Basterds returns several violent clips on Giphy that are blocked as options on Messenger.

The degree of anonymity afforded to uploaders also varies: on Giphy, any visitor can upload a GIF, although you need to be a verified user for your clip to appear as a search result. On Tenor, users have to create an account before they can upload anything. Search terms are also policed differently. A search for “Nazi” on Giphy, for example, will yield no results; type the same term on Tenor, meanwhile, and a cornucopia of animated Hitler clips will appear. Until very recently, there was also a disparity in the two sites’ reporting mechanisms. On Giphy, users can flag distressing content via each GIF’s details page; an equivalent mechanism was absent from Tenor’s website until late May.

This indirect, multi-pipeline moderation approach also means visitors to Facebook, in particular, are often unaware that GIFs are supplied to the site by third parties. In its report on anti-Semitic clips in 2015, a Times of Israel headline described them as a “Facebook feature,” while in March 2017, the Australian B’nai B’rith Anti-Defamation Commission called on Mark Zuckerberg to “personally intervene” and remove clips depicting Hitler and Nazi symbols. Sometimes Facebook’s own moderators don’t appear to know who is responsible for censoring GIF content. In July 2017, Forbes also reported on the case of a user named Liz Dobin, who was repeatedly frustrated in her attempts to report an anti-Semitic GIF. At one point, a moderator informed her that she was “asking a question that we can’t support from here.”

This confusion about who is accountable for moderating GIFs stems from the seamless way the function appears to an average Facebook or Twitter user, says Tarleton Gillespie, author of Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media.

“If you [want to] throw an animated GIF into your post, you don’t want that to feel like you have to shift over to the [supplier], log in, choose something in their search engine, and shift back,” says Gillespie. “The more you can minimize that series of steps, the more likely you’re going to use it, and the more likely it’s going to feel intuitive. But that erases the fact that [social media platforms and GIF databases] are actually partners, doing this together.”

Direct appeals on Facebook’s Help Community have often fallen on deaf ears. Two years ago, user Ryan Morse posted a question asking how he could report a cartoon portrayal of blackface in response to the search term “too much.” He knew that GIFs were supplied to Facebook via third parties, but he wasn’t sure what role the social network played in moderating that content. “Ultimately it wasn’t something that I ever heard back on,” says Morse. “There didn’t really appear to be a process or … accountability for something that goes through their system.”

“There didn’t really appear to be a process or accountability for something that goes through their system.”

Several other users posted similar queries on the Help Community about how to report offensive GIFs in-search, including Thorlaug Ágústsdóttir, a political scientist from Iceland. Ágústsdóttir had discovered that the first result for the term “feminism” was a loop of a red-haired woman shouting into a camera with the word “TRIGGERED” under her face. Unable to find a way to report it, she and her friends tried to game the system. Their failure underscored how little is known about how GIFs are delivered to Facebook.

“I thought they were user-generated,” says Ágústsdóttir. “But it doesn’t seem that way, since a group of 20 women was not able to budge what came up in their GIF selection after posting new GIFs into the pool.”


In most cases where the appearance of offensive GIFs on social media has been discussed publicly, suppliers and platforms have adopted a reactive posture: an animation is identified, both companies are approached for public comment, and the supplier responds by removing the GIF(s). It’s a process contingent on the existence of an offended user, usually one who only stumbled across an offensive animation after using a benign search term.

A cursory investigation reveals that plenty of material is easily accessible on Twitter and Facebook to a second type of user: one who deliberately searches for offensive material to post, and thus has no inclination whatsoever to report its existence. In multiple instances, it was possible to bypass banned search terms for racial slurs, often simply by placing spaces between the letters of a word or phrase. Many of the GIFs that resulted depicted racist stereotypes, although others simply portrayed Black or Asian people. Precisely why this is possible remains unclear, since almost none of the clips had been tagged with those slurs on either Giphy or Tenor.

For the most part, users can only report content after it is posted

“The kinds of mechanisms that are being employed to circumvent the guards that do exist — for example, as you describe, putting of spaces between the letters of racist terminology — those are as old as the social internet itself,” says Roberts. What normally combats this is a simple word list of banned terms. “It’s just the easiest, least technologically complicated automated content barrier. It’s also incredibly easy to defeat.”
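To make Roberts’ point concrete, here is a minimal sketch, in Python, of how such a banned-term word list might work and why spaced-out letters slip past it. The term list, function names, and the normalization fix are illustrative assumptions for this article, not a reconstruction of Giphy’s, Tenor’s, or any platform’s actual filter.

```python
# A minimal sketch of the kind of banned-term list Roberts describes,
# and of why spaced-out letters defeat it. The term list, function
# names, and the normalization fix are illustrative assumptions, not
# any platform's actual filter.

BANNED_TERMS = {"nazi", "einsatzgruppen"}  # illustrative entries only

def naive_filter(query: str) -> bool:
    """Block a search only when a banned term appears as a whole word."""
    return any(term in query.lower().split() for term in BANNED_TERMS)

def normalized_filter(query: str) -> bool:
    """Also check the query with spaces and punctuation stripped,
    which catches evasions like 'n a z i'."""
    collapsed = "".join(ch for ch in query.lower() if ch.isalnum())
    return naive_filter(query) or any(term in collapsed for term in BANNED_TERMS)

print(naive_filter("n a z i"))       # False: the word list is defeated
print(normalized_filter("n a z i"))  # True: collapsing the spaces catches it
# The trade-off: substring matching on the collapsed string will also
# over-block innocuous queries, one reason simple lists stay crude.
```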

Another way for trolls to sidestep content barriers is to tie media to search terms that are more general (or more obscure) in meaning, masking a clip’s intent. There is evidence to suggest that both tactics may have been taking place on Giphy and Tenor. The search term “rape” conjured many violent and tasteless images on Facebook Messenger. (None appeared on Twitter.) Meanwhile, the search term “armenians” yielded a slideshow of victims of the 1992 Khojaly Massacre. “Schutzstaffel” produced reams of Nazi imagery, while results for “genocide” included archived newsreel footage of a Nazi rally and an image of a field dotted with funerary crosses displaying the text “White Genocide South Africa.”

The “einsatzgruppen” GIF depicting the 1941 Babi Yar massacre also fits into this category of imagery that has been egregiously divorced from its original context. On Giphy, its source is listed as an obscure family history website created by Christine Usdin, an artist and amateur genealogist credited with translating thousands of Latvian census records into English. The site — mostly dedicated to the story of Višķi, a former shtetl in Latvia — is a source for the settlement’s Wikipedia page, which explains that most of the Jewish inhabitants were murdered by Einsatzgruppen units or forcibly moved to the nearby Daugavpils ghetto. The animation was one of several on the site meant to “show the barbary of the Nazis.” Usdin certainly did not upload it to Giphy: it appeared on the platform in October 2016, and Usdin died in 2013.

Usdin’s Babi Yar GIF underwent the same review and distribution process as a two-second clip from The Office or the latest NBA game. In response to The Verge’s request for comment, Tenor issued an apology for hosting offensive GIFs on its platform and said that it works hard to ensure that the animations it supplies to its partners are consistent with their community standards and policies.

“The putting of spaces between the letters of racist search terms is as old as the social internet itself.”

“We have an extensive human review process before content can be posted to Tenor, and have content flagging options in our app and on our website,” a spokesperson told The Verge, adding that the company proactively reviews content in its existing library and that “[w]e’ve also added flagging options to each GIF on our website to make it even easier to report content.” Tenor said that all the animations uncovered by The Verge had now either been removed or recategorized. Content classifiers have also been introduced, and a bug that permitted users to conduct inappropriate searches by typing spaces between words has since been fixed.

A spokesperson from Giphy, meanwhile, stated that the provider has a “zero-tolerance policy for any content that violates our Community Guidelines,” and it already has “multiple levels of content moderation filters for every piece of content that is distributed through GIPHY,” including a mixture of software, third-party services, and its in-house flagging mechanism. “We have direct lines of communication with our integration partners to immediately address any content issues they find that violate their own content guidelines.” The spokesperson added that Giphy proactively remoderates content on a regular basis, and maintains a blacklist of terms and offensive content, which is continually updated.

Even so, these safeguards did little to prevent clips of Nazi rallies, rape jokes, and graphic footage of war crimes from appearing on Facebook and Twitter. These GIFs were hidden in plain sight on Tenor and Giphy, in some cases for years. That’s often because proactive moderation policies, even where they exist, are not rigorously enforced. Absent that enforcement, and a more integrated and thorough flagging mechanism for Giphy and Tenor GIFs on Facebook and Twitter, users will remain confused about — and therefore discouraged from — reporting offensive content. And there is little to suggest that graphic and distressing content will not continue to be accidentally indexed or deliberately uploaded by trolls and hate groups in years to come.


Other platforms have adopted a more drastic approach to GIF moderation than Facebook and Twitter. When an Instagram user in the UK reported in March that a racist animation appeared in the platform’s Stories feature, Instagram suspended GIF functionality within hours. In a statement to TechCrunch, Giphy said that the animation “was available due to a bug in our content moderation filters specifically affecting GIF stickers.” GIFs were reinstated on both Instagram and Snapchat later that month, after Giphy said it had scrubbed its sticker library no fewer than four times.

Moderation rules speak volumes about the kind of place a social network wants to be for its users

The design of Facebook and Twitter allows them more creative options than Instagram or Snapchat. “An easy fix — and the first order of business — ought to be the fixing of the extant reporting mechanism, so [that it] functions in a real way,” says Roberts. “That’s the baseline. That’s an easy thing they could do right now.”

That would mean adding a mechanism for users to report an offensive search result within the GIF button search, which would allow them to instantly report its existence to both the platform and the provider. This would, in turn, necessitate unified policies about what is permissible content, more direct lines of communication between the two, and a clearer public explanation by both as to who is responsible for moderating GIFs on social media for the benefit of users who are still confused.

While an effective flagging system within Facebook and Twitter’s GIF searches might be a start, it would also need to be complemented by other moderation tools. “You get people who game that system,” says Gillespie, like trolls that upload outrageous content or those who complain about non-violating content just to see what happens.

Facebook and Twitter could also censor GIFs via search terms more aggressively, but this becomes problematic when the subject a word denotes is too general. Automated moderation tools are also useful in spotting offensive content before the majority of users can see it; Facebook recently credited such tools as the reason it was able to take enforcement action on 1.9 million al-Qaeda and ISIS posts on its platform. Algorithms, though, are not infallible, and they require extensive training to ensure they correctly understand the context of the media they are tasked with policing. All one has to do is Google “racist GIFs” and click on the first Giphy search result — which delivers GIFs that depict someone saying something is racist right alongside GIFs that actually feature racist images — to understand how limited automated screening tools can be.

Only human moderators — tried, tested, and scarred — can make these nuanced decisions

In the end, none of these tools and actions are equal to the careful work of human moderators who trawl through the muck of vile online content — though it often comes at great personal cost. “I don’t want to suggest that we put more people in the sight, in the bullseye, of that kind of work,” says Roberts, “but barring technological mechanisms, which in many cases are not adequate, that is the ‘solution.’”

GIFs are polysemic: they are capable of conveying many meanings at once, in multiple registers of emphasis. Automated tools like PhotoDNA can help enormously in thwarting the spread of illegal content contained within them, but offensive sentiments wrapped in benign imagery may escape them.
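The limits of that approach are easy to demonstrate with a toy version of hash-based screening: fingerprint files already identified as illegal, and block byte-identical copies. The sketch below is an illustrative assumption, not PhotoDNA itself, whose perceptual hashes are specifically designed to survive the resizing and re-encoding that defeat an exact hash.

```python
# A toy, exact-match version of hash-based screening for known-bad
# media. Real tools like PhotoDNA use perceptual hashes that tolerate
# resizing and re-encoding; this SHA-256 stand-in is brittle on
# purpose, to show the failure mode.

import hashlib

def fingerprint(data: bytes) -> str:
    """Fingerprint a media file by hashing its raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Fingerprints of previously identified files (placeholder bytes here).
known_bad = {fingerprint(b"previously-identified-gif")}

def is_known_bad(data: bytes) -> bool:
    """Flag a file only if it is a byte-identical copy of a known one."""
    return fingerprint(data) in known_bad

print(is_known_bad(b"previously-identified-gif"))   # True: identical copy
print(is_known_bad(b"previously-identified-gif."))  # False: one changed
# byte evades the exact hash -- and no hash, however robust, can flag
# a GIF whose pixels are benign but whose message, in context, is not.
```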

Throwing more human moderators at the problem is not a solution Roberts is happy recommending, but, she says, it’s one of the few available for policing a medium ripe for abuse. A GIF’s “raison d’être is to be this abstracted form that transmits some kind of powerful meaning in the shortest amount of time,” she explains. “So what are people going to reach for? Well, maybe they just want to reach for the nadir.”