The intersection of copyright and harassment

On December 15th, 2014, a full en banc panel of 11 judges of the Ninth Circuit Court of Appeals sat for oral arguments in Garcia v. Google. Cris Armenta, the attorney for the plaintiff, began her argument:

Cindy Lee Garcia is an ordinary woman, surviving under extraordinary circumstances. After YouTube hosted a film trailer that contained her performance, she received the following threats in writing:

Record at 218: “Are you mad, you dirty bitch? I kill you. Stop the film. Otherwise, I kill you.”

Record at 212: “Hey you bitch, why you make the movie Innocence of Muslim? Delete this movie otherwise I am the mafia don.”

Record at 220: “I kill whoever have hand in insulting my prophet.”

Last one, Record at 217. Not the last threat, just the last one I’ll read. “O enemy of Allah, if you are insulting Mohammed prophet’s life, suffer forever, never let you live it freely, sore and painful. Wait for my reply.”

At this point, Armenta was interrupted by Judge Johnnie Rawlinson. “Counsel, how do those threats go to the preliminary injunction standard?”

Indeed, it was an odd way to begin, and the observers—mostly lawyers deeply familiar with copyright who had followed the case with great interest—were confused by it. Wasn’t Garcia a case about copyright law and preliminary injunctions?

For Cindy Lee Garcia, of course, it wasn’t. It was a case about her right to control her exposure on the internet. But in her quest to end the barrage of hate aimed at her, she ended up in a messy collision with copyright doctrine, the Digital Millennium Copyright Act (DMCA), the Communications Decency Act (CDA), and the First Amendment.

The Ninth Circuit had released a previous opinion in the case earlier that year, written by then-Chief Judge Alex Kozinski. Kozinski’s initial Garcia opinion, issued after a three-judge panel heard the case, may have made few headlines, but it caused a wild frenzy in the world of copyright academia. In short, it appeared to break copyright law as it had been understood for decades, if not a century.

The case was a hard one. The plaintiff was sympathetic, the facts were bad, and the law was—Kozinski aside—straightforward. Cindy Garcia had been tricked into acting in the film The Innocence of Muslims. Her dialogue was later dubbed over to be insulting to the prophet Mohammed. The film’s controversial nature would later play an odd role in geopolitics; at one point, the State Department would blame the film for inciting the attack on the US diplomatic compound in Benghazi.

Meanwhile, Garcia was receiving a barrage of threats due to her role in the film. She feared for her safety. The film’s producers, who had tricked her, had vanished into thin air. She couldn’t get justice from them, so she had to settle for something different. Garcia wanted the film taken offline—and she wanted the courts to force YouTube to do it.

Garcia had first tried to use the DMCA to ask YouTube to take the film down. YouTube wouldn’t honor her request. Their reasoning was simple: the DMCA is a process for removing copyrighted content, not offensive or threatening material. While Garcia’s motivations were eminently understandable, her legal claim had no footing. The copyright owner of the trailer for The Innocence of Muslims was Nakoula Basseley Nakoula, not Garcia.

Garcia pressed the theory that her “performance” within the video clip (which amounted to five seconds of screen time) was independently copyrightable, and that she had a right to issue a DMCA takedown. YouTube disagreed, and its position was far from unfounded—numerous copyright scholars also agreed. (In the December 2014 en banc hearing, Judge M. Margaret McKeown would comment, “Could any person who appeared in the battle scenes of The Lord of the Rings claim rights in the work?”)

Garcia went to court. She lost in the district court, and she appealed to the Ninth Circuit. To nearly everyone’s surprise, then-Chief Judge Kozinski agreed with her that her five-second performance had an independent copyright, a move that went against traditional doctrinal understandings of authorship and fixation.

A strange thing then unfolded: it wasn’t merely a decision that Garcia had a copyright inside of a work someone else had made. If it had been, Garcia could have gone home and reissued the DMCA request. But the court also ordered YouTube to take down the video—creating an end-run around the DMCA, even though the DMCA notice-and-takedown procedure had been specifically designed to grant services like YouTube “safe harbor” from lawsuits so long as they complied with it. (Cathy Gellis, in an amicus brief written for Floor64, additionally argued that an end-run around CDA 230 had also been created.) Judge Kozinski had broken copyright law and the DMCA.

Google/YouTube immediately petitioned for a rehearing en banc—essentially, asking the court of appeals to rehear the case with 11 judges sitting instead of only three. Their petition was accompanied by 10 amicus briefs from newspapers, documentarians, advocacy groups, industry groups for technology companies and broadcasters, corporations like Netflix and Adobe, and law professors by the dozen.

Nobody liked the Garcia ruling. What did it mean for news reports that cast interview subjects in an unflattering light? And what did it mean for reality television shows? For documentaries? What did it mean for services like Netflix that hosted those shows and documentaries? The first Ninth Circuit opinion had created a gaping hole in copyright and had pierced through the well-settled rules that governed how copyright liability worked on the internet.

In May 2015, Kozinski’s first ruling was reversed by the en banc panel. “We are sympathetic to her plight,” the court wrote. “Nonetheless, the claim against Google is grounded in copyright law, not privacy, emotional distress, or tort law.”

Lurking beneath the thorny legal and doctrinal issues of Garcia is the great paradigm shift of the present digital age: the rise of the conscious and affirmative belief that women should have, must have, some kind of legal recourse against threats online.

Cindy Lee Garcia is a woman stuck between a rock and a hard place. Nonetheless, the 2014 Garcia decision was wrongly decided. Garcia is not just a weird copyright case; it’s a case that speaks volumes about popular attitudes toward online harassment and about the dead end that comes from a focus on content removal.

How the DMCA taught us all the wrong lessons

Cindy Garcia went straight to the DMCA because it was the “only” option she had. But it was also the “only” option in her mind because 16 years of the DMCA had trained us all to think in terms of ownership, control, and deletion.

When you assume that your only recourse for safety is deletion, you don’t have very many options. It’s often very difficult to target a harassing poster directly. They might be anonymous. They might have disappeared. They might live in a different country. So usually, when seeking to delete something off the web, wronged individuals go after the platform that hosts the content. The problem is that those platforms are mostly immunized through Section 230 of the Communications Decency Act. The biggest gaping hole in CDA 230, however, is copyright. That’s where most of the action regarding legally required deletion on the internet happens, and all of that is regulated by the DMCA.

The Digital Millennium Copyright Act

The Digital Millennium Copyright Act, among other things, provides “safe harbor” to third-party intermediaries so long as they comply with notice-and-takedown procedures. So if a user uploads a Metallica music video without permission, Warner Bros. cannot proceed directly to suing YouTube. Instead, Warner Bros. would send a DMCA notice. If the notice is proper, YouTube must either take down the video or forfeit its “safe harbor.”

The safe harbor provision of the DMCA is widely credited with encouraging the rise of services like YouTube, Reddit, WordPress, and Tumblr—services that are considered pillars of the current internet. These sites host user-generated content. While there are certainly rules on these sites, the mass of user-generated content can’t be totally controlled. Without DMCA safe harbor, these sites couldn’t cope with copyright liability for material that slipped through the cracks.

Although today YouTube uses a sophisticated Content ID system that does manage to automatically identify copyrighted content with surprising accuracy, Content ID was developed later in YouTube’s history. This extraordinary R&D project couldn’t have existed without the early umbrella of protection provided by DMCA safe harbor. Theoretically, DMCA safe harbor protects the little guys, ensuring that the internet will continue to evolve, flourish, and provide ever-new options for consumers.

The DMCA is also one of the handful of ways you can force an online intermediary to remove content.

The Communications Decency Act, Section 230

Under present law, the DMCA works in lockstep with Section 230 of the Communications Decency Act, which generally immunizes services from legal liability for the posts of their users. Thanks to CDA 230, if someone tweets something defamatory about the Church of Scientology, Twitter can’t be sued for defamation.

There are very few exceptions to CDA 230. One notable exception is federal law banning child pornography. But the big one is copyrighted material. Copyright infringement is not shielded by CDA 230; instead, it is governed by the provisions of the DMCA.

CDA 230 was created in response to Stratton Oakmont, Inc. v. Prodigy, a case where the web service Prodigy was sued for bulletin board posts that “defamed” Wall Street firm Stratton Oakmont. (Today, Stratton Oakmont is best known as the company in the Martin Scorsese film The Wolf of Wall Street.)

At the time, Prodigy received 60,000 postings a day on its bulletin boards. The key was that Prodigy did enforce rules, even if it couldn’t control every single posting. By taking any sort of action to curate its boards, it had opened itself up to liability. Strangely, the Stratton Oakmont decision discouraged moderation and encouraged services to leave their boards open as a free-for-all. So Congress sought to reverse Stratton Oakmont by creating CDA 230.

Changing CDA 230?

CDA 230 was a shield intended to encourage site moderation and voluntary processes for removal of offensive material. Ironically, it is presently also the greatest stumbling block for many of the anti-harassment proposals floating around today. CDA 230 can seemingly provide a shield for revenge porn sites—sites that post purportedly user-submitted nude pictures of women without their consent. Danielle Citron in Hate Crimes in Cyberspace proposes creating a new exception to CDA 230 that would allow for liability for sites dedicated to revenge porn, a smaller subset of a category of sites for which Citron adopts Brian Leiter’s label: “cyber-cesspool.”

CDA 230 has no doubt been essential in creating the modern internet. Any changes to the status quo must be carefully considered—how much of the internet would a new exception sweep in, and which parts? What kind of exception would there be for news sites and newsworthy material? Crafting the perfect exception to CDA 230 is not theoretically impossible, but an additional practical aspect muddies the waters.

Any legislation laying out a new exception, no matter how carefully crafted from the start, will likely suffer from mission creep, making the exception bigger and bigger. Anti-harassment initiatives can become Trojan horses for unrelated regulation. It is rhetorically difficult to oppose those who claim to represent exploited women and children, so various interest groups will tack on their agendas in hopes of flying under the cover of a good cause.

There are further considerations in play when it comes to altering CDA 230. Many of the major revenge porn sites have already been successfully targeted either by state attorneys general or by agencies like the Federal Trade Commission. One operator, at least, was not blindly receiving submissions as a CDA 230–protected intermediary, but was actually hacking into women’s email accounts to procure the photos. Other operators were engaging in extortion, charging victims a fee to “take down” the photos. Revenge porn websites have demonstrated a long and consistent pattern of unlawful conduct adjacent to hosting the revenge porn itself. These sites, which Danielle Citron calls the “worst actors,” never quite evade the law even with CDA 230 standing as-is. It turns out that these worst actors are, well, the worst.

A new exception to CDA 230 aimed at protecting the targets of harassing behavior occupies an uncanny middle ground. A narrow exception does not so much create new criminals as target people who have consistently engaged in a host of other crimes that are already prosecutable. But a broad exception, targeted just a step above the “worst actors,” could be disastrous for the internet.

Turning hate crimes into copyright crimes

When her book Hate Crimes in Cyberspace went to print, Citron outlined a proposal for a limited and narrow exception to CDA 230, meant to target these “worst actors.” But she also took great pains to explain how it was not targeted at other, more mainstream sites, citing Reddit as an example of a site that would not be affected.

Shortly after Hate Crimes in Cyberspace was published in September 2014, Reddit became ground zero for the distribution of nude photos of celebrities that had been hacked from their Apple iCloud accounts. “Leaked” nudes or sex tapes are nothing new in Hollywood, but in an era of increasing awareness of misogyny on the web, this mass nonconsensual distribution of photos struck a new chord. Jennifer Lawrence called what happened to her a “sex crime,” and many pundits agreed.

Reddit was slow to remove the subreddit that was the gathering place for the photos. But eventually it did, reasoning that the images being shared there were copyrighted. A tone-deaf blog post by then-CEO Yishan Wong announced that Reddit was “unlikely to make changes to our existing site content policies in response to this specific event,” explaining:

The reason is because we consider ourselves not just a company running a website where one can post links and discuss them, but the government of a new type of community. The role and responsibility of a government differs from that of a private corporation, in that it exercises restraint in the usage of its powers.

The title of the post was, incredibly, “Every Man is Responsible for His Own Soul.” Yishan Wong resigned in November 2014 (supposedly over an unrelated conflict). In February 2015, under then-new CEO Ellen Pao, Reddit implemented new policies on nonconsensually distributed nude photos. By May 2015, Reddit implemented site-wide anti-harassment policies.

Reddit is now in a very different place than it was in 2014—but its actions in September of that year are a fascinating case study in the worst way for a platform to handle harassment. Reddit is not a “worst actor” in the hierarchy of platforms, and its relative prominence on the internet likely did end up influencing its eventual policy changes, despite initial resistance.

What’s striking about the September 2014 incident is that in removing the offending subreddit, Reddit did not appeal to morals, the invasion of privacy, Reddit’s pre-existing rule against doxing, or the likely crime that had occurred in acquiring the photos in the first place. Instead, Reddit cited DMCA notices, effectively placing copyright as a priority over any of those other rationales.

The affair doesn’t cast Reddit in a particularly good light, but the bizarre entanglement between the DMCA and gendered harassment on the internet isn’t new. Regardless of their motivations, both Reddit and Cindy Lee Garcia fell into the same trap: They turned a hate crime into a copyright crime.

When people are harassed on the internet, the instinctive feeling of those targeted is that the internet is out of control and must be reined in. The most prominent and broad regulation of the internet is through copyright, as publicized in the thousands of lawsuits that the Recording Industry Association of America launched against individual downloaders, the subpoenas the RIAA issued to the ISPs to unmask downloaders, and the RIAA and MPAA’s massive lawsuits against the Napsters, Groksters, and even YouTubes of the world.

In our mass cultural consciousness, we have absorbed the overall success of the RIAA and the MPAA in these suits, and have come to believe that copyright law is how one successfully manages to reach through a computer screen and punch someone else in the face.

Online harassment, amplified on axes of gender identity, race, and sexual orientation, is an issue of social oppression that is being sucked into a policy arena that was prepped and primed by the RIAA in the early 2000s. The censorship of the early internet has revolved around copyright enforcement, rather than the safety of vulnerable internet users. And so we now tackle the issue of gendered harassment in a time where people understand policing the internet chiefly as a matter of content identification and removal—and most dramatically, by unmasking users and hounding them through the courts.

Yet an anti-harassment strategy that models itself after internet copyright enforcement is bound to fail. Although the penalties for copyright infringement are massive (for example, statutory damages for downloading a single song can be up to $150,000), and although the music and movie industries are well-moneyed and well-lawyered, downloading and file-sharing continue.

Content removal is a game of whack-a-mole, as Cindy Lee Garcia learned. Shortly after the first Ninth Circuit decision in her favor, she filed an emergency contempt motion claiming that copies of The Innocence of Muslims were still available on the platform, demanding that Google/YouTube not only take down specific URLs but also take proactive steps to block anything that came up in a search for “innocence of Muslims.”

From Garcia’s point of view, if her safety was at stake, then only a total blackout could protect her. But copyright law was not created to protect people from fatwas. Her case, already a strange contortion of copyright law, became even messier at this moment, as her lawyer asked for $127.8 million in contempt penalties—the copyright statutory damages maximum of $150,000 multiplied by the 852 channels that were allegedly “still up.”

At that moment, Cindy Garcia, who had so far been a sympathetic plaintiff laboring under extraordinarily difficult circumstances, suddenly became indistinguishable from a copyright troll.

Google’s reply brief clapped back: “Garcia’s fundamental complaint appears to be that Innocence of Muslims is still on the internet. But Google and YouTube do not operate the internet.”

The elusive goal of total control

Garcia may have been right that removing or disabling most or even some instances of the video could have mitigated her circumstances. But it’s hard to say, especially once the cat was out of the bag. Indeed, during the December 2014 oral arguments, Judge Richard Clifton chimed in with, “Is there anyone in the world who doesn’t know your client is associated with this video?” Garcia’s attorney stumbled for a bit, and Judge Clifton interrupted again, musing, “Maybe in a cave someplace, and those are the people we worry about, but. . . ”

In many circumstances, when online content continues to draw attention to a target of harassment, the harassment is amplified, and once the content falls out of sight, the interest disappears as well. But at the same time, Garcia wasn’t seeking merely to mitigate the harassment; she wanted to wipe the film off the internet simply because she had appeared in it.

Garcia was chasing a dream of being able to completely control her image on the internet. It’s an echo of the same dream that the record industry has been chasing since the 1990s. It’s not that you can’t impact or influence or dampen content in the digital realm. But there’s no way to control every single instance, forever.

Any anti-harassment strategy that focuses on deletion and removal is doomed to spin in circles, damned to the Sisyphean task of stamping out infinitely replicable information. And here, of course, is the crux of the issue: harassing content overlaps with harassing behavior, but the content itself is only bits and bytes.

It’s the consequences that echo around the content that are truly damaging—threats, stalking, assault, impact on someone’s employment, and the unasked-for emotional cost of using the internet. The bits and bytes can be rearranged to minimize these consequences. And that’s a matter of architectural reconfiguration, filtering, community management, norm-enforcement, and yes, some deletion. But deletion should be thought of as one tool in the toolbox, not the end goal.

Because deletion isn’t victory, liberation, or freedom from fear. It’s just deletion.