
Singapore’s fake news law should be a warning to American lawmakers


How misinformation laws get used against the people who advocate for them


The national flag flying on the roof of the Parliament House in Singapore. Photo: ROSLAN RAHMAN/AFP via Getty Images

On Sunday evening, 60 Minutes correspondent Lesley Stahl sat down with YouTube CEO Susan Wojcicki to voice a now-familiar critique: YouTube allows too many dangerous and disturbing videos to remain on the site. She brings up a distorted video of Rep. Nancy Pelosi that falsely depicts her as drunk; altered copies of the Christchurch shooting video; quack science; and misleading political ads, among other questionable videos found on the site. It leads to the following exchange:

Lesley Stahl: The struggle for Wojcicki is policing the site, while keeping YouTube an open platform. 

Susan Wojcicki: You can go too far and that can become censorship. And so we have been working really hard to figure out what’s the right way to balance responsibility with freedom of speech. 

Stahl: But the private sector is not legally beholden to the First Amendment. 

As it so happens, some countries are trying to make tech platforms legally beholden to police speech according to national laws. One of them is Singapore, where in October a new law went into effect with the stated purpose of fighting “fake news.” James Griffiths wrote about the law for CNN:

Under the Protection from Online Falsehoods and Manipulation Bill, it is now illegal to spread “false statements of fact” under circumstances in which that information is deemed “prejudicial” to Singapore’s security, public safety, “public tranquility,” or to the “friendly relations of Singapore with other countries,” among numerous other topics.

Government ministers can decide whether to order something deemed fake news to be taken down, or for a correction to be put up alongside it. They can also order technology companies such as Facebook and Google — both of which opposed the bill during its fast-tracked process through parliament — to block accounts or sites spreading false information.

Those government ministers wasted little time in enforcing that law, taking action twice in the past week. And if you had to guess, what type of social media post would spur them into action the fastest? Would it be a post that spread hate speech or promoted violence? Would it be a post that spread harmful misinformation, such as a false election date intended to mislead voters? Or would it be a post that criticized the government?

If you guessed No. 3, then you’ve been paying attention to the arguments that every single critic of this law has made since it was first proposed. Here’s Griffiths again, from Saturday:

One offending item was a Facebook post by an opposition politician that questioned the governance of the city-state’s sovereign wealth funds and some of their investment decisions. The other post was published by an Australia-based blog that claimed police had arrested a “whistleblower” who “exposed” a political candidate’s religious affiliations.

In both cases, Singapore officials ordered the accused to include the government’s rebuttal at the top of their posts. The government announcements were accompanied by screenshots of the original posts with the word “FALSE” stamped in giant letters across them.

Now, credit where it’s due: Facebook’s response to this can only be described as hilariously bitchy. Here’s Fathin Ungku and John Geddie in Reuters (emphasis mine):

Facebook said on Saturday it had issued a correction notice on a user’s post at the request of the Singapore government, but called for a measured approach to the implementation of a new “fake news” law in the city-state.

“Facebook is legally required to tell you that the Singapore government says this post has false information,” said the notice, which is visible only to Singapore users.

It’s hard to think of a more dismissive way of phrasing that, short of maybe describing the Singapore government as a sniveling mosh pit of baby clowns. But that description would also presumably be in violation of the Protection from Online Falsehoods and Manipulation Act.

Last week, Sacha Baron Cohen made the case — although not in so many words — that the United States needs its own version of Singapore’s law. Like Stahl, he questioned the value of Section 230 of the Communications Decency Act. And he suggested that tech platforms should be held liable for what their users post. He did so out of legitimate concern over the dangerous misinformation and hate speech that really does spread on these platforms — and out of frustration that they are currently not held accountable for any of it.

But the lesson of Singapore is that the fake-news law you want probably won’t be used in the way that you want. In fact, it may be used in ways that you don’t want at all!

Granted, just because one country implemented a law this way doesn’t mean that Western democracies will. But if you think that they won’t ... why, exactly? In the United States, the First Amendment may offer some protections to average citizens who want to criticize their government online. Others won’t be as lucky. And as the FOSTA-SESTA debacle showed, even the United States is not immune to terrible consequences from noble-sounding speech regulation.

As the debate over Section 230 rages on, that’s something we ought to keep in mind.

Pushback

In our last edition, I wrote somewhat flippantly that “YouTube has had such a rough year that I struggled to come up with a major product or policy win.” YouTube wrote in to say, not unfairly, that it has indeed had some wins this year. Among them:

Just a few examples: our updated hate speech policy, which resulted in not just thousands of accounts coming down at launch, but 5x spikes in video removals and channel terminations; a reduction in our violative view rate by 80% over the past 18 months; changes to the way recommendations work resulting in a 50% drop in watchtime on borderline content in the US (and that # is about to go up); a suite of tools that is helping creators successfully diversify their revenue streams; and improvements to the way copyright claims work, solving a top pain point for creators. 

One reason I think that some of these moves haven’t resonated is that they feel so abstract. If YouTube has taken down five times as many videos this year, how much of the problem is solved? How much is left to go? It all still feels quite mysterious. Still, incremental progress is the actual way that most big tech problems get solved. So: point taken.

The Ratio

Today in news that could affect public perception of the big tech platforms.

🔼 Trending up: Facebook released a new tool that will allow users to transfer photos and videos from Facebook to other services, starting with Google Photos. It’s the sort of charming, pro-competition move that seems to happen more frequently amid the looming threat of antitrust regulation!

🔽 Trending down: Google’s limits on political ads have a loophole Trump could exploit. Although the company is pulling powerful targeting tools from political advertisers, they can still target display ads using tools from other companies.

🔽 Trending down: Leaked documents show TikTok may have hidden videos of people with disabilities. Queer and fat users were also pushed out of view. The news is the latest content moderation debacle the video-sharing platform is facing as it tries to expand around the world.

Governing

TikTok said it had made a mistake in suspending the account of 17-year-old Feroza Aziz, who had posted witty but incisive videos about politics. Aziz had her account suspended after posting a video accusing China of putting Muslims in concentration camps. Tony Romm and Drew Harwell from The Washington Post lay out the timeline. (Here’s TikTok’s apology.)

TikTok, however, said it had penalized her not for her comments about China, but rather for a video she had shared earlier — a short clip, posted to a different account, that included a photo of Osama bin Laden. Aziz’s video violated the company’s policies against terrorist content, TikTok said, so the company took action against her device, making any of her other accounts unavailable on that device. TikTok said her videos about China did not violate its rules, had not been removed and had been viewed more than a million times.

But the video in question — a copy of which she shared with The Post — actually was a comedic video about dating that the company had misinterpreted as terrorism, Aziz said.

By Wednesday evening, TikTok had reversed course: The company said it restored her ability to access her account on her personal device. TikTok also acknowledged that her video about China had been removed for 50 minutes on Wednesday morning, which it attributed to a “human moderation error.”

A federal judge ruled Facebook doesn’t have to pay damages to 29 million users whose personal information was stolen in a September 2018 data breach. Users can still seek to require the company to employ automated security monitoring and to better educate people about hacking threats. It’s not the big payday the plaintiffs were hoping for. (Jonathan Stempel / Reuters)

EU antitrust regulators are investigating Google’s data collection practices. The company is now being investigated on both sides of the Atlantic for how it monetizes people’s information. (Foo Yun Chee / Reuters)

Voting machines in Northampton County, Pennsylvania, glitched on election night, forcing officials to count paper ballots instead. The issue exposed flaws in both the testing and the procurement of election machines, and offers plenty of reason to worry about 2020. (Nick Corasaniti / The New York Times)

A new rule requiring people in China to scan their faces when signing up for new mobile plans went into effect yesterday. The rule has sparked widespread privacy concerns, as well as increased scrutiny of China’s sophisticated surveillance tactics. (Annabelle Timsit / Quartz)

Chinese regulators also announced new rules governing video and audio content online, including a ban on fake news and deepfakes. Any use of AI or VR also needs to be clearly marked or it could be considered a criminal offense. (Reuters)

Chinese data mining firm MiningLamp is helping police solve crimes, track drug dealers, and prevent human trafficking. The company has been compared to Palantir, which helps law enforcement agencies in the US. (Sarah Dai and Li Tao / South China Morning Post)

Shopkeepers in India are protesting Amazon and Walmart. They say the companies engage in predatory pricing in violation of new rules meant to protect local businesses. (Ari Altstedter / Bloomberg)

Industry

Match Group, the company that owns most major online dating services, screens for sexual predators on Match — but not on Tinder, OkCupid or PlentyofFish. A spokesperson said there are “definitely” registered sex offenders using Match services, according to Hillary Flynn, Elizabeth Naismith Picciani and Keith Cousins from Columbia Journalism Investigations:

For nearly a decade, its flagship website, Match, has issued statements and signed agreements promising to protect users from sexual predators. The site has a policy of screening customers against government sex offender registries. But over this same period, as Match evolved into the publicly traded Match Group and bought its competitors, the company hasn’t extended this practice across its platforms — including Plenty of Fish, its second most popular dating app. The lack of a uniform policy allows convicted and accused perpetrators to access Match Group apps and leaves users vulnerable to sexual assault, a 16-month investigation by Columbia Journalism Investigations found.

Match first agreed to screen for registered sex offenders in 2011 after Carole Markin made it her mission to improve its safety practices. The site had connected her with a six-time convicted rapist who, she told police, had raped her on their second date. Markin sued the company to push for regular registry checks. The Harvard-educated entertainment executive held a high-profile press conference to unveil her lawsuit. Within months, Match’s lawyers told the judge that “a screening process has been initiated,” records show. After the settlement, the company’s attorneys declared the site was “checking subscribers against state and national sex offender registries.”

Mark Zuckerberg went on CBS This Morning, where he did not want to talk about what he talked about with President Trump the other day.

New on Instagram: messy bedrooms. “People aren’t interested in seeing this perfectly curated grid. It’s about giving yourself permission to be a little bit more human,” said one writer quoted in this article. (Julie Vadnal / Elle)

Rowan Winch is the 15-year-old behind the popular Instagram meme account @Zuccccccccccc. This charming profile shows how he, like many American teenagers, uses the internet for entrepreneurial gains. It also shows what happened when Facebook decided to crack down on meme accounts, depriving him of his biggest platform. (Taylor Lorenz / The New York Times)

Not everyone is excited about YouTube’s new homepage layout, which the company rolled out a few weeks ago. The homepage used to be broken up into a number of different, easily digestible sections. Now, those sections are gone, replaced with an endless feed of recommended videos. (Julia Alexander / The Verge)

This reporter is on a mission to find out what a better social media world would look like, from the perspective of media historians, tech designers, science fiction writers and activists. (Annalee Newitz / The New York Times)

And finally...

Twitter CEO Jack Dorsey announced that he’s going to move to Africa for three to six months in 2020.

Whatever!

Talk to us

Send us tips, comments, questions, and things that the government of Singapore would find offensive: casey@theverge.com and zoe@theverge.com.