On June 22nd, 2016, Democrats in the House of Representatives staged a sit-in on the floor of Congress. Days before, a man had murdered 49 people and injured 53 others in a shooting at the Pulse nightclub in Orlando, and Democrats organized the protest in an effort to pass gun control legislation. Republicans, hoping to blunt the impact of the publicity stunt, had the chamber’s cameras turned off, leaving C-SPAN with no feed to broadcast. But Democrats didn’t need a broadcasting partner: they had smartphones. Periscope and Facebook Live, both introduced within the previous year, allowed members of Congress to broadcast the protest themselves.
It was a thrilling moment. C-SPAN cleverly re-broadcast the streams, and the incident garnered outsized national attention — if, sadly, no action — for the Democrats’ cause. In the aftermath of the stunt, Republicans passed new rules banning the taking of photographs and videos on the House floor. Anyone who violated the rules would now be subject to a fine. A tool meant to democratize broadcasting had been blunted by the democracy.
I thought of the sit-in this week when reading about Australia’s proposal to make Facebook and other tech platforms criminally liable for all the live video they publish. Cat Zakrzewski reports in the Washington Post:
Australia is considering hefty fines and even jail time for executives at social media companies who fail to remove violent content quickly. The proposal is one of the most sweeping crackdowns on tech companies’ content moderation efforts that policymakers in a democratic government have ever considered.
The new legislation, to be introduced this week, would fine companies up to 10 percent of their annual revenue and calls for up to three years in jail — and comes as Australian officials slammed social media companies such as Facebook for failing to offer immediate solutions after violent videos of the New Zealand shooting proliferated online.
Tools that democratize the sharing of information have followed a familiar pattern. First they are discovered by the early adopters, who use them generally for good; then they are discovered by criminals, who exploit them relentlessly. The Arab Spring, which was organized and promoted on social platforms, led directly to the events of 2016, as foreign states learned the platforms could be used to distort public discussions and interfere with elections.
From the start, live-streaming tools raised concerns about the frequency with which they were used to broadcast self-harm and other violent episodes. With the Christchurch shooting, we seem to have reached the end point of every newly democratized communications tool: its use for unabashed terrorism.
In response, lawmakers around the world are now pressuring Facebook to err on the side of removing live streams, under threat of stiff penalties if it doesn’t. And while I don’t expect the United States to consider any such legislation soon, the movement is picking up steam in Europe as well as in Oceania, and could lead to a further splintering of the internet.
Perhaps Facebook will hire the moderators and build the artificial intelligence necessary to adapt to these regulations — or perhaps it will simply make Facebook Live unavailable in those countries. If it does, it’s worth remembering that live broadcasts have been a powerful, pro-democracy tool in this country and others — and that regulations drafted in haste and passed in anger might cost us the kind of freedom that those House Democrats found so useful, and not that long ago.
Almost two years after the worst of the genocide in Myanmar, Patrick Howell O’Neill talks to UN investigator Christopher Sidoti. Sidoti was one of the investigators who held Facebook responsible for spreading hate speech in a subsequent report:
“Even the report commissioned by Facebook itself indicated that only around half of the posts removed by Facebook were identified by Facebook,” Sidoti said. “They’re still reliant on being informed by outsiders, and they’re not yet anywhere near satisfactory in their performance in removing material—and certainly nowhere near satisfactory in preventing posting of this material in the first place.”
Whoops! From Ryan Mac and Pranav Dixit:
One day after WhatsApp launched a tip line to combat misinformation ahead of a general election in India, the company running the project in partnership with the Facebook-owned messaging service revealed that its primary goal is to collect research, rather than immediately crack down on fake news in the world’s biggest democracy.
On Tuesday, WhatsApp announced Checkpoint, a “tip line launched to understand and respond to misinformation during elections in India” with information about submitting suspicious messages. But when BuzzFeed News inquired about the tip line’s effectiveness after submitting several tips and receiving no responses, Proto, the Indian-based company that partnered with WhatsApp, posted an FAQ website that notes the project is “not a helpline” and isn’t primarily designed to provide feedback.
It’s one thing to say you’ve banned white nationalism and another thing to do so effectively. From Andy Campbell:
Goldy’s racist propaganda would seem to represent the exact kind of content that should get someone banned under the new rules. But shown the video above, the Facebook spokesperson argued that it doesn’t promote or praise white nationalism. Instead, the spokesperson claimed, it offers a discussion about immigration and ethnicity statistics. […]
The thing is, Goldy’s content does promote white nationalism, almost exclusively. The concept of white “replacement” described in the video is a white nationalist talking point and conspiracy theory shared by prominent white supremacists all over the world.
Joseph Cox reports that unlike Facebook, Twitter and YouTube have no plans to ban white nationalism outright.
YouTube pointed Motherboard to its existing policies around calls for violence, hate speech, threats, and harassment. None of those appear to deal with content that more simply but explicitly says “I am a proud white nationalist.” On the side of enforcement, this week Motherboard found YouTube is still hosting a slew of neo-Nazi content.
Dorsey stopped podcasting long enough to tell Selina Wang that he thinks the General Data Protection Regulation was good for the industry.
Under the guise of preventing the spread of fake news, Singapore is seeking to heavily penalize publishers whose work threatens to cause “a diminution of public confidence” in the government, Mike Ives and Raymond Zhong report. Violators face up to six years in prison:
Kirsten Han, a Singaporean journalist and activist, described the legislation as “worryingly broad.”
“The bill gives ministers so much power and discretion — any minister can direct individuals or websites to post corrections or take down content, or order access to content to be blocked, and these orders have to be complied with first, even if one is going to appeal the direction in the courts,” Ms. Han said in an email.
Jim Waterson used Facebook’s ad transparency tools to uncover a pro-Brexit astroturfing campaign:
A series of hugely influential Facebook advertising campaigns that appear to be separate grassroots movements for a no-deal Brexit are secretly overseen by employees of Sir Lynton Crosby’s lobbying company and a former adviser to Boris Johnson, documents seen by the Guardian reveal.
The mysterious groups, which have names such as Mainstream Network and Britain’s Future, appear to be run independently by members of the public and give no hint that they are connected. But in reality they share an administrator who works for Crosby’s CTF Partners and have spent as much as £1m promoting sophisticated targeted adverts aimed at heaping pressure on individual MPs to vote for a hard Brexit.
The answer to the question in the headline seems to be “it was a mistake,” as Google has canceled the grant.
My favorite new propaganda vector is mistranslated Fox sitcoms. Jennifer Maas:
An episode of “Brooklyn Nine-Nine” that aired in Brazil included “mistranslated” dialogue in which a character implies support for President of Brazil Jair Bolsonaro, TheWrap has learned exclusively.
A source close to the situation tells TheWrap that TNT Brasil — which airs the Universal Television-produced NBC sitcom in the South American country — broadcast a version of Episode 504 featuring Charles Boyle (played by Joe Lo Truglio) making positive references toward Bolsonaro in the Portuguese voiceover.
A cybersecurity firm found that a third party was storing 146 gigabytes of Facebook user data — 540 million records — on a public Amazon Web Services bucket. Last year, when Facebook said that Cambridge Analytica-style data leakage had affected most of its user base, this is the kind of thing the company was talking about.
Facebook was letting some people verify new accounts by asking them to type in their email password, which seems bad. The company agreed to stop after The Daily Beast wrote this piece.
I’ll be marking my calendar for this one: Acton, who was most recently seen on Twitter telling people to delete their Facebook accounts, is speaking at Disrupt in October.
Facebook is doing a series of blog posts in which it profiles recent examples of viral misinformation it has found. I think the intention is to showcase the variety of ways the company identifies bad content, but it also leads to strange graphics asking: “Is the UN seeking to legalize pedophilia?” (It is not.)
“Older people play an outsized role in civic life,” Craig Silverman says in a fascinating report. “They also are more likely to be online targets for misinformation and hyperpartisan rhetoric.” Social platforms need a new strategy for helping their older customers.
Although many older Americans have, like the rest of us, embraced the tools and playthings of the technology industry, a growing body of research shows they have disproportionately fallen prey to the dangers of internet misinformation and risk being further polarized by their online habits. While that matters much to them, it’s also a massive challenge for society given the outsize role older generations play in civic life, and demographic changes that are increasing their power and influence.
People 65 and older will soon make up the largest single age group in the United States, and will remain that way for decades to come, according to the US Census. This massive demographic shift is occurring when this age group is moving online and onto Facebook in droves, deeply struggling with digital literacy, and being targeted by a wide range of online bad actors who try to feed them fake news, infect their devices with malware, and steal their money in scams. Yet older people are largely being left out of what has become something of a golden age for digital literacy efforts.
Snapchat may soon let you add a status to your Bitmoji avatar on the Snap Map, according to a new find from Jane Manchun Wong.
As a longtime listener to the Longform podcast, I was thrilled to talk with Aaron Lammer. We discussed what led me to start this newsletter, my relationship with the companies I cover, and much more. Check it out!
Charlie Warzel says Silicon Valley’s grow-at-all-costs mentality is the root cause of many of its troubles:
It’s true that the tech companies are dealing with thorny problems that most likely have no universally satisfying outcome. Big Tech’s problems are indeed dizzying and manifold, but the last few years have taught us that there’s an Occam’s razor quality to any explanation of the toxicity of our online platforms. The original sin, it seems, isn’t all that complicated; it’s the prioritization of growth — above all else and at the expense of those of us who use the services.
And finally ...
One of the recurring themes of this newsletter is that we have no meaningful antitrust enforcement in this country, which was why I was delighted to see our Justice Department finally rise to the challenge of regulating … um, big tech’s Oscar eligibility:
“In the event that the Academy — an association that includes multiple competitors in its membership — establishes certain eligibility requirements for the Oscars that eliminate competition without pro-competitive justification, such conduct may raise antitrust concerns,” wrote Makan Delrahim, the DOJ’s antitrust division’s assistant attorney general, in the letter addressed to AMPAS CEO Dawn Hudson.
Here’s hoping that Delrahim decides to throw the book at Hollywood over this one — and then starts working on a sequel.
Talk to me
Send me tips, comments, questions, and pro-democracy Periscopes: email@example.com.