Good evening and happy New Year! To the hundreds of you who signed up over the break, thank you and welcome to The Interface. My goal is to offer you the best daily liveblog of a tumultuous era in technology and government, in convenient newsletter format. If the past few weeks were any indication, we’ll have plenty to talk about in 2019.
What did you miss over the break? Lots more scrutiny of Facebook, for starters. The New York Times published a fusillade of stories examining the company’s historically lax oversight of its data-sharing agreements with third parties; its problematic efforts to aid suicidal users; the political ramifications of its content-moderation regime; and the reluctance of the Federal Trade Commission to weigh in on any of it.
To some journalists and critics, this coverage offers a necessary corrective to years of blithe utopianism from the tech press. To others, both inside and outside Facebook (reportedly including Mark Zuckerberg), it feels like overkill. To me, the Times story that resonated the most over the break was the investigation into data-sharing practices — I did a bonus newsletter about it, which you can find here.
Facebook’s stock price declined so sharply over the past quarter that Zuckerberg stopped selling his shares. Sheryl Sandberg called Salesforce CEO Marc Benioff and told him that if he only read some materials that her office would send him, he would change his mind about Facebook being bad for the world. Benioff claimed that the materials never arrived, which seems preposterous.
A report commissioned by the Senate Intelligence Committee found that Instagram played a much bigger role in Russia’s 2016 information operation than was previously known, and will likely play an even bigger one in 2020. BuzzFeed’s analysis of misinformation on Facebook in 2018 found that the top 50 fake posts got only a little less engagement than the top 50 did in 2017, and more than the top 50 got in 2016.
Predictions for social networks this year generally leaned negative. The Guardian asked a range of people about what Zuckerberg’s big project for the year should be, and their opinions ranged from “he should probably resign” to “he should definitely resign.” Behold the magnificent self-regard of Anti-Social Media author Siva Vaidhyanathan, who wrote:
Zuckerberg could take a two-year sabbatical from Facebook, enroll at the University of Virginia, and finish his bachelor’s degree under my direction. That would serve him — and his company and all its users — better than just about anything else he could do.
As a serious proposal I find Vaidhyanathan’s idea laughable — but I do think it has legs as a sitcom premise.
Instagram tested a horizontal feed, causing momentary panic. Amnesty International — or was it Supremely Obvious Magazine — announced that women face serious abuse on Twitter. On YouTube, stars promoted a dubious form of gambling, which was the worst thing they did until a few days later, when they put on blindfolds for the spectacularly ill-conceived Bird Box Challenge.
For more links that caught my eye over the break, keep reading. First, though, I want to talk about Reid Hoffman.
There are two basic ways of thinking about Russian interference in the 2016 US presidential election. One is that the misinformation campaign was a historical anomaly enabled by social networks’ naivete, which will be prevented from recurring through a combination of additional tech security personnel, artificial intelligence, and government intervention. The other is that the 2016 election revealed vulnerabilities in social platforms so fundamental that they cannot effectively be contained — leaving an opening for any interested parties who wish to sway politics.
In Project Birmingham, we have compelling evidence for the latter view. A coalition of groups working to support Democrats and progressive causes has been caught employing Russian-style information operations on Facebook. Funded in part by LinkedIn cofounder Reid Hoffman, the operations have now been found to have taken place in Alabama, Texas, and Tennessee. And while no one involved seems to think the operations proved decisive in the 2018 midterm elections, they contribute to a general diminishment of trust in social networks as a platform for honest political discussions.
Scott Shane and Alan Blinder described the first set of operations, which were discovered in Alabama, on December 19th:
The project’s operators created a Facebook page on which they posed as conservative Alabamians, using it to try to divide Republicans and even to endorse a write-in candidate to draw votes from Mr. Moore. It involved a scheme to link the Moore campaign to thousands of Russian accounts that suddenly began following the Republican candidate on Twitter, a development that drew national media attention.
“We orchestrated an elaborate ‘false flag’ operation that planted the idea that the Moore campaign was amplified on social media by a Russian botnet,” the report says.
Who’s behind all this? Everyone involved has attempted to distance themselves from the work. The Washington Post has done a heroic job attempting to sort through the various donors, nonprofits, researchers, and cybersecurity firms that are involved. Hoffman said he had no knowledge that the groups he funded were going to use Russian-style tactics, and apologized for his negligence. The company that most people involved say was responsible for the campaign — cybersecurity firm New Knowledge — swears it was not.
In any case, somebody’s lying. (Facebook has suspended five user accounts so far, including that of New Knowledge CEO Jonathon Morgan.)
But to underline just one of the points from that Times piece: one strategy here was to have thousands of apparent Russian bots follow Republican candidate Roy Moore, generating media coverage suggesting that Moore was the favored candidate of the Russians — and that Russia might be working to promote his candidacy.
It’s a head-spinning bit of information warfare — a legitimate false-flag operation. And it was only the start. Organizers also worked to promote a write-in candidacy for another Republican candidate in hopes it would siphon votes away from Moore.
And in a separate effort, they created a fake “Dry Alabama” campaign attempting to link Moore to the return of Prohibition. In a follow-up piece today, Shane and Blinder talked to one of the latter campaign’s organizers:
Matt Osborne, a veteran progressive activist who worked on the project, said he hoped that such deceptive tactics would someday be banned from American politics. But in the meantime, he said, he believes that Republicans are using such trickery and that Democrats cannot unilaterally give it up.
“If you don’t do it, you’re fighting with one hand tied behind your back,” said Mr. Osborne, a writer and consultant who lives outside Florence, Ala. “You have a moral imperative to do this — to do whatever it takes.”
Elsewhere, campaigns in Texas and Tennessee worked to undermine Republicans and promote Democrats using misleading pages that attempted to grow audiences around nonpartisan subjects and then flooded them with political content. Here’s Tony Romm, Elizabeth Dwoskin, and Craig Timberg:
Some of News for Democracy’s pages inserted Democratic messages into the feeds of right-leaning voters, according to a review of Facebook’s ad archive. News for Democracy ran ads touting Texas Democrat Beto O’Rourke on “The Holy Tribune,” a Facebook page targeted to evangelicals, the archive shows. Another page called “Sounds like Tennessee” focused on local sports and news, but also ran at least one ad attacking since-elected GOP Sen. Marsha Blackburn.
“People start to trust the content emanating from the page, because it appeals to their interests, and once there is a certain degree of trust, you can start to pivot by slowly adding in kernels of disinformation or overly-politicized information that lacks context,” said Benjamin T. Decker, research fellow at the Shorenstein Center on Media, Politics and Public Policy, who called such tactics an “intentional act of deception” that mimicked strategies of Russian operatives around the 2016 presidential election.
To the extent that any of these efforts had an impact on the 2018 elections, it appears to be small. But it’s clear that groups across the political spectrum now believe that these tactics are effective enough to warrant spending significant resources. The tactics are certain to evolve — and the cynicism embodied in Osborne’s ends-justify-the-means language suggests this battle will head to some dark new places in 2020.
Facebook will no longer accept political ads in Washington State due to disclosure requirements that go beyond what the company currently offers.
Naomi Nix reports on shadowy mudslinging involved in winning a large Pentagon contract these days:
Allegations of a corrupt procurement process have been directed at Pentagon officials and company managers, primarily at Amazon.com Inc., the front-runner for the contract, which involves transitioning massive amounts of Defense Department data to a commercially operated cloud system. Microsoft Corp., International Business Machines Corp. and Oracle Corp. are the biggest names jockeying against Amazon, though there’s no evidence they are behind the mudslinging.
Zachary Fryer-Biggs reports on soul-searching at the Pentagon in the wake of Google abandoning Project Maven:
Inside the Pentagon, Google’s withdrawal brought a combination of frustration and distress — even anger — that has percolated ever since, according to five sources familiar with internal discussions on Maven, the military’s first big effort to utilize AI in warfare.
“We have stumbled unprepared into a contest over the strategic narrative,” said an internal Pentagon memo circulated to roughly 50 defense officials on June 28. The memo depicted a department caught flat-footed and newly at risk of alienating experts critical to the military’s artificial intelligence development plans.
Megha Rajagopalan writes about the tension between taking down graphically violent content and preserving evidence of war crimes:
The way investigators document human rights abuses is undergoing a fundamental shift. Once researchers depended heavily on diaries, physical records, and interviews with witnesses to atrocities that sometimes took place years after the fact. Now, investigators at international bodies like the United Nations and the ICC are also cataloguing and analyzing millions of photos, posts, and videos from social media in an effort to hold human rights abusers accountable in court, working alongside nongovernmental organizations, researchers, and digital detectives. Holding perpetrators of human rights abuses accountable, researchers say, increasingly depends on access to content posted on social media platforms.
But this shift in how war crimes are being investigated comes at the same time that social media companies are facing unprecedented criticism for failing to police their platforms, allowing neo-Nazis and other extremist groups to spread their messages online.
In the wake of a similar law being passed in Australia, Pranav Dixit reports on a move to break encryption in India:
India’s government wants to make it mandatory for platforms like Facebook, WhatsApp, Twitter, and Google to remove content it deems “unlawful” within 24 hours of notice, and create “automated tools” to “proactively identify and remove” such material.
It also wants tech companies to build in a way to trace the source of the content, which would require platforms like WhatsApp to break end-to-end encryption.
Reporters Without Borders reports that the Vietnamese government is successfully getting posts from an activist removed even though he’s based in Germany.
“Our research shows that the Vietnamese government is apparently using digital space to suppress critical voices outside the country as well,” said RSF Germany’s Executive Director Christian Mihr. “Those responsible must end these attacks and respect press freedom.”
Gerry Shih reports on Chinese censors going door to door telling people to delete their tweets:
In Beijing and other cities across China, prominent Twitter users confirmed in interviews to The Washington Post that authorities are sharply escalating the Twitter crackdown. It suggests a wave of new and more aggressive tactics by state censors and cyber-watchers trying to control the Internet.
Twitter is banned in China — as are other non-Chinese sites such as Facebook, YouTube and Instagram. But they are accessed by workarounds such as a virtual private network, or VPN, which is software that bypasses state-imposed firewalls.
Speaking of Chinese censorship, Li Yuan went inside one of the country’s censorship factories:
Mr. Li works for Beyondsoft, a Beijing-based tech services company that, among other businesses, takes on the censorship burden for other companies. He works in its office in the city of Chengdu. In the heart of a high-tech industrial area, the space is bright and new enough that it resembles the offices of well-funded start-ups in tech centers like Beijing and Shenzhen. It recently moved to the space because customers complained that its previous office was too cramped to allow employees to do their best work.
“Missing one beat could cause a serious political mistake,” said Yang Xiao, head of Beyondsoft’s internet service business, including content reviewing.
Megha Rajagopalan reports on an unlikely case in which LinkedIn became a political battleground. Fun 2019 goal: get your LinkedIn page banned, but only in China.
LinkedIn censored, and then quickly restored, the profile of a New York–based Chinese human rights activist on its Chinese platform after a wave of negative publicity.
Zhou Fengsuo, one of the founders of a nonprofit organization that aids political prisoners and other vulnerable groups in China, is best known as one of the student leaders of the pro-democracy protests at Beijing’s Tiananmen Square in 1989, which ended in a bloody crackdown by the Chinese government. He was forced into exile in the United States over his role in the student movement, which landed him on a most-wanted list in China.
The debate over the effects of artificial intelligence has been dominated by two themes. One is the fear of a singularity, an event in which an AI exceeds human intelligence and escapes human control, with possibly disastrous consequences. The other is the worry that a new industrial revolution will allow machines to disrupt and replace humans in every — or almost every — area of …
Deepa Seetharaman profiled Joel Kaplan, perhaps the most influential conservative voice at Facebook:
This summer, Mr. Kaplan pushed to partner with right-wing news site The Daily Caller’s fact-checking division after conservatives accused Facebook of working only with mainstream publishers, people familiar with the discussions said. Conservative critics argued those publications had a built-in liberal bias.
Mr. Kaplan argued that The Daily Caller was accredited by the Poynter Institute, a St. Petersburg, Fla.-based journalism nonprofit that oversees a network of fact-checkers. Other executives, including some in the Washington, D.C. office, argued that the publication printed misinformation. The contentious discussion involved Mr. Zuckerberg, who appeared to side with Mr. Kaplan, and Chief Operating Officer Sheryl Sandberg. The debate ended in November when The Daily Caller’s fact-checking operation lost its accreditation.
Ina Fried made an effort to round up all the different kinds of data that Facebook collects about its users.
Here’s a newly released market-research survey from Creative Strategies from April 2018, in the midst of the Cambridge Analytica fallout. Among other things, the data suggest that nearly a third of respondents planned to use Facebook less in the future.
Taylor Lorenz reports that it’s going down in the Instagram comments section:
For years, comments on Instagram were secondary to the photo and video posts that make up the app’s main feed. But recently, Instagram comment sections have begun to eclipse the photos they sit below.
Over the past year and a half, the Instagram account @commentsbycelebs has ballooned to nearly 1 million followers by documenting celebrities’ most notable Instagram comments. It has spawned a network of copycat comment accounts, many of which have thousands of followers. Part of the rise in comment culture on Instagram is due to product changes made by the platform. In August 2017, Instagram added threaded comments, making it easier for people to have coherent conversations. And in the spring of 2018, the company instituted an algorithm that surfaced noteworthy comments from celebrities, athletes, influencers, and verified accounts.
Here’s a report that says 11 percent of all engagement on sponsored influencer posts in 2017 came from fake accounts.
Google paid “less than $60 million” for a Q&A app I had never heard of, if you wondered whether the going rates for engineers had cooled off at all.
Sarah Frier and Julie Verhage had some good new details on Facebook’s blockchain division:
Facebook Inc. is working on making a cryptocurrency that will let users transfer money on its WhatsApp messaging app, focusing first on the remittances market in India, according to people familiar with the matter.
The company is developing a stablecoin – a type of digital currency pegged to the U.S. dollar – to minimize volatility, said the people, who asked not to be identified discussing internal plans. Facebook is far from releasing the coin, because it’s still working on the strategy, including a plan for custody assets, or regular currencies that would be held to protect the value of the stablecoin, the people said.
How is TikTok already so bloated it needs a lite version of itself?
Daniel Funke tells writers like me to stop blaming tech for everything that goes wrong:
It’s nearly impossible to determine whether or not a misinformation campaign directly affected an event, so there’s the question of accuracy. Second, blaming misinformation for acts of violence takes the burden off other actors, such as government and law enforcement, who have a primary responsibility to protect citizens. Third, it confers more legitimacy upon misinformers whose goal is often to get mainstream news coverage.
In 2019, when tech platforms will continue to play an outsized role in the fight over misinformation, we would do well to be less technodeterminist in our reporting.
Sally Hubbard, a former assistant attorney general, says US antitrust law should more closely resemble its European counterpart — focused less on price than on competition:
The tech giants have “platform privilege” — the incentive and ability to prioritize their own goods and services over those of competitors that depend on their platforms. By doing so, they contend they are improving their products and benefiting customers. An entrepreneur can create a superior product or service and still get crushed because Big Tech is controlling the game and playing it, too.
This distorted playing field strikes at the heart of the American Dream. And it deprives consumers of the choice, innovation and quality that comes from competition on the merits.
Josh Rogin says the US government should regulate technologies that are used by authoritarians to surveil and control their citizens:
Israel-based NSO Group is only one in a growing group of companies that has put powerful spyware tools previously available only to a few governments out on the open market. Its Pegasus software, according to human rights groups and independent investigators, has been used in as many as 45 countries, often by authoritarian leaders to aid the persecution of dissidents, journalists and other innocent civilians.
What hasn’t been previously reported is that NSO is working with a group of Washington-based consultants and law firms to craft its export and ethics policies, including Beacon Global Strategies, a consulting firm run by former top U.S. intelligence and national security officials. But if recent reports of alleged continued abuse of the software are true, the system NSO and its consultants have devised for preventing abuse is clearly failing.
Tom Chivers says we should stop freaking out about kids and screen time:
The problem is that, while the headlines are really, really stark, the evidence is really, really weak. Those headlines, which one psychologist I spoke to described as “scaremongering”, are based on studies that show small, ambiguous effects; they suggest that social media is the disease, when the research cannot show that it’s anything but a symptom; and almost all the studies are weak and badly designed, so even what little they do show we can’t take very seriously.
Perhaps the saddest story over the break was the sudden death of HQ Trivia co-founder and CEO Colin Kroll. Kroll played a vital part in the creation of two social platforms: first Vine, which was acquired by Twitter and remains a cultural phenomenon to this day, and later the trivia app HQ, one of the only vaguely social apps to find any measure of success in the past few years. I’ll miss him.
And finally ...
Perhaps you remember the story of Carter Wilkerson, who was 16 years old when he had the bright idea of tweeting at Wendy’s asking how many retweets he would need to earn in order to score a year’s worth of free chicken nuggets. Wendy’s told him that the answer was 18 million, but relented after he earned a few million.
Well, as of today, Wilkerson’s tweet is relegated to the Wendy’s trash can of history. Take it away, Hamza Shaban:
The most retweeted tweet of all time now belongs to Yusaku Maezawa, a Japanese billionaire behind the e-commerce company Zozotown. His message to the Twitterverse promised 100 winners a chance to win a piece of 100 million Japanese yen, or about $920,800, if they retweeted him.
Maezawa said his promotion on Twitter was a show of gratitude after Zozotown sold 10 billion yen worth of merchandise during its New Year’s sale. His message has been retweeted more than 5.6 million times. He said he would contact the winners through direct message.
Now why didn’t I think of that?
Talk to me
Send me tips, comments, questions, and disinformation about Alabama: firstname.lastname@example.org.