Yesterday, Mark Zuckerberg made an appearance at the Aspen Ideas Festival. In keeping with the spirit of the event, Zuckerberg brought some ideas. The big ones:
Facebook was right not to remove the doctored video of House Speaker Nancy Pelosi. Zuckerberg said it should have been flagged as misleading more quickly, but defended leaving it up. (I basically agree with him on this one.)
“This is a topic that can be very easily politicized,” Zuckerberg said. “People who don’t like the way that something was cut...will kind of argue that...it did not reflect the true intent or was misinformation. But we exist in a society...where we value and cherish free expression.”
But Facebook will treat deepfakes differently than other forms of misinformation. Zuckerberg said that the company’s policy team is currently considering it: “There is a question of whether deepfakes are actually just a completely different category of thing from normal false statements overall, and I think there is a very good case that they are.”
Facebook can’t protect against election interference alone. Zuckerberg was rightly critical of the US government’s extremely weak response to Russian attacks leading up to the 2016 election, saying:
“One of the mistakes that I worry about is that after 2016, when the government didn’t take any kind of counteraction, the signal that was sent to the world was: ‘OK, we’re open for business.’ Countries can try to do this stuff and our companies will try their best to try to limit it, but fundamentally, there isn’t going to be a major recourse from the American government. Since then, we’ve seen increased activity from Iran and other countries, and we are very engaged in ramping up the defenses.”
On Tuesday, some reports had suggested that Zuckerberg was going to unveil a surprise new “constitution” for Facebook. Instead, on Thursday the company released a report detailing the progress it is making in building an independent oversight board for content review. The board is connected to Zuckerberg’s big ideas — this is the body that could someday make a binding, independent evaluation of whether a video like the Pelosi fake could stay up on the site.
Since proposing the idea last year, Facebook has held six workshops around the world, which included more than 650 people from 88 countries. Among other things, the company has been conducting a kind of mock trial — having participants debate what to do with particular pieces of controversial content, as part of the work of developing a fair process for the board to implement in the future.
The idea remains to build a board of 40 people who will make content review decisions in small panels. But all of the details are up for discussion, and you can read about the infinitely branching debates the company is now having in the report itself. It makes for a surprisingly brisk read — for one thing, it goes out of its way to find and cite examples of people calling the board a stupid idea. And it’s much more entertaining than this halting, uncertain conversation between Zuckerberg and two prominent law professors, which attempts to bring a sense of history to the conversation but mostly just magnifies the historical weirdness of absolutely everything under discussion.
Mostly, though, it’s just wild to watch a public company staging a miniature constitutional convention in 2019. The main problem is that almost anything is possible. To wit, from Facebook’s report today:
Facebook has suggested that Board members serve a fixed term of three years, renewable once. Other suggestions included varied term lengths; staggered appointments; and shorter term lengths, given the “rapid pace of change” in content and technology. However, while some felt that three years was too long, others felt it was not long enough. The latter believed that more time is necessary for members to become acquainted with their responsibilities, as well as the complexities of content governance.
Feedback was similarly split on the size of the Board. Facebook has suggested up to 40 members on the initial Board, which would be global in nature and organized to operate and decide on cases in panels. Some felt this number was too small and expressed concern over “docket management” and “caseloads.” Others, conversely, found the number to be unwieldy and unmanageable. Still others, on a more practical level, suggested that the Board include 41 members, in case a tiebreak would be required.
It goes on like this for 38 pages. (The appendices go on for another 177.)
Many important decisions appear to remain totally up in the air. For example, I assumed that one benefit of developing an independent oversight board would be to allow the board to create precedents — a kind of case law for future board cases to refer to. But according to the progress report, many participants have frowned on the idea of precedents at all:
Overall, feedback generally supported some sort of precedent-setting arrangement. Most expressed hope that the Oversight Board could support “some idea of … continuity, some idea of stare decisis” that could evaluate “multiple fact patterns and have some precedential weight.” Response from the public questionnaire suggested the same. The majority of respondents (66%) stated that “considering past decisions is extremely to quite important,” while almost a third (28%) consider past decisions as “somewhat important.”
Others felt that precedent would need “to be considered carefully, as … there will need to be overruling rules articulated in order to reverse panel decisions that are later seen to be out of step with changing circumstances.” Furthermore, it was argued, “a strict coherence rule may cause a situation where the first panel to discuss a certain issue might set a standard that may not be reconsidered later. This will create a sense of arbitrariness and stagnation.” Others argued that since social media is a rapidly changing industry, precedent should not prevent review of future, similar content. In the end, many argued for balance: an understanding of precedent that would help ensure consistency but not necessarily be determinative.
The report doesn’t make clear how these questions have been resolved, though it seems likely that many have been. Facebook says a final charter for the board will be released in August, and that it will work to stand up the first group of panelists shortly thereafter.
There are at least two good reasons to support Facebook’s board initiative. One is that it shows that the company understands its power over public speech is untenable, and is seeking to devolve some of that power back to the public. Two is that by returning some of that power to the people, Facebook can become more accountable to its user base over time. The details are all messy, and of course they are — it’s a pseudo-constitutional convention! But the goal still strikes me as a worthy one, and Facebook is moving ahead with a caution that is as welcome as it is rare.
In a significant step, Twitter says it will now put a content warning over certain inflammatory tweets posted by big accounts, Makena Kelly reports:
Today, Twitter is rolling out a new notice for tweets belonging to public figures that break its community guidelines. Now, if a figure like Donald Trump were to tweet something that broke Twitter’s rules, the platform could notify users of the violation and lessen the reach of the tweet. In recent interviews, Twitter executives have hinted that a change like this would be coming soon.
This notice will only apply to tweets from accounts belonging to political figures, verified users, or accounts with more than 100,000 followers. If a tweet is flagged as violating platform rules, a team of people from across the company will decide whether it is a “matter of public interest.” If so, a light gray box will appear before the tweet notifying users that it’s in violation, but it will remain available to users who click through the box. In theory, this could preserve the tweet as part of the public record without allowing it to be promoted to new audiences through the Twitter platform.
With Missouri Sen. Josh Hawley making a racket about Facebook’s data practices, Hamdan Azhar explores how his campaign — and Sen. Sherrod Brown’s — uses information gleaned from the service:
Senator Josh Hawley (R-Missouri) told Yahoo Finance that he wouldn’t trust Facebook with his money. “I don’t trust Facebook with anything,” he said.
Just one problem: Despite their professed concerns with Facebook, both senators’ campaign websites—sherrodbrown.com and joshhawley.com—have an invisible piece of Facebook technology, called a pixel, that tracks when anyone visits their homepages and shares this information with Facebook. Hawley’s website even shares when visitors donate and the exact donation amount. Facebook can then associate that information with an individual’s Facebook account.
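To make the mechanism concrete: a tracking pixel is just a tiny request the visitor’s browser makes to Facebook’s `/tr` endpoint, carrying the site owner’s pixel ID, an event name, and optional custom data such as a purchase or donation amount. Here is a minimal Python sketch of the URL such a request targets — the pixel ID and amounts are hypothetical, and real sites embed this via Facebook’s JavaScript snippet rather than building the URL by hand:

```python
from urllib.parse import urlencode

# Hypothetical pixel ID, for illustration only
PIXEL_ID = "1234567890"

def pixel_request_url(event, custom_data=None):
    """Build the URL a tracking pixel fires for a given event.

    The browser requests this URL (typically as an invisible image or
    via the pixel's JavaScript), which is how the visit — and any
    attached data, like a donation amount — reaches Facebook's servers.
    """
    params = {"id": PIXEL_ID, "ev": event}
    for key, value in (custom_data or {}).items():
        params["cd[%s]" % key] = value  # custom data fields
    return "https://www.facebook.com/tr?" + urlencode(params)

# Every homepage visit fires a PageView event:
print(pixel_request_url("PageView"))

# A donation confirmation page can report the exact amount:
print(pixel_request_url("Purchase", {"value": "50.00", "currency": "USD"}))
```

Because the request is made from the visitor’s browser, Facebook receives it alongside the visitor’s own cookies — which is what lets the company tie the donation back to an individual account.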
Aarti Shahani looks at the Facebook presence of warlord Lt. Gen. Mohamed Hamdan Dagalo, who reportedly oversaw the killing of more than 100 people in Sudan:
Lt. Gen. Mohamed Hamdan Dagalo, better known as Hemeti, is a social media personality. He is also the leader of the Rapid Support Forces — the paramilitary group that attacked thousands of pro-democracy protesters this month, leaving more than 100 dead. This is a bit of a second act for Hemeti, who also served time with the Janjaweed, the militia group considered responsible for the genocide in Darfur about 15 years ago, according to Foreign Policy magazine.
On Facebook, multiple pages promote Hemeti as a formidable yet kind authority figure.
Emily Birnbaum recaps a hearing this week on online extremism:
Top tech companies, including Facebook, have claimed that their AI systems are already successfully detecting a huge swath of terrorist and extremist content. But experts at the hearing said those claims are often overblown.
“Context is vitally important and context can often be hard for algorithms to detect,” Ben Buchanan, an assistant teaching professor at Georgetown University, said.
Will Oremus explores the Amazon panopticon, now under construction:
The Amazon of today runs enormous swaths of the public internet; uses artificial intelligence to crunch data for many of the world’s largest companies and institutions, including the CIA; tracks user shopping habits to build detailed profiles for targeted advertising; and sells cloud-connected, A.I.-powered speakers and screens for our homes. It acquired a company that makes mesh Wi-Fi routers that have access to our private Internet traffic. Through Amazon’s subsidiary Ring, it is putting surveillance cameras on millions of people’s doorbells and inviting them to share the footage with their neighbors and the police on a crime-focused social network. It is selling face recognition systems to police and private companies.
The Amazon of tomorrow, as sketched out in patents, contract bids, and marketing materials, could be more omnipresent still. Imagine Ring doorbell cameras so ubiquitous that you can’t walk down a street without triggering alerts to your neighbors and police. Imagine that these cameras have face recognition systems built in, and can work together as a network to identify people deemed suspicious. Imagine Ring surveillance cameras on cars and delivery drones, Ring baby monitors in nurseries, and Amazon Echo devices everywhere from schools to hotels to hospitals. Now imagine that all these Alexa-powered speakers and displays can recognize your voice and analyze your speech patterns to tell when you’re angry, sick, or considering a purchase. A 2015 patent filing reported last week by the Telegraph described a system that Amazon called “surveillance as a service,” which seems like an apt term for many of the products it’s already selling.
Europe is moving to block any future implementation of a social credit system, James Vincent reports:
A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and mass “scoring of individuals”: a practice that potentially involves collecting varied data about citizens — everything from criminal records to their behavior on social media — and then using it to assess their moral or ethical integrity.
The recommendations are part of the EU’s ongoing efforts to establish itself as a leader in so-called “ethical AI.” Earlier this year, it released its first guidelines on the topic, stating that AI in the EU should be deployed in a trustworthy and “human-centric” manner.
The new report offers more specific recommendations. These include identifying areas of AI research that require funding; encouraging the EU to incorporate AI training into schools and universities; and suggesting new methods to monitor the impact of AI. However, the paper is only a set of recommendations at this point, and not a blueprint for legislation.
Ellen Cushing obtains audio from a meeting in which the home-goods retailer’s co-founder appears to be unaware that the line between business and politics is rapidly eroding:
His argument is a cousin of the one many of his peers in the technology industry have long clung to: that they aren’t really political entities, but simply value-neutral conveyor belts for whatever service it is that they offer—short-term rentals, rides, community, connection, information, entertainment. That their sheer scale, multiplied by the wide spectrum of beliefs held by their users, makes moderation of any kind so Sisyphean and so subjective a task that the only possible solution is to allow for just about any idea, or any customer.
But as my colleague Alexis Madrigal notes, the notion of the unbiased platform is dying before our eyes, if it ever really existed: “Some things could not be said. Some types of content were favored by advertisers and companies. The algorithms they use to sort and promote content have biases.” In other words, you simply cannot order this much information without making some judgments.
Anthony Townsend explores why Google’s plans to build a new kind of urban renewal project in Toronto have drawn outrage among locals. It boils down to trust:
Data governance has been a lightning rod because it’s new and scary. Early on, Sidewalk put more energy into figuring out how the robot trash chutes would work than into how to control the data it and others would collect in the proposed district. As part of Alphabet, you’d think this would have been a source of unique added value versus, say, a conventional development. Not so — the company’s initial proposal in 2017, also hundreds of pages, tacked on a two-page memo to CYA on the topic. It didn’t work, and belated efforts to fill the gap only led to more missteps along the way, doing little to calm critics.
More important questions and criticisms have been raised about Waterfront Toronto’s handling of the Quayside bidding process and its transparency. Existential questions for Canadian cities about the shifting line between public and private delivery of government services are also on the table. None of these have been satisfactorily addressed by Sidewalk, and the number of elected officials speaking out against the project has grown as a result.
Elizabeth Dwoskin reports that a content moderator got fired after posting lyrics from “Factory” and “The Promised Land” on an internal forum. Also:
On Thursday, a group of a dozen moderators published a new letter reviewed by The Washington Post on Facebook’s internal Workplace forum, demanding better pay and a revision of confidentiality agreements that they say prevent them from seeking clinical help to address the traumatic effects of the job, among other asks. The moderators work for an Accenture subsidiary in Austin.
Celia Chen visits China’s internet addiction treatment centers:
Run by Tao Ran, a former People’s Liberation Army colonel who headed army psychology units, the centre is one of the earliest places in China to diagnose and treat internet addiction and is said to have developed treatment protocols that are used in other parts of the country.
The facility consists of several buildings that serve as canteens, dormitories and treatment rooms, arranged around an internal open-air courtyard that doubles up as a basketball court and where patients assemble for exercise. No electronic devices are allowed.
Samantha Cole writes about a $50 app called DeepNude, which “dispenses with the idea that deepfakes were about anything besides claiming ownership over women’s bodies.”
The software, called DeepNude, uses a photo of a clothed person and creates a new, naked image of that same person. It swaps clothes for naked breasts and a vulva, and only works on images of women. When Motherboard tried using an image of a man, it replaced his pants with a vulva. While DeepNude works with varying levels of success on images of fully clothed women, it appears to work best on images where the person is already showing a lot of skin. We tested the app on dozens of photos and got the most convincing results on high resolution images from Sports Illustrated Swimsuit issues.
Adam Mosseri talks to Gayle King about, among other things, a Facebook breakup:
“I think it’s important to be really clear if you believe that we should be separated, why and what problem it’s gonna solve,” he said. “If you look at the issues that I’m most focused on, things like bullying or self-harm or elections integrity, all of those problems become exponentially more difficult for us at Instagram to address if you split us up.”
Farhad Manjoo goes to a Facebook party at Cannes Lions:
There is obviously something conspicuously icky about the excess on display. One morning last week, everyone in Cannes woke up to The Verge’s investigation into horrendous working conditions at a contract facility that hires moderators to monitor Facebook. It was a study in contrasts: The moderators complained of bathrooms covered in feces and menstrual blood. At Cannes, Facebook bought a piece of the beach and built a coffee bar, meeting space and private boat launch to entertain its clients.
It’s not true that the internet is eliminating every job for humans. There are humans everywhere in the social media supply chain. Some of them suffer. Others get to schmooze. The internet changed everything. It also changed nothing.
Twitch is giving its creators another carrot with which to lure paying subscribers, Julia Alexander reports:
Twitch is giving its well-behaved streamers a chance to offer a new, VIP-like feature to their most loyal viewers with subscriber-only streams.
The new feature does exactly what the name suggests: any Twitch Affiliated or Partnered creator can choose to broadcast exclusively for moderators, VIPs, and subscribers. This comes at no additional cost to the subscriber beyond the minimum $5-a-month fee they’re paying to support the streamer. Fans who aren’t subscribed will be greeted with a preview of a broadcast before being asked to subscribe to a channel.
After interviewing two of its top executives, Ben Thompson calls Libra “a bad idea.”
To my mind money — which, at the end of the day, is the medium that makes society work, particularly a capitalistic one — has those same high stakes. That means the downsides should be weighed more heavily than the upsides, which means less efficiency and more accountability should be preferable to the opposite. And that, by extension, means that a currency managed, if not by a single corporation then at best a collection of them, is a bad idea.
To be sure, all of these objections apply to a reality that is very far in the future, if it arrives at all. By the time that future arrives, though, it will be too late to raise them.
Half of all adults who don’t have bank accounts live in seven countries, according to a report cited by Facebook. Elizabeth Lopatto says this could limit Libra’s power to lift people out of poverty:
Facebook is banned in China. Some countries, such as Pakistan, Indonesia, and Bangladesh, have temporarily banned Facebook for periods of time, possibly limiting the effectiveness of any money tied to the app. Facebook mentions this as a risk factor to its business in its quarterly filing: “Government authorities in other countries may seek to restrict user access to our products if they consider us to be in violation of their laws or a threat to public safety or for other reasons, and certain of our products have been restricted by governments in other countries from time to time.”
That’s not all: many of these countries have laws around cryptocurrency. (Yes, I know it is debatable whether Libra qualifies as a cryptocurrency or not. But Facebook is calling Libra a cryptocurrency, so I am going to assume cryptocurrency laws will apply.) India’s current regulations mean Libra can’t operate in the country. Pakistan is considering regulation for cryptocurrencies, but currently they are banned. Cryptocurrency is also implicitly banned in Bangladesh and China.
And finally ...
Brad Esposito talks to people participating in my favorite current trend in Facebook Groups: pretending that you are extremely old:
In the group Snider helps manage, people post Facebook-prompted text images lamenting the death of their “son” in brutal honesty (“My son is dead”), they share gifs of the American flag in faux patriotism, the words “Flood Facebook with our flag!” emblazoned along the top. Often, it’s just someone replicating the ham-fisted way the older generation can often find itself using Facebook’s basic features, asking amongst an army of commas what the acronym “wyd” means (“Is this some gang language?”).
(Yes, it is a gang language.)
Talk to me
Today I invite you to send me tips, comments, questions, and your nominations to Facebook’s oversight board: email@example.com.