Earlier this month, Facebook undertook an effort to recast the debate around the regulation of big tech companies on its own terms. Mark Zuckerberg wrote an op-ed; Sheryl Sandberg published a blog post; and their deputies gave interviews to outlets read closely by policymakers. The overall effect was of a company that has spent the past two years on the defensive organizing around a core set of principles to advocate for: principles that will allow the company to continue operating basically as is.
This week, we saw the second plank of Facebook’s strategy: self-regulation from its product teams. In a meeting with reporters in Menlo Park, myself included, the company announced a series of product updates organized around what the company calls “integrity.” The announcements touched most of Facebook’s biggest products: the News Feed, groups, stories, Messenger, and Instagram. (WhatsApp was a notable exception.) Collectively, the moves seek to strike a better balance between freedom of speech and the harms that come with it. And also, of course, to signal to lawmakers that the company is capable of regulating itself effectively.
Facebook says its strategy for problematic content has three parts: removing it, reducing it, and informing people about the actions that it’s taking. Its most interesting announcements on Wednesday were around reducing: moves that limit the viral promotion of some of the worst stuff on the platform.
“Click-Gap,” for example, is a new signal that attempts to identify sites that are popular on Facebook but not on the rest of the web — a sign that they may be gaming the system somehow. Sites with a large click gap will be ranked much lower in the News Feed. As Emily Dreyfuss and Issie Lapowsky describe it in Wired:
Click-Gap could be bad news for fringe sites that optimize their content to go viral on Facebook. Some of the most popular stories on Facebook come not from mainstream sites that also get lots of traffic from search or directly, but rather from small domains specifically designed to appeal to Facebook’s algorithms.
Experts like Jonathan Albright, director of the Digital Forensics Initiative at Columbia University’s Tow Center for Digital Journalism, have mapped out how social networks, including Facebook and YouTube, acted as amplification services for websites that would otherwise receive little attention online, allowing them to spread propaganda during the 2016 election.
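Facebook hasn’t published how Click-Gap is actually computed, but the basic idea — compare a domain’s traffic from Facebook against its traffic from the wider web, and demote domains where the gap is large — can be sketched in a few lines. Everything below (the function names, the ratio, the threshold) is purely illustrative, not Facebook’s real formula:

```python
# Purely illustrative sketch of the Click-Gap idea. Facebook has not
# disclosed its actual formula; the names and threshold here are invented.

def click_gap_score(facebook_clicks: int, web_wide_clicks: int) -> float:
    """A hypothetical 'gap' ratio: values well above 1 mean the domain
    gets disproportionately more traffic from Facebook than elsewhere."""
    # Add 1 to avoid division by zero for domains with no outside traffic.
    return facebook_clicks / (web_wide_clicks + 1)

def demote_in_feed(score: float, threshold: float = 10.0) -> bool:
    """Domains whose Facebook traffic dwarfs their web-wide traffic
    would be ranked lower in the News Feed."""
    return score > threshold

# A fringe site with 50,000 Facebook clicks but almost no traffic elsewhere:
print(demote_in_feed(click_gap_score(50_000, 200)))     # True
# A mainstream site whose traffic is balanced across sources:
print(demote_in_feed(click_gap_score(50_000, 40_000)))  # False
```

The point of a signal like this is that it’s hard to fake: a site engineered to go viral on Facebook can manufacture Facebook clicks, but it can’t easily manufacture matching traffic from search and direct visits.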
Another move aimed at reducing harm on Facebook involves cracking down on groups that become hubs for misinformation. As Jake Kastrenakes writes in The Verge:
Groups that “repeatedly share misinformation” will now be distributed to fewer people in the News Feed. That’s an important change, as it was frequently group pages that were used to distribute propaganda and misinformation around the 2016 US elections.
Facebook will also soon give moderators a better view of the bad posts in their groups. “In the coming weeks,” it said, it will introduce a feature called Group Quality, which collects all of the flagged and removed posts in a group in one place for moderators to review. It will also have a section for false news, Facebook said, and the company plans to take moderator actions on these posts into account when deciding whether to remove a group.
I like these moves: they take away “freedom of reach” from anti-vaccine zealots and other folks looking to cultivate troll armies by hijacking Facebook’s viral machinery. There are a lot of other common-sense changes in yesterday’s fine print: allowing moderators to turn posting permissions on and off for individual group members, for example; and bringing Facebook verified badges to Messenger, which should cut down on the number of fake Mark Zuckerbergs scamming poor rubes out of their money.
Still, I can’t shake the feeling that all these moves are a bit ... incremental. They’re fine, so far as they go. But how will we know that they’re working? What does “working” even mean in this context?
As Facebook has worked to right its ship since 2016, it has frequently fallen back on the line that while it’s “making progress,” it “still has a long way to go.” You can accept these statements as being true and still wonder what they mean in practice. When it comes to reducing the growth of anti-vaccine groups, for example, or groups that harass the survivors of the Sandy Hook shooting, how much more “progress” is needed? How far along are we? What is the goal line we’re expecting Facebook and the other tech platforms to move past?
Elsewhere, Mark Bergen and Lucas Shaw report that YouTube is wrangling with a similar set of questions. Would the company’s own problems with promoting harmful videos diminish if it focused on a different set of metrics? YouTube is actively exploring the idea:
The Google division introduced two new internal metrics in the past two years for gauging how well videos are performing, according to people familiar with the company’s plans. One tracks the total time people spend on YouTube, including comments they post and read (not just the clips they watch). The other is a measurement called “quality watch time,” a squishier statistic with a noble goal: To spot content that achieves something more constructive than just keeping users glued to their phones.
The changes are supposed to reward videos that are more palatable to advertisers and the broader public, and help YouTube ward off criticism that its service is addictive and socially corrosive.
But two years on, it’s unclear that new metrics have been of much help in that regard. When platforms reach planetary scale, individual changes like these have a limited effect. And as long as Facebook and YouTube struggle to articulate the destination they’re aiming for, there’s continuing reason to doubt that they’ll get there.
Aoife White reports that the Netherlands is considering antitrust action against Apple based on a recent complaint from Spotify:
The Netherlands’ Authority for Consumers & Markets will examine whether Apple abuses a dominant market position “by giving preferential treatment to its own apps,” it said in a statement on Thursday. The probe will initially focus on Apple’s App Store, where regulators have received the most detailed complaints, and Dutch apps for news media, but is also calling on app providers to flag if they have any problems with Google’s Play Store.
The antitrust probe adds to a growing backlash against the tolls Apple and Google charge to developers using their app stores. The EU’s powerful antitrust arm is weighing Spotify’s complaint targeting Apple. This builds on concerns that technology platforms control the online ecosystem and may rig the game to their own advantage. Amazon.com Inc.’s potential use of data on rival sellers is also being probed by the EU to check if it copies products.
The Internet Archive is reporting that the European Union has been overzealous in its recent anti-terrorism enforcement. It’s this sort of thing that causes free-speech advocates to worry when regulations against “harmful content” are enacted:
In a blog post yesterday, the organization explained that it received more than 550 takedown notices from the European Union in the past week “falsely identifying hundreds of URLs on archive.org as ‘terrorist propaganda’.”
Here’s a story that shows how platforms are still struggling to prevent ban evasion:
A day after Facebook banned six Canadian individuals and groups for spreading hate, two made their way back onto the platform with new pages, while 11 pages with similar names and content remained online despite the ban.
Faith Goldy, the Canadian Nationalist Front, Wolves of Odin, and Canadian Infidels were all banned Monday, but more than 24 hours later BuzzFeed News and the Toronto Star found 12 pages, groups, and Instagram accounts using similar names and posting similar content that had been on the banned accounts. After asking Facebook for comment, they were all taken down.
Interesting from Erica Orden and Shimon Prokupecz:
Amazon CEO Jeff Bezos is scheduled to meet with federal prosecutors in New York as soon as this week, according to people familiar with the matter. The meeting signals that the US attorney’s office is escalating its inquiry connected to Bezos’s suggestion that the kingdom of Saudi Arabia was behind a National Enquirer story that exposed his extramarital affair and his claim that the tabloid attempted to extort him.
Speaking of Bezos, he’s facing his largest internal pressure front yet on climate change, Karen Weise reports:
This week, more than 4,200 Amazon employees called on the company to rethink how it addresses and contributes to a warming planet. The action is the largest employee-driven movement on climate change to take place in the influential tech industry.
The workers say the company needs to make firm commitments to reduce its carbon footprint across its vast operations, not make piecemeal or vague announcements. And they say that Amazon should stop offering custom cloud-computing services that help the oil and gas industry find and extract more fossil fuels.
Today in Twitter whoopsies:
“This is a bug in our search typeahead system limited to desktop that we are working to fix,” a spokesperson said. “The issue is that for some search queries, the word ‘People’ is linked to ‘@NYTimes.’” So while we still don’t really know why the search system is working this way, we do know that it’s supposed to be working differently.
Procter & Gamble, one of the biggest advertisers on Google and Facebook, is threatening to drop its ads again if the companies don’t stop showing their ads next to drugs and whatnot. Oh but also … the chief brand officer would like them to develop better tracking solutions! Which one do you think he cares about more?
Pritchard also brought up a key point of friction in the industry. He wants the ad platforms to use a standardized way of identifying individual consumers, so that advertisers can track people as they move across the internet and make sure they’re not repeatedly hitting a consumer with the same ad. But as privacy becomes a bigger concern for people and governments, Facebook, Google and others have used it as a reason to make it even more difficult to do that kind of tracking. The added privacy makes it harder for advertisers to send pinpointed messages to people, increasing their costs and annoying consumers who get hit with the same ad over and over again.
Matt Day, Giles Turner, and Natalia Drozdiak examine the use of human review teams to help improve Alexa’s speech-recognition abilities. It kicked up a firestorm on Twitter over privacy concerns:
Sometimes they hear recordings they find upsetting, or possibly criminal. Two of the workers said they picked up what they believe was a sexual assault. When something like that happens, they may share the experience in the internal chat room as a way of relieving stress. Amazon says it has procedures in place for workers to follow when they hear something distressing, but two Romania-based employees said that, after requesting guidance for such cases, they were told it wasn’t Amazon’s job to interfere.
Natt Garun visits a photo booth convention to examine how Instagram and the larger internet have changed them:
If ever there was an analogy for technology in 2019, the photo booth may be the mascot. What was once an innocuous machine designed to help you socialize and capture moments with friends has now been reappropriated to gather data for profit. In pursuit of shareability, machines are incentivized to create viral-worthy, multimedia content that, in turn, receives and funnels data straight to advertisers. Some photo booths, like Baltimore-based Pixilated, can even follow the same email address to track the specific events a customer attends.
James Wellemeyer says that for his generation, phone calls are out and FaceTime is in:
For groups of college students and high schoolers, texting is out. Their go-to method of communication is FaceTime. “If you want to say something to someone, you don’t call them on the phone anymore,” says Kyle Baker, a 21-year-old junior at George Washington University. “You’ve gotta see their face.” Baker says he FaceTimes his friends “all the time,” and doing so is “totally normal.”
I can back him up. I’m 19, and for my friends and me, random FaceTimes are a way of life. I FaceTime my friends without warning to ask them for help on homework or just to see when they’re free to meet up. It’s easier (and far more fun) than texting, and we’ve been doing it for more than two years now. FaceTime was released in February 2011, when I was 11. My peers and I literally grew up with it.
Nothing makes me happier than LinkedIn attempting to explain human emotions as part of a product launch:
You can use Celebrate to praise an accomplishment or milestone like landing a new job or speaking at an event, or Love to express deep resonance and support, like a conversation about work life balance or the impact of mentorship.
Twitter’s prototype app has a nifty new feature where you can swipe on a reply to like it. It’s a small thing, but definitely my favorite aspect of the new conversational design so far.
The New York Times has a new hub of pieces from its Opinion section looking at evolving attitudes toward privacy and how the rest of us are going to get along in the surveillance economy. Here’s a fine call to arms from my pal Kara Swisher.
Disney’s chairman and CEO, who once considered buying Twitter, had some choice words for social networks when he accepted a humanitarian award:
“Hitler would have loved social media,” he said, according to Variety. “It’s the most powerful marketing tool an extremist could ever hope for because by design social media reflects a narrow world view filtering out anything that challenges our beliefs while constantly validating our convictions and amplifying our deepest fears.
“It creates a false sense that everyone shares the same opinion,” he continued. “Social media allows evil to prey on troubled minds and lost souls and we all know that social news feeds can contain more fiction than fact, propagating a vile ideology that has no place in a civil society that values human life.”
John Herrman makes the case — persuasively — that Twitter ought to let us delete our tweets in bulk.
For one user, it may be akin to updating a Facebook profile or brushing up a LinkedIn page; for another, it takes into account the demands of a new job; for another, deletion may be necessary to travel safely. Twitter has stubbornly refused to address widespread harassment on its platform, and tweet deletion offers a way to mitigate it in some forms. (That is probably the most-demanded feature, if you can call it that — that users be able to use the service without being confronted with targeted abuse.) The Twitter Archive Eraser app is, like Twitter, most popular in the United States, but according to its creators has also gained traction in Saudi Arabia, where Twitter, once seen as a tool of liberation, has been embraced by the government as a tool for surveillance and targeted repression.
And finally ...
Here’s something that feels totally unnecessary and inevitable at the same time:
Instead of tapping a famous actor to fill out the white suit, KFC is using a computer-generated bro that the company is calling “Virtual Influencer Colonel.” As his name implies, this dude loves living the good life and posing for photos on Instagram with his girlfriend, who is also a computer-generated model. Of course he’s got his own hashtag — #secretrecipeforsuccess — and those words are also tattooed across his chiseled abs.
Talk to me
Send me tips, comments, questions, and your incremental Facebook improvements: email@example.com.