Most of the giant platforms have an easy pitch for developers. Apple developers make apps for iOS and Mac, then sell them for money. Google developers make Android and ChromeOS apps, then sell them for money. Amazon developers launch businesses on AWS or Amazon’s storefront, and then sell goods and services for money.
Today Facebook’s annual F8 developer conference kicks off in San Jose, and it’s worth noting how complicated its pitch to developers has become. There was a time when it was relatively straightforward — developers like Zynga once printed money selling virtual cows through Facebook’s popular gaming platform.
But that part of Facebook has withered to near-nothingness, and in the meantime developers became one of the company’s biggest headaches. Cambridge Analytica is, at its root, a developer story — one that Facebook responded to, out of necessity, by shutting down wide swathes of the platform to prevent anything like it from ever happening again.
So what should we expect from F8 this year? At Wired, Brian Barrett does a full walk-up, predicting announcements around privacy, encryption, AI, Oculus and augmented reality, Facebook Dating, and Facebook Watch, among other subjects.
A glance at the schedule for this year shows that Facebook’s developer story is now one that is largely about growth marketing and customer service. Sessions planned for after the keynote focus on using Messenger as a business tool, building better ads using the stories format, and “the value of VR for enterprise.”
(Incidentally, that last one is an area where Facebook’s developer story looks like a normal software business: Oculus developers can sell software for money.)
Facebook could have some big surprises in store. But given what we know so far, it looks like 2019 could be a year where the company announces more incremental developments. That might be fine — products announced at F8 often fail to ship.
But what if you hoped for a more searching discussion about the relationship between social networks and modern society? CEO Mark Zuckerberg released one of those on Friday: a discussion with the historian Yuval Noah Harari, whose book Sapiens has been required reading in Silicon Valley for the past few years.
Harari is whip-smart, and he does an admirable job here of homing in on some key issues. Does Facebook want to “connect” people for any particular purpose, or simply to keep them looking at a screen? How do you build a social network that improves cohesion among people around the world, rather than erodes it? How do you build artificial intelligence systems that don’t serve as tools of surveillance and control? Is the internet economy undermining human agency and democracy?
Any one of those questions could serve as the basis for a book-length thesis. And it’s easy to see how anyone working on these issues might be tripped up by a question when put on the spot. Still, I found myself unsettled by how weak Zuckerberg’s answers generally are here. Faced with the toughest questions he’s had yet during his year of challenging conversations, he defaults to a simple faith in the power of Facebook and democracy.
Here’s a representative sample:
Harari: The Soviet model just didn’t work well because of the difficulty of processing so much information quickly and with 1950s technology. And this is one of the main reasons why the Soviet Union lost the Cold War to the United States. But with the new technology, it’s suddenly, it might become, and it’s not certain, but one of my fears is that the new technology suddenly makes central information processing far more efficient than ever before and far more efficient than distributed data processing. Because the more data you have in one place, the better your algorithms and then so on and so forth. And this kind of tilts the balance between totalitarianism and democracy in favor of totalitarianism. And I wonder what are your thoughts on this issue.
Zuckerberg: Well, I’m more optimistic about —
Harari: Yeah, I guess so.
Zuckerberg: About democracy in this.
Zuckerberg: I think the way that the democratic process needs to work is people start talking about these problems and then even if it seems like it starts slowly in terms of people caring about data issues and technology policy, because it’s a lot harder to get everyone to care about it than it is just a small number of decision makers. So I think that the history of democracy versus more totalitarian systems is it always seems like the totalitarian systems are going to be more efficient and the democracies are just going to get left behind, but, you know, smart people, you know, people start discussing these issues and caring about them, and I do think we see that people do now care much more about their own privacy about data issues, about the technology industry.
“Smart people care about these issues and are discussing them” is an argument about a positive future rooted in hope rather than empirical fact. I want to believe that the virtues of democracy are self-evident and capable of withstanding creeping authoritarianism around the world — but when a historian of Harari’s stature starts raising questions like these, I have reason to worry.
One reason why Zuckerberg’s arguments feel so thin here is that they generally lack a personal point of view. Zuckerberg would almost always rather describe the world as it is than opine about how he thinks it ought to be. His case for Facebook suffers for it: optimism is an attitude, not a worldview.
I’m glad Zuckerberg offered Harari a big platform on which to raise the questions he did. And while I don’t expect we’ll get any answers at a developer conference, I would like to see Facebook at least acknowledge the stakes. In past years F8 has cheerfully described a world coming together. In 2019 it feels more like a world hanging in the balance.
Yet another shooting from someone who apparently became radicalized online unfolded near San Diego over the weekend. Ben Collins looks at the Facebook connection. (Separately, here’s a good Alex Stamos thread looking at the similarities and differences between fighting ISIS and white nationalist terrorism on social networks.)
A link to the Facebook page was posted before the shooting to the far-right message board 8chan by a user claiming to be John T. Earnest, the white supremacist who has been charged in the attack on the Chabad of Poway synagogue. The page promised a livestream of the attack and an “open letter” filled with anti-Semitic tropes.
Many of the posts on the Facebook page celebrated mass shooters, and the first posts provided guesses as to how many people he would kill. One user posted a meme featuring an AR-15 rifle with the words “here we go.” Another commenter asked for a video stream of the attack, saying he “needs the blood for Santa Muerte,” the saint of death. Both of those accounts remain active.
Today Facebook announced its first set of grants for independent research on social networks’ effects on democracy. Makena Kelly walks us through them:
The projects will give more than 60 academics access to “privacy-protected Facebook data” to help conduct research into a range of topics, including the impact of IRA-trolling on Germany’s 2017 election and the spread of fake news during the Chilean elections in the same year.
Facebook will provide the researchers with data from the platform’s APIs for CrowdTangle, its Ad Library, and, eventually, an anonymized URL dataset. Researchers around the world competed for the grants, although Facebook was not involved in determining which projects were approved. The company has also pledged not to interfere in the research going forward.
Facebook held a call with reporters Friday to discuss its preparations for the EU elections; the transcript of the call is here.
Julian E. Barnes and Adam Goldman report that intelligence agency task forces set up to protect election integrity for the midterm elections have been made permanent:
“We recognize that our adversaries are going to keep adapting and upping their game,” Christopher A. Wray, the F.B.I. director, said Friday in a speech in Washington, citing the presence of Russian intelligence officers in the United States and the Kremlin’s record of malign influence operations.
“So we are very much viewing 2018 as just kind of a dress rehearsal for the big show in 2020,” he said in his remarks at the Council on Foreign Relations.
David Uberti explores how Fox News has continued to get more engagement than any other publisher on Facebook, despite the platform’s shift to emphasize meaningful conversations among friends:
Fox News may also be benefiting from Facebook’s effort to clean up its act. Soon after the 2016 election, as fears around misinformation hit a fever pitch, Facebook began turning its dials to screen out so-called fake news and conspiracies. In some cases, Facebook demoted or banned hyperpartisan pages, many of them smaller or more conspiratorial than famed Sandy Hook truther site InfoWars.
This cleanup effort could be inadvertently boosting Fox, said Robert Faris, who studies digital communication as the research director at Harvard’s Berkman Klein Center for Internet & Society. By diminishing the reach of a constellation of largely right-wing, fringe sites, Facebook may have effectively redirected some users’ attention toward Fox News.
Judd Legum reports that the Trump campaign ran hundreds of ads that violated Facebook’s rules … by including the phrase “Attention Ladies.”
The campaign appears to be leaning on Melania to bolster Trump’s low support with women. Focusing on Texas, which some Democrats believe is the next swing state, is also an interesting choice.
But these ads also explicitly violate Facebook’s ad guidelines because they include “prohibited content.” Facebook’s rules prohibit ads that reference the “personal attributes” of the people being targeted.
“Ads must not contain content that asserts or implies personal attributes,” Facebook’s rules state, including “direct or indirect assertions or implications about a person’s… gender identity.” The phrase “Attention Ladies” at the beginning of each of these ads violates the guidelines.
Facebook is trying to stop companies that sell followers and other fake engagement, Adi Robertson reports:
Facebook has sued a New Zealand company that sold fake likes, shares, and followers on Instagram, saying that “we will act to protect the integrity of our platform.” It alleges that Social Media Series Limited — a company run by Arend Nollen, Leon Hedges, and David Pasanen — spent years flouting Facebook’s requests to stop selling fake, automated engagement through sites with names like Likesocial.co and IGFamous.net. The suit asks a US court to stop the company’s behavior and award damages for manipulating the Instagram platform.
Roger Parloff has a nice profile of Chris Cox, featuring the first interview with him since he quit Facebook:
He is leaving because of what he calls, only half jokingly, “artistic differences” with CEO Mark Zuckerberg, he says. “But I respect and care about Mark so deeply,” he adds, “that I would never really want to get into more detail than that.”
According to three people familiar with his thinking, he disagreed with Zuckerberg about his decision to integrate the Facebook “family of apps”—Facebook, Messenger, Instagram, and WhatsApp—and also with Zuckerberg’s more surprising and dramatic pivot, announced March 6, to create more end-to-end encrypted spaces on the platform for individual and small group communications.
Apple started banning screen-time management apps after introducing its own tools, Jack Nicas reports. (Apple responded saying that the apps were improperly using mobile device management features intended for business use only.)
Over the past year, Apple has removed or restricted at least 11 of the 17 most downloaded screen-time and parental-control apps, according to an analysis by The New York Times and Sensor Tower, an app-data firm. Apple has also clamped down on a number of lesser-known apps.
In some cases, Apple forced companies to remove features that allowed parents to control their children’s devices or that blocked children’s access to certain apps and adult content. In other cases, it simply pulled the apps from its App Store.
Mark Bergen and Josh Eidelson report that Google may be cracking down on its internal dissidents:
Hundreds of Google staffers met on Friday and discussed what activists allege is a frequent consequence of criticizing the company: Retaliation. Two leaders of recent company protests said they’ve been mistreated by managers and collected similar stories from other workers at the world’s largest internet company.
The claims of retaliation are the latest in a series of internal upheavals over issues ranging from the use of artificial intelligence for military purposes to executive misconduct and the rights of contract workers.
Weeks later, YouTube’s biggest individual creator addressed the fact that terrorists keep telling people to subscribe to his channel:
Kjellberg is calling for his subscribers to stop the meme entirely, and addressing the uncomfortable implications of the New Zealand attack.
“To have my name associated with something so unspeakably vile has affected me in more ways than I’ve let shown,” Kjellberg says in the video. “I just didn’t want to address it right away, and I didn’t want to give the terrorist more attention. I didn’t want to make it about me, because I don’t think it has anything to do with me. To put it plainly, I didn’t want hate to win.
“But it’s clear to me now the ‘Subscribe to PewDiePie’ movement should have ended then.”
Chase Bank broke the cardinal rule of Twitter (never tweet) and got its banker ass handed to it as a result. Elizabeth Warren’s dunk was my favorite. Michael Cappetta reports:
Chase Bank’s effort to provide a little “Monday Motivation” backfired after a tweet pushing people to be fiscally responsible was widely criticized as “poor-shaming.”
Chase, which has 365,000 Twitter followers, quickly deleted the post.
Kurt Wagner’s history of Facebook’s pivot to mobile includes this interesting note on Project Oxygen, the company’s effort to ensure its app could be distributed in the event Apple or Google ever tried to block it. (PS, today is Kurt’s first day at Bloomberg. Congratulations Kurt!)
Oxygen, according to four sources familiar with the plan, is intended as a defense against a bigger, longer-term issue that Google created for Facebook: As the owner of the Android operating system — and thus the Google Play store where billions of people around the world can download Facebook’s products — Google has a virtual chokehold over Facebook’s distribution. Oxygen is the “break glass in case of emergency” plan Facebook put in place in case Google ever decides to suck all the oxygen out of the room.
The plan was created around 2013, sources say, a time when Facebook was still very much transitioning to mobile and worried about Google’s leverage. It’s unclear how necessary Oxygen is today, though in the past the plan included ways to ensure people can access the Facebook app on Android phones outside the Google Play Store, according to former employees. That includes strategies like sideloading, which would let people download an Android app from a mobile web browser instead of the Play Store, for example.
My favorite story of the day for social media goths. Valar morghulis y’all!
The researchers predict that, based on the social media platform’s user levels in 2018, the number of deceased Facebook users could be at least 1.4 billion or potentially as high as 4.9 billion by the end of the century.
In the most extreme scenario, in which Facebook gains no new users ever again, the dead outnumber the living within 50 years.
Mark Zuckerberg built a cute box to help his wife get more sleep and he posted about it on Instagram:
Zuckerberg, who shares two daughters, Maxima, 3, and August, 1½, with Chan, said the dilemma led him to create a small wooden box that sits atop Priscilla’s nightstand and puts out a dim light between the hours of 6 and 7 a.m.
The father of two noted that the light is “visible enough that if she sees it she’ll know it’s an okay time for one of us to get the kids, but faint enough that the light won’t wake her up if she’s still sleeping.”
Charlie Warzel is alarmed at the similarities in online posts from the suspects in shootings at a California synagogue and a New Zealand mosque:
The Poway attack seems to be another horrifying entry in a lineage of hate crimes carried out for a captive audience of digital onlookers. Worse yet, these online communities appear to be incentivizing the darkest impulses of their worst users. Like the Christchurch massacre, the Poway shooting is not only tailored for the internet but also sickeningly standardized. The digital footprint and manifestoes of these white nationalist terrorists follow a familiar template — one that each shooter fills in with their own hideous details. Indeed, it seems real-world murderous hate crimes have become a message board meme of sorts. And like any online meme, the creation cycle only seems to be accelerating, refining itself and, horrifyingly, increasing in frequency. Online, it plays out like some game, but its effects are morphing into the real world and spreading violence.
Mozilla researchers say Facebook’s ad archive API isn’t sufficient for researchers to understand what’s happening on the platform:
The fact is, the API doesn’t provide necessary data. And it is designed in ways that hinders the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation.
Last month, Mozilla and more than sixty researchers published five guidelines we hoped Facebook’s API would meet. Facebook’s API fails to meet three of these five guidelines.
Anna Wiener reviews Dorsey’s appearance at TED:
As Dorsey pivoted from non-answer to non-answer, it was hard not to wonder whether, despite his appearance of media-savvy calm, he wasn’t in over his head. Since the 2016 election, it has grown increasingly clear that allowing young, mostly male technologists to build largely unregulated, proprietary, international networks might have been a large-scale, high-stakes error in judgment.
That Dorsey is now expected to find a solution to unprecedented and unforeseen problems, on a platform designed thirteen years ago for narrow and relatively innocent use cases, seems darkly comical at best—an instance of refusing to learn from our mistakes. “He’s dealing with a scale of a problem that doesn’t have a lot of precedent in human history,” a programmer friend of mine texted. “It’s actually kind of scary that he comes across as so unstudied. I think ‘conversational health’ is a dodge. Twitter, and Jack, want to avoid taking positions on who is doing harm. But they don’t have that luxury at this point, because Twitter is such a megaphone.” Change will need to happen on a systemic level, as Dorsey noted. The extent to which this is possible, when the systems are not only working as designed but being rewarded for it, depends on the willingness of a public company to bet against its own future.
Katie Notopoulos loves her new Facebook Portal so much, and laments that the company’s bad reputation over privacy means most people won’t try it:
Remember when the internet was fun? Remember when the idea of a social network — Facebook, even — was fun and awesome? Getting to look at pictures of your friends online? That fucking RULED. It’s mind-boggling to remember that at one point, Facebook was something that genuinely sparked joy.
I’m mad that Facebook ruined a good thing by fucking up so much with constant privacy fumbles. I’m mad that it isn’t fun anymore, that instead of a place for a cool time with buddies it’s literally destabilizing democracy and enabling genocide. I’m mad that Facebook has diarrhea-shat all over itself so often that all reasonable people hate and distrust the company so much that no one in their right mind would EVER buy a Facebook video chatting machine, no matter how amazing and good it is. I’m mad that Twitter took a platform that’s the perfect joke vessel and let it become a place people don’t feel safe on. I’m mad Tumblr was left to die, I’m mad about Flickr, I’m mad that the promise of an internet that seemed bursting with possibilities turned out to be bursting with horrors. I’m angry that a million tiny bad things and wrong turns have happened to erode our trust in the internet, and now “the internet” is shorthand for “bad place.” Most of all I’m pissed that Facebook’s constant stream of unforced errors is ruining my ability to have a Good Time Online.
And finally ...
We often write about Russia in the context of its interference in foreign affairs. Its latest salvo in this effort is apparently a giant spy whale and I can’t imagine ending today’s newsletter any other way:
A beluga whale found off Norway’s coast wearing a special Russian harness was probably trained by the Russian navy, a Norwegian expert says.
Marine biologist Prof Audun Rikardsen said the harness had a GoPro camera holder and a label sourcing it to St Petersburg. A Norwegian fisherman managed to remove it from the whale.
If Pixar wants to make something that is not a bad sequel to one of its existing franchises … I think Russian spy beluga could have legs.
Talk to me
Send me tips, comments, questions, and your F8 predictions: email@example.com.