Mark Zuckerberg doesn’t want to run Facebook on his own anymore. In the aftermath of the Cambridge Analytica privacy scandal, he told Recode he was “fundamentally uncomfortable” making some policy decisions. Later, he outlined an idea for an independent Facebook “supreme court” that could review decisions about community standards. And in his giant manifesto last year, he called for a “large-scale democratic process” for running Facebook. “We are committed to always doing better, even if that involves building a worldwide voting system to give you more voice and control,” he promised.
Zuckerberg has long talked about Facebook as a new kind of nation, and his comments have played into a larger debate over how to give users a stake in the platforms they populate. But it’s worth remembering that years ago, Facebook did try to become a democracy — and nobody showed up.
I’m talking about Facebook’s site governance vote system, which was announced in February 2009. The company was responding to controversy over a new policy change, which critics had interpreted as giving Facebook unchecked power over user data. It promised to publish draft versions of various rules, let users submit responses, then release new versions based on the feedback. Facebook users would then vote on the draft, and if more than 30 percent of all active registered users participated, their decision would be “binding.”
Zuckerberg described the new system in idealistic language. “As people share more information on services like Facebook, a new relationship is created between internet companies and the people they serve. The past week reminded us that users feel a real sense of ownership over Facebook itself, not just the information they share,” he said in the press release. “Companies like ours need to develop new models of governance.” The release even included a tentative endorsement from the charity Privacy International, praising Facebook’s “bold move towards transparency and democratization.”
A couple of months later, Facebook tested its fledgling democratic process, asking users to approve some new terms of service. Nobody voted.
Well, according to Facebook, a total of 665,654 people voted. But this was around 0.3 percent of its 200 million users at the time. The Los Angeles Times called Facebook’s vote “a homework assignment no one did,” pointing out that Facebook asked users to choose between two long and very similar versions of a policy whose effects they’d probably barely notice either way. Facebook followed the majority opinion, but since most people had voted for the proposed change, this just meant doing a thing it already wanted to do.
Facebook maintained a governance page and opened a few more comment periods for various policies. But the next widely publicized vote, held in 2012, was for a new site policy that would get rid of voting. (It also let the company share account data between services like Facebook and Instagram.) Facebook said it was ending voting to encourage good feedback from a small group of users rather than perfunctory engagement from a lot of them. This time around, the vast majority of voters disapproved of that idea, with 88 percent voting to keep the old documents. But since that was 88 percent of 668,500 voters — comprising 0.0668 percent of Facebook’s 1 billion users — it didn’t really matter. Facebook’s brief experiment with direct user control was over.
Most people won’t volunteer to help run an online government
That happened over five years ago, and today, an idea like the governance vote would be even tougher to pull off. Facebook has 2 billion users to reach, and its policies carry more real-world weight than they used to. Would the platform be comfortable asking people to vote on policies addressing something like anti-Rohingya propaganda in Myanmar? If so, would users worldwide be making a call on such a nationally specific issue? Would we get federated Facebook “states” with their own rules? And as Facebook is still rooting out fake propaganda accounts, how would it stop people from committing virtual voter fraud to game the system?
But all these questions are moot if Facebook can’t get people to vote in the first place. And that would require not just putting up a page or a press release, but making sure every user — regardless of location or language — is clearly encouraged to participate. If Facebook wants people to do more than blindly check a box, it would also need to make sure users understand exactly what they’re voting on, in plain language — not just drop a few links to a wall of policy text.
Facebook’s site governance page is still active, with new rules showing up for comment (but not a vote) every couple of years. One was actually posted yesterday, so you can go participate in the site’s version of a town hall right now — as long as you’re willing not only to read Facebook’s entire proposal for new privacy rules and terms of service, but also to look up the old policies on your own to see what’s being changed.
My colleague Casey Newton has rightly pointed out that making Facebook a “democratic system,” as Zuckerberg has put it, could let the platform duck responsibility for hard decisions. But there’s a more immediate problem: as we learned almost a decade ago, building a system for social media self-governance is hard. Zuckerberg uses “democracy” as a shorthand for “letting users make decisions” or “doing what communities want,” but he doesn’t talk about how Facebook will inform and engage users and communities, instead of assuming that they’ll proactively take on the extra work of running a digital government. We’ve already seen that particular assumption play out on Facebook. It didn’t end well.