At midnight on February 13th, 2014, a Wikipedia user named Lightningawesome added to the list of My Little Pony: Friendship is Magic characters a lengthy biography of Lightning Dash, a capricious, lovable filly existing only in fan fiction. Around that time, an unnamed user defaced the Love Holy Trinity Blessed Mission page, writing, “The LHTBM is nothing but a sect that ruins peoples lives!!!” Four minutes later, someone changed the title of Battlefield 4 to “Kentucky Friend [sic] Chicken and a Pizza Hut.” Three minutes after that, the Horrible Histories page had scrawled across it a dire warning: “You need to get off wikipedia or you will DIEEEE.”
Wikipedia is the encyclopedia “anyone can edit,” and as of this writing it’s had nearly 700 million edits — not all of them well-meaning. Sometimes the mischief is directed, as when The Oatmeal encouraged readers to include Thomas Edison under possible references for “douchebag,” or when Stephen Colbert sent his viewers out to alter “Wikiality” by, say, “proving” that Warren G. Harding’s middle initial stands for “gangsta.” Mostly, though, it’s predictably uninteresting — shout-outs, profane opinions, keyboard-mashed gibberish — happening thousands of times a day across more than 4 million articles.
But you’ll likely never see any of it. Within minutes if not seconds, bad edits are “reverted,” banished to a seldom-seen revision history. As Wikipedia has grown in size and complexity, so has the task of quality control; today that responsibility falls to a cadre of cleverly programmed robots and “cyborgs” — software-assisted volunteers who spend hours patrolling recent edits. Beneath its calm exterior, Wikipedia is a battlezone, and these are its front lines.

Rise of Wikipedia and the coming of the bots
Wikipedia launched in 2001 from the ashes of expert-penned Nupedia. When Nupedia floundered, founders Jimmy Wales and Larry Sanger pivoted to a crowdsourced encyclopedia. Within four years, the English Wikipedia had more than 750,000 articles. No longer an obscure internet experiment, it had gone mainstream.
The increased attention brought a flood of new users with all of the attendant headaches: self-promotion, amateurish additions, and outright vandalism. Wikipedia’s shortcomings, both as an information source and as a self-organizing community, were becoming apparent. In the fall of 2006, Jimmy Wales gave a keynote speech calling on Wikipedians to focus on article quality over article quantity. The site apparently responded: over the next several months, the rate of new article creation slowed, while the culling of unworthy articles increased. Wikipedia was discovering how to manage itself.
Around the same time, it faced what was probably its first sustained campaign of malicious edits. Someone began blanking pages and replacing them with an image of Squidward Tentacles, the SpongeBob SquarePants character. Using open proxies, multiple user accounts, and possibly a bot, the Squidward Vandal bedeviled Wikipedians, at one point bragging via email, "I am a computer programmer and I know all the codes in the world." He or she also claimed to be a new editor who’d gone rogue after being accused of sabotage.
"When I look at these tools, I really think that they saved Wikipedia from a sad defeat at the hands of random people."
In response, four Wikipedians built what would become known as AntiVandalBot. As the name suggests, it was a first attempt at automated vandalism protection: using a relatively simple set of rules, it monitored recent edits and intervened accordingly. Obvious vandalism could be removed automatically, while borderline cases went to another program, VandalProof, for human intervention. Crude by today’s standards, AntiVandalBot nonetheless saved editors time and attention.
It may even have saved the site. One study examined the probability that a typical Wikipedia visit between 2003 and 2006 showed a damaged article. While the chance was always minuscule — measured in thousandths — it had increased exponentially over just three years. Without the evolving anti-vandalism tools, that trend could have continued; editors would simply have been overwhelmed by defacers. "When I look at these tools, I really think that they saved Wikipedia from a sad defeat at the hands of random people," says Aaron Halfaker, a PhD candidate and researcher working for the Wikimedia Foundation. By June 2006, anti-vandalism bots were widespread. (The Squidward Vandal was ultimately bested not by bots but by sleuthing editors; similar vandalism has periodically reappeared.)
In 2007 Jacobi Carter, then a high school student, looked at MartinBot, the latest evolution of AntiVandalBot. He saw too many false positives (benign edits being reverted as vandalism) and too much real damage slipping through. He decided he could improve on it, coding a bot that scored edits against rules about profanity, grammatical correctness, personal attacks, and so on. Other signals helped, too: vandals often removed a lot of information or blanked pages completely, while long-time editors were rarely vandals. By combining all of these rules, Carter’s program, Cluebot, became very effective. In the first two months of service it corrected over 21,000 instances of vandalism, and it ran almost continuously for the next three years.
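A toy version of that kind of rule-based scoring might look like the Python sketch below. The word list, weights, and threshold are invented for illustration; they are not taken from Cluebot's actual rules.

```python
# A toy rule-based scorer in the spirit of the original Cluebot. The word
# list, weights, and threshold below are invented for illustration; they are
# not taken from Cluebot's actual rules.

PROFANITY = {"douchebag", "idiot", "poop"}  # stand-in word list

def score_edit(old_text: str, new_text: str, editor_edit_count: int) -> float:
    """Return a rough vandalism score; higher means more suspicious."""
    score = 0.0
    old_words = old_text.split()
    new_words = new_text.split()

    # Rule: newly added profanity is suspicious.
    added = {w.lower() for w in new_words} - {w.lower() for w in old_words}
    score += 2.0 * len(added & PROFANITY)

    # Rule: removing most of a page, or blanking it entirely, is suspicious.
    if len(old_words) > 50 and len(new_words) < 0.2 * len(old_words):
        score += 3.0
    if not new_text.strip():
        score += 5.0

    # Rule: long-time editors are rarely vandals.
    if editor_edit_count > 1000:
        score -= 4.0

    return score

def is_vandalism(old_text: str, new_text: str, editor_edit_count: int,
                 threshold: float = 3.0) -> bool:
    return score_edit(old_text, new_text, editor_edit_count) >= threshold
```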

By late 2010, though, Carter was ready to work on the next generation, appropriately named Cluebot NG. Basic heuristics had served the original bot well, testifying to the predictability of most Wikipedia vandals. But the rules caught only the most obvious vandalism, and there was plenty of room for improvement. Carter and his friend (and friendly rival) Chris Breneman began working on a totally revamped Cluebot.
The original bot relied on simple heuristics; Cluebot NG would instead employ machine learning. That meant that instead of supplying basic rules and letting the software execute them, Carter and Breneman would provide a long list of edits classified as either constructive or vandalism — the same process often used in spam filtering and intrusion detection. The key to successful machine learning is a large collection of data. Luckily, an anti-vandalism competition had already provided a dataset of about 60,000 human-categorized edits. From that, Cluebot could begin learning, finding patterns and correlations within the data.
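In code, the starting point is simply that labeled corpus, split into a training set and a held-out slice for evaluation. The sketch below assumes a hypothetical tab-separated file of edits and labels; the file name, format, and split are illustrative rather than the competition corpus's real layout.

```python
# A minimal sketch of the supervised setup described above, assuming a
# hypothetical tab-separated file of human-categorized edits: the text of a
# change plus a "constructive" or "vandalism" label. The file name and
# format are illustrative, not the competition corpus's actual layout.
import csv
import random

def load_labeled_edits(path: str = "labeled_edits.tsv"):
    with open(path, newline="", encoding="utf-8") as f:
        rows = csv.reader(f, delimiter="\t")
        return [(text, label == "vandalism") for text, label in rows]

edits = load_labeled_edits()
random.shuffle(edits)

# Hold out a slice for measuring false positives before deployment.
split = int(0.8 * len(edits))
train_set, trial_set = edits[:split], edits[split:]
```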
To enable that learning, Breneman used an artificial neural network, a computational model that mimics the workings of organic brains. But, says Breneman, "You can't just throw a set of English words at a neural network and expect it to figure out a pattern." Preprocessing is required: coding examples into numbers the program can understand. That’s also an opportunity for another kind of processing, called Bayesian classification, which in this case compares the edited words to those in the database. If "science," for example, tends to appear in constructive edits, then an unclassified edit containing "science" is more likely to be constructive too. That’s a simple example; Cluebot uses a number of Bayesian classifications, all of which are fed into the neural network. There are about 300 total inputs leading to a single output: the probability that a given edit is vandalism. Cluebot applies a final pass of filters (checking that the page hasn’t been reverted already, that a user is on a whitelist) before taking any action.
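A drastically simplified version of that pipeline might look like the sketch below, which builds on the hypothetical train_set from the previous example: per-word probabilities stand in for the Bayesian classifications, collapsed into a few numeric inputs for a tiny one-hidden-layer network whose single output is the probability that an edit is vandalism. Cluebot NG's actual features, network size, and training procedure are far more elaborate.

```python
# A drastically simplified version of the pipeline described above, reusing
# the hypothetical train_set from the previous sketch. Per-word statistics
# stand in for the Bayesian classifications, and a tiny one-hidden-layer
# network produces a single output: the probability that an edit is
# vandalism. Cluebot NG's real features and network are far more elaborate.
import math
from collections import Counter

import numpy as np

def word_vandalism_probs(labeled_edits):
    """P(vandalism | word appears in the edit), with add-one smoothing."""
    in_vandal, in_good = Counter(), Counter()
    for text, is_vandal in labeled_edits:
        for w in set(text.lower().split()):
            (in_vandal if is_vandal else in_good)[w] += 1
    return {w: (in_vandal[w] + 1) / (in_vandal[w] + in_good[w] + 2)
            for w in set(in_vandal) | set(in_good)}

def features(text, probs):
    """Collapse per-word probabilities into a small fixed-length vector."""
    p = [probs.get(w, 0.5) for w in text.lower().split()] or [0.5]
    return np.array([np.mean(p), np.max(p), np.min(p), math.log(len(p) + 1)])

probs = word_vandalism_probs(train_set)
X = np.stack([features(text, probs) for text, _ in train_set])
y = np.array([1.0 if is_vandal else 0.0 for _, is_vandal in train_set])

# One hidden layer of 8 units, one sigmoid output, trained by gradient descent.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.1, (4, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.1, 8), 0.0
lr = 0.1

for _ in range(200):
    H = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(H @ W2 + b2)))
    err = out - y                      # gradient of cross-entropy w.r.t. logit
    dH = np.outer(err, W2) * (1 - H ** 2)
    W2 = W2 - lr * H.T @ err / len(y)
    b2 = b2 - lr * err.mean()
    W1 = W1 - lr * X.T @ dH / len(y)
    b1 = b1 - lr * dH.mean(axis=0)

def vandalism_probability(text):
    """Final score: a number between 0 and 1, compared against a threshold."""
    h = np.tanh(features(text, probs) @ W1 + b1)
    return float(1 / (1 + np.exp(-(h @ W2 + b2))))
```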
It patrols 24/7, can execute more than 9,000 edits per minute, and never sleeps or lets its attention flag
Compared with previous incarnations, Cluebot NG is effective, controllable, and reasonably adaptive. One worry for Wikipedians is a high rate of false positives: good-faith edits wrongly categorized as vandalism. Being unfairly chastised for vandalism, as the Squidward Vandal claimed to have been, could turn off new editors before they have a chance to understand Wikipedia’s myriad and Byzantine rules. Cluebot allows its administrator to set the rate of false positives, though that rate can never effectively be zero. "Yes, it does get false positives," says Breneman, "but it's much better than any previous bot."
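One way to think about that administrator-set rate is as a score threshold calibrated on held-out data. The sketch below assumes the vandalism_probability() function and trial_set from the earlier examples; the 0.1 percent cap is an arbitrary illustration, not Cluebot NG's real configuration.

```python
# A sketch of an administrator-set false-positive cap, assuming the
# vandalism_probability() function and trial_set from the sketches above.
# The 0.1 percent target is an arbitrary illustration, not Cluebot NG's
# actual configuration.

def pick_threshold(held_out, max_false_positive_rate=0.001):
    """Choose a score cutoff so that, on held-out constructive edits, at most
    the configured fraction would be wrongly flagged as vandalism."""
    good_scores = sorted(vandalism_probability(text)
                         for text, is_vandal in held_out if not is_vandal)
    if not good_scores:
        return 1.0
    allowed = int(max_false_positive_rate * len(good_scores))
    # Everything strictly above this score would be reverted automatically.
    return good_scores[max(len(good_scores) - allowed - 1, 0)]

THRESHOLD = pick_threshold(trial_set)

def should_revert(edit_text: str) -> bool:
    return vandalism_probability(edit_text) > THRESHOLD
```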
It patrols 24/7, never sleeping or letting its attention flag. It can execute more than 9,000 edits per minute, though it never has to approach that limit. Since 2010 it’s run almost constantly, rolling back thousands of bad edits a day; in early 2013 it topped 2 million edits. One study showed that when Cluebot NG was not operating, malicious edits were still found, by humans, but the time to revert them nearly doubled.
That’s what Cluebot does: like all bots, it makes work more efficient. But one Slashdotter questioned whether bots fit the fundamental ethos of Wikipedia as a community-edited project. After arguing that vandalism is a subjective judgment not reducible to mathematical formulae, beakerMeep wrote, "Editing bots are wrong for Wikipedia, and if they allow it they are letting go of their vision of community participation in favor of the visions (or delusions) of grand technological solutions." Yet from a practical point of view, it’s hard to imagine today’s Wikipedia surviving without bots.
Of course, there are some vandals that only a human can catch.

The trollhunters: humans and cyborgs
Racing to revert vandalism is fun, but "you want to take a second to consider that you’re not crushing somebody."
Early on the morning of February 7th, 2014, a Wikipedia user known only by IP address changed the page for Date Night, the Steve Carell and Tina Fey movie. At the bottom of the cast list, he or she added "Brittany Taya as Art." Just a few minutes earlier, the same IP address had added "Rachel McAdams as Natasha Henstridge (uncredited)" to the cast list for the parody Date Movie. The same IP address was linked to similar changes on over a dozen movies — sneaky changes, tiny bits of misinformation inserted where few would likely notice them. Dozens of edits, spanning months, originated from a range of related IP addresses. Always with the same MO: false additions to cast lists. Seeing the changes is like watching a termite chew through Wikipedia’s carefully built edifice of reliability.

Cluebot didn’t recognize these insidious, inexplicable changes as vandalism. That task fell to a human, a long-time Wikipedia patroller who goes by the name SeaPhoto. (Citing problems with angry vandals, including a death threat from an Australian student who later apologized, he asked only to be identified by username.) SeaPhoto has over 55,000 edits, most of them either fixing vandalism or chastising vandals. He often patrols while watching TV, one eye on the recent edits scrolling by on the screen. Cast-list vandal aside, it usually doesn’t take much attention, so it’s perfect for multitasking. "No patrolling during Breaking Bad or Game of Thrones," though, he writes with a LOL.
SeaPhoto uses a program called Huggle, one of several add-ons that provide an easy interface for reviewing recent edits. That makes him, in the taxonomy of one article on Wikipedia vandalism, a "cyborg" — not a fully automated robot, but not a human manually reverting edits, either. He came to the site in 2006 looking for information about model ships, one of his hobbies. Wikipedia had no information on the topic, and he learned the hard way about the rules of editing. But, he says, "You either accept them or you don’t continue with the project."
He’s committed to Wikipedia’s five pillars, the site’s defining principles, one of which says, "Editors should treat each other with respect and civility" and should always assume good faith on the part of fellow contributors. Wikipedia is both a product — the free encyclopedia — and a collection of dynamic social processes, containing millions of interactions among its members, nearly all of whom will never meet in real life. Without the crowd, Wikipedia stagnates. Participation in the site peaked in 2007; since then, research suggests, the influx of new editors has slowed. There are several possible explanations for that, from the site’s clunky editing interface to its often impenetrable jargon to long-time editors shutting out new (and inexperienced) users. It’s the last that worries SeaPhoto most. Racing to revert vandalism is fun, even when you’re likely to be beaten by a bot, but "you want to take a second to consider that you’re not crushing somebody."
Join us
How does automation affect the social interactions among Wikipedians? That’s the question Aaron Halfaker, the Wikimedia Foundation researcher, has been asking. Looking at anti-vandalism software such as Huggle and Cluebot, he says, "I see this amazing thing: it makes Wikipedia tractable." The long-conventional view of the site as a free-for-all palimpsest of anonymous scribblings — "anyone can edit" — becomes something much different. The tools that saved Wikipedia also altered it by adding a layer of gatekeepers.
Halfaker has examined how such gatekeepers affect new contributors. "When you show up at the edge of a community and you’re there to help, you expect your interactions will be with someone who at least has time to say hello," he says. "These tools aren’t really designed to do that. They’re designed to be efficient. They’re designed to do a job." They’re saving Wikipedia from vandalism, but doing nothing to welcome new users.
The tools that saved Wikipedia also altered it by adding a layer of gatekeepers
How could the gatekeepers be made less imposing? Halfaker and other researchers began experimenting. First they changed Cluebot’s messages to vandals, finding that nicer messages actually stop vandalism more quickly. That suggested new Wikipedians valued a sense of human-to-human interaction, but the problem was larger than that. Wondering if he could somehow bring one-to-one interactions to the businesslike, efficient world of vandalism prevention, Halfaker began developing what he called Snuggle.
Snuggle was designed to offer a more nuanced view of vandalism. Halfaker offers the example of an Egyptian soccer player who goes by his last name, Homos. Seen in isolation, an edit adding his name would often be quickly dismissed as crude vandalism. But coming from an editor with plenty of experience on soccer articles, it would be perfectly reasonable. Snuggle supplies that missing context, showing more about the person behind a potential act of vandalism to "give a clear picture of the whole person, rather than just one action," as Halfaker puts it.
Cluebot efficiently removes the most egregious types of vandalism. But tools like Huggle offer users more ambiguous examples of potential vandalism, then provide little in the way of nuanced response. Halfaker sees that as a problem for the social aspect of Wikipedia: he points to the 2007 peak and notes that new contributions haven’t necessarily gotten worse. People are just as likely today to botch up their first edit. And Wikipedia has never been, from a user-experience standpoint, incredibly welcoming. But encouraging vandalism-fighters while alienating new users does Wikipedia a disservice in the long run. "These design decisions had consequences that no one could have known about," Halfaker says.
Part of what Halfaker’s trying to emphasize is that Wikipedia is not just a battlezone: it’s not simply barbarian hordes throwing themselves against the gates. It’s also a place of collaboration among strangers, with all of the complex social interactions that entails. He’s found, for example, that not everyone wants to use Snuggle, because it doesn’t fit into their established methods of stopping vandalism and helping new users. People being people, they have their own, often idiosyncratic, ways of doing things. Halfaker’s larger project, of improving "newcomer socialization" on Wikipedia, involves devising better ways for novices and veterans to reach common ground.
That, of course, is what Wikipedia is founded on: finding consensus. Robots and cyborgs aside, that’s a difficult thing — one more likely to be aspired to than attained. And one that Wikipedia’s perfectly human users will keep chasing.