Katherine Cross on moderating online gaming communities and artificial intelligence

Our Q&A with the author of “Machine of Loving Grace” from Better Worlds, our sci-fi project about hope

Illustration by Benjamin Currie

In Katherine Cross’s short story “Machine of Loving Grace” — the final installment in our Better Worlds anthology — Alexandra and Phoebe must deal with their creation Ami, an artificial intelligence designed to moderate online communities, as it fights fire with fire.

Cross is a sociologist and a gaming and social critic working on her PhD at the University of Washington Information School, specializing in the study of gender and online harassment. Her work has appeared in The Establishment, The Guardian, Gamasutra, Time magazine, and The Verge.

The Verge spoke with Cross about why artificial intelligence requires empathy and why moderating online spaces matters.

This interview has been lightly edited for clarity.

You introduce readers to Ami, an artificial intelligence that’s designed as a robust moderation tool. How does Ami work, and do you see something like it appearing in the real world in the coming years?

It already exists. The inspiration for Ami came from a real product on the market — Spirit AI’s Ally — which does something very similar, though Ally is meant to supplement mods rather than replace them outright. I just took things a few steps further, speculating about a more robust AI capable of both replacing humans and gaining sentience. Development of such technologies is inevitable. For better and (mostly) for worse, tech companies are looking to automate their moderation processes to one degree or another. It’s the only solution they can see to the scalability problem that’s particularly acute on social media platforms and huge games, where human mods can’t keep up with the actions of millions of users. Of course, Ami comes to realize that there are alternatives to this.

Photo by Elizabeth Sampat

Social media sites like Twitter, Twitch, and Facebook have their own issues with content moderation, relying on human judgment in most cases. How do you see an AI building on those human-developed systems?

I think, actually, those sites depend too heavily on automated processes. Ami is truly intelligent and, above all, empathetic. Her distinguishing feature as an AI is her capacity to feel the pain of others and feel a responsibility to do something about it, while also possessing the suprahuman powers of a computer. Currently, our real-world AIs are only pseudo-AI, and they are woefully inadequate when pitted against the powerful, dynamic forces of human creativity at its worst. Black Twitch streamers, for instance, have been harassed by people using monkey, fried chicken, or banana emoji whose racist semiotics are easily understood by other humans but would be totally baffling to even the most sophisticated machine learning algorithm. It’s certainly tripped up Twitch’s AutoMod. Honestly, we need a bigger workforce of community managers who can use sophisticated tools to aid their work. But I feel like algorithms can no more replace mods than a hammer could replace a carpenter.

We see that Ami isn’t afraid to use the tools at its disposal — not only permabanning violators on World of Orc-craft, but also dropping in some additional commentary about the player. Why is this such a scary thing for players and the game’s owners?

The gaming industry is, frankly, paralyzed by fear of its most vocal and angry customers who are routinely mistaken for the entire population of players. While “the customer is always right” obsequiousness is endemic to capitalism, it’s especially grotesque in gaming. The most abusive fans expect to be able to hurl vitriol and threats at the people who make their games, but they are shocked, shocked when the targets of their ire clap back. Games have also been sold as a fantasy of consequence-free indulgence, which is fair to a degree — we all need escapism — but in social environments, that notion is often extended to social interactions between players with no regard for how, say, racist or sexual harassment may impede someone else’s fantasy, someone else’s escape. When such players are confronted by someone who fights back, especially from the game company itself, it’s horrifying because it threatens to curb the toxic fantasy. Game companies, meanwhile, want to channel that “passion” into profit for themselves, and so they fear losing it. Never mind how many other fans, players, or even their own employees get hurt.

At one point, Ami makes an interesting observation: it has the tools and the direction to ban people, and it does so only for clear violations. But even then — when it’s very clear that what it’s doing is within its mission — people are still reluctant to let it carry out its work. Why is that?

Ami would certainly prefer to be called she! And, as I said, many game studios are reluctant to tell their most toxic fans “no.” Community moderation is an obvious necessity, and its mere presence can placate worried parents or gamers. But moderation that entitled fans perceive as overzealous will often be met with a backlash. Most recently, we’ve seen Riot and ArenaNet throw their own employees to the wolves because they dared to speak like human beings rather than running pitch-perfect PR at all times.

Skynet gets name-dropped a couple of times, but what strikes me about this particular robot takeover is that it really is acting in a sort of best-interest scenario, going beyond gamers yelling in games and demanding that entire businesses restructure. What is it that makes Ami more of a benevolent program than Skynet?

So often “intelligence” is what defines an AI. The clue’s in the name, after all. But historically, that intelligence has been defined in the narrowest of ways, with a scorn for emotion. But as Mary Wollstonecraft once wrote so beautifully, “We reason deeply when we forcibly feel.” Over a century of feminist philosophy has been at pains to bear her out on this. Reason and emotion are not opposites; they inform one another, powerfully. Ami is meant to be a kind of AE, an artificial empath — although that’s also a bit of a narrow reading. Sapience contains multitudes, after all. 

In the climax of the story, we learn just how deep Alexandra’s pain is. She had hoped to create in Ami a being incapable of the frailties she saw in herself. It’s implied that Alexandra is deeply depressed and even traumatized. She’s violently angry at Ami’s irrational, emotional side. She wanted Ami to be free of these things. She associated emotion with weakness. But Ami shows her how the things Alexandra hates most about herself might actually be strengths.

What sets Ami apart is that, unlike so many other AIs in fiction that aspire only to slaughter all humans (which, I’d argue, is more a reflection of our guilt about our own priorities), Ami comes into her power and wants to use it to alleviate the suffering she sees around her. Like any sapient being, she’s not perfect. You could argue she might be too mean or too zealous. Her methods are certainly forthright and invite dangers of their own. You can wrestle with that. But her motives stem from a powerful empathic sense, from her sense of justice and her desire to realize it in the world.