
The empathy layer

Can an app that lets strangers — and bots — become amateur therapists create a safer internet?

Illustrations by Peter Steineck

In January 2016, police in Blacksburg, Virginia, began looking into the disappearance of a 13-year-old girl named Nicole Lovell. Her parents had discovered her bedroom door barricaded with a dresser, her window open. Lovell was the victim of frequent bullying, both at school and online, and her parents thought she might have run away.

On social media, Lovell posted openly about her anguish. On Kik, a messaging app, Lovell told one contact, “Yes, I’m getting ready to kill myself.” In another exchange, she grabbed a screenshot from a boy she liked who had changed his screen name to “Nicole is ugly as fuck.” She broadcast these private interactions to the wider world by posting them on her Instagram, where she also posted a photo of herself looking sad with the caption “Nobody cares about me.”

Starved for affection among her peers, Lovell sought it out online. Police found a trail of texts on Kik between Lovell and a user named Dr. Tombstone. Kik allows users to remain anonymous, and over the course of a few months, the conversation turned romantic. Tombstone’s real identity was David Eisenhauer, a freshman at Virginia Tech, five years older than Lovell. In a horrific turn of events, authorities say Eisenhauer lured Lovell to meet him, then murdered her.

“Everyone felt we had to do more, an increased sense of responsibility.”

According to Kik employees of the time, the tragedy was a moment of reckoning for the platform. At the beginning of 2016, the app laid claim to 200 million users, including 40 percent of teenagers in the US. Kik’s terms of service stated that anyone under the age of 18 needed a parent’s permission to use the app, but these rules were easily ignored. Because the app allowed users to remain anonymous, a wave of negative press around Lovell’s murder painted Kik as a playground for predators. “It was, for the entire company, a shock,” says Yuriy Blokhin, an early Kik employee who recently left the company. “Everyone felt we had to do more, an increased sense of responsibility.”

Executives at Kik wanted a system to identify, protect, and offer resources to its most vulnerable users. But the company had no way to find them, and no system in place for administering care even if it did. Through its investors, Kik was put in touch with a small New York City startup named Koko. Koko had created an iPhone app that let users post entries about their stresses, fears, and sorrows. Other users would weigh in with suggestions for how to rethink the problem, a very basic form of cognitive behavioral therapy. It was a peer-to-peer network for a limited form of mental health care, and, according to a clinical trial and beta users, it had shown very positive results. The two teams partnered with a simple goal: find a way to bring the support and care found on Koko to Kik users in need.

But as the two companies talked, a more ambitious idea emerged. What if you could combine the emotional intelligence of Koko’s crowdsourced network with the scale of a massive social network? Was there a way to distribute the mental health resources of Koko more broadly, not just in a single app, but to anywhere people gathered online to socialize and share their feelings? Over the last year the team at Koko has been building a system that would do just that, and in the process, create an empathy layer for the internet.


In 1999 Robert Morris, future co-founder of Koko, was a Princeton psychology major who got good grades but struggled to find direction — or a thesis advisor. “They didn't know what to do with me,” Morris told me recently. “I had a bunch of vague and strange research ideas and I would show up to their office with a bunch of bizarre gadgets I had hacked together: microphones, sensors, lots of wires.”

Morris finally found a home at the MIT Media Lab. A budding coder, Morris spent much of his time on a site called Stack Overflow, a critical resource for programmers looking for help on thorny problems. Morris was blown away by the community’s ability to help him on demand and free of charge and wondered if that crowdsourced model could be applied to other personal challenges. “I struggled with depression on and off for much of my life, but my early time at MIT was especially difficult,” he recalls. “I liked StackOverflow, but I needed something to help me 'debug' my brain, not just my code.” For his thesis project, he set out to build just that.

Based on the peer-to-peer model of Stack Overflow, Morris’ MIT thesis project, named Panoply, offered two basic options: submit a post about a negative feeling or respond to one. To quickly build and test the platform, Morris needed users. So he turned to Mechanical Turk, an online marketplace where anyone can crowdsource simple tasks for a small payment.

Morris taught MTurk workers a few basic cognitive behavioral techniques to respond to posts: how to empathize with a tough situation, how to recognize cognitive distortions that amplify life’s troubles, and how to reframe a user’s thinking to provide a more optimistic alternative. The only quality control Morris put in place was a basic test of reading and writing comprehension. For each completed task, the MTurk workers were paid a few cents.

Using an online ad for a stress-reduction study, Morris recruited a few hundred volunteers to fully test the system. Like the MTurk workers, the subjects were given some brief training and set loose to post their issues and reframe the issues of others. This random assemblage of people was about as far as you could get from trained, expensive therapists. But in a clinical trial conducted as part of his dissertation, Morris found that users who spent two months with the Panoply system reported feeling less stressed, less depressed, and more resilient than the control group. And the most effective help was given not by the paid MTurk workers, but by the unpaid volunteers who were themselves part of the experiment.

It was a single study and has not yet been replicated, but it gave Morris confidence that he was onto something big. And then a stranger came calling. “A week after I defended my dissertation, I got several manic emails out of the blue from some guy named Fraser,” Morris said. “It was immediately apparent that he had an incredibly deep understanding of the problem.”

At the same moment that Morris was building Panoply at MIT, Fraser Kelton and Kareem Kouddous, a pair of tech entrepreneurs, had been pursuing the same idea. The pair had hacked together their own version of a peer-to-peer system for therapy. They recruited participants off Twitter and put them into WhatsApp groups, then had one group teach the other the basics of cognitive behavioral therapy. “At the end of testing, 100 percent of helpers thanked us for the opportunity to participate and asked if they could keep doing it,” said Kelton. “When we asked why, they all said something along the lines of ‘for the first time since I finished therapy, I found a way to put 5 or 10 minutes a day toward practicing these techniques.’”

A month later, Kelton came across Morris’ work and emailed him immediately. “This is embarrassing, but I think I emailed him two or three times that night,” says Kelton. “We thought we had a clever idea, but he had taken it and jumped miles ahead of where our thinking was, run a clinical trial, gotten results, and defended a dissertation.” Within a few weeks Kelton, Kouddous, and Morris had mocked up a wireframe of an app that became the blueprint for Koko. They chose the name because the service is meant to help users see their problems from a different perspective: Koko backwards is “ok ok.”

Kelton, who knew the startup scene, approached investors. “It seemed to us that there was a possibility that a peer to peer network in this space was kind of a perfect application,” says Brad Burnham, a managing partner at Union Square Ventures. The firm had previously invested in a number of startups that relied on networks of highly engaged users: Twitter, Tumblr, Foursquare. But Burnham had never seen anything quite like Koko before. When Koko users added value to the network by rethinking problems, they also provided value to themselves, by practicing the core techniques of cognitive behavioral therapy. “By helping others, they were helping themselves, and that seemed like a great synergy,” said Burnham. In January of 2015, Union Square Ventures, along with MIT’s Joi Ito, invested $1 million in Koko. Less than a month later, the company launched its iOS app in beta.


The first time Zelig used Koko, she was sitting in a parking lot waiting to pick up one of her kids from a summer program. She had downloaded the app in search of emotional relief. Her son, an intelligent and outgoing boy with Asperger’s syndrome, seemed to have no place of acceptance outside of home, and was facing the isolation so often present in the lives of teens on the autism spectrum. Her younger daughter had just been diagnosed with obsessive-compulsive disorder.

“I have a special needs kid and high needs kid. My life is not typical,” Zelig explained in a phone call. “It’s pretty stressful and it’s always on. You make attempts to do your best and things don’t work, which is really scary.” She asked that we only use her Koko screen name in this story to preserve her family’s privacy. “My kids were struggling mightily, and there just wasn’t a way for me to see anything that could possibly make it better.”

The Koko app offered Zelig two choices. She could write a post laying out her troubles and share it with everyone who opened the app. They would give her advice on how to rethink her problems — not offer a solution, but rather suggest a more optimistic spin on the way she saw the world. But Zelig didn’t feel ready to open up about her own struggles. “It was hard for me to take the big things going on in my life and make them the size of a tweet, to get to the core. It was hard to turn loose those emotions.”

Instead, Zelig started reading through posts from other users. The Koko app starts users off with a short tutorial on “rethinking.” The app explains that rethinking isn’t about solving problems, but offering a more optimistic take. It uses memes and cartoons to illustrate the idea: if you choose the right reframe, a cute puppy offers his paw for a high-five. The app walks new users through posts and potential reframes, indicating which rethinks are good and which aren’t. The tutorial can be completed in as little as five minutes.

Once users finish the tutorial, they can scroll through live posts on the site. Despite the minimal training, the issues they are confronted with can be quite serious: an individual who is afraid to tell her family that she’s taking anti-depressants because they might think she’s crazy; a user stressed from school who believes “no one actually likes the real me, and if they see it, they will hate me”; a user with an abusive boyfriend who has come to feel “I am a failure and worth being yelled at.” I walked a friend through the tutorial recently, and they were shocked by how quickly Koko throws you into the deep end of human despair.

Koko lets you write anything you want for a rethink, but also offers simple prompts: “This could turn out better than you think because…,” “A more balanced take on this could be…,” etc. The company screens both the posts and rethinks before they become public, attempting to direct certain users to critical care and weed trolls out of the system. Originally, this was accomplished with human moderators, but increasingly, the company is turning to AI.  

Accepting and offering rethinks is meant to help users break out of bad mental habits, the cycles of negative thought that can perpetuate anxiety and depression. Over the next few months, Zelig found herself offering rethinks to other Koko users almost every day. “Having it in your pocket is really good. All of a sudden it would hit me what I needed to say in the reframe, so I would pull my car over, or stand in the produce aisle.”

In the process of giving advice Zelig felt, almost immediately, a sense of relief and control. She began to recognize her own dark moods as variations on the problems she was helping others with. Zelig says the peculiar power of Koko is that by helping others, users are able to help themselves. She eventually got around to sharing her issues, but always felt that “I was more helped by the reframing action than I was by the posting. It trained me to be able to see my world that way.”


The last few years have seen an explosion of startups and mobile apps offering users mental health care on demand. Some, like MoodKit and Anxiety Coach, offer self-guided cognitive behavioral therapy. Others, like Pacifica, mix self-guided lessons with online support groups where users can chat with one another. Apps like Talkspace use the smartphone as a platform for connecting patients with professional therapists who treat them through calls and text messages.

For the moment, Koko is one of just a few companies built primarily around a peer-to-peer model. Its best analog might be companies like Airbnb or Lyft. Why pay for a hotel room or a black car when a spare apartment or a neighbor’s car is just as good? Why pay for therapy when the advice of strangers has proven to be helpful, and free?

Studies have found that cognitive behavioral therapy can be as effective at treating depression and anxiety as prescription drugs. Since the 1980s, people have been practicing self-guided cognitive behavioral therapy through workbooks, CD-ROMs, and web portals. But left to their own devices, most people either don’t finish the courses or stop practicing fairly quickly.

Koko is still a tiny company, staffed by the three co-founders and one full-time employee, all based out of New York City. To date, over 230,000 people have used Koko, and more than 26 million messages have been sent through the app over the last six months. Many, like Zelig, have used it on a daily basis for more than a year. But like so many mobile apps these days, Koko has struggled to attract a large following.

The Koko team always knew it would be difficult to charge users for the app, or to make money advertising to a relatively small number of anonymous users. It was at this critical juncture that the team from Kik came calling. After the murder of Nicole Lovell, Kik reached out to its investors at Union Square Ventures for advice. Burnham connected Kik with Koko, setting in motion an entirely new direction for the young company.

When users sign up for Kik, the first contact added to their address book is a chatbot. It answers questions about the service, tells jokes, and posts updates about new features. “A few months before meeting with Koko, we noticed something interesting happening with the Kik bot,” said Yuriy Blokhin, the former Kik engineer who helped forge the partnership with Koko. “People were not only talking to it the way it was meant to be, as a brand ambassador, but also sometimes people were mentioning they were depressed, concerned about their parents getting a divorce, or being unpopular at school.”

Kik didn’t know how to respond to these kinds of emotional confessions, but Koko did. It had millions of posts, carefully labeled by workers from Mechanical Turk according to the type of problem they represented. It used that database to train an artificial intelligence that could respond to posts sent to a chatbot. If the bot classified a message as critical, meaning the user might be a danger to themselves or others, it would connect the user with a service like Crisis Text Line; if the issue was manageable, the bot would pass the person on to Koko users; if it was a troll, the bot would hide the post. This is the same AI approach Koko now uses to classify posts on its peer-to-peer network.
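That triage step is, at bottom, a three-way classifier sitting in front of a few hand-off paths. A minimal sketch of the flow in Python might look like the following; the keyword rules, labels, and handler strings are illustrative assumptions, since Koko has not published its model:

```python
# Illustrative sketch of the Kokobot-style triage flow described above.
# The keyword rules and handler strings are assumptions for demonstration,
# standing in for a model trained on Koko's labeled post database.

CRISIS_RESOURCE = "Crisis Text Line"  # hand-off service named in the article


def classify(message: str) -> str:
    """Return one of three labels: 'critical', 'manageable', or 'troll'.

    The keyword checks below are placeholders for a trained classifier.
    """
    lowered = message.lower()
    if "kill myself" in lowered or "hurt myself" in lowered:
        return "critical"
    if lowered.startswith("lol") or "u r " in lowered:
        return "troll"
    return "manageable"


def route(message: str) -> str:
    """Send each incoming message down one of the three paths."""
    label = classify(message)
    if label == "critical":
        return f"connect user with {CRISIS_RESOURCE}"
    if label == "manageable":
        return "queue post for rethinks from Koko's peer network"
    return "hide post"  # trolls never reach the peer network


print(route("I'm so stressed about my exams"))
# -> queue post for rethinks from Koko's peer network
```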

Once that approach proved successful, Koko went one step further. If a user posted about a stress Koko had a highly rated response for — a sick family member, a difficult test at school, a spat with a significant other — the chatbot would automatically offer up that rethink. The AI was now acting as a node in the peer-to-peer network.
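In effect, the bot was doing retrieval: match a new post to a similar past post, then reply with the rethink peers had rated most highly. A hypothetical sketch, assuming a simple bag-of-words similarity search in place of whatever matching method Koko actually uses:

```python
from collections import Counter
from math import sqrt

# Hypothetical store mapping past posts to their top-rated peer rethinks.
# Koko's real corpus and matching method are not public.
RATED_RETHINKS = {
    "so worried about my sick grandmother": (
        "Worrying this much shows how deeply you care, and caring is not failing."
    ),
    "i am going to fail my exam tomorrow": (
        "One test does not define you, and you may know more than you think."
    ),
}


def _vector(text: str) -> Counter:
    """Crude bag-of-words vector; a real system would use a learned embedding."""
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[word] * b[word] for word in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def auto_rethink(post: str, threshold: float = 0.4):
    """Reply with a stored top-rated rethink if a past post is similar enough."""
    vec = _vector(post)
    best = max(RATED_RETHINKS, key=lambda past: _cosine(vec, _vector(past)))
    if _cosine(vec, _vector(best)) >= threshold:
        return RATED_RETHINKS[best]
    return None  # no good match: fall back to human peers


print(auto_rethink("really worried about my sick grandmother"))
```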

Beginning in August 2016, any user on Kik could share their stress with the Kokobot. Most received a reply in just a few minutes. Working with Kik made Koko realize how big the business opportunity was. “Do a search on Twitter, Reddit, Tumblr, any social network, and you will find a cohort of users reaching out into the ether with their problems,” said Kelton. The team realized that if they could train an AI to identify and respond to users sharing emotional stress, they might also be able to train algorithms to automatically detect users who were at risk, even if they hadn’t reached out. Koko was transforming itself into an intervention tool, scanning platforms and stepping in of its own volition. Koko hopes to provide these tools to online communities for free, using the feedback to train an AI whose services it can one day sell to digital assistants like Siri and Alexa.
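Proactive detection extends the same idea from answering direct messages to scanning a public feed and deciding when unsolicited outreach is justified. A sketch under assumed names; the feed format, the trained is_at_risk model, and the threshold are all hypothetical:

```python
from typing import Callable, Iterable

# Sketch of proactive feed scanning. The post format, the is_at_risk model,
# and the 0.9 threshold are assumptions, not Koko's published approach.


def scan_feed(
    posts: Iterable[dict],
    is_at_risk: Callable[[str], float],
    threshold: float = 0.9,  # high bar, since unsolicited outreach is intrusive
) -> list[dict]:
    """Flag public posts whose authors may be at risk.

    `posts` are dicts like {"user": ..., "text": ...}; `is_at_risk` is an
    assumed trained classifier returning a probability for a piece of text.
    Flagged posts would trigger a gentle, opt-in offer of help rather than
    an automatic reply.
    """
    return [post for post in posts if is_at_risk(post["text"]) >= threshold]
```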

The move into detection and intervention, however, has been complicated. This past January, the team set up the Kokobot on two Reddit forums, r/depression and r/SuicideWatch. It scanned incoming posts and messaged several users offering help.

“I feel deeply disturbed that they would use a bot to do this.” 

The response wasn’t what Koko engineers had expected: the community was outraged.

“I feel deeply disturbed that they would use a bot to do this,” wrote one user. “Disgusting that assholes would try and take advantage of people,” wrote another. The moderator of the two forums set up a warning advising users to ignore Koko’s chatbot. “I have to say that the technology itself looks like an interesting idea,” the moderator wrote. “But if it's in the hands of people who behave in this way, that is incredibly disturbing.” The Verge reached out to both moderators and users who left angry comments about Koko, but did not hear back.

The Koko team acknowledged it made a mistake by allowing its chatbot to send messages on Reddit without warning, and without educating users and moderators about who it was and what its goal was. But Kelton believes that the feedback from users who did interact with the bot on Reddit shows the system can do real good there. “One mod bent out of shape on how we handled the launch vs. many at-risk people helped in a way that they appreciated” was a trade-off Kelton could live with. “Helping mods understand and embrace the service is a containable problem, one that we’re already having good success with.”


In January 2017, top officials from the US military met with executives from Facebook, Google, and Apple at the Pentagon. The topic was suicide prevention in the age of social media. The federal government considers the subject a top priority, as suicide rates among veterans far exceed those of the general population. For the tech companies, the problem is wide-ranging. Among teenagers in the United States, most of whom spend six and a half hours a day with their smartphones and tablets, suicide is the second leading cause of death.

In attendance was Matthew Nock, a professor of psychology at Harvard and an expert in suicide prediction and prevention. When it comes to using technology for detection and intervention, “the consensus in the academic community is there is great potential promise here, but the jury is still out,” says Nock. “Personally I have seen a lot of interest in people using social media and the latest technologies to understand, predict, and prevent suicidal behavior. But so far many of the claims have outstripped the actual data.”

Despite those concerns, Nock is interested in what companies like Koko might offer. “We know that cognitive behavioral therapy is effective for treating people with clinical depression. There is not enough cognitive therapy to reach everyone who needs it.” Koko provides people with simple tools they can use to help themselves and others. “These people aren’t clinicians, they have been trained in the basics, but for scaling purposes, I think it’s what we can do right now.”

The scalability of tech makes it an alluring tool for mental health — but the business comes with unique risks. “Everyone wants to be the Uber of mental health,” says Stephen Schueller, an assistant professor at Northwestern University who specializes in behavioral intervention technologies. “The thing I worry about is, unless you have a way to make sure the drivers are behaving appropriately, it’s hard to make sure people are getting quality care. Psychotherapy is a lot more complicated than driving a car.”

Koko’s experience with Reddit wasn’t the first mishap to befall a company trying to scale mental health care, an industry traditionally built on heavily regulated, sensitive, one-on-one clinical relationships, into an online community. Those challenges were made apparent in the case of Talkspace, where therapists didn’t feel they were able to warn authorities about patients who may have been a danger to themselves or others. That led some therapists to abandon the platform. Samaritans, a 65-year-old organization aimed at helping those in emotional distress, released an app in 2014 called Samaritans Radar. It attempted to identify Twitter users in need of help and offer assistance. But due to the public nature of the interaction, the warnings ended up encouraging bullies and angering users who felt their privacy had been invaded.

The ethics of using artificial intelligence for this work has become a central question for the industry at large. “The potential demand for mental health is likely to always outstrip the professional resources,” says John Draper, project director at the National Suicide Prevention Lifeline. “There is increasingly a push to see what technology can do.” If AI can detect users at risk and engage them in emotionally intelligent conversations, should that be the first line of defense? “These are important ethical questions that we haven’t answered yet.”


In a recent manifesto on the state of Facebook, CEO Mark Zuckerberg noted that as people move online, society has seen a tremendous weakening of the traditional community ties that once provided mental and emotional support. To date, creating software that restores or reinforces those safeguards has been a reactive afterthought, not an overarching goal. Systems designed to foster clicks, likes, retweets, and shares have become global communities of unprecedented scale. But Zuckerberg was left to ask, “Are we building the world we all want?”

“There have been terribly tragic events -- like suicides, some live streamed -- that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more,” Zuckerberg wrote. “Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review.” In early March it was reported that Facebook had begun testing an AI system which scanned for vulnerable users and reached out to offer help.

The goal for Koko is the same, but distributed across any online community or social network. The hope is that its AI can reach vulnerable users, people like Nicole Lovell, who are posting cries for help online, searching for an empathic community. On a recent afternoon I opened the Koko app and spent an hour scrolling through a litany of angst: not having the money to finish school, feeling obsessed with an older married man, being overwhelmed at the prospect of caring for sick relatives who can no longer remember your name. Beneath each post, three or four users had suggested rethinks, blueprints for coping that others could learn from.

“What is the critical thing this person was dealing with? It’s an emotional, social puzzle.”

For people who are suffering, knowing that others are in pain, and that they can do something about it, is one way of healing themselves. “Something that caught me right away and kept me coming back to the app again and again was the amazing feeling of hope,” said Zelig, when I emailed her recently to ask a few questions about Koko. “That regardless of all the crap that seemed to be happening in my life, that I could still be of help to someone and could take a positive action.”

Zelig’s kids, like most teenagers, have become keenly interested in what keeps their mother occupied on her smartphone. “They see me typing away and want to know what I’m doing,” Zelig explained. “I’ll ask them, do you think this is a reframe? How would you do it? It was cool, because it’s a puzzle we solve together. What is the critical thing this person was dealing with? [It’s] an emotional, social puzzle.”

A year and a half after she downloaded the app, Zelig still uses it almost every day, but she doesn’t consider herself to be in a state of crisis anymore. She wasn’t sure how she felt about Koko using chatbots and AI to reach out to people who had never heard of the service. At first she told me that if a chatbot had approached her out of the blue, she would have ignored it. But she wrote back later to say that, if these technologies mean more people find their way into the Koko community, she’s in favor. “Life really had me and our family by the throat there for a while,” she told me. “Koko was part of what gave me the ability to see a way through to the other side.”
