
Facebook is using AI to spot users with suicidal thoughts and send them help

The tool was tested earlier this year and is now rolling out to more countries

Facebook is using artificial intelligence to scan users’ posts for signs they’re having suicidal thoughts. When it finds someone who could be in danger, the company flags the post to human moderators, who respond by sending the user mental health resources or, in more urgent cases, by contacting first responders who can try to find the individual.

The social network has been testing the tool for months in the US, but is now rolling out the program to other countries. The tool won’t be active in any European Union nations, where data protection laws prevent companies from profiling users in this way.

In a Facebook post, company CEO Mark Zuckerberg said he hoped the tool would remind people that AI is “helping save people’s lives today.” He added that in the last month alone, the software had helped Facebook flag cases to first responders more than 100 times. “If we can use AI to help people be there for their family and friends, that’s an important and positive step forward,” wrote Zuckerberg.

The AI looks for comments like “are you ok?” and “can I help?”

Despite this emphasis on the power of AI, Facebook isn’t providing many details on how the tool actually judges who is in danger. The company says the program has been trained on posts and messages flagged by human users in the past, and looks for telltale signs, like comments asking “are you ok?” or “can I help?” The technology also examines live streams, identifying parts of a video that attract more than the usual number of comments, reactions, or user reports. It’s the human moderators who will do the crucial work of assessing each case the AI flags and responding.
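Facebook hasn’t published how its system works, but the behavior described above maps onto a familiar pattern: a text classifier trained on posts that human reviewers previously escalated, plus a simple spike detector for live-stream engagement. The sketch below is purely illustrative and is not Facebook’s system; the scikit-learn pipeline, the toy training data, the risk_score and live_stream_spike helpers, and the 3x spike threshold are all assumptions made for demonstration.

# Illustrative sketch only -- NOT Facebook's actual system.
# Assumes a labelled history of posts (with their comments) that human
# reviewers either escalated or cleared, as the article describes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: post text concatenated with its comments,
# labelled 1 if reviewers escalated it, 0 otherwise.
train_texts = [
    "i can't do this anymore ... are you ok? can I help?",   # escalated
    "great game last night, see everyone at the barbecue",   # not escalated
]
train_labels = [1, 0]

# After training on enough flagged examples, phrases like "are you ok?"
# in the comments become strong signals for the classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
classifier.fit(train_texts, train_labels)

def risk_score(post_text, comments):
    """Probability that a post resembles previously escalated posts."""
    combined = post_text + " " + " ".join(comments)
    return classifier.predict_proba([combined])[0][1]

def live_stream_spike(counts_per_minute, factor=3.0):
    """Return the minutes of a live stream where comments, reactions,
    or reports exceed `factor` times the stream's average rate
    (the threshold is an assumption, not a published figure)."""
    baseline = sum(counts_per_minute) / max(len(counts_per_minute), 1)
    return [i for i, c in enumerate(counts_per_minute) if c > factor * baseline]

# Posts or stream segments scoring above some cutoff would then be queued
# for human moderators, who make the actual decision to intervene.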

Although this human element should not be overlooked, research suggests AI can be a useful tool for identifying mental health problems. One recent study used machine learning to predict whether individuals would attempt suicide within the next two years with 80 to 90 percent accuracy. However, the research only examined data from people who had been admitted to a hospital after self-harming, and wide-scale studies on individuals more representative of the general population have yet to be published.

Some may also be worried about the privacy implications of Facebook — a company that has previously worked with surveillance agencies like the NSA — examining user data to make such sensitive judgments. The company’s chief security officer Alex Stamos addressed these concerns on Twitter, saying that the “creepy/scary/malicious use of AI will be a risk forever,” which was why it was important to weigh “data use versus utility.”

However, TechCrunch writer Josh Constine noted that he’d asked Facebook how the company would prevent misuse of this AI system and was given no response. We’ve reached out to the company for more information.