    Facebook leverages artificial intelligence for suicide prevention

    As vain and manufactured as our online personas can be, Facebook remains a popular avenue for venting about our day-to-day struggles. Facebook recognizes this, and is now working on new ways to help troubled users through artificial intelligence and pattern recognition, in addition to expanding its suicide prevention tools.

    The new tools are similar to what Facebook launched back in 2015, which allows friends to flag a troubling image or status post. Now, this feature is available on Facebook Live — with the goal of connecting a user with a mental health expert in real time. If Facebook believes a reported Live streamer may need help, that user will receive notifications for suicide prevention resources while they’re still on the air. The person who reported the video will also receive resources to reach out and help their friend personally, if they wish to identify themselves.

    Facebook is partnering with organizations like the National Suicide Prevention Lifeline, the National Eating Disorder Association, and the Crisis Text Line so when users’ posts are flagged and they opt to speak to someone, they can connect immediately via Messenger.

    Using data from reported posts, Facebook says it will use its AI technology to spot patterns among flagged items, identifying posts that suggest a user may be suicidal. “Our Community Operations team will review these posts and, if appropriate, provide resources to the person who posted the content, even if someone on Facebook has not reported it yet,” Facebook wrote in a blog post.
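    Facebook has not published how its system works, but the idea of learning patterns from reported posts can be illustrated with a deliberately simplified sketch: score each word by how much more often it appears in flagged posts than in ordinary ones, then rate new posts against those scores. All names and data below are hypothetical; a production system would use far more sophisticated models and human review.

    ```python
    # Toy illustration only — not Facebook's actual method.
    # Learn which words are disproportionately common in flagged posts,
    # then score unseen posts against those learned word scores.
    from collections import Counter

    def tokenize(text):
        return [w.strip(".,!?").lower() for w in text.split()]

    def train_flag_scores(flagged_posts, normal_posts):
        """Smoothed per-word score: fraction of occurrences seen in flagged posts."""
        flagged = Counter(w for p in flagged_posts for w in tokenize(p))
        normal = Counter(w for p in normal_posts for w in tokenize(p))
        return {
            word: (f + 1) / (f + normal.get(word, 0) + 2)  # add-one smoothing
            for word, f in flagged.items()
        }

    def post_risk(post, scores, default=0.5):
        """Average word score for a post; unknown words get a neutral default."""
        words = tokenize(post)
        if not words:
            return 0.0
        return sum(scores.get(w, default) for w in words) / len(words)

    # Hypothetical training data standing in for reported vs. ordinary posts.
    flagged = ["i feel so alone and hopeless", "nothing matters anymore"]
    normal = ["great day at the beach", "lunch with friends was fun"]
    scores = train_flag_scores(flagged, normal)
    risky = post_risk("i feel hopeless", scores)
    safe = post_risk("fun day with friends", scores)
    ```

    In this sketch a post whose words overlap with previously flagged content scores higher than one that does not, which is the basic mechanism behind surfacing posts "even if someone on Facebook has not reported it yet."
    
    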

    In his recent mission statement update, CEO Mark Zuckerberg acknowledged the need to detect signs of suicidal users to offer help before it’s too late. “There have been terribly tragic events -- like suicides, some live streamed -- that perhaps could have been prevented if someone had realized what was happening and reported them sooner,” he wrote. “To prevent harm, we can build social infrastructure to help our community identify problems before they happen.”

    The new tools are currently being tested in the United States. No timeline was given for future rollouts.