Facebook is teaching its AI to find people at risk of suicide
In recent months, Facebook has faced a backlash from its own co-founders and former staff members. Sean Parker said he, alongside Mark Zuckerberg, knowingly created the social network to exploit a “vulnerability in human psychology”, and voiced concerns about what it’s doing to our children’s brains. Meanwhile, the inventor of the Like button, Justin Rosenstein, admitted regret for helping to make people obsessed with social media.
In a bid to undo some of this reputational damage, and genuinely help its users, Facebook has announced plans to expand its suicide prevention tools.
It’s not an entirely new initiative – suicide-prevention tools have been part of Facebook for more than a decade – but the company is stepping up its game with the use of AI. Building on a recent trial in the United States, Facebook is using pattern learning on posts previously flagged for suicide, in the hope the site can step in even without a manual report. It’s clearly a difficult line to tread with regard to privacy versus safety, so Facebook is taking a couple of approaches.
The first takes the form of a nudge for users’ friends. If the AI detects a pattern of text that matches past suicide reports, the option to report the post for “suicide or self-injury” will appear more prominently alongside it, making it easier for friends to intervene.
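Facebook hasn’t published how its model works, but pattern-matching of this kind can be sketched as a simple text classifier trained on previously flagged posts. The sketch below is purely illustrative – the example posts, function names and scoring approach are all hypothetical, and a production system would be far more sophisticated.

```python
from collections import Counter
import math

# Hypothetical training data: posts previously flagged for
# "suicide or self-injury" (class 1) versus ordinary posts (class 0).
FLAGGED = ["i can't go on anymore", "nobody would miss me"]
ORDINARY = ["great game last night", "look at this cute dog"]

def tokens(text):
    return text.lower().split()

def train(flagged, ordinary):
    """Count word frequencies per class (a tiny naive-Bayes-style model)."""
    fc = Counter(w for post in flagged for w in tokens(post))
    oc = Counter(w for post in ordinary for w in tokens(post))
    return fc, oc

def risk_score(post, flagged_counts, ordinary_counts):
    """Log-odds that a post resembles previously flagged posts.
    A higher score would surface the report option more prominently."""
    score = 0.0
    for w in tokens(post):
        # Laplace smoothing so unseen words don't zero out the score.
        pf = (flagged_counts[w] + 1) / (sum(flagged_counts.values()) + 2)
        po = (ordinary_counts[w] + 1) / (sum(ordinary_counts.values()) + 2)
        score += math.log(pf / po)
    return score

fc, oc = train(FLAGGED, ORDINARY)
print(risk_score("i can't go on", fc, oc) > risk_score("cute dog pics", fc, oc))  # → True
```

The key idea is that the system learns from human reports rather than a fixed keyword list, so the signal improves as more posts are flagged.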
It’s also been testing a feature that would automatically flag these posts for review by the company’s community operations team. If the AI’s suspicions are confirmed by a human eye, the site will provide resources for the person even without intervention from their friends. In a post on his personal Facebook page, Zuckerberg explained: “Starting today we’re upgrading our AI tools to identify when someone is expressing thoughts about suicide on Facebook so we can help get them the support they need quickly. In the last month alone, these AI tools have helped us connect with first responders quickly more than 100 times.
“With all the fear about how AI may be harmful in the future, it’s good to remind ourselves how AI is actually helping save people’s lives today.”
Suicide is one of the leading causes of death among young people, and Facebook said it is working closely with Save.org, the National Suicide Prevention Lifeline ‘1-800-273-TALK (8255)’, Forefront Suicide Prevention, and with first responders to continually improve the software.
How at-risk people will react to an AI intervention is an open question; it could simply push behavioural warning signs away from Facebook’s prying eyes. The balance of privacy versus urgency to act has clearly been an internal source of debate. Facebook product manager Vanessa Callison-Burch told the BBC the company has to balance effective responses against being too invasive – by directly informing friends and family, say. “We’re sensitive to privacy and I think we don’t always know the personal dynamics between people and their friends in that way, so we’re trying to do something that offers support and options,” she explained.
While the AI hasn’t previously been integrated into Facebook’s more real-time services, the company has tried to make both Facebook Live and Messenger more helpful to people in crisis too. Existing tools to reach out or report have been built into the Facebook Live broadcasting service, and Messenger now has the option for users to connect with support services in real-time, including Crisis Text Line, the National Eating Disorders Association and the National Suicide Prevention Lifeline.
In a blog post announcing the recent trial, the company explained its motivations: chiefly that it has the power to make a difference. “Experts say that one of the best ways to prevent suicide is for those in distress to hear from people who care about them,” the blog post reads. “Facebook is in a unique position – through friendships on the site – to help connect a person in distress with people who can support them.”
That may very well be true, but it would be foolish to ignore the wider climate in which these updates have arrived. This year has already seen a number of reported broadcasts of suicide on Facebook Live – and while it’s possible these additional tools would have done nothing to prevent them, it’s not a good look for the social network to be seen ignoring the problem.
It’s not the first time Facebook has used artificial intelligence to try to make its platform a more welcoming place: back in April 2016, the site announced it was using AI to describe the contents of images to its visually impaired users. With three labs dedicated to AI research, and almost two billion users for artificial intelligence to learn from, this is unlikely to be the last time Facebook tries to crack a problem with machine learning.
It’s also not the first time AI has been used in this way in an attempt to identify people at risk. Using a neural decoder previously trained to identify emotions, as well as complex thoughts, researchers from the University of Pittsburgh and Carnegie Mellon University recently developed an algorithm that can spot signs of suicidal ideation and behaviour.
The researchers applied their machine-learning algorithm to brain scans, and the software correctly identified, with 91% accuracy, whether a person was in the at-risk group, based on changes in their brain activation patterns.
A follow-up test saw the AI being trained specifically on the brain scans of those in the group linked with suicidal thoughts to see if the software could identify those who had previously attempted suicide. It was correct in 94% of cases.
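For clarity, the accuracy figures reported above are simply the fraction of cases the classifier got right. A minimal sketch, using made-up labels purely for illustration (1 for the at-risk group, 0 for controls):

```python
def accuracy(predicted, actual):
    """Fraction of cases where the classifier's prediction matched the truth."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical labels: 9 of 10 predictions match, giving 90% accuracy.
actual    = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]
predicted = [1, 1, 0, 0, 0, 0, 0, 1, 1, 0]
print(accuracy(predicted, actual))  # → 0.9
```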
If you or a loved one has been affected by the issues raised in this story, you can get support and advice in our online help and support guide. You can also reach the Samaritans for free 24 hours a day on 116 123, or the support group Campaign Against Living Miserably (CALM), specifically for young men, on 0800 585858.