An AI developed by researchers at MIT can spot signs of depression simply by analysing human speech.

The use of artificial intelligence in spotting early signs of depression isn’t new — last year saw a similar trial succeed by monitoring brain activity — but by analysing human speech, researchers should be able to identify depression more readily.
In a paper being presented at the Interspeech Conference, the researchers detail how a neural-network model unleashed on raw text and audio data from interviews can discover speech patterns indicative of depression.
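The paper itself details the architecture; purely as a hedged sketch, a minimal sequence model of this kind — feeding per-response features from the interviews through a recurrent network to a binary output — might look like the following. The layer sizes, feature dimensions, and class name are illustrative assumptions, not taken from the researchers' code:

```python
import torch
import torch.nn as nn

class DepressionDetector(nn.Module):
    """Minimal sketch of a sequence model over interview responses.
    Each timestep is a feature vector for one question-answer turn
    (e.g. text embeddings concatenated with audio features).
    All dimensions are illustrative assumptions, not the paper's."""

    def __init__(self, feature_dim=512, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 1)  # depressed / not depressed

    def forward(self, x):
        # x: (batch, num_turns, feature_dim)
        _, (h_n, _) = self.lstm(x)                        # final hidden state
        return torch.sigmoid(self.classifier(h_n[-1]))   # probability of depression

# Example: a batch of 4 interviews, each with 30 question-answer turns.
model = DepressionDetector()
features = torch.randn(4, 30, 512)
print(model(features).shape)  # torch.Size([4, 1])
```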
“The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech,” says first author Tuka Alhanai, a researcher in the Computer Science and Artificial Intelligence Laboratory.
The research team says the model can accurately predict whether an individual is depressed, without needing any other information about the questions asked or the answers given.
“If you want to deploy depression-detection models in a scalable way, you want to minimize the number of constraints you have on the data you’re using. You want to deploy it in any regular conversation and have the model pick up, from the natural interaction, the state of the individual,” said Alhanai.
The researchers’ model was trained and tested on a dataset of 142 interactions from audio, text, and video interviews of patients with mental-health issues. Each subject was scored for depression on a scale from 0 to 27, using a personal health questionnaire. Scores from 10 to 14 were considered moderate and scores from 15 to 19 were considered depressed, while everyone below that threshold was considered not depressed. Out of all the subjects in the dataset, 20% were labelled as depressed.
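To make the labelling concrete, a minimal sketch of that scheme might look like this, assuming a hard cutoff of 15 for the depressed label (the function name and cutoff handling are illustrative, not taken from the researchers' code):

```python
def label_subject(phq_score: int) -> str:
    """Map a personal-health-questionnaire score (0-27) to a binary
    depression label. The cutoff of 15 is an assumption based on the
    article's description, not the researchers' published code."""
    if not 0 <= phq_score <= 27:
        raise ValueError("score must be between 0 and 27")
    return "depressed" if phq_score >= 15 else "not depressed"

# Example: a subject scoring 12 ("moderate") falls below the cutoff.
print(label_subject(12))  # not depressed
print(label_subject(17))  # depressed
```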
A key insight from the research was that the model needed much more data to predict depression from audio than from text. With text, the model accurately detected depression using an average of seven question-answer sequences, whereas with audio it needed around 30.
“That implies that the patterns in words people use that are predictive of depression happen in shorter time span in text than in audio,” Alhanai added.
It’s hoped the method could be developed into a tool that detects signs of depression in natural conversation, such as a mobile app that monitors a user’s text and voice for mental distress and sends alerts. That, however, raises ethical concerns about what messaging-platform operators might do with such data, and about the invasion of a user’s privacy, something MIT’s research doesn’t really touch upon.