Detecting depression early with AI
Can artificial intelligence work alongside mental health practitioners to detect and address depression and other mental illnesses earlier and more effectively? AI has already been applied to a range of health and wellness use cases with the potential to improve the lives of millions of people. Recent advances also suggest that AI can help individuals and doctors better manage depression and other mental illnesses and mitigate their adverse health impacts.
Depression is a key problem faced by people across the world, ranging from milder recurrent depressive mood states to deep-seated clinical depression. While our theoretical understanding of clinical depression, its causes and how to mitigate it has improved significantly, the condition continues to affect millions of people. Psychiatric conditions such as this have no vaccines or proven preventive medication, so much of the success in helping those with clinical depression hinges on early detection, continuous monitoring, evidence-based counselling and, in certain cases with a physician's prescription, medication.
Health Data
At the macro level, the cost of unmanaged clinical depression, especially across large populations, can be devastating. A 2015 study estimated that approximately 216 million people worldwide (about 3% of the global population) were affected by clinical depression, with onset typically in the 20s and 30s. A lack of concerted focus on mental health and well-being can lead to outcomes ranging from higher suicide and crime rates to reduced national productivity and high levels of substance abuse and addiction.
Today we have a huge repository of health data. By applying machine learning and deep learning algorithms to it, AI can help individuals and physicians detect early indications of the onset of depression, monitor the progress of patients on medical and other therapeutic options, and provide continuous support.
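As a rough illustration of what such a pipeline might look like, here is a minimal sketch that trains a screening model on synthetic patient features. The feature names, data and threshold logic are invented for this example and are not drawn from any clinical dataset.

```python
# Minimal sketch: turning routinely collected signals into a screening risk score.
# The features, labels and data below are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-patient features: sleep hours, activity level, message sentiment, clinic visits.
n = 500
X = rng.normal(size=(n, 4))
# Synthetic labels loosely tied to the first two features, standing in for clinician diagnoses.
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # probability-like risk score per patient
print("AUC on held-out synthetic data:", round(roc_auc_score(y_test, risk), 3))

# In practice, scores above a clinically validated threshold would be surfaced
# to a physician for review, never acted on automatically.
```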
Early Detection
By using AI to analyse speech patterns, the use of specific words in text communication and even facial expressions, we could improve the speed at which such cases are uncovered. To achieve this, doctors and machine learning experts need to work together to identify patterns that correlate strongly with clinical depression.
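To make the text side of this concrete, the following is a minimal sketch of how word-usage patterns might be turned into a screening signal, assuming clinician-labelled examples are available. The tiny corpus and labels below are invented purely for illustration.

```python
# Minimal sketch of flagging depressive language in text.
# The corpus and labels are made up for illustration only.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "I can't sleep and nothing feels worth doing anymore",
    "had a great run this morning, feeling energised",
    "I feel empty and tired all the time",
    "looking forward to the weekend trip with friends",
]
labels = [1, 0, 1, 0]  # 1 = clinician-labelled depressive language, 0 = not

# TF-IDF over unigrams and bigrams feeding a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

new_message = "everything feels pointless lately"
print(classifier.predict_proba([new_message])[0][1])  # probability of the depressive-language class
```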
Research led by Dr Fei-Fei Li of Stanford University found that facial and speech data, combined with algorithmic models, might make it possible to detect the onset of clinical depression. By combining facial expressions, voice intonation and spoken words, the researchers were able to identify whether an individual was suffering from depression with 80% accuracy.
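The Stanford work itself relies on sophisticated sequence models, but the general idea of combining modalities can be sketched with a simple late-fusion step; the weights and scores below are hypothetical.

```python
# Illustrative late-fusion sketch: combine per-modality scores (face, voice, text)
# into one estimate. The weights and scores are invented; published multimodal
# models are far more sophisticated than a weighted average.
import numpy as np

def fuse(face_score: float, voice_score: float, text_score: float,
         weights=(0.3, 0.3, 0.4)) -> float:
    """Weighted average of per-modality depression-risk scores in [0, 1]."""
    scores = np.array([face_score, voice_score, text_score])
    return float(np.dot(np.array(weights), scores))

# Hypothetical outputs from three separate single-modality classifiers.
print(fuse(face_score=0.72, voice_score=0.65, text_score=0.81))  # ~0.735
```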
Facebook, too, has begun piloting its own machine learning algorithm that scans posts for linguistic red flags indicative of depression. In early testing, the algorithm was reported to perform as well as the questionnaires currently used to screen for depression.
Therapist Interaction
Can AI inform therapists and help them do their jobs better? With mental illness cases high and growing, the limited pool of therapists could use AI interventions that ease their workload while improving health outcomes. AI could change the game in therapy by helping therapists pick up subtle signs in patients, such as voice intonation and facial expressions, that they might otherwise miss. It can also suggest effective treatment options to guide their decisions.
An interesting innovation in this area comes from Ginger, a technology company that combines a strong clinician network with machine learning to give patients the emotional support they need at the right time, offering 24/7 access to cognitive behaviour therapy (CBT) alongside mindfulness and resilience training.
On the back end, the AI uses each patient's progress to refine its capabilities, making it smarter and more scalable. It also helps Ginger match each user with a team of three emotional support coaches that it determines to be best suited to that individual.
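Ginger's actual matching algorithm is not public; a generic way to approach this kind of matching is to rank coaches by the similarity between a user's needs profile and each coach's speciality profile, as in the hypothetical sketch below.

```python
# Generic profile-matching illustration (not Ginger's actual logic): rank coaches
# by cosine similarity to the user's needs profile and take the top three.
import numpy as np

def top_matches(user_profile, coach_profiles, k=3):
    """Return indices of the k coaches whose profiles are most similar to the user's."""
    user = np.asarray(user_profile, dtype=float)
    coaches = np.asarray(coach_profiles, dtype=float)
    sims = coaches @ user / (np.linalg.norm(coaches, axis=1) * np.linalg.norm(user))
    return np.argsort(sims)[::-1][:k].tolist()

# Hypothetical profile dimensions: anxiety, low mood, sleep issues, work stress.
user = [0.9, 0.7, 0.2, 0.5]
coaches = [
    [0.8, 0.6, 0.1, 0.3],   # coach 0
    [0.1, 0.2, 0.9, 0.4],   # coach 1
    [0.7, 0.8, 0.3, 0.6],   # coach 2
    [0.2, 0.1, 0.2, 0.9],   # coach 3
]
print(top_matches(user, coaches))  # [0, 2, 3]
```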
Continuous Support
The stigma associated with mental illness often prevents us from uncovering who is suffering from it. A non-human companion that can offer human-like compassion could be a game changer in counselling patients with depression and supporting their rehabilitation. There are also practical advantages to AI playing that role: it is available around the clock at a much lower cost.
Woebot is an excellent example and a bellwether for what AI could accomplish in this use case. Created by Dr Alison Darcy, Woebot is a computer program that integrates with Facebook and aims to replicate the typical conversation a patient would have with a therapist. It asks about the patient's mood and thoughts and offers evidence-based CBT. Just as a therapist would, Woebot 'listens' actively during a conversation and learns about the patient so it can tailor its interactions to the individual's situation.
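Woebot's implementation is proprietary, but the shape of such a CBT-style check-in can be illustrated with a toy, rule-based turn; the cue words and responses below are invented and far simpler than a real conversational agent.

```python
# Toy sketch of a CBT-style check-in turn (not Woebot's actual implementation):
# look for cognitive-distortion cues and respond with a reframing prompt,
# otherwise ask a simple mood question.
DISTORTION_CUES = {
    "always": "all-or-nothing thinking",
    "never": "all-or-nothing thinking",
    "everyone": "overgeneralisation",
    "nobody": "overgeneralisation",
    "should": "'should' statements",
}

def check_in(user_message: str) -> str:
    words = user_message.lower().split()
    for cue, distortion in DISTORTION_CUES.items():
        if cue in words:
            return (f"It sounds like {distortion} might be at play. "
                    "What evidence supports that thought, and what evidence goes against it?")
    return "Thanks for sharing. On a scale of 1 to 10, how is your mood right now?"

print(check_in("I always mess things up at work"))
print(check_in("Feeling a bit flat today"))
```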
Last year, researchers from the Massachusetts Institute of Technology presented a paper detailing a neural-network model that can be run over raw text and audio data to discover speech patterns indicative of depression, without the need for a pre-set questionnaire. This breakthrough will hopefully inspire further research along similar lines, which could also help those who suffer from cognitive conditions such as dementia.
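The paper's exact architecture is more involved, but the underlying idea of a sequence model that consumes text and audio directly can be sketched as follows; the layer sizes and inputs here are arbitrary placeholders, not the MIT model.

```python
# Minimal PyTorch sketch of a multimodal sequence model over text tokens and audio
# frames. Dimensions and inputs are arbitrary; this is an illustration only.
import torch
import torch.nn as nn

class TextAudioClassifier(nn.Module):
    def __init__(self, vocab_size=1000, text_dim=32, audio_dim=40, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, text_dim)
        self.text_lstm = nn.LSTM(text_dim, hidden, batch_first=True)
        self.audio_lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)  # single logit: depression risk

    def forward(self, text_tokens, audio_frames):
        _, (text_h, _) = self.text_lstm(self.embed(text_tokens))
        _, (audio_h, _) = self.audio_lstm(audio_frames)
        fused = torch.cat([text_h[-1], audio_h[-1]], dim=-1)  # concatenate final states
        return torch.sigmoid(self.head(fused)).squeeze(-1)

model = TextAudioClassifier()
tokens = torch.randint(0, 1000, (2, 20))  # batch of 2 transcripts, 20 tokens each
frames = torch.randn(2, 50, 40)           # 50 audio frames of 40 MFCC-like features
print(model(tokens, frames))              # two untrained risk scores in (0, 1)
```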
While the potential for AI in mental health is massive, it is imperative that we protect the privacy and identity of those who take part in studies in this domain. Maintaining data security is crucial in any AI exercise, and here it matters even more. To preserve trust in AI interventions and in the patients who provide the data, concerns around data and information security must be addressed before any future use cases of this technology are pursued.