Is AI the future of talk therapy?
Key Takeaways
Artificial intelligence (AI) applications that can analyze the human voice and human-authored text are increasingly being used to assess mental health.
As these apps proliferate, they raise questions about efficacy, ethics, and patient privacy.
Mental healthcare providers should familiarize themselves with these emerging technologies to understand their impact on clinical practice now and in the future.
Traditionally, talk therapy has required little more than a couch and a quiet place to talk. But in the not-so-distant future, will it also require an internet-connected device running artificial intelligence (AI)-powered software?
As technology advances and tech startups venture into the mental health arena, what was once science fiction is becoming clinical reality. Emerging research suggests that AI- and machine learning (ML)-driven analysis can help assess mental health, but it also raises questions about efficacy, ethics, and patient privacy.
Increased demand
The perceived utility of AI and ML applications for mental health may have increased amid the COVID-19 pandemic, which drove a spike in demand for mental healthcare.
According to a 2021 American Psychological Association survey of psychologists, demand for anxiety-, depression-, and trauma-related treatment increased in the thick of the pandemic.[]
The largest increases were in treatment demand for anxiety disorders (84%, up from 74% in 2020), depressive disorders (72%, up from 60%), and trauma- and stress-related disorders (62%, up from 50%).
A shortage of mental healthcare workers complicates things. According to a September 2021 Kaiser Family Foundation report, not a single state can meet its demand for mental healthcare services.[] Coming in last in the ranking, Missouri meets 5.9% of demand. Holding the top spot, New Jersey meets 68.9%.
It takes time to mint new mental health clinicians—time that struggling patients may not have. AI and ML applications may offer an expedient solution.
The robot will see you now
Woebot, founded in 2017, was an early entrant into the AI-powered mental healthcare technology space. The app is free on Android and iOS devices.
Text chat-based Woebot uses AI and natural language processing to assess mental health, then recommends interventions rooted in cognitive behavioral therapy (CBT), interpersonal psychotherapy, and dialectical behavior therapy.
Skeptical clinicians may wonder if text chatting with a robot is comparable to a couch-based chat with a clinician. A 2021 Journal of Medical Internet Research study indicates that it is.[]
The study analyzed data from about 36,000 Woebot users aged 18 to 78 years. After 5 days of using the app, participants had a mean score of 3.36 on the WAI-SR (Working Alliance Inventory-Short Revised), which measures how well clinicians and patients align on therapeutic goals, as well as the strength of the clinician-patient bond.
The researchers noted that comparable studies on traditional outpatient CBT have achieved similar scores. In other words, patients appear to feel connected to their AI-based robo-therapists. But does the intervention actually work?
What is the role of AI?
AI-based interventions may be effective, but with some important caveats, as noted in a 2019 overview published in Current Psychiatry Reports.[] The researchers wrote that AI may give mental health clinicians more capacity to use their “uniquely human skills” to focus on patients and create “personalized treatments.”
AI may also assist with diagnosis and screening.
“Leveraging AI techniques offers the ability to develop better prediagnosis screening tools and formulate risk models to determine an individual’s predisposition for, or risk of developing, mental illness,” the researchers wrote.
This is loosely the premise behind the Sonde Mental Fitness app, which analyzes the human voice to assess mental health. The app’s algorithm listens for changes in the voice to establish a mental health baseline. Much as a wearable blood glucose monitor can prompt its wearer to act, insights from the app may prompt users to seek care.
The Current Psychiatry Reports researchers also wrote that AI may reveal mental health trends in large population datasets, shaping how care is delivered on the couch.
But as with any new technology, there are issues that clinicians must note.
Ethics and privacy
An integral part of the ethics debate over AI use in (and outside of) healthcare is the black box problem. Users, including patients, don’t know what data are driving the AI’s analysis and recommendations, or how the algorithm parses those data. This can even be a mystery to the data scientists and programmers creating the algorithms.
"If the data involved is sent or received from a healthcare provider, then HIPAA applies."
— Jason Connor
This is problematic because people can be biased, and that bias, whether overt or implicit, may affect which data are selected and how they are used.
Those involved in “making decisions about the selection, testing, implementation, and evaluation of AI technologies must be aware of ethical challenges, including biased data (eg, subjective and expressive nature of clinical text data; linking of mental illnesses to certain ethnicities),” the Current Psychiatry Reports researchers wrote.
Patient privacy is also an issue. Many of these apps operate in the gray area of consumer health products, which are not governed by HIPAA, according to Jason Connor, GRC manager at the cybersecurity firm GoVanguard.
“HIPAA applies to data involved in patient treatment,” Connor said in an exclusive MDLinx interview. “If the data involved is sent or received from a healthcare provider, then HIPAA applies. But, in the case of something like Woebot, probably at most it’s considered consumer health information.”
Looking ahead
Will AI replace human clinicians in mental health practice?
Probably not, according to the Current Psychiatry Reports piece. Its authors envision a future in which human intelligence combines with AI to expand and augment mental healthcare.
Humanity is inseparable from the care equation, they wrote. In addition to spotting unobserved factors, addressing potential biases, and identifying mistakes, practitioners can "focus on the human aspects of medicine that can only be achieved through the clinician-patient relationship."
What this means for you
The promise of AI in mental healthcare appears to be to support providers, not replace them. It may expand access to care, provide early screening, and generate population-level insights about mental health, all of which may benefit an overburdened system. However, AI’s black-box nature requires clinicians who use it, as well as its developers, to be cognizant of how bias affects AI and AI-supported care. Also, many AI-based mental health applications are consumer-grade products, raising questions about patient data privacy.