Can an AI bot replace your therapist?

By Hal Conick | Fact-checked by Barbara Bekiesz
Published August 14, 2023

Key Takeaways

  • Artificial intelligence (AI) has shown promise in diagnosing and detecting mental health issues in patients, especially in its ability to parse large data sets. 

  • Research has found that AI could help identify individuals or populations at risk of suicide and provide support.

  • Ethical guidelines and research are needed for AI in mental healthcare. Researchers want standard guidelines, as well as regulation of controversial tools, such as AI that purportedly detects emotion in human faces.

The healthcare industry has adopted artificial intelligence (AI) technology to diagnose disease, monitor patients, and even discover new drugs. 

But how can AI assist in the field of mental health, an endeavor steeped in human emotion? Researchers are both hopeful and cautious about how AI could reshape mental health treatment.

Managing data with AI

In a review published in Current Psychiatry Reports, researchers wrote that mental healthcare has been slow to adopt AI technology but could benefit greatly from wider adoption, especially for parsing large data sets.[]

Mental health practitioners generally rely on soft skills, such as forming relationships with patients and observing their emotions. AI, by contrast, has shown promise in combing through mounds of data to screen for and diagnose mental illnesses.

“Leveraging AI techniques offers the ability to develop better pre-diagnosis screening tools and formulate risk models to determine an individual’s predisposition for, or risk of developing, mental illness,” the review authors wrote. “To implement personalized mental healthcare as a long-term goal, we need to harness computational approaches best suited to big data.”
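
The review doesn't describe a specific implementation, but a minimal sketch can illustrate the kind of pre-diagnosis screening tool the authors have in mind: a text classifier that flags self-reports for clinical follow-up. Everything below, from the example reports and labels to the 0.5 threshold, is hypothetical and for illustration only; it is not the review authors' method.

# Illustrative sketch only: a toy "pre-diagnosis screening" text classifier.
# The data, labels, and threshold below are hypothetical; real screening tools
# require validated clinical datasets, ethics review, and clinician oversight.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical self-report snippets labeled 1 (flag for follow-up) or 0 (no flag).
reports = [
    "I can't sleep and I feel hopeless most days",
    "Work has been busy but I'm doing okay overall",
    "I have lost interest in everything I used to enjoy",
    "Feeling good lately, exercising and seeing friends",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a common baseline for text screening.
screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(reports, labels)

# Score a new self-report; the probability is a screening signal, not a diagnosis.
new_report = ["I feel empty and can't get out of bed"]
risk_probability = screener.predict_proba(new_report)[0, 1]
print(f"Follow-up recommended: {risk_probability >= 0.5} (score={risk_probability:.2f})")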

Preventing suicide

Using AI to churn through large datasets may even help predict and mitigate suicidal behaviors.

In a paper published in the Australian & New Zealand Journal of Psychiatry, researchers used an AI-based predictive analytics tool to mine existing datasets.[]

They found that this tool could help generate a risk algorithm to identify when individuals or populations need emotional support, or even emergency assistance. Used proactively, such an algorithm could help prevent suicide. 

“There could be several advantages of incorporating artificial intelligence into suicide care, which includes a time- and resource-effective alternative to clinician-based strategies, adaptability to various settings and demographics, and suitability for use in remote locations with limited access to mental healthcare supports,” the authors wrote. 
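
The paper itself doesn't publish its algorithm, so the following is only a toy sketch of the general idea: scoring historical records so that individuals, and regions, with elevated risk can be flagged for outreach. The columns, weights, and threshold are invented for illustration; a real risk algorithm would be learned from validated outcome data and used under clinical and ethical oversight.

# Illustrative sketch only: flagging individuals and regions from historical records.
# The columns, weights, and threshold are hypothetical, not the study's algorithm.
import pandas as pd

# Hypothetical historical dataset of the kind a predictive tool might mine.
records = pd.DataFrame({
    "person_id":        [1, 2, 3, 4, 5],
    "region":           ["north", "north", "south", "south", "south"],
    "prior_attempts":   [0, 1, 0, 2, 0],
    "recent_er_visits": [0, 2, 1, 3, 0],
    "missed_followups": [1, 3, 0, 4, 1],
})

# A toy linear risk score; real models would learn these weights from outcome data.
weights = {"prior_attempts": 3.0, "recent_er_visits": 1.5, "missed_followups": 1.0}
records["risk_score"] = sum(records[col] * w for col, w in weights.items())

# Individuals above a (hypothetical) threshold get prioritized for outreach.
flagged = records[records["risk_score"] >= 5.0]
print(flagged[["person_id", "region", "risk_score"]])

# Aggregating by region hints at how "populations" in need might be identified.
print(records.groupby("region")["risk_score"].mean())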

An AI chatbot as your therapist

AI tools in mental healthcare aren't just for crunching data; some now chat with people about their feelings and problems.

For example, the AI chatbot service Wysa prompts users with questions like “What’s bothering you?” and then analyzes their responses.[] The tool replies with supportive messages and advice, pre-written by psychologists, on managing issues like grief, depression, and chronic pain. 
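
Wysa's internals aren't described in detail here, but the general pattern the article outlines, prompt the user, analyze the reply, and return pre-written guidance, can be sketched in a few lines. The keywords and canned messages below are invented placeholders, not Wysa's psychologist-written content.

# Illustrative sketch only: the prompt-analyze-respond pattern the article describes.
# The keywords and messages are invented placeholders, not clinical advice.

# Pre-written supportive messages keyed by topic.
PREWRITTEN_RESPONSES = {
    "grief": "Losing someone is hard. It can help to set aside time each day to remember them.",
    "depression": "Low moods are heavy. Small steps, like a short walk, can be a gentle start.",
    "pain": "Chronic pain is exhausting. Pacing activity and rest may make days more manageable.",
}

# Crude keyword matching stands in for the response analysis a real chatbot performs.
KEYWORDS = {
    "grief": ["loss", "died", "miss", "grief"],
    "depression": ["sad", "hopeless", "empty", "depressed"],
    "pain": ["pain", "ache", "hurts"],
}

def respond(user_message: str) -> str:
    """Return a pre-written supportive message matching the user's concern."""
    text = user_message.lower()
    for topic, words in KEYWORDS.items():
        if any(word in text for word in words):
            return PREWRITTEN_RESPONSES[topic]
    return "Thanks for sharing. Can you tell me more about what's bothering you?"

print("What's bothering you?")
print(respond("I've been feeling hopeless and sad lately"))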

In the 2021 Springer collection Multiple Perspectives on Artificial Intelligence in Healthcare, researchers wrote that chatbots have shown benefits in psychoeducation and treatment adherence.[] But the authors also warned that chatbots could interfere with the patient-therapist relationship and lead users to over-rely on tools that have demonstrated limited emotional intelligence. 

Defending the role of human therapists, authors of a 2021 paper published in SSM Mental Health argued that human interactions, not chatbots, must remain the first line of mental health treatment.[] This is especially true for the most vulnerable populations, they wrote. 

Most Americans seem to bristle at the idea of using chatbots for their own mental healthcare.

According to a survey conducted by the Pew Research Center, 71% of people said they would not use an AI chatbot for their own mental health support.[] Additionally, 28% disliked the idea so strongly that they believed such chatbots shouldn’t be available at all.

Ethical guidance needed

Researchers warn that guardrails to prevent misuse must be in place before AI is widely used in mental healthcare.

In an article about the ethical implications of AI in mental healthcare published in the Journal of Medical Internet Research, the authors suggested creating and implementing standard guidelines for AI in this field.[] 

These authors also argued that the use of AI in mental healthcare should be transparent, with mental health professionals continuing to oversee AI tools and algorithms. Clinicians must also avoid replacing traditional mental healthcare with robotic tools for those in need, they wrote. 

Can AI discern emotions in people’s facial expressions?

In addition to AI tools that dig into data and analyze human writing, there are now tools that purportedly recognize emotion in human faces. 

Ellie, a virtual avatar developed at the University of Southern California, uses nonverbal cues to guide the conversation, soothing users with human-like vocal intonations like “Hmmm.”[]

Kate Crawford, PhD, a research professor at the University of Southern California’s Annenberg School and principal researcher at Microsoft Research, wrote in Nature in 2021 that the emotion recognition industry may be worth $37 billion by 2026.[] Such projections run high even though scientists disagree about whether AI can detect emotions at all. In Dr. Crawford’s view, these tools need to be regulated to avoid misuse.

"These safeguards will recentre rigorous science and reject the mythology that internal states are just another data set that can be scraped from our faces."

Kate Crawford, PhD, Nature

The future of AI in mental healthcare

While there’s hope that AI can be of great assistance in mental healthcare, further research is needed. Like the devices that claim to detect emotions in faces, many AI tools in the mental health space remain understudied. 

For example, a 2020 study found that only 2.08% of psychosocial and wellness mobile apps are backed by research.[]

With more research, information, and capability will come a greater need for discernment. In an editorial published in the Journal of Mental Health, Sarah Carr, a senior fellow in mental health policy at the University of Birmingham, wrote that the industry must consider the perspectives of patients, caregivers, and families in determining how AI can best serve their mental healthcare needs.[]

“If AI is to be increasingly used in the treatment and care of people with mental health problems then patients, service users and carers should participate as experts in its design, research and development,” Carr wrote.

What this means for you

AI in mental healthcare is being adopted slowly even as the technology develops rapidly. People who can’t afford a therapist may find comfort in talking through their issues with a chatbot, but they should be aware that its efficacy is still being researched. Mental health clinicians can expect to see AI used more widely in the mental healthcare space, especially for spotting trends and supporting diagnosis in large datasets. 
