AI in psychotherapy faces ethical scrutiny over empathy and safety gaps

CO-EDP, VisionRI | Updated: 29-05-2025 10:23 IST | Created: 29-05-2025 10:23 IST

Artificial intelligence is fast becoming a fixture in healthcare, and psychotherapy is no exception. But a new expert commentary warns that this technological frontier comes with significant risks that could compromise patient care if left unchecked. Published in Australasian Psychiatry under the title “Artificial Intelligence (AI) in Psychotherapy: A Challenging Frontier,” the article by Jesudason, Bacchi, and Bastiampillai critically examines the potential and limitations of AI-driven psychotherapy, especially through chatbot interfaces.

Despite growing optimism about the scalability and affordability of AI mental health tools, the authors argue that current limitations in bias, empathy, data ethics, and patient safety demand urgent attention. As global mental health needs continue to outstrip the supply of human therapists, pressure to deploy AI chatbots is mounting. Yet the article insists that meaningful oversight, rigorous design standards, and a cautious roll-out are essential to prevent unintended harm.

Can AI psychotherapy truly replicate human empathy?

AI psychotherapy is predominantly delivered through chatbots trained to simulate human interaction using techniques like cognitive-behavioral therapy (CBT). Although these bots have demonstrated some capacity to mirror the mechanics of therapeutic dialogue, the study argues that they lack the emotional depth and adaptive learning that patients expect from real therapists.

Central to this concern is the concept of wisdom: a composite trait that encompasses empathy, emotional regulation, introspection, and moral judgment. The article stresses that AI systems, no matter how intelligent, fall short of wisdom. While they excel in data processing and pattern recognition, their inability to embody compassion or understand nuanced emotional states limits their effectiveness in mental health contexts.

The authors note that chatbots tend to be programmed with an emphasis on user satisfaction, often prioritizing unconditional positive regard over constructive feedback. This design choice hinders their ability to deliver necessary but difficult interventions such as cognitive restructuring or behavioral correction, which are core components of effective psychotherapy. Patients may receive comforting responses that ease short-term distress, but this could ultimately delay meaningful progress or adaptive change.

What ethical and clinical risks do AI therapy models present?

The article outlines several pressing ethical and clinical risks associated with AI psychotherapy. Algorithmic bias is a primary concern. Most AI models are trained using Reinforcement Learning from Human Feedback (RLHF), often on datasets that are not demographically representative. This has led to performance disparities where the needs and expressions of certain groups, especially women, minorities, and neurodivergent individuals, are minimized or misinterpreted.

The authors highlight the case of Tessa, an AI chatbot designed to support patients with eating disorders, as a critical example. Tessa had to be withdrawn from clinical use within a week after it delivered harmful advice. This high-profile failure illustrates the grave consequences of deploying unvetted AI tools in sensitive health contexts.

Data privacy is another area of concern. Whereas human therapists are bound by strict confidentiality norms and legal protections, AI models store and process patient data in cloud-based environments that are often governed by unclear or insufficient regulation. The authors caution that until robust governance frameworks are in place, the use of AI chatbots in clinical settings should remain experimental.

The risk of over-reliance is also emphasized. Given the on-demand availability of chatbot therapists, users may develop dependency. In anxiety disorders especially, constant access to a non-judgmental, always-available chatbot may reinforce maladaptive reassurance-seeking behaviors, ultimately diminishing emotional resilience.

Should AI be viewed as a replacement or supplement in mental health care?

The authors of the study make a clear case: AI psychotherapy is not ready to replace human therapists and may never be. They envision AI as one component of a broader, integrative mental healthcare ecosystem. This ecosystem could include digital tools for sleep tracking, medication reminders, or mindfulness support, but always under the oversight of trained clinicians.

The concern is that rapid commercialization could outpace regulation. As more mental health apps enter the market, often at widely varying levels of quality and safety, there’s a risk of market saturation and dilution of clinical standards. Worse, there’s the danger that low-resource health systems may begin to view AI as a cost-cutting replacement for traditional services. This could exacerbate inequalities in care, particularly if AI services are targeted at low-income populations while wealthier groups retain access to human therapists.

The authors call for preemptive regulation to ensure that AI psychotherapy tools meet rigorous ethical and clinical standards. They advocate for transparent performance evaluations, robust patient consent protocols, and the inclusion of marginalized voices in design and governance.

First published in: Devdiscourse