Ethical concerns cloud patient acceptance of AI in clinical care

CO-EDP, VisionRI | Updated: 31-05-2025 09:25 IST | Created: 31-05-2025 09:25 IST

As artificial intelligence (AI) becomes increasingly integrated into clinical care, questions about how patients perceive its role remain largely unanswered. A new qualitative study provides fresh insights into this issue, focusing on how patients respond to AI-based clinical decision support systems (AI-CDSS). The study, titled "Indecision on the Use of Artificial Intelligence in Healthcare: A Qualitative Study of Patient Perspectives on Trust, Responsibility and Self-Determination Using AI-CDSS", was published in Digital Health in 2025.

Drawing on in-depth interviews with 18 patients across three focus groups in Germany, the research explores attitudes toward AI-CDSS applications in nephrology, surgery, and home care. It highlights how patients grapple with trust, the delegation of responsibility, and their desire for self-determination in an evolving healthcare environment shaped by algorithms.

How do patients perceive trust in the context of AI-CDSS?

Patients associated trust with the professional expertise and experience of healthcare providers. Education, hands-on experience, and the ability to understand patient-specific contexts all contributed to what patients perceived as a trustworthy healthcare professional. Many participants feared that the use of AI-CDSS might undermine these qualities.

Some participants acknowledged the potential of AI to enhance accuracy and consistency in diagnoses or treatment planning. However, concerns surfaced when the outputs of AI-CDSS conflicted with physician recommendations. This tension led to confusion and a breakdown in trust, not only in the technology but also in the healthcare professionals who might defer to it.

Patients also voiced apprehensions about AI-CDSS being perceived as infallible, especially by inexperienced professionals who might over-rely on these tools. This dynamic, they warned, could diminish the physician’s role as a trusted guide. The idea that AI could be elevated to the status of a “second opinion” raised red flags for patients, who feared a loss of personal engagement and relational care.

What are the ethical concerns around responsibility?

Responsibility was another area of tension. Patients overwhelmingly believed that ultimate responsibility for clinical decisions should remain with human professionals. While AI-CDSS might offer helpful insights, the idea that decisions could be dictated by an algorithm was seen as fundamentally flawed.

Respondents argued that healthcare professionals must remain accountable, not only for making final decisions but also for interpreting AI output within the context of each patient's unique situation. They feared that a shift toward technological determinism could lead to a gradual erosion of professional skills and, consequently, dilute the quality of care.

Participants also pointed to the complexity of assigning responsibility when errors occur. They expressed doubts about holding software developers or manufacturers accountable, citing practical barriers to establishing liability. Most patients insisted that the clinician must remain the point of contact and bear the burden of accountability, even if an AI-CDSS tool was involved in the decision process.

Some patients did acknowledge their own role in the healthcare process, emphasizing shared responsibility and the importance of being involved in treatment decisions. Yet they were clear that AI should not replace the personal connection and dialogue with human caregivers.

How does AI-CDSS affect self-determination?

Perhaps the most emotionally charged responses came in relation to self-determination. Patients worried that AI-CDSS might diminish their autonomy in subtle but significant ways. Several participants expressed concern that systems designed for predictive analytics could override their preferences, particularly in home-care scenarios or end-of-life planning.

Patients cited scenarios in which recommendations from AI could be misinterpreted as mandatory directives, thereby limiting their ability to participate actively in their own care. The prospect of being reduced to a data point rather than a person with preferences and values was deeply unsettling to many.

Information control was another major concern. Some patients feared that AI might access or share sensitive health data in ways they could neither understand nor control. Others highlighted their right “not to know,” citing potential psychological harm from algorithms that might predict future illnesses.

A recurring theme was the desire to remain an active, equal partner in healthcare decisions. Patients viewed AI-CDSS positively when used as a supplemental tool but rejected its use as a replacement for human interaction or judgement. They called for better communication about how AI systems work and for inclusive design practices that consider their values and concerns.

First published in: Devdiscourse