AI can inform, but not reassure: Patients still crave human touch in surgery

CO-EDP, VisionRI | Updated: 13-10-2025 11:29 IST | Created: 13-10-2025 11:29 IST

The majority of people accept AI-assisted consent as legally valid, yet confidence in AI remains well below confidence in human doctors, pointing to an enduring divide between technological capability and human trust, according to a new study published in AI & Society that explores how patients perceive artificial intelligence in one of medicine’s most sensitive areas: surgical consent.

The study, titled “Is Consent-GPT Valid? Public Attitudes to Generative AI Use in Surgical Consent,” investigates public reactions to a prototype large language model called Consent-GPT. The system was designed to replicate the informed consent process typically conducted by surgeons and junior doctors. Researchers surveyed 376 participants across the United Kingdom to gauge the perceived ethical validity and legal defensibility of AI-mediated consent in surgery, as well as satisfaction with the process.

The findings expose a nuanced public stance: people are open to AI involvement in medical discussions but remain skeptical about replacing human interaction entirely. The research highlights a growing challenge for healthcare systems worldwide: how to integrate generative AI safely and ethically into emotionally charged, high-stakes medical contexts.

AI and human consent: A measurable trust gap

The study’s first major insight concerns public perception of validity. Despite equal accuracy in information delivery, respondents showed a marked preference for human clinicians: 87.6 percent of participants considered AI-assisted consent valid, compared with 96.2 percent for consent obtained by junior doctors and 97.6 percent for consent obtained by treating surgeons. This reveals that people still associate trustworthiness and moral legitimacy with human judgment, even when AI performs the same informational task with precision.

This trust gap deepened when measuring satisfaction. Only 59.5 percent of respondents expressed satisfaction with the AI-assisted process, compared with over 90 percent for both human clinicians. Participants appeared to value empathy, reassurance, and conversational responsiveness, qualities that AI systems currently struggle to replicate. The results suggest that while information accuracy satisfies regulatory standards, emotional engagement remains critical to how patients evaluate consent interactions.
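To put those figures side by side, the short Python sketch below simply converts the reported percentages back into approximate respondent counts and computes the gaps in percentage points. The 90 percent satisfaction value for human clinicians is an assumption, since the article reports it only as “over 90 percent.”

    n = 376  # UK respondents surveyed in the study

    # Reported shares who judged the consent process valid, by who conducted it
    validity = {
        "Consent-GPT (AI)": 0.876,
        "junior doctor": 0.962,
        "treating surgeon": 0.976,
    }

    # Shares satisfied with the process; the human figure is an assumed
    # lower bound, reported only as "over 90 percent"
    satisfaction = {
        "Consent-GPT (AI)": 0.595,
        "human clinicians": 0.90,
    }

    for label, shares in [("Validity", validity), ("Satisfaction", satisfaction)]:
        print(label)
        for agent, share in shares.items():
            print(f"  {agent}: {share:.1%} (~{round(share * n)} of {n})")

    gap_validity = (validity["treating surgeon"] - validity["Consent-GPT (AI)"]) * 100
    gap_satisfaction = (satisfaction["human clinicians"] - satisfaction["Consent-GPT (AI)"]) * 100
    print(f"Validity gap, AI vs. surgeon: {gap_validity:.1f} percentage points")
    print(f"Satisfaction gap, AI vs. humans: {gap_satisfaction:.1f} percentage points")

Run as-is, the sketch shows a roughly 10-point validity gap but a gap of more than 30 points in satisfaction, which is the asymmetry the authors emphasize: people concede that AI-mediated consent is valid far more readily than they feel satisfied by it.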

The authors argue that these perceptions reflect not just technological limitations but also deeper cultural expectations. Informed consent in medicine is not only about transferring facts but about fostering understanding, reassurance, and shared decision-making. The absence of human presence, they note, may make the interaction feel transactional rather than collaborative, weakening the perceived legitimacy of consent.

Legal accountability and ethical comfort

Apart from trust and satisfaction, the study examined whether AI’s involvement affects public attitudes toward legal responsibility. Participants were asked whether a patient would be justified in suing a hospital if complications occurred after AI-mediated consent. Although most considered AI-obtained consent valid, respondents were slightly more likely to endorse litigation when consent was obtained through an AI system.

Interestingly, the deciding factor in most responses was not whether a human or AI conducted the consent process, but whether risks were properly disclosed and understood. The research found that clear, comprehensive risk communication remained the most influential determinant of perceived legitimacy. This implies that the legal defensibility of AI-assisted consent hinges less on the technology itself and more on procedural safeguards and oversight.

The authors also observed that trust and satisfaction strongly correlated with perceived consent validity. When respondents reported feeling comfortable discussing their health concerns, they were more likely to accept AI-assisted consent as ethically sound. Conversely, lower emotional comfort corresponded with doubts about AI’s capacity for genuine care and accountability.

These results highlight a critical tension in healthcare innovation. While AI systems like Consent-GPT can standardize information and reduce clinician workload, the ethical principle of patient autonomy still depends on a sense of being heard and understood—something that remains difficult to automate.

Building a hybrid future for surgical communication

The study concludes with a pragmatic recommendation: develop a hybrid consent model that merges AI efficiency with human oversight. The authors suggest that AI tools like Consent-GPT could handle the information-heavy aspects of consent, such as risk documentation, legal compliance, and patient education, while human clinicians focus on empathy, reassurance, and addressing nuanced emotional responses.
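To make the division of labor concrete, here is a minimal Python sketch of how such a hybrid workflow could be wired: the AI layer answers and logs standardized risk questions, while emotionally loaded messages are escalated to the surgical team. Every identifier in it is hypothetical; the paper proposes the model conceptually and does not prescribe an implementation.

    from dataclasses import dataclass, field

    # Hypothetical hybrid consent workflow: the AI layer delivers and documents
    # standardized risk information; emotional concerns go to a human clinician.

    EMOTIONAL_CUES = ("scared", "worried", "anxious", "afraid", "not sure")

    @dataclass
    class ConsentSession:
        procedure: str
        risks_disclosed: list = field(default_factory=list)
        escalated: bool = False

    def lookup_risk_information(procedure, question):
        # Stand-in for a vetted, procedure-specific knowledge base
        return f"Documented risks of {procedure} relevant to: {question}"

    def handle_patient_message(session, message):
        # Route emotionally loaded messages to the surgical team (human oversight)
        if any(cue in message.lower() for cue in EMOTIONAL_CUES):
            session.escalated = True
            return "A member of your surgical team will discuss this with you."
        # Otherwise the AI layer answers and logs the disclosure for the record
        answer = lookup_risk_information(session.procedure, message)
        session.risks_disclosed.append(answer)
        return answer

    session = ConsentSession(procedure="laparoscopic cholecystectomy")
    print(handle_patient_message(session, "What are the infection risks?"))
    print(handle_patient_message(session, "I'm scared about the anaesthetic."))

The keyword check is deliberately crude; in a real deployment, the transparent policies and oversight the authors call for would govern when and how a human takes over.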

Such a model would not only streamline clinical workflows but also ensure that ethical and emotional dimensions of consent remain intact. The researchers caution, however, that the transition to hybrid systems must be guided by transparent policies, legal frameworks, and ongoing public dialogue to maintain trust and accountability.

The authors also call for continued empirical research to track how attitudes evolve as AI becomes more common in clinical practice. Early adoption in administrative or non-critical areas might normalize AI presence, gradually improving comfort levels. However, without clear safeguards and patient-centered design, rapid implementation could risk alienating patients who already view medical systems as impersonal.

Crucially, the study situates the debate within broader ethical discussions about AI autonomy and human agency. Even as generative AI demonstrates linguistic fluency and informational accuracy, its lack of moral reasoning, emotional sensitivity, and responsibility limits its acceptability in contexts requiring empathy and judgment. Informed consent, as the authors argue, is not merely about data transfer; it is an ethical act grounded in relational trust.

FIRST PUBLISHED IN: Devdiscourse