Real-world AI trust hinges on human judgment, not algorithm transparency

Trust in artificial intelligence (AI) systems used in hospitals is shaped less by technical transparency and more by social context and human judgment, according to a new study. The peer-reviewed study, titled Trust in AI and published in AI & Society, draws on ethnographic fieldwork conducted in UK and US hospitals between 2019 and 2023 to examine how clinicians actually engage with AI in daily practice.
The findings challenge dominant assumptions in AI governance, revealing that technical explainability often fails to establish trust in clinical settings, even when systems are accurate and offer detailed explanations.
Does explainability guarantee trust?
Current narratives in AI governance revolve around the assumption that trust can be engineered through explainability, interpretability, and transparency. These ideas are embodied in regulatory frameworks like the European Commission’s guidelines on trustworthy AI, which assert that systems must offer technical robustness and explain their reasoning.
However, the study shows that explainability does not necessarily equate to trust. In one case, radiologists were asked to collaborate with an AI system designed to assist in cancer diagnosis. While the AI slightly outperformed human radiologists when working independently, it failed to boost accuracy when used collaboratively. More strikingly, radiologists who expressed distrust in the system performed worse when using it than when working alone, even though the system offered explanations for its decisions.
This suggests that the ability of AI to "explain itself" does not automatically foster trust. The radiologists' skepticism persisted despite the transparency features. As the study points out, a technical explanation does not equal emotional or professional buy-in from users. The root issue is not just how AI works, but whether its users accept it as a reliable partner in high-stakes decision-making.
Can trust exist without transparency?
Contrary to prevailing theories, the study also documented scenarios in which trust flourished in the absence of explicit explainability. In one hospital, an AI system was used routinely to flag emergent findings in X-rays. Radiologists relied on it to review scans left unread at the end of each workday. Here, trust was not even a topic of discussion - it was embedded in routine and accepted as part of the clinical workflow.
Staff raised no concerns about how the system worked. There were no debates about black-box algorithms or demands for interpretability. Trust existed simply because the technology had become normalized and users found it helpful and efficient. This case illustrates that explainability and transparency, while useful, may be irrelevant once a technology is fully integrated into daily practice.
Another snapshot centered on a radiation therapy system known as CyberKnife. Despite the complexity of the technology, staff trusted the system precisely because of the strong human oversight embedded in its use. Trust was not placed in the machine but in the humans - the physicists, therapists, and technicians - who surrounded it and managed its operation. In essence, it was trust in people that made the technology feel safe.
Are policies focused on the wrong solution?
These ethnographic accounts expose a disconnect between AI policy ambitions and the lived realities of those who use AI tools. While regulations are built on the premise that trust can be “built into” technology via explainability protocols, the study shows that trust is highly context-dependent, socially constructed, and sometimes orthogonal to technical features.
In another case, even well-established AI systems with excellent performance records were not trusted by clinical staff. Radiologists often double-checked AI-generated results or refused to use the systems altogether. These behaviors were not necessarily triggered by a lack of system transparency but stemmed from concerns about professional responsibility and liability. Doctors, not machines, are accountable for mistakes - a legal reality that inherently limits trust in automation, no matter how technically sound it is.
The study concludes that viewing trust as a property that can be engineered through design features is overly simplistic. Trust should instead be seen as a dynamic, evolving relationship shaped by human-machine interactions, institutional culture, and real-world constraints. It recommends that AI developers and policymakers stop assuming a one-size-fits-all solution and instead tailor their approaches to the social and professional contexts in which AI is deployed.