Bridging compliance and care: XAI enables trustworthy AI in European healthcare

CO-EDP, VisionRI | Updated: 29-05-2025 10:17 IST | Created: 29-05-2025 10:17 IST

A new interdisciplinary study titled “The EU AI Act, Stakeholder Needs, and Explainable AI: Aligning Regulatory Compliance in a Clinical Decision Support System,” published on arXiv, presents a landmark investigation into the intersection of regulatory frameworks and user-centered AI design in clinical settings.

Conducted by a team of legal scholars, AI developers, and healthcare researchers from institutions including RISE Research Institutes of Sweden and the University of Bayreuth, the study evaluates how explainability in AI can operationalize the core principles of the EU AI Act in high-risk environments such as intensive care units (ICUs).

How does the AI Act apply to real-world clinical systems?

The European Union's AI Act, which entered into force in 2024, introduces a risk-based regulatory framework that imposes stringent requirements on high-risk AI systems, including those used in medical diagnostics and clinical decision-making. The study scrutinizes an AI-powered clinical decision support system (CDSS) designed to assist ICU coordinators in determining patient readiness for transfer, based on predicted indicators such as remaining length of stay (LOS), readmission risk, and mortality probability.

After mapping the legal scope of the system, the researchers confirmed that it falls within the definition of an AI system under Article 3(1) of the Act and is classified as high-risk under Article 6, making it subject to conformity assessment and CE marking. While it does not fall under existing medical device regulation that would require a third-party notified body, it is covered by Annex III of the AI Act because of its role in evaluating eligibility for healthcare services.

The legal analysis identifies the provider as responsible for ensuring system safety and documentation, and the deployer, typically hospital management, as accountable for proper usage, oversight, and impact assessment through a Fundamental Rights Impact Assessment (FRIA). These obligations collectively highlight the importance of transparent operations, robust data governance, and traceability.

What stakeholder needs must explainable AI address?

In stark contrast to the AI Act's focus on providers and deployers, explainable AI (XAI) research emphasizes end-user and stakeholder desiderata such as trust, usability, and fairness. The study identifies five key stakeholder groups: developers (providers), deployers, clinicians (users), patients (affected parties), and regulators. Each has distinct expectations of the AI system.

For instance, developers prioritize technical performance and robustness. Hospital administrators (deployers) seek regulatory compliance and clinician acceptance. Clinicians, who interact directly with the AI output, require interpretability to assess whether predictions align with their medical judgment. Patients, though not direct users, are critically impacted and thus demand fairness and correctness in AI-driven decisions.

To meet these divergent needs, the researchers selected post-hoc XAI methods compatible with the deep neural network (DNN) architecture of the CDSS. Techniques such as SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and Integrated Gradients were chosen for their ability to deliver localized, understandable explanations of individual predictions.
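
As a concrete illustration of how such a method attaches an explanation to a single prediction, the sketch below runs Integrated Gradients (via the Captum library) against a stand-in feedforward network with three output heads for remaining LOS, readmission risk, and mortality. The architecture, feature names, and patient vector are hypothetical placeholders; the study does not disclose the actual CDSS model or data.

```python
# Minimal sketch: post-hoc attribution for a DNN-based CDSS prediction.
# The network, feature names, and patient record below are hypothetical;
# they are not the study's actual model or data.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

FEATURES = ["age", "heart_rate", "lactate", "spo2", "days_in_icu"]  # illustrative only

class ToyCDSS(nn.Module):
    """Stand-in for the CDSS: one shared trunk, three heads
    (remaining LOS, readmission risk, mortality probability)."""
    def __init__(self, n_features: int = len(FEATURES)):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU())
        self.heads = nn.Linear(16, 3)  # outputs: [LOS, readmission, mortality]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.heads(self.trunk(x))

model = ToyCDSS().eval()
patient = torch.rand(1, len(FEATURES), requires_grad=True)  # one ICU patient record
baseline = torch.zeros_like(patient)                        # "no information" reference

# Integrated Gradients attributes the mortality head (index 2) to the input features.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    patient, baselines=baseline, target=2, return_convergence_delta=True
)

for name, score in zip(FEATURES, attributions.squeeze().tolist()):
    print(f"{name:>12}: {score:+.3f}")
print(f"convergence delta: {delta.item():.4f}")
```

The resulting signed scores, one per input feature, are the raw material that the system's visual and numerical explanation formats would then present to clinicians.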

By employing visual heatmaps and numerical output formats tailored to varied expertise levels, the CDSS enhances clinicians’ ability to critically evaluate AI predictions, thus fostering appropriate reliance rather than blind trust. This mitigates risk by empowering users to override AI suggestions when necessary, a key safeguard under Article 14’s human oversight mandate.
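
A minimal sketch of how such attributions might be surfaced to an ICU coordinator, with an explicit accept-or-override step in the spirit of Article 14, is shown below. The wording, ranking, and workflow are assumptions for illustration, not the study's actual interface.

```python
# Hypothetical clinician-facing view of feature attributions with an explicit
# override step; not the study's actual CDSS interface.
from dataclasses import dataclass

@dataclass
class Explanation:
    feature: str
    attribution: float  # signed contribution to the predicted mortality risk

def render_for_clinician(risk: float, explanations: list[Explanation], top_k: int = 3) -> str:
    """Turn raw attributions into a short, ranked summary a coordinator can check
    against clinical judgment before acting on the prediction."""
    ranked = sorted(explanations, key=lambda e: abs(e.attribution), reverse=True)[:top_k]
    lines = [f"Predicted mortality risk: {risk:.0%}"]
    for e in ranked:
        direction = "raises" if e.attribution > 0 else "lowers"
        lines.append(f"  - {e.feature} {direction} the estimate ({e.attribution:+.2f})")
    lines.append("Coordinator decision required: accept, defer, or override.")
    return "\n".join(lines)

print(render_for_clinician(
    0.18,
    [Explanation("lactate", +0.07), Explanation("age", +0.04), Explanation("spo2", -0.03)],
))
```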

Can explainable AI serve as a bridge between legal mandates and clinical practice?

The study’s core contribution lies in demonstrating that XAI can mediate between legal obligations and real-world stakeholder expectations, effectively translating abstract compliance into operational transparency. Three key insights (KIs) emerged from this synthesis:

KI1: The XAI community must align its stakeholder role definitions with the formal responsibilities articulated in the AI Act. While XAI prioritizes user and affected party perspectives, the Act delegates these concerns to providers and deployers, resulting in potential gaps in user-centered accountability.

KI2: XAI enhances the AI Act by connecting end-user needs, especially those of clinicians and patients, with broader regulatory aims. For example, by emphasizing interpretability and fairness, XAI helps ensure that CDSS outputs are not only technically valid but also ethically grounded and clinically actionable.

KI3: XAI supports compliance by furnishing the technical transparency necessary for conducting FRIA. By documenting how predictions are generated and offering risk-related insights, XAI mechanisms become instrumental in legal due diligence processes.

To sum up, the study underscores that trust in AI cannot be mandated solely through regulation; it must be cultivated across the entire stakeholder spectrum. XAI becomes the common language through which providers, deployers, users, and regulators can engage in transparent, accountable, and collaborative AI governance.

First published in: Devdiscourse