New dual-checklist system prevents AI failures in hospitals and clinics

In a bid to bridge the widening gap between artificial intelligence (AI) innovation and healthcare safety, a team of global researchers has released a comprehensive framework designed to guide the development and deployment of trustworthy AI systems. Published in the journal AI, the study, titled “Artificial Intelligence in Healthcare: How to Develop and Implement Safe, Ethical and Trustworthy AI Systems”, presents a novel dual-questionnaire approach tailored for developers and implementers of AI systems, aiming to operationalize regulatory expectations and ethical responsibilities across both the United States and the European Union.
What ethical and regulatory challenges does AI pose in healthcare?
The research underscores that healthcare is not only one of the most regulated industries but also among the most complex, because the stakes are human lives. Introducing AI into this environment brings risks such as biased algorithms, data privacy concerns, cybersecurity threats, and potential misdiagnosis or patient harm. Regulatory gaps in liability allocation, inconsistent accreditation processes, and the lack of global harmonization further complicate safe AI adoption.
In the EU, the AI Act (2024) classifies AI systems into risk tiers and mandates conformity assessments for high-risk systems, such as those involved in medical decision-making. Meanwhile, in the U.S., the FDA’s draft guidance from January 2025 outlines lifecycle management for AI-enabled devices but lacks provisions for adaptive, black-box models like generative AI. Notably, software in the U.S. is often treated as a “service” rather than a product, which exempts it from the strict liability laws that would typically apply to medical devices.
The study’s authors argue that current frameworks, though evolving, fail to adequately address the distinct responsibilities of developers (pre-market phase) versus implementers (post-market deployment). They advocate for a lifecycle-based evaluation of AI systems, encompassing design, deployment, oversight, and retraining, especially in settings where the margin for error is critically thin.
How can healthcare providers and developers align with ethical AI practices?
To support both groups, the study proposes two detailed questionnaires: one for developers (56 questions across 12 domains) and another for implementers (18 questions across 6 domains). These tools were derived from a literature review and expert consultation involving stakeholders from medicine, machine learning, and policy sectors.
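The paper’s actual questions are not reproduced here. Purely as an illustration of how an organization might track its answers to such checklists, the sketch below models the dual-questionnaire structure as simple Python data classes; the domain names and questions shown are placeholders, not items from the study.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the real content comes from the study's two questionnaires
# (56 questions / 12 domains for developers, 18 questions / 6 domains for
# implementers); the domains and questions below are placeholders.

@dataclass
class Question:
    text: str
    answer: str | None = None      # e.g. "yes", "no", "partially"
    evidence: str | None = None    # pointer to documentation supporting the answer

@dataclass
class Domain:
    name: str
    questions: list[Question] = field(default_factory=list)

@dataclass
class Checklist:
    audience: str                  # "developer" or "implementer"
    domains: list[Domain] = field(default_factory=list)

    def completion_rate(self) -> float:
        """Fraction of questions that have been answered so far."""
        qs = [q for d in self.domains for q in d.questions]
        return sum(q.answer is not None for q in qs) / len(qs) if qs else 0.0

# Example: a tiny slice of a developer checklist (placeholder content).
developer = Checklist("developer", [
    Domain("Human oversight", [Question("Can a clinician override the system's output?")]),
    Domain("Post-market surveillance", [Question("Is model drift monitored after deployment?")]),
])
print(f"Answered: {developer.completion_rate():.0%}")
```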
For developers, the questionnaire emphasizes technical documentation, human oversight, clinical evaluation, post-market surveillance, and alignment with data protection regulations like the EU’s GDPR or the U.S.’s HIPAA. It also explores the need to anticipate algorithmic drift, provide explainable AI outputs, and ensure patient safety through rigorous model testing.
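The study does not prescribe a particular drift metric. As one hedged illustration of how a developer might operationalize the expectation to anticipate algorithmic drift, the sketch below computes a Population Stability Index (PSI) between a model’s validation-time score distribution and a batch of recent production scores; the bin count and thresholds are conventional assumptions, not requirements from the paper.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and recent production scores.

    Common rule of thumb (an assumption, not from the study): PSI < 0.1 is stable,
    0.1-0.25 warrants review, > 0.25 suggests significant drift.
    """
    # Bin edges are taken from the baseline distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    obs_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Clip to avoid division by zero and log(0) in sparse bins.
    eps = 1e-6
    exp_pct = np.clip(exp_pct, eps, None)
    obs_pct = np.clip(obs_pct, eps, None)
    return float(np.sum((obs_pct - exp_pct) * np.log(obs_pct / exp_pct)))

# Example with synthetic risk scores: validation-time baseline vs. a slightly shifted production batch.
rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)
production = rng.beta(2.5, 4.5, 2_000)
print(f"PSI = {population_stability_index(baseline, production):.3f}")
```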
Implementers, typically healthcare administrators, clinicians, and IT departments, are urged to establish governance structures, validate AI performance against human decision-makers, and integrate feedback loops for continuous improvement. The study cites UW Medicine in Seattle as a best-practice example, where a Generative AI Task Force and structured pilot review processes govern the adoption of AI tools.
Accountability remains a recurring theme. The researchers highlight the necessity of a “human-in-the-loop” model to retain clinician responsibility for decisions, alongside clear audit trails and risk assessments. They also suggest that just as doctors undergo continuing certification, AI agents, particularly those capable of semi-autonomous decision-making, should be subjected to competency checks and recertification mechanisms.
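The paper leaves the mechanics of audit trails to implementers. One minimal sketch, assuming a JSON-lines log and illustrative field names, is to record every AI recommendation alongside the clinician’s final decision so that overrides remain traceable:

```python
import json
from datetime import datetime, timezone

def log_decision(audit_file: str, *, model_id: str, model_version: str,
                 patient_ref: str, ai_recommendation: str, clinician_decision: str,
                 overridden_reason: str | None = None) -> dict:
    """Append one human-in-the-loop decision record to a JSON-lines audit trail.

    The record keeps the AI output and the clinician's final call side by side,
    so overrides can be reviewed later. Field names are illustrative assumptions.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "patient_ref": patient_ref,   # a pseudonymous reference, never raw identifiers
        "ai_recommendation": ai_recommendation,
        "clinician_decision": clinician_decision,
        "overridden": ai_recommendation != clinician_decision,
        "overridden_reason": overridden_reason,
    }
    with open(audit_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: the clinician disagrees with the model and records why.
log_decision("audit_trail.jsonl",
             model_id="sepsis-risk", model_version="1.4.2",
             patient_ref="case-0192",
             ai_recommendation="escalate to ICU",
             clinician_decision="continue ward monitoring",
             overridden_reason="vitals stable for 6 hours; score driven by a stale lab value")
```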
What needs to change for AI to truly transform healthcare safely?
The researchers call for regulatory harmonization between the U.S. and EU, emphasizing that current laws remain inadequate to handle the complexities of autonomous or generative AI systems. Liability remains murky, particularly in scenarios where doctors follow AI guidance that leads to adverse outcomes. While some legal scholars suggest assigning personhood to AI or creating insurance-backed liability pools, others propose shared accountability between developers and providers.
Accreditation bodies have also not yet caught up. The study finds no established AI-specific accreditation process in hospitals, either in the U.S. or EU. While institutions are increasingly integrating AI under broader quality control mandates, there is no standardized method for evaluating bias, transparency, or explainability in AI tools. This gap presents a significant risk to both patient safety and institutional accountability.
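In the absence of an accreditation standard, one simple check an institution could run today is to compare a model’s sensitivity across patient subgroups. The sketch below, using synthetic data and an assumed review threshold, computes per-group true-positive rates and the largest gap between them; it stands in for, rather than represents, any standardized bias evaluation.

```python
import numpy as np

def tpr_by_group(y_true, y_pred, groups):
    """Sensitivity (true-positive rate) per subgroup, plus the largest pairwise gap."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 1)          # positives within this subgroup
        rates[g] = float(y_pred[mask].mean()) if mask.any() else float("nan")
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Synthetic example with two illustrative subgroups.
y_true = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates, gap = tpr_by_group(y_true, y_pred, groups)
print(rates, f"TPR gap = {gap:.2f}")  # a gap above ~0.1 might trigger manual review (assumed threshold)
```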
Furthermore, the paper highlights the need for enhanced AI literacy among healthcare professionals and patients alike. Transparency about AI’s role in medical decision-making is crucial to maintaining trust. Hospitals are encouraged to provide mandatory staff training and publicly accessible documentation on how AI tools are deployed and monitored.
Looking ahead, the authors propose a series of research priorities to refine these frameworks: longitudinal studies on AI model drift, empirical validation of the proposed questionnaires, comparative analysis of governance models, and global studies beyond the EU and U.S. They also advocate for developing formal AI agent certification standards and embedding ethical training within AI development pipelines.
First published in: Devdiscourse