Health systems that don’t integrate AI risk falling behind

Global healthcare systems face a defining challenge: how to integrate artificial intelligence into clinical care while improving quality, cutting costs, and overcoming deeply entrenched barriers. A new study published in npj Health Systems under the title “Learning Health System Strategies in the AI Era” presents a detailed roadmap for how institutions can operationalize the Learning Health System (LHS) model as a scalable, value-driven framework for healthcare delivery.
The study argues that AI’s potential will remain untapped unless health systems fundamentally realign governance, data infrastructure, workforce training, and quality improvement practices. While the LHS model has been widely endorsed by national and global health authorities, it remains poorly implemented. This new work positions the LHS not just as a theoretical ideal, but as an urgent strategic response to technological disruption and economic constraint.
What prevents Learning Health Systems from scaling in the AI era?
Despite broad recognition of its promise, LHS implementation has been hindered by fragmented data, siloed institutional cultures, regulatory inertia, and the lack of agile governance frameworks. The authors emphasize that the health information technology (HIT) and biomedical informatics (BMI) domains remain poorly integrated across most U.S. systems, undermining the continuity needed for evidence generation, real-time analytics, and clinical feedback loops.
The COVID-19 pandemic, however, offered a rare proof of concept. Health systems that coordinated HIT–BMI resources effectively deployed rapid-response dashboards, predictive analytics for patient loads, and telehealth workflows. These high-performing systems demonstrated that HIT–BMI synergy, termed "ambidextrous leadership" in the paper, can enable operational reliability and innovation simultaneously.
Still, most institutions face substantial obstacles. Data remains highly variable, poorly standardized, and often biased. Interoperability across electronic health record (EHR) systems is inconsistent. And while large language models (LLMs) offer promise in automating foundational analytic tasks, concerns over hallucinations, reproducibility, and regulatory compliance persist.
How can health systems use AI to drive research and quality?
The paper makes a critical distinction between model development and real-world implementation. While AI research has flourished, clinical deployment remains rare due to inadequate data infrastructure, workforce readiness, and governance capabilities. Institutions must be prepared not just to test AI models, but to maintain, monitor, and adapt them within live clinical environments.
Strategies recommended include:
- Silent trials of AI tools running in the background of clinical systems to identify data drift and bias.
- Rapid-cycle randomized testing of quality improvement (QI) interventions using EHR-integrated A/B experiments.
- Pre-implementation validation of clinical decision support tools to assess feasibility before real-world use.
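The silent-trial idea in the first bullet boils down to comparing a model's live input data against its training-era baseline. As a rough illustration (not the paper's method, and using synthetic numbers in place of real EHR feeds), the sketch below computes a Population Stability Index (PSI), a common drift statistic where values under 0.1 are usually read as stable and values above 0.25 as significant drift:

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (e.g. training-era
    lab values) and a live sample. Rule of thumb: <0.1 stable, >0.25 major drift."""
    lo, hi = min(expected), max(expected)
    # Equal-width bins over the baseline range; open-ended outer bins
    # so live values outside the baseline range are still counted.
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")

    def fractions(data):
        counts = [0] * bins
        for x in data:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        # Smooth empty bins so the log term stays finite.
        return [max(c, 0.5) / len(data) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Synthetic stand-in for a monitored model input.
random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training era
stable   = [random.gauss(0.0, 1.0) for _ in range(5000)]  # live, no shift
shifted  = [random.gauss(0.8, 1.0) for _ in range(5000)]  # live, drifted

psi_stable = psi(baseline, stable)
psi_shifted = psi(baseline, shifted)
print(f"stable: {psi_stable:.3f}  shifted: {psi_shifted:.3f}")
```

In a real deployment this check would run periodically against each model input in the background, flagging features whose PSI crosses the drift threshold for human review before the model's recommendations degrade.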
Successful examples cited include the University of California’s ACQUIRE framework for aligning QI, research, and evaluation. Similarly, NYU Langone’s QI trials on flu vaccine messaging and appointment adherence have demonstrated measurable clinical and operational benefits.
The authors argue that these techniques close the translational research gap by replacing slow, expensive trials with adaptive, systems-integrated methods. The LHS model thus becomes a tool not just for knowledge creation, but for accelerating safe, data-driven innovation.
What workforce and financial strategies are required?
Transforming health systems into AI-ready learning organizations requires a massive investment in human capital. Yet most clinicians and administrators remain underprepared. The study critiques traditional training pipelines for failing to include systems science, digital health, and implementation research. Medical schools and residency programs still focus narrowly on biological knowledge and task-based skills.
Recent national initiatives, such as AHRQ- and PCORI-funded LHS researcher training programs, have begun to address this gap. The report highlights dozens of emerging master's and certificate programs in clinical informatics and digital health at institutions like Stanford, Cornell Tech, and UT Houston. However, these remain fragmented and insufficiently evaluated.
The authors call for a standardized “Master of Digital Health” degree, analogous to the MPH, to unify training across data science, AI ethics, system redesign, and health technology leadership. They also stress the importance of embedding informatics and QI principles into existing curricula at all levels, including nursing, residency, and executive education.
Financially, the study acknowledges that traditional research funding models are misaligned with LHS needs. As federal agencies like the NIH reduce indirect cost reimbursements, academic health centers are under growing pressure. The paper urges institutions to reallocate internal budgets from low-value programs toward scalable digital health evaluation capabilities. Examples of such strategic alignment include innovation centers at Stanford, Penn Medicine, and UC San Diego.
Additionally, the authors suggest that AI could reduce overhead by semi-automating administrative processes like billing, coding, and regulatory reporting, potentially offsetting the costs of its own implementation. This cycle of reinvestment would make the LHS model not only operationally feasible, but financially self-sustaining over time.
- FIRST PUBLISHED IN:
- Devdiscourse