Generative AI threatens critical thinking in health education

A new study warns that unchecked reliance on technologies like generative artificial intelligence (genAI) could erode the very cognitive skills essential to healthcare education. Titled “Critical Thinking in the Age of Generative AI: Implications for Health Sciences Education” and published in Frontiers in Artificial Intelligence (May 2025), the study by Naqvi et al. raises urgent concerns about the risks of cognitive complacency, weakened research integrity, and deteriorating clinical judgment among health science students and professionals.
Is generative AI undermining cognitive skills in healthcare education?
The study explores how the proliferation of genAI, particularly large language models (LLMs), is reshaping the intellectual landscape of medical training and research. While these systems offer high efficiency in summarizing literature, drafting manuscripts, and designing intervention protocols, their convenience comes at a hidden cost: diminished engagement in critical reasoning.
Medical educators and researchers are increasingly incorporating genAI into literature reviews, diagnostic support, and academic writing. However, students and trainees may trade intellectual rigor for speed, leading to a decline in skills such as ethical judgment, independent problem-solving, and analytical interpretation. The study warns that health professionals risk becoming passive recipients of AI-generated content rather than active evaluators of evidence, a shift that could compromise the quality of care and decision-making.
GenAI’s automation capabilities often mask the cognitive labor traditionally involved in scientific inquiry. Instead of interpreting data, questioning assumptions, or identifying biases, users may become dependent on outputs that mimic authority but lack contextual depth. This dynamic fosters what the authors term “cognitive complacency,” where professionals begin accepting algorithmic answers uncritically. The result: a generation of clinicians and researchers well-versed in AI tools but lacking in foundational cognitive disciplines that underpin ethical healthcare.
How can AI be aligned with critical thinking and academic integrity?
The authors propose repositioning genAI as a tool for “augmented intelligence” rather than as a surrogate for human reasoning. The goal is not to discard AI but to integrate it in a way that strengthens, not weakens, cognitive engagement. This reframing encourages users to apply critical scrutiny to AI-generated outputs, challenge flawed assumptions, and contextualize recommendations.
The study emphasizes the concept of “human-centered AI,” which preserves human oversight and intellectual agency throughout diagnosis, research, and education. Human validation sits at the core of AI-augmented decision-making: genAI proposes diagnoses, clinicians review and validate the outputs, and outcome feedback refines future AI performance.
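To make that cycle concrete, the minimal Python sketch below treats the AI suggestion as advisory, acts only on the clinician's decision, and logs outcome feedback for later model evaluation. It is an illustrative sketch under stated assumptions, not the study's implementation: the CaseRecord fields, the in-memory review_log, and the function names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CaseRecord:
    """Hypothetical record pairing an AI suggestion with its human review and outcome."""
    patient_id: str
    ai_suggestion: str
    clinician_decision: str
    outcome: Optional[str] = None

# Simple in-memory log standing in for an institutional audit trail.
review_log: List[CaseRecord] = []

def ai_assisted_decision(patient_id: str, ai_suggestion: str, clinician_decision: str) -> str:
    """Record both views of the case; only the clinician's decision is acted on."""
    review_log.append(CaseRecord(patient_id, ai_suggestion, clinician_decision))
    return clinician_decision  # human judgment, not the model output, drives care

def record_outcome(patient_id: str, outcome: str) -> None:
    """Attach outcome feedback so it can later inform model evaluation or retraining."""
    for record in review_log:
        if record.patient_id == patient_id and record.outcome is None:
            record.outcome = outcome

# Example: the AI proposes, the clinician decides, and the outcome closes the loop.
decision = ai_assisted_decision("patient-001", "suspected sepsis", "order blood cultures and lactate")
record_outcome("patient-001", "early intervention, patient stabilized")
```

The point of the design is that the model's output never bypasses the reviewing clinician, while the logged outcomes provide the feedback the authors describe for refining future AI performance.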
The risk, the authors warn, is a global “feedback loop” where genAI repeatedly reinforces existing knowledge patterns instead of catalyzing novel inquiry. Since genAI systems are trained on historical data, their outputs tend to replicate known answers, potentially narrowing the range of intellectual exploration. This feedback loop may compromise scientific innovation and the ethical integrity of academic publishing, especially in disciplines such as healthcare that demand both analytical rigor and emotional intelligence.
Transparency and reproducibility, cornerstones of scholarly communication, are also threatened by genAI. Current plagiarism detection tools often fail to flag AI-generated text, and instructors are increasingly unable to distinguish between student work and AI outputs. The result is an erosion of authorship clarity, peer review standards, and ethical research practices.
What role should institutions play in responsible AI adoption?
The study makes a strong case for institutional responsibility in governing the ethical use of genAI in medical education. Leading universities such as Western Michigan University and the University of Illinois have already introduced structured AI training in clinical simulations and scientific writing. These programs are designed not only to improve AI fluency but also to cultivate the critical appraisal skills essential for health professionals.
Academic curricula must explicitly frame genAI as a supportive tool rather than a shortcut. This means embedding AI ethics, prompt engineering, bias detection, and cross-verification training into health science education. Assignments should require students to compare AI-generated insights with peer-reviewed sources and reflect on the epistemological limitations of machine-generated knowledge.
Medical educators must also address a knowledge gap among faculty themselves. Many instructors lack the technical expertise to detect or evaluate genAI-generated submissions, leading to inconsistent standards in assessment and authorship. Without proper oversight, students may exploit this ambiguity, further diluting the pedagogical value of education grounded in human inquiry.
Moreover, the study warns of the clinical consequences of misapplied AI. Cases such as flawed sepsis prediction models illustrate how inaccuracies in AI outputs can lead to delayed intervention and patient harm. Institutions must therefore mandate human validation for any AI-assisted diagnostic or therapeutic decision to avoid such life-threatening risks.
First published in: Devdiscourse