Students offload critical thinking to ChatGPT, becoming cognitively passive

CO-EDP, VisionRI | Updated: 03-07-2025 09:29 IST | Created: 03-07-2025 09:29 IST

A new study has raised concerns about the cognitive effects of generative artificial intelligence (AI) tools in academic settings. Titled “ChatGPT produces more ‘lazy’ thinkers: Evidence of cognitive engagement decline” and posted as a preprint on arXiv, the research provides compelling evidence that generative AI, specifically ChatGPT, may significantly reduce students’ cognitive engagement during academic writing tasks.

While ChatGPT and similar large language models (LLMs) have become ubiquitous in education, especially for writing support, this study delves into the psychological and pedagogical implications of their use. Using a controlled experiment that compared AI-assisted writing with traditional unaided writing, the research finds that students using ChatGPT exhibit lower levels of mental effort, attention, and strategic thinking. These findings raise critical questions about the long-term role of AI in learning environments and whether its benefits outweigh its cognitive costs.

Does ChatGPT enhance or undermine student engagement?

Past discourse on LLMs has centered on their efficiency, utility, and ethical considerations. However, cognitive engagement, a student's psychological investment and mental effort during learning, has received comparatively little attention.

The research addressed this gap using a controlled design. Forty university-level participants were randomly divided into two groups: one that used ChatGPT to assist in a structured argumentative writing task, and another that completed the same task unaided. Afterward, all participants completed a newly developed four-item Cognitive Engagement Scale for AI (CES-AI), assessing deep processing, mental effort, sustained attention, and strategic thinking.

The results were stark. Participants in the AI-assisted group scored significantly lower on all measures of cognitive engagement compared to their non-AI counterparts. The difference was statistically significant, with the control group averaging a CES-AI score of 4.19 compared with 2.95 for the ChatGPT group. This suggests that rather than supporting learning, ChatGPT may inadvertently promote cognitive offloading, where users reduce their own mental effort in favor of the AI’s capabilities.
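For readers who want to see what such a between-groups comparison looks like in practice, the short Python sketch below simulates per-participant CES-AI ratings and compares the two groups. It is purely illustrative: the individual ratings are generated around the reported group means, and both the scoring rule (averaging the four items) and the choice of Welch's t-test are assumptions made for demonstration rather than details taken from the study.

    # Illustrative sketch only (not the study's data or analysis code).
    # It compares simulated CES-AI scores for a control group and a
    # ChatGPT group with Welch's t-test; group sizes (20 each) and the
    # target means (4.19 vs. 2.95) mirror the reported figures.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def ces_ai_score(item_ratings):
        # Assumed scoring rule: a participant's CES-AI score is the mean
        # of their four item ratings on a 1-5 scale.
        return float(np.mean(item_ratings))

    # Simulated four-item ratings centred on the reported group means.
    control_items = np.clip(rng.normal(4.19, 0.5, size=(20, 4)), 1, 5)
    chatgpt_items = np.clip(rng.normal(2.95, 0.5, size=(20, 4)), 1, 5)

    control_scores = np.array([ces_ai_score(p) for p in control_items])
    chatgpt_scores = np.array([ces_ai_score(p) for p in chatgpt_items])

    # Welch's t-test does not assume equal variances across groups.
    t_stat, p_value = stats.ttest_ind(control_scores, chatgpt_scores, equal_var=False)

    print(f"Control mean CES-AI: {control_scores.mean():.2f}")
    print(f"ChatGPT mean CES-AI: {chatgpt_scores.mean():.2f}")
    print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")

A gap of this size between group means, with samples of this size, would typically yield a very small p-value, which is consistent with the study's description of the difference as statistically significant.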

What specific aspects of cognitive engagement decline?

The study's multidimensional assessment tool revealed precisely where AI use diminished engagement. Students using ChatGPT were less likely to engage in deep processing, meaning they were more prone to skimming and relying on surface-level understanding. They reported exerting less mental effort, reflecting a decrease in self-regulated thinking, and were more susceptible to distractions, indicating weaker sustained attention.

Additionally, these students demonstrated less strategic flexibility, failing to explore alternative perspectives or problem-solving methods. This contrasts with the control group, which showed higher levels of cognitive adaptability and engagement. Such findings highlight the multifaceted nature of engagement and illustrate how AI tools may erode critical components of effective learning.

This pattern of cognitive disengagement mirrors earlier findings in related studies. Research by Kosmyna et al. (2025) showed that LLM users exhibited reduced neural activity and weaker memory retention. Similarly, Lo et al. (2024) noted a general decline in cognitive and metacognitive engagement among students using ChatGPT, although their review also acknowledged behavioral benefits such as increased participation.

In contrast, the present study provides direct experimental evidence and a novel instrument (CES-AI) tailored to measure engagement specifically in the context of AI-assisted tasks, allowing for a more precise diagnosis of how and where cognitive deficits emerge in AI-integrated learning environments.

What are the implications for education and future research?

The findings challenge the assumption that AI tools inherently enhance learning. While ChatGPT can undoubtedly support idea generation and provide structural scaffolding, it appears to do so at the cost of deeper cognitive engagement, particularly when used uncritically.

The study calls for a rethinking of how AI is integrated into academic practice. Rather than banning these tools, the study's author, Georgiou, advocates for pedagogical strategies that encourage active, reflective use. For instance, educators can design assignments that require students to critique or annotate AI-generated content, thus fostering metacognitive awareness and critical thinking. Alternatively, hybrid models can be adopted in which AI is used selectively, ensuring that students remain the primary drivers of their intellectual work.

Limitations of the study, such as its reliance on self-reported data and a relatively small, homogenous sample, point to areas for further exploration. Future research should incorporate neurophysiological tools and behavioral analytics to triangulate findings. Additionally, expanding the participant pool across disciplines, languages, and educational levels would enhance generalizability.
