Chatbots boost critical thinking and digital skills in university courses

CO-EDP, VisionRI | Updated: 30-05-2025 09:30 IST | Created: 30-05-2025 09:30 IST

In today’s rapidly evolving digital education landscape, artificial intelligence tools are reshaping the dynamics of teaching and learning. Among these innovations, AI-powered chatbots are gaining traction for their ability to support academic inquiry, offer personalized feedback, and facilitate student engagement in virtual settings. As these tools become commonplace in university classrooms and online courses, educators and researchers are increasingly asking whether chatbots enhance or hinder meaningful learning.

A new peer-reviewed study offers critical insights into this debate. Titled "Investigating AI Chatbots' Role in Online Learning and Digital Agency Development" and published in Education Sciences in 2025, the research examines how students interact with AI chatbots during online coursework and whether these tools promote or hinder the development of digital agency - the capacity to think critically, act independently, and make informed choices in digital contexts.

How are students using chatbots in online education?

Drawing from qualitative data gathered through digital learning journals from 77 university students in Finland, the study explores how learners voluntarily incorporated AI chatbots into their coursework during a six-week online university module. Participants were free to choose their own chatbot tools, including ChatGPT and others, without formal instruction on usage.

The analysis revealed that students used chatbots across a broad spectrum of educational tasks. These included improving text clarity, organizing research material, generating summaries, facilitating brainstorming, and even translating academic sources. Notably, learners often framed these interactions as collaborative processes, using expressions like "we" or referring to the chatbot as a "colleague." This linguistic framing suggested a shift in how students perceive and co-construct knowledge with digital agents.

Instead of treating chatbots as passive tools, students positioned them as intelligent dialogue partners that could scaffold critical thinking, offer conceptual guidance, or help structure ideas. Chatbot-supported learning was not restricted to factual queries but expanded into reflective, iterative, and metacognitive practices. This self-directed use pointed to growing learner autonomy and adaptability in AI-mediated environments.

Do chatbots foster or replace student agency and reflection?

The study's central finding is that AI chatbot usage did not displace students' cognitive effort but instead facilitated deeper learning strategies. Many students reported that engaging with chatbots helped them clarify concepts and reframe their thinking. For instance, some learners tested their understanding by paraphrasing AI-generated responses or by comparing the chatbot's feedback with their own ideas.

This dialogic process often led to enhanced reflection, planning, and self-monitoring - key components of digital agency. Students not only asked questions but also evaluated the quality of chatbot answers, adjusted their learning goals accordingly, and learned how to steer chatbot interactions more effectively over time.

However, the study also acknowledged nuanced risks. A few students described tendencies to over-rely on chatbots, especially for quick answers or grammatical fixes, which could limit critical engagement. The researchers emphasized that digital agency is not a binary outcome but a developmental process. The chatbot's potential to enhance or hinder learner autonomy depends largely on the user's intent, digital literacy, and metacognitive awareness.

Moreover, learners developed strategies to manage chatbot limitations, such as rephrasing prompts, cross-verifying answers, or combining chatbot input with peer and teacher feedback. This reflexive adaptation underlines how students are not merely consumers of AI-generated content but are actively learning to navigate complex human-AI interactions.

What are the pedagogical implications for educators and institutions?

The findings carry important implications for instructional design in higher education. Rather than banning or ignoring AI chatbots, educators are urged to treat them as part of the broader digital learning ecosystem. When appropriately scaffolded, chatbots can encourage critical thinking, collaborative learning, and autonomous knowledge-building.

One key recommendation from the study is to embed opportunities for reflection on AI use directly into course activities. Digital learning journals, as employed in this research, served not only as data sources but as metacognitive tools that encouraged students to evaluate their own learning behaviors. This practice could be expanded to help learners develop AI literacy, including ethical awareness and source verification skills.

Additionally, the study highlights the need to design curricula that are robust to AI augmentation. This involves rethinking assessment strategies to prioritize higher-order thinking, creativity, and problem-solving over rote reproduction. Courses should integrate guidance on productive AI use while also challenging students to critique and contextualize AI-generated content.

Institutionally, universities must support faculty development and provide frameworks that balance innovation with integrity. As AI tools become ubiquitous, educational policies should address their use not through restriction but through strategic integration, ensuring that learners are equipped to operate as reflective, responsible participants in digital societies.

  • FIRST PUBLISHED IN:
  • Devdiscourse