Students see generative AI as a supplement, not a substitute, in learning

CO-EDP, VisionRI | Updated: 22-09-2025 12:44 IST | Created: 22-09-2025 12:44 IST

Generative artificial intelligence, or GenAI, is rapidly entering classrooms worldwide, and new evidence shows how students’ views evolve when they use it in structured, active learning environments. A study provides fresh insight into the educational role of AI, revealing both opportunities and limits in how undergraduates interact with emerging technologies.

The article, “Human–AI Collaboration: Students’ Changing Perceptions of Generative Artificial Intelligence and Active Learning Strategies,” published in Sustainability, analyzes the impact of integrating AI into course design. The findings show that while many students find AI useful for retrieving information and generating ideas, effective collaboration requires critical thinking, prompt design, and structured guidance from instructors.

How do students use AI in the classroom?

The study focused on a cohort of 100 undergraduates enrolled in three course sections between 2023 and 2024 at a Korean university. Students were introduced to posthumanism concepts before undertaking writing assignments that required them to engage with generative AI at least three times. Each student drafted an essay, prompted AI for feedback and expansion, and then incorporated AI responses into their final submission. Peer reviews were also part of the process, offering a multi-layered view of how AI influenced both individual work and collaborative evaluation.

Analysis revealed three distinct learner types shaped by how students perceived task difficulty. Cognitive learners used AI primarily for factual information and idea generation, keeping the tool at the level of a reference engine. Metacognitive learners treated AI as a dialogue partner, refining prompts, evaluating answers, and iterating until they achieved satisfactory results. Affective learners maintained clear boundaries, reluctant to accept AI as a collaborator, and expressed doubts about its capacity for genuine interaction.

Despite these differences, a clear pattern emerged. Students overwhelmingly reported that AI was most useful for retrieving information quickly and reliably. Idea generation and grammar assistance were valued but less consistently. The results point to a student body that views AI as an efficiency booster, though not as a substitute for human reasoning.

What do students believe AI should do in higher education?

While most students embraced AI’s capacity to simplify tasks, they were cautious about its role in education. The study found that students agreed most strongly on the supplementary role of AI in learning. They saw it as a tool to expand access to resources, enable deeper study, and provide translation support for international materials.

This perception is significant. Rather than viewing AI as a replacement for teachers or personal study, students saw its function as an enhancer. For them, generative AI offered expanded perspectives and reduced cognitive load but did not substitute for critical thinking or peer interaction.

Students also identified what they needed to use AI effectively. They rated training in organizing, evaluating, and applying information as the most important skill to ensure responsible and beneficial use. Calls for clear instructions on AI methods and explicit rules set by instructors reflected a desire for structure and transparency. In other words, students wanted AI embedded within a guided learning process rather than introduced in isolation.

Importantly, prompting behavior shaped outcomes. Nearly half of the students said that more prompts led to richer themes in their essays. A majority reported that their critical thinking remained consistent during AI use, and most judged their final essays logically sound. These findings suggest that students who actively engaged with AI, rather than passively copying outputs, experienced better academic outcomes.

What are the implications for teaching and policy?

The authors argue that the study offers key lessons for educators and policymakers. First, AI should be integrated into course design as part of a structured process. Simply providing access to AI tools risks shallow engagement and potential misuse. Instead, effective teaching must guide students in prompt design, source evaluation, and critical integration of AI-generated material.

Second, AI’s role in classrooms should be framed as collaborative rather than substitutive. The evidence shows that students themselves prefer AI as a support mechanism, not a replacement for instructors or traditional study practices. This perspective aligns with sustainable education goals, which emphasize long-term skill development over short-term efficiency.

Third, universities and policymakers should ensure equitable access to reliable AI systems while also establishing ethical guidelines for use. Students expressed a clear demand for formal instruction and regulatory clarity. Structured training programs for both students and instructors could prevent misuse, reduce anxiety, and improve the quality of outcomes.

The findings further point to broader societal questions. By exposing students to both the capabilities and the limits of AI, the course encouraged reflection on the boundaries between human and machine intelligence. Students who engaged critically with these issues were better able to regulate their use of AI and preserve their own intellectual contributions.

First published in: Devdiscourse