How task type and support systems shape student engagement with GenAI

A new study reveals the key psychological and contextual drivers influencing university students’ willingness to continue using generative AI (GenAI) tools in their academic work. Conducted in China and published in Systems (2025) under the title “Understanding Trust and Willingness to Use GenAI Tools in Higher Education: A SEM-ANN Approach Based on the S-O-R Framework”, the research highlights how trust, expectations of performance, and task type determine students’ sustained use of AI for learning.
The findings offer timely insights for universities worldwide as they navigate the integration of GenAI into curricula and support students in developing responsible and productive AI-assisted study habits.
What shapes trust in GenAI tools
The study builds on the stimulus–organism–response (S-O-R) framework to examine how external conditions, students’ perceptions, and cognitive-emotional responses interact to shape usage intentions. It focuses on three key stimuli: AI facilitating conditions (such as access, skills, and institutional support), performance expectancy (students’ belief in AI’s effectiveness), and task type (distinguishing subjective, creative tasks from objective, fact-based ones).
Trust is divided into two forms. System-like trust reflects confidence in the tool’s technical reliability, accuracy, and usability. Human-like trust captures perceptions of ethical alignment, fairness, and empathy, qualities that influence how students relate to AI as more than just a machine. The study then analyzes how both forms of trust affect the intention to continue using GenAI for academic purposes.
Using survey data from 466 Chinese university students, the researchers applied partial least squares structural equation modeling (PLS-SEM) to map causal pathways and artificial neural network (ANN) analysis to rank the relative importance of each factor.
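The ANN stage of such a pipeline typically ranks predictors by how much model accuracy degrades when each one is disturbed. As a rough illustration only (not the authors' code), the sketch below simulates survey-style data with invented feature names and weights, fits a simple linear model as a stand-in for the neural network, and ranks inputs by permutation importance:

```python
import random

random.seed(42)

# Hypothetical predictors loosely mirroring the study's model (names assumed):
FEATURES = ["facilitating_conditions", "performance_expectancy", "task_type_subjective"]

# Simulated data in which performance expectancy drives the outcome most
# strongly; the true weights here are invented purely for this sketch.
TRUE_W = [0.3, 0.8, 0.4]
N = 400
X = [[random.gauss(0, 1) for _ in FEATURES] for _ in range(N)]
y = [sum(w * x for w, x in zip(TRUE_W, row)) + random.gauss(0, 0.2) for row in X]

def fit_linear(X, y, lr=0.05, epochs=300):
    """Fit weights by stochastic gradient descent on squared error."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for row, target in zip(X, y):
            err = sum(wi * xi for wi, xi in zip(w, row)) - target
            w = [wi - lr * err * xi for wi, xi in zip(w, row)]
    return w

def mse(w, X, y):
    return sum((sum(wi * xi for wi, xi in zip(w, row)) - t) ** 2
               for row, t in zip(X, y)) / len(y)

w = fit_linear(X, y)
base_error = mse(w, X, y)

# Permutation importance: shuffle one predictor at a time and measure how
# much the prediction error rises; bigger rise = more important predictor.
importance = {}
for j, name in enumerate(FEATURES):
    shuffled = [row[:] for row in X]
    col = [row[j] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[j] = v
    importance[name] = mse(w, shuffled, y) - base_error

ranking = sorted(importance, key=importance.get, reverse=True)
print(ranking)  # most influential predictor first
```

On this synthetic data the ranking recovers the feature given the largest true weight, which is the kind of relative-importance ordering the SEM-ANN approach reports.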
The results show that performance expectancy plays the strongest role in shaping both system-like and human-like trust. Students who believe that GenAI tools will enhance their academic performance are more likely to trust the tools’ technical functions and feel aligned with their ethical use. Facilitating conditions, such as access to stable internet, institutional policies, and adequate training, significantly strengthen system-like trust but do not directly influence human-like trust.
Interestingly, the type of task influences trust asymmetrically. When students use GenAI for subjective tasks, like essay writing, brainstorming, or creative problem solving, they report higher levels of both forms of trust. By contrast, using AI for objective tasks, such as factual quizzes or calculation-heavy assignments, shows no significant effect on trust.
Trust as the bridge to continued use
The research highlights trust as a crucial mediator between external factors and long-term engagement. System-like trust positively influences human-like trust, suggesting that confidence in a tool’s functionality is a prerequisite for students to perceive it as a responsible and relatable partner in learning.
Both trust types strongly predict continuance usage intention, with system-like trust having a slightly stronger direct effect. This pattern implies that institutions aiming to promote sustained and responsible GenAI adoption should first ensure that tools meet students’ expectations for accuracy, reliability, and ease of use before addressing more nuanced concerns about fairness and ethics.
ANN results further clarify the hierarchy of drivers:
- Performance expectancy is the leading determinant of system-like trust.
- System-like trust itself is the top driver of human-like trust.
- Facilitating conditions exert the greatest influence on the final outcome: continuance usage intention.
This evidence suggests a sequential pathway where practical support and functional reliability build confidence, which in turn fosters deeper trust and stronger motivation to continue using GenAI.
Implications for universities and policymakers
The study’s findings carry practical implications for higher education institutions grappling with the rapid spread of GenAI in classrooms and study environments. Universities can take several concrete steps to encourage responsible adoption:
- Strengthen facilitating conditions: Provide robust technical infrastructure, offer targeted training sessions, and establish clear policies to help students use AI tools effectively and confidently.
- Enhance performance expectancy: Introduce GenAI in ways that demonstrate its tangible benefits for learning outcomes, such as improved comprehension, feedback, or creativity.
- Prioritize subjective, creative learning tasks: Integrate GenAI into areas like writing, design, and open-ended problem solving, where it naturally complements human judgment and sparks innovation.
- Build trust gradually: Focus first on the functional reliability and transparency of AI tools, then address ethical considerations to foster human-like trust over time.
These strategies, the authors stress, can help universities leverage GenAI to enrich student learning experiences while maintaining focus on academic integrity and equitable access. They also suggest that future research could explore cultural and disciplinary differences in trust formation, as well as the long-term impact of GenAI integration on learning outcomes.
First published in: Devdiscourse