Chinese top-tier university students want human-centric AI, not just efficiency tools
In an era of rapid digital transformation, a new study offers rare insight into how top-performing university students in China perceive the integration of artificial intelligence (AI) in education. Conducted at a prestigious C9 League university, the research explores the attitudes, expectations, and reservations of 253 students regarding AI’s role in teaching and learning. The study, titled “Perceptions of AI in Higher Education: Insights from Students at a Top-Tier Chinese University,” was published in the June 2025 issue of Education Sciences.
While students largely view AI as a time-saving, efficiency-boosting tool, the findings also reveal a gap between enthusiasm and proficiency, deep skepticism about AI’s reliability, and a strong desire for emotionally responsive, human-centered AI technologies. These nuanced perspectives emerge at the intersection of elite academic pressure, institutional endorsement, and the evolving role of technology in higher education.
How do students perceive AI’s usefulness and ease of use?
The study shows that students strongly associate AI tools with improved productivity and efficiency. Quantitative responses indicated that nearly 90% of participants believe AI enhances their learning efficiency, particularly for repetitive tasks, drafting text, and accessing reference material. However, this praise is tempered by a more cautious view of AI’s problem-solving capabilities. Students showed moderate confidence in AI’s ability to tackle complex academic challenges, reflecting a critical understanding of AI’s current limitations.
Perceived ease of use also ranked favorably, though students noted shortcomings when AI systems failed to deliver contextually accurate or sufficiently tailored responses. Many found AI tools relatively simple to operate, yet their functionality often did not meet specific academic demands. This subtle disconnect suggests that while AI may be user-friendly, it is not yet fully adapted to the nuanced needs of rigorous academic environments.
Furthermore, students’ behavioral intentions leaned toward selective adoption: most were willing to use AI but remained wary of overreliance, and few expressed any desire to use it indiscriminately. These trends illustrate a strategic, utilitarian engagement with AI that blends enthusiasm with cautious pragmatism.
What are students’ primary functional and emotional concerns?
Despite their interest in AI, students raised a host of concerns regarding accuracy, ethics, and personal development. A majority flagged issues around the reliability of AI-generated content, pointing to citation fabrication and incorrect responses as recurring problems. There was also a prevalent fear of plagiarism and the erosion of academic integrity. These concerns were not abstract; several students recounted experiences where AI tools provided misleading references or incomplete answers, prompting calls for better verification mechanisms.
Privacy and data security emerged as additional worries, especially given AI’s expanding presence in personalized learning environments. Some students were apprehensive about the implications of sharing sensitive information with systems they do not fully understand.
Beyond functional reservations, a deeper, more human dimension of concern surfaced in the qualitative responses. Many students expressed discomfort with AI’s impersonal nature and voiced a desire for tools that could offer emotional support, career advice, and even companionship. In a particularly striking example, one respondent envisioned a future where AI might be embedded in the human brain as a “secondary mind.” This imaginative leap points to the emotional void students sometimes experience in high-pressure environments, and their hope that AI could one day help fill it.
This emotional gap also manifests in expectations for personalized learning support. Students expressed a desire for AI systems that can adapt to individual needs, offer feedback with human-like sensitivity, and assist in building confidence, especially in challenging fields like mathematics, programming, and research-intensive disciplines.
How do institutional and social factors shape AI usage?
The institutional context of the university, characterized by advanced infrastructure, a tech-forward academic culture, and strategic AI partnerships, proved to be a major factor in students’ willingness to adopt AI tools. The university had recently launched AI-focused colleges and collaborated with industry leaders to deploy large language models across disciplines. These developments created an enabling environment that motivated students to engage with AI, even as they maintained a critical perspective on its performance.
Survey results showed that institutional support played a more significant role than peer or familial influence in encouraging AI usage. University resources, including access to AI platforms and training, were widely cited as factors that boosted student engagement. In contrast, social influence from immediate networks such as friends, family, or teachers had a limited effect on AI adoption behavior.
Notably, the study uncovered a clear gap between students’ high interest in AI (averaging 4.18 on a 5-point scale) and their self-rated proficiency (mean of 3.44). This disparity highlights a critical need for more comprehensive training programs and institutional initiatives that bridge the divide between curiosity and competence. Without targeted efforts to build AI literacy, student enthusiasm risks being underutilized or misdirected.
The elite status of the university also appears to condition students’ expectations and usage behaviors. Unlike students in more generalized contexts, participants here demonstrated what the researchers described as a “trust-but-verify” approach. They frequently relied on AI for inspiration and support but were vigilant in checking its outputs, especially in high-stakes academic tasks. This dual attitude reflects both the intellectual rigor of the environment and the high stakes attached to academic performance.
Implications and Future Directions
The findings offer critical implications for educators, AI developers, and policymakers. For academic institutions, the results emphasize the need to tailor AI training to students’ diverse competencies, ensuring that interest translates into effective usage. Curricula should be adjusted to foster AI fluency alongside critical thinking and ethical awareness.
Developers, meanwhile, are called upon to design AI tools that are not only functionally accurate but also emotionally intelligent. The demand for human-centric AI, capable of providing psychological support, tailored feedback, and culturally sensitive interactions, signals a frontier that remains underexplored in current product design.
Policymakers must consider regulatory frameworks that protect data privacy, promote transparency, and ensure that AI applications align with educational goals rather than commercial interests. Institutional endorsement of AI should go hand-in-hand with ethical safeguards that address students’ legitimate concerns.
Additionally, the researchers recommend future studies expand beyond elite Chinese universities to include diverse cultural and institutional settings. Comparative, cross-cultural research could reveal how different academic ecosystems influence the adoption and perception of AI in education.
First published in: Devdiscourse