AI in higher education: Trust, risk, and institutional support shape teachers’ digital literacy

The successful integration of artificial intelligence (AI) in higher education depends heavily on teachers’ perceptions, trust, and institutional backing. A newly published study provides fresh evidence on how educators navigate the opportunities and challenges posed by AI tools in universities.

The research, titled “From Perception to Practice: Artificial Intelligence as a Pathway to Enhancing Digital Literacy in Higher Education Teaching,” was published in Systems. It investigates the psychological, social, and organizational factors influencing university teachers’ willingness to adopt AI in their classrooms. The findings emphasize that while educators recognize AI’s benefits, trust, perceived risks, and institutional support play decisive roles in shaping adoption behavior.

How do teachers perceive AI in their classrooms?

The study analyzed data from 487 university teachers across China using a perception–intention–behavior framework. The results reveal that two key factors, perceived usefulness (PU) and perceived ease of use (PEOU), strongly predict teachers’ intention to adopt AI tools. Educators are more likely to embrace AI when they believe it enhances teaching effectiveness, student engagement, and course design. Additionally, ease of use not only encourages adoption but also amplifies perceptions of AI’s utility, highlighting the need for user-friendly tool design.
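To make the framework's logic concrete, here is a minimal sketch of how a predictive model of this kind can be estimated and how "variance explained" is computed. It is an illustration only: the data are randomly simulated, and the variable names, coefficients, and ordinary least squares approach are assumptions for demonstration, not the study's actual method or estimates.

```python
# Minimal sketch (simulated data): perceived usefulness (PU) and perceived
# ease of use (PEOU) jointly predicting behavioral intention (BI), plus the
# R^2 "variance explained" statistic. All numbers are illustrative
# assumptions, not the study's estimates.
import numpy as np

rng = np.random.default_rng(42)
n = 487  # sample size matching the study's 487 respondents

# Simulated Likert-style predictor scores
pu = rng.normal(3.5, 0.8, n)    # perceived usefulness
peou = rng.normal(3.2, 0.9, n)  # perceived ease of use

# Hypothetical data-generating process: both factors raise intention
bi = 0.6 * pu + 0.4 * peou + rng.normal(0, 0.7, n)

# Ordinary least squares: BI ~ PU + PEOU
X = np.column_stack([np.ones(n), pu, peou])
beta, *_ = np.linalg.lstsq(X, bi, rcond=None)

# R^2: the share of variance in intention the model accounts for
residuals = bi - X @ beta
r2 = 1 - residuals.var() / bi.var()
print(f"coefficients (intercept, PU, PEOU): {np.round(beta, 2)}")
print(f"R^2 = {r2:.3f}")
```

The R² printed here is the same kind of figure as the study's reported 67.2% of variance explained, although the study's own model includes more constructs than this two-predictor sketch.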

Beyond these cognitive factors, trust emerged as a critical mediator. Teachers who trust AI systems, believing they are reliable, secure, and unbiased, show a greater willingness to integrate them into their teaching practices. Conversely, perceived risk, including concerns over data privacy, technical failures, and diminished human interaction, acts as a barrier to adoption. Interestingly, these risks are mitigated when AI tools are straightforward and intuitive to use, underscoring the value of accessible technology design.
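Describing trust as a mediator has a precise statistical meaning: part of ease of use's effect on intention is routed through trust, and that indirect effect is the product of the two connecting paths. The sketch below illustrates the idea on simulated data; the paths, coefficients, and variable names are hypothetical assumptions, not the study's results.

```python
# Minimal mediation sketch (simulated data): trust mediating between
# perceived ease of use (PEOU) and behavioral intention (BI). The indirect
# effect is the product of the PEOU->trust and trust->BI paths.
import numpy as np

rng = np.random.default_rng(7)
n = 487

peou = rng.normal(3.2, 0.9, n)
trust = 0.5 * peou + rng.normal(0, 0.6, n)              # path a: PEOU -> trust
bi = 0.3 * peou + 0.6 * trust + rng.normal(0, 0.6, n)   # paths c' and b

def ols_slopes(y, *xs):
    """Return OLS slopes of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_slopes(trust, peou)[0]        # PEOU -> trust
b = ols_slopes(bi, peou, trust)[1]    # trust -> BI, controlling for PEOU
print(f"indirect effect via trust (a*b) = {a * b:.3f}")
```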

A distinctive finding is the role of the perceived substitution crisis (PSC), teachers’ fear of being replaced by AI. While PSC negatively influences trust, it does not directly discourage AI adoption. This indicates that despite identity-related anxieties, educators remain pragmatic in exploring AI’s potential as long as they maintain confidence in its reliability and relevance.

What social and organizational factors influence AI adoption?

The research demonstrates that adoption is not purely an individual decision but is shaped by the environment in which educators operate. Peer influence (PI) plays a significant moderating role, particularly in reinforcing trust. Teachers who witness their colleagues successfully using AI tools are more likely to trust and adopt the technology themselves. This social validation effect aligns with the broader theory of innovation diffusion, where interpersonal networks drive technology uptake.
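In statistical terms, a moderating role like peer influence's is typically captured by an interaction term: the strength of one relationship depends on the level of another variable. A hypothetical sketch with simulated data (coefficients and variable names are illustrative assumptions, not the study's estimates):

```python
# Minimal moderation sketch (simulated data): peer influence (PI)
# strengthening the link from perceived usefulness (PU) to trust,
# modeled as a PU x PI interaction term.
import numpy as np

rng = np.random.default_rng(11)
n = 487

pu = rng.normal(3.5, 0.8, n)   # perceived usefulness
pi = rng.normal(3.0, 1.0, n)   # peer influence

# Hypothetical process: PU builds trust more strongly when PI is high
trust = 0.3 * pu + 0.2 * pi + 0.25 * pu * pi + rng.normal(0, 0.6, n)

# Regression with an interaction; mean-centering aids interpretation
pu_c, pi_c = pu - pu.mean(), pi - pi.mean()
X = np.column_stack([np.ones(n), pu_c, pi_c, pu_c * pi_c])
beta = np.linalg.lstsq(X, trust, rcond=None)[0]
print(f"interaction coefficient (PU x PI): {beta[3]:.3f}")
# A positive interaction means the PU -> trust slope rises with peer influence.
```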

Equally significant is organizational support (OS), which amplifies the impact of ease of use on adoption. Institutions that provide training programs, technical assistance, and clear policies create an enabling environment for educators. The study notes that in Chinese higher education, this support aligns with national-level initiatives such as the “Smart Education of China” framework, which promotes digital transformation across universities.

However, general institutional support alone does not fully address risk-related concerns. The authors highlight that targeted measures, such as transparent data security protocols and robust system reliability guarantees, are essential to overcoming specific fears around AI use. This insight underscores the need for policies that go beyond infrastructure and actively address educators’ psychological and ethical concerns.

What strategies can drive AI integration in higher education?

The authors propose that AI tools should be co-developed with educators, ensuring that designs address practical classroom needs. Continuous feedback loops between teachers and developers can enhance both usability and relevance.

Building trust through transparent governance is essential. Universities must adopt strong data protection measures and openly communicate these safeguards to faculty members. Trust-building also extends to reducing fears of job displacement by positioning AI as a complement to human expertise, not a replacement.

Leveraging peer influence is another crucial strategy. Institutions should encourage educators who have successfully integrated AI to share their experiences through workshops and collaborative forums. These peer-led initiatives can normalize AI use, reduce resistance, and foster a culture of innovation.

Furthermore, the study calls for a comprehensive professional development roadmap. Universities should invest in training programs that build digital literacy and AI competence, offering discipline-specific sessions and real-time technical support. This infrastructure reinforces educators’ confidence and willingness to engage with AI technologies.

Finally, aligning AI adoption with broader policy frameworks ensures systemic progress. Policymakers and university leaders must integrate AI into digital transformation strategies, provide adequate funding, and include AI-related metrics in performance evaluations to incentivize adoption. Collaborative efforts between institutions and national strategies can drive sustainable innovation in education.

Implications for the future of AI in teaching

AI adoption in higher education is not simply a matter of technological availability, but a sociotechnical process shaped by individual perceptions, interpersonal networks, and institutional contexts. The study’s predictive model explained 67.2% of the variance in behavioral intention to adopt AI, indicating strong explanatory power. The model also showed high predictive relevance for trust and actual usage, suggesting that fostering confidence and usability will be critical to AI’s success in education.

The study acknowledges limitations, including its focus on Chinese universities and the absence of longitudinal data. Future research is encouraged to explore cross-cultural variations, field-specific differences, and the long-term dynamics of AI adoption in education.

First published in: Devdiscourse