AI governance in universities hinges on collaboration, not just compliance

CO-EDP, VisionRI | Updated: 26-05-2025 09:32 IST | Created: 26-05-2025 09:32 IST

In a digital education era increasingly shaped by artificial intelligence, a pressing challenge emerges: how to ensure AI’s integration is guided not just by innovation but by ethics, legality, and institutional responsibility. A new study titled “Artificial Intelligence Governance in Higher Education: The Role of Knowledge-Based Strategies in Fostering Legal Awareness and Ethical Artificial Intelligence Literacy”, published in the journal Societies (2025), offers a data-driven framework linking AI adoption to the critical task of preparing students to navigate its risks and responsibilities.

The study evaluates responses from 270 university students across Romania and Turkey. It proposes a six-path model showing how collaborative knowledge practices and institutional culture directly shape students’ ethical and legal understanding of AI systems.

How does knowledge orientation influence institutional AI governance?

The research posits that when universities treat knowledge not as static content but as a strategic and reflexive resource, they are better positioned to implement meaningful AI governance. The concept of Knowledge-Based Educational Orientation (KBEO) serves as the foundation of the model, reflecting how curricula emphasize critical thinking, interdisciplinary dialogue, and problem-based learning.

According to the study’s results, a strong KBEO significantly boosts both the ethical culture (β = 0.217) and formal governance structures (β = 0.393) surrounding AI within institutions. Universities that cultivate active engagement with knowledge are more likely to build policies addressing algorithmic fairness, data privacy, and transparency in AI-based systems used for grading, administration, or learning personalization.

The analysis shows that this orientation does more than shape classroom content; it establishes the institutional logic by which technologies are introduced, assessed, and regulated. Where reflective knowledge practices are embedded, governance frameworks emerge organically rather than being externally imposed or symbolic.

What role do collaboration and culture play in AI ethics and legal literacy?

Collaboration across disciplines and stakeholder groups proved to be a pivotal factor in the institutionalization of responsible AI. Termed Collaborative Knowledge Dynamics for AI Awareness (CKD-AA), this dimension includes informal peer discussions, interdisciplinary forums, and co-curricular exchanges that normalize ethical discourse around AI.

The study finds that CKD-AA strongly influences both ethical culture (β = 0.555) and AI governance (β = 0.457), signaling that dialogue-rich environments foster greater institutional responsiveness and awareness of AI’s social implications. These collaborative environments act as incubators for an Ethical AI-Oriented Institutional Culture (EAIOC), where values like equity and transparency are woven into daily operations.

Crucially, EAIOC has the strongest influence on student outcomes, with a path coefficient of β = 0.663 linking institutional culture to AI literacy and legal awareness (AILLA). This means that when ethical principles are lived, rather than simply stated, students develop a more nuanced understanding of risks like data misuse, algorithmic bias, and opaque decision-making in educational technologies.

Formal AI governance structures also contribute positively (β = 0.195), though less powerfully. The findings suggest that clear policies alone are insufficient; they must be contextualized by a value-driven academic ecosystem that supports interpretation, critique, and reflection.
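
Taken together, the six reported coefficients make up the structural core of the model. Written as standardized equations (an illustrative reconstruction from the figures quoted above, not the authors’ own notation, with residual terms omitted), the six paths read:

  Ethical culture (EAIOC)     = 0.217 × KBEO + 0.555 × CKD-AA
  AI governance               = 0.393 × KBEO + 0.457 × CKD-AA
  Literacy/awareness (AILLA)  = 0.663 × EAIOC + 0.195 × AI governance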

Are students really learning to navigate the legal and ethical landscape of AI?

To assess whether institutional strategies translate into student literacy, the researchers tested a model with six core hypotheses. The results show a consistent and statistically significant connection between knowledge-sharing, governance mechanisms, and student awareness of AI’s ethical and legal dimensions.

Students who perceived their academic institutions as ethically driven and participatory were more likely to express confidence in assessing AI systems for fairness and compliance. They understood the importance of algorithmic accountability, recognized the need for data protection, and supported training on the ethical implications of AI use in education.

What stood out was the mediating effect of governance: knowledge culture and collaboration increased student literacy mainly through their impact on formal and informal governance mechanisms. In other words, students become legally and ethically literate not just by being taught these topics, but by being immersed in environments where AI policies are debated, enacted, and exemplified.
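
As a rough illustration of that mediation (a standard product-of-coefficients calculation applied to the reported paths, not a figure taken from the study itself), the indirect effect of collaborative knowledge dynamics on student literacy via ethical culture comes to about 0.555 × 0.663 ≈ 0.37, while the route through formal governance contributes only about 0.457 × 0.195 ≈ 0.09. The arithmetic reinforces the point above: most of the effect travels through lived institutional culture rather than written policy.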

The model’s explanatory power is notable. Knowledge orientation and collaboration together explained 61.4% of the variance in governance practices, and institutional culture and governance combined accounted for nearly 60% of the variance in student awareness outcomes. The analysis, estimated and validated in SmartPLS 4, proved statistically robust, supporting the framework’s utility for policy and curriculum reform across international higher education systems.
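
Read in PLS-SEM terms (an interpretation of the percentages reported above rather than notation drawn from the paper), those figures correspond to coefficients of determination of roughly R² ≈ 0.61 for the governance construct and R² ≈ 0.60 for AI literacy and legal awareness.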

FIRST PUBLISHED IN: Devdiscourse