AI governance gaps threaten trust and fairness in academic institutions

A new study published in Education Sciences, “Ethical Problems in the Use of Artificial Intelligence by University Educators,” provides the first structured framework for identifying ethical risks in how AI tools are used across teaching, research, and academic management. Drawing on a thematic review of 42 open-access peer-reviewed articles published between 2022 and 2025, the research highlights the urgent need for universities to implement targeted policies and training to address privacy, fairness, transparency, and integrity concerns.
As AI-powered tools become common in lesson planning, research analysis, and even institutional management, the authors warn that the lack of consistent policies and ethical safeguards risks eroding trust in higher education.
Mapping ethical risks across educators’ core responsibilities
The study first examines which AI applications university educators use most and where ethical issues arise. Using Slovak legislation as a reference framework, the authors mapped educators’ responsibilities into three categories: teaching, scientific research, and other academic activities such as self-directed professional development and institutional management.
The research found that AI tools are widely integrated across these areas. Educators increasingly use generative language models, editing and paraphrasing tools, learning and assessment platforms, visualization software, and research data management systems.
However, this widespread use has brought a variety of ethical challenges. The study identifies six categories of risks:
- Privacy and data protection, as AI tools often rely on sensitive personal and institutional data.
- Bias and fairness, with AI models sometimes reinforcing stereotypes or disadvantaging certain student groups.
- Transparency and accountability, as opaque algorithms make it difficult to understand or challenge AI-generated outcomes.
- Autonomy and human oversight, with educators and students potentially becoming overly reliant on automated decisions.
- Governance gaps, as universities lack clear, enforceable rules for AI deployment.
- Integrity and plagiarism risks, stemming from the ease with which AI tools can generate content.
These risks, the authors argue, are not uniform across tasks and require context-specific policies and safeguards.
Challenges in governance and the need for context-specific policies
The study then explores why existing guidelines and rules fail to address the full spectrum of AI-related risks in higher education. The authors point to a common problem: many institutions attempt to apply one-size-fits-all rules for AI use, ignoring the fact that ethical risks vary significantly across different academic activities.
For example, in teaching, the primary concerns revolve around protecting student privacy and ensuring equitable access to AI-supported learning tools. In research, the risks shift toward maintaining data integrity, ensuring transparency in algorithmic decisions, and avoiding plagiarism in AI-assisted writing. In academic management, accountability and human oversight become central, as administrative decisions based on AI recommendations can impact hiring, funding, or resource allocation.
The study points out that governance frameworks must therefore be task-specific and adaptable, focusing on the particular risks faced in each domain. Without such nuance, rules often fail to guide practical use or prevent harmful outcomes.
Recommendations for universities adopting AI tools
Finally, the research addresses what universities can do to mitigate ethical risks while continuing to benefit from AI tools. Chinoracky and Stalmasekova propose that universities build a structured ethical framework around the following actions:
- Developing comprehensive AI-use policies that distinguish between teaching, research, and management tasks.
- Providing mandatory training and capacity-building for educators, so that they understand not just how to use AI tools, but also the ethical and legal implications.
- Establishing clear accountability mechanisms to ensure that decision-making remains transparent and that human oversight is preserved.
- Promoting integrity in academic work, with clear guidance on the appropriate and inappropriate uses of generative AI in writing, grading, and research.
The authors highlight that universities need to go beyond simply banning or restricting AI tools. Instead, they should aim to integrate them responsibly, supporting innovation while minimizing risks to fairness, privacy, and integrity.
FIRST PUBLISHED IN: Devdiscourse