AI in higher education raises ethical concerns over bias, privacy, and accessibility

Artificial intelligence is making its way into universities and colleges across the world, but its promise of inclusivity remains largely unfulfilled. A new rapid review highlights how AI adoption in higher education is advancing faster than policies and safeguards, creating both opportunities and risks for marginalized students.
Published in Education Sciences, the study “The Impact of AI on Inclusivity in Higher Education: A Rapid Review” examines current implementations of AI tools in universities and assesses their implications for equity, fairness, and student outcomes. The findings reveal a gap between the theoretical potential of AI for inclusion and its real-world applications, which often prioritize efficiency over accessibility.
How is AI being used in higher education?
The research identifies a growing list of AI applications in universities. Adaptive learning systems, intelligent tutoring platforms, automated assessment tools, predictive analytics, and digital administration are increasingly becoming part of institutional practice. These tools are promoted as innovations that can deliver personalized learning, optimize student services, and improve teaching efficiency.
Yet, the evidence shows that adoption is not primarily driven by inclusion. Most institutions focus on AI’s ability to streamline operations, cut costs, and manage student data more effectively. While personalization and accessibility are often cited as benefits, few AI initiatives are explicitly designed to support underrepresented or disadvantaged learners.
The study highlights six areas where AI is currently being applied: personalization, tutoring, automated grading, behavioral prediction, administrative efficiency, and online learning. Each has the potential to improve inclusivity, but the current emphasis on operational benefits means AI's transformative power for equity remains underdeveloped.
What are the risks for inclusivity and ethics?
The review underscores that AI can widen educational inequalities if risks are ignored. Socioeconomic divides remain a persistent barrier, as not all students or institutions have equal access to digital tools, stable internet, or sufficient training. AI systems built on biased datasets risk perpetuating stereotypes or misclassifying students, particularly those from minority backgrounds.
Ethical concerns emerge across multiple dimensions. The study highlights recurring issues such as algorithmic bias, data privacy violations, lack of transparency, over-reliance on automation, and limited accountability for errors. Intellectual property disputes and the rise of AI-generated misinformation further complicate the academic environment.
These risks are particularly damaging for vulnerable groups. Without deliberate safeguards, AI can exclude or surveil students rather than support them. Instead of improving outcomes for marginalized learners, poorly designed systems may reinforce structural disadvantages, leaving those most in need at greater risk of being overlooked.
The review also stresses the lack of formal institutional guidance. Universities are adopting AI tools rapidly, but governance frameworks to ensure fairness and inclusion are limited. This creates an environment where both staff and students are left to navigate ethical and operational challenges without adequate support.
What steps are needed to ensure inclusive AI in education?
The authors provide a set of recommendations to realign AI adoption with inclusivity goals. They argue that inclusion must be embedded by design, requiring diverse development teams and impact assessments before new systems are implemented. Without proactive planning, AI risks being deployed in ways that benefit only those who are already advantaged.
Building ethical guardrails is another priority. The review calls for clear frameworks addressing fairness, accountability, transparency, and data protection. These safeguards must cover the entire AI lifecycle, from model training to deployment and monitoring, to prevent unintended harm.
Investment in infrastructure and teacher capacity is also crucial. Many rural and under-resourced institutions lack reliable internet connectivity, up-to-date devices, or the technical expertise to use AI effectively. Expanding professional development programs can help educators adopt AI tools for inclusive pedagogy rather than limiting them to efficiency gains.
Finally, the authors stress the need for stronger evidence. Current claims about AI's benefits for inclusion are often speculative. More empirical studies are needed, particularly in non-Western contexts, to test how AI tools affect marginalized students in practice. The global research base must expand to cover diverse populations and varied higher education systems.
- FIRST PUBLISHED IN:
- Devdiscourse