AI helps universities detect and prevent dropouts early
A new scoping review finds that AI's biggest contribution in higher education is the early detection of students at risk, which lets universities intervene before problems escalate and target tutoring resources more effectively, while explainable AI is making those predictions transparent enough for educators and administrators to act on.

Artificial intelligence is rapidly transforming the way universities support students, but a new study published in Information warns that its long-term success depends on how ethically and effectively it is applied. The paper provides the most comprehensive review to date of how AI is being used to predict academic performance, prevent dropout, and strengthen tutoring systems in higher education.
The research, titled "Leveraging Artificial Intelligence for Sustainable Tutoring and Dropout Prevention in Higher Education: A Scoping Review on Digital Transformation," systematically reviews 63 peer-reviewed studies published between 2019 and 2025, offering a clear overview of the state of AI adoption in universities worldwide.
How is AI currently used to predict student performance and dropout?
The study finds that universities are increasingly relying on supervised machine learning models, including decision trees, random forests, support vector machines, and neural networks, to identify students at risk. More recently, deep learning and hybrid approaches have begun to emerge, along with explainable AI methods that provide more transparency about how predictions are generated.
Traditionally, predictors such as grades, attendance, and exam results dominated these models. But the review shows that today’s systems also incorporate digital traces like login frequencies, forum participation, and clickstream data. Some studies have expanded further, adding psychosocial variables such as motivation, anxiety, and self-confidence, as well as socioeconomic indicators. This multidimensional approach has led to improved accuracy, but it also raises new questions about privacy, fairness, and the ethical handling of sensitive data.
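To make the approach concrete, here is a minimal sketch of the kind of supervised risk model the review describes. Everything in it is illustrative: the feature set (GPA, weekly learning-platform logins, forum posts), the toy training data, and the hand-rolled logistic regression are assumptions for demonstration, not the study's method — real systems use far richer data and libraries such as scikit-learn.

```python
# Illustrative sketch only: a tiny logistic-regression dropout classifier
# trained on three hypothetical predictors. Data and features are invented.
import math

# Hypothetical records: (gpa, logins_per_week, forum_posts) -> dropped_out (1/0)
students = [
    ((3.8, 14, 9), 0), ((3.5, 10, 6), 0), ((3.1, 8, 4), 0),
    ((2.9, 6, 2), 0), ((2.2, 3, 1), 1), ((1.9, 2, 0), 1),
    ((2.5, 4, 1), 1), ((3.6, 12, 7), 0), ((2.0, 1, 0), 1),
    ((2.7, 5, 2), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Stochastic gradient descent on the logistic loss.
weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.05
for _ in range(2000):
    for x, y in students:
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = p - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def risk(x):
    """Predicted dropout probability for a student's feature vector."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

# A disengaged profile should rank as higher risk than an engaged one.
print(risk((2.1, 2, 0)) > risk((3.7, 13, 8)))
```

The output of `risk()` is a probability, which is what lets tutoring staff rank students rather than receive a bare yes/no flag — the property the review's explainable-AI discussion builds on.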
Most studies focus on undergraduate programs, particularly in online and hybrid environments accelerated by the pandemic. Research is concentrated in Asia and Latin America, where dropout rates are higher and digital transformation is moving quickly. By contrast, evidence from Africa and Oceania is limited, pointing to regional gaps in the literature.
What benefits and limitations do AI systems present in tutoring and retention?
The review highlights clear benefits of AI adoption. The most important is early detection of students at risk, which enables universities to intervene before problems escalate. By predicting academic difficulties sooner, institutions can target tutoring and support resources more effectively. Explainable AI is also helping to address concerns about transparency, making predictions easier for educators and administrators to understand and apply in practice.
However, the study stresses that many challenges remain. High-quality datasets are essential for reliable predictions, but data quality varies greatly between institutions. Many models are built and tested only within specific universities, limiting their generalizability. Others remain at the proof-of-concept stage, never moving into actual integration with tutoring systems.
False positives and misclassifications are another limitation. When students are incorrectly flagged as at-risk, they may receive unnecessary interventions, while those genuinely at risk could be overlooked if the models are not well-calibrated. These errors undermine trust in the technology, highlighting the need for more robust validation and ethical oversight.
Ethical concerns also weigh heavily. The use of personal and behavioral data risks invading student privacy, and algorithms may reinforce existing inequalities if not carefully designed. Without proper governance, bias in datasets could produce unfair outcomes, worsening rather than improving student retention.
What gaps remain in research and practice?
The authors identify several areas that demand further exploration. Most critically, there is a shortage of longitudinal studies that track the real-world effectiveness of AI-driven interventions over time. While prediction models are becoming increasingly sophisticated, few studies measure whether they lead to better graduation rates, employability, or long-term academic success.
Another gap lies in the limited integration of prediction and action. Most AI systems are designed to identify risk, but far fewer incorporate mechanisms for follow-up interventions or evaluations of effectiveness. The authors argue that universities must develop models that not only forecast dropout but also guide targeted support and measure its outcomes.
Ethical frameworks and governance structures are also underdeveloped. While explainable AI is helping to improve transparency, broader adoption of institutional policies around privacy, consent, and fairness is needed. Without these safeguards, the risk of mistrust could undermine the legitimacy of AI tools in education.
The review also underscores the importance of expanding research to underrepresented regions. With limited studies from Africa and Oceania, policymakers and institutions in these areas lack evidence to guide decision-making. Expanding the geographic scope of research would ensure that solutions are more inclusive and globally relevant.
- FIRST PUBLISHED IN: Devdiscourse