Indian universities face integrity crisis amid rise of AI-assisted cheating
The study warns that over-reliance on AI undermines critical thinking, originality, and the personal intellectual growth of students. It also raises questions about the reliability of AI-generated content, which may contain fabricated data or fictitious references, further compromising the quality of academic output.

A new academic paper published in the journal Engineering Proceedings warns that Indian universities face a growing crisis in maintaining academic integrity as students increasingly turn to artificial intelligence (AI) tools for assignments and research.
The study, authored by Akash Gupta, Harsh Mahaseth, and Arushi Bajpai of Jindal Global Law School at O.P. Jindal Global University, reveals that institutions remain ill-equipped to handle AI-generated submissions. Titled AI Detection in Academia: How Indian Universities Can Safeguard Academic Integrity, the paper argues that without immediate reforms, universities risk widespread academic dishonesty and the erosion of genuine scholarship.
Why AI is both a boon and a threat in education
The authors recognize the transformative role of AI in learning. Tools like ChatGPT can help students proofread, edit, brainstorm ideas, and simplify complex subjects. Teachers too benefit from AI in preparing lectures, drafting assignments, and automating grading, which allows more time to focus on student engagement. When used responsibly, these technologies offer efficiency, accessibility, and enhanced creativity in education.
However, the risks far outweigh the benefits when AI is misused. Students can generate essays, reports, or even research papers that are virtually indistinguishable from human-written content. Unlike traditional plagiarism, which relies on copying existing work, AI produces novel text with no existing source to match against, so conventional plagiarism checkers such as Turnitin or Grammarly cannot flag it as copied. This creates a dangerous loophole in which academic dishonesty can flourish undetected.
Why detection remains a challenge for universities
According to the study, current detection methods are failing. Traditional plagiarism detection tools compare student submissions with existing databases of books, journals, and web content. But because AI generates entirely new text, such tools cannot flag it as copied.
Specialized AI detectors such as Turnitin’s AI Writing Detector or GPTZero attempt to identify algorithmic writing patterns, but these systems remain error-prone. They frequently produce false positives, wrongly flagging human writing as AI-generated, or false negatives, missing AI-generated work altogether. The evolving sophistication of generative AI models further complicates detection, as new updates make outputs more natural and human-like.
This cat-and-mouse dynamic has placed universities in a difficult position. Even when suspected AI use is detected, proving academic dishonesty is legally and ethically fraught. International cases highlight this challenge. In Scotland, more than 600 students were accused of AI-linked misconduct, with some expulsions recorded. In the United States, lawsuits have emerged over whether AI-assisted work constitutes cheating. In Ontario, Turnitin’s claims of 98 percent detection accuracy came under public scrutiny after false detection reports sparked controversy.
For India, the issue is compounded by regulatory gaps. While the University Grants Commission (UGC) issued Anti-Plagiarism Regulations in 2018, these rules only cover traditional forms of plagiarism and do not explicitly address AI-generated content. Students accused of AI misuse could argue that, under Indian copyright law, they remain the legitimate authors of AI-generated work. Such loopholes expose universities to disputes and weaken the enforcement of academic standards.
How Indian universities can respond to the AI challenge
The authors propose a structured framework to help Indian universities confront the challenge head-on. At its core is the expansion of UGC’s anti-plagiarism guidelines to explicitly cover AI-generated content. This would provide clarity for institutions, educators, and students alike.
First, the paper recommends mandating the adoption of AI detection systems across universities. To counter the limitations of individual detectors, institutions should be required to use at least two updated tools in tandem, ensuring fairer and more accurate assessments. Regular updates to these systems would be critical, given the rapid evolution of AI models.
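The rationale for running two detectors in tandem can be sketched in code. The snippet below is a hypothetical illustration, not any real detector's API: it assumes each tool returns a probability score, and flags a submission for human review only when both tools agree, which reduces the false positives that single detectors are known to produce.

```python
# Hypothetical sketch of a two-detector policy. The scores and threshold are
# illustrative assumptions; real tools expose their results differently.

def flag_submission(score_a: float, score_b: float, threshold: float = 0.8) -> bool:
    """Flag for human review only if BOTH detectors exceed the threshold.

    Requiring agreement trades some sensitivity for far fewer false
    accusations, which matters when penalties can include expulsion.
    """
    return score_a >= threshold and score_b >= threshold

# One detector confident, the other not: no flag, so the student is not accused
print(flag_submission(0.95, 0.40))  # False

# Both detectors confident: escalate to a human reviewer
print(flag_submission(0.95, 0.90))  # True
```

Note that even an agreed flag would only trigger review, not punishment; the paper's structured penalty mechanism still requires human judgment on each case.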
Second, the authors call for comprehensive ethical guidelines. Inspired by international examples such as Newcastle University in the UK, which requires students to disclose and reflect on their use of AI, Indian universities should mandate transparent acknowledgment of AI assistance in academic work. This would encourage ethical use without discouraging innovation.
Third, a structured penalty mechanism should be introduced for AI plagiarism. The authors argue for flexibility: minor or first-time cases could result in resubmissions or grade reductions, while repeated or blatant misuse could attract stricter penalties, including suspension or expulsion. Such measures would ensure proportionality and fairness while reinforcing the seriousness of academic misconduct.
Finally, the study presses for AI literacy programs. Without proper awareness, both students and faculty will struggle to navigate the grey areas of AI use. The authors highlight international models, such as the University of Calgary's literacy program that trains students to critically evaluate AI tools. A similar initiative by the UGC could provide checklists, workshops, and resources to ensure responsible and informed use of AI in academia.
- FIRST PUBLISHED IN: Devdiscourse