Higher education faces urgent AI ethics gap, new survey warns

CO-EDP, VisionRI | Updated: 15-05-2025 09:16 IST | Created: 15-05-2025 09:16 IST
Country: Greece

Generative artificial intelligence tools like ChatGPT have quickly become embedded in student life, but how students engage with these technologies, and how well higher education institutions prepare them for responsible use, remain far from uniform. A new survey of over 500 Greek university students reveals a complex picture of AI adoption, highlighting major educational benefits as well as persistent ethical and cognitive concerns.

The peer-reviewed study, titled “AI and ChatGPT in Higher Education: Greek Students’ Perceived Practices, Benefits, and Challenges” and published in Education Sciences (May 2025), uses a structured, quantitative cross-sectional methodology to examine how Greek higher education students interact with generative AI. The research identifies key behavioral patterns, perceived benefits and challenges, and outlines what students believe universities should do to ensure ethical, productive AI integration in learning environments.

How familiar are students with AI, and how are they using ChatGPT?

While generative AI tools have entered the mainstream since late 2022, familiarity among Greek students remains inconsistent. The study shows that only 38% of students reported high familiarity with AI concepts. The majority of respondents indicated limited or no regular use of ChatGPT and similar tools for academic purposes such as coding, translation, or assignment writing. However, around 38% reported using these tools frequently for information search and research, indicating a preference for low-stakes, assistive use cases.

Gender and academic level influenced patterns of use. Male students demonstrated higher AI familiarity, but female students used AI more for research-related tasks. Undergraduate students relied more heavily on AI tools for personal assistance and assignment preparation compared to doctoral candidates, who showed greater conceptual understanding but lower direct usage.

ICT skill levels emerged as a significant predictor of AI familiarity and usage. Students with moderate to high digital proficiency were far more likely to explore ChatGPT’s functions across tasks, while those with low skills remained disengaged. This digital divide, the study warns, may exacerbate educational inequality unless addressed by institutional interventions.

What do students see as AI's educational value and risks?

Students generally endorsed AI’s academic utility. Over 77% agreed that AI tools enhanced their research capabilities. Nearly 68% saw value in real-time feedback, while more than 65% reported improvements in digital skills and time management. The strongest perceived benefits included improved assignment performance and better comprehension of theoretical material, particularly among undergraduates and postgraduates.

Yet these advantages were tempered by growing concern over AI's pitfalls. More than 68% of students believed AI use could limit critical thinking. Roughly 63% cited increased plagiarism risk, while others highlighted reduced cognitive engagement, opaque ethical standards, and doubts about the reliability of AI-generated content. These concerns were most pronounced among senior and doctoral students, suggesting that academic maturity heightens awareness of AI's intellectual risks.

Challenges also extended to institutional infrastructure. Students cited a lack of formal AI training, insufficient university support, and limited public discourse around educational AI as barriers to ethical use. Concerns over dependence on tech companies and unclear content authorship further complicated student sentiment.

What do students expect from universities?

Despite the mixed perception, students expressed overwhelming support for structured AI integration in higher education. Nearly 78% of respondents called for free institutional access to AI tools. A similar number demanded increased technical support, curriculum adaptation, and specialized training for both faculty and students. Ethics, privacy, and regulatory clarity were top priorities, especially among postgraduates and those with advanced ICT skills.

Students also emphasized the need for training programs that extend beyond technical skills. Over 75% supported instruction in ethical, pedagogical, and data privacy issues associated with AI tools. Calls for strengthening AI research and innovation within academic institutions were also widespread. Importantly, students with lower ICT skills were significantly less likely to recognize these needs, underscoring the importance of inclusive, digitally equitable educational strategies.

Higher levels of academic study appeared to correlate with greater concern over AI's impact on academic standards. Doctoral students, in particular, voiced skepticism about ChatGPT’s effect on critical thinking, originality, and academic integrity. These findings suggest that future institutional policies must address not only technical enablement but also intellectual rigor and scholarly ethics in AI use.

First published in: Devdiscourse