AI in higher education under scrutiny: Ethical use hinges on student understanding

In Ecuador’s rapidly digitizing academic landscape, students are embracing generative artificial intelligence (Gen-AI) tools like ChatGPT in record numbers. But as adoption surges, a new study raises critical concerns about whether ethical considerations are keeping pace with technological enthusiasm.
A peer-reviewed investigation titled “Ethical Use of Generative Artificial Intelligence Among Ecuadorian University Students,” published in Sustainability (May 2025), explores how affective, behavioral, and cognitive learning dimensions influence students’ ethical engagement with Gen-AI tools. Conducted with 833 university students across Ecuador, the study reveals that emotional attitudes and conceptual understanding strongly predict ethical awareness, while mere usage frequency has no meaningful impact.
What drives students to use gen-AI, and which tools are leading?
The study found that a staggering 97% of students use AI applications for academic tasks, with ChatGPT dominating the field, used by 62.2% of respondents, followed distantly by Gemini and Siri. This reflects global patterns reported by the Digital Education Council and a 2025 global study involving 23,000 students across 109 countries. These tools are used to brainstorm ideas, simplify content, and accelerate learning workflows.
Despite the widespread reliance on these platforms, the study's data indicate that students’ ethical engagement is not a function of frequency or breadth of use. Behavioral learning, defined as how often and in what ways students use AI, showed no significant effect on ethical awareness (β = –0.128, p = 0.058). Instead, ethical attitudes stemmed from emotional and cognitive dimensions: how students feel about AI and how deeply they understand its implications.
Affective learning (emotional perceptions) and cognitive learning (knowledge and reasoning) showed strong positive effects on ethical behavior, with standardized coefficients of β = 0.413 and β = 0.567, respectively. These findings suggest that reflective and informed users are far more likely to engage with AI in ethically responsible ways than habitual users who lack awareness.
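Standardized coefficients of this kind come from regressing z-scored variables on one another, so that each β is expressed in standard-deviation units. A minimal sketch of the idea, using synthetic data rather than the study's dataset (the variable names and simulated weights here are illustrative assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 833  # sample size matching the study

# Hypothetical scores: two predictors and an outcome with a built-in relationship
affective = rng.standard_normal(n)
cognitive = rng.standard_normal(n)
ethical = 0.4 * affective + 0.55 * cognitive + 0.5 * rng.standard_normal(n)

def standardize(x):
    # Convert a variable to z-scores (mean 0, standard deviation 1)
    return (x - x.mean()) / x.std()

X = np.column_stack([standardize(affective), standardize(cognitive)])
y = standardize(ethical)

# Ordinary least squares on standardized variables yields standardized betas
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)
```

Because every variable is on the same z-score scale, the two betas can be compared directly, which is what lets the study say cognitive learning (β = 0.567) outweighs affective learning (β = 0.413).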
Why usage alone does not equal ethical engagement
The research model employed structural equation modeling (SEM) and confirmed excellent instrument reliability (Cronbach's α = 0.992; McDonald's ω = 0.992), with high internal consistency across all dimensions. While the behavioral dimension showed strong statistical reliability, its negligible effect on ethical awareness underscores a key gap between use and responsibility. The study argues that without emotional investment or conceptual clarity, frequent users may unknowingly misuse AI or rely on it uncritically, weakening academic integrity and diminishing critical thinking.
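The Cronbach's α reported above is a standard internal-consistency statistic: it compares the variance of individual survey items to the variance of their sum. A brief sketch of that formula on synthetic Likert-style responses (the data and item counts here are invented for illustration, not drawn from the study's instrument):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic data: one latent trait per respondent, shared by six noisy items
rng = np.random.default_rng(1)
latent = rng.standard_normal((833, 1))
items = latent + 0.3 * rng.standard_normal((833, 6))
print(round(cronbach_alpha(items), 3))
```

When items largely reflect one underlying trait, as simulated here, α approaches 1, which is the pattern behind the study's near-ceiling 0.992.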
This disconnect between action and ethics mirrors international trends. Researchers from Australia and the U.S. have found that AI tools are often integrated into learning environments before students, or institutions, fully understand their social, environmental, or epistemological consequences. Ecuadorian students, the authors note, show enthusiasm and digital fluency but lack structured guidance on ethical usage.
The implications extend beyond academic cheating or passive learning. Without ethical awareness, students may unwittingly propagate algorithmic bias, misuse data, or depend excessively on black-box systems. The study emphasizes the need for universities to go beyond tool access and develop AI literacy that includes ethical, emotional, and cognitive dimensions.
What are the policy and sustainability challenges ahead?
The study issues a broader warning about the unregulated growth of Gen-AI in education, highlighting sustainability and governance concerns that remain largely unaddressed. Researchers point to the environmental toll of training and running large language models, including high energy consumption, carbon emissions, and rare-earth mineral use. A single generative AI model, when scaled, consumes thousands of megawatt-hours and emits hundreds of tons of CO₂, factors typically absent from university-level discourse about AI use.
Equally pressing are the social and regulatory gaps. In Ecuador, as in much of Latin America, legal frameworks specific to AI use in education are still nascent. A 2024 legislative proposal seeks to regulate AI in academic and professional environments, but its passage and enforcement remain uncertain. Meanwhile, most universities lack clear institutional policies that define ethical AI usage, leaving students to navigate ambiguous digital territories without guidance.
To address these deficits, the study calls for multilayered reforms: (1) enforceable university policies on AI integration in teaching and assessment; (2) curricula that embed AI ethics, critical thinking, and sustainability awareness; and (3) international alignment with standards like the UNESCO Recommendation on the Ethics of AI and the EU’s 2024 Artificial Intelligence Act.
Furthermore, the authors propose that faculty receive continuous training to guide students in responsible AI usage, and that institutions embrace global frameworks such as ISO/IEC 42001:2023 and NIST’s AI Risk Management Framework. These steps would help ensure that AI systems deployed in education are fair, accountable, and aligned with human rights and ecological priorities.
First published in: Devdiscourse