Generative AI poses both risks and opportunities for student wellbeing

A new study published in the European Journal of Investigation in Health, Psychology and Education highlights how university students frame the idea of digital wellbeing when navigating the growing influence of generative artificial intelligence. Their insights reveal a nuanced balance between embracing AI’s potential and maintaining personal, ethical, and cognitive boundaries to safeguard wellbeing.
The study, titled “Conceptualisation of Digital Wellbeing Associated with Generative Artificial Intelligence from the Perspective of University Students,” explores how young adults perceive the opportunities and risks of working with generative AI. The findings are based on the experiences of 61 students enrolled in a Digital Competence course at Masaryk University in the Czech Republic.
Balancing benefits with risks
The research shows that students are acutely aware of both the advantages and drawbacks of using generative AI. While many recognize its potential to enhance creativity, efficiency, and learning, they are equally concerned about over-reliance, ethical pitfalls, and the loss of critical thinking skills.
Students framed digital wellbeing as a dynamic balance rather than a fixed set of rules. They stressed that wellbeing depends on personal limits, critical engagement with AI-generated outputs, and the preservation of human agency in decision-making. The analysis of 474 coded statements highlighted the top concerns: critical thinking, ethics, responsible use, time boundaries, and keeping AI in its proper role as a tool rather than a human substitute.
Four dimensions of digital wellbeing
The study proposes a four-part model to capture how students conceptualize digital wellbeing in the context of generative AI:
- Psychological Aspects of Communication: Students emphasized the importance of recognizing that AI is not human and should not replace real human relationships or empathy.
- Risks and Challenges: They identified risks such as misinformation, hallucinations, overuse, dependency, and the emotional toll of constant verification.
- Socially Positive Potential of AI: Students acknowledged the pressure to adopt AI tools but also saw them as valuable for productivity and learning when used responsibly.
- Adaptation Strategies: They advocated for strategies including critical and creative thinking, human accountability for decisions, transparent use of AI, and strong boundaries between human and machine roles.
This framework moves beyond simple warnings or uncritical enthusiasm, offering a pragmatic roadmap for sustaining wellbeing in AI-rich learning environments.
Rules students recommend
The students outlined practical principles for healthy AI use:
- Set clear time limits to avoid fatigue and maintain a healthy balance between digital and offline life.
- Prioritize offline connections to ensure that human relationships remain central.
- Retain human responsibility for decisions and outputs, avoiding blind trust in AI tools.
- Verify critical information from reliable sources rather than relying solely on AI-generated content.
- Declare AI use transparently in academic and professional contexts to uphold fairness and trust.
- Minimize environmental impact by avoiding unnecessary AI use, considering the energy demands of large models.
These recommendations reflect a collective effort to integrate AI constructively without undermining mental health, learning integrity, or social responsibility.
Implications for education and policy
The study argues that higher education should incorporate AI-related risks and coping strategies into digital competence curricula. Rather than discouraging students from using generative AI, educators can help them develop reflective habits and adaptive practices that foster both innovation and wellbeing.
The research also calls for institutional support for responsible AI use, including guidelines for ethical practice, recognition of AI’s environmental costs, and tools to help students critically assess AI outputs. By integrating these insights into teaching and policy, universities can better prepare students to thrive in a digital ecosystem where generative AI plays an increasingly prominent role.
- FIRST PUBLISHED IN:
- Devdiscourse