AI in classrooms isn’t equal: student use depends on personality differences
Students' understanding, trust, perceived value, and willingness to use AI tools are strongly shaped by underlying personality traits, raising important questions about equity, learning outcomes, and the future of digital education, according to a new study published in Sustainability.
The study, titled “The Big Five Personality Traits and Perceptions of Generative AI in Higher Education: A Canonical Correlation Analysis for Sustainable Digital Education,” examines how personality profiles influence multiple dimensions of students’ perceptions of generative AI. Based on survey data from 375 university students, the research uses a multivariate analytical approach to show that AI adoption in education is deeply linked to psychological differences rather than being a uniform technological shift.
Personality profiles drive AI knowledge and awareness
Personality traits are closely tied to how well students understand generative AI, particularly its capabilities and limitations. The dominant pattern identified in the research links higher levels of openness, agreeableness, and conscientiousness with greater knowledge of AI.
Students with these personality traits tend to exhibit a more informed and reflective approach to AI. Rather than simply using AI tools for convenience, they are more likely to understand how such systems generate outputs, recognize potential inaccuracies, and engage with AI critically. This suggests that AI literacy is not just a matter of exposure but is also influenced by deeper cognitive and behavioral tendencies.
This knowledge dimension goes beyond basic familiarity. It includes awareness of ethical concerns, such as misinformation, bias, and overreliance on AI-generated content. Students who score higher in these personality traits are more likely to approach AI with a balance of curiosity and caution, which aligns with broader goals of responsible and sustainable digital education.
Increased knowledge is sometimes accompanied by a higher perception of cost. Students who understand AI more deeply may also be more aware of its limitations, risks, and complexities. This dual awareness reflects a more mature engagement with technology, where benefits and drawbacks are considered simultaneously.
The findings suggest moving beyond a one-size-fits-all approach to AI education. While some students naturally develop strong AI literacy, others may require targeted support to reach similar levels of understanding. Without such support, differences in personality could translate into unequal learning outcomes.
Anxiety and perceived risk limit AI engagement
A second key pattern identified in the study links higher levels of neuroticism with more negative perceptions of generative AI. Students who score higher on this personality trait are more likely to perceive AI as costly, complex, or risky, while also assigning lower value to its usefulness and relevance.
This cost-focused perception includes concerns about effort, uncertainty, and potential misuse of AI tools. For these students, generative AI may appear difficult to navigate or emotionally taxing, which reduces their willingness to engage with it. The study suggests that such perceptions are not purely technical but are closely tied to emotional sensitivity and anxiety.
As AI tools become more integrated into learning environments, students who are more prone to anxiety or uncertainty may be at risk of disengagement. Simply providing access to AI tools may not be enough to ensure equitable participation. The research also indicates that these students may struggle to see the long-term value of AI, including its relevance for future careers. Lower perceived utility and attainment value suggest that they may not view AI skills as important for their academic or professional development.
Addressing these challenges requires a more supportive and structured approach to AI integration. The study points to the need for clear guidance, ethical frameworks, and confidence-building strategies that help students navigate AI tools without feeling overwhelmed. Without such measures, the benefits of AI in education may remain unevenly distributed.
Value, motivation, and intent shape future AI use
A third pattern highlights the role of motivation in shaping students’ intentions to use AI. This pattern links lower levels of neuroticism with higher levels of openness and conscientiousness, creating a personality profile associated with stronger motivation and lower perceived barriers.
Students with this profile are more likely to see AI as valuable for their learning and future careers. They assign higher attainment value to AI, meaning they view competence in AI as important for their identity and long-term goals. They also show stronger intentions to continue using AI tools in the future.
This motivational dimension reflects a shift from awareness to active engagement. While knowledge is a key foundation, the decision to use AI consistently depends on whether students perceive it as meaningful and manageable. When perceived cost is low and value is high, students are more likely to integrate AI into their learning practices.
Notably, intrinsic enjoyment of AI does not play a major role in these patterns. Instead, practical considerations such as usefulness, relevance, and perceived effort are more influential in shaping behavior. This suggests that students are approaching AI as a functional tool rather than a source of intrinsic interest.
These findings point to a growing need for aligning AI integration with students’ goals and motivations. Educational strategies that emphasize the practical benefits of AI, such as improving efficiency and supporting learning outcomes, may be more effective than those focused solely on novelty or innovation.
Implications for sustainable and equitable AI education
The study calls for human-centered approaches to AI integration. Rather than focusing solely on technological advancement, sustainable education requires attention to inclusion, equity, and long-term learning outcomes.
One of the key implications is the need for differentiated AI literacy programs. Students vary widely in their knowledge, confidence, and perceptions of AI, and these differences are shaped in part by personality traits. Tailored support can help bridge these gaps, ensuring that all students have the opportunity to engage with AI effectively.
For students with lower AI knowledge, introductory training can provide a foundation for understanding how AI systems work. For those with higher knowledge, more advanced instruction can focus on critical evaluation and ethical considerations. This layered approach can help create a more inclusive learning environment.
The study also highlights the importance of addressing perceived cost. Students who view AI as complex or risky may benefit from structured guidance and clear expectations for use. Providing examples of appropriate AI use, along with opportunities for practice, can help reduce uncertainty and build confidence.
Overall, the findings suggest that educational institutions should rethink how they measure digital readiness. Familiarity with technology does not necessarily translate into readiness for AI. Generative AI introduces new challenges related to authorship, critical thinking, and ethical decision-making, which require specific forms of literacy.
The research also acknowledges limitations, including its reliance on self-reported data and a single institutional context. While the findings provide valuable insights, further research is needed to confirm their applicability across different educational settings.
- FIRST PUBLISHED IN: Devdiscourse

