Lack of AI literacy and policy stalls adoption in higher education
As artificial intelligence tools rapidly reshape higher education and research practices, a new study has found that perceptions, usage patterns, and literacy levels vary significantly between students and academic staff, with age, gender, and socioeconomic status emerging as key factors. The findings offer a detailed look into how AI is transforming the academic landscape while also exposing deep-seated anxieties, digital divides, and policy gaps that could stall its equitable adoption.
The study, titled “AcademAI: Investigating AI Usage, Attitudes, and Literacy in Higher Education and Research” by Richard Brown, Elizabeth Sillence, and Dawn Branley-Bell, was published in the Journal of Educational Technology Systems. Through an explanatory mixed-methods design combining survey responses from 269 students and staff with 24 qualitative interviews, the research provides an unusually comprehensive view of how generative AI tools like ChatGPT, DALL-E, and Copilot are impacting teaching, learning, and scholarly work.
How is AI being used in academic contexts?
Generative AI tools are primarily being adopted to support idea generation, text editing, content explanation, and research writing. According to survey data, 41% of participants used AI for idea generation, 38% for explaining concepts, and 31% for editing text. However, usage was not evenly distributed. While about 30% of students and staff reported frequent use (several times a week or more), another 25–31% claimed never to have used AI tools.
Interview responses revealed a striking perception gap: participants generally believed their peers were using AI far more than they themselves reported. Many staff suspected that students were heavily reliant on AI for assignments, while students were convinced that staff quietly employed it for research writing. Despite this mutual skepticism, both groups tended to describe their own AI usage as cautious, ethical, and limited.
The study also confirmed the broad versatility of AI tools in academic workflows. Beyond writing support, a smaller group of participants mentioned using AI for data analysis, coding, and language translation. Some international students reported using AI to enhance their academic English, raising concerns about fairness and degree authenticity.
What drives or hinders AI use in universities?
Sociodemographic variables emerged as powerful predictors of AI use, attitudes, and literacy. Quantitative analysis showed that males reported higher levels of AI usage, more positive attitudes, and greater self-assessed literacy than females. Younger participants and those with higher subjective socioeconomic status (SSES) also demonstrated higher literacy levels. Interestingly, neither role (student versus staff) nor age predicted how frequently participants used AI, even though older individuals reported lower AI literacy.
Qualitative interviews further explored these divides. Participants widely agreed that younger students, part of the so-called “iPad generation”, are more adept at adopting new technologies, while older faculty often struggle to integrate AI tools into their work. Some felt that digital fluency was increasingly a prerequisite for both academic success and employability.
The study also uncovered barriers related to financial access. Participants voiced concern that subscription-based AI tools could exacerbate educational inequalities, particularly among low-income students. Without institutional support or licensing, some students felt forced to “pay their way” into AI-enhanced education while their peers fell behind.
Another major barrier was institutional uncertainty. Both students and staff reported a lack of clear guidance about what constitutes acceptable AI use, particularly in assessments and academic writing. This regulatory ambiguity created a culture of caution, with some participants avoiding AI entirely out of fear of penalties. Others felt that university messaging unintentionally fostered anxiety rather than clarity, stifling responsible exploration of AI’s educational benefits.
What should universities do to promote fair and responsible AI adoption?
Participants across both groups emphasized the need for formal AI literacy training: not just technical tutorials, but comprehensive instruction on responsible, ethical, and effective use in education and research. Many advocated for institution-wide workshops, course-integrated AI components, and mandatory guidance on academic honesty in AI-assisted work.
The study recommends that universities develop transparent, equitable, and inclusive AI policies tailored to support both students and faculty. Beyond just access, policies should address ethical use, fair evaluation practices, and the preservation of academic integrity. As fears about workload increases and job security mount, particularly among faculty who see AI efficiency gains as a prelude to downsizing, the authors argue that a proactive, human-centered approach is essential.
Additionally, the researchers propose that institutions work to dispel misinformation around AI usage. The perception gaps between actual and assumed usage rates underscore the need for clearer intra-institutional dialogue. Misconceptions, if left unaddressed, can harden into mistrust and stigma, preventing constructive integration of new technologies.
The study frames its findings using the Unified Theory of Acceptance and Use of Technology (UTAUT), emphasizing the roles of social influence, performance expectancy, and institutional support as levers for adoption. Encouraging “AI champions” within student and staff communities (people already using the tools responsibly and effectively) may help model and normalize best practices among more hesitant peers.
First published in: Devdiscourse