AI literacy becomes make-or-break skill for language teachers


COE-EDP, VisionRI | Updated: 11-05-2026 12:38 IST | Created: 11-05-2026 12:38 IST

Artificial intelligence (AI) is transforming language education in ways that depend not only on teachers’ technical competence, but also on the emotions AI triggers as educators confront new tools, new expectations and new risks, warns a new study published in Language Teaching Research. The review finds that language teachers’ ability to understand, apply, evaluate and ethically manage AI is closely tied to feelings ranging from excitement and professional empowerment to anxiety, frustration, fear of replacement and loss of control.

The study, titled “Bridging Language Teachers’ AI Literacy and AI-Induced Emotions: A Systematic Review and a Framework for Future Research,” proposes an appraisal-based framework to explain how teachers’ cognitive evaluations of AI shape their emotional responses in the classroom.

AI literacy is becoming a core language-teaching competence

AI now spans automated writing assessment, natural language processing, intelligent tutoring systems, chatbots, automatic speech recognition, data-driven learning, computerized dynamic evaluation and generative AI systems. These technologies are changing lesson planning, resource development, feedback, assessment, student interaction and the wider role of the language teacher.

This transformation is more than a technical upgrade, the study suggests. It is a shift in the professional conditions under which language teachers work. Tasks once seen as central to teaching, including lesson preparation, material creation and some forms of assessment, can now be supported or partly automated by AI. That shift requires teachers to develop new competencies, but it also creates uncertainty about professional identity, classroom authority and the future of human teaching.

The authors define AI literacy among language teachers as a multidimensional professional capability. It includes knowing and understanding AI, applying AI in pedagogy, evaluating AI critically and using AI ethically. This framework moves beyond simple tool familiarity. It treats AI literacy as a blend of conceptual knowledge, classroom judgment, practical skill and moral responsibility.

Knowing and understanding AI means having a working grasp of how AI systems function, what they can and cannot do, and how their limitations affect classroom use. Teachers do not need to become computer scientists, but they must understand enough to avoid treating AI output as automatically accurate, neutral or pedagogically appropriate. Without this foundation, AI integration may become superficial or risky.

Applying AI refers to the ability to use AI tools meaningfully in language teaching. This includes using AI to design tasks, support formative assessment, generate resources, provide feedback, aid grammar work, support speaking practice or help students develop writing. The study finds that language teachers are already using AI as a planning assistant, a resource-generation partner and a support tool for personalized instruction.

Evaluating AI is the critical dimension. Teachers must judge whether AI-generated content is accurate, fair, suitable for learners, aligned with learning goals and useful for language development. This is especially important in language education, where nuance, cultural context, accuracy, tone, meaning and interaction matter. A grammatically polished AI output may still be pedagogically weak, culturally insensitive or inappropriate for a specific learning objective.

AI ethics is presented as a key part of literacy rather than an optional add-on. The authors highlight concerns over algorithmic bias, data privacy, academic integrity, plagiarism, hallucination, learner autonomy, overreliance and the ownership of AI-generated content. These issues are especially sensitive in language classrooms because teachers are responsible not only for delivering content but also for shaping communication, identity, cultural understanding and student confidence.

The review finds that most empirical studies on language teachers’ AI literacy address all four dimensions, suggesting that the field is beginning to recognize AI literacy as a broad and complex competence. However, some studies place less emphasis on ethics, which the authors identify as a weakness. They argue that ethics must sit at the heart of AI literacy because language teachers must make daily decisions about whether, when and how AI should be used with learners.

Prompt literacy emerges as one of the most practical and urgent elements of AI literacy. Teachers are not simply issuing commands to AI systems. They must design prompts that reflect learning goals, adjust them for student level, refine them through repeated interaction and critically assess the output. The review shows that prompt literacy involves task design, pedagogical alignment, reflection and judgment.
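The iterative cycle the review describes, designing a prompt around a learning goal, adjusting it for learner level, then refining it after reviewing output, can be sketched in code. This is an illustrative sketch only: the level labels, constraint wording and function names are assumptions, not anything the study prescribes.

```python
# Illustrative sketch of "prompt literacy" as a design-and-refine cycle.
# All names and level constraints here are hypothetical examples.

LEVEL_CONSTRAINTS = {
    "A2": "Use short sentences and only high-frequency vocabulary.",
    "B1": "Use everyday vocabulary; one new structure per task is fine.",
    "C1": "Use authentic, idiomatic language; nuance is welcome.",
}

def build_prompt(goal: str, level: str, task: str) -> str:
    """Compose a prompt that encodes pedagogical intent, not just a command."""
    constraint = LEVEL_CONSTRAINTS.get(
        level, "Adjust language to the learner's level."
    )
    return (
        "You are assisting a language teacher.\n"
        f"Learning goal: {goal}\n"
        f"Task: {task}\n"
        f"Learner level (CEFR {level}): {constraint}\n"
        "Flag any content a teacher should review before classroom use."
    )

def refine(prompt: str, teacher_note: str) -> str:
    """One refinement step: the teacher appends an adjustment after
    critically assessing the model's output, mirroring the review's
    cycle of prompting, evaluating and revising."""
    return prompt + f"\nTeacher adjustment: {teacher_note}"

draft = build_prompt(
    goal="Practise past-tense narration",
    level="B1",
    task="Write five short story openers students can continue",
)
final = refine(draft, "Avoid culturally specific idioms.")
```

The point of the sketch is that each line of the prompt corresponds to a pedagogical judgment (goal, level, review step), which is what distinguishes prompt literacy from simply issuing commands.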

The study also identifies major barriers. Many teachers report limited digital competence, uneven experience with AI tools, lack of technical support, weak institutional guidance and insufficient professional development. Even teachers who are interested in AI may struggle if they lack training, classroom examples or clear policies. This gap can lead to hesitation, ineffective use or emotional strain.

The authors argue that language teacher education and professional development must change. AI literacy should be embedded in teacher training, not treated as a short workshop or optional skill. Teachers need sustained, practice-oriented learning that combines tool use, pedagogy, ethics, assessment design and emotional support. Professional learning communities are identified as especially valuable because they allow teachers to share experiences, solve problems and normalize the emotional uncertainty that often comes with AI adoption.

Teachers’ AI emotions range from empowerment to anxiety and loss of control

The authors argue that language teachers’ engagement with AI cannot be understood only through knowledge, skills or attitudes. It is also deeply emotional. Teachers may feel curiosity, excitement, satisfaction and empowerment when AI helps them save time, personalize instruction or create new learning opportunities. They may also feel anxiety, fear, frustration, unease and professional insecurity when AI threatens their sense of control or exposes gaps in their skills.

The review categorizes AI-induced emotions into challenge emotions, deterrence emotions, loss emotions and achievement emotions.

  • Challenge emotions arise when teachers see AI as an opportunity and believe they have some control over how to use it. These emotions include excitement, curiosity, hope, anticipation, playfulness and a sense of flow. In this state, teachers are more likely to experiment, learn, adapt and integrate AI constructively.
  • Deterrence emotions occur when teachers perceive AI as a threat but still believe they have some control. These include anxiety, worry, fear and distress. Teachers may worry about whether they can use AI effectively, whether AI outputs are reliable, whether students will misuse tools, or whether their institution expects rapid adaptation without enough support. These emotions do not always lead to rejection, but they make engagement more cautious and stressful.
  • Loss emotions are stronger and more resistant. They arise when teachers perceive AI as threatening and feel they have little control over its consequences. These emotions include anger, frustration, dissatisfaction, disappointment and resentment. In the language classroom, they may appear when teachers feel AI is being imposed on them, when they fear professional replacement, or when they believe AI weakens the human relationships at the center of language learning.
  • Achievement emotions appear when teachers experience AI as beneficial and connected to positive outcomes. These include satisfaction, enjoyment, pleasure and relief. Teachers may feel relief when AI reduces routine workload, satisfaction when students benefit from personalized support, or enjoyment when AI helps them design more creative tasks. However, the study cautions that achievement emotions may stabilize AI use rather than drive deeper pedagogical transformation if teachers become comfortable with limited uses.
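The four categories above amount to a two-by-two grid: whether AI is appraised as an opportunity or a threat, crossed with whether the teacher feels in control. A minimal sketch can make that structure explicit. Note this is a simplification for illustration: treating achievement as the opportunity-plus-control cell after positive outcomes are experienced is a modelling choice here, not the study's formalism.

```python
# Simplified sketch of the review's emotion taxonomy as an appraisal grid.
# The boolean encoding and the "ambivalent" fallback are assumptions
# made for illustration.

def emotion_category(opportunity: bool, control: bool,
                     outcome_experienced: bool = False) -> str:
    """Map a teacher's appraisal of AI to one of the review's categories.

    opportunity        -- primary appraisal: AI seen as opportunity (True)
                          or threat (False)
    control            -- secondary appraisal: teacher feels able to
                          manage AI's consequences
    outcome_experienced -- positive outcomes already realized, turning
                          anticipation into achievement
    """
    if opportunity and control:
        return "achievement" if outcome_experienced else "challenge"
    if not opportunity and control:
        return "deterrence"
    if not opportunity and not control:
        return "loss"
    # Opportunity without control: hopeful but uneasy — the mixed,
    # ambivalent reactions the review also documents.
    return "ambivalent"
```

The grid also shows why support matters: training and institutional backing shift the `control` appraisal, moving a teacher from loss toward deterrence, or from deterrence toward challenge, without the technology itself changing.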

Positive emotions often emerge when AI aligns with teachers’ pedagogical goals. Teachers are more likely to feel empowered when AI helps them design better materials, support different learner needs or reduce repetitive work. AI can also encourage creativity by giving teachers fresh ideas and helping them test alternative approaches.

Negative emotions are often tied to unfamiliarity, lack of support and professional uncertainty. Teachers may feel overwhelmed by rapid technological change, unsure how to judge AI output or anxious about ethical issues. Fear of replacement is also significant. Some teachers worry that AI will reduce the value of human expertise or weaken their professional identity.

Overreliance is another emotional trigger identified in the study. Teachers may worry that students will depend too heavily on AI for writing, translation, vocabulary learning or task completion. They may also worry that they themselves could become too dependent on AI-generated resources, weakening professional judgment and creativity over time. This produces tension between efficiency and educational integrity.

Teachers’ emotions are rarely purely positive or negative. Many teachers experience ambivalence. They may see AI’s promise while worrying about its consequences. They may enjoy the efficiency of AI-generated materials while feeling uneasy about accuracy, ethics or student dependence. They may welcome personalization while fearing a decline in human interaction.

This mixed emotional landscape matters because emotions shape behaviour. Teachers who experience challenge emotions are more likely to engage with AI actively. Those who experience deterrence or loss emotions may avoid AI, comply passively or resist implementation. The study therefore argues that emotional support is not a secondary issue but central to whether AI integration succeeds.

A new framework links AI literacy, emotion and classroom adoption

The authors propose an expanded appraisal framework to explain how AI literacy and AI-induced emotions interact. Drawing on Appraisal Theory, the framework suggests that emotions arise from teachers’ cognitive evaluations of AI, especially whether they see it as an opportunity or threat and whether they feel capable of managing it.

In this framework, AI literacy shapes appraisal. A teacher who understands AI, can apply it in lessons, can evaluate its output and can reason ethically about its use is more likely to see AI as manageable and professionally useful. This can produce challenge or achievement emotions. On the other hand, a teacher with limited AI literacy may see AI as uncertain, risky or threatening, increasing anxiety, frustration or avoidance.

Primary appraisal involves judging whether AI supports or threatens professional goals. Teachers who believe AI can improve instruction, support learners and reduce workload may appraise it as an opportunity. Teachers who believe AI threatens autonomy, job security, student integrity or classroom relationships may appraise it as a threat.

Secondary appraisal involves judging control. Even if teachers see AI as useful, they may feel anxious if they lack training, institutional voice or technical support. Conversely, teachers who initially view AI with suspicion may become more confident if they receive guidance, time, peer support and practical classroom strategies.

The framework is important because it explains why the same AI tool can produce different reactions among different teachers. AI’s emotional impact does not come only from the technology itself. It comes from how teachers interpret the technology, how prepared they feel, what support they receive and what institutional conditions shape their work.

The study also highlights the role of social and institutional context. Teacher emotions are not private reactions isolated from the workplace. They are shaped by school policy, leadership, curriculum pressure, peer culture, professional expectations and wider beliefs about educational technology. When institutions promote AI without training or ethical guidance, teachers may feel pressured and unsupported. When institutions provide structured support, teachers may feel more capable and open to experimentation.

The authors argue that institutions must adopt a holistic strategy. Technical training alone is not enough. Language education institutions need AI literacy programmes, emotional support systems, ethical decision-making training and professional learning communities. Teachers need spaces to discuss anxiety, uncertainty and ethical dilemmas without fear of judgment.

Professional development should be modular and gradual. It should begin with foundational AI concepts and move toward advanced classroom applications. It should include prompt engineering for language tasks, AI-assisted assessment, feedback design, academic integrity, data privacy and bias recognition. Training should be tied to real classroom situations rather than abstract demonstrations.

The study also recommends scenario-based ethics training. Language teachers need to work through realistic dilemmas, such as whether students can use AI for essay drafts, how to respond to AI-assisted plagiarism, how to protect learner data, how to judge biased AI feedback and how to preserve student agency. These dilemmas require professional judgment, not just technical knowledge.

For teachers, the study describes a likely development path: familiarization, exploration, reflection and refinement. Teachers may begin with curiosity or uncertainty, experiment with AI tools, reflect on their classroom value and refine their use over time. This process requires both metacognitive awareness and emotional regulation.

The authors also call for future research that is more culturally and methodologically diverse. Current studies are concentrated in certain language education contexts and often rely on interviews or self-report surveys. More mixed-method, longitudinal and cross-cultural research is needed to understand how teachers’ AI literacy and emotions change over time and across educational systems.

The review warns against treating AI literacy and emotions as separate issues. A teacher’s knowledge of AI can shape emotional response, and emotional response can influence whether that teacher continues to learn, experiment or resist. This feedback loop will shape how AI enters real classrooms.

AI integration in language education should not be framed only as a matter of tool adoption. It is a professional transition that affects teacher identity, workload, ethics, confidence and classroom relationships. Successful adoption will require both cognitive readiness and emotional resilience.

  • FIRST PUBLISHED IN:
  • Devdiscourse