How linguistic frames influence AI policy, education and ethics

Artificial intelligence is not only defined by its algorithms and applications but also by the language that frames it. A new study by Tifany Petricini of Penn State Erie – The Behrend College reveals that the way AI is described profoundly influences how societies, policymakers, and educators perceive its role in culture and governance.

The research, titled “The Power of Language: Framing AI as an Assistant, Collaborator, or Transformative Force in Cultural Discourse” and published in AI & Society, examines the rhetorical and semantic frameworks that position AI as a tool, a partner, or a transformative force. The study argues that these framings are not neutral but carry cultural, political, and ethical weight, shaping both public imagination and institutional responses.

How linguistic frames influence perceptions of AI

The study identifies three dominant frames: cognitive offloading, augmented intelligence, and co-intelligence. Each of these linguistic choices embeds assumptions about what AI is and how it should interact with humans.

Cognitive offloading presents AI as a means of reducing human mental workload. This view highlights efficiency gains and productivity but raises concerns about dependency and reduced autonomy. By framing AI as a tool to handle cognitive burdens, societies risk normalizing reliance on systems that are not infallible, potentially weakening human critical judgment over time.

Augmented intelligence emphasizes AI as an extension of human ability. This optimistic narrative encourages a vision of collaboration where AI supports human decision-making. Yet the study cautions that this framing, while reassuring, can obscure structural issues such as labor displacement and the concentration of decision-making power in AI-driven systems.

Co-intelligence positions AI as a collaborator, creating a shared space where humans and machines produce meaning together. This framing offers a synergistic and even utopian vision of human–machine partnerships. However, the study highlights that such narratives blur distinctions between tools and agents, reinforcing anthropomorphic views that can distort both expectations and policy.

These framings are not just descriptive; they act as cultural signposts that influence how societies choose to regulate, adopt, and educate around AI.

What theoretical frameworks reveal about AI and language

To unpack these framings, the study draws on two major traditions: general semantics and media ecology. General semantics, rooted in Alfred Korzybski’s assertion that “the map is not the territory,” warns that words about AI often misrepresent the underlying technical reality. Descriptions that attribute thinking, creativity, or learning to machines are, in this view, category errors that mislead people into treating systems as human-like actors.

Media ecology, shaped by thinkers such as Marshall McLuhan, Neil Postman, and Walter Ong, emphasizes that communication technologies form environments that shape thought and culture. AI, when described as intelligent or collaborative, is not only a tool but part of a media ecosystem that reshapes how people view agency, trust, and authority. Petricini argues that these linguistic frames form “semantic environments” that shape imagination, policy, and cultural norms.

By placing AI discourse within these frameworks, the study reveals how misalignments between language and technical reality create distortions. For instance, when AI is linguistically elevated to the status of an autonomous agent, regulators may overemphasize machine responsibility and underemphasize human accountability.

What is at stake for policy, education and culture

The implications of these framings extend beyond semantics. The study finds that policy debates, education systems, and cultural narratives are all shaped by the language used to describe AI.

In policy, terms such as “trustworthy AI” or “high-risk AI” influence legal frameworks like the European Union’s AI Act. By anthropomorphizing or exaggerating AI’s autonomy, these discourses risk regulating machines as if they were independent actors rather than systems built and controlled by people. Such linguistic distortions can divert attention from human accountability and ethical responsibility in AI development.

In education, anthropomorphic metaphors such as AI “learning” or “thinking” create misconceptions for students and teachers. These terms can either inspire misplaced fear or encourage over-trust in AI systems. By reshaping how knowledge and learning are understood, the study warns, such framings may erode human-centered approaches to teaching and critical inquiry.

Culturally, the dominance of Western terminology risks sidelining diverse perspectives. Petricini points to the danger of “semantic imperialism,” where Western narratives impose a one-size-fits-all framing of AI that marginalizes non-Western traditions. For instance, Japan’s concept of Society 5.0 presents an alternative model in which AI is integrated into society with a participatory and pluralistic orientation. Recognizing such diversity, the study argues, is essential for creating more balanced global conversations about AI.

First published in: Devdiscourse