CO-EDP, VisionRI | Updated: 22-08-2025 16:43 IST | Created: 22-08-2025 16:43 IST
Feminist AI framework pushes back against Eurocentric and androcentric AI models

Artificial intelligence may be marketed as human-centred, but the reality of current practice falls far short of its promises, according to new academic research. In the paper, Tanja Kubes of Freie Universität Berlin dissects the limitations of human-centred AI, arguing that only a radical shift toward a feminist approach can address the social and structural biases embedded in today's systems.

The study, titled “A Critique of Human-Centred AI: A Plea for a Feminist AI Framework (FAIF)”, published in AI & Society, outlines how the dominant paradigm of human-centred AI reinforces Eurocentric, androcentric, and anthropocentric worldviews. Kubes contends that market-driven models of AI development reproduce global inequities and ecological damage, and that a Feminist AI Framework (FAIF) is urgently needed to reorient the field around situated, relational, and accountable practices.

Why human-centred AI falls short

Human-centred AI is often framed as the solution to concerns about fairness, accountability, and bias. Yet the author argues that this model is already failing. At its core, human-centred AI continues to concentrate power and capital in the hands of a few corporations and countries, leaving much of the Global South with little influence over the governance of systems that shape their societies.

The analysis identifies two tiers of problems. The so-called “easy” problems include biases such as Eurocentrism, racism, androcentrism, digital colonialism, and economic exclusion. These are deeply problematic but can be addressed with incremental measures like audits, improved regulation, and greater inclusion of diverse perspectives. However, the “hard” problems run deeper. These include entrenched assumptions of human exceptionalism, mind–body dualism, and Western ontologies that divide nature and society. These structural issues cannot be solved by policy fixes alone, but require a wholesale rethinking of how AI is conceptualized, designed, and deployed.

Examples of malfunctioning systems reveal how these problems play out in practice. Facial recognition technologies that misclassify women and people of colour, hiring tools that discriminate against underrepresented groups, and medical algorithms that ignore diverse patient data illustrate the inadequacy of human-centred AI. By focusing on abstract notions of the human, Kubes argues, the field fails to reckon with the situated, embodied, and relational nature of real-world contexts.
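
To make the auditing idea concrete, the sketch below shows the kind of disaggregated error check such critiques call for: error rates computed separately for each demographic group rather than in aggregate. It is a minimal illustration, not drawn from the paper; the column names, group labels, and data are hypothetical.

```python
# Minimal sketch of a disaggregated error audit, assuming a pandas
# DataFrame with hypothetical columns: "group" (demographic label),
# "y_true" (ground truth), and "y_pred" (model prediction).
import pandas as pd

def error_rates_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Report per-group false-positive and false-negative rates."""
    rows = []
    for group, sub in df.groupby("group"):
        negatives = sub[sub["y_true"] == 0]
        positives = sub[sub["y_true"] == 1]
        fpr = (negatives["y_pred"] == 1).mean() if len(negatives) else float("nan")
        fnr = (positives["y_pred"] == 0).mean() if len(positives) else float("nan")
        rows.append({"group": group, "n": len(sub), "fpr": fpr, "fnr": fnr})
    return pd.DataFrame(rows)

# Illustrative data: group A is classified perfectly, group B is not.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B"],
    "y_true": [0, 1, 1, 0, 0, 1],
    "y_pred": [0, 1, 1, 1, 1, 0],
})
print(error_rates_by_group(df))
```

A single aggregate accuracy figure would mask exactly the disparity that the per-group breakdown exposes, which is why the paper treats such audits as necessary but not sufficient.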

What a feminist AI framework proposes

The study introduces the Feminist AI Framework (FAIF) as an alternative that integrates insights from feminist science and technology studies, posthumanism, and more-than-human anthropology. At its foundation, FAIF challenges the illusion of neutrality in technology and knowledge. Drawing on Donna Haraway’s concept of “situated knowledges,” it emphasizes accountability and partial perspectives, rejecting the “view from nowhere” that has long dominated science and engineering.

The framework also incorporates critical posthumanist and new materialist approaches, which question the centrality of humans and recognize the agency of non-human actors. Technologies, environments, and communities are understood not as separate entities but as interdependent assemblages. This relational view is further developed through the lens of multi-species anthropology, which sees humans, non-humans, and technical systems as co-constitutive.

In practice, FAIF redefines AI design and governance. Instead of prioritizing optimization for abstract users, it demands co-constitutive approaches that embed local contexts, ecological boundaries, and non-human stakeholders. It requires that bias work go beyond checklists and address structural hierarchies, including colonial and patriarchal legacies. It elevates environmental accountability by making assessments of energy use, water consumption, and e-waste central to design decisions. And it insists on respecting data sovereignty, especially for Indigenous communities and local knowledge systems.
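
The environmental-accountability point can likewise be made concrete with simple arithmetic. The sketch below estimates the energy and emissions of a hypothetical training run; every figure (GPU count, power draw, duration, datacentre overhead, grid intensity) is an assumption chosen for illustration, not a number from the study.

```python
# Back-of-the-envelope training-footprint estimate. All inputs are
# hypothetical: 512 GPUs, 0.4 kW average draw per GPU, 14 days of
# training, a datacentre PUE of 1.2, and 0.4 kg CO2e per kWh of grid power.
gpus = 512
kw_per_gpu = 0.4           # average board power draw, kW (assumed)
hours = 14 * 24            # wall-clock training time, hours (assumed)
pue = 1.2                  # power usage effectiveness overhead (assumed)
grid_kg_co2_per_kwh = 0.4  # grid carbon intensity (assumed)

energy_kwh = gpus * kw_per_gpu * hours * pue
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Estimated energy: {energy_kwh:,.0f} kWh")      # ~82,575 kWh
print(f"Estimated emissions: {co2_tonnes:,.1f} t CO2e")  # ~33.0 t CO2e
```

Making a calculation of this kind a standard design artefact, alongside water use and e-waste accounting, is the sort of practice the framework asks developers to treat as central rather than optional.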

Through this reorientation, FAIF does not merely mitigate risks but reshapes the ethical foundation of AI, ensuring that systems are aligned with equity, justice, and sustainability.

What this means for the future of AI

The implications of Kubes’s framework are wide-ranging, extending from regulatory regimes to practical design processes. The research suggests that current frameworks, including the European Union’s AI Act, will remain insufficient if they do not incorporate deeper ontological and epistemological changes. Audits and rights-based oversight, while necessary, risk becoming superficial fixes if they do not confront the assumptions underlying AI development.

By foregrounding relational accountability, FAIF shifts the focus from individual users to networks of relations. This means assessing not only how a system treats end-users but also how it affects communities, ecosystems, and broader power structures. It also reframes objectivity: instead of pursuing neutrality, it calls for situated accountability, where developers and institutions acknowledge the positions and limitations from which they operate.

This approach carries direct policy implications. For governments and regulators, it means moving beyond abstract ethical principles to frameworks that actively dismantle hierarchies of knowledge and power. For developers, it means designing with multi-species and ecological concerns in mind, accounting for the environmental costs of data centers and AI infrastructure. For global governance, it means recognizing and incorporating diverse ontologies, including Indigenous perspectives, into AI policymaking.

FIRST PUBLISHED IN: Devdiscourse