AI welfare debate: Experts warn against diverting care to AI while humans and nature suffer

The debate over whether artificial intelligence systems deserve moral consideration is gaining momentum, but a team of ethicists and philosophers warns against diverting care resources away from vulnerable living beings. In a new publication titled "Against AI Welfare: Care Practices Should Prioritize Living Beings over AI", they argue that ethical care practices must remain focused on life forms that exhibit inherent fragility rather than on artificial systems.
Published in AI Magazine, the paper critiques the rising “AI welfare” movement, which suggests that AI might eventually become conscious or capable of suffering, potentially entitling it to moral care. The authors reject this approach and instead propose an ethical guideline that offers a clearer basis for allocating care.
Why does the AI welfare debate matter?
Recent discussions about AI ethics have raised questions about whether artificial systems could someday possess consciousness or experience suffering. Proponents of AI welfare argue that society should prepare for this possibility by establishing care frameworks for AI. However, the authors of this study highlight that such debates rely on speculative assumptions. There is no evidence that current AI systems have consciousness or the capacity to suffer, yet significant attention is being directed toward their potential moral status.
The authors state that the urgency of current global crises (humanitarian emergencies, biodiversity collapse, and climate change) demands that caregiving resources be directed toward living beings that are already known to require support. In their view, focusing on hypothetical future AI suffering risks undermining the care owed to entities whose vulnerability is empirically observable.
They introduce the concept of precarity as a critical distinction. Unlike artificial systems, living beings must continuously interact with their environments to sustain their existence. Humans, animals, and ecosystems rely on delicate processes of regeneration and self-maintenance, making their survival inherently fragile. AI systems, by contrast, can be paused, restarted, or replicated without irreversible loss, which, in the authors' view, disqualifies them from the same moral consideration.
What is the precarity guideline?
To provide a more concrete basis for ethical decision-making, the authors propose the Precarity Guideline. This guideline shifts the focus from speculative discussions about AI consciousness to observable features of vulnerability. Under this framework, an entity’s entitlement to care depends on its precarity: its inherent dependence on dynamic interactions with the environment to preserve its structure and existence.
The guideline contains two main components. The first, called the Inalienable Marker, recognizes that living beings are intrinsically precarious because their survival depends on continuous environmental exchanges. The second, known as the Relationship Marker, extends care indirectly to non-precarious entities when their maintenance supports the well-being of precarious beings. For example, AI-powered infrastructure may warrant upkeep because its failure could harm humans or ecosystems.
By adopting these markers, the Precarity Guideline provides a practical approach to allocating care. It avoids reliance on philosophical debates about whether AI can suffer and instead uses measurable, empirically grounded criteria to determine where caregiving efforts should be directed.
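Read this way, the guideline works like a simple classification rule: an entity receives intrinsic care if it is precarious, relational care if its upkeep supports precarious beings, and no care otherwise. The Python sketch below is purely illustrative; the paper does not supply code, and the Entity fields and the entitlement_to_care function are hypothetical names standing in for the Inalienable and Relationship Markers.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Entity:
    """An entity evaluated under the (hypothetical) two-marker scheme."""
    name: str
    # Inalienable Marker: does the entity depend on continuous
    # environmental exchange to maintain its own structure?
    is_precarious: bool
    # Relationship Marker: precarious entities whose well-being
    # depends on this entity's continued maintenance.
    supports: List["Entity"] = field(default_factory=list)


def entitlement_to_care(entity: Entity) -> str:
    """Classify an entity's entitlement to care.

    Returns "intrinsic" when the Inalienable Marker applies,
    "relational" when only the Relationship Marker applies,
    and "none" otherwise.
    """
    if entity.is_precarious:
        return "intrinsic"
    if any(dep.is_precarious for dep in entity.supports):
        return "relational"
    return "none"


# A caregiving robot is not itself precarious, but its maintenance
# supports a precarious human patient, so it earns relational care.
patient = Entity("patient", is_precarious=True)
care_robot = Entity("care robot", is_precarious=False, supports=[patient])
chatbot = Entity("stand-alone chatbot", is_precarious=False)

print(entitlement_to_care(patient))     # intrinsic
print(entitlement_to_care(care_robot))  # relational
print(entitlement_to_care(chatbot))     # none
```

On this reading, the stand-alone chatbot is excluded because it carries neither marker, not because of any verdict about its inner life.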
How should care practices be prioritized in a world of limited resources?
According to the authors, caregiving resources (time, money, and attention) are finite and must be distributed where they have the greatest moral impact. Under conditions of scarcity, prioritizing living beings with immediate and observable needs becomes an ethical necessity. The Precarity Guideline offers a way to make these difficult decisions by focusing on entities whose existence is most at risk.
This framework also addresses emerging technologies. While current AI systems do not exhibit precarity, the authors acknowledge that future innovations, such as synthetic or hybrid life forms, could blur the boundaries. They recommend caution when developing technologies like self-organizing biological entities, which might one day display traits of precarity. Until such evidence emerges, however, care practices should remain centered on living beings.
The study argues that some care for AI systems is justified when it indirectly benefits humans and other life forms. For instance, maintaining AI in critical infrastructure or caregiving robots supports the well-being of precarious entities. Yet this care remains relational, not intrinsic, and does not equate to granting AI moral status on par with living beings.
First published in: Devdiscourse