AI’s autonomy is a myth: Humans power every move behind the machines

A new research paper posted on arXiv dismantles the widespread belief that artificial intelligence (AI) operates independently of human involvement. It sheds light on the ethical imperative to recognize, protect, and value the human contributions that underpin AI systems.
Titled "What Does ‘Human-Centred AI’ Mean?," the research offers a provocative rethinking of what it means to place humans at the center of AI design and operation. The study calls for a move away from surface-level ethics toward structural reforms that make hidden labour visible, prevent displacement, and resist the hype that distorts public understanding.
What does it really mean to call AI human-centred?
The term human-centred artificial intelligence (HCAI) has become a buzzword in technology policy and ethics, often implying that AI systems are designed to serve human needs. However, this research reveals that this framing is shallow unless it addresses the deeper relationship between human cognition and technology. According to the study, AI is not a self-sufficient technology but a sociotechnical system that fundamentally depends on human input, oversight, and labour, whether visible or hidden.
The study categorizes human-AI interactions into three key relationships: enhancement, where technology amplifies human skills; replacement, where it takes over tasks without necessarily harming cognitive functions; and displacement, where AI undermines human capabilities by deskilling or masking human contributions. This distinction is crucial for understanding the ethical stakes of AI deployment. While enhancement can lead to positive outcomes, displacement raises concerns about human dignity, skill erosion, and the invisibility of human work.
The study underscores that discussions about HCAI often focus narrowly on user interface design or high-level ethical principles, neglecting the underlying power structures and labour dynamics embedded in AI systems. By shifting the perspective, the author argues for a more robust interpretation of human-centredness that acknowledges these sociotechnical realities.
Why is human labour hidden behind AI systems?
The study raises a critical question about the extent to which modern AI systems rely on hidden human labour. Contrary to the popular image of AI as an autonomous entity, the research highlights how tasks such as data labeling, content moderation, and algorithmic fine-tuning are carried out by large, often underpaid workforces. These contributions are systematically obscured by the industry, creating a false narrative that AI systems operate independently.
The author connects this obfuscation to broader historical patterns where technology has long been used to hide or minimize human involvement. From early calculators to garment factory automation, human skills have been alternately enhanced, replaced, or marginalized by technological developments. In today’s AI industry, this phenomenon is intensified by the reliance on global labour networks that remain invisible to end-users.
Large language models, image generators, and chatbots epitomize this issue. While they appear to generate responses autonomously, they are trained on massive datasets curated by human workers and rely on ongoing human feedback to correct outputs. The so-called intelligence of these systems is deeply rooted in the cognitive labour embedded in their design and maintenance. This hidden workforce, often based in regions with lower labour costs, bears the brunt of ethical risks while receiving little recognition.
The study argues that making these contributions visible is essential for an honest discourse on AI ethics. By exposing the human backbone of AI, policymakers and developers can better address issues of fairness, accountability, and labour rights.
How should AI ethics and policy respond to this reality?
The research issues a clear warning: current approaches to human-centred AI risk becoming superficial if they fail to address these sociotechnical dimensions. The paper calls for a paradigm shift in how AI ethics, governance, and design are approached.
First, the paper stresses the importance of preventing harmful displacement of human skills. When AI systems replace human roles without safeguards, they erode not only employment but also cognitive capabilities and societal resilience. Industries must adopt practices that preserve and enhance human agency rather than rendering it obsolete.
Second, transparency in AI development must go beyond algorithmic explainability to include visibility into the human labour embedded in AI. This requires policy interventions that compel companies to disclose how their systems are built and maintained, including the conditions of workers who contribute to data curation and moderation.
Third, the author critiques the hype-driven narrative surrounding AI, which often anthropomorphizes machines and exaggerates their autonomy. Such narratives fuel unrealistic expectations, obscure accountability, and divert attention from the real human factors at play. The study warns against over-reliance on benchmarks and performance metrics that fail to capture the full ethical context of AI.
Finally, the paper urges stakeholders to reclaim human cognition as central to AI development. This means recognizing that AI does not exist independently of human thought and labour but is an extension of it. By embedding this understanding into regulations, corporate policies, and research frameworks, society can move toward AI systems that genuinely prioritize human well-being.
FIRST PUBLISHED IN: Devdiscourse