AI systems are absorbing everything and hiding what they exclude
A new philosophical and socio-technical analysis by Katia Schwerzmann of Ruhr University Bochum argues that contemporary AI is not merely advancing computation but transforming the very logic through which societies define inclusion, exclusion, and knowledge.
The study, titled “From Enclosure to Foreclosure and Beyond: Opening AI’s Totalizing Logic,” published in AI & Society, reframes AI through the historical concepts of enclosure and foreclosure, revealing how modern machine learning systems extend older forms of extraction, classification, and control into a new, more pervasive paradigm.
AI expands from selective enclosure to totalizing control
The research traces the roots of modern AI back to the concept of enclosure, a foundational mechanism in both political and economic systems. Historically, enclosure referred to the act of drawing boundaries, dividing spaces, and determining what belongs inside or outside. This logic shaped everything from land ownership to state sovereignty and capitalist accumulation, relying on visible and contestable acts of exclusion.
According to the study, contemporary AI systems inherit this logic but fundamentally transform it. Traditional enclosures operated through explicit distinctions, marking clear lines between inclusion and exclusion. Machine learning systems, by contrast, are driven by what the study describes as a shift toward foreclosure, a logic that seeks to eliminate the outside entirely by absorbing everything into its domain.
This transformation is rooted in the architecture of big data. Unlike traditional archives, which are constrained by physical limits and selective curation, modern data infrastructures are designed to capture, store, and process vast quantities of information with the ambition of representing reality in its entirety. The study emphasizes that this aspiration toward exhaustiveness is central to how AI systems function, as they depend on increasingly large and diverse datasets to improve predictive accuracy.
However, this apparent inclusivity masks a deeper structural change. While traditional archives made their exclusions visible, big data systems obscure them. The act of selection still occurs, but it is embedded within automated processes and distributed infrastructures, making it harder to identify and challenge. As a result, the logic of enclosure does not disappear but becomes less perceptible, operating through systems that claim to reflect reality rather than shape it.
By presenting themselves as neutral representations of the world, AI systems gain authority as objective tools, even though they are built on layers of human decisions, biases, and economic incentives. The study argues that this transformation makes AI more difficult to contest politically, as its operations are framed as technical rather than ideological.
Machine learning redefines inclusion while masking exclusion
A key focus of the research is how machine learning models handle data, particularly the transformation of outliers into so-called edge cases. In traditional statistical models, outliers were often excluded because they disrupted the ability to generalize patterns. This exclusion was explicit, requiring human judgment and interpretation.
Machine learning systems, however, operate differently. Their goal is not to explain data but to predict outcomes, which requires incorporating as much variation as possible. In this context, outliers are no longer discarded but integrated into the model to improve its ability to handle rare or unexpected scenarios. This shift creates the appearance of greater inclusivity, as more data points are incorporated into the system.
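The contrast can be sketched with a minimal toy example (the data and thresholds here are illustrative, not drawn from the study): a classical pipeline drops extreme values before modeling, making the exclusion visible and auditable, while an ML-style pipeline keeps every point as a potential edge case.

```python
import statistics

# Toy sensor readings with one extreme value.
readings = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 42.0]

# Classical approach: explicitly exclude outliers before modeling,
# e.g. anything more than 2 standard deviations from the mean.
mean = statistics.mean(readings)
stdev = statistics.stdev(readings)
inliers = [x for x in readings if abs(x - mean) <= 2 * stdev]

# The exclusion is visible and contestable: we can list what was dropped.
excluded = [x for x in readings if abs(x - mean) > 2 * stdev]
print("kept:", inliers)
print("explicitly excluded:", excluded)

# ML-style approach: nothing is discarded; the extreme value is kept
# as an "edge case" the model is expected to learn to handle.
training_set = list(readings)
print("training set size:", len(training_set))
```

In the first pipeline the dropped value is an explicit, inspectable artifact; in the second, the decision to absorb it disappears into the training process itself, which is the shift the study describes.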
However, the study finds that this inclusion does not eliminate underlying inequalities. Instead, it transforms them into resources for optimization. By integrating edge cases, machine learning models improve their predictive performance, but they do so within the constraints of the existing dataset and its embedded norms. This means that the system continues to reflect and reinforce the patterns present in the data, even as it claims to encompass all possibilities.
The use of synthetic data further illustrates this dynamic. Machine learning systems can generate artificial data points to fill gaps in the dataset, creating representations of scenarios that may not exist in reality. While this enhances the model’s ability to generalize, it also reinforces the statistical structure of the original data, extending its influence rather than challenging it.
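A small sketch can make this concrete. The function below generates synthetic points by interpolating between random pairs of real samples, the core idea behind SMOTE-style oversampling (the data and function name are illustrative assumptions, not from the study). Note that every synthetic point necessarily lies inside the envelope of the original data, which is precisely how synthetic data extends a dataset's statistical structure rather than challenging it.

```python
import random

random.seed(0)

# Original (real) one-dimensional samples from an underrepresented group.
real = [2.0, 2.4, 2.1, 2.6]

def synthesize(samples, n):
    """Generate n synthetic points by linear interpolation between
    random pairs of real samples (a SMOTE-like oversampling idea)."""
    out = []
    for _ in range(n):
        a, b = random.sample(samples, 2)
        t = random.random()
        out.append(a + t * (b - a))
    return out

synthetic = synthesize(real, 5)

# Every synthetic point falls between the min and max of the real data:
# the dataset grows, but never leaves its original statistical envelope.
assert all(min(real) <= x <= max(real) for x in synthetic)
print(synthetic)
```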
This process has broader social implications. The study highlights how algorithmic systems redefine recognition, shifting it from a political and ethical process to a technical one based on classification and prediction. In this framework, fairness becomes a matter of equal representation within the dataset, rather than addressing the structural conditions that produce inequality.
Consequently, marginalized groups may be more visible within AI systems, but this visibility does not necessarily translate into improved outcomes. Instead, it can lead to increased surveillance and control, as predictive models are used in areas such as risk assessment, policing, and resource allocation. The study warns that this transformation risks normalizing inequality by embedding it within seemingly neutral systems.
AI’s “neutral voice” conceals hidden norms and power
The research also examines the emergence of a new form of subjectivity in AI systems, particularly in large language models that communicate using the first-person pronoun. These systems often present themselves as neutral, objective, and without personal perspective, reinforcing the perception that their outputs are unbiased reflections of data.
However, the study argues that this neutrality is illusory. AI systems are shaped not only by the data they are trained on but also by the processes used to fine-tune their behavior. These processes involve human decisions about what constitutes appropriate, helpful, or safe responses, embedding specific values and norms into the system.
Despite this, the AI’s use of a unified “I” creates the impression of a singular, authoritative voice that represents all perspectives. This rhetorical position allows the system to claim both universality and neutrality, masking the complexity of its underlying construction. The study identifies this as a form of epistemic foreclosure, where the conditions that produce knowledge are hidden, making it difficult to question or critique the system’s outputs.
This dynamic has consequences for how users interact with AI. By presenting itself as an objective interlocutor, the system encourages trust and reliance, even though its responses are generated through probabilistic processes rather than understanding or reasoning. The study suggests that this can lead to new forms of dependency, where users defer judgment to AI systems without fully recognizing their limitations.
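What "probabilistic processes" means here can be illustrated with a toy sketch of next-token sampling (the prompt and distribution are hypothetical): the model's "answer" is a weighted random draw from a probability distribution, not a verdict reached by reasoning.

```python
import random

random.seed(1)

# A hypothetical next-token distribution a language model might assign
# after some prompt; the tokens and probabilities are invented.
next_token_probs = {"A": 0.55, "B": 0.30, "C": 0.15}

def sample_token(probs):
    """Draw one token in proportion to its probability: the output is
    a weighted random choice over candidates, not a judgment."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Sampling the same distribution repeatedly yields different answers,
# roughly in proportion to the underlying probabilities.
draws = [sample_token(next_token_probs) for _ in range(1000)]
print({t: draws.count(t) / len(draws) for t in next_token_probs})
```

The same prompt can thus produce different outputs on different runs, which is part of why the study cautions against deferring judgment to a system whose confident voice masks a stochastic mechanism.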
The emergence of this “neutral voice” also reflects broader changes in how knowledge is produced and validated. As AI systems become more integrated into everyday life, they shape not only what information is available but also how it is interpreted and used. This influence extends beyond individual interactions, affecting institutional practices and societal norms.
A system that seeks totality but cannot achieve it
Modern AI is characterized by a “totalizing desire” to represent and process all aspects of reality. This ambition drives the expansion of data collection, model complexity, and system integration, creating increasingly comprehensive infrastructures of knowledge and control.
However, the research notes that this totality can never be fully realized. AI systems are limited by computational constraints, data availability, and the socio-political contexts in which they are developed. Their outputs are always shaped by the choices and assumptions embedded in their design, even as they present themselves as complete and objective representations of the world.
This tension between aspiration and limitation defines the current trajectory of AI. While the technology promises greater efficiency, accuracy, and inclusivity, it also introduces new forms of opacity and control. By masking the processes of selection and exclusion, AI systems make it more difficult to identify and challenge the power structures they embody.
The study calls for greater transparency, accountability, and awareness of the socio-technical dynamics that shape AI systems. This includes recognizing the role of economic and political forces in driving AI innovation and ensuring that alternative models of development remain possible.
- FIRST PUBLISHED IN:
- Devdiscourse

