State surveillance enters new era as AI redraws citizenship and control

CO-EDP, VisionRI | Updated: 17-05-2025 10:06 IST | Created: 17-05-2025 10:06 IST

In a seismic transformation quietly reshaping public governance, artificial intelligence is overtaking statistics as the primary lens through which the modern state “sees” its citizens. What began as a series of discrete algorithmic innovations across social services, law enforcement, and border control is rapidly crystallizing into a new paradigm of statecraft - one governed not by variables and rules, but by patterns and associations. This epistemic shift carries profound implications for concepts as foundational as citizenship, democracy, and governance.

These findings are at the center of a newly published peer-reviewed study titled "Artificial Intelligence and the State: Seeing Like an Artificial Neural Network," by Törnberg et al., featured in the journal Big Data & Society. The paper argues that artificial intelligence, particularly through artificial neural networks (ANNs), constitutes a new technology of state governance. Far from a mere technical upgrade, this shift signals a transformation of how states perceive, process, and act upon data.

How does artificial intelligence redefine the state’s way of seeing?

At the core of this transformation lies a fundamental change in epistemology: from variables to patterns, from surveys to sensors, and from rules to associations. Traditional governance technologies relied on structured data collected through censuses, surveys, and statistical instruments. These methods produced populations as coherent statistical objects, defined by attributes like age, race, and income.

AI, by contrast, derives insights from unstructured or semi-structured data such as images, texts, and sensor feeds, without requiring predefined variables. Instead of representing populations through measurable characteristics, AI identifies emergent patterns in data flows. The implications are profound: what was once legible and contestable through democratic debate, like the use of racial categories in policy, is now abstracted into opaque algorithmic associations that resist public scrutiny or intervention.
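
To see what "patterns rather than variables" can mean in practice, consider a deliberately crude Python sketch: a handful of invented free-text reports are grouped purely by word overlap, so the groupings emerge from the data rather than from categories declared in advance. The texts, the similarity measure, and the resulting groups are illustrative assumptions, not material from the study.

```python
# Toy illustration of pattern-finding without predefined variables:
# short free-text reports are paired up by shared words, so any
# "categories" emerge from the data instead of being declared upfront.
from itertools import combinations

reports = [
    "long queue at the benefits office this morning",
    "stomach ache after eating at the market stall",
    "queue outside benefits office again, hours of waiting",
    "felt nauseous after food from the market",
]

def overlap(a: str, b: str) -> int:
    """Count shared words between two reports (a crude stand-in for learned similarity)."""
    return len(set(a.split()) & set(b.split()))

# Rank pairs of reports by similarity; the groupings are emergent, not predefined.
pairs = sorted(combinations(range(len(reports)), 2),
               key=lambda p: overlap(reports[p[0]], reports[p[1]]),
               reverse=True)
print(pairs[:2])  # [(0, 2), (1, 3)] -> queue-related reports pair up, food-related reports pair up
```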

This transformation enables what the authors term “inductive statecraft,” a mode of governance focused on pattern recognition, real-time response, and anticipatory intervention. For example, rather than enforcing food safety through mandated inspections, a city might use spikes in Google searches about nausea to dynamically dispatch health inspectors. The predictive orientation of AI lends itself to preemptive policy moves, effectively reconfiguring governance from reactive to speculative.
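
As a concrete, purely illustrative sketch of such an anticipatory trigger, the following Python snippet flags districts whose daily count of nausea-related searches jumps well above its recent baseline; the district names, counts, and threshold are invented assumptions, not data or code from the study.

```python
# Hypothetical sketch of an anticipatory trigger: dispatch food-safety
# inspectors to districts where nausea-related search volume spikes.
from statistics import mean, stdev

def spike_score(history: list[float], today: float) -> float:
    """How many standard deviations today's count sits above the recent baseline."""
    mu, sigma = mean(history), stdev(history)
    return (today - mu) / sigma if sigma > 0 else 0.0

def districts_to_inspect(search_counts: dict[str, list[float]], threshold: float = 3.0) -> list[str]:
    """Flag districts whose latest daily search count is anomalously high."""
    flagged = []
    for district, counts in search_counts.items():
        *history, today = counts
        if spike_score(history, today) >= threshold:
            flagged.append(district)
    return flagged

# Illustrative daily counts of nausea-related searches (last value = today).
counts = {
    "Riverside": [12, 15, 11, 14, 13, 12, 48],  # sudden spike -> inspect
    "Old Town":  [20, 22, 19, 21, 23, 20, 22],  # stable baseline -> no action
}
print(districts_to_inspect(counts))  # ['Riverside']
```

The point is less the arithmetic than the posture: the intervention is triggered by a statistical anomaly in a behavioral signal, before any complaint or inspection report exists.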

Where and how is AI already changing governance?

The transition is already underway across various “niches” of governance, where experimental technologies are deployed with minimal public oversight. The paper highlights how the Global South and securitized domains such as border control and law enforcement serve as incubators for AI-based systems. Migrant tracking systems using biometrics and predictive policing algorithms trained on historical crime data illustrate how marginalized populations are often the first to experience the consequences of AI governance.

In Europe, the United Nations High Commissioner for Refugees (UNHCR) has implemented digital identity systems for displaced populations, incorporating AI-enabled biometric tracking. In the United States, facial recognition algorithms are being integrated into public safety networks. The United Kingdom has introduced AI chatbots capable of simulating conversations with state entities. These deployments reveal a recurring pattern: AI is being used where legal protections are weakest or social resistance is least likely.

The study also frames AI adoption as a radical innovation requiring a sociotechnical transition. Bureaucracies, regulatory frameworks, and legacy infrastructures typically resist abrupt changes. Yet crises, whether economic, public health-related, or geopolitical, can serve as “windows of opportunity” for embedding these technologies. The COVID-19 pandemic, for example, significantly accelerated the normalization of digital surveillance and automated decision-making.

What are the consequences for politics, citizenship, and the future of governance?

Perhaps most alarmingly, the study suggests that AI may destabilize the political conditions necessary for democracy. Statistical governance allows for categorization, transparency, and critique. A racial variable in a dataset can be debated, contested, or eliminated. AI, however, operates through latent features that are neither visible nor easily understood. Discrimination, when it occurs, is hidden in high-dimensional vector spaces and cannot be easily attributed or rectified.
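
A minimal, entirely invented illustration shows why the two regimes invite such different forms of scrutiny: a named variable in a tabular record can be inspected, contested, or dropped, while a learned embedding offers no equivalent handle.

```python
# Purely hypothetical data illustrating why latent features resist the
# kind of scrutiny an explicit statistical variable invites.

# Statistical representation: a named variable that can be inspected,
# debated, or simply removed from the record.
tabular_record = {"age": 34, "income": 41000, "race": "B"}
contested = {k: v for k, v in tabular_record.items() if k != "race"}  # auditable removal

# Neural representation: the same person as a learned embedding. If protected
# attributes influence the vector at all, they are smeared across unnamed
# dimensions; there is no single coordinate to point to or delete.
embedding = [0.12, -0.87, 0.44, 1.03, -0.29, 0.61, -1.52, 0.08]

print(contested)       # {'age': 34, 'income': 41000}
print(len(embedding))  # 8 anonymous dimensions, none of them labelled 'race'
```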

This opacity raises pressing questions about explainability and accountability. Many jurisdictions require public decisions to be explainable, a principle AI often violates. A refusal of benefits or denial of asylum, when based on neural networks, may not come with an understandable justification. This undermines legal protections and erodes trust in public institutions.

Additionally, AI alters the conceptual boundaries of the state itself. Whereas statistical governance relied on geographically bounded, demographically defined populations, AI operates on cross-border data streams and mobile digital identities. Citizenship, once rooted in territorial jurisdiction and legal rights, becomes a matter of real-time behavioral profiling. The state is no longer a fixed political space but a data-processing engine acting on predictive scores and probabilistic assessments.

Yet, the study cautions against wholesale pessimism. AI also offers opportunities for enhancing state capacity after decades of neoliberal austerity. Rather than defaulting to criticism, the authors call for a constructive vision of AI governance that includes transparency, citizen agency, and ethical oversight.

FIRST PUBLISHED IN: Devdiscourse