Emerging technologies are testing societal limits, not just performance

CO-EDP, VisionRI | Updated: 22-07-2025 15:54 IST | Created: 22-07-2025 15:54 IST

Advanced technologies such as autonomous vehicles, facial recognition, and AI diagnostics are increasingly moving from labs into public environments. Amidst this shift, a pressing question looms: what exactly is being tested? A new academic study reveals that in these real-world experiments, it is not just the technologies under scrutiny, but society itself.

The article, “How Tech Trials Put Society to the Test,” by Noortje Marres, published in Theory, Culture & Society (2025), argues that contemporary technology pilots fundamentally alter the social contract. Drawing on empirical examples from the UK, ranging from driverless car trials to AI diagnostic tools in healthcare, the author makes the case that such trials turn society into a testing ground, while at the same time undermining its capacity to respond critically and democratically.

When testing innovation means testing society

The author traces how testing practices have moved from the lab into society. Technologies such as self-driving cars and AI tools, once trialed in remote or controlled environments, are now piloted on city streets, in hospitals, and through public services. This transition transforms trials from technical assessments into "trials of implicitness": experiments that incorporate new technologies into everyday life in ways that are difficult to detect, question, or oppose.

A key case explored by the author is the Streams app trial at London’s Royal Free Hospital, where DeepMind obtained access to 1.6 million NHS patient records without consent. Though framed as a pilot to enhance clinical efficiency, the trial also served as a social stress test for data governance, privacy norms, and institutional accountability, yet without any formal assessment of those societal dimensions.

Similarly, the deployment of live facial recognition by UK police forces, such as the South Wales Police and at London’s King’s Cross Station, extended beyond evaluating algorithmic accuracy. These pilots tested public tolerance for biometric surveillance, the limits of lawful experimentation, and the restructuring of police-citizen interactions. In each case, what is under trial is not just the technology but the social system it enters.

What is the “Durkheim Test” and why do we need it?

To counter the creeping normalization of implicit testing, Marres introduces a critical conceptual intervention: the “Durkheim test.” This approach calls for creating deliberate, participatory, and transparent experimental settings in which society can evaluate intelligent technologies against its own objectives, rather than being reshaped by them without consent.

The Durkheim test is proposed as a corrective to infrastructural experimentation, which she argues is increasingly dominant. This refers to trials that embed new technologies directly into public infrastructures, like roads, medical records, or urban surveillance grids, without adequate evaluative protocols or public involvement. The consequence is a weakening of society’s capacity to judge innovation on terms other than those set by the developers themselves.

Rather than isolating variables, the Durkheim test emphasizes social inquiry, collective reasoning, and norm contestation. It aims to make explicit what current trials render implicit: Who benefits from this innovation? What are the trade-offs? Whose values are being encoded into the system? What democratic procedures, if any, are available to challenge its legitimacy?

In other words, while current pilots test whether society can absorb technological change, the Durkheim test asks whether the technologies align with society’s needs, values, and institutions.

How trials displace accountability and redefine governance

The author warns that the expansion of experimental practices into public domains creates a political vacuum. Trials are often labeled as temporary, flexible, and low-stakes, terms that obscure their high-impact nature. In reality, these pilots initiate lasting changes to social organization, data regimes, and institutional responsibilities, often without clear oversight or public debate.

This experimental mode of governance blurs the lines between development and deployment, between inquiry and policy. For instance, when facial recognition systems are trialed on public streets, the distinction between pilot and implementation becomes functionally irrelevant, especially for those subjected to the technology in real time.

In such settings, the public is not a participant in a democratic process but rather a passive test subject. Social roles and institutional obligations shift, yet few mechanisms exist for redress or contestation. Even more troubling is the fact that these trials can serve to preempt or delay regulation. By framing innovation as an ongoing test, firms and institutions can argue that it is too early for governance, even as the technology reshapes social relations.

The author argues that this dynamic represents a new phase in the politics of public experimentation, one where the very ability of society to judge, respond, or resist is being eroded by the conditions of the trial itself.

The article calls for reclaiming experimentation as a democratic act, not merely a technical process. This means creating space for dissent, deliberation, and normative reflection within the structure of technology trials. It requires shifting from an experimentalism driven by technological momentum to one anchored in collective evaluation.

  • FIRST PUBLISHED IN:
  • Devdiscourse