Ethical dangers of recursive AI systems and digital human twins

Artificial intelligence is reshaping how humans see themselves, raising urgent ethical questions about autonomy, agency, and the construction of digital identities. A new study by Bogdan-Andrei Lungu of Babeș-Bolyai University, Romania, investigates how machine learning systems recursively build digital versions of people and argues for the need to rethink identity and ethics in an algorithmically mediated world.
The research, titled “Machines looping me: artificial intelligence, recursive selves and the ethics of de-looping” and published in AI & Society, examines how recursive machine learning processes create digital human twins, the risks of being trapped in self-reinforcing loops, and the normative need for what the author calls an ethics of “de-looping.”
How machine learning creates recursive digital selves
The study introduces the concept of the “recursivisation of personhood,” describing how individuals become entangled in feedback loops generated by machine learning infrastructures. Through constant data capture and algorithmic processing, people are represented as digital human twins: datafied versions of themselves that evolve with every interaction.
These recursive processes are evident across widely used technologies, from conversational agents and recommender systems to IoT devices and predictive platforms. Each interaction generates new data that refines the digital twin, feeding back into algorithms that anticipate, guide, and even manipulate user behavior. Over time, this recursive construction blurs the boundary between the human self and its algorithmic representation.
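To make this feedback cycle concrete, here is a minimal Python sketch of the kind of loop the study describes. The names (DigitalTwin, observe, recommend) and the moving-average update are illustrative assumptions, not the paper's terminology or any real platform's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    # Interest scores inferred from behavior, keyed by topic (the "datafied self").
    interests: dict[str, float] = field(default_factory=dict)

    def observe(self, topic: str, engagement: float) -> None:
        # Fold each new interaction back into the model (exponential moving average).
        prior = self.interests.get(topic, 0.0)
        self.interests[topic] = 0.8 * prior + 0.2 * engagement

    def recommend(self) -> str:
        # Serve whatever the model already believes the user prefers, which in turn
        # biases the next observation -- the loop closes here.
        return max(self.interests, key=self.interests.get) if self.interests else "default"

twin = DigitalTwin()
for topic, engagement in [("sports", 0.9), ("news", 0.4), ("sports", 0.7)]:
    twin.observe(topic, engagement)
    print(twin.recommend())  # each output is shaped by all prior interactions
```

The point of the sketch is structural: there is no step at which the user sees or approves the updated representation, yet every future recommendation flows from it.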
The author warns that this process can diminish autonomy and self-determination. The more these loops define a person’s digital presence, the more individuals risk being confined by algorithmic expectations rather than their own evolving sense of identity. The recursive nature of these systems means that human agency becomes increasingly subordinated to machine-mediated patterns.
Why recursive loops threaten autonomy and social rights
The research emphasizes that recursive machine learning loops are not neutral. They are deeply embedded in commercial and institutional infrastructures that shape how people are categorized, evaluated, and treated. When digital human twins are built and refined within such systems, they become tools of prediction and control that often escape user awareness.
The case of mental health chatbots illustrates the dilemma. These AI systems engage in ongoing conversations with users, creating recursive exchanges that not only simulate therapeutic dialogue but also construct data-driven profiles of emotional states. While such tools can expand access to mental health support, they also risk reinforcing alienation, eroding privacy, and embedding bias. Users become caught in loops where their data is repeatedly folded back into systems that may constrain their choices, perceptions, and opportunities.
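A hypothetical sketch of that chatbot loop, in the same spirit: each message updates an inferred emotional profile, and the profile then conditions the bot's reply, so the user increasingly converses with a system shaped by its own past inferences. The scorer, thresholds, and replies here are stand-ins, not any real therapy bot's logic.

```python
def score_sentiment(message: str) -> float:
    # Placeholder scorer: negative-keyword count mapped to [-1, 1].
    negative = {"sad", "anxious", "tired", "hopeless"}
    hits = sum(word in negative for word in message.lower().split())
    return max(-1.0, -0.5 * hits) if hits else 0.2

mood = 0.0  # running emotional-state estimate: the data-driven "profile"
for message in ["I feel anxious today", "Still tired and sad", "Maybe a bit better"]:
    mood = 0.7 * mood + 0.3 * score_sentiment(message)  # fold the message back in
    reply = "Would you like a calming exercise?" if mood < -0.1 else "Glad to hear it."
    print(f"profile={mood:+.2f} -> {reply}")
```

Note that the final, more positive message still yields the "calming" reply: past inferences persist in the profile and keep constraining the exchange, which is exactly the alienating loop the study warns about.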
Recursive digital selves underpin targeted advertising, predictive policing, and workplace monitoring. In each case, the opacity of algorithmic loops raises concerns about fairness, transparency, and the erosion of basic rights. The study highlights that these loops normalize a world where individuals are continuously shaped by hidden feedback mechanisms rather than open deliberation.
The ethical stakes, therefore, lie not only in what AI systems do but in how they reshape identity itself. The recursive construction of the digital human twin creates conditions where people’s futures are increasingly determined by machine-generated models rather than human choice.
What the ethics of “de-looping” proposes
To counter these risks, the study introduces the idea of an ethics of de-looping. This framework seeks to interrupt, slow, or make visible recursive processes in order to preserve autonomy and accountability. Rather than allowing machine learning systems to define human identity invisibly, de-looping would make recursive operations transparent and open to contestation.
The study frames this as a form of infra-ethics, focused not only on broad moral principles but also on the underlying infrastructures that shape human-machine interaction. By exposing how recursive loops operate, infra-ethics aims to empower individuals to resist harmful datafication, reclaim agency, and reshape the conditions under which digital selves are constructed.
This approach does not call for the wholesale rejection of AI systems. Instead, it advocates for a balanced model where human identity remains open, dynamic, and self-determined, even within machine-mediated contexts. De-looping emphasizes the need for accountability mechanisms in AI development, including rights to explanation, visibility, and redress when recursive processes harm individuals or communities.
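What might such accountability mechanisms look like in code? The following is a minimal sketch under the study's framing of de-looping as making recursive updates visible, interruptible, and contestable. AuditedTwin and its method names are assumptions for illustration, not a design proposed in the paper.

```python
from datetime import datetime, timezone

class AuditedTwin:
    def __init__(self) -> None:
        self.interests: dict[str, float] = {}
        self.audit_log: list[tuple[str, str, float]] = []  # visibility
        self.paused = False                                # interruptibility

    def observe(self, topic: str, engagement: float) -> None:
        if self.paused:
            return  # the user has interrupted the loop; nothing is folded back
        prior = self.interests.get(topic, 0.0)
        self.interests[topic] = 0.8 * prior + 0.2 * engagement
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append((stamp, topic, self.interests[topic]))

    def explain(self) -> list[tuple[str, str, float]]:
        # Right to explanation: every recursive update is inspectable.
        return list(self.audit_log)

    def contest(self, topic: str) -> None:
        # Right to redress: the user can strike an inference from the twin.
        self.interests.pop(topic, None)

twin = AuditedTwin()
twin.observe("sports", 0.9)
twin.paused = True          # slow or stop the loop
twin.observe("news", 0.4)   # ignored while paused
twin.contest("sports")      # contest a stored inference
print(twin.explain(), twin.interests)
```

The design choice to log before acting, and to let the subject pause or strike entries, is one way of turning the abstract rights the study names into operations a system must actually support.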
The study argues that embedding de-looping principles into governance frameworks could help prevent the normalization of recursive alienation. It calls for policymakers, technologists, and ethicists to recognize that identity is no longer just a philosophical question but a technological and political one.
First published in: Devdiscourse