AI 'ambiguity' not a flaw but an ethical necessity
A new study published in Big Data & Society challenges the widespread notion that ambiguity in artificial intelligence (AI) must be resolved. Titled “Artificial Intelligence and the Ethics of Navigating Ambiguity,” the research presents a compelling case for rethinking AI ethics by embracing ambiguity as a core and inevitable component of socio-technical AI practice.
The study reveals how ethical engagement in AI development emerges not from fixed frameworks or predictable outputs but from deeply entangled, iterative processes shaped by material constraints, creative intuition, and the navigation of power dynamics. The research turns to existentialist philosophy, particularly Simone de Beauvoir’s Ethics of Ambiguity, to argue for an ethics that recognizes and reflects the uncertainty, relationality, and moral complexity inherent to AI practice.
What is the role of ambiguity in contemporary AI practice?
The study exposes the myth of linear AI development pipelines. Contrary to dominant narratives that present AI systems as seamlessly structured, the empirical evidence demonstrates that AI development is inherently ambiguous and non-linear. Practitioners across various sectors, including academia, Big Tech, and startups, describe their work as a process of continual experimentation - one that involves frequent dead ends, trial-and-error, and limited predictive certainty.
Ambiguity in AI arises at multiple levels: ontological ambiguity (the difficulty of translating abstract phenomena into modelable data), epistemological ambiguity (uncertainty about how knowledge claims are validated), and application ambiguity (uncertainty about how models will function in real-world contexts). Practitioners often have to make qualitative judgments about data, resorting to intuition, creative heuristics, and resourceful workarounds to build models under constraints of limited hardware, time, or clean data.
Far from being fringe issues, these ambiguous experiences are central to AI development. Yet, existing governance frameworks often exclude these realities, preferring deterministic views of outputs and ethical checklists. By neglecting this ambiguity, AI ethics frameworks risk simplifying complex socio-technical entanglements and reproducing systemic inequities, especially when models are developed in materially unequal contexts with input from a narrow demographic of practitioners.
How do practitioners navigate ethical tensions amid constraints?
The study explores how AI practitioners actually confront ethical decisions in their day-to-day work. Ethics in this context is not derived from abstract principles but rather emerges from relational negotiations with the world, shaped by limitations and personal motivations.
Practitioners described relying on what the study terms “ethical heuristics” - informal, situational strategies to guide decision-making in ambiguous circumstances. These included perspective-taking (e.g., imagining one’s own family as end users), intuitive judgments formed through experience, and reflexive anticipation of harm. Rather than applying static notions like “fairness” in isolation, practitioners engaged with ethics as an embodied, evolving process, one that accounted for both their agency and their limits.
However, the study also reveals how ambiguity can diffuse moral responsibility. When AI outputs are abstracted from their development contexts, practitioners may feel detached from the consequences, especially in large-scale systems. For instance, some expressed unease about personalization projects or reinforcement learning models that segment users with little transparency. Others described the limitations of documentation, stating that it would be nearly impossible to capture all the contexts in which a model may succeed or fail.
Despite this, many practitioners demonstrated a desire to be ethically accountable, even while working in ambiguous and unpredictable environments. The study points out that this tension, between the desire to act ethically and the constraints of the system, is not a flaw to be eliminated but the very essence of moral deliberation. Drawing on de Beauvoir's philosophy, the study's author, Bennett, suggests that ethics begins precisely at the point where outcomes are uncertain, where decisions must be made despite conflicting values and material limitations.
Why should ethics frameworks recognize rather than resolve ambiguity?
The study calls for reframing the role of ambiguity in AI governance. By integrating existentialist ethics, particularly Simone de Beauvoir's view that morality is a constant struggle with ambiguity, the paper offers an alternative to solutionist models of AI ethics. De Beauvoir's framework positions ambiguity not as a failure but as the essence of ethical agency, rooted in relationality, situated knowledge, and moral freedom.
In AI contexts, this translates to recognizing that each decision, whether about data collection, model design, or implementation, impacts others’ freedom and access to resources or representation. Bennett argues that universalist notions of “social good” or “fairness” often erase these specific relational dynamics. For instance, efforts to make models fairer for some disabled people may reinforce the marginalization of others considered outliers. Similarly, delegating ethical reflection to technical outputs obscures the complex human judgments and negotiations that underpin them.
Most importantly, the study cautions against viewing ethical ambiguity as an issue that can be solved through more software or checklists. Instead, Bennett calls for ethics frameworks that acknowledge the creative, uncertain, and relational nature of AI work. This includes making space for reflection, dialogue, and situated deliberation, not just at the point of deployment, but throughout the design and development lifecycle.
Moreover, AI practitioners should not be treated as isolated agents solely responsible for ethical failures or successes. Their work is embedded in larger socio-technical systems shaped by power asymmetries, global inequalities, and institutional pressures. Governance models must account for these broader entanglements rather than simply demanding individual accountability for systemic challenges.
FIRST PUBLISHED IN: Devdiscourse