‘Healthy Distrust’ identified as key digital competency for navigating AI systems

A new interdisciplinary study challenges the dominant narrative that trust in artificial intelligence (AI) is always desirable. Instead, it calls for the cultivation of a critical, carefully reasoned stance the authors term “healthy distrust.” According to the study, this form of justified skepticism is not only legitimate but essential to preserving human autonomy in the face of rapid AI adoption.
The study, titled “Healthy Distrust in AI Systems” and published on arXiv in May 2025, is co-authored by researchers from Bielefeld University and Paderborn University in Germany. Drawing on insights from computer science, sociology, philosophy, and psychology, the paper argues that promoting healthy distrust is an ethical imperative when AI systems are deployed in socially embedded contexts that may harm, disadvantage, or manipulate users, even if those systems meet technical definitions of trustworthiness.
Why can distrust be just as important as trust in AI systems?
The study sharply critiques prevailing frameworks that emphasize building “trustworthy AI” through technical solutions such as fairness, transparency, robustness, and accountability. While acknowledging the value of these efforts, the authors argue that they are insufficient in real-world scenarios where social dynamics, institutional incentives, and power structures often distort how AI is used.
One example discussed is the use of AI-based risk scoring in insurance or criminal justice. Even if such systems comply with technical fairness metrics or have received regulatory approval, individuals subjected to them may reasonably feel distrust, especially when those systems are wielded by institutions whose interests are misaligned with those of the affected users. In such cases, distrust is not a failure of perception but a legitimate response to potential exploitation, opacity, or exclusion.
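To make concrete what "complying with technical fairness metrics" can mean in practice, the following minimal Python sketch checks one common metric, demographic parity, on made-up risk scores. It is purely illustrative and not taken from the study; the function name, scores, and group labels are all hypothetical.

```python
# Illustrative sketch only (not from the study): a scoring system can pass a
# simple fairness check such as demographic parity while still being deployed
# in ways users have good reason to distrust. All numbers are hypothetical.

def demographic_parity_gap(scores, groups, threshold=0.5):
    """Absolute difference in positive-decision rates between two groups."""
    rates = {}
    for g in set(groups):
        decisions = [s >= threshold for s, grp in zip(scores, groups) if grp == g]
        rates[g] = sum(decisions) / len(decisions)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Hypothetical risk scores produced by an insurer's model for two groups.
scores = [0.62, 0.48, 0.71, 0.55, 0.60, 0.47, 0.69, 0.53]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(scores, groups)
print(f"Demographic parity gap: {gap:.2f}")
# A small gap says nothing about whether the score's purpose, opacity, or
# appeal process actually serves the people being scored - which is the
# study's central point about the limits of technical trustworthiness.
```

The point of the sketch is that the metric evaluates only the model's outputs; it cannot register the institutional incentives or power asymmetries that, according to the authors, make distrust a reasonable stance.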
Rather than viewing trust and distrust as opposite ends of a single spectrum, the authors propose they can, and should, coexist. This nuanced perspective allows individuals to trust certain capabilities of an AI system (e.g., its accuracy in specific tasks) while simultaneously distrusting the broader context or purpose of its deployment. This dual stance is particularly relevant in domains such as healthcare, education, law enforcement, and social services, where automated decisions carry high stakes.
What is “healthy distrust” and how should it be cultivated?
“Healthy distrust” is defined in the study as a partially rational and partially affective stance - an instinct or intuition that something may be inappropriate or problematic in the way an AI system is designed, used, or embedded socially. It is not mere cynicism or anti-technology sentiment. Rather, it is framed as a virtue: a context-sensitive form of critical vigilance that helps users question the legitimacy of AI-mediated decisions, especially when those decisions are difficult to challenge or understand.
This conceptualization is grounded in both philosophical and psychological insights. Drawing from sociologist Niklas Luhmann, the paper distinguishes between “risk” (taken knowingly with some agency) and “danger” (to which individuals are subjected without control). Developers and institutions deploying AI take risks, while users often face dangers. In such asymmetrical relationships, blind trust can be dangerous. Healthy distrust becomes the only viable defense mechanism for those without power to opt out or fully understand the systems impacting their lives.
The authors caution that not everyone has equal access to the social, cultural, or technical resources needed to exercise healthy distrust. For marginalized groups - who disproportionately face algorithmic discrimination - distrust may emerge naturally from lived experience. Yet, without channels to act on that distrust (such as regulatory recourse or transparent system interfaces), it can result in frustration or disengagement rather than meaningful change.
The study also critiques the anthropomorphization of AI systems, warning that framing AI as “trustworthy” risks implying that these systems possess moral agency. This rhetorical framing can obscure the reality that AI is created and deployed by people and institutions who may have conflicting interests. Encouraging healthy distrust can counterbalance these narratives and help restore focus on accountability and human oversight.
What are the implications for AI governance and public education?
The paper's implications are profound for both policy and pedagogy. It challenges AI literacy programs to go beyond teaching how to use AI tools effectively and calls for a parallel emphasis on recognizing when and why their use may be problematic. According to the authors, healthy distrust should be taught as a critical digital competency alongside technical skills, particularly in societies where AI systems increasingly mediate access to education, healthcare, housing, and employment.
On the regulatory front, the study argues that a narrow fixation on designing "trustworthy" systems may inadvertently suppress legitimate dissent. Policymakers are urged to reframe their focus: instead of aiming to maximize public trust, they should aim to empower critical engagement and provide infrastructure that allows distrust to be meaningfully expressed, through opt-outs, appeals processes, and transparent auditing mechanisms.
Moreover, healthy distrust can help slow down premature AI deployments. The study suggests that resisting techno-solutionism is not Luddism; rather, it is a form of democratic participation. Especially in contexts involving mass surveillance, labor automation, or biometric profiling, encouraging scrutiny and debate about AI’s role is critical to ensuring that systems serve the public interest.
However, the authors acknowledge limitations. Healthy distrust requires a baseline of knowledge and agency that may not be uniformly distributed across populations. Furthermore, there’s a risk of it being co-opted or misunderstood, either weaponized to resist legitimate innovation or diluted into performative skepticism.