‘Problem of many hands’ shields AI developers from blame


As artificial intelligence systems rapidly expand their reach across industries, a new scholarly analysis raises urgent concerns about the neglected moral agency of AI developers. The study, titled “Excuses, Excuses: Moral Agency and the Professional Identity of AI Developers”, published in AI & Society in May 2025, argues that the prevailing culture among AI engineers permits ethical disengagement even as those engineers wield tremendous societal influence.

The research, authored by Tricia Griffin, Brian P. Green, and Jos V. M. Welie, examines the structural, philosophical, and organizational barriers that shield AI developers from moral accountability. Drawing on foundational ethical theories and an original analysis of developer perspectives, the study warns that developers have been relegated to a contradictory professional status: technically central but morally peripheral.

Are AI developers truly moral agents?

The study argues that AI developers, despite their transformative influence, are often excluded from ethical scrutiny. Unlike professionals such as surgeons or pilots who interact directly with those they affect, developers typically operate in isolation, disconnected from end users. This lack of proximity, the authors argue, reduces the social cues and institutional checks that traditionally reinforce moral responsibility.

Drawing on Alasdair MacIntyre’s theory of moral agency, the authors establish three critical criteria for developers to be considered moral agents: they must perceive themselves as more than their roles, engage with a moral community, and work in an environment conducive to ethical action. The study finds mixed results across all three dimensions. While some developers do reflect a strong sense of ethical identity and take principled stands, many feel constrained by organizational pressure, lack of peer support, or fear of retaliation for raising concerns.

Moreover, the analysis highlights a significant discrepancy between technical and ethical reasoning. Developers often demonstrate exceptional technical foresight, yet this does not translate into moral imagination, particularly in foreseeing and mitigating harmful secondary uses of the systems they build.

What excuses do developers use to avoid moral responsibility?

A major portion of the study dissects the “problem of many hands”, a concept introduced by political theorist Dennis Thompson that explains how responsibility gets diffused in complex systems. The authors identify four common excuses AI developers use to deflect moral responsibility: intentionality, ignorance, compulsion, and inevitability.

  • Intentionality: Developers often claim they can only be responsible for the intended use of their systems, not how others misuse them. Yet the study argues this is a logical fallacy; foreseeing misuse obligates at least preventive design measures.
  • Ignorance: Some developers plead lack of knowledge, arguing it is impossible to predict how AI systems will be deployed. However, the authors contend that developers, who select, organize, and refine data, are best positioned to anticipate misuse.
  • Compulsion: Developers often feel compelled by organizational directives to proceed with ethically questionable work. While real, this excuse ignores the discretionary authority developers often wield in choosing model parameters or tuning performance thresholds.
  • Inevitability: This rationale claims that harmful outcomes like bias or automation-related unemployment are unavoidable in progress. The authors refute this by emphasizing that acknowledging inevitability only reinforces the need for proactive mitigation, not absolution.

By documenting how these excuses are regularly invoked across industry and academic studies, the paper underscores the systemic erosion of moral accountability in software engineering. The authors warn that as long as these rationalizations persist, ethical failures will be recurrent and largely untraceable to individual choices.

What conditions enable or undermine moral agency in AI workplaces?

The study's final and perhaps most revealing segment assesses how developers understand and act upon their moral agency in practice. Drawing from 12 empirical studies, the authors categorize developers' perceptions of moral identity, ethical community, and workplace environment.

  • Moral Identity: While some developers show strong personal conviction, others express disinterest or disengagement from ethical questions. A notable subset admitted they had never seriously considered the societal impact of their code.
  • Moral Community: Developers widely express a desire for ethical dialogue but report limited access to trusted colleagues or support structures. Those in mature organizations with integrated quality management systems were more likely to seek peer input.
  • Moral Environment: Ethical engagement is often stifled by tight deadlines, opaque organizational policies, and power hierarchies. In some cases, developers resorted to passive resistance, such as intentionally producing subpar code, when asked to create unethical products. Others confessed to ethical fatigue, citing the futility of raising concerns that management would ignore.

The study reveals a pervasive mismatch between developers' ethical intentions and their ability to act on them. In environments lacking transparency and moral support, developers' practical reasoning is curtailed, leading to a compartmentalization of technical and ethical decision-making.

Implications: Toward Ethical Empowerment in AI Engineering

The authors conclude that the current professional culture surrounding AI development is morally unjustifiable. Despite being deeply implicated in the outcomes of their systems, developers are offered no reliable channels for ethical deliberation or recourse. The study advocates for immediate reforms across three domains:

  • Education: Universities must instill a deeper sense of moral responsibility in AI curricula, ensuring that developers understand the broader social contexts of their work.
  • Professional Standards: Industry associations should mandate ethical reflection as integral to technical publications and presentations, embedding ethics within the core of AI science rather than treating it as peripheral.
  • Organizational Reform: Employers should restructure work environments to support moral agency, including offering access to domain experts, ensuring transparency of use cases, and integrating ethical review throughout the development lifecycle.

The authors insist that AI ethics must stop treating developers as invisible hands behind the code and start holding them, and the structures around them, to account.

First published in: Devdiscourse