AI governance approach turns ethics into actionable questions


CO-EDP, VisionRI | Updated: 26-09-2025 22:33 IST | Created: 26-09-2025 22:33 IST

Artificial intelligence is increasingly mediating decisions that affect people’s daily lives, raising urgent concerns about how to protect human freedom and choice. A new study published in AI & Society proposes a practical governance approach that transforms ethical principles into concrete tools organizations can use to preserve autonomy throughout the AI lifecycle.

The research, “Asking the right questions: a governance approach to uphold human autonomy in artificial intelligence”, identifies 39 diagnostic questions designed to help developers, managers, and decision-makers detect autonomy risks early, assign responsibilities clearly, and integrate protections into the design and deployment of AI systems.

Why protecting autonomy is central to AI governance

Autonomy, as the study asserts, is not just an abstract ethical concern but a cornerstone of human rights and democratic life. AI systems, from recommender engines to predictive platforms, directly shape the conditions under which people make decisions. They can frame choices, filter what users see, and nudge behavior in ways that restrict free and informed decision-making.

The authors argue that traditional ethics guidelines often name autonomy but fail to show how it is affected in practice. By mapping the ways AI can erode self-determination, the study provides a framework that organizations can apply across different contexts and sectors. It defines autonomy as both the ability to be well-informed and the ability to make free choices, and it emphasizes that AI risks extend to both the conditions of decision-making and the actual exercise of autonomy.

Examples include systems that encourage passive reliance on outputs, exclude certain user groups from their scope, or present overly narrow recommendations. Such impacts, the study notes, undermine not only personal decision-making but also broader democratic participation by normalizing dependency on opaque algorithmic systems.

How the governance framework works in practice

The research is built around a structured set of 39 questions organized along four axes: organizational roles, design layers, impact dimensions, and decision stages. Together, these axes form a governance map that links abstract principles to concrete practices inside institutions.

On the organizational side, the framework divides responsibility across four roles: management, business, data, and UX/UI. Management is tasked with ensuring the framework is implemented; business defines requirements and intended uses; data teams handle quality, modeling, and metrics; and UX/UI teams ensure that interfaces respect user agency. By spreading responsibility, the framework prevents autonomy from being treated as a narrow technical issue and makes it a shared concern across the organization.

The design layer axis covers five levels: legitimacy, proportionality, framework, outcomes, and use. Each level addresses a different question, from whether the AI should exist for a given purpose, to whether its architecture and data are appropriate, to how outputs are validated, and finally to how people interact with the system in real-world conditions.

The impact dimensions distinguish between operational impacts, which alter the conditions of autonomy, and exercised impacts, which affect what users can actually do. Operational impacts include over-reliance, exclusion, opacity, and loss of privacy. Exercised impacts include deskilling, inadequate decision-making, diminished options, and loss of authenticity in personal actions.

Finally, the decision stages ensure that questions are asked at the right moment: building competence, being well-informed, forming the decision space, and making free decisions. This structure allows teams to spot when autonomy could be compromised, rather than treating it as an afterthought.
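The structure lends itself to a simple operational representation. The sketch below is illustrative rather than drawn from the paper: it models each diagnostic question as a record tagged along the four axes, so a team could filter the full set by responsible role or decision stage. The example questions and tag names are assumptions paraphrased from the descriptions above.

```python
from dataclasses import dataclass

# Illustrative axis values paraphrased from the article; the actual 39
# questions and their exact tags are defined in the paper itself.
ROLES = {"management", "business", "data", "ux_ui"}
LAYERS = {"legitimacy", "proportionality", "framework", "outcomes", "use"}
IMPACTS = {"operational", "exercised"}
STAGES = {"building_competence", "being_well_informed",
          "forming_decision_space", "making_free_decisions"}

@dataclass
class DiagnosticQuestion:
    text: str
    role: str        # organizational role responsible for answering
    layer: str       # design layer the question probes
    impact: str      # operational vs. exercised autonomy impact
    stage: str       # decision stage where the question applies

    def __post_init__(self):
        # Keep tags within the four axes described above.
        assert self.role in ROLES and self.layer in LAYERS
        assert self.impact in IMPACTS and self.stage in STAGES

# Two hypothetical entries, paraphrased from the article's examples.
QUESTIONS = [
    DiagnosticQuestion(
        "Can users challenge or reinterpret the system's outputs?",
        role="ux_ui", layer="use", impact="exercised",
        stage="making_free_decisions"),
    DiagnosticQuestion(
        "Is the data suited to the intended user population?",
        role="data", layer="framework", impact="operational",
        stage="being_well_informed"),
]

def questions_for(role: str) -> list[DiagnosticQuestion]:
    """Return the subset of questions a given role must answer."""
    return [q for q in QUESTIONS if q.role == role]
```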

The questions organizations must start asking

The diagnostic tool is built to surface concrete risks. It asks whether systems allow users to challenge or reinterpret outputs, whether data is suited to the intended users, and whether results are presented with uncertainty and diversity rather than with misleading certainty. It raises concerns about whether default settings or automated processes limit user agency, whether privacy is respected, and whether meaningful options exist for opting out.

Each “no” answer is not treated as a failure but as a signal requiring action. Clusters of “no” responses in specific areas highlight blind spots where autonomy may be most at risk. This approach helps organizations move beyond box-checking and instead treat autonomy as a practical design and governance goal.
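As a rough illustration of that signal-rather-than-score reading, the following snippet (hypothetical, not the authors' tooling) tallies "no" answers by design layer and flags any cluster above a threshold as a blind spot to investigate rather than a failing grade.

```python
from collections import Counter

# Hypothetical review results: (design layer, answered_yes) pairs. In
# practice each of the 39 questions would be tagged along all four axes,
# not just the design layer used here for brevity.
answers = [
    ("use", False), ("use", False), ("use", True),
    ("outcomes", True), ("framework", False), ("legitimacy", True),
]

def blind_spots(answers, threshold=2):
    """Flag axis values where 'no' answers cluster.

    A single 'no' is a signal requiring action, not a failure; clusters
    point to areas where autonomy may be most at risk.
    """
    no_counts = Counter(layer for layer, yes in answers if not yes)
    return [layer for layer, n in no_counts.items() if n >= threshold]

print(blind_spots(answers))  # -> ['use']
```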

The framework is adaptable to both traditional project cycles and agile methods. In agile environments, for example, the questions can be integrated into sprints and retrospectives. A central coordinator can track answers, escalate risks, and ensure that responsibilities are assigned to the correct role within the organization.
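A minimal sketch of that coordination loop, assuming a simple role-based routing rule rather than the paper's actual process, might look like this:

```python
def sprint_review(open_items):
    """Route each unresolved 'no' answer to its responsible role.

    open_items: list of (question_text, role) pairs still answered 'no'
    at the end of a sprint. Returns an escalation list per role for a
    central coordinator to track into the next sprint.
    """
    escalations = {}
    for question, role in open_items:
        escalations.setdefault(role, []).append(question)
    return escalations

# Hypothetical backlog items carried out of a sprint.
backlog = [("Can users opt out of automated processing?", "business"),
           ("Are outputs presented with uncertainty?", "ux_ui")]
print(sprint_review(backlog))
```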

By focusing on questions rather than static checklists, the authors argue, the tool becomes dynamic and flexible, suitable for diverse sectors such as healthcare, education, or public administration. It also complements other regulatory instruments, like the European Union's AI Act or ethical self-assessment tools such as ALTAI, by offering a more detailed and autonomy-focused perspective.

Implications and limitations of the approach

The authors are clear that the framework is a diagnostic tool, not a scoring mechanism. Its success depends on organizational maturity, cultural context, and the willingness of actors to engage with autonomy as a shared value. It requires coordination across management, technical teams, and design roles, as well as genuine accountability for decisions made during AI development and deployment.

The study states that its grounding in relational autonomy, the idea that autonomy is shaped by social and technological contexts, allows the framework to be adapted across settings. However, it also acknowledges limits: organizations must adapt the questions to local conditions, and the framework cannot by itself guarantee compliance or ethical behavior without broader institutional support.

Still, the authors position the approach as a critical step in moving from principle to practice. By asking the right questions at the right time, organizations can prevent AI from undermining human freedom, choice, and dignity.

FIRST PUBLISHED IN: Devdiscourse