Governance gap: Why AI can’t fully lead on climate and justice decisions

In the growing effort to decarbonize global economies and build sustainable futures, artificial intelligence is increasingly touted as a solution to complex organizational challenges. But new research published in Frontiers in Artificial Intelligence raises a cautionary flag: while AI can support rapid data analysis and optimize processes, it remains fundamentally limited in replicating human expertise, especially when addressing super-wicked problems like climate change.
The study, titled “Is AI a Functional Equivalent to Expertise in Organizations and Decision-Making? Opportunities and Pitfalls for AI in the Context of Just Transitions”, was authored by Marco Billi and Julio Labraña of the University of Chile. Their analysis argues that although AI can partially replicate certain technical functions of expertise, it cannot replace the adaptive, contextual, and socially legitimate decision-making that human experts bring to sustainability transitions.
Can AI truly replace human expertise in decision-making?
The study unpacks the notion of expertise not merely as technical proficiency but as a three-dimensional capability spanning factual, temporal, and social complexity. From this perspective, human experts perform a range of functions AI currently cannot: they interpret ambiguous data, contextualize decisions across time horizons, and navigate stakeholder legitimacy and trust.
AI systems, the authors argue, excel at recognizing patterns and processing large datasets at speed. However, these systems struggle with contextual adaptation. They often fail to interpret interdependent decision environments, such as the overlapping ecological, economic, and social systems affected by climate change, because they lack a broader understanding of meaning and consequence. For instance, while an AI tool might optimize energy usage in one sector, it may not anticipate knock-on effects in another without human interpretive oversight.
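The knock-on-effect point can be made concrete with a toy sketch. Nothing below comes from the study itself; the sectors and cost functions are invented to show how a single-sector optimizer can pick a setting that looks best locally but costs more system-wide.

```python
# Hypothetical illustration: single-sector optimization vs. system-wide cost.
# The sectors and cost functions are invented for this example.

def industry_cost(load_shift):
    """Energy cost for the industrial sector; cheapest at a shift of 8 units."""
    return (load_shift - 8) ** 2 + 10

def grid_stress_cost(load_shift):
    """Knock-on cost in a coupled sector: large shifts congest shared infrastructure."""
    return 0.5 * load_shift ** 2

candidates = range(0, 11)

# An AI tool optimizing only its own sector picks the industrial optimum...
sector_only = min(candidates, key=industry_cost)

# ...while a system-wide view (the human expert's frame) lands elsewhere.
system_wide = min(candidates, key=lambda s: industry_cost(s) + grid_stress_cost(s))

print(f"sector-only choice: shift {sector_only}, "
      f"total cost {industry_cost(sector_only) + grid_stress_cost(sector_only)}")
print(f"system-wide choice: shift {system_wide}, "
      f"total cost {industry_cost(system_wide) + grid_stress_cost(system_wide)}")
```

In this contrived setup, the sector-only optimizer settles on the shift that is cheapest for industry alone, while the combined objective prefers a smaller shift at lower total cost; the missing coupling term stands in for the interpretive oversight the authors say humans must supply.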
This issue is particularly acute in organizational settings, where decisions are rarely isolated. The research applies Social Systems Theory to show that organizations reduce complexity through structured decision-making rooted in institutionalized expertise. In such environments, AI cannot serve as a standalone decision-maker because it lacks institutional integration, communicative accountability, and the ability to legitimize its recommendations socially.
What is the role of AI in supporting just transitions?
The concept of a “just transition” requires not just ecological transformation but social inclusion, fairness, and equitable distribution of resources and burdens. In this context, Billi and Labraña argue that AI must be seen as an assistant to human expertise - not a substitute for it.
The study proposes a hybrid model where AI augments decision-making processes by extending the cognitive reach of human experts. In the factual dimension, AI can supply vast, cross-sectoral datasets and simulations to help human experts evaluate complex scenarios. In the temporal dimension, AI supports rapid-response tasks like energy optimization, while humans focus on long-term planning. In the social dimension, however, AI is weakest. It cannot establish legitimacy or negotiate between competing values and perspectives - tasks that remain squarely in the domain of human governance.
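Read as a workflow, this division of labor resembles a human-in-the-loop pipeline. The sketch below is one possible interpretation, not anything the authors implement; every name, and especially the placeholder review rule, is invented for illustration.

```python
# Hypothetical sketch of the hybrid model: AI extends experts' cognitive reach
# (factual and temporal dimensions), while approval and legitimation (the
# social dimension) stay with humans. All names are invented.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    projected_emissions: float  # factual dimension: model output
    horizon_years: int          # temporal dimension: planning horizon

def ai_generate_scenarios(baseline_emissions: float) -> list[Scenario]:
    """AI role: fast, data-heavy scenario generation (stand-in logic)."""
    return [
        Scenario("rapid electrification", baseline_emissions * 0.6, 10),
        Scenario("gradual retrofit", baseline_emissions * 0.8, 25),
    ]

def human_review(scenarios: list[Scenario]) -> Scenario:
    """Human role: weigh legitimacy, fairness, and long-term trade-offs.
    A one-line rule stands in here for deliberation among stakeholders."""
    return max(scenarios, key=lambda s: s.horizon_years)

scenarios = ai_generate_scenarios(baseline_emissions=100.0)
decision = human_review(scenarios)  # the decision is made, and owned, by people
print(f"adopted scenario: {decision.name}")
```

The design point is the boundary: the AI step can be swapped for an arbitrarily sophisticated model, but the function that commits the organization to a course of action remains a human one.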
This hybridization, the authors argue, is essential for climate governance. The dynamics of climate change demand decisions that are context-sensitive and forward-looking. AI can aid in gathering evidence and projecting scenarios but must operate under the guidance of human experts who understand the political, ethical, and institutional contexts in which those decisions are made.
What are the limits and risks of AI in sustainability governance?
Despite AI’s potential, the study emphasizes that over-reliance on AI could backfire in governance environments. For example, machine learning systems trained on biased or incomplete data could reinforce existing inequalities, especially if used uncritically in environmental planning or climate adaptation policy.
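A fabricated toy example shows the mechanism. If a model learns to allocate adaptation funding in proportion to historical spending, it faithfully reproduces whatever skew that history contains, even when present-day risk points the other way; the districts and numbers below are made up.

```python
# Fabricated data: historical adaptation spending that under-served district C.
# A naive model trained to mimic this history reproduces the inequality.

historical_spending = {"district_A": 50.0, "district_B": 40.0, "district_C": 10.0}
actual_flood_risk   = {"district_A": 0.2,  "district_B": 0.3,  "district_C": 0.5}

budget = 100.0
total_past = sum(historical_spending.values())
total_risk = sum(actual_flood_risk.values())

# "Learned" allocation: proportional to past spending (the biased signal).
learned = {d: budget * s / total_past for d, s in historical_spending.items()}

# Risk-based allocation: what an equity-aware planner might target instead.
risk_based = {d: budget * r / total_risk for d, r in actual_flood_risk.items()}

for district in historical_spending:
    print(f"{district}: learned {learned[district]:.0f} "
          f"vs. risk-based {risk_based[district]:.0f}")
```

The highest-risk district receives the least funding under the learned rule, not because the model malfunctions but because it does exactly what it was trained to do.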
Another risk lies in what the authors describe as the “opacity problem.” Many AI systems function as “black boxes,” producing outputs that cannot be easily explained or interrogated. This undermines trust and accountability - both critical in public decision-making. Unlike human experts, who can justify and take responsibility for their decisions, AI systems offer no such assurances.
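The contrast can be shown with a hypothetical pair of scoring functions, neither drawn from the study: one returns a reason alongside its score, the other only a number.

```python
# Hypothetical contrast: a justifiable rule vs. an inscrutable score.
# Both functions and their weights are invented for illustration.

import math

def transparent_priority(flood_risk: float, population: int) -> tuple[float, str]:
    """A rule a human expert can state, defend, and answer for publicly."""
    pop_share = min(population / 100_000, 1.0)
    score = 0.7 * flood_risk + 0.3 * pop_share
    reason = f"0.7 x flood risk ({flood_risk}) + 0.3 x population share ({pop_share:.2f})"
    return score, reason

def opaque_priority(flood_risk: float, population: int) -> float:
    """Stand-in for a black-box model: a score with no narratable rationale."""
    return abs(math.sin(flood_risk * 12.9898 + population * 78.233))

score, reason = transparent_priority(flood_risk=0.5, population=80_000)
print(f"transparent: {score:.2f}, because {reason}")
print(f"opaque:      {opaque_priority(0.5, 80_000):.2f}, because ...?")
```

The opaque function may even score “better” on some benchmark, but a public body asked to defend its output has nothing to point to - which is precisely the accountability gap the authors flag.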
Moreover, AI lacks the social mechanisms that grant authority to human expertise. Professional networks, academic credentials, and institutional affiliations legitimize human experts in ways that AI systems cannot replicate. This absence of social embedding makes AI unsuitable for high-stakes decisions that require public buy-in and multi-stakeholder negotiation.
Finally, in terms of education and training, the study recommends recalibrating AI literacy programs to equip decision-makers with both technical and critical skills. Understanding how to use AI effectively, and when not to use it, is seen as a cornerstone for fostering just transitions.
First published in: Devdiscourse