Why AI must be governed as a complex system, not just as a regulated technology

Artificial intelligence (AI) is rapidly redefining the landscape of international governance, offering both transformative potential and systemic risks. A new study titled “Artificial Intelligence, Complexity, and Systemic Resilience in Global Governance”, published in Frontiers in Artificial Intelligence by Andrés Ilcic, Miguel Fuentes, and Diego Lawler, presents a deep analysis of how complexity science can serve as a conceptual foundation for integrating AI into resilient, inclusive governance models. The authors argue that without rethinking governance architectures through the lens of complexity, AI could reinforce inequality, fragment global cooperation, and undermine systemic resilience.

The paper positions AI not merely as a technological tool but as a radical socio-technical force that interacts dynamically with political institutions, economic systems, and ethical frameworks. By conceptualizing AI as a complex adaptive system, the study emphasizes the need for global governance models that are modular, participatory, and robust enough to adapt under uncertainty.

Can AI enhance systemic resilience or undermine it?

The study begins with a sobering premise: while AI holds the promise to enhance decision-making, crisis management, and resource allocation, its integration into global systems is far from neutral. The authors note that systemic resilience, defined as a system’s capacity to absorb shocks, adapt, and reorganize without collapsing, has become an urgent necessity in the face of escalating global challenges. Yet, when driven by market forces alone or narrow state interests, AI deployment often exacerbates rather than alleviates structural inequalities.

The paper critiques dominant narratives that portray AI as inherently resilience-enhancing. It underscores that AI’s benefits, such as predictive analytics for disaster response or epidemic tracking, cannot be isolated from its risks. These include algorithmic bias, privacy violations, data monopolies, and geopolitical tensions arising from the so-called "AI arms race." Crucially, AI systems can marginalize less powerful actors in international governance by reinforcing epistemic asymmetries, especially when private corporations or technologically advanced nations dominate decision-making.

Instead of framing resilience as a technical fix, the authors propose it as a “boundary concept” that bridges science, ethics, and governance. This perspective draws from complexity science to argue that effective resilience requires adaptive feedback loops, stakeholder inclusion, and institutional modularity - attributes rarely present in existing global regulatory regimes.

What does it mean to govern AI as a complex system?

The study's core proposition is that AI should be governed as a complex adaptive system. Traditional governance models, whether state-centric or market-led, are based on linear assumptions and fail to respond to the non-linear, emergent dynamics that define AI’s integration into society. Through the lens of complexity science, the authors advocate for a shift toward polycentric governance, where multiple decision-making centers operate semi-autonomously but remain interconnected through shared values and collaborative structures.

The authors highlight how boundary concepts like resilience can be operationalized to connect siloed disciplines and institutional actors. These concepts facilitate epistemic translation across domains, enabling a shared language for addressing the cascading effects of AI. For instance, real-time data sharing enabled by AI enhances crisis response but simultaneously exposes systems to cascading cyber threats. The ability to navigate these boundary tensions - between openness and control, and between adaptability and stability - is central to governing AI responsibly.
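To make that openness-versus-exposure trade-off concrete, here is a minimal, hypothetical sketch in Python - not drawn from the paper - using a simple independent-cascade model: each compromised node in a data-sharing network infects each healthy neighbour with a fixed probability, so denser linkage speeds coordination but also lets a single breach travel further. The network builder, the spread_prob parameter, and the node counts are illustrative assumptions.

```python
# Minimal, illustrative independent-cascade sketch (not from the paper).
# Nodes stand in for data-sharing institutions; each newly compromised node
# infects each healthy neighbour once, with probability spread_prob, so
# denser networks coordinate faster but also carry breaches further.
import random

def random_network(n_nodes, link_prob, seed=0):
    """Build an undirected Erdos-Renyi-style graph as an adjacency dict."""
    rng = random.Random(seed)
    neighbours = {i: set() for i in range(n_nodes)}
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < link_prob:
                neighbours[i].add(j)
                neighbours[j].add(i)
    return neighbours

def cascade_size(neighbours, seed_node, spread_prob=0.25, seed=1):
    """Run one cascade from seed_node; return how many nodes are compromised."""
    rng = random.Random(seed)
    failed = {seed_node}
    frontier = [seed_node]
    while frontier:
        nxt = []
        for node in frontier:
            for nbr in neighbours[node]:
                if nbr not in failed and rng.random() < spread_prob:
                    failed.add(nbr)
                    nxt.append(nbr)
        frontier = nxt
    return len(failed)

if __name__ == "__main__":
    for link_prob in (0.02, 0.05, 0.10):   # sparse -> dense data sharing
        net = random_network(200, link_prob)
        size = cascade_size(net, seed_node=0)
        print(f"link_prob={link_prob:.2f} -> breach reaches {size} of 200 nodes")
```

An independent-cascade model is only one of the simplest ways network analysis captures cascading risk, but the same adjacency structure could equally support the threshold or contagion models more common in the complexity literature.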

The study critiques both hierarchical state regulation, which tends to be slow and epistemically narrow, and laissez-faire market models that privilege information hoarding and shareholder primacy. Instead, it points to the need for hybrid governance models that build in redundancy, scenario planning, and institutional learning - tools already common in ecological and climate policy, but rarely applied to digital systems.

AI governance, the authors argue, is a “wicked problem” par excellence. It is marked by epistemic uncertainty, cross-sectoral complexity, and conflicting stakeholder values. Agent-based modeling, network analysis, and soft systems methodologies are among the tools suggested to understand and navigate these dynamics. The paper underscores that AI is not just a target of complexity-informed governance - it is also a tool for managing complexity itself.
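As a flavour of what agent-based modeling can offer here, the following is a deliberately simple, hypothetical sketch - again not taken from the study. A handful of governance "centres" each carry a capacity score, localized shocks erode it, and centres recover faster when linked peers remain healthy, so more peer links, a crude stand-in for polycentric coordination, yield higher average system capacity. The simulate helper and all parameter values are illustrative assumptions.

```python
# Toy agent-based sketch of polycentric adaptation under repeated shocks
# (an illustrative assumption, not the study's model). Each "centre" holds a
# capacity score in [0, 1]; localized shocks knock it down; centres recover a
# little on their own and faster when their linked peers are still healthy.
import random

def simulate(n_centres=10, n_links=2, steps=200, shock_prob=0.1, seed=42):
    rng = random.Random(seed)
    capacity = [1.0] * n_centres
    # Each centre is linked to a few random peers (its polycentric neighbourhood).
    peers = [rng.sample([j for j in range(n_centres) if j != i], n_links)
             for i in range(n_centres)]
    history = []
    for _ in range(steps):
        for i in range(n_centres):
            if rng.random() < shock_prob:              # localized shock
                capacity[i] = max(0.0, capacity[i] - rng.uniform(0.3, 0.7))
            peer_health = (sum(capacity[j] for j in peers[i]) / len(peers[i])
                           if peers[i] else 0.0)
            # Recovery: baseline self-repair plus support proportional to peer health.
            capacity[i] = min(1.0, capacity[i] + 0.05 + 0.05 * peer_health)
        history.append(sum(capacity) / n_centres)
    return history

if __name__ == "__main__":
    for links in (0, 2, 5):
        history = simulate(n_links=links)
        print(f"{links} peer links -> mean system capacity "
              f"{sum(history) / len(history):.2f}")
```

Even a toy of this kind illustrates the intuition behind modular, polycentric designs: shocks stay local while recovery can draw on healthy neighbours rather than waiting on a single central authority.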

Are existing global frameworks sufficient to address AI’s risks?

In its final section, the paper evaluates current regional AI governance frameworks using a complexity-informed “resilience analysis.” The European Union’s AI Act is praised for its risk-based approach and for embedding data privacy as a foundational principle. This legislation, combined with the General Data Protection Regulation (GDPR) and the forthcoming Data Act, exemplifies efforts to encode resilience and inclusivity into regulatory architecture.

By contrast, the United States is characterized by regulatory fragmentation and ideological swings between administrations. Trump’s 2025 Executive Order, which replaces the Biden-era AI oversight framework, emphasizes deregulation and national competitiveness. This approach, according to the study, risks weakening global cooperation and eroding ethical standards in favor of short-term market gains.

China’s model, meanwhile, balances tight state control with rapid innovation, creating yet another paradigm that prioritizes sovereignty and economic growth over transparency and inclusiveness. These divergent approaches, the authors contend, illustrate a broader problem: global AI governance remains highly fragmented, shaped more by geopolitical competition than by shared human values.

The authors call for a pivot toward modular, adaptive, and participatory institutional design. They draw from polycentric governance theory and cybernetics to suggest that institutions should evolve in tandem with technological change. Governance structures must incorporate epistemic pluralism, enabling diverse perspectives and knowledge systems to influence policy. This includes empowering marginalized communities to shape AI systems that directly affect them.

Ultimately, the study asserts that resilience-oriented AI governance must be value-driven. Transparency, equity, and accountability are not optional add-ons but foundational principles that must be embedded from the ground up. The authors advocate for a proactive epistemological shift - one that understands AI as a constitutive force in social life, and not merely a passive tool. This demands new metrics, new institutions, and new norms capable of responding to a digital future in flux.

First published in: Devdiscourse