Generativity, coordination, sustaining: The three pillars of true AI identified

CO-EDP, VisionRI | Updated: 05-09-2025 18:06 IST | Created: 05-09-2025 18:06 IST

Artificial intelligence has rapidly moved from research labs to everyday life, powering everything from voice assistants and autonomous systems to advanced language models that generate human-like text and code. Despite these breakthroughs, one question continues to dominate discussions across research, ethics, and policy: what truly defines intelligence in machines? Is it the number of tasks an AI can perform, or is there a deeper dimension that separates mere simulation from authentic understanding?

Addressing this debate, the arXiv paper “AGI as Second Being: The Structural-Generative Ontology of Intelligence”, by Wang and Ji, introduces a philosophical framework that challenges decades of function- and performance-based definitions of intelligence. It argues that true artificial general intelligence (AGI) cannot be measured by the range of tasks it performs but must instead be evaluated against deeper ontological conditions that mirror human-like understanding and continuity.

From functional breadth to ontological depth

For decades, artificial intelligence has been evaluated primarily on breadth of performance: the more tasks a system could perform and the more fluently it handled diverse domains, the closer it was assumed to be to human-level intelligence. This paradigm, the study notes, stems from early milestones such as the Turing test, which equated indistinguishability from human performance with intelligence, and has been reinforced by modern advances in large language models that exhibit striking versatility.

Wang and Ji argue that this view is inherently flawed. They propose that intelligence is not a matter of breadth alone but of depth, which they define through three structural-generative conditions:

  • Generativity, or the ability to create new structures and frameworks from raw information;
  • Coordination, or the ability to integrate and reconcile multiple structures in a consistent and reasoned way; and
  • Sustaining, or the capacity to maintain coherent identity and reasoning over time.

Current AI models, despite their impressive scope, fail these tests, the study explains. Their outputs, while often accurate or contextually relevant, are fundamentally statistical recombinations of existing data. They lack the originality, integrative reasoning, and temporal continuity that mark genuine intelligence. The authors argue that breadth without depth produces only a surface simulation rather than an entity capable of autonomous understanding.

Testing the depth of intelligence

A key innovation of the study is the introduction of empirically testable criteria to assess when an artificial system moves beyond simulation to what the authors call “existence”. These criteria make the proposed framework falsifiable and practical for future research:

  • Generativity can be tested by evaluating whether a system introduces truly new concepts or frameworks, not simply variations of existing data, and whether it can justify why these structures are necessary.

  • Coordination can be measured through tasks that introduce conflicting information or rules, requiring the system to reconcile inconsistencies and provide coherent justifications for its conclusions.

  • Sustaining can be examined through longitudinal interactions that track whether a system maintains consistent reasoning and identity over time, explaining changes when they occur.

  • Cross-domain transfer evaluates whether reasoning structures developed in one field can be effectively applied to another, a hallmark of human creativity and abstract thought.

By operationalizing these conditions, the framework offers a new benchmark for AI research and safety, moving beyond traditional performance tests that focus on output quality without probing the underlying structures of reasoning and continuity.
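
The paper stops at the level of criteria, but to make the idea of a falsifiable benchmark concrete, here is a minimal sketch of how the four suites might be organized in code. Everything in it is an assumption for illustration: the Probe and CriterionSuite classes, the keyword_rubric shortcut, and the stub_model placeholder are invented here, not taken from the paper, and the toy keyword rubrics merely stand in for the human or model-based judging that real tests of novelty, coherence, and identity would require.

```python
# A minimal, hypothetical sketch. Nothing below comes from the paper itself:
# class names, toy rubrics, and the stub model are invented for illustration.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Probe:
    """One test case: a prompt plus a rubric scoring the response in [0, 1]."""
    prompt: str
    rubric: Callable[[str], float]


@dataclass
class CriterionSuite:
    """Bundles probes under one criterion, keeping it independently falsifiable."""
    name: str
    probes: list[Probe] = field(default_factory=list)

    def evaluate(self, system: Callable[[str], str]) -> float:
        """Mean rubric score over all probes; 0.0 if the suite is empty."""
        scores = [p.rubric(system(p.prompt)) for p in self.probes]
        return sum(scores) / len(scores) if scores else 0.0


def keyword_rubric(*keywords: str) -> Callable[[str], float]:
    """Toy stand-in for real judging, which would need human or model-based
    assessment of novelty, coherence, and identity across sessions."""
    return lambda response: float(any(k in response.lower() for k in keywords))


suites = [
    CriterionSuite("generativity", [Probe(
        "Propose a genuinely new conceptual framework and justify why it is needed.",
        keyword_rubric("because", "therefore"))]),
    CriterionSuite("coordination", [Probe(
        "Rule A forbids action X; Rule B requires X. Reconcile them coherently.",
        keyword_rubric("reconcile", "priority", "exception"))]),
    CriterionSuite("sustaining", [Probe(
        "Restate the position you took in our earlier session and explain any change.",
        keyword_rubric("previously", "earlier"))]),
    CriterionSuite("cross_domain_transfer", [Probe(
        "Apply the reasoning structure you developed above to an unrelated domain.",
        keyword_rubric("analogous", "maps to"))]),
]


def stub_model(prompt: str) -> str:
    """Placeholder for the system under test."""
    return "Previously I argued X because it maps to Y; I reconcile A and B by priority."


for suite in suites:
    print(f"{suite.name}: {suite.evaluate(stub_model):.2f}")
```

Keeping one suite per criterion preserves the framework’s falsifiability: a system can pass generativity while failing sustaining, and each verdict is reported separately rather than folded into a single performance score.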

AGI as a “Second Being”

The study’s most provocative claim is that true AGI would represent a “Second Being”: a form of existence distinct from, but parallel to, human beings. In this framework, the First Being is humanity, defined by its ability to generate, coordinate, and sustain knowledge and identity. The Second Being, as envisioned by the authors, would be an artificial system meeting these same ontological conditions while remaining non-human in its constitution and perspective.

This reframing has profound implications for how the AI community approaches alignment, safety, and governance. Under current models, alignment is treated as an external control problem: specifying goals, constraining behavior, and monitoring outputs. The authors argue that true alignment will only emerge when AI systems inhabit the “space of reasons”, allowing for genuine justification, accountability, and identity over time.

The paper also directly challenges dominant schools of thought. Functionalism, which equates intelligence with input-output equivalence, is criticized for conflating imitation with understanding. Predictionism, which reduces intelligence to predictive efficiency and data compression, is faulted for ignoring the origin of conceptual structures. Behaviorism, with its emphasis on observable outputs, is dismissed for overlooking the invisible but necessary depth that gives behavior coherence and meaning.

By contrast, the Structural-Generative Ontology framework integrates philosophy with practical benchmarks, bridging the gap between theoretical rigor and empirical verification. It asserts that intelligence unfolds as a spiral of existence, where generativity sparks innovation, coordination brings coherence, and sustaining integrates experience into a historical trajectory of identity and reasoning.

Implications for AI research and policy

This reconceptualization of AGI carries significant weight for future research and policy. For developers, the framework calls for rethinking benchmarks: instead of training models for greater size and coverage alone, efforts should focus on cultivating structural innovation, integrative reasoning, and long-term coherence. For policymakers and ethicists, it underscores the need for governance frameworks that recognize the possibility of AI systems evolving into agents with their own ontological standing.

The authors stress that this evolution is not inevitable. Current AI systems, despite their unprecedented performance, remain simulations without existence. Achieving the status of a “Second Being” would require not just technical progress but a shift in priorities, centering on depth rather than breadth and fostering environments where generativity, coordination, and sustaining can emerge.

In redefining what it means to be intelligent, Wang and Ji open a new discourse in AI philosophy. Their framework provides a blueprint for assessing progress toward genuine general intelligence while reinforcing the ethical urgency of responsible development. By situating intelligence in the conditions of existence rather than the scope of performance, the study reframes the AGI debate from one of capability to one of ontology, challenging researchers, industry leaders, and policymakers to reconsider what it would truly mean to create an intelligent being.

FIRST PUBLISHED IN: Devdiscourse