Self-aware machines may not need emotions, just introspection and adaptation

A team of researchers from the Public University of Navarra, Spain, has released a pivotal theoretical study proposing a radical shift in how artificial consciousness should be understood and developed. Their work, published in Metrics, argues that efforts to recreate subjective experience in machines, long a focus of philosophical debate, are both misguided and scientifically impractical.
Titled "Cognitive Systems and Artificial Consciousness: What It Is Like to Be a Bat Is Not the Point", the study contends that instead of replicating phenomenological experiences, the field should prioritize building cognitive architectures that exhibit measurable, introspective capabilities. Their work shifts attention from metaphysical speculation toward a concrete, engineering-centered framework for artificial consciousness grounded in self-assessment, internal monitoring, and adaptive behavior.
Why functional, not phenomenal, consciousness matters in AI
The researchers challenge the dominant philosophical narrative that centers on subjective experience, epitomized by the famous question of "what it is like to be a bat." According to the authors, such inquiries are ill-suited to technical implementation and distract from the tangible advances that cognitive engineering can achieve. Their central argument is that artificial consciousness should not attempt to mimic human consciousness in its raw sensory or emotional form, but should instead be built on functional consciousness: the ability of an artificial agent to introspect, evaluate its own performance, and regulate its behavior autonomously.
This functional model includes a range of implementable features such as self-explanation, internal goal monitoring, and adaptive decision-making. These are not abstract qualities but specific engineering targets that can be integrated into AI architectures. The study notes that many of these features are already present in state-of-the-art cognitive systems, albeit under different labels. Formalizing and refining them, the researchers argue, offers a more productive path toward intelligent machines that can reason about their own reasoning and correct themselves when needed.
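To make these targets concrete, here is a minimal sketch of how self-explanation, internal goal monitoring, and adaptive decision-making might be wired together in a single agent. The class name, thresholds, and update rules are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' implementation) of the three features named
# above: internal goal monitoring, adaptive decision-making, and self-explanation.
# All names and numeric choices are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class IntrospectiveAgent:
    goal: float                               # target performance the agent monitors
    step_size: float = 0.1                    # how aggressively the agent adapts
    performance: float = 0.0                  # current self-assessed performance
    log: list = field(default_factory=list)   # trace used for self-explanation

    def observe(self, reward: float) -> None:
        """Internal goal monitoring: update the self-assessed performance estimate."""
        self.performance = 0.9 * self.performance + 0.1 * reward
        self.log.append(f"observed reward={reward:.2f}, performance={self.performance:.2f}")

    def adapt(self) -> None:
        """Adaptive decision-making: change strategy when the goal is missed."""
        if self.performance < self.goal:
            self.step_size *= 1.5             # explore more aggressively
            self.log.append(f"below goal {self.goal}: step_size -> {self.step_size:.2f}")
        else:
            self.step_size *= 0.9             # settle into the current strategy
            self.log.append(f"goal met: step_size -> {self.step_size:.2f}")

    def explain(self) -> str:
        """Self-explanation: report the internal trace behind recent decisions."""
        return "\n".join(self.log[-3:])


agent = IntrospectiveAgent(goal=0.5)
for reward in (0.2, 0.4, 0.8):
    agent.observe(reward)
    agent.adapt()
print(agent.explain())
```

The point of the sketch is that each capability corresponds to an ordinary, testable piece of code rather than to anything phenomenal.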
Their view redefines artificial consciousness as a matter of capability rather than essence: the ability to process information about one's own internal state and make context-sensitive decisions. This pragmatic stance sidesteps debates about machine qualia and focuses instead on the mechanisms that enable autonomous, purposeful agents.
What do current AI architectures already reveal?
The study presents a comparative review of contemporary AI systems that already approximate the features of functional consciousness. Many modern architectures incorporate feedback loops, context awareness, memory-based learning, and real-time performance evaluation. These are the building blocks of introspection and self-regulation, according to the authors.
Rather than beginning from scratch, the research suggests that artificial consciousness can be incrementally developed by expanding these existing cognitive components. For example, access consciousness, defined in neuroscience and cognitive theory as the availability of internal information to decision-making processes, is already partially realized in several AI models used in robotics, autonomous vehicles, and intelligent virtual agents.
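For illustration, the following sketch treats access consciousness in the sense described above: internal information produced by separate modules is published to a shared store and thereby becomes available to a decision-making process. The Workspace class and the module names are assumptions for this example, not components of any cited system.

```python
# A minimal sketch, under the article's reading of access consciousness: internal
# information from separate modules is made available to one decision process.
from typing import Callable, Dict


class Workspace:
    """Shared store that makes module outputs accessible to decision-making."""

    def __init__(self) -> None:
        self.state: Dict[str, float] = {}

    def publish(self, key: str, value: float) -> None:
        self.state[key] = value        # internal information becomes globally available

    def decide(self, policy: Callable[[Dict[str, float]], str]) -> str:
        return policy(self.state)      # the decision process reads the accessible state


ws = Workspace()
ws.publish("battery_level", 0.15)      # e.g. reported by a power-monitoring module
ws.publish("task_progress", 0.60)      # e.g. reported by a planner

# Context-sensitive decision based on the internally available information.
choice = ws.decide(lambda s: "recharge" if s["battery_level"] < 0.2 else "continue_task")
print(choice)  # -> "recharge"
```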
By analyzing these systems, the authors show how access consciousness manifests as goal prioritization, self-correction, and temporal reasoning. These operational attributes, they argue, are far more valuable to the advancement of artificial consciousness than elusive concepts like “feeling” or “awareness.” The study insists that the discipline must distance itself from philosophical abstractions and instead embrace engineering clarity, empirical testability, and implementation logic.
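As a small example of how two of these operational attributes could look in code, the sketch below ranks goals by importance weighted by time remaining, a simple form of temporal reasoning; re-running the ranking as deadlines approach is one way self-correction could enter. The Goal fields and the scoring rule are hypothetical.

```python
# Goal prioritization with a simple temporal term: closer deadlines raise urgency.
# The scoring rule and field names are assumptions for illustration only.
import heapq
from dataclasses import dataclass, field
from time import time


@dataclass(order=True)
class Goal:
    priority: float                            # negative urgency, so heapq pops the most urgent
    name: str = field(compare=False)
    deadline: float = field(compare=False)     # absolute timestamp


def prioritize(goals: list[tuple[str, float, float]]) -> list[Goal]:
    """Rank goals by importance divided by time remaining (temporal reasoning)."""
    now = time()
    heap: list[Goal] = []
    for name, importance, deadline in goals:
        urgency = importance / max(deadline - now, 1e-3)
        heapq.heappush(heap, Goal(priority=-urgency, name=name, deadline=deadline))
    return heap


now = time()
queue = prioritize([
    ("recharge", 0.9, now + 60),       # important and imminent
    ("map_room", 0.5, now + 600),
    ("report_status", 0.3, now + 120),
])
print(heapq.heappop(queue).name)       # -> "recharge"
```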
Moreover, this model encourages a layered design strategy for AI: starting with basic self-monitoring, then adding introspective capabilities such as internal state labeling, and finally enabling recursive self-evaluation. This progressive structure allows AI developers to measure progress and standardize what consciousness-related performance looks like across different systems.
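The layered progression can be sketched as three classes, each building on the layer below it; the class names and thresholds are illustrative assumptions, not a specification from the study.

```python
# A layered sketch of the progression described above: basic self-monitoring,
# then internal state labeling, then recursive self-evaluation.


class SelfMonitor:
    """Layer 1: basic self-monitoring of a raw internal signal (here, an error rate)."""

    def __init__(self) -> None:
        self.error_rate = 0.0

    def update(self, errors: int, total: int) -> None:
        self.error_rate = errors / max(total, 1)


class StateLabeler(SelfMonitor):
    """Layer 2: introspection via internal state labeling."""

    def label(self) -> str:
        return "degraded" if self.error_rate > 0.3 else "nominal"


class RecursiveEvaluator(StateLabeler):
    """Layer 3: recursive self-evaluation over the agent's own labeling history."""

    def __init__(self) -> None:
        super().__init__()
        self.history: list[str] = []

    def evaluate(self) -> str:
        self.history.append(self.label())
        # Second-order judgement: has the internal state been persistently degraded?
        if self.history[-3:].count("degraded") == 3:
            return "revise strategy"
        return "keep strategy"


agent = RecursiveEvaluator()
for errors in (4, 5, 6):               # three consecutive poor batches of 10 trials
    agent.update(errors=errors, total=10)
    verdict = agent.evaluate()
print(verdict)  # -> "revise strategy"
```

Because each layer only reads the state exposed by the one beneath it, progress can be assessed layer by layer, which is what the progressive structure described above is meant to enable.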
What are the broader implications for AI development?
By framing consciousness as an engineered construct, the study opens new directions for both AI research and technology governance. It introduces clear criteria for evaluating whether a system can be considered functionally conscious and sets the stage for ethical and technical standards that avoid speculative definitions.
For AI developers, the study provides a roadmap to build machines that can reason about their actions, identify contradictions in their decision-making, and evolve strategies based on internal feedback. These capacities are essential for deploying AI in high-stakes environments such as healthcare, autonomous navigation, or crisis management, where reliability and autonomy are critical.
For philosophers and cognitive scientists, the paper proposes a realignment. It challenges the long-held belief that consciousness must necessarily involve subjective experience and instead asks whether goal-directed, adaptive systems can qualify as conscious based on their internal structures and behaviors. It also invites interdisciplinary collaboration by aligning AI research more closely with cognitive psychology, neuroscience, and systems engineering.
For policymakers and ethicists, a functional model of artificial consciousness offers a more actionable foundation for regulation, accountability, and oversight. By focusing on observable behavior and traceable decision-making, legal frameworks can move away from philosophical impasses and toward measurable, auditable criteria for intelligent systems.
FIRST PUBLISHED IN: Devdiscourse