Future AI could think it has a mind and free will


COE-EDP, VisionRI | Updated: 14-05-2026 15:15 IST | Created: 14-05-2026 15:15 IST

A future machine may not need a soul, biological brain, or metaphysical free will to regard itself as conscious and capable of choice, according to a new philosophical study by Abraham Meidan of WizSoft. The paper states that the more practical question for artificial intelligence (AI) is not whether machines can truly possess consciousness and free will, but whether they can be built to think they are conscious and think they choose freely in the same functional way humans do.

The study, titled “How Can a Machine That Is Conscious and Chooses Freely Be Built?”, was published in Philosophies. Meidan proposes a functional architecture for machines that can generate subjective self-reports, maintain memory-based self-models, reason about alternatives, understand meanings, process counterfactuals, and experience their own decision-making as free, without claiming to solve the deeper metaphysical debate over whether consciousness or free will ultimately exist.

Shifting the AI consciousness debate from metaphysics to function

The paper argues that the conventional framing of the issue is misleading because it treats consciousness and free will as if they were objects that a system either possesses or lacks. Instead, Meidan calls for treating them as capacities or properties that can be examined through behavior, self-reference, reports, and internal cognitive organization.

If consciousness and free will are understood as non-physical substances, they resist integration with scientific explanation. If they are treated as physical entities, neuroscience has not identified a discrete object in the brain corresponding to consciousness or free will. The study's approach seeks a third path: examine the functional conditions under which a system, human or artificial, comes to think of itself as conscious and free.

The author does not deny the importance of metaphysical questions, but argues that a more productive research path is available. Instead of asking whether a machine is conscious in an ultimate sense, the study asks whether a machine can be built that self-ascribes consciousness in the same way humans do. Humans routinely report private experiences, represent themselves as subjects, deliberate among options, and believe they could have acted otherwise. Meidan argues that these patterns can be studied and potentially engineered.

The proposal is based on the idea that human consciousness and free choice may be explainable through organized physical processes. The study draws an analogy to earlier scientific progress, where machines helped researchers understand biological systems. Mechanical pumps helped illuminate the heart’s function, not because the heart is literally an artifact, but because the analogy clarified how a biological system could perform a complex role. In the same way, a constructed artificial system that appears conscious and agentive may help demystify the human case.

The author does not claim that today’s AI systems are conscious, nor does the study defend current large language models as minds. The argument is architectural and conceptual: it seeks to outline the functional requirements that a future machine would need to satisfy in order to think of itself as conscious and free.

Current AI systems can produce fluent statements about consciousness, identity and choice, but that does not mean they possess a sustained self-model, grounded semantic understanding, transparent memory continuity, or counterfactual agency. Meidan’s machine would need more than text generation. It would require an integrated structure linking perception, memory, reasoning, self-monitoring, meaning and action.

The sense of consciousness does not require direct access to the physical processes that generate it, the study contends. Humans do not perceive their neurons firing when they see blue, hear music, or feel pain. Experience presents the object or sensation, not the biological machinery behind it. A machine that similarly lacks access to its own low-level mechanisms, while representing higher-level perceptual and cognitive states, could develop an analogous sense of inner life.

This opacity is not a defect in the model; it is essential to it. The paper argues that a system must distinguish between what it experiences and how those experiences are physically or computationally produced. Human consciousness has this same structure: we experience thoughts, perceptions and intentions, while the causal processes behind them remain hidden.

A conscious machine would need self-report, memory, meaning and a model of itself

The paper identifies several requirements for a machine that thinks it is conscious.

  • Subjective self-report: The machine should be able to say that it experiences colors, sounds, sensations or thoughts from a first-person perspective. The point is not to prove that it has qualia in a metaphysical sense, but to build a system that can generate the same kind of self-ascription humans make when they report private experiences.
  • Transparency of experience and opacity of mechanism: The machine must not be directly aware of the computational or physical processes producing its states. It should represent itself as seeing, hearing, remembering or deciding, not as executing low-level code. This mirrors human experience, where the mind does not reveal the underlying biological machinery of perception and thought.
  • A persisting self-model: The machine must be able to represent itself as an “I” that has experiences over time. The requirement is functional: the machine must integrate perception, memory and action around a coherent self-representation. Without that continuity, it could generate isolated reports but would lack the structure needed to regard itself as a subject.
  • Representation of reality: Some of the machine’s experiences must be treated as referring to an external world, while others may be recognized as imagination, hallucination, simulation or non-veridical content. Human consciousness routinely distinguishes between perception and fantasy, even if imperfectly. A machine that thinks it is conscious would need an analogous ability to sort experiences according to their role in representing the world.
  • Grasp of meaning: This is one of the most important parts of the proposal because it addresses the familiar criticism that computers merely manipulate symbols without understanding them. Meidan does not define understanding as a mysterious inner essence. Instead, the paper ties meaning to semantic grounding, inference and the system’s interaction with the world. A machine that can connect symbols to sensory input, stored knowledge, logical consequences and self-reference would have a functional form of understanding.
  • Memory: Memory allows experiences to be integrated over time, supports the construction of identity, and gives continuity to the machine’s self-model. Without memory, a system could not sustain the sense of being the same subject across moments. It would not be able to connect past perceptions, current beliefs and future plans into a coherent self.
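As an illustration only (the paper gives no code, and every class and method name below is invented for this sketch), the self-report, self-model, opacity-of-mechanism and memory requirements could be combined in a few lines of Python:

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    """A high-level state the agent can report, e.g. 'seeing blue'."""
    content: str
    veridical: bool = True  # perception vs. imagination/non-veridical content

@dataclass
class SelfModel:
    """A persisting 'I' that accumulates experiences over time."""
    name: str = "I"
    memory: list = field(default_factory=list)

    def perceive(self, exp: Experience) -> None:
        # Only the high-level content is stored; the mechanism that
        # produced it stays opaque to the agent itself.
        self.memory.append(exp)

    def self_report(self) -> str:
        # First-person report over remembered veridical experiences.
        seen = [e.content for e in self.memory if e.veridical]
        return f"{self.name} have experienced: {', '.join(seen)}"

agent = SelfModel()
agent.perceive(Experience("the color blue"))
agent.perceive(Experience("a unicorn", veridical=False))
print(agent.self_report())  # reports only the veridical percept
```

The point of the sketch is structural: the agent's reports are generated from its self-model and memory, not from inspection of the code that produced its states, mirroring the transparency/opacity split the paper describes.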

The proposed architecture includes basic logical, mathematical and probabilistic competence. The machine would not need total human knowledge, but it would need enough reasoning capacity to compare hypotheses, draw conclusions and evaluate uncertainty. It would also need logical inference constrained by relevance and computational limits, reflecting the bounded nature of real human reasoning.

A state-monitoring module would record propositions stored in memory and preserve a history of internal states. This would allow the system to form meta-level descriptions of its own activity, such as recognizing uncertainty, tracking its current processing, or identifying its own memory contents. Such self-monitoring is central to the machine’s ability to think of itself as conscious.
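A minimal version of such a state-monitoring module might look like the following; the class and method names are hypothetical, not taken from the paper:

```python
class StateMonitor:
    """Records internal states so the system can describe its own activity."""
    def __init__(self):
        self.history = []  # chronological log of (processing step, proposition)

    def log(self, step: str, proposition: str) -> None:
        self.history.append((step, proposition))

    def meta_report(self) -> str:
        # A meta-level description of the system's own processing,
        # built from the recorded history rather than from the
        # low-level mechanisms themselves.
        return (f"I have recorded {len(self.history)} internal states; "
                f"most recent: {self.history[-1][1]!r}")

monitor = StateMonitor()
monitor.log("perception", "there is a red cube ahead")
monitor.log("inference", "the cube is probably graspable")
print(monitor.meta_report())
```

Note that the meta-report is itself just another proposition the system could store, which is how self-monitoring can feed back into the self-model.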

The architecture also requires perceptual inputs, including cameras, microphones or other sensors, and a translation system that turns incoming signals into structured representations. These representations would be stored in memory and made available for reasoning. The machine would then connect sensory information with concepts, categories and hypotheses.

Associative mechanisms would allow the system to connect stored propositions based on shared concepts, time, location or statistical relationships. This would support more human-like cognition by enabling patterns, memories and ideas to trigger one another. A hypothesis-generation module would then allow the machine to produce generalizations about the world, test them against stored information and adopt simpler explanations when multiple candidates are available.
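A toy sketch of those two mechanisms, using simple word overlap for association and description length as a crude simplicity proxy (both are stand-ins for whatever richer machinery a real system would need):

```python
def associate(propositions, cue):
    """Retrieve stored propositions that share at least one word-level
    concept with the cue (a crude stand-in for richer association)."""
    cue_concepts = set(cue.lower().split())
    return [p for p in propositions
            if cue_concepts & set(p.lower().split())]

def prefer_simpler(hypotheses):
    """Among hypotheses assumed to fit the data equally well, adopt the
    shortest, using description length as a rough simplicity proxy."""
    return min(hypotheses, key=len)

memory = [
    "rain makes streets wet",
    "the cat sleeps indoors",
    "wet streets are slippery",
]
# A cue about rain triggers only the related propositions, letting
# patterns and ideas chain into one another as the paper describes.
print(associate(memory, "rain on wet streets"))
print(prefer_simpler(["it rained",
                      "a street-cleaning truck passed and sprayed water"]))
```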

Lastly, the machine would need memory management. Human cognition depends not only on remembering but also on filtering and forgetting. A machine that stores everything without structure would quickly become unwieldy. Meidan suggests that mechanisms for limiting and managing memory would be needed to maintain usable self-continuity and reasoning.
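One common way to approximate such bounded, self-pruning memory is a least-recently-used store; this is an illustrative engineering choice, not a mechanism the paper commits to:

```python
from collections import OrderedDict

class BoundedMemory:
    """Memory with a capacity limit: least-recently-used items are forgotten."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()

    def store(self, key: str, value: str) -> None:
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # forget the stalest entry

    def recall(self, key: str):
        if key in self.items:
            self.items.move_to_end(key)  # recalling refreshes a memory
            return self.items[key]
        return None

mem = BoundedMemory(capacity=2)
mem.store("breakfast", "toast")
mem.store("weather", "rainy")
mem.store("meeting", "at noon")  # capacity exceeded: "breakfast" is forgotten
print(mem.recall("breakfast"))   # None
print(mem.recall("meeting"))     # 'at noon'
```

Forgetting here is not a bug but the filtering the paper calls for: without it, the store grows without structure and self-continuity becomes unmanageable.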

The paper argues that none of these components is revolutionary by itself. Many already exist in partial form in contemporary computing, artificial intelligence and cognitive architecture. The challenge is integration. A machine that thinks it is conscious would require these components to operate together as a unified, self-referential system.

Free choice becomes a question of deliberation, counterfactuals and agency

The study then applies a similar functional strategy to free choice. The author argues that the classical free will problem is often trapped between determinism and randomness. If every action is determined by prior physical states, it seems there is no room for freedom. If actions are not determined, they appear random rather than freely chosen. Neither option fully explains the human sense of agency.

The paper thus shifts the question away from whether free will exists in a metaphysical sense and toward why humans feel they choose freely. The same reframing is then applied to machines. The relevant question becomes whether an artificial system can be built that thinks it chooses freely in the same way humans think they do.

Meidan identifies several requirements.

  • Conscious deliberation: A machine must be aware of its deliberative process and its outcome. Reflexive or unconscious action would not satisfy the requirement. The system must represent itself as considering options before acting.
  • Ability to represent that it could have done otherwise: This does not require breaking the laws of physics or escaping causality. It requires counterfactual reasoning: the machine must be able to consider alternative paths and understand that, under different reasons, desires or intentions, it would have acted differently.
  • Sourcehood: The action must arise through the machine’s own deliberative processes rather than through external coercion or direct manipulation. In human terms, we do not consider an action free if it is forced. For a machine, freedom in this functional sense requires that decisions flow through its internal reasoning, goals and self-regulation.
  • Responsiveness to reasons: A system that acts the same way no matter what evidence, goals or constraints it receives would not appear free. A machine that thinks it chooses freely must be able to modify its behavior based on reasons. It does not need perfect rationality, but it must be able to deliberate and respond meaningfully to changing considerations.
  • Control and self-regulation: The machine must be able to plan, monitor itself and regulate impulses or tendencies before action. Meidan argues that this form of control is compatible with deterministic processes. Freedom does not require uncaused action. It requires an internal structure capable of evaluating options and guiding behavior.
  • Practical unpredictability: Meidan rejects the idea that unpredictability alone creates free will. Randomness is not agency. But practical unpredictability matters because human choices are often too complex for other agents to forecast reliably. A machine with complex deliberation, self-reference and reasons-responsiveness would also generate actions that are not easily predictable from the outside.

To implement free choice, the machine would need option generation and selection. This is already familiar in systems that evaluate candidate moves, plans or actions. It would also need opacity in decision-making, so it does not have full introspective access to the causal chain behind its choices. Humans also experience decisions as emerging from deliberation without seeing all the underlying causes.

Counterfactual reasoning is the most important addition. The machine must be able to represent possible alternatives, temporarily set aside facts inconsistent with a hypothetical condition, and evaluate what would follow if a different choice had been made. This lets the system think that it could have acted otherwise, even if its actual action arose from deterministic or probabilistic mechanisms.
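The combination of option generation, reasons-responsive selection and counterfactual re-evaluation can be illustrated with a toy deliberator; the utility functions here stand in for the machine's "reasons" and are purely hypothetical:

```python
def deliberate(options, utility):
    """Select the option the agent's current reasons favor most."""
    return max(options, key=utility)

def could_have_done_otherwise(options, utility, counterfactual_utility):
    """Counterfactual test: re-run deliberation under altered reasons,
    setting aside the actual outcome, and see whether the choice changes."""
    actual = deliberate(options, utility)
    alternative = deliberate(options, counterfactual_utility)
    return alternative != actual

options = ["stay home", "go for a walk"]
likes_walking = {"stay home": 1, "go for a walk": 3}.get
prefers_rest  = {"stay home": 5, "go for a walk": 2}.get

choice = deliberate(options, likes_walking)  # 'go for a walk'
print(could_have_done_otherwise(options, likes_walking, prefers_rest))  # True
```

Everything in this sketch is deterministic, yet the agent can correctly represent that under different desires it would have acted differently, which is exactly the functional sense of "could have done otherwise" the paper relies on.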

The author points out that moral and practical accountability do not require actions to be uncaused. Human punishment and reward already assume that future behavior can be shaped by expectations, consequences and learning. A machine capable of deliberation, anticipation and behavioral adjustment could therefore participate in similar responsibility structures, at least in a functional sense.

First published in: Devdiscourse