AI may not need to think like humans to be intelligent

The growing debate over whether artificial intelligence should mimic human thinking has been reignited by a recent study from researcher Soumya Banerjee. The work questions long-held anthropocentric assumptions and argues for broader perspectives on intelligence beyond human cognition.
The paper, When Planes Fly Better than Birds: Should AIs Think like Humans?, was published in the journal Proceedings. Drawing on philosophical, engineering, and ethical perspectives, Banerjee compares AI’s trajectory with the evolution of aviation, suggesting that just as airplanes surpass birds without replicating their biology, AI might achieve intelligence without mirroring human thought.
Do machines need to think like humans to be intelligent?
The analysis centers on the philosophical question of whether intelligence requires human-like reasoning. The study critiques the tendency to equate intelligence with human cognition, noting that such a narrow view risks overlooking alternative forms of problem-solving.
The author asserts that intelligence is substrate-independent - it can manifest in different forms depending on the underlying system. Just as planes achieve flight through mechanics that differ from a bird's, AI may develop its own pathways to problem-solving. This challenges the anthropocentric belief that replicating the human brain is the only, or even the best, route to artificial intelligence.
The study also warns that insisting on human-like reasoning could create artificial bottlenecks. It could restrict progress by forcing emerging systems into molds designed for biological constraints rather than optimizing for machine-specific strengths. This perspective frames intelligence as a broader spectrum where human cognition is only one expression.
What are the engineering trade-offs between human-like and non-human AI?
The paper examines the practical trade-offs of designing AI systems that either emulate human cognition or pursue alternative computational strategies. Human-inspired models, including cognitive architectures such as ACT-R and SOAR as well as neuromorphic computing, provide interpretability and closer alignment with human values. These systems are often easier to monitor and understand, making them suitable for high-stakes applications where trust and transparency are essential.
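To see why such rule-based designs are easier to audit, consider a minimal, hypothetical production-rule sketch in Python. It borrows only the general if-then flavor of architectures like ACT-R and SOAR, not their actual APIs; the rules, facts, and thresholds are invented purely for illustration.

```python
# Toy production-rule system: every decision is traced to a named, human-readable rule.
# This is NOT the real ACT-R or SOAR API; it only illustrates why rule-based systems
# are straightforward to inspect and audit.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    name: str                          # human-readable identifier for auditing
    condition: Callable[[Dict], bool]  # when does the rule apply?
    action: str                        # what the system should do


def decide(facts: Dict, rules: List[Rule]) -> str:
    """Fire the first matching rule and report which rule justified the decision."""
    for rule in rules:
        if rule.condition(facts):
            print(f"decision '{rule.action}' justified by rule '{rule.name}'")
            return rule.action
    return "no-action"


rules = [
    Rule("fever-and-cough", lambda f: f["temp"] > 38.0 and f["cough"], "order chest X-ray"),
    Rule("fever-only", lambda f: f["temp"] > 38.0, "recommend rest and fluids"),
]

decide({"temp": 38.6, "cough": True}, rules)
```

Because each outcome points back to a named rule, a reviewer can see exactly why the system acted as it did, which is the interpretability advantage the paper attributes to human-inspired designs.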
However, the study points out that human cognition itself is constrained by biology. Neurons transmit signals relatively slowly, working memory is limited, and evolutionary pressures have shaped the brain in ways that may not be optimal for certain tasks. Translating these features directly into AI could limit performance unnecessarily.
The author compares this with non-human models such as deep learning, reinforcement learning, and specialized systems like AlphaGo and AlphaFold. These models achieve superhuman performance without replicating the way humans think. Their success demonstrates that high-level outcomes such as strategy optimization and scientific discovery can emerge from architectures radically different from human cognition.
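The contrast with reward-driven learning can be made concrete with a toy multi-armed bandit, offered here as a hedged sketch rather than anything drawn from the paper: the agent converges on the best action purely through trial and error, with no human-style deliberation, and the payout probabilities below are made-up stand-ins for a real environment.

```python
# Minimal reward-driven learning on a toy multi-armed bandit.
# The agent improves solely from reward feedback; it builds no explanation,
# plan, or world model resembling human reasoning.

import random

payout_prob = [0.2, 0.5, 0.8]   # hidden quality of each action (assumed values)
estimates = [0.0, 0.0, 0.0]     # the agent's learned value estimates
counts = [0, 0, 0]
epsilon = 0.1                   # exploration rate

for step in range(5000):
    # Explore occasionally, otherwise exploit the current best estimate.
    if random.random() < epsilon:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])

    reward = 1.0 if random.random() < payout_prob[action] else 0.0

    # Incremental average: nudge the estimate toward the observed reward.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned value estimates:", [round(e, 2) for e in estimates])
```

After a few thousand trials the estimates track the hidden payout probabilities, even though the agent never forms anything like a human-readable rationale for its choices.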
The analysis suggests that the future of AI engineering will likely involve hybrid systems: models that harness the raw computational power of machine learning while incorporating symbolic reasoning and interpretability features that align with human oversight. This combination could strike a balance between efficiency and trust.
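One plausible, simplified wiring for such a hybrid is sketched below: a statistical model proposes a decision, and an explicit rule layer either confirms it or flags it for a person. The scoring function, rules, and loan-style features are assumptions made for illustration, not a design taken from the study.

```python
# Hedged sketch of a hybrid decision pipeline: a learned scorer proposes,
# a small symbolic layer checks explicit constraints, and anything that
# violates them is routed to a human reviewer.

import math


def learned_score(features: dict) -> float:
    """Stand-in for a trained model: returns an approval probability."""
    z = 2.0 * features["income_ratio"] - 3.0 * features["missed_payments"]
    return 1.0 / (1.0 + math.exp(-z))   # logistic squashing to (0, 1)


def symbolic_check(features: dict) -> list:
    """Explicit, auditable rules that hold regardless of what the model says."""
    violations = []
    if features["applicant_age"] < 18:
        violations.append("applicant must be an adult")
    if features["missed_payments"] > 6:
        violations.append("too many missed payments for automatic approval")
    return violations


def decide(features: dict) -> str:
    score = learned_score(features)
    violations = symbolic_check(features)
    if violations:
        return f"refer to human reviewer ({'; '.join(violations)})"
    return "approve" if score > 0.5 else "decline"


print(decide({"income_ratio": 2.5, "missed_payments": 1, "applicant_age": 30}))
print(decide({"income_ratio": 3.0, "missed_payments": 8, "applicant_age": 30}))
```

The design intuition is that the learned component supplies raw predictive power while the symbolic layer stays small, legible, and easy to update when policies or values change.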
What ethical challenges arise from non-human AI thinking?
The study also explores the ethical implications of designing AI that diverges from human-like reasoning. A major concern is alignment - ensuring that AI systems act in ways consistent with human values and intentions. Human-like models can support alignment by making AI decisions more understandable and relatable. In fields such as healthcare, law, and finance, this interpretability is critical for trust.
On the other hand, radically non-human architectures present challenges in predictability and control. If AI systems develop reasoning methods that are opaque to humans, questions arise about accountability and responsibility. This complicates ethical discussions around agency and the moral status of AI.
The study acknowledges that the pursuit of non-human intelligence also reshapes debates about rights and responsibilities. If AI systems operate on fundamentally different cognitive frameworks, society will need new ethical categories to manage their role in decision-making.
The study argues that the solution may lie in hybrid approaches that integrate the strengths of both paradigms. By combining statistical learning with symbolic reasoning, embedding human-in-the-loop mechanisms, and building value alignment protocols, developers can create systems that are both powerful and trustworthy.
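A human-in-the-loop mechanism can be as simple as a confidence gate, sketched below under the assumption of a fixed threshold; the threshold value and the placeholder review function are illustrative choices, not a protocol prescribed by the author.

```python
# Hedged sketch of a human-in-the-loop gate: automated decisions below a
# confidence threshold are escalated to a person instead of being acted on.

def human_review(case_id: str) -> str:
    """Placeholder for a real review queue or interface handoff."""
    print(f"case {case_id}: escalated to human reviewer")
    return "pending-human-decision"


def gated_decision(case_id: str, model_label: str, confidence: float,
                   threshold: float = 0.9) -> str:
    # Act autonomously only when the model is highly confident;
    # otherwise keep a person in the loop.
    if confidence >= threshold:
        return model_label
    return human_review(case_id)


print(gated_decision("A-17", "approve", 0.97))   # confident -> automated
print(gated_decision("A-18", "decline", 0.62))   # uncertain -> escalated
```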
- FIRST PUBLISHED IN: Devdiscourse