Will AI Start the Next War? Six Theories That Worry Experts, and Why They May Not
A RAND Corporation study outlines six potential ways AI could increase the risk of interstate war, chiefly by destabilizing power balances and distorting human judgment. While most scenarios are unlikely, the authors stress that responsible governance is crucial to preventing AI-fueled conflict.

In a compelling study published by the Bulletin of the Atomic Scientists, RAND Corporation researchers Zachary Burdette, Karl P. Mueller, Jim Mitre, and Lily Hoak analyze whether artificial intelligence (AI) could one day be the spark that ignites the next major war. Their findings strike a cautious balance: while AI is unlikely to trigger conflict on its own, it could magnify geopolitical tensions and increase the risk of war if mishandled. Drawing on strategic history, military doctrine, and contemporary political science, the authors examine six plausible pathways through which AI might make interstate war more likely. They assess each scenario critically and conclude that although most are improbable, two deserve heightened attention because of their credibility and potential danger.
AI and the Temptation of Power Shifts
The first scenario considers how AI might disrupt the global balance of power, tempting some states to pursue military aggression. If a country gains a significant technological edge through AI-enabled military systems, such as autonomous drone swarms or superior cyber warfare tools, it may perceive previously unreachable goals as suddenly attainable. The researchers argue that for this outcome to materialize, multiple factors must align: the AI advantage must be decisive, rivals must fail to catch up in time, and the possessing state must have political goals that require the use of force. Even then, deterrence might still prevail if both sides recognize the shift. Interestingly, AI might also have a stabilizing effect if it strengthens the military capabilities of countries supporting the territorial status quo, such as the U.S. and its allies, discouraging aggressive moves by revisionist powers like Russia or China.
The Race to Preempt: Preventive Wars
In a variation of the first hypothesis, the second scenario envisions states waging preventive war to stop rivals from gaining AI supremacy. If national leaders believe AI offers transformative economic and military advantages, they may calculate that it is safer to strike first than to risk future coercion or subjugation. But such logic is fraught with risk. A U.S. strike on Chinese AI infrastructure, for instance, would likely have to target civilian tech hubs, an act that could provoke severe retaliation. Moreover, being a fast and effective adopter of AI often matters more than being the first to invent it, weakening the rationale for such high-stakes preventive attacks. Though a nation might attempt to preserve a monopoly on advanced AI through force, the political and military costs of such a move are likely to be prohibitive.
The Illusion of a Low-Cost War
Could AI make war seem “cheaper” and more palatable? This scenario rests on the belief that autonomous systems could drastically reduce human casualties and material costs. If AI-enabled machines take over dangerous battlefield roles, governments might become more willing to use force to achieve political ends. However, the RAND team finds this vision overly optimistic. As seen in Ukraine, drones supplement but do not replace human combatants. Moreover, large-scale autonomous forces are still costly to produce and maintain. AI could also prolong wars by making it easier to replenish robotic forces, increasing overall economic strain and loss of life over time. A particularly alarming variant imagines AI weakening nuclear deterrence, but even advanced AI is unlikely to fully neutralize the retaliatory capacity of nuclear-armed states.
Domestic Turmoil and the Diversionary War Myth
Another hypothesis centers on the idea that AI-induced societal chaos, like mass unemployment or political unrest, could drive leaders to manufacture foreign crises as distractions. Historically, however, this “diversionary war” theory lacks strong support. Most leaders in domestic turmoil choose repression or reform over foreign aggression. For example, Franklin Roosevelt pursued sweeping economic recovery measures during the Great Depression rather than war. Moreover, even if AI sparks rapid economic change, the public might not clamor for conflict. Prosperity through AI could just as easily foster self-sufficiency and reduce interest in global confrontation. While AI may inflame public opinion through social media echo chambers or misinformation, it’s unclear whether this would significantly shift governments toward war.
Autonomous Accidents and Escalation
A classic fear drawn from science fiction is the rogue AI system initiating war on its own. While this is unlikely in practice, the researchers explore whether AI might unintentionally escalate conflict through accidents or system errors. Autonomous drones might violate enemy airspace or misidentify targets in a high-stress scenario. But for such incidents to start a war, they must be severe, and leaders must fail to de-escalate. History shows that states are typically effective at containing accidents; for instance, the U.S. and Iraq defused tensions after the 1987 Stark incident. Moreover, efforts are already underway to prevent such outcomes. The 2024 agreement between the U.S. and China to retain human control over nuclear launches reflects a growing recognition of the risks AI poses.
Misguided Minds: AI Distorting Human Judgment
Perhaps the most credible and unsettling scenario involves AI influencing the decisions of human leaders. With AI-generated intelligence playing a growing role in military and strategic planning, leaders might place undue trust in flawed or biased outputs. Large language models can hallucinate facts, misinterpret intent, or reinforce preexisting beliefs. Automation bias may cause leaders to accept AI assessments they would otherwise challenge if offered by a human advisor. This can lead to misperceptions about an adversary’s intent or strength, increasing the chance of conflict. The risk is compounded if leaders feel rushed to act before rivals do the same with their own AI tools. On the flip side, well-designed AI systems could reduce misperception and improve crisis management, but this requires robust safeguards and transparency, elements that may be absent in authoritarian states or poorly governed systems.
The RAND researchers assert that while AI could play a role in future conflicts, it is unlikely to be the sole or primary cause of war. Instead, the real danger lies in how AI interacts with existing political incentives and human behaviors. Among the six pathways, the greatest risks lie in AI destabilizing power balances and corrupting strategic judgment. These risks, while serious, can be managed through prudent policies, transparency, and human oversight. Governments must ensure AI serves as a stabilizing force, not a reckless accelerant to global conflict.
First published in: Devdiscourse