From defense to danger: How AI is rewriting the rules of cybersecurity

Artificial intelligence is changing the way cyber threats emerge and how organizations must defend themselves, creating both new vulnerabilities and innovative protections. A team of researchers has examined these dynamics in a landmark study that reframes the future of cybersecurity.
Their article, “How Does AI Transform Cyber Risk Management?”, published in Systems, combines a literature review, expert workshops, and a case study of the Colonial Pipeline ransomware attack. The authors identify three new feedback structures introduced by AI that fundamentally change how cyber risks behave, concluding that risk management strategies must be redesigned for an era of intelligent and adaptive adversaries.
Why AI is reshaping cyber threats at scale
Cyberattacks are intensifying not only in number but in sophistication. Phishing incidents rose by nearly 500 percent in 2023, while deepfake-enabled fraud surged by 3,000 percent in the same year. Criminal groups are leveraging AI to generate polymorphic malware such as Black Mamba, which mutates to evade detection. The financial toll is staggering: global cybercrime costs are estimated at 9.5 trillion US dollars in 2024, and the average data breach now costs 4.8 million US dollars. Supply chain ripple effects can multiply these damages hundreds or even thousands of times over.
The authors note that in complex environments only about 15 percent of cybersecurity investments prove truly effective. Most resources are poorly aligned with the threats they are meant to counter. Artificial intelligence exacerbates this mismatch by accelerating the tempo of attacks, automating exploitation, and creating new pathways that traditional defenses are unprepared for.
What new dynamics are introduced by AI?
Through their analysis, the researchers highlight three feedback structures that demonstrate how AI-driven threats move beyond linear cause-and-effect events to systemic, adaptive risks.
The first is deceptive defense structures, where defenders use AI to generate convincing decoys, honeypots, or synthetic systems. These absorb attackers’ attention and effort, buying critical time and safeguarding real assets. This creates a balancing loop, but it also raises cost and complexity for defenders who must maintain these environments.
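To make the deception idea concrete, here is a minimal sketch of a low-interaction decoy service in Python. It is illustrative only: the study describes AI-generated decoys of far greater fidelity, and the port, log file, and fake admin page below are assumptions introduced for this example.

```python
# Minimal sketch of a decoy service: a fake "admin console" that absorbs attacker
# probes and records them for analysis. Port, log file name, and page content are
# illustrative assumptions, not details from the study.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="decoy_hits.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")


class DecoyHandler(BaseHTTPRequestHandler):
    """Answers every request with a plausible-looking page and logs the probe."""

    def do_GET(self):
        # Record who touched the decoy and what they asked for.
        logging.info("probe from %s path=%s ua=%s",
                     self.client_address[0], self.path,
                     self.headers.get("User-Agent", "-"))
        body = b"<html><title>Admin Console</title><body>Login required</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        return  # keep stdout quiet; everything goes to the decoy log


if __name__ == "__main__":
    # A believable decoy would sit on the same network segment as real assets.
    HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```

Every hit on such a decoy is attacker effort not spent on production systems, which is precisely the balancing effect the authors describe, at the price of building and maintaining the fake environment.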
The second is the two-step success-to-success attack. Here, adversaries use AI to target and neutralize a single protective function, such as a malware detection system or a model used for whitelisting. Once this initial success is achieved, a second, deeper breach can be launched undetected. The danger lies in the false sense of security this creates, as organizations may believe systems are functioning properly while attackers have already disabled core defenses.
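One simple counter to this pattern, sketched below under stated assumptions, is a "canary" check: periodically feed a known reference sample through the detection layer and raise an alarm the moment it stops being flagged. The `scan_bytes` hook is hypothetical and must be wired to whatever detector is actually deployed; the constant is the standard harmless EICAR anti-virus test string.

```python
# Sketch of a canary check against the two-step attack pattern: if the detection
# layer ever stops flagging a known reference sample, assume it has been disabled.
# `scan_bytes` is a hypothetical wrapper around the deployed detector.
import time
from typing import Callable

# Standard harmless EICAR anti-virus test string.
EICAR_TEST = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"


def detection_canary(scan_bytes: Callable[[bytes], bool], interval_s: int = 300) -> None:
    """Alert if the detector stops flagging the reference sample.

    `scan_bytes` should return True when the payload is flagged as malicious.
    """
    while True:
        if not scan_bytes(EICAR_TEST):
            # A detector that goes silent is the "false sense of security" the
            # authors warn about; escalate through a channel the detector
            # does not control.
            print("ALERT: detection layer failed its canary check")
        time.sleep(interval_s)
```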
The third is autonomous proliferation, which occurs when AI systems inside an organization drift, malfunction, or are repurposed for malicious purposes. These AI applications can begin to generate harmful actions independently, effectively creating automated insider threats that expand an organization’s exposure without direct human intervention.
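A minimal way to watch for such drift, sketched here as an assumption rather than a method from the paper, is to keep a frozen set of reference inputs with approved outputs and periodically re-score the live model against them. The file name, format, and 5 percent tolerance below are illustrative.

```python
# Sketch of a behavioural drift check for an internal AI component, assuming a
# frozen file of reference inputs with approved outputs. File name, format, and
# the tolerance value are illustrative assumptions.
import json
from typing import Callable


def drift_score(model: Callable, reference_cases: list[dict]) -> float:
    """Fraction of reference inputs whose output no longer matches the approved baseline."""
    mismatches = sum(1 for case in reference_cases
                     if model(case["input"]) != case["expected"])
    return mismatches / len(reference_cases)


def check_for_drift(model: Callable, reference_path: str = "reference_cases.json",
                    threshold: float = 0.05) -> float:
    with open(reference_path) as fh:
        cases = json.load(fh)  # e.g. a list of {"input": ..., "expected": ...} records
    score = drift_score(model, cases)
    if score > threshold:
        # Treat a drifting internal model like an insider incident: isolate and review.
        print(f"ALERT: model drift {score:.1%} exceeds tolerance {threshold:.0%}")
    return score
```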
Together, these feedback structures show that AI has shifted cyber risk from predictable patterns to evolving and recursive dynamics. This challenges defenders to rethink not just the tools they use but the principles guiding their risk management strategies.
How should leaders respond to the AI threat?
The study uses the Colonial Pipeline ransomware attack as a detailed case study to illustrate how gaps in prevention and detection can cascade into large-scale crises. The incident, which forced the shutdown of a critical energy pipeline and led to a ransom payment of 4.4 million US dollars, underscores how vulnerabilities in one part of a system can escalate into national-level disruptions.
From these lessons, the authors offer several recommendations for organizations. They argue that deception should become a mainstream component of cybersecurity standards, not a niche tactic. By diverting attacker effort into controlled spaces, organizations can weaken adversaries’ momentum and gain strategic insights.
They also warn that insider risk must now include artificial intelligence itself. AI systems require continuous monitoring, model integrity checks, and validation processes to ensure they do not drift into dangerous or exploitable behavior. This extends the scope of cybersecurity beyond perimeter defenses to the very tools organizations use internally.
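What a model integrity check might look like in practice is sketched below, assuming deployed model artifacts are files whose SHA-256 digests were recorded in a manifest at approval time. The manifest name and format are assumptions for illustration, not the authors' procedure.

```python
# Sketch of a model integrity check: compare on-disk model artifacts against the
# SHA-256 digests recorded when the models were approved. Manifest path and
# format are illustrative assumptions.
import hashlib
import json


def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_models(manifest_path: str = "approved_models.json") -> list[str]:
    """Return the model artifacts whose current hash no longer matches the manifest."""
    with open(manifest_path) as fh:
        manifest = json.load(fh)  # e.g. {"fraud_model.onnx": "<sha256 digest>"}
    return [path for path, digest in manifest.items() if sha256_of(path) != digest]


if __name__ == "__main__":
    tampered = verify_models()
    if tampered:
        print("ALERT: model artifacts changed outside the approved pipeline:", tampered)
```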
The study calls for scenario-driven planning supported by dynamic policy and simulation. AI-enabled threats evolve rapidly, making static defenses obsolete. Organizations must therefore simulate complex threat environments, stress-test their AI-enabled systems, and recalibrate investment strategies in real time. The paper emphasizes the need for quantified simulation models that capture the new feedback loops AI introduces.
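In the spirit of those quantified models, the toy simulation below steps through the deception balancing loop described earlier, splitting attacker effort between real assets and decoys. Every rate and probability is an invented parameter for illustration; the paper's own models are far richer.

```python
# Toy time-step simulation of the deception balancing loop: attacker effort is
# split between real assets and decoys in proportion to decoy coverage.
# All rates and probabilities are made-up illustrative parameters.
def simulate(steps: int = 52, attack_rate: float = 10.0, decoy_coverage: float = 0.4,
             breach_prob_per_hit: float = 0.02, decoy_cost_per_hit: float = 0.5):
    expected_breaches, decoy_cost = 0.0, 0.0
    for _ in range(steps):  # one step per week over a year
        hits_on_real = attack_rate * (1.0 - decoy_coverage)
        expected_breaches += hits_on_real * breach_prob_per_hit
        decoy_cost += attack_rate * decoy_coverage * decoy_cost_per_hit
    return expected_breaches, decoy_cost


if __name__ == "__main__":
    # Compare expected breaches and decoy upkeep at different coverage levels.
    for coverage in (0.0, 0.4, 0.8):
        breaches, cost = simulate(decoy_coverage=coverage)
        print(f"coverage={coverage:.0%}  expected breaches={breaches:.1f}  decoy cost={cost:.1f}")
```

Even this crude sketch shows the trade-off the authors highlight: more decoy coverage lowers expected breaches but raises the ongoing cost of maintaining the deceptive environment.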
FIRST PUBLISHED IN: Devdiscourse