More capable AI could force firms to deploy less
Businesses racing to adopt more powerful artificial intelligence (AI) systems may face an unexpected cybersecurity tradeoff: stronger AI does not always lead to wider deployment. A new study finds that firms in high-risk sectors may rationally limit or reject advanced AI tools when governance controls fail to keep pace with capability.
The study, titled “The Security Cost of Intelligence: AI Capability, Cyber Risk, and Deployment Paradox,” was published on arXiv. It develops an analytical model showing that advanced AI can raise productivity while also increasing cyber exposure when firms grant systems broad access to data, workflows, and delegated authority without matching governance safeguards.
Better AI can become harder to deploy safely
The key finding is what the author calls a deployment paradox. Standard technology-adoption models expect better tools to increase deployment because they raise productivity. When more capable AI systems require broader operational authority, however, firms may deploy less, not more.
The mechanism is direct. High-value enterprise AI often needs access to sensitive data, integration with workflows, and permission to act across systems. These same features expand the attack surface and increase the damage that can follow from a breach. When governance controls are weak, the security burden attached to advanced AI can grow faster than the productivity gain.
The study calls this condition the governance-capability gap. It occurs when AI capability advances faster than organizational controls such as access limits, data segmentation, privilege boundaries, and containment design. The author stresses that capability itself does not automatically cause cyber damage. The risk emerges when capability is bundled with authority exposure under weak governance.
The paper points to real-world warning signs. It cites reported AI-related security incidents where many organizations lacked proper AI access controls, and notes that firms often give AI agents more privileges than necessary. This creates hard-to-monitor pathways and raises the cost of compromise, especially in sectors where data breaches are already expensive.
In the model, firms choose both AI deployment and cybersecurity investment. More deployment raises productivity, but it also expands the exposure that defenders must protect. More capable AI raises output, but under weak governance it also raises the potential damage of compromise. The study finds that in high-loss environments, this second effect can dominate.
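The paper's exact functional forms are not reproduced here, but a stylized one-period objective captures the logic. Every symbol below is an illustrative assumption, not the author's notation:

```latex
% Stylized sketch (illustrative assumptions, not the paper's model):
% a firm picks a deployment level d to maximize
\[
\pi(d) \;=\; \underbrace{A\,d - \tfrac{1}{2}\,d^{2}}_{\text{productivity, concave in } d}
\;-\; \underbrace{g\,L\,A^{2}\,d}_{\text{expected breach loss}}
\quad\Longrightarrow\quad
d^{*} \;=\; \max\bigl(0,\; A - g\,L\,A^{2}\bigr),
\]
% where A is capability, g is governance weakness, and L is the sector's
% breach-loss magnitude. Because the loss term grows with A^2 (a stronger,
% more broadly authorized system amplifies what one compromise can reach),
% d* rises with capability only while A < 1/(2gL); beyond that threshold,
% extra capability shrinks optimal deployment -- the deployment paradox.
```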
Healthcare and finance face sharper tradeoffs
The research uses industry breach-cost differences to show how the paradox varies by sector. It calibrates breach-loss risk using IBM’s 2024 data breach cost estimates, treating the global average breach cost of $4.88 million as a benchmark. The study lists healthcare at $9.77 million, financial services at $6.08 million, industrial firms at $5.56 million, and retail at $3.48 million.
Under the model, retail does not enter the private deployment-paradox region, while industrial and financial services do. Healthcare sits at the most severe end of the risk scale. The study’s numerical exercise suggests that under weak governance, firms in healthcare and finance may reject frontier AI upgrades even when those systems are technically more productive.
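To see how the sector figures interact with that logic, here is a minimal numerical sketch using the toy objective above. The governance-weakness value and the rescaling of IBM's dollar figures are arbitrary choices made purely for illustration:

```python
# Toy calibration of the sketch above. The governance weakness g = 0.8 and
# the /10 rescaling of IBM's 2024 breach-cost figures (in $ millions) are
# arbitrary illustrative choices, not values from the study.

def optimal_deployment(A, g, L):
    """First-order condition of the toy objective: d* = A - g*L*A^2, floored at 0."""
    return max(0.0, A - g * L * A * A)

SECTOR_LOSS_MUSD = {"healthcare": 9.77, "finance": 6.08,
                    "industrial": 5.56, "retail": 3.48}

for sector, cost in SECTOR_LOSS_MUSD.items():
    L = cost / 10.0                                    # rescaled loss magnitude
    legacy = optimal_deployment(A=1.0, g=0.8, L=L)     # legacy-capability system
    frontier = optimal_deployment(A=2.0, g=0.8, L=L)   # frontier-capability system
    print(f"{sector:>10}: d*(legacy)={legacy:.2f}  d*(frontier)={frontier:.2f}")
```

With these arbitrary parameters, frontier deployment collapses to zero for healthcare, nearly vanishes for finance, and only retail deploys the frontier system more widely than the legacy one, loosely echoing the sectoral pattern the study reports.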
This leads to another key result: the sandboxing trap. A firm may retain a legacy AI system or restrict advanced tools to limited use cases because the cybersecurity burden of a frontier system outweighs its productivity benefits under current governance conditions. The study says this is not mere caution or managerial inertia. It can be a rational response to the firm’s security environment.
For healthcare, the model shows the strongest warning. In the illustrative scenario, a frontier system can amplify breach severity faster than it raises productivity. Financial services also shows a potential rejection of advanced AI under high-loss conditions. Industrial firms may still upgrade under some assumptions, while retail is more likely to benefit from capability gains without facing the same deployment reversal.
The study also finds that the gap between ideal deployment and actual deployment grows as breach-loss magnitude rises. This gap is called the security discount. It represents the amount of AI deployment firms give up because of cyber risk. As capability increases, the no-risk deployment benchmark rises, but so does the amount withheld under weak governance.
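In the toy sketch above, the security discount has a closed form for interior solutions; again, this is an illustrative assumption rather than the paper's expression:

```latex
\[
\text{security discount}
\;=\; d^{*}_{\,L=0} - d^{*}
\;=\; A - \bigl(A - g\,L\,A^{2}\bigr)
\;=\; g\,L\,A^{2},
\]
% (for interior solutions with d* > 0). The discount grows with both the
% breach-loss magnitude L and capability A: the no-risk benchmark d* = A
% rises with capability, but so does the amount withheld when governance
% weakness g is high.
```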
The author’s model also shows that breach risks do not remain confined to individual firms. A compromised AI system with cross-system authority can affect customers, partners, infrastructure, and other organizations. Once these external harms are considered, the socially optimal level of AI deployment can be lower than what private firms choose. This means firms may still over-deploy from a public welfare perspective, even when they are privately holding back.
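In the same toy notation, adding a per-unit external harm E that the private firm ignores drives a wedge between the private and social optima; E is, once more, an illustrative assumption:

```latex
\[
d^{*}_{\text{social}}
\;=\; \max\bigl(0,\; A - g\,(L + E)\,A^{2}\bigr)
\;\le\;
d^{*}_{\text{private}}
\;=\; \max\bigl(0,\; A - g\,L\,A^{2}\bigr),
\qquad E > 0,
\]
% with strict inequality at interior solutions: even a firm that is
% privately holding back can still deploy more than a planner who
% internalizes harms to customers, partners, and infrastructure would choose.
```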
Governance, not just spending, determines AI value
Cybersecurity spending alone is not enough, the study asserts. Firms need governance controls that reduce the damage caused by compromise. These include data segmentation, least-privilege access, containment architecture, limits on lateral movement, auditable authority boundaries, and clearer logging of what AI systems can access and do.
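As a concrete picture of what least-privilege access and auditable authority boundaries can look like in practice, here is a minimal, generic sketch. All role and tool names are hypothetical, and this is not an architecture taken from the study:

```python
# Generic sketch of least-privilege, auditable authority boundaries for an
# AI agent. Role and tool names are hypothetical illustrations.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-authority")

# Deny by default: an agent role may only invoke tools it is explicitly granted.
ROLE_GRANTS = {
    "claims-assistant": {"read_claim", "summarize_claim"},  # read-only role
    "billing-agent": {"read_invoice", "draft_invoice"},
}

def invoke_tool(role: str, tool: str, payload: dict) -> dict:
    """Check the grant, log the decision for audit, then dispatch."""
    allowed = tool in ROLE_GRANTS.get(role, set())
    log.info("role=%s tool=%s allowed=%s", role, tool, allowed)  # audit trail
    if not allowed:
        raise PermissionError(f"{role} is not granted {tool}")
    return {"tool": tool, "payload": payload}  # stand-in for the real dispatch

invoke_tool("claims-assistant", "read_claim", {"claim_id": 42})  # permitted
# invoke_tool("claims-assistant", "delete_claim", {})            # would raise
```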
The author separates two types of investment. Productivity-side investments, such as workflow redesign, data governance, process standardization, and organizational learning, help firms get more value from AI. Security-side governance investments reduce breach-loss magnitude by limiting what a compromised system can reach or damage. Both matter, but they operate through different channels.
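Extending the toy objective makes the two channels visible; the investment variables p and s below are illustrative assumptions:

```latex
\[
\pi(d) \;=\; \bigl(A + p\bigr)\,d \;-\; \tfrac{1}{2}\,d^{2}
\;-\; g(s)\,L\,A^{2}\,d,
\qquad g'(s) < 0,
\]
% where p (workflow redesign, data governance, organizational learning)
% raises the productivity term, while s (segmentation, least privilege,
% containment) lowers governance weakness g and hence the loss term --
% two distinct channels, as the study describes.
```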
Governance investment can shrink or eliminate the deployment paradox. When firms reduce breach-loss magnitude, more AI deployment becomes economically viable. In the model, once governance reduces the loss environment below a key threshold, the standard relationship returns: stronger AI again supports broader deployment.
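In the toy sketch above, that threshold is explicit (an illustrative condition, not the paper's):

```latex
\[
\frac{\partial d^{*}}{\partial A} \;=\; 1 - 2\,g\,L\,A \;>\; 0
\quad\Longleftrightarrow\quad
g\,L \;<\; \frac{1}{2A},
\]
% so governance investment that pushes the weakness g (or the effective
% loss L) below this bound restores the usual pattern: stronger AI,
% broader deployment.
```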
Firms adopting AI agents may be tempted to pursue capability upgrades first and governance later. The author’s model suggests that sequence can be costly. If governance maturity lags, more capable AI may become less deployable. In some cases, firms may gain more from improving containment and access controls than from adopting a stronger model.
The findings also complicate simple regulatory fixes. Deployment caps and restrictions on high-risk uses may be necessary in some cases, but the study treats them as second-best tools. A more direct policy would tie broader AI deployment permissions to governance conditions, including segmentation, least-privilege rules, containment standards, and authority logging.
The paper also notes limits. The model is static and does not fully capture dynamic learning, strategic attackers, competitive pressure, or the full variety of AI applications inside a firm. It also does not model how AI itself might improve cybersecurity through anomaly detection or automated access enforcement. Still, the core finding remains: the value of AI capability depends on whether organizations can separate capability from excessive authority exposure.
FIRST PUBLISHED IN: Devdiscourse

