AI tools fuel surge in phishing, crypto scams and malware attacks

CO-EDP, VisionRI | Updated: 31-05-2025 09:23 IST | Created: 31-05-2025 09:23 IST
Representative Image. Credit: ChatGPT

The rapid proliferation of generative AI models has raised concerns over their dual-use potential, especially in cybersecurity. While these technologies can drive innovation and productivity, they are increasingly being leveraged for malicious purposes. A new study offers some of the strongest empirical evidence to date that the release of generative AI tools is associated with a measurable rise in cybercrime.

The peer-reviewed study, titled "Exposing the Impact of GenAI for Cybercrime: An Investigation into the Dark Side," evaluates the real-world consequences of widely accessible AI tools like ChatGPT, Claude, and others, particularly in relation to phishing attacks, malware development, and cryptocurrency-related scams. Through interrupted time series analyses of two large datasets comprising over 464 million malicious IP addresses and 281,000 crypto fraud reports, the study provides data-backed insights into how generative AI amplifies cybercriminal capabilities.

How does generative AI enable cybercrime?

The study applies two conceptual frameworks, affordance theory and technological amplification, to explain the mechanics of GenAI-fueled cyber threats. Affordance theory posits that technologies suggest their possible uses to users. In the case of GenAI, its capabilities to generate text, code, and images, and to mimic voices, are affordances that cybercriminals can exploit. Attackers can use these models to generate persuasive phishing emails, realistic websites, and malicious scripts with minimal effort or technical skill.

The theory of technological amplification adds another layer of concern. Technology does not create malicious intent, but it magnifies intent that already exists. GenAI serves as a force multiplier, boosting the efficiency and reach of bad actors by automating large-scale attacks. For instance, deepfake audio can impersonate corporate executives in fraud schemes, and code generation tools like WormGPT can produce custom malware without requiring advanced coding expertise.

According to the study, these systems enable real-time scam personalization and content generation that bypass traditional defenses. This creates a rapidly evolving threat landscape where novice attackers can execute complex cyberattacks, democratizing capabilities that were previously limited to experienced hackers.

What does the data say about the rise in AI-driven cybercrime?

The researchers conducted an interrupted time series analysis on two datasets: one from AbuseIPDB.com and another from Chainabuse.com. The goal was to examine cybercrime trends before and after the public release of ChatGPT (then powered by GPT-3.5) on November 30, 2022.
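To make the method concrete, the following is a minimal sketch of how such a segmented (interrupted time series) regression can be fit. The weekly series, cutoff date, and effect size below are simulated stand-ins for illustration, not the paper's data or specification.

```python
# Minimal sketch of an interrupted time series (segmented) regression.
# The data here are simulated; the paper's series comes from AbuseIPDB.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
weeks = pd.date_range("2021-01-04", periods=156, freq="W-MON")
# Hypothetical weekly incident counts: baseline level + gentle trend + noise.
counts = 5_000_000 + 10_000 * np.arange(156) + rng.normal(0, 50_000, 156)

cutoff = pd.Timestamp("2022-11-28")        # week of the ChatGPT release
post = (weeks >= cutoff).astype(int)       # level-change indicator
counts[post == 1] += 1_120_000             # simulated post-release jump

t = np.arange(len(weeks))                  # overall time trend
t_post = np.where(post == 1, t - post.argmax(), 0)  # post-release trend

X = sm.add_constant(np.column_stack([t, post, t_post]))
model = sm.OLS(counts, X).fit()
# Coefficients: const = baseline level, x1 = pre-release slope,
# x2 = immediate level change at the cutoff, x3 = change in slope after it.
print(model.params)
```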

In the first dataset, which tracked 464 million reports of malicious IP addresses, the analysis revealed a statistically significant increase in cybercrime immediately following the GenAI release. Specifically, there was a spike of approximately 1.12 million additional cyber incidents per week, with a minor, statistically insignificant decline in the growth rate afterward. The data also showed a marked increase in variance and unpredictability of attack patterns post-release.

The second dataset, focused on cryptocurrency-related scams, included over 281,000 reports. Results showed a similar pattern: a significant immediate jump of 721 additional weekly scam reports following the GenAI release, followed by a sustained upward trend over time. These findings suggest that the arrival of GenAI was associated not only with an initial spike in cybercrime but also with a long-term escalation, particularly in high-risk sectors like cryptocurrency.

Robustness checks using ARIMA models confirmed these trends and accounted for other variables like Bitcoin price fluctuations. Even after controlling for external economic factors, the correlation between AI release and cybercrime held strong.
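A rough sketch of that style of robustness check is shown below, assuming weekly scam-report counts, an aligned Bitcoin price series, and a post-release dummy are available as pandas Series. The (1, 1, 1) order is a placeholder, not the study's fitted specification.

```python
# Sketch of an ARIMA robustness check with an exogenous control.
# Series names and the model order are illustrative assumptions.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def arima_with_control(reports: pd.Series, btc_price: pd.Series,
                       intervention: pd.Series) -> None:
    """Fit an ARIMA on scam reports, controlling for BTC price and a
    post-release intervention dummy (1 after the GenAI launch, else 0)."""
    exog = pd.concat([btc_price, intervention], axis=1)
    model = SARIMAX(reports, exog=exog, order=(1, 1, 1))
    result = model.fit(disp=False)
    # A significant coefficient on the intervention dummy, after
    # controlling for price movements, supports the ITS finding.
    print(result.summary())
```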

What are the broader implications for AI governance and public safety?

The implications of these findings are significant for policymakers, technology developers, and cybersecurity professionals. The study underscores the urgent need for proactive governance of generative AI tools. Existing safeguards, such as hard-coded restrictions and content moderation in commercial LLMs, are easily bypassed through jailbreaking or use of uncensored, open-source models.

The decentralization of AI model development poses a formidable challenge to regulators. Over 1.7 million models are currently hosted on platforms like Hugging Face, many of which are trained with minimal oversight. The study calls for multi-layered regulatory strategies that include ethical AI development, secure-by-design principles, and international frameworks like the EU AI Act.

At the organizational level, there is a pressing need to enhance end-user awareness and defensive capabilities. The paper advocates for targeted cybersecurity training, especially in sectors most vulnerable to social engineering and phishing attacks. Vulnerable populations such as seniors and crypto investors must be equipped with tools to identify and mitigate AI-enhanced scams.

Additionally, the researchers stress the importance of developing defensive AI tools capable of identifying and countering malicious AI-generated content. Innovations such as AI-driven spam filters, voice authentication systems, and behavioral analytics could provide critical countermeasures.
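As a toy illustration of the "AI-driven spam filter" idea, the sketch below trains a simple text classifier on labeled messages. The examples and model choice are purely illustrative; production filters combine many more signals, such as headers, URLs, and sender reputation.

```python
# Toy sketch of a phishing filter: a bag-of-words classifier over
# labeled message text. Illustrative only; real defenses are layered.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = phishing, 0 = legitimate).
messages = [
    "Urgent: verify your wallet seed phrase to avoid suspension",
    "Team lunch moved to 1pm on Thursday",
    "Your CEO needs gift cards purchased immediately, reply ASAP",
    "Quarterly report attached for review",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)
print(clf.predict(["Confirm your account or funds will be frozen"]))
```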

FIRST PUBLISHED IN: Devdiscourse