Generative AI fuels sophisticated cyberattacks, putting users and businesses at risk

Phishing, one of the most persistent forms of cybercrime, has become far more dangerous in the era of artificial intelligence (AI). Researchers have now warned that generative AI (GenAI) is giving cybercriminals unprecedented capabilities to craft hyper-realistic phishing messages, bypass security filters, and manipulate human behavior at scale.
Published in the journal AI, the study, titled “Phishing Attacks in the Age of Generative Artificial Intelligence: A Systematic Review of Human Factors”, provides one of the most detailed examinations yet of how AI-driven phishing exploits human psychology and why organizations must urgently adapt their defenses to counter this fast-evolving threat.
How has generative AI transformed the phishing landscape?
Phishing has traditionally relied on simple deception techniques: emails riddled with spelling errors, generic messages demanding urgent action, or poorly formatted links. For decades, users were taught to spot these warning signs. However, the emergence of large language models (LLMs) like ChatGPT and other GenAI tools has fundamentally altered the game.
The study reveals that GenAI allows cybercriminals to generate flawless, context-aware phishing content that mimics the tone, style, and structure of legitimate communications. These messages can replicate corporate branding, use personal information scraped from online sources, and even mimic the writing style of a colleague or manager. The result is phishing emails and chat messages that are almost impossible to distinguish from authentic ones.
Additionally, AI automates what was once a labor-intensive process. Attackers can now produce thousands of personalized phishing emails in seconds, dramatically lowering the cost of operations while increasing success rates. Unlike older campaigns that cast a wide net, AI-driven attacks are targeted and precise, exploiting specific vulnerabilities with surgical accuracy.
AI can also generate deepfake audio and video messages, adding a new layer of deception to spear-phishing campaigns. These capabilities mean that even sophisticated detection systems are struggling to keep up with the volume and quality of AI-generated attacks.
Why are human factors at the center of this new threat?
While technical defenses have evolved, humans remain the primary target and the weakest link in the cybersecurity chain. The review analyzed dozens of case studies and psychological experiments, concluding that AI-generated phishing attacks are far more successful at manipulating users than traditional ones.
GenAI’s ability to customize messages based on psychological triggers, such as urgency, fear, trust, or authority, dramatically increases the likelihood of users clicking malicious links or sharing sensitive information. For instance, an AI-crafted phishing email may reference a real-life project deadline or use personal details mined from social media, making it far more convincing than generic scams.
Remote work environments were also found to have amplified the problem. With employees increasingly relying on digital communication, they are more exposed to deceptive messages, especially in settings where quick responses are expected. Cybercriminals exploit this pressure, crafting emails that appear to come from superiors or clients to extract confidential information or initiate fraudulent transactions.
Interestingly, even trained users are falling victim to AI-generated phishing. Traditional cybersecurity training often relies on spotting obvious red flags, but the sophistication of GenAI means those indicators no longer apply. The research suggests that security awareness programs must evolve to include exposure to AI-generated simulations so users can adapt to this new reality.
What can organizations do to counter AI-enhanced phishing?
The study offers several recommendations to counter the escalating threat. At the technological level, organizations must invest in AI-powered defensive tools capable of detecting patterns invisible to the human eye. These include advanced anomaly detection systems, behavioral analytics, and authentication mechanisms that can identify suspicious activity beyond the content of messages.
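To illustrate what detecting suspicious activity "beyond the content of messages" can look like in practice, the sketch below applies anomaly detection to message metadata rather than message text. It is a minimal illustration, not the study's method: the features (sender-domain age, link density, off-hours sending, Reply-To mismatch) and the use of scikit-learn's IsolationForest are assumptions chosen for the example.

```python
# Illustrative sketch: flag inbound emails whose metadata deviates from a
# baseline of legitimate traffic. Features and thresholds are assumptions,
# not taken from the study.
from dataclasses import dataclass
from sklearn.ensemble import IsolationForest

@dataclass
class EmailFeatures:
    sender_domain_age_days: int  # newly registered domains are a common red flag
    links_per_kb: float          # link density of the message body
    off_hours: int               # 1 if sent outside the recipient's usual hours
    reply_to_mismatch: int       # 1 if Reply-To differs from the From address

def to_vector(f: EmailFeatures) -> list[float]:
    return [f.sender_domain_age_days, f.links_per_kb, f.off_hours, f.reply_to_mismatch]

# Baseline of legitimate traffic (placeholder values for the sketch).
baseline = [
    EmailFeatures(3650, 0.2, 0, 0),
    EmailFeatures(2100, 0.1, 0, 0),
    EmailFeatures(5000, 0.3, 1, 0),
    EmailFeatures(1200, 0.0, 0, 0),
]
model = IsolationForest(contamination=0.1, random_state=0)
model.fit([to_vector(f) for f in baseline])

# Score an incoming message; -1 means the model considers it anomalous.
suspect = EmailFeatures(sender_domain_age_days=3, links_per_kb=4.5,
                        off_hours=1, reply_to_mismatch=1)
score = model.decision_function([to_vector(suspect)])[0]
if model.predict([to_vector(suspect)])[0] == -1:
    print(f"Flag for review (anomaly score {score:.3f})")
```

In a real deployment the baseline would be learned from an organization's own mail traffic, and flagged messages would feed behavioral-analytics and authentication checks rather than being blocked outright.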
However, technology alone is not enough to address this threat. Human-focused interventions, as the authors stress, are crucial. Organizations should implement continuous training programs that simulate AI-generated phishing attempts, helping employees recognize subtle cues and avoid falling victim. Building a culture of vigilance, where employees feel empowered to report suspicious activity without fear of repercussions, is essential.
The research also calls for stronger policies and regulations to prevent the misuse of generative AI. This includes frameworks that ensure AI development aligns with ethical standards, as well as international collaboration to track and mitigate AI-powered cybercrime. The authors emphasize that governments, tech companies, and cybersecurity experts must work together to develop proactive strategies that keep pace with evolving threats.
The study urges organizations to adopt a multi-layered security approach. This means combining technological defenses, human awareness, and policy measures to build resilience. Relying on any single layer, whether it’s firewalls, training, or regulations, will not be enough to address the complexity of AI-driven phishing.
FIRST PUBLISHED IN: Devdiscourse