Artificial intelligence (AI) has supercharged social engineering. Management consulting firm McKinsey & Company reported a 1,200% surge in phishing attacks worldwide since the rise of generative AI in the latter half of 2022. And it’s not just the number of attacks that’s climbing; it’s also the success rate. Arctic Wolf’s Human Risk Behaviour Snapshot: 2nd Edition reveals that nearly two-thirds of IT and security leaders self-reported falling for a phishing attempt. And IBM reports that AI-powered spear phishing attacks achieve a 47% success rate even against trained security experts.
It’s not just phishing, either. Threat actors are leveraging artificial intelligence to enhance everything from business email compromise (BEC) to ransomware attacks. And organisations are taking note: for the first time, Arctic Wolf’s The State of Cybersecurity: 2025 Trends Report found that ransomware had been dethroned as the principal concern of security leaders. The new champion? Artificial intelligence.
But, as security experts know well, understanding a threat is the first step to defeating it. With phishing serving as a frequent entry point into an environment, and the FBI reporting that BEC attacks cost organisations a collective $2.77 billion (USD) in 2024, it’s time to take a closer look at how AI is enhancing these two forms of social engineering, and how your organisation can defend itself against these latest evolutions.
What Is AI-Enhanced Phishing?
Threat actors are leveraging artificial intelligence, particularly generative models like GPT-style chatbots and image creation and manipulation tools, to create more convincing and scalable phishing campaigns. Traditionally, phishing has been easier to spot thanks to telltale grammar mistakes and generic language. AI, however, allows these attacks to be hyper-personalised, free of spelling and usage errors, and capable of adapting in real time to a target’s responses and inputs.
How Does AI-Enhanced Phishing Work?
AI-enhanced phishing attacks tend to follow the same set of steps:
1. Reconnaissance via AI-powered data mining: Threat actors feed publicly available information like social media posts and LinkedIn job histories into large language models (LLMs) to generate detailed, unique profiles of targets.
2. AI-generated content: Generative artificial intelligence (GenAI) is leveraged to craft individualised phishing emails, smishing texts, and/or direct messages based on these unique profiles. Thanks to that profiling and the adaptability of AI, these messages can closely mimic the tone, style, and rhythms of actual users.
3. Adaptive conversation loops: Threat actors use AI to automate — and increase the efficacy of — conversations with targets, allowing for real-time responses and the subtle adjustment of persuasion tactics based on keyword reactions.
4. Infostealer deployment and/or credential harvesting: Once the phishing link is clicked, AI-generated or enhanced malware gets to work harvesting credentials. Or a user might be taken to a spoofed landing page, also created with the help of AI.
What AI-Enhanced Phishing Looks Like in the Real World
In September 2025, cybercriminals attempted to use AI-generated code to obscure a phishing payload hidden inside an SVG file, in an effort to evade traditional security filters. The complexity and structure of the code led Microsoft threat intelligence researchers to conclude it had likely been created by an LLM. The phishing email itself mimicked a file-sharing notification, which increased its credibility and likelihood of success. Additionally, the sender and recipient email addresses matched, with the actual target addresses hidden in the BCC field to sidestep detection heuristics. Despite this sophistication, Microsoft Defender for Office 365 detected and blocked the campaign using AI-powered protection systems, highlighting the power of artificial intelligence for both threat actors and defenders.
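The self-addressed pattern described above is one of the simpler signals a defender can codify. Below is a minimal sketch in Python, using only the standard library’s email module, that flags messages where the visible sender and recipient match while the real targets hide in the delivery envelope. The function name and matching rules are illustrative assumptions, not Microsoft’s actual detection logic.

```python
from email import message_from_string
from email.utils import getaddresses

def flags_hidden_recipients(raw_message: str, envelope_rcpts: list[str]) -> bool:
    """Flag mail where From == To and the real targets are hidden (e.g., in BCC).

    A self-addressed header whose actual delivery targets never appear in
    To/Cc is a common trick for sidestepping recipient-based heuristics.
    This is one weak signal, not a verdict on its own.
    """
    msg = message_from_string(raw_message)
    from_addrs = {a.lower() for _, a in getaddresses(msg.get_all("From", []))}
    to_addrs = {a.lower() for _, a in getaddresses(msg.get_all("To", []))}
    cc_addrs = {a.lower() for _, a in getaddresses(msg.get_all("Cc", []))}

    # Signal 1: the message is addressed from a mailbox to itself.
    self_addressed = bool(from_addrs) and from_addrs == to_addrs

    # Signal 2: the envelope recipients (where the mail actually goes)
    # never appear in the visible To/Cc headers.
    hidden = [r for r in envelope_rcpts if r.lower() not in to_addrs | cc_addrs]

    return self_addressed and bool(hidden)
```

In practice, a check like this would feed a broader scoring pipeline alongside content and sender-reputation signals rather than block mail on its own.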
How To Defend Against AI-Enhanced Phishing
The old methods of checking for spelling errors and asking for voice or video confirmation of requests can no longer be relied upon. Defending against this new evolution of AI-enhanced phishing requires an adaptive, multilayered security strategy.
- Enhance your training: Ensure your security awareness training includes phishing simulations with AI-generated lures, so users practise against realistic content and learn to resist emotional triggers.
- Deploy behaviour-based detection: AI can and should be leveraged by defenders, too. It can help detect anomalous writing styles, unnatural communication frequency, or unusual financial requests that slip past human review (see the sketch after this list).
- Implement strict verification protocols: For any high-risk financial or operational request, like a wire transfer or access to software, rely on in-person verification whenever possible, as threat actors can spoof an executive’s voice and likeness. When in-person verification is not possible, reach out to the executive through protected channels like company messaging systems.
- Adopt phishing-resistant authentication: FIDO2-based multi-factor authentication (MFA) or passkeys that use biometrics can help reduce credential theft risks.
- Monitor for infostealers: Since many AI-enhanced phishing attacks deliver credential-harvesting malware, 24×7 endpoint monitoring is essential.
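To make the behaviour-based detection bullet concrete, here is a deliberately simplified sketch: it baselines a sender’s historical messages on a few crude stylometric features and flags new messages that drift too far. The feature set and the z-score cutoff are illustrative assumptions; commercial detections use far richer models.

```python
import statistics

def style_features(text: str) -> list[float]:
    """Crude stylometric features: average word length, average sentence
    length, and punctuation density."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    punct = sum(text.count(c) for c in ",;:-")
    return [
        sum(len(w) for w in words) / max(len(words), 1),
        len(words) / max(len(sentences), 1),
        punct / max(len(text), 1),
    ]

def is_style_anomalous(history: list[str], new_message: str, z_cutoff: float = 3.0) -> bool:
    """Flag a message whose style deviates sharply from the sender's baseline."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    baseline = [style_features(m) for m in history]
    for i, value in enumerate(style_features(new_message)):
        column = [features[i] for features in baseline]
        mean = statistics.mean(column)
        stdev = statistics.stdev(column) or 1e-9  # avoid division by zero
        if abs(value - mean) / stdev > z_cutoff:
            return True
    return False
```

A real deployment would combine style drift with metadata signals such as send time, device, and geography before raising an alert.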
What Is AI-Enhanced Business Email Compromise?
In an AI-enhanced business email compromise (BEC) attack, threat actors leverage GenAI to impersonate an organisation’s executives, vendors, or partners and convince users to transfer funds or share confidential information. Traditional BEC relies on exploiting trust and respect for authority; AI enhances these attacks by letting threat actors scale and personalise them while imitating internal communication styles, down to project names and the individual writing quirks of the people being impersonated.
How Do AI-Enhanced BEC Attacks Work?
These highly tailored, authority-driven attacks target specific users, typically those in finance or HR, by impersonating high-level members of the organisation who request a transfer of funds or a change to account information. The typical attack chain includes:
1. Reconnaissance: Threat actors collect executive profiles from speaking engagements and social media accounts, record voices from earnings calls, and discover organisational structure, departmental hierarchies, and vendor histories using publicly available information (often referred to as open-source intelligence, or OSINT). They may also search the dark web for previously compromised credentials that allow them to access an actual user’s inbox from which to send their messages, increasing the likelihood of the target falling for the attack.
2. Style and voice cloning: AI models are then trained on the target organisation’s executive communications, including tone, punctuation quirks, and spoken voice patterns.
3. AI-generated trigger email: The gathered information is leveraged to generate a highly contextualised email, one that often references real projects or ongoing financial milestones, which is then sent to the target.
4. Dynamic persuasion loops: If the target hesitates at the initial request, threat actors utilise follow-up emails or deepfake audio and video to provide false “confirmation” of the request.
5. Funds transfer: In most BEC attacks, the end goal is the fraudulent transfer of funds, most commonly disguised as vendor payments or executive requests.
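One reconnaissance-stage artefact defenders can check cheaply is the sender’s domain: BEC lures often arrive from lookalike domains a character or two away from a legitimate vendor or executive domain. The sketch below flags near-matches with a simple edit-distance check; the trusted-domain list and threshold are illustrative assumptions, and this does nothing against compromised genuine inboxes, which need the behavioural controls described elsewhere in this piece.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = {"example.com", "trusted-vendor.com"}  # illustrative list

def is_lookalike(sender_domain: str) -> bool:
    """Flag domains that are close to, but not exactly, a trusted domain."""
    sender = sender_domain.lower()
    if sender in TRUSTED_DOMAINS:
        return False
    return any(edit_distance(sender, d) <= 2 for d in TRUSTED_DOMAINS)
```

For example, "examp1e.com" sits one substitution away from "example.com" and would be flagged, while the genuine domain passes untouched.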
Experience a business email compromise attack safely in our on-demand webinar, which demonstrates the impact of a simulated BEC attack on an organisation.
What AI-Enhanced Business Email Compromise Looks Like in the Real World
A major U.K.-based engineering firm fell victim to a sophisticated AI-enhanced attack in 2024 that not only used AI to create a convincing request but also incorporated deepfake technology to further trick the victim. A finance manager received a message from what appeared to be the firm’s CFO, complete with accurate references to ongoing projects. Thanks to generative AI, the highly contextualised email was able to mimic the phrasing, tone, and style of the CFO. Even so, the finance manager requested verbal confirmation of the request, as users are trained to do. But the threat actor was ready for this. Using AI deepfake technology, they created a digital clone of one of the firm’s senior managers, and then sent the target a virtual meeting link, where the request was “confirmed.” This led to a $25 million (USD) fraudulent wire transfer.
How To Defend Against AI-Enhanced Business Email Compromise Attacks
AI has clearly enhanced the efforts of threat actors, but it also offers defenders powerful detection and automation options. By combining technical controls, verification discipline, and effective security awareness training leading to proactive human response, the risk of a successful BEC attack can be greatly reduced.
- Create advanced approval processes: Require multi-person approval and delayed transfer windows for large transfers and vendor account changes (see the sketch after this list).
- Leverage AI: Defenders can use the same technology to thwart AI-enhanced BEC attacks by using AI-powered anomaly detection to identify any irregularities in messaging tactics or atypical financial flows.
- Require out-of-band verification: Mandate that no request is confirmed using only the links or phone numbers provided in the email itself; verification must go through independently sourced contact details.
- Enhance security awareness training: Ensure your program and/or solution provider train on AI-enhanced tactics like voice and video deepfakes.
- Establish rapid response procedures: Minimise financial exposure by initiating bank recall procedures for any unauthorised transfer that slips through.
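As an illustration of the first bullet in this list, here is a minimal sketch of a payment-release gate that combines multi-person approval with a delayed transfer window. The threshold, hold window, and two-approver rule are illustrative assumptions; in practice these controls live in your ERP or payment platform rather than standalone code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

HIGH_VALUE_THRESHOLD = 10_000      # illustrative: transfers above this are gated
HOLD_WINDOW = timedelta(hours=24)  # illustrative delayed-release window
REQUIRED_APPROVERS = 2             # illustrative multi-person rule

@dataclass
class TransferRequest:
    amount: float
    requested_at: datetime
    approvers: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        self.approvers.add(approver)

    def releasable(self, now: datetime) -> bool:
        """A high-value transfer is released only after enough distinct
        approvers have signed off AND the hold window has elapsed,
        giving a fraud team time to intervene."""
        if self.amount < HIGH_VALUE_THRESHOLD:
            return True
        window_elapsed = now - self.requested_at >= HOLD_WINDOW
        return len(self.approvers) >= REQUIRED_APPROVERS and window_elapsed
```

The point of the delay is that even a fully convincing deepfake “confirmation” buys the attacker nothing on its own: the transfer still waits out the hold window, during which a second approver or the fraud team can intervene.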
Outfit Your Organisation To Overcome AI-Enhanced Social Engineering
Every October is Cybersecurity Awareness Month. This year, Arctic Wolf shone a spotlight on the dangers of AI-enhanced social engineering and offered solutions to help organisations find a safer path through the hazards of human risk.
As organisations expand their attack surface (cloud, IoT, hybrid work, etc.), employees have become an even more tempting social engineering target for threat actors looking for access, data, or funds. But organisations, and their employees, don’t have to traverse this new territory alone. Enable your employees and leaders to stay safe online, create a culture of security, and actively reduce human risk with the insights and expertise offered in our on-demand Cybersecurity Awareness Month Summit.

