Since its introduction into cybersecurity in the late 1980s as a tool for detecting unusual activity, artificial intelligence (AI) has grown in popularity and functionality, with a major surge of adoption in the past few years thanks to its growing ability to perform tasks faster and more accurately than humans. However, AI has never operated in isolation; it has always relied on human input. And any advanced technology that requires human input can be used for both good and bad.
While current large language models (LLMs) already have impressive programming abilities, they still have significant limitations. For example, they haven’t yet sparked major breakthroughs in creating new threat actor initial access techniques or strategies. That said, generative AI is advancing at an unprecedented rate. In just a few years, we’ve seen remarkable progress in generating text, audio, images, and video.
One question, however, has remained: when will AI cause a major shift in the way cybercriminals carry out attacks? The answer is, it's already happening. AI has made it easier to write code, develop malware, create convincing phishing schemes, and automate attacks. While AI's limited ability to reason logically has so far kept it from driving significant advances in attack methods, it has made attacks more sophisticated and scalable, and it has increased the speed at which threat actors can operate.
Writing functional code, particularly for complex projects, requires strong logical reasoning. This is an area where current AI still falls short. However, based on current trends in AI development, more advanced reasoning is likely just around the corner.
Security teams need to be proactive, understanding both how they can use AI to strengthen their defenses and how malicious actors are — or soon will be — using it against them.
Current Malicious AI: Chatbots, Phishing, and Deepfakes
Social engineering offers a cheap and effective way for threat actors to bypass technological defenses, and new AI tools — particularly generative AI — are making it easier to execute even more effective attacks.
Chatbots
On November 30, 2022, OpenAI released ChatGPT, marking the first time the general public became aware of the power of chatbots.
The most sophisticated chatbots train their models using a method called “reinforcement learning from human feedback” (RLHF), which helps them better mimic human conversation and provide more accurate, relevant responses.
Thanks to this training, chatbots can perform a wide range of tasks, from writing and debugging code to offering technical advice and answering complex questions. They can even craft stories and comedy sketches. However, chatbots are not perfect; they can still produce incorrect or confusing information.
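To make the "human feedback" part of RLHF concrete, the toy sketch below fits a reward model from pairwise preferences, which is the core idea behind that training step. Everything here is synthetic and simplified for illustration: the feature vectors and the hidden "rater preference" vector are made up, and real systems train neural reward models on full chatbot outputs rather than eight-dimensional vectors.

```python
# Toy sketch of the pairwise-preference objective used to train RLHF reward models.
# All data is synthetic: responses are random feature vectors, and a hidden
# "rater preference" vector stands in for human judgments.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
w_true = rng.normal(size=dim)  # hidden preferences of the human raters (illustrative)

# Build (chosen, rejected) pairs: for each pair of candidate responses,
# the one the "rater" scores higher is the chosen response.
pairs = []
for _ in range(200):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    chosen, rejected = (a, b) if w_true @ a > w_true @ b else (b, a)
    pairs.append((chosen, rejected))

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

# Fit a linear reward model with the Bradley-Terry loss: -log sigmoid(r(chosen) - r(rejected)).
w = np.zeros(dim)
lr = 0.1
for _ in range(20):
    for chosen, rejected in pairs:
        margin = w @ chosen - w @ rejected
        w += lr * (1.0 - sigmoid(margin)) * (chosen - rejected)

wins = sum((w @ c) > (w @ r) for c, r in pairs)
print(f"reward model agrees with the 'human' ranking on {wins}/{len(pairs)} pairs")
```

In production RLHF pipelines, the reward model is itself a large neural network, and a separate reinforcement learning step then tunes the chatbot to produce responses that score well under that learned reward.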
Given their ability to write and debug code, it was only a matter of time before people began using chatbots for malicious purposes. In June 2023, it was revealed that the ChatGPT API could be used to create polymorphic (or “mutating”) malware—malicious software that can change its appearance in ways that help it avoid detection by security systems, such as endpoint detection and response (EDR) tools.
Phishing
The latest phishing threats to reach organizations worldwide are being built on the back of artificial intelligence (AI) models that generate highly accurate and realistic messages more likely to fool people. These enhanced social engineering attacks can lead to ransomware or business email compromise (BEC) attacks, which are already a major cause of cybercrime damages.
Take BEC attacks, for example. The FBI’s most recent Internet Crime Report revealed there were 21,489 BEC complaints with adjusted losses over $2.9B (USD). According to a recent survey of 1,000 global IT and security leaders conducted by Arctic Wolf®, 70% of organizations reported being targeted by BEC attacks in the past 12 months. Of those, only 21% were able to thwart the attack.
While BEC attacks traditionally target financial institutions and users who have access to the purse strings — think a CEO suddenly emailing the CFO about a wire transfer or a salesperson requiring the urgent purchase of gift cards — threat actors are branching out, utilizing AI to help them more effectively strike manufacturers, schools, and more.
Generative AI helps threat actors eliminate one of the most reliable indicators of a BEC attack: templated emails filled with spelling and grammatical errors. AI can quickly generate customized, error-free email communications that stand a much better chance of slipping past security safeguards and fooling your employees.
But AI is not solely being leveraged to fix spelling and grammar mistakes. Security researchers have already demonstrated success with manipulating chatbots into impersonating a Microsoft employee through a technique known as indirect prompt injection, which could allow for the generation of phishing messages appearing to come from a legitimate internal user. Additionally, scripting and automation tools like autonomous agents can be leveraged to automate many phishing steps, from target selection and public data research to wide-scale phishing message delivery.
Deepfakes and Vishing
Generative AI is lowering the bar to entry for crafting convincing deepfakes — manipulated video, image, or audio recordings created with AI — that increase the effectiveness of phishing attacks.
Voice phishing (or vishing) is growing as a threat thanks to AI, with adversaries masquerading as employees and targeting call centers, help desks, and other departments that interact remotely and can grant access (e.g., via password recovery/reset flows). Plus, even live video feeds can be manipulated to make attackers look and sound like legitimate employees. Today’s deepfake tools require only a few still photographs — which are easily sourced from LinkedIn or a team member’s public social media presence.
Without a reliable mechanism for remote identity verification — security questions don’t count, but offline hardware keys do — anyone providing remote assistance will remain an attractive target to threat actors able to leverage AI.
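To illustrate why hardware-backed verification resists this kind of impersonation, the sketch below shows a challenge-response check in which the signature is bound to both a one-time challenge and the requesting origin. It is a toy built on generic ECDSA from the Python `cryptography` package, not a production FIDO2/WebAuthn implementation, and the origin strings are placeholders.

```python
# Toy illustration of origin-bound, key-based identity verification (not production
# FIDO2/WebAuthn): the verifier issues a fresh challenge, the employee's hardware key
# signs challenge + origin, and assertions produced for the wrong origin are rejected.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the key pair lives on the employee's device; only the public key is stored.
private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = private_key.public_key()

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    # Binding the signature to the origin is what defeats a deepfaked caller or
    # look-alike site: a relayed assertion will not verify for the real origin.
    return private_key.sign(challenge + origin.encode(), ec.ECDSA(hashes.SHA256()))

def verify_assertion(challenge: bytes, origin: str, signature: bytes) -> bool:
    try:
        registered_public_key.verify(signature, challenge + origin.encode(),
                                     ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)  # fresh per request, never reused
assertion = sign_assertion(challenge, "https://helpdesk.example.com")
print(verify_assertion(challenge, "https://helpdesk.example.com", assertion))   # True
print(verify_assertion(challenge, "https://helpdesk.evil.example", assertion))  # False
```

The design point is that possession of the registered key, not anything the caller says or looks like on video, is what the verifier trusts.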
Future Malicious AI: Novel Vulnerabilities and Defender Evasion
Malicious actors can already use artificial intelligence to find existing vulnerabilities in an environment. Soon, we expect improvements in AI's reasoning capabilities to enable meaningful advances in threat actor tactics, techniques, and procedures (TTPs), including the discovery of novel vulnerabilities.
Recent progress in large language models (LLMs) has already led to notable improvements in this area. For example, OpenAI’s o1-preview model, although still being tested, has shown promise in areas like mathematics, physics, chemistry, and formal logic. As LLMs become better at understanding how data flows through applications, they are expected to help uncover new vulnerabilities or link existing ones in ways that are harder for humans to replicate.
In fact, we are already seeing the emergence of open-source tools that use AI models to find zero-day vulnerabilities in Python code. These tools can even trace the full path of a vulnerability, from user input all the way to server output, demonstrating the increasing capabilities of AI in the cybersecurity field.
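What such tooling might look like in practice: the sketch below sends Python source to an LLM and asks it to trace each finding from user-controlled input to the dangerous sink. It is a generic illustration rather than any specific open-source scanner; the OpenAI client, model name, and prompt wording are assumptions, and any findings would still need human triage.

```python
# Illustrative sketch of LLM-assisted code review for Python (not a specific scanner).
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Review the following Python code for security vulnerabilities. "
    "For each finding, trace the path from user-controlled input to the dangerous sink, "
    "name the weakness (e.g., SQL injection, command injection), and suggest a fix."
)

def review_source(source_code: str) -> str:
    # The model name is a placeholder; any capable code-review model could be substituted.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a security-focused code reviewer."},
            {"role": "user", "content": f"{PROMPT}\n\n```python\n{source_code}\n```"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Deliberately vulnerable example: user input flows into a shell command.
    snippet = 'import os\n\ndef run(user_input):\n    os.system("ping " + user_input)\n'
    print(review_source(snippet))
```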
Soon, both penetration testers and cybercriminals may increasingly rely on advanced LLMs in their work. This is a double-edged sword, however: while these models can help developers write more secure code and improve organizational defenses, they can also be used for malicious purposes.
The warning signs are already visible. In 2024, OpenAI discovered a group of ChatGPT accounts being used for scripting and vulnerability research, traced to three threat actors with suspected ties to nation-states.
This is likely just the beginning of how state-backed hackers are experimenting with LLMs for their vulnerability research. As with any new technique, it won’t remain secret for long — successful methods tend to spread quickly across the cybercrime world. As these tactics become more accessible and affordable, they will be integrated into more threat actor toolkits and used with less caution.
How To Stay Safe From AI Cyber Attacks
The risks of AI-enhanced cyber attacks are clear. The good news? The same technology being leveraged by threat actors can be utilized by IT and security leaders to protect their environments better and more proactively.
According to The Human — AI Partnership, a report from Arctic Wolf, a resounding 64% of respondents indicated that their organization is highly likely to adopt an AI-centric solution. This enthusiasm is grounded in these decision makers’ belief in AI’s potential to improve their overall security posture, particularly in the areas of “security data analysis” and “providing greater visibility” within their environment. Among these professionals, 32% believe AI will enhance their ability to detect threats, 21% see a potential to automate response and recovery actions, and another 21% hope to automate an analyst’s time-consuming repetitive tasks.
Yet despite the curiosity and enthusiasm, respondents also expressed hesitations around AI adoption. Our research found that less than a quarter (22%) of organizations plan to dedicate a majority of their cybersecurity budget to AI-powered solutions, and only about a third (36%) see adding AI-centric solutions as a top priority for improving their cybersecurity readiness. This is likely due to the steep learning curve surrounding this new technology, a particularly challenging hurdle to overcome given the chronic lack of available security staff and budget constraints.
The most resilient cybersecurity strategies will be forged through a strategic human-AI partnership. This collaborative approach, leveraging the strengths of both entities, not only enhances threat detection and response capabilities but also positions organizations to navigate the evolving cybersecurity landscape with adaptability and resilience. The future of AI in cybersecurity is one where human intelligence guides and refines the immense potential of artificial intelligence, creating a formidable defense against the possible devastation of the next phase of emerging threats.
However, it’s not all about AI. There are plenty of actions organizations can take now, often using existing tools, talent, and technology, to better protect themselves against advanced AI cyber attacks.
Better Protection Now
To guard against phishing and other social engineering attacks, whether they use AI or not, build a culture of security through robust security awareness training that forgoes assigning blame, leverages real-world attack scenarios, and encourages retention through brief lessons focused on a single topic each time.
Additionally, the implementation of email and identity controls, including the restriction of external emails, use of security products from vendors like Mimecast, and enforcement of modern, phishing-resistant multi-factor authentication (MFA), can aid your users in recognizing, neutralizing, and avoiding phishing attacks altogether.
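As a small example of what an email control can look like in code, the sketch below parses a message's Authentication-Results header and flags failures of SPF, DKIM, or DMARC, as well as messages that claim an internal sender without matching DMARC alignment. Header formats vary by mail provider and the internal domain is a placeholder, so treat this as an illustration of the idea rather than a drop-in filter.

```python
# Minimal sketch of an inbound email check based on the Authentication-Results header.
# Header formats vary by provider; the internal domain below is a placeholder.
from email import message_from_bytes
from email.message import Message

INTERNAL_DOMAIN = "example.com"  # replace with your own domain

def flag_suspicious(raw_message: bytes) -> list[str]:
    msg: Message = message_from_bytes(raw_message)
    findings = []

    # Collect all Authentication-Results headers added by the receiving mail system.
    auth_results = " ".join(msg.get_all("Authentication-Results", [])).lower()
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=pass" not in auth_results:
            findings.append(f"{check} did not pass")

    # Flag mail that claims to be internal but lacks matching DMARC alignment.
    sender = (msg.get("From") or "").lower()
    if INTERNAL_DOMAIN in sender and f"header.from={INTERNAL_DOMAIN}" not in auth_results:
        findings.append("claims an internal sender without matching DMARC alignment")

    return findings

if __name__ == "__main__":
    with open("suspect.eml", "rb") as fh:  # hypothetical saved message
        print(flag_suspicious(fh.read()) or ["no obvious authentication red flags"])
```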
Better Protection Tomorrow
The implementation of network segmentation and the adoption of the principle of least privilege (PoLP) can isolate systems and limit a threat actor’s ability to escalate their attack, should they successfully achieve initial access.
Leverage your vulnerability management program to conduct regular audits against known vulnerabilities and mitigate gaps where they exist, and run penetration tests and tabletop exercises to identify weak spots and areas of low visibility so you're better protected against future vulnerabilities.
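Part of that audit can be automated against the public NVD feed. The sketch below queries the NVD 2.0 API by product keyword; the endpoint and field names follow the published schema as we understand it, and a real implementation would add an API key, rate limiting, and matching against your actual software inventory rather than the hypothetical product list shown here.

```python
# Hedged sketch: look up recent CVEs for a product keyword via the public NVD 2.0 API.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves_for(product_keyword: str, limit: int = 5) -> list[dict]:
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": product_keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    findings = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item.get("cve", {})
        metrics = cve.get("metrics", {}).get("cvssMetricV31") or [{}]
        findings.append({
            "id": cve.get("id"),
            "base_score": metrics[0].get("cvssData", {}).get("baseScore"),
        })
    return findings

# Hypothetical inventory entries checked against the feed.
for product in ["openssl", "apache http server"]:
    print(product, recent_cves_for(product))
```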
Partner with a security operations solutions provider to ensure you have 24×7 real-time monitoring of your entire environment, including endpoint, network, identity, and cloud, so that you can detect and stop threat actor activity faster.
For more tips on protecting against AI-enhanced cyber attacks now and in the future, download The Human — AI Partnership, and don’t miss our webinar, How AI Impacts the Future of Cybersecurity, now available on demand.