The modern threat landscape is an ever-evolving battlefield of innovation and escalation. Thanks to the rapid adoption of artificial intelligence, both attackers and defenders now have powerful new tools at their disposal. But who has the edge when it comes to the artificial intelligence (AI) arms race?
Unsurprisingly, the answer is complicated.
What is clear is that AI is both accelerating the pace of cybercrime and improving the odds for organizations that embrace robust security operations, not just a stack of tools. On one side, attackers are leveraging AI to scale and improve social engineering, automate reconnaissance, and industrialize credential theft. Not to be outdone, defenders are using AI to triage alerts, correlate signals across noisy environments, and speed response.
- AI has overtaken ransomware as the top concern for security leaders, according to The Arctic Wolf State of Cybersecurity: 2025 Trends Report
- Synthetically generated text in malicious emails has doubled over the past two years, according to Verizon’s 2025 Data Breach Investigations Report
- There was an 84% increase in infostealers delivered via phishing emails year-over-year, according to the IBM 2025 X-Force Threat Intelligence Index
In short, the AI arms race isn’t about who’s using artificial intelligence. It’s about who turns AI into repeatable, measurable outcomes the fastest.
What is AI in Cybersecurity?
In practice, AI in security programs falls into three main categories:
Machine Learning (ML)
Models trained to classify events, detect anomalies, score risk, and reduce noise (e.g., “Is this login suspicious?”).
Generative AI (GenAI)
Large language models (LLMs) and multimodal models that can create text, images, audio, and code. GenAI is used for both defensive automation like summaries and investigation support, and offensive enablement like phishing, impersonation, and malware development.
AI-Adjacent Infrastructure Risk
The tooling, data pipelines, and integrations organizations deploy to enable AI, often at a rate faster than governance and security keep up — echoing the issues organizations faced with the race toward cloud adoption.
The rapid adoption of AI is expanding attack surfaces and placing new pressures on security teams to secure the entire AI pipeline.
What Are the Benefits of AI in Cybersecurity?
AI can materially improve security outcomes, especially for organizations drowning in alerts, telemetry, and tool sprawl. Here are a few of the major ways:
Faster Detection and Triage
In noisy environments, AI can help prioritize what matters most by correlating weak signals into stronger stories.
Better Consistency at Scale
Humans are excellent investigators, but they are also inconsistent under fatigue and time pressure. AI-driven enrichment and guided workflows can make investigations more repeatable, helping to reduce variance in the decisions that matter, like whether to isolate and contain an endpoint.
Time Reclaimed for High-Value Work
AI can reduce the routine toil of security teams by providing alert deduplication, enrichment, and first-pass classification, giving them back precious time to work on high-value tasks like hardening the identity environment, remediating exposures, and running tabletop exercises.
Increased Capacity for Understaffed Teams
Most organizations can’t hire their way to 24×7 in-house monitoring, detection, and response. AI that is embedded into security operations and managed by an external team of trained experts can expand capacity and improve practical resilience without increasing internal headcount.
What Are the Challenges of AI in Cybersecurity?
AI introduces new risks and amplifies old ones by accelerating attacker capability, expanding the enterprise attack surface, increasing human-driven data exposure, and creating new layers of operational and governance complexity for security teams.
AI is Supercharging Social Engineering
Threat actors can now generate phishing emails free of broken grammar or syntax errors. Impersonation no longer requires extensive research or deep expertise. As mentioned above, Verizon’s finding that synthetically generated text in malicious emails has doubled is a clear signal that AI is improving both the speed and the efficacy of social engineering.
“Shadow AI” is Creating Data Leaks and Compliance Exposure
The Arctic Wolf 2025 Human Risk Report found that 80% of IT leaders and 63% of all employees are using GenAI for work. The issue is that 60% of those leaders and 41% of employees admit to feeding these GenAI tools confidential organizational data. This is leading to a widening gap between perceived resilience and actual exposure, making the use of shadow AI one of the most unpredictable variables in modern cybersecurity.
Overconfidence is Increasing Risk
According to our Human Risk Report, 75% of IT leaders believe their organization is safe. Yet nearly two-thirds of those same IT leaders self-reported that they’ve clicked on a malicious link, and 20% admit to not reporting it. When leaders are overconfident in their defenses while, at the same time, overlooking how they and their employees are using the technology and the rate at which they’re falling for scams, it creates the perfect conditions for mistakes to become breaches.
AI Systems Can Become New Attack Surfaces
AI pipelines involve data ingestion, model hosting, integrations, plugins, agents, and access tokens. As organizations rush to adopt artificial intelligence, governance often lags, creating opportunities for credential theft, prompt injection, data poisoning, and the abuse of privileged integrations.
How Are Security Teams Using AI?
Defenders are applying AI where it reduces friction and increases speed. This is especially true in security operations, where alert fatigue, talent shortages, and tool sprawl continue to strain in-house capacity. Rather than treating AI as a standalone capability, mature programs are embedding it directly into detection, triage, investigations, and response workflows. The objective isn’t automation for its own sake; it’s measurable operational efficiency that reduces mean time to detect (MTTD), mean time to respond (MTTR), and analyst burnout while maintaining investigative rigor. Here are some specific ways organizations are achieving this:
Alert Reduction and Prioritization
Machine learning models help prioritize high-risk signals from vast telemetry streams, cluster related alerts into incidents, suppress known-benign patterns, and highlight suspicious chains. When incorporated effectively into security operations, AI can help reduce both MTTD and MTTR.
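To make this concrete, here’s a minimal sketch of the kind of alert clustering described above. The field names (`host`, `rule`, `ts`) and the five-minute window are illustrative assumptions, not any particular product’s schema:

```python
from collections import defaultdict

def cluster_alerts(alerts, window_seconds=300):
    """Group related alerts into candidate incidents.

    Alerts sharing the same (host, rule) that fire within
    `window_seconds` of the previous alert in the cluster are
    merged, so analysts see one incident instead of N duplicates.
    Schema is hypothetical: each alert is {"host", "rule", "ts"}.
    """
    by_key = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        by_key[(alert["host"], alert["rule"])].append(alert)

    incidents = []
    for (host, rule), group in by_key.items():
        current = [group[0]]
        for alert in group[1:]:
            if alert["ts"] - current[-1]["ts"] <= window_seconds:
                current.append(alert)
            else:
                incidents.append({"host": host, "rule": rule, "alerts": current})
                current = [alert]
        incidents.append({"host": host, "rule": rule, "alerts": current})
    return incidents
```

Even this naive grouping collapses a burst of repeated detections on one host into a single incident, which is the basic mechanic behind the alert reduction described above.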
Investigation Acceleration
GenAI can be used to accelerate enrichment, summarize incident context, and support faster decision-making across analysts and stakeholders. Modern organizations embracing AI in security operations can often use it to:
- Summarize incidents from logs
- Explain “why this matters” in layperson language for non-technical leadership
- Draft containment recommendations and tickets
- Assist with query generation via SIEM and EDR
Threat Detection Engineering at Scale
In practice, AI is most effective when it augments human expertise, turning raw data into actionable insight and allowing security teams to operate at a scale and speed that matches today’s threat landscape. When it comes to threat detection, AI can help defenders:
- Generate detection logic candidates from TTP descriptions
- Compare detections across environments
- Identify gaps in telemetry coverage
Identity-Focused Detection and Response
In today’s world of hybrid work, the modern organization has shifted to an identity-driven model, and threat actors know it. The Arctic Wolf 2026 Threat Report found that 95% of the business email compromise cases responded to by Arctic Wolf Incident Response were caused by phishing or previously compromised credentials, highlighting that threat actors are exploiting trust far more often than technical flaws and serving as a stark reminder that tools alone can’t solve an identity problem. AI can help organizations shore up their identity defense by:
- Baselining normal user and service account behavior like login patterns, device posture, token usage, and access frequency
- Flagging anomalous activity like impossible travel, suspicious OAuth consent grants, session hijacking indicators, or privilege escalation attempts in real time
- Automatically correlating identity signals across email, endpoint, and cloud logs
- Accelerating containment actions like account lockout, token revocation, forced credential resets, and conditional access enforcement
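As a simplified illustration of the “impossible travel” check mentioned above, the sketch below flags login pairs whose implied speed exceeds a rough airliner ceiling. The 900 km/h threshold and the `(timestamp, lat, lon)` login format are assumptions for demonstration, not a production detection:

```python
import math
from datetime import datetime  # logins carry datetime timestamps

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_impossible_travel(login_a, login_b, max_kmh=900.0):
    """Flag a pair of logins whose implied speed exceeds `max_kmh`
    (roughly airliner speed). Each login is (timestamp, lat, lon)."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = (t2 - t1).total_seconds() / 3600.0
    if hours == 0:
        return True  # simultaneous logins from two places
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh
```

A real system would layer this on top of baselined behavior (known VPN egress points, trusted devices) to avoid false positives, but the geometry of the check is just this.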
In short: Organizations that pair AI-driven detection with strong identity controls, a punishment-free user reporting culture, and practiced incident response are seeing significant benefits.
How Are Threat Actors Using AI?
Attackers are using AI as a conversion-rate optimizer for cybercrime. Instead of pursuing fully autonomous attacks, they’re leveraging AI and automation to improve the efficiency of familiar tactics.
AI-Assisted Phishing and BEC at Scale
Artificial intelligence is a powerful tool for refining phishing lures, personalizing pretexts at scale, testing subject lines, translating scams into multiple languages, and iterating rapidly based on what gets the most clicks.
Credential Theft Acceleration
IBM observed an 84% increase in infostealers delivered via phishing emails, creating a direct pipeline from social engineering to the sale of compromised credentials on the dark web, to intrusions using those valid credentials. In the modern threat landscape, initial access is more often bought than it is built, and AI is helping threat actors generate, distribute, and iterate on delivery mechanisms faster.
Real-Time Voice and Video Manipulation
Attackers have shown time and again their willingness to adopt new technologies and to refine their approaches to make the best use of new tools. Last year, the groups designated UNC6040 (aka ShinyHunters) and UNC3944 (aka Scattered Spider or Octo Tempest) both found success employing complex pretexting and spear phishing. As capabilities continue to improve, expect to see threat actors doubling down on AI-powered:
- Email phishing, backed by convincing lures leveraging open-source intelligence (OSINT)
- Voice phishing (vishing), with attackers manipulating their voices in real time to masquerade as executives and other positions of influence
- Video deepfakes, using real-time face and voice swapping
AI is helping them reduce the operational costs of producing high-quality malicious content and shortening the feedback loop between attempt and success, enabling threat actors to operate more like digital marketers: optimizing engagement, maximizing credential harvest rates, and scaling fraud operations with measurable precision.
Malware and Scripting Assistance
AI is empowering a whole new generation of less-skilled threat actors by helping them write or modify code, troubleshoot errors in their malware, and generate “good-enough” scripts for automation. This doesn’t lower the bar so far that anyone can whip up malware in a moment, but it is expanding the pool of people who can produce functional offensive tools.
How Can Organizations Win the AI Arms Race?
Here’s the reality: AI is here to stay. And threat actors will continue to develop and evolve new ways to leverage it in their attacks. You can’t eliminate the risk; however, you can employ the same technology to reduce their ability to exploit users, better protect your identity environment, and limit attack exposure. Here’s how:
Treat Identity as a Primary Security Perimeter
The Arctic Wolf 2025 Human Risk Report found that only 54% of organizations enforce MFA for all users, which leaves predictable, preventable gaps in the identity environment that threat actors can easily exploit. Shore up the gap by:
- Enforcing MFA everywhere and prioritizing phishing-resistant methods for high-risk users and actions
- Monitoring for suspicious identity activity, including abnormal sign-ins, impossible travel, new device registrations, token anomalies, and mailbox rule changes
Build a Shame-Free Reporting Culture That Closes the Loop
AI-powered phishing is improving threat actors’ social engineering success. But organizations have multiple ways to reduce or eliminate that success:
- Make reporting a one-click workflow
- Reward reporting rather than punishing it
- Run relevant, timely phishing simulations that match today’s lures
- Establish a security awareness training program that focuses on frequent, brief trainings
Reduce “Shadow AI” Risk With Sanctioned Tools and Controls
Arctic Wolf’s finding that 60% of IT leaders and 41% of employees have put confidential information into GenAI tools and chatbots underscores why governance is so crucial to preventing modern, AI-enhanced attacks.
- Provide approved GenAI tools with enterprise authentication
- Turn on logging and data protections
- Establish clear “never share” rules around proprietary information, credentials, customer information and data, and any regulated data
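One lightweight way to back those “never share” rules with a technical control is to scrub obvious sensitive patterns before a prompt ever leaves the tenant. The patterns below are illustrative placeholders only; a production deployment would rely on a proper DLP engine with detectors tuned to the organization’s own data types:

```python
import re

# Illustrative patterns only; real deployments need tuned DLP detectors.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|tok|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(text):
    """Mask likely-sensitive values before text is sent to a GenAI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

Pairing approved tools with even a simple pre-submission scrubber, plus logging of what was redacted, turns the “never share” policy from a memo into an enforceable control.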
Harden Email Environment and Payment Protocols Against BEC
Because business email compromise remains a primary attack type and driver of financial losses, organizations need to build friction where it matters, slowing things down a little to gain a lot more protection:
- Enforce out-of-band verification for payment changes
- Require dual approval for high-value transactions
- Enforce email authentication protocols like DMARC, SPF, and DKIM
- Create detections for suspicious forwarding rules, VIP impersonation, and anomalous OAuth consent
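As a sketch of the forwarding-rule detection above, the snippet below flags mailbox rules that auto-forward outside the organization. The rule dictionary format is a hypothetical stand-in for whatever the mail platform’s audit export actually returns:

```python
def suspicious_forwarding_rules(rules, internal_domains):
    """Flag mailbox rules that forward outside the organization.

    Each rule is a dict such as:
        {"user": "cfo@corp.example", "action": "forward",
         "target": "attacker@mail.example"}
    (fields are illustrative; real rules come from the mail
    platform's audit API, e.g. an inbox-rules export).
    """
    flagged = []
    for rule in rules:
        if rule.get("action") != "forward":
            continue
        target_domain = rule.get("target", "").rsplit("@", 1)[-1].lower()
        if target_domain and target_domain not in internal_domains:
            flagged.append(rule)
    return flagged
```

Attackers who compromise a mailbox routinely add a quiet external forward to monitor payment threads, so even this simple external-domain check catches a common BEC persistence step.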
Assume Credentials Will Be Stolen
As attackers continue to pivot away from complex attacks and toward the relatively easier task of obtaining valid credentials for initial access, organizations need to put a plan in place for the inevitable theft of user credentials:
- Rotate credentials more frequently
- Monitor for dark web exposures
- Segment access and minimize standing privileges
- Detect and respond to suspicious use, not just a suspicious login
Operationalize Detection and Response With Security Operations
Artificial intelligence is increasing threat actor attack speed. Your countermove is 24×7 monitoring, detection, and response with holistic visibility into your entire attack surface. A security operations provider like Arctic Wolf provides both proactive and reactive resilience against modern threats.
Arctic Wolf’s unique combination of technology, security expertise, and risk transfer options provides end-to-end coverage to achieve security outcomes at an unprecedented scale, offering organizations a 90% reduction in the frequency of attacks, a 90% reduction in the severity of attacks, and up to $3 million (USD) in risk transfer, helping you End Cyber Risk® for your organization.
So … Who Will Win the AI Arms Race?
If you’re measuring by volume and velocity, threat actors appear to have the lead. AI is making attacks cheaper, faster, and more effective, while credential theft pipelines continue to grow.
If, however, you’re measuring by security outcomes, defenders are in prime position to make the decisions today that will lead to victory tomorrow. AI can improve an organization’s detection, triage, and response, as well as improve a security team’s fundamental operations in identity security, vulnerability management, user reporting, and response readiness.
While the AI arms race is currently raging, it’s clear that the war won’t be won by who “uses AI.” It will be decided by who turns AI into measurable outcomes. Right now, threat actors may have the edge. But, by turning to managed security operations that pair the power of AI with human expertise, organizations can achieve faster containment, limit successful intrusions, and reduce the impact of an incident.
Discover the ways our threat intelligence experts predict the attack landscape will evolve in the Arctic Wolf 2026 Threat Report.
Watch our exclusive on-demand webinar and learn how AI is playing into the day-to-day business of transforming the traditional cybersecurity role.
What does the future hold for AI? Download “The Human — AI Partnership,” our report packed with insights, predictions, and what industry leaders think will happen next when it comes to artificial intelligence and cybersecurity.