This post reflects the author’s views as of the publication date and contains forward-looking statements and opinions about technology trends. Actual outcomes may differ based on attacker behavior, customer environments, and broader market and regulatory developments.
Last Friday’s announcement from Anthropic, introducing new capabilities tied to its Claude platform, sparked immediate speculation about what AI-driven secure development could mean for cybersecurity.
The vision for the future of the security industry now circulating online is extremely ambitious, with some experts claiming that Anthropic’s new capabilities will soon produce AI systems that dramatically reduce, or potentially eliminate, software vulnerabilities before code ever reaches production.
If realized, that would be an important advancement, and one that Arctic Wolf would salute. Software vulnerabilities have long provided attackers with reliable entry points into organizations, and reducing them at scale would improve the overall security posture of the digital ecosystem. Security leaders should welcome progress that makes secure development easier and more accessible.
However, enthusiasm about future innovation should not obscure reality. Even in a hypothetical world where software vulnerabilities could largely disappear, cybersecurity would remain both necessary and complex. Many of the most disruptive attacks in recent years have succeeded without exploiting a single software flaw, underscoring a broader truth about the nature of modern cyber risk.
Vulnerability Detection Is an Input, Not the Outcome
Major technology transitions often produce sweeping predictions about disruption. When markets cannot easily predict which companies will emerge strongest during a shift, the instinct is sometimes to assume everyone is equally threatened until proven otherwise. Cybersecurity is currently experiencing a moment like that as AI capabilities accelerate.
The assumption that AI-assisted vulnerability discovery fundamentally replaces security platforms, or even security management products, misunderstands how organizations actually achieve security outcomes. Long before frontier AI models entered the conversation, developers relied on static analysis, dependency scanning, and automated testing tools to reduce risk earlier in the software lifecycle. Vulnerability scanning and secure code analysis have existed for decades, delivered by highly capable vendors and adopted widely across enterprise development pipelines. Can AI improve the speed and scale of that work? Almost certainly. But it cannot, on its own, eliminate a problem the industry has already spent years trying to solve. Finding bugs faster is an important input to security outcomes; it is not a replacement for security operations.
AI may significantly improve those capabilities, and that is a positive development. Faster identification of insecure patterns, automated remediation suggestions, and improved developer workflows can reduce exposure across industries. But discovering vulnerabilities does not equal protecting an entire enterprise environment. For example, Arctic Wolf published research last year finding that in 76% of intrusion cases, threat actors employed one or more of 10 specific vulnerabilities, all of which were previously known and had patches available at the time of exploitation. The trend is similar in ransomware cases, where zero-day exploits accounted for only 0.4% of incidents.
Security leaders and analysts are responsible for defending identities, monitoring cloud infrastructure, securing endpoints, managing third-party relationships, detecting anomalous behavior, and responding to incidents under real-world operational pressure. Tools that improve inputs into security operations ultimately strengthen platforms responsible for correlating signals and delivering outcomes. They do not remove the need for those platforms any more than better construction equipment eliminates the need for architects and builders.
Recent Breaches Show the Limits of a Vulnerability-Centric View
Recent high-profile incidents illustrate how frequently attackers bypass technical exploitation altogether. In 2023, attackers targeted major casino operators, including MGM Resorts International and Caesars Entertainment. Public reporting indicated that attackers relied heavily on social engineering tactics, impersonating employees and persuading IT help desk personnel to reset credentials.
The breach did not hinge on sophisticated malware engineering or undiscovered vulnerabilities. Instead, it exploited trust, process gaps, and human behavior. Once attackers obtained legitimate credentials, they operated within systems as authorized users, making traditional vulnerability defenses largely irrelevant.
A useful way to pressure-test the “AI will eliminate cybersecurity risk” narrative is to look at how some of the most damaging recent intrusions unfolded. The activity associated with Scattered Spider is a case in point. Their operations repeatedly bypassed hardened infrastructure not by exploiting zero-days, but by exploiting people and processes. Attackers impersonated employees, manipulated help desks, enrolled new devices, and reset credentials through legitimate workflows designed for customer service and operational continuity.
The lesson is uncomfortable, but clear: When identity becomes the perimeter, persuasion can become the exploit. The same dynamic appeared in the attack on Change Healthcare, where adversaries gained access through compromised credentials tied to remote access systems lacking sufficient MFA enforcement.
No novel software flaw was required. Instead, attackers combined credential access, authentication gaps, and operational blind spots to achieve systemic disruption at national scale. Even in a hypothetical world where AI eliminated memory-safety bugs or dramatically reduced exploitable code defects, these attacks would still succeed because they target trusted relationships, identity governance, and human decision making. That is precisely why security outcomes depend less on eliminating a single class of technical weakness and more on continuously managing exposure across identity, behavior, and operational controls.
This pattern has become increasingly common across sectors. Business email compromise campaigns rely on impersonation rather than malware. MFA fatigue attacks pressure users into approving fraudulent authentication requests. Cloud exposures frequently arise from configuration mistakes rather than exploitable bugs. Insider threats involve misuse of legitimate access privileges. Supply chain compromises leverage trusted vendor relationships to move laterally between organizations.
Attackers consistently demonstrate a preference for efficiency. If identity compromise or social engineering delivers faster results than technical exploitation, adversaries will choose those paths.
Perfect software cannot prevent deception, credential theft, or operational missteps. As organizations expand across hybrid cloud environments and distributed workforces, those risks often become more prominent, not less.
Frameworks Already Recognize Cybersecurity as an Operational Discipline
Established cybersecurity frameworks reflect this broader understanding of risk. The Cybersecurity Framework developed by the National Institute of Standards and Technology emphasizes governance, asset visibility, detection, response, and recovery alongside technical protections. Similarly, adversary behavior models maintained by the MITRE Corporation map attacker activity across a wide range of tactics and techniques that extend far beyond exploitation.
Credential access, persistence mechanisms, lateral movement, and data exfiltration remain central components of modern attacks, regardless of software quality. Organizations rarely experience catastrophic breaches simply because a vulnerability exists. More often, incidents escalate because visibility gaps prevent defenders from recognizing attacker activity quickly enough, or because fragmented tools slow response efforts. Cybersecurity therefore functions as an operational discipline rather than a purely technical one. Continuous monitoring, contextual analysis, and coordinated incident response remain essential even when preventive controls improve.
The Work Ahead
Cybersecurity has already navigated multiple technology inflection points, from cloud adoption to distributed workforces and SaaS sprawl. Each innovation improved speed and scale while expanding opportunity for attackers. AI will be no different. It will meaningfully reduce certain risks, but adversaries will continue to exploit identity, human workflows, and operational blind spots that exist far beyond the codebase.
That’s why lasting security outcomes don’t come from eliminating a single class of threats. Organizations need continuous visibility across their environments, expertise grounded in real adversary behavior, and security operations built to detect and respond when prevention inevitably fails. At Arctic Wolf, our focus remains the same: helping customers stay ahead of evolving threats by combining the Aurora™ Platform with human expertise to deliver measurable risk reduction across every attack surface.


