OpenAI recently introduced Daybreak, a cybersecurity initiative designed to apply frontier AI models to vulnerability discovery, secure code analysis, and earlier remediation across the software lifecycle. By combining advanced reasoning and planning capabilities, Daybreak aims to help organisations identify and address weaknesses before they reach production.
This is a meaningful step forward, but it is also a continuation of a long-standing approach. The cybersecurity industry has been building specialised tools to find and fix vulnerabilities for decades, and frontier AI models are simply a better version of those tools, with stronger reasoning, broader coverage, and the ability to operate at greater speed and scale.
The progress is real and, in many cases, impressive. What is less clear, and often misunderstood, is what that progress means for cybersecurity outcomes.
There are two common reactions. One is that secure-by-design development will eliminate a significant portion of cyber risk, making security teams and platforms less necessary over time. The other is that attackers will use the same capabilities to discover and exploit vulnerabilities faster than defenders can respond, shifting the advantage toward threat actors.
Both reactions miss how attacks actually play out. Industry research consistently suggests that zero-day vulnerabilities account for a relatively small fraction of overall cyberattacks. Most attacks rely on known vulnerabilities, credential theft, identity abuse, and gaps in operational execution. The limiting factor is not discovery but the ability to act.
This is why the conversation around initiatives like OpenAI’s Daybreak needs to focus less on capabilities and more on outcomes. In cybersecurity, what matters is not what a model can do in isolation but whether those capabilities make organisations measurably safer.
That requires accuracy, consistency, and context at machine speed. It also requires integration into real workflows, connection to the right telemetry, and human expertise to guide decisions. Without that operational layer, even advanced models remain disconnected from the environments where risk actually exists.
Better Tools Raise the Baseline. They Do Not Change the Problem.
AI will continue to improve how software is built and secured. Models like Daybreak are designed to accelerate vulnerability discovery, move remediation earlier in the development lifecycle, and reduce the number of issues that reach production. Over time, that will raise the baseline for software quality and reduce a class of preventable risk.
But raising the baseline does not remove the conditions that drive most breaches today. Organisations are not operating in clean environments where better code eliminates exposure. They are managing existing risk across endpoints, cloud infrastructure, identities, and user workflows, much of it accumulated over time. Secure development reduces future risk, but it does not eliminate the exposure that already exists or the ways attackers exploit it.
The Real Shift Is Speed, Scale, and Pressure on Execution
What frontier AI appears to be changing is the pace at which this problem unfolds. Initiatives like Project Glasswing and Daybreak demonstrate how quickly models can identify vulnerabilities and reason through potential attack paths. Industry testing reinforces this trend, with large-scale scans uncovering significant volumes of legitimate issues and showing how smaller weaknesses can be chained together. In some research and testing scenarios, models have been shown to generate working exploits, highlighting how closely discovery and exploitation may converge.
This does not necessarily simplify defence and, in many environments, may increase operational complexity. It increases the volume of findings, compresses response windows, and raises the bar for precision.
At the same time, these outcomes are not achieved through automation alone. They require extensive engineering, customisation, and human expertise to connect models to real-world systems and interpret results in context. The models themselves are not operating independently. They are components within a larger operational system. This is an important distinction to maintain. AI is not magical. It is a force multiplier when it is applied within a disciplined framework that can translate findings into action.
Secure Development Is Necessary. It Is Not Sufficient.
AI will continue to strengthen secure-by-design practices, particularly by helping developers identify and remediate issues earlier. That is a meaningful advancement and one that will have long-term benefits for the industry.
But many of today’s most disruptive attacks do not depend on software vulnerabilities at all. They rely on credential theft, social engineering, identity misuse, and operational gaps that allow attackers to move undetected across environments. These attack paths are not addressed by improvements in code quality alone.
This reinforces a broader reality. Risk does not only live in software code. It is shaped by how systems are deployed, accessed, and connected, as well as how users interact with those systems over time. Addressing that risk requires continuous visibility and the ability to act across the entire environment, not just during development.
A Turning Point That Reinforces a Constant
This moment represents a turning point for the industry. AI will continue to transform vulnerability discovery and secure development. At the same time, it reinforces a truth that has defined cybersecurity for years. The challenge has rarely been a lack of visibility or tools. It has been the ability to consistently act on what matters in real environments.
Better tools improve the baseline, and AI will continue to raise that baseline across the industry. But security outcomes are shaped less by what is theoretically possible and more by how effectively organisations can detect, prioritise, and respond to risk in the environments they operate in every day.
That has not changed.
