AI Bends the Vulnerability Curve

We are approaching the end of the linear relationship between code volume and security risk. As AI takes over the core work of development, it will fundamentally bend the vulnerability curve. I expect AI will drive a significant decrease in the volume of new vulnerable code appearing over the next one to two years. This is also my hope.
It's happening. Prominent technology leaders now report they have largely moved away from manual coding, instead directing AI agents as proxy workhorses that do the actual code writing and wrangling. How long before the average employee in the average company can say the same? The answer seems to be: not very long. Apparently Spotify is crossing the Rubicon.
As discombobulating as this revolution is to the affected career fields, the transition presents a great opportunity. Call it a great inevitability, even. With AI as the designated driver, we can produce better, more secure code, and ultimately more secure systems. Put another way: with humans out of the loop, there will be fewer mistakes and fewer neglected vulnerabilities.
With agentic systems development (or agentic engineering), there are fewer acceptable reasons for people and organizations to produce insecure code and architecture. AI can and will design secure systems and produce secure code. That's not to say agents make everything perfectly secure out of the gate, although the likelihood of them producing secure code increases daily. Rather, they can produce version one of a thing and then near-instantly iterate to improve it.
Agents can autonomously review their own output or new inputs. They will identify vulnerabilities, weaknesses, and control gaps, fix them, and test the fixes. They will keep looping until every issue is effectively resolved, all before you finish your first cup of coffee.
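To make that loop concrete, here is a minimal sketch in Python of what such a self-review cycle could look like. The agent_write_code, agent_scan, and agent_fix functions are hypothetical stand-ins for calls into a coding agent and a security scanner, not any particular product's API.

```python
# A minimal sketch of the self-review loop. The agent_* functions are
# hypothetical stand-ins, not a real library's API.

MAX_ROUNDS = 5  # cap the loop so a stuck agent escalates instead of spinning

def agent_write_code(task: str) -> str:
    """Hypothetical: ask a coding agent for a first draft."""
    ...

def agent_scan(code: str) -> list[str]:
    """Hypothetical: run static analysis / agent review, return findings."""
    ...

def agent_fix(code: str, findings: list[str]) -> str:
    """Hypothetical: ask the agent to patch the reported findings."""
    ...

def build_securely(task: str) -> str:
    """Generate code, then scan and fix until the scan comes back clean."""
    code = agent_write_code(task)
    for _ in range(MAX_ROUNDS):
        findings = agent_scan(code)
        if not findings:
            return code  # clean scan: done before the coffee is gone
        code = agent_fix(code, findings)
    raise RuntimeError("Findings persist after retries; escalate to a human.")
```

The iteration cap is the important design choice: an agent that cannot converge should hand the problem to a person rather than loop forever.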
Additionally, they can create clear requirements and recommendations for external security controls that can be handed off to other agents and/or people to implement. In this way, they can help establish a complete security architecture within the target environment. They just need the right context, of course.
In enterprises, I expect the volume of code produced to increase. Let’s consider the two competing trajectories.
Arc one is the code volume explosion. As agentic engineering takes the wheel, the sheer output of code will skyrocket. Production will be staggering, with code generated at the "speed of thought".
Arc two is the vulnerability spike. In the immediate term, vulnerabilities and weaknesses (CVEs and CWEs) will rise in tandem with volume. If we produce ten times more code, we may produce ten times more vulnerabilities.
However, we are approaching a critical inflection point. As the AI ecosystem evolves, these paths will diverge. We should expect to see arc two collapse, a sharp drop in vulnerabilities, even as arc one continues to climb. This decoupling will mark the end of proportional risk. For the first time, we will produce more while risking less.
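To see why the arcs can diverge, consider a toy model with made-up numbers: code volume doubles each year while vulnerability density per thousand lines collapses as models, harnesses, and review loops mature. Every figure below is an illustrative assumption, not a forecast.

```python
# A toy model of the two arcs. All numbers are illustrative assumptions:
# volume doubles yearly while vulnerability density per KLOC collapses.

volume_kloc = 1_000.0                   # new code shipped in year 0 (KLOC)
densities = [1.0, 0.9, 0.4, 0.1, 0.03]  # assumed vulns per KLOC, by year

for year, density in enumerate(densities):
    new_vulns = volume_kloc * density
    print(f"year {year}: {volume_kloc:>8,.0f} KLOC "
          f"x {density:.2f} vulns/KLOC = {new_vulns:>5,.0f} new vulns")
    volume_kloc *= 2                    # arc one keeps climbing
```

Under these assumptions, new vulnerabilities spike in year one, then fall below today's baseline even as output grows sixteenfold. That crossover is the inflection point.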
This will happen partly because the AI models and their harnesses keep getting better, and partly because people will learn to get better results through improved tool configuration, operational processes, and their own behavior. At least, that's what I hope we will see. Time will tell. The good news is that at this breakneck speed of change, we will not have to wait long.