WTF is going on with Vulnerability Management?
Why is vulnerability management so hard at the moment?
This week my co-founders and I announced that we’re working on Maze (mazehq.com) - a new product that uses AI to automate vulnerability management.
This is a personal blog and you won’t hear me talking about our product here. But, having spent ~12 months researching the problem and talking to hundreds of security leaders about vulnerability management, I figured it might be interesting to share a bit of what we’ve learnt.
All three of us founders have led engineering and product teams in the past, so we were aware of the pain, but not of how deep and widespread it was until we started digging in.
Around a year ago we started interviewing ~10 security leaders every week. We always started the conversation with the same question: “what’s your biggest problem right now?”.
We quickly became familiar with the same reaction, time after time. The person on the other end of the call would sit back in their chair, roll their eyes, and murmur something along the lines of "f'ing vulnerabilities".
The approach they were taking would differ (vendor products, homegrown tools, hardened images, etc.) but the pain was consistent and, in a lot of places, it seemed to be getting worse.
We’ve been dealing with vulnerabilities for decades now, why is it so hard?
Firstly, the number of new CVEs is growing fast. The headline stat is that roughly 40k new CVEs were published in 2024, a ~40% increase year on year. This data can be misleading in places: some of these CVEs were discovered before 2024 but registered later, for example. There is also the problem of new CVE Numbering Authorities (CNAs) and overly eager researchers reporting CVEs that are more theoretical than threatening. What can't be questioned, though, is what security leaders are seeing: so many reported to us the feeling that their backlogs just keep getting bigger, no matter what they do. The sorry reality is that many are now focussing only on fixing the Criticals, whilst crossing their fingers on the rest.
Second, CVEs are being exploited faster than ever. As I've written about before, attackers constantly adapt their methods based on a) what's working and b) the new technology at their disposal. We're seeing this play out in exploitation timelines, with Mandiant estimating that the average time to exploit a new CVE fell from around 30 days in 2021 to 5 days in 2023. If we expect attackers to increasingly take advantage of AI, we can surely expect this number to drop further. When many patching cycles run on SLAs of 7/30/90 days, the difference really matters.
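The arithmetic behind that last point is worth making explicit. A rough sketch (the time-to-exploit figures are the Mandiant estimates quoted above; the 30-day SLA is just the common middle tier, not a recommendation):

```python
# Illustrative only: how long a vulnerability is likely exploitable
# before a patch SLA forces a fix, given an estimated time to exploit.

def exposure_window_days(sla_days: int, time_to_exploit_days: int) -> int:
    """Days of likely exposure before the SLA deadline lands.

    Returns 0 when the SLA deadline arrives before the estimated
    exploit time, i.e. patching keeps pace with attackers.
    """
    return max(0, sla_days - time_to_exploit_days)

# In 2021 (~30 days to exploit), a 30-day SLA roughly kept pace:
print(exposure_window_days(30, 30))  # 0
# By 2023 (~5 days to exploit), the same SLA leaves a 25-day gap:
print(exposure_window_days(30, 5))   # 25
```

The uncomfortable implication: an SLA that looked fine a few years ago can silently become a multi-week exposure window without anyone changing their process.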
Third, development and operations teams are under more pressure than ever to deliver business value. Many organizations have gone through painful readjustments in the last few years as the ZIRP era ended, which typically means fewer staff and a lot more pressure to deliver for the business. Security gets caught in the crossfire as development teams push back on requests to fix vulnerabilities on the (often justified) basis that their job is to deliver new features, not respond to scanner findings. All of this leads to a ton of friction between security and engineering: endless back and forth, escalations, and arguments.
So we basically have a perfect storm - more volume, more urgency, and less capacity. No wonder everyone’s having a tough time.
Can vulnerability management be fixed?
For most people, the obvious answer to the problem is better prioritization. We just need a better ranking of vulnerabilities, so we know what to fix first.
There is some truth to this, but as we’ve spent time with security leaders over the last year and countless hours with the data, we’ve realised it’s not quite right.
The #1 problem plaguing vulnerability backlogs today is not prioritization, it’s false positives.
The truth is that the majority of vulnerabilities detected shouldn’t just be low priority, they should be removed altogether. Our conservative estimates so far are that 80-90% of vulnerabilities detected by scanners actually represent zero risk when properly analyzed in the context of the environment by a human. If that is anything close to true, it represents monumental amounts of wasted effort and time.
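To make the "zero risk in context" idea concrete, here's a deliberately simplified sketch of contextual triage. The field names (`package_in_use`, `network_reachable`) are hypothetical, and real environmental analysis involves far more signals, but it illustrates why so many scanner findings can drop out once context is applied:

```python
# Hypothetical sketch: filtering scanner findings by environmental
# context. The Finding fields and the two checks below are invented
# for illustration, not any particular scanner's schema.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    severity: str            # e.g. "critical", "high"
    package_in_use: bool     # is the vulnerable package actually loaded?
    network_reachable: bool  # can an attacker reach the affected service?

def is_actionable(f: Finding) -> bool:
    """A finding is worth a developer's time only if the vulnerable
    code is actually in use AND exposed to a plausible attack path."""
    return f.package_in_use and f.network_reachable

findings = [
    Finding("CVE-2024-0001", "critical", package_in_use=True,  network_reachable=True),
    Finding("CVE-2024-0002", "high",     package_in_use=False, network_reachable=True),
    Finding("CVE-2024-0003", "critical", package_in_use=True,  network_reachable=False),
]

actionable = [f.cve_id for f in findings if is_actionable(f)]
print(actionable)  # ['CVE-2024-0001']
```

Note that one of the two findings filtered out here is a Critical: severity alone tells you nothing about whether the vulnerable code is reachable in your environment, which is exactly why this is a false-positive problem rather than a prioritization problem.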
Hopefully we can do something about it…