A Sober Look at the Claude Code Security Announcement
Should we care about Anthropic entering security?
It’s been a week since Anthropic announced Claude Code Security and the cybersecurity market lost its mind. Now that the dust has settled, it seems like a good time to take a more sober look at what happened.
My take is that two things are true at the same time: a) the announcement itself was meaningless, and b) the market reaction was more rational than a lot of people think.
Why the announcement was meaningless
In August 2025, Anthropic released automated security reviews in Claude Code. In October, Google and OpenAI launched similar AI agents for code security with CodeMender and Aardvark. The markets didn’t move in any of these cases, and there was little more than a murmur about what these launches could mean for application security, let alone the broader security market.
So why was it that when Anthropic released a UI wrapper for a code scanner that already existed, everyone reacted as if it were the end of cybersecurity as we know it? Not only was the impact limited to SAST, a small sub-category of application security; the product wasn’t even close to being better than existing AI-native SAST offerings.
Anthropic are on fire at the moment. It’s reasonable to assume that the announcement signaled their intent to productise security capabilities, which led to the reaction. The thinking presumably was that even if Claude Code Security is underwhelming today, Anthropic has some of the best models in the world. Won’t they just keep going and eventually dominate the whole security market?
Nobody knows the answer to this, but personally I don’t think Anthropic is going to dominate the security market and leave no room for others. Anthropic’s market cap is currently $350bn, bigger than the entire security market combined. AGI aside, the opportunity they seem to be going after is to become the software company that businesses spend the most money on. A horizontal platform for the workplace, competing with Microsoft, Google, and AWS.
If this is true, then history can help us predict the future. Microsoft, AWS, and Google have all had the resources, distribution, and the technical capability to crush specific security categories for years. They’ve entered pretty much all of them and never become best-in-class in any. Microsoft in particular have made a lot of money in security, but it’s always been there to make their horizontal platform more appealing (M365 and Azure), not as their primary focus. Ask anyone who’s used Microsoft security products, and they’ll agree that extreme care and attention has rarely been paid.
The big horizontal B2B platforms just have too many other things to focus on to become truly best-in-class in one security category. More importantly, they just aren’t incentivised to do it. CrowdStrike is worth $100bn, Microsoft is worth $3tn. The effort it would take for Microsoft to crush CrowdStrike just isn’t worth it, compared to focussing more resources on their core business.
I think the same will hold true for the AI labs. Anthropic’s goal is to get as many companies as possible spending as many tokens as possible. Building the best endpoint security or the best cloud posture tool just isn’t strategic enough to justify the effort. Sure they’ll probably introduce offerings that might be good enough for a chunk of the market, but until we hit true AGI, there will always be room for vendors to build on their models and massively improve on them via specialised product offerings. Even when the underlying models get a lot better, there is a huge amount of work that sits on top of a model to make a great security product - context graphs, integrations, workflows, agent optimisation, cost management. The labs aren’t incentivised to do any of it as well as a standalone vendor is.
(If true superhuman intelligence arrives then of course none of this matters, but the markets clearly aren’t pricing that in; otherwise almost all stocks should be zeroes anyway.)
Why the market reaction was kind of rational
So I don’t think Anthropic is going to dominate security. But I do think the announcement, and more importantly the reaction, was one of the most significant moments for the cybersecurity market in recent years. It was the moment a huge number of people in security woke up to how fast things are actually changing. And on that front, I think the market reaction was pretty rational. If anything, I think most people are still underestimating how much disruption is coming.
Nobody has a playbook for what’s happening right now. But there are two shifts underway that I think the market hasn’t reacted to strongly enough yet.
First, the way we build software is changing at a pace that has no precedent. Way more software is being created, by a much wider set of people, many of whom aren’t developers. Increasingly, no human reads code before it ships. Most of the security tools we rely on were built for a world where humans write code, humans review code, and humans decide what to fix. That world is disappearing fast.
Second, attackers are experiencing the exact same transition, and can now carry out sophisticated automated attacks with orders of magnitude less effort and cost than before.
I spent years at Tessian watching the phishing market evolve and saw a similar pattern play out. When the cost of executing an attack drops significantly, volume explodes. Phishing as a Service kits turned what used to require real expertise into a few clicks and a few dollars. The result was exponentially more advanced attacks.
AI is about to do this across a much wider range of attack types. AI gives attackers the ability to automate complex, multi-step attacks that previously required serious skill and time. Attacks that were too expensive to bother with become viable, and attacks that required a whole team can be done by one person. If the pattern holds, the cost curve for offensive operations is about to collapse.
We don’t know how fast this will happen or where it will hit hardest, but we do know that defenders move slowly. We buy tools through twelve-month procurement cycles and implement them over quarters. Attackers don’t have procurement cycles; once they see something working, they will double down.
As an industry, we have to adapt faster. We need to fundamentally rethink how we build security products for the world we’re now living in. We need to stop debating whether change is coming, and start acting.
The announcement itself doesn’t matter. But the wake-up call it delivered to the market is well overdue.

