A lot has been written about using AI for security and security for AI, but how will AI change cyber attacks themselves?
There is a general awareness that AI will change the nature of attacks, but what’s actually going to happen?
After ~6 years working on stopping phishing at Tessian, I’ve found the best way to predict attacker behaviour is to view attackers as businesses.
The majority of cyber attackers are not doing it for fun. They are either trying to earn money (like an employee of a private-sector business) or trying to achieve something on behalf of a government (like an employee in the public sector).
Much like regular businesses, attackers have objectives and they have costs. Their goal is to achieve maximum return with the resources available to them. They act within a market economy with other bad actors - buyers, sellers, partners all trading with each other to achieve their objectives and drive down costs.
I saw over many years how the attacker market operates around phishing, and one thing is clear - it is ruthlessly efficient. Attackers don’t wait on regulation, compliance, or consultants to tell them which four-letter acronym they should be investing in next. They relentlessly test which tools give them the most impact for the least cost. Increasingly, they also sell tools to each other to drive costs down further. For example, over the years attackers learnt how to reliably bypass a Secure Email Gateway (it’s surprisingly easy), and Phishing-as-a-Service kits were built so these attacks could be run with a few clicks and a few $$. In other words, they learnt where the gaps were and drove down the cost of scaling the attacks that worked.
So, how is AI going to change cyber attacks?
A lot of the early discussion around how AI will influence cyber attacks focussed on the ‘generative’ nature of Large Language Models (LLMs). Most people immediately think of writing a phishing email or creating deepfakes.
AI does make social engineering easier, but I don’t think this should be the focus of our attention. In phishing, for example, LLMs can absolutely help non-native English speakers craft convincing phishing emails more easily, but this is not the biggest obstacle attackers face at the moment. Modern email security tools like Abnormal and Tessian (now Proofpoint) are not looking solely at how ‘convincing’ the email content is but at a huge array of behavioural factors - many of the emails they stop look perfectly convincing to the naked eye.
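To make “behavioural factors” concrete, here is a minimal sketch of the kinds of signals such tools can weigh. Every feature, weight, and threshold below is an illustrative assumption on my part - this is not how Tessian, Abnormal, or any real product actually scores email:

```python
# Illustrative only: a toy behavioural scorer for inbound email.
# The features and weights are assumptions for the sake of example,
# not how any real email security product works.
from dataclasses import dataclass

@dataclass
class EmailContext:
    sender_domain_age_days: int      # how long the sending domain has existed
    first_time_sender: bool          # no prior history with this recipient
    reply_to_mismatch: bool          # Reply-To differs from From
    lookalike_of_known_domain: bool  # e.g. "examp1e.com" vs "example.com"
    urgent_payment_request: bool     # content signal - only one of many

def behavioural_risk_score(email: EmailContext) -> float:
    """Combine behavioural signals into a 0-1 risk score (toy weights)."""
    score = 0.0
    if email.sender_domain_age_days < 30:
        score += 0.25   # newly registered domains are a common phishing tell
    if email.first_time_sender:
        score += 0.15   # no prior relationship between sender and recipient
    if email.reply_to_mismatch:
        score += 0.20   # replies silently diverted to a different mailbox
    if email.lookalike_of_known_domain:
        score += 0.30   # impersonation of a trusted counterparty
    if email.urgent_payment_request:
        score += 0.10   # content matters, but it is one signal among many
    return min(score, 1.0)

# A perfectly fluent email can still score high on behavioural signals alone.
suspicious = EmailContext(
    sender_domain_age_days=5,
    first_time_sender=True,
    reply_to_mismatch=True,
    lookalike_of_known_domain=True,
    urgent_payment_request=False,
)
print(round(behavioural_risk_score(suspicious), 2))  # 0.9
```

The point of the toy example: a grammatically flawless email still lights up on relationship and infrastructure signals, which is why better generated text alone doesn’t break this kind of defence.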
At the other end of the spectrum, some predict a cyber doomsday in which impossibly intelligent AI overwhelms every defence imaginable and we’re all rendered redundant. I don’t see this happening either. If it does, we probably have bigger fish to fry as a society than security anyway.
To predict what’s going to happen, we can look at what we know:
Attackers are ruthlessly efficient - they seek max gain for the lowest cost
Generative AI is increasingly automating tasks that previously required human-level intelligence
Agentic AI gives attackers the ability to automate even complex multi-step tasks
Many security teams (sensibly) focus on stopping easier-to-execute attacks first, and often deprioritise protecting against hard-to-execute attacks
Rather than looking purely at the generative nature of AI, we should think in terms of its reasoning ability and the complex activities it can automate. If AI allows attackers to automate increasingly complex tasks, it lets them execute what is currently deemed a complex cyber attack at orders of magnitude lower cost. Given that attackers operate within an efficient market, this should lead bad actors to build AI tooling that allows any attacker to execute complex attacks cheaply. And if attacks that were previously deemed complex can now be run at low cost, we should see a lot more of them.
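A back-of-the-envelope model makes the shape of this argument clear. Every number below is invented for illustration - what matters is what happens to attack volume when the cost per attempt collapses:

```python
# Back-of-the-envelope attacker economics. Every number here is an
# illustrative assumption; the point is the shape of the result, not the values.

def attacks_worth_running(payout: float, success_rate: float,
                          cost_per_attempt: float, budget: float) -> dict:
    """How many attempts a fixed budget buys, and the expected return."""
    attempts = int(budget // cost_per_attempt)
    expected_return = attempts * success_rate * payout
    return {"attempts": attempts, "expected_return": expected_return}

BUDGET = 10_000  # attacker's resources, in dollars (assumed)

# A "complex" attack today: expensive skilled labour per attempt.
manual = attacks_worth_running(payout=50_000, success_rate=0.05,
                               cost_per_attempt=5_000, budget=BUDGET)

# The same attack once AI tooling automates most of the work:
# cost per attempt drops two orders of magnitude, success rate unchanged.
automated = attacks_worth_running(payout=50_000, success_rate=0.05,
                                  cost_per_attempt=50, budget=BUDGET)

print(manual)     # {'attempts': 2, 'expected_return': 5000.0}
print(automated)  # {'attempts': 200, 'expected_return': 500000.0}
```

Same payout, same success rate - the only thing that changed is automation, and the rational attacker goes from two attempts to two hundred.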
Where this leads us is not to a sudden cyber apocalypse but to a steady ratcheting up of the difficulty setting for defenders. Complex activities like writing new exploits or bypassing EDR should become increasingly simple and cheap for attackers. We’re already seeing this trend with CVEs - VulnCheck recently found that nearly 25% of new CVEs are exploited almost immediately, completely upending many of the practices used to manage vulnerabilities today.
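One practical consequence: “patch within 30 days” style SLAs assume a grace period that increasingly doesn’t exist, so checking whether a CVE is already being exploited in the wild has to happen up front. The rough sketch below uses CISA’s Known Exploited Vulnerabilities (KEV) catalog; the feed URL and field names (cveID, dateAdded) match the published feed as I understand it, but verify them against CISA’s documentation before relying on this:

```python
# Sketch: flag CVEs in your backlog that already appear in CISA's
# Known Exploited Vulnerabilities (KEV) catalog. Feed URL and field names
# are believed correct but should be verified against CISA's docs.
import json
import urllib.request

KEV_FEED = ("https://www.cisa.gov/sites/default/files/feeds/"
            "known_exploited_vulnerabilities.json")

def known_exploited(cve_ids: set[str]) -> dict[str, str]:
    """Return {cve_id: date_added_to_KEV} for any backlog CVE in the catalog."""
    with urllib.request.urlopen(KEV_FEED) as resp:
        catalog = json.load(resp)
    return {
        vuln["cveID"]: vuln["dateAdded"]
        for vuln in catalog["vulnerabilities"]
        if vuln["cveID"] in cve_ids
    }

# backlog is whatever your vulnerability scanner produced this week
backlog = {"CVE-2024-3400", "CVE-2023-4863", "CVE-2020-0601"}
for cve, date_added in known_exploited(backlog).items():
    print(f"{cve}: known exploited since {date_added} - patch first")
```

Anything the lookup returns is, by definition, already being used by attackers and jumps the queue regardless of its CVSS score.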
The other thing that will change is security tooling - as it gets smarter using AI, the lower-hanging fruit will start to disappear, forcing an even greater migration to more complex attacks.
All this means that the game is not going to look drastically different for defenders; it’s just going to get harder. For the most part, we should already know about the risks we need to act on, but we’ll need to think differently about how we prioritise. We’ll need to focus on the things that today we assume are just out of reach for most attackers. In a sense, everyone will need to go up a level in sophistication. Thankfully, we have access to all the same technology attackers do - we just need to use it wisely.
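What might “going up a level” look like in a risk register? One simple, admittedly crude approach is to uplift the likelihood of any attack you previously filed under “too complex for most attackers”. The scores and uplift below are made up for illustration:

```python
# Toy re-prioritisation: risks are scored impact x likelihood, and the
# AI adjustment bumps the likelihood of attacks we used to file under
# "too complex for most attackers". All scores are invented for illustration.

risks = [
    # (name, impact 1-5, likelihood 1-5, previously "complex"?)
    ("Credential phishing",         4, 5, False),
    ("Custom exploit development",  5, 1, True),
    ("EDR bypass / custom malware", 5, 2, True),
    ("Unpatched public CVE",        4, 3, False),
]

def priority(impact: int, likelihood: int, was_complex: bool,
             ai_uplift: int = 2) -> int:
    """Impact x likelihood, with likelihood uplifted where AI lowers the bar."""
    if was_complex:
        likelihood = min(likelihood + ai_uplift, 5)
    return impact * likelihood

for name, impact, likelihood, was_complex in sorted(
        risks, key=lambda r: -priority(r[1], r[2], r[3])):
    print(f"{priority(impact, likelihood, was_complex):>2}  {name}")
```

Under the old scores, custom exploit development and EDR bypass sat at the bottom of the list; with the uplift applied, EDR bypass ties with credential phishing at the top.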