5 Comments
Aaron Shelmire

I agree in principle with some of the gist above - though you lost me in some of the details.

* re: the one single question, e.g. "stop working if provider is offline" - there are a lot of targeted models and transformers that can have a large impact on security use cases, which you can download and run locally (see: sentence transformers). With these you can deliver pretty good use cases that won't fail when, say, OpenAI or some other larger provider goes down. I'm hopeful that over the next few months to a year the performance (memory, speed, and precision/recall) and capability of smaller models will continue to improve exponentially - enabling more use cases that are untethered from SaaS APIs.

Chandrapal Badshah

That's an interesting viewpoint, Aaron.

I guess the author is trying to show a different angle. Say you ask the question "If your LLM provider stopped working, what would happen to your product?" (be it a proprietary or a self-hosted open-source model) and the answers are:

1. "Ahh, our AI chatbot will be down" - Now you know exactly which features are powered by AI.

2. "Oh, our capabilities to do X, Y and Z, which help you work fast/efficiently/proactively reduce attack surface, will be down" - In this case, we can infer it's deeply integrated within the product.

If the answer is 1, these products are clearly riding the AI hype train.

If the answer is similar to 2, the critic in me might still ask a follow-up question: "Aren't these capabilities possible using plain code? Why do you want AI to do it?"

Aaron Shelmire

For sure - and a lot of those chatbot use cases (answering help questions, building detection rules, explaining log lines) can reasonably be done with local language models now, and the innovation in local models seems to be progressing faster than (or arguably converging with) the large global models too!

Harry Wetherald

Hey Aaron, great point. There's lots of nuance like this I didn't cover properly in my post, but Chandrapal has it spot on: it was more meant as "if you lost access to LLMs, what would happen to your product?"

Some of the local LLMs and even small transformer models are great. In particular, I've found the biggest Llama model almost on par with Claude for many of the use cases we're playing with. My bet (and hope) is also that open-source models continue to improve and that many AI-based products will rely heavily on them in the future.

Damiano Bolzoni

Also, while LLMs were basically the only option available 18-24 months ago, it is now crystal clear that SLMs are far more efficient, at least for narrower use cases. SLMs generally work better for (most) cybersecurity tasks, at least in our experience. So I guess you will start seeing proprietary models being deployed at this stage, which would basically mean the security vendor is less reliant on a provider like OpenAI, Anthropic, and the like.
