
Q&A with Adam Meyers: “It's going to be an absolute bloodbath.”

Cyber Daily chats about Claude Mythos and how to tackle the flood of AI-powered vulnerability disclosures with CrowdStrike's Senior VP of Counter Adversary Operations.

Fri, 15 May 2026

Cyber Daily: Everyone's talking about frontier AI and its power to find vulnerabilities at speed and scale, but there seems to be a lot of noise and not much signal right now. What's the real deal?

Adam Meyers: Well, you know the real deal is that vulnerabilities are always happening.

I think we've seen a couple of interesting experiments and some interesting data around one model in particular, Mythos, right? We've been saying since November, or even before, that we're looking down the barrel at an influx of vulnerabilities, because AI is ideally suited to finding and exploiting vulnerabilities. When you think about how you find a vulnerability, there are really two ways.

There's what I like to call the artisanal way, where you find a target, and you completely reverse engineer everything about it, how it works, and you find a bug, and you write the world's most beautiful, perfect exploit that will enable you to exploit that target. But then what most people do to do this at scale is fuzzing. And with fuzzing, you're throwing a bunch of garbage at an input, and you're hoping it crashes the program or the software, and then when that software crashes, it creates a log or a crash dump, which has information about what caused it to crash, or the state when it crashed.

And then if you look at that log, you can see if it's exploitable, and perhaps it gives you a path to exploitation. So that's kind of the two ways that people do vulnerability exploitation.
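The fuzz-crash-triage loop described above can be sketched in a few lines of Python. This is a minimal, illustrative sketch: `parse_header` is a made-up stand-in for real software under test, and the "crash dump" here is just the failing input plus the exception type.

```python
import random

def parse_header(data: bytes) -> str:
    """Toy target: raises on malformed input, standing in for real software."""
    if len(data) < 4:
        raise ValueError("truncated header")
    if data[0] == 0xFF:
        raise IndexError("unhandled tag")  # the 'crash' a fuzzer hopes to trigger
    return data[:4].hex()

def fuzz(target, iterations=1000, seed=0):
    """Throw random garbage at the target; log every crash with its input state."""
    rng = random.Random(seed)
    crash_log = []
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(blob)
        except Exception as exc:
            # A crash: record a minimal 'dump' -- the input and what it triggered.
            crash_log.append({"input": blob.hex(), "error": type(exc).__name__})
    return crash_log

crashes = fuzz(parse_header)
```

In a real campaign the crash log would carry registers and stack state rather than an exception name, and it is that triage step, deciding which crashes are exploitable, that the interview points to as a natural fit for AI.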

What we've seen so far with AI is that it's being used for static code analysis, which is great if you have the source code; that's why a lot of the public work so far has been on open-source projects, because you have the source code for those. When you start getting into black-box testing, you actually have to instrument the software, and there are a lot more steps that are, I think, a lot more complex. It's doable with AI, but what we've seen so far has really been focused on software source code and finding bugs there.

AI can be really useful at dialling in what garbage you throw at the input to try to break the software. And it's extremely good at analysing the crash dumps to see if they're exploitable. You know, for that, I would argue you could use even smaller models, not general-purpose models like ChatGPT or Mythos, but custom models that are really dialled in and more deterministic, meaning you're going to get the same outcome every time. And then you can use a general-purpose model to help write the exploit.

So I think we'll see, over the next couple of months, more specific tools and models for different pieces of this. And you're already hearing about harnesses and scaffolding inside the AI. That's all because that's how you can help the AI get access to do the thing that you're trying to ask it to do.

But yeah, this is coming either way, and I think it's model independent. The thing that everybody's focused on is the exploits and zero days, right? We've been kind of trained by the media and the security industry that zero days are the thing that you can't plan for, and it's the worst possible situation, right? A cyber Pearl Harbor, or something to that effect. And the reality is, a zero day is not that big of a deal.

We find zero days once a quarter, on average, at CrowdStrike. And for us, zero day is not the end of the story. It's the start of the story, because everything that happens after that zero day gets exploited by the threat actor, whether it's a human or a machine, they still have to move laterally, they still have to escalate privilege, they still have to accomplish the thing that they're trying to do. They're not just finding a bug; they're trying to execute a mission.

So that's where we hunt, right? That's where we find bad guys. Every single day at CrowdStrike, we look at 6.7 trillion events per day, and we see something like 65 million events per second at peak that we're hunting on. So there's a tremendous amount of data, a tremendous amount of grey space for us to hunt adversaries, whether they be machine or human. So I don't really think that that's going to be the big issue here.

For me, the big issue is on the solution to this, which is… If you look at last year, there were roughly 48,000 CVEs, or Common Vulnerabilities and Exposures, and that's a lot. We're already looking at, I think, a 27 per cent increase in the first quarter of this year over last year. So there are more bugs being found; whether it's by a human or a machine is irrelevant. The Chinese can weaponise a vulnerability inside of two days of it being disclosed, and that's what we call an n-day: a vulnerability that's been found and for which a patch is available. So threat actors have figured out how to weaponise n-days, China in particular, very quickly.

In fact, at their Tianfu Cup, which just wrapped up in January, there was a whole track that was really about “How do we weaponise known vulnerabilities”, right? Because they understand the value of that. But let's say AI has an impact here. Let's say it's modest, which I think, you know, 10x would not be a leap of imagination for what an AI can do.

So let's say a 10x is it, and there are now 480,000 CVEs. I don't think the CVE system can even handle that. But now, as a CISO, as a defender, as somebody that's responsible for patching systems, you have to start prioritising, because you can't patch everything at once. That's impossible, and most organisations… I mean, take a look at Salt Typhoon, what we call Operator Panda. They hacked into a telco and got access to the President of the United States, when he was president-elect; they got access to his cell phone data. That's a hard target in my mind, and they did it using not a zero-day but a two-year-old Cisco vulnerability that wasn't patched, right?

That's a huge problem. So now, if you 10x the number of vulnerabilities that all these folks are dealing with, it's going to be an absolute bloodbath.

Cyber Daily: So, how do we educate organisations to follow that basic cyber hygiene and patch?

Adam Meyers: Well, it's not just about patching, which is, I think, where a lot of people think “Oh, it's just simple, just patch”. But when you patch something…

Think about that telco example, right? Let's say it was a Cisco switch that was routing traffic at the telco. If they shut that down to patch, they're going to disrupt how many phone calls, how many text messages, how much network traffic? And so they have to have a good strategy for patching, and they have to schedule downtime, they have to have failover, and they have to have all of these conditions that are ideal so that they can do the patch, and if something goes wrong with the patch, they're running up against the clock, and they're going to have a real bad day.

So patching is not as simple as patching, and with 48,000 vulnerabilities, you can't patch everything, so you have to prioritise. Organisations have historically prioritised based on one of two things. The first is prevalence. It's kind of an old method, which is to say: if there's a bug, or a patch I need to issue, how much of it is in my environment? Whatever has the highest amount, that's what I'm going to patch first.

That methodology has kind of gone by the wayside, and more organisations gravitate towards criticality-based patching. So they look at the CVSS, which is a component of the CVE, and they look at that score, and they say, “Okay, if it's above a certain threshold, that's break glass,” right? We're gonna have a network outage. We're gonna do whatever we need to do, because that's going to be a bad day if somebody exploits that.

The problem is that they're looking at these vulnerabilities, and they're looking at that CVSS score. So let's go with Palo Alto. They have this GlobalProtect VPN product, and a lot of people use it, and there were two vulnerabilities in GlobalProtect about a year and a half ago. The first one was remote unauthenticated access. So anyone who exploited this vulnerability could get unauthenticated, though unprivileged, access to the device, and that had a CVSS score of like 5.5, and organisations would look at that, and they would say, "Well, that doesn't hit our threshold, so we're not going to patch it. We'll schedule that for the next maintenance window," right?

Then they look at the next vulnerability that comes out; same time, same patch cycle. This one is a local privilege escalation, scored 8.5, which means that if you have access to that device, you can escalate privilege to system or admin or whatever it is. And they look at that, and they say, "8.5, that's at our threshold. We should probably patch this". And then somebody pipes in, and they say, "Well, it's an 8.5, but it's a local privilege escalation, so they'd have to have access to the box to be able to use it. So it's really not that big of a deal, right?"

And then they forget about that first vulnerability. So now if you chain those two together, it's a kill shot, right? Unauthenticated access plus a local privilege escalation: game over. But they don't look at those two together. They're looking at them in a bubble, and that really makes it difficult for them to leverage criticality-based patching.
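A minimal sketch of why threshold-only triage misses this kind of chain. The identifiers, types, and scores below are illustrative stand-ins for the two GlobalProtect bugs described in the interview, not the real CVE records.

```python
# Illustrative vulnerability records; IDs and scores are invented for the sketch.
vulns = [
    {"id": "VULN-A", "cvss": 5.5, "type": "remote-unauthenticated-access"},
    {"id": "VULN-B", "cvss": 8.5, "type": "local-privilege-escalation"},
]

THRESHOLD = 8.0  # hypothetical 'break glass' criticality cutoff

def patch_now_by_score(vulns):
    """Criticality-based patching: each bug judged in a bubble against the threshold."""
    return [v["id"] for v in vulns if v["cvss"] >= THRESHOLD]

def patch_now_with_chains(vulns):
    """Also escalate low-score bugs that chain with a privilege escalation."""
    urgent = set(patch_now_by_score(vulns))
    types = {v["type"] for v in vulns}
    if "remote-unauthenticated-access" in types and "local-privilege-escalation" in types:
        urgent.update(v["id"] for v in vulns)  # together, the pair is a 'kill shot'
    return sorted(urgent)
```

Score alone leaves the 5.5 entry for the next maintenance window; the chain-aware pass flags both, which is the whole point of the example above.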

And then the third model, which is one that I promote, and I think people should adopt, is to look at what is being exploited in the wild, because then at least you know it is actively being exploited. In that Palo Alto situation, both were being exploited together in the wild. CISA puts out something called the Known Exploited Vulnerabilities Catalog, and that lets organisations see all the things that are actively being exploited, and we contribute to that all the time. We'll notify CISA if we see vulnerabilities being exploited, they add them to the catalogue, and that helps people patch.
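The exploitation-based prioritisation described here amounts to a simple cross-reference. CISA does publish the KEV catalogue as a JSON feed; in this sketch a downloaded copy is mocked as an in-memory dict, and the inventory CVE IDs are invented for illustration.

```python
# Mock of a downloaded CISA KEV catalogue (the real feed is JSON with a
# 'vulnerabilities' array of entries carrying a 'cveID' field, among others).
kev_catalog = {
    "vulnerabilities": [
        {"cveID": "CVE-2024-0001"},
        {"cveID": "CVE-2023-9999"},
    ]
}

# Invented findings from a vulnerability scan of our own environment.
inventory = ["CVE-2024-0001", "CVE-2024-5555", "CVE-2023-9999", "CVE-2022-1234"]

def prioritise(inventory, kev_catalog):
    """Split scan findings into actively exploited (patch first) and backlog."""
    kev_ids = {entry["cveID"] for entry in kev_catalog["vulnerabilities"]}
    exploited = [cve for cve in inventory if cve in kev_ids]
    backlog = [cve for cve in inventory if cve not in kev_ids]
    return exploited, backlog

exploited, backlog = prioritise(inventory, kev_catalog)
```

In practice the KEV set would be refreshed from CISA's feed on a schedule, and the backlog would then fall back to criticality- or prevalence-based ordering.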

So I think, particularly as we start to see AI drive up both the number of vulnerabilities and the effectiveness of their exploitation, organisations are certainly going to need to adopt prioritisation based on what is actively being exploited. That's where threat intelligence will be really useful: to understand what's out there, who's using it, and what they're doing with it.

Cyber Daily: You mentioned the volume of vulnerabilities that are being reported, and it's just skyrocketing year over year, quarter over quarter. And I know that recently, NIST said that it was no longer going to be enriching every CVE that crossed its desk; it was going to announce new prioritisation criteria. Is that a response to this Vulnerability Apocalypse, that we just can't cover all of them?

Adam Meyers: I think it's a response to a few things.

I mean, I think they've had some funding issues as well, associated with CVE. And I think, you know, CVE was designed many, many, many years ago, when we had far fewer vulnerabilities, far less software sprawl. And, oftentimes, as you look at the number of vulnerabilities year over year, I think it wasn't built to this scale. And I think they recognise that it's unmanageable.

And there are some other issues with CVE. For instance, you don't see a lot of cloud-based CVEs, because if it's a SaaS vulnerability, they patch it on the cloud side; there's nothing for the customer to do. So you don't really need to call it out as a CVE, because once the bug is found, you fix it, it's game over. So there are aspects of the whole vulnerability landscape that have been changed by cloud and by SaaS.

CVE also doesn't cover… If you look back at the last couple of weeks, we've seen a massive uptick in supply chain attacks, and they're targeting software libraries that are being used across the supply chain, and CVE doesn't account for that. So I think it is not the solution that it was back when you were dealing with the Morris worm and a handful of Unix vulnerabilities. To defend NIST here, you can't possibly expect them to cover every single product. And when you look at the enrichments, they're not super detailed; it's basically metadata, usually just what came from the initial disclosure anyway. It's usually exactly the same thing, and the vendors typically have way more information in their own disclosures than the CVE does.

CVE is really useful for categorising vulnerabilities when you're doing a vulnerability assessment on a network; you can hand over a list of all the CVEs you found, and frankly, most of the auditors doing that had no idea what those vulnerabilities were anyway. I've been in places where they found a vulnerability that was 15 years old, and they're like, "You need to patch this". But as far as I'm concerned, you need to burn that system out in the woods, because if it's been exposed for 15 years, vulnerability patching is not going to fix your problem.


David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
