
Why Anthropic’s Project Glasswing matters, and what CISOs need to know

Anthropic’s latest AI model can find vulnerabilities at speed and scale. Here’s why industry experts consider Claude Mythos to be a “watershed moment” for the industry and a wake-up call for developers.

Mon, 13 Apr 2026

AI firm Anthropic announced its latest AI model last week, but declined to release it to a wide audience.

Instead, Claude Mythos will be released as an exclusive preview to a select group of technology and cyber security companies as a tool to identify software vulnerabilities at scale.

The reason? It’s too good at what it does. According to the company, the model has already found vulnerabilities in operating systems and web browsers alike, some of which have existed for decades without ever being identified.


In fact, 99 per cent of what Claude Mythos found had not been patched at all, making it a potentially dangerous tool in the wrong hands.

A zero-day tsunami

“The model is extremely effective at identifying software vulnerabilities that could lead to zero-day exploits,” Danny Jenkins, CEO and co-founder of cyber security firm ThreatLocker, said on LinkedIn.

“That same capability that helps defenders conduct penetration testing will also be used by attackers to find and exploit weaknesses at scale. Critical infrastructure systems are especially vulnerable, as many still rely on legacy systems the model can easily exploit.”

Jenkins, however, believes the focus on using AI to fight AI is incorrect.

“While defenders should certainly use the same penetration testing tools that attackers use, the conversation of AI stopping AI risks is distracting us from something more immediate,” Jenkins said.

“There are proven steps that organisations can deploy today that do not depend on AI, and we must do so with urgency because Anthropic won’t delay release indefinitely.”

Jenkins said that companies should instead focus on application containment to ensure that platforms can’t bypass traditional controls.

“My advice is straightforward: focus on controls that limit software behaviour, not just controls that detect what’s already happened,” Jenkins said.

“Focus on what you can do today to make yourself more secure, rather than waiting for the next innovation.”

Doug Britton, EVP and chief strategy officer of RunSafe Security, called Anthropic’s announcement a “watershed moment for AI’s runaway zero-day discovery and exploitation”.

“AI is now uncovering memory safety bugs at massive scale, including vulnerabilities that have been hiding in production code for over 25 years – the problem isn’t just that these bugs exist, it’s that they’re being found faster than organisations can fix them,” Britton told Cyber Daily.

“That means the traditional model (find, patch, repeat) can’t keep up anymore. Security has to shift from trying to eliminate every bug to protecting systems even when those bugs are still there.”

Britton added that the Claude Mythos Preview and Project Glasswing news shattered the illusion that software is safe simply because it has been tested.

“OpenBSD has been audited and fuzzed an uncountable number of times over 26 years by world-class researchers,” Britton said.

“Mythos still found a remotely exploitable bug. If that’s possible there, it’s possible anywhere.”

Britton is also concerned that this leap in technology could make traditional incident response mechanically impossible due to a “tsunami of zero-days across critical software”.

What a CISO needs to know

According to Douglas McKee, director of vulnerability intelligence at Rapid7, however, the more salient question for CISOs is not what Anthropic’s new model may mean for the market, but something more practical and immediate.

“CISOs do not need to decide this week whether Anthropic’s model changes the entire market,” McKee said in a blog post.

“They do need to ask a more practical question: if my environment starts surfacing materially more vulnerabilities tomorrow, what happens next?”

The answer, McKee said, is probably an uncomfortable one.

“That is where this news becomes relevant. AI-driven discovery does not reduce the need for an exposure-led security model. It increases it. The organisations that benefit most will not be the ones with the biggest pile of findings. They will be the ones that can connect those findings to business-critical assets, internet exposure, identity paths, existing detections, remediation workflows, and validation,” McKee said.

“A good board-level translation is that faster discovery only has value if the organisation can prioritise effectively, remediate quickly, and prove that the fix reduced real exposure. Otherwise, the result is more volume and more noise.”


David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
