The concern was valid and attackers did use it! But so did defenders, and the net result was that security professionals got better at finding and fixing things, and the race continued more or less as before.
Thirty years on, Anthropic's Mythos Preview has arrived, and once again the sky is falling.
What has Mythos done?
Credit where it's due: Mythos (from what we’ve read; we’re not allowed in the cool-kids treehouse yet!) seems genuinely impressive. The UK's AI Security Institute (AISI) tested it independently, and Mythos is the first model to complete a 32-step cyber range called "The Last Ones" (TLO), simulating a full attack chain from reconnaissance to network takeover.
It could also operate autonomously (no humans needed!).
And it could identify and exploit a 17-year-old FreeBSD remote code execution vulnerability, along with decades-old flaws in OpenBSD and FFmpeg.
These are real capabilities and they represent a genuine step change in what AI can do offensively.
The caveats that didn't make the headlines
But there are important qualifiers that tend to get dropped from the coverage. First up, the test ranges lacked active defenders and defensive tooling, and they did not penalise models for actions that would trigger security alerts. And Mythos failed AISI's operational technology range entirely.
The flaws in OpenBSD’s TCP SACK implementation and FFmpeg’s H.264 decoder? They’re real, but Mythos did have access to the source code.
And the Firefox JavaScript-engine vulnerabilities? Well, they’re real too, but the testing did not represent exploitation of a real end-user Firefox browser: Anthropic says it used a testing harness mimicking a Firefox 147 content process, without a browser-process sandbox or other defence-in-depth mitigations.
And all of that aside, unauthorised users accessed Mythos within days of the preview being released, just by guessing URL patterns using information exposed in a previous data breach. Mercor, an AI staffing firm supplying contractors to Anthropic, was compromised by a distinctly un-futuristic supply-chain attack: exactly the kind of risk organisations need to be managing today, using a controls framework that includes risk management, credential rotation, penetration testing and vendor risk management.
So what is actually breaching Australian organisations?
Not Mythos. As described in the Verizon 2025 Data Breach Investigations Report (analysing over 22,000 incidents across 139 countries), the real attacks, the ones we should be panicking about, are much more prosaic:
- Stolen credentials remain the most common initial access vector, used in 22% of breaches.
- Exploitation of edge device and VPN vulnerabilities surged eightfold year-on-year, yet only 54% of those vulnerabilities were fully remediated, with a median time to patch of 32 days against a median time to exploitation of zero days.
- Ransomware was present in 44% of all breaches, up from 32%, with 88% of those incidents hitting small and medium-sized businesses.
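The patching gap in the second bullet is easy to make concrete: with a median time to exploitation of zero days, every day of patch latency is a day of exposure. A minimal Python sketch of that arithmetic (the hostnames and dates below are hypothetical, purely for illustration):

```python
from datetime import date
from typing import Optional

def exposure_days(patch_released: date, patched_on: Optional[date], today: date) -> int:
    """Days a host was (or still is) exposed, assuming exploitation
    can begin the day the patch is released (time-to-exploit of zero,
    as the DBIR medians suggest)."""
    end = patched_on or today  # unpatched hosts are exposed up to today
    return max((end - patch_released).days, 0)

# Hypothetical edge-device fleet; vendor patch released 1 March 2025.
released = date(2025, 3, 1)
fleet = {
    "vpn-gw-1": date(2025, 3, 3),   # patched quickly
    "vpn-gw-2": date(2025, 4, 2),   # patched at the 32-day median
    "vpn-gw-3": None,               # still unpatched
}
for host, patched in fleet.items():
    print(host, exposure_days(released, patched, today=date(2025, 5, 1)))
# vpn-gw-1 2
# vpn-gw-2 32
# vpn-gw-3 61
```

The point of the sketch is the asymmetry: the defender's clock runs in days or weeks, the attacker's starts at zero.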
Not exciting, not front page news, not glamorous… just the evidence-based reality of most days in real infosec!
The fix is boring, but it works
AISI's blog post ends with a recommendation that amounts to "do the basics well": security updates, access controls, security configuration, and logging.
Risk management frameworks already cover all of this. The CIS Controls, the ACSC Essential Eight, and ISO 27001 all exist for a reason. None of them are new, none of them are exciting, and none of them will generate feverish media coverage. But they work, when they are actually implemented, verified and maintained.
But if an organisation instead chooses inaction and complacency? AI will just accelerate the consequences! As iTnews reported, at least 75 Australian businesses with turnover above $3 million paid ransomware groups in the first eight months of mandatory disclosure. Between seven and thirteen a month! And those are just the ones bound by Australia’s limited reporting requirements.
The best defence against AI-powered attacks turns out to be the same as the best defence against every other kind of cyber attack: doing the boring stuff well, consistently, and verifying that it actually works.
Our full article describes all of this, and how we use AI to accelerate attacks, in more detail.