Linus’s Law, Eric Raymond's famous dictum about open source software, holds that “given enough eyeballs, all bugs are shallow.”
If enough people look at a piece of code, someone will eventually spot the problems.
AI has supercharged this principle, making it possible to scan code and find vulnerabilities faster than ever before. But those same tools are available to attackers, and the gap between discovery and exploitation is shrinking fast.
The question for Australian security teams is: who will find the vulnerabilities first?
AI-powered pen testing has arrived
XBOW's ascent to the top of HackerOne's US leaderboard marked a milestone for application security (AppSec). In just 90 days, its autonomous AI penetration tester submitted more than 1,060 vulnerability reports, surpassing the output of thousands of human researchers.
Unlike much of the AI slop flooding bug bounty programs, these findings weren't theoretical. Through those programs, companies resolved 130 critical vulnerabilities XBOW uncovered, with 300+ more triaged and awaiting resolution.
What makes XBOW's achievement particularly significant is its economies of scale. The system operates autonomously, requires no sleep, and tests thousands of targets simultaneously. While human researchers cherry-pick high-value targets, AI systems can methodically test entire attack surfaces. HackerOne reports that autonomous agents submitted more than 560 valid reports in 2025 alone.
Vulnerabilities that once required skilled security researchers to find and exploit are now discoverable at machine scale and speed. For Australian organisations operating under the Security of Critical Infrastructure Act, where reporting obligations are tight and the threat landscape includes sophisticated state-sponsored actors, speed matters.
Transforming threat modelling from weeks to minutes
JPMorgan Chase's AI Threat Modeling Co-Pilot research demonstrates how enterprise application security teams are already deploying AI to address velocity constraints. Its Auspex system captures threat-modelling tradecraft in specialised prompts that guide AI through system decomposition, threat identification, and mitigation strategies, enabling developers to address the identified threats through a self-service model.
Auspex combines generative AI with expert frameworks, industry best practices, and JPMorgan's institutional knowledge. The system encodes this context directly into AI prompts through a technique called "tradecraft prompting." It processes architecture diagrams and textual descriptions, then chains prompts to generate threat matrices that specify scenarios, types, security categorisations, and potential mitigations.
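Auspex's implementation isn't public, but the chaining pattern it describes is easy to sketch. Below is a minimal, illustrative example of two chained prompts, one for decomposition and one for threat enumeration, written against the OpenAI Python client; the prompt wording, the STRIDE framing, and the model name are assumptions for illustration, not Auspex's actual tradecraft.

```python
# Minimal prompt-chaining sketch for threat modelling, loosely inspired by
# the "tradecraft prompting" idea above. Prompts, model, and output format
# are illustrative assumptions, not Auspex's actual design.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

DECOMPOSE = (
    "You are a senior security architect. Decompose this system description "
    "into components, trust boundaries, and data flows:\n\n{design}"
)
ENUMERATE = (
    "Using the STRIDE categories, list concrete threat scenarios for this "
    "decomposition. For each, give the scenario, threat type, severity, and "
    "a candidate mitigation:\n\n{decomposition}"
)

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def threat_matrix(design: str) -> str:
    # Chain the prompts: the first call's output feeds the second.
    decomposition = ask(DECOMPOSE.format(design=design))
    return ask(ENUMERATE.format(decomposition=decomposition))

print(threat_matrix("A public REST API writes customer orders to Postgres "
                    "via an internal message queue."))
```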
Traditional threat modelling can take weeks or months; AI-driven approaches like this collapse that timeline to minutes while improving the quality of human analysis.
How AI will reshape modern AppSec teams
The emerging AI use cases that XBOW and Auspex illustrate offer AppSec teams an alternative to the traditional model, which consumes enormous resources during development while providing limited coverage.
Code review backlogs grow, security debt accumulates, and critical vulnerabilities slip into production because humans remain bottlenecks in the software development lifecycle. A recent GitLab survey found that teams in Australia lose 7 hours per week to inefficient processes.
AI changes this equation. Security teams can now systematically redeploy resources away from manual, repetitive activities toward building security-engineered solutions that integrate AI directly into developer workflows.
A few proven, AI-driven strategies can help a modern AppSec team scale efficiently:
● Build queryable security intelligence: Ingest every security bug, vulnerability report, and incident into structured data stores that support semantic search. Converting historical findings into embeddings lets AI systems identify similar patterns across codebases: when a new vulnerability class emerges, your AI can instantly query whether similar issues exist elsewhere (see the first sketch after this list).
● Fine-tune models for your environment: Rather than relying on generic commercial tools, your AppSec team should use RAG (Retrieval-Augmented Generation) to augment LLMs with security anti-patterns and architectural standards specific to your organisation (see the second sketch after this list). Recent research demonstrates that combining static analysers like PMD and Checkstyle with fine-tuned LLMs significantly improves code review accuracy while reducing false positives.
● Integrate AI into your developer toolchains: Security findings that arrive days or weeks after code is written create friction and force costly context switching. Instead, embed AI-powered analysis directly into your IDEs, CI/CD pipelines, and pull-request workflows, so developers receive real-time security guidance as they write code, not after they've moved on (see the third sketch after this list).
● Apply AI to threat modelling at scale: Following JPMorgan's lead, implement AI-powered threat modelling that can analyse every new system design, API specification, and infrastructure change. The goal isn't perfection but breadth: it's better to have AI-generated threat models for 100% of your systems than expert-reviewed models for 10%.
● Leverage AI to improve your Static Application Security Testing (SAST): Traditional SAST tools generate high volumes of false positives that desensitise developers and create triage overhead. AI can dramatically improve their accuracy by understanding code context, analysing data flows, and identifying real vulnerabilities that pattern-matching tools miss (see the final sketch after this list).
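To ground the first strategy, here is a minimal sketch of semantic search over historical findings using the open-source sentence-transformers library. The model name, the in-memory index, and the sample findings are assumptions for illustration; a production system would persist embeddings in a vector database.

```python
# Minimal sketch: embed historical security findings and query them
# semantically. The model name and sample findings are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical historical findings; in practice, ingest from your trackers.
findings = [
    "SQL injection in order search endpoint via unsanitised 'q' parameter",
    "JWT signature not verified in internal admin API",
    "Path traversal in report download handler",
]
index = model.encode(findings, normalize_embeddings=True)

def similar_findings(query: str, top_k: int = 2):
    """Return the top_k historical findings most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = index @ q  # cosine similarity, since vectors are normalised
    best = np.argsort(scores)[::-1][:top_k]
    return [(findings[i], float(scores[i])) for i in best]

# A new vulnerability class emerges; check whether we've seen it before.
print(similar_findings("user-controlled string concatenated into SQL query"))
```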
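The second strategy's RAG approach builds on the same idea: retrieve the organisation-specific anti-patterns most relevant to a change and prepend them to the review prompt. In this sketch, retrieve() is a stand-in for the embedding search above, and the prompt wording, sample standards, and model name are illustrative assumptions.

```python
# Minimal RAG sketch: ground an LLM code review in organisation-specific
# anti-patterns. retrieve() stands in for the embedding search shown above.
from openai import OpenAI

client = OpenAI()

ANTI_PATTERNS = [
    "Never build SQL by string concatenation; use parameterised queries.",
    "All internal APIs must verify JWT signatures against the org keyset.",
]

def retrieve(diff: str, top_k: int = 2) -> list[str]:
    # Placeholder: in practice, embed the diff and search a vector store
    # of your standards, as in the previous sketch.
    return ANTI_PATTERNS[:top_k]

def review(diff: str) -> str:
    context = "\n".join(f"- {rule}" for rule in retrieve(diff))
    prompt = (
        "Review this diff for security issues. Apply these organisation "
        f"standards:\n{context}\n\nDiff:\n{diff}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(review('+ query = "SELECT * FROM users WHERE name = \'" + name + "\'"'))
```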
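For the toolchain strategy, a small CI step can run that review on every pull request and post the result as a comment via GitHub's REST API. The environment variable names are assumptions about the pipeline, and review() refers to the hypothetical function from the previous sketch.

```python
# Minimal CI sketch: post AI review findings back to the pull request.
# Assumes GITHUB_TOKEN, REPO (e.g. "org/repo"), and PR_NUMBER are set by
# the pipeline, and reuses the hypothetical review() from the RAG sketch.
import os
import subprocess
import requests

diff = subprocess.run(
    ["git", "diff", "origin/main...HEAD"], capture_output=True, text=True
).stdout

findings = review(diff)  # the RAG-grounded reviewer sketched above

resp = requests.post(
    f"https://api.github.com/repos/{os.environ['REPO']}/issues/"
    f"{os.environ['PR_NUMBER']}/comments",
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    json={"body": f"### AI security review\n\n{findings}"},
)
resp.raise_for_status()
```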
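Finally, for the SAST strategy, an LLM pass over raw findings can separate likely true positives from noise before anything reaches a developer. The finding structure, prompt, and model name below are illustrative assumptions, not any particular SAST tool's API.

```python
# Minimal sketch: LLM triage of raw SAST findings. Finding structure,
# prompt, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def triage(finding: dict) -> bool:
    """Return True if the model judges the finding a likely true positive."""
    prompt = (
        "A SAST tool flagged this code. Considering the data flow and "
        "context, answer only TRUE_POSITIVE or FALSE_POSITIVE.\n\n"
        f"Rule: {finding['rule']}\nFile: {finding['file']}\n"
        f"Snippet:\n{finding['snippet']}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model
        messages=[{"role": "user", "content": prompt}],
    )
    # Naive parsing; a production triage loop would use structured output.
    return "TRUE_POSITIVE" in resp.choices[0].message.content.upper()

finding = {
    "rule": "sql-injection",
    "file": "orders.py",
    "snippet": 'cursor.execute("SELECT * FROM orders WHERE id=" + order_id)',
}
print(triage(finding))
```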
Security prioritisation for Australia’s AI-driven development era
Australian security teams face a pivotal moment. The old approach of adding more people to code reviews doesn’t work when AI-assisted development is pushing release cycles beyond what human reviewers alone can secure. AI is the only way security can scale with the speed of modern development.
But this shift will not happen by accident. Security leaders need to proactively redirect their teams’ focus, redesign workflows, and rethink which skills matter when humans and AI collaborate.
Australian organisations that invest in this transition early will emerge with stronger security postures, lower costs, and faster shipping cycles. The window to get ahead of this curve is still open, but it's closing fast.