Artificial intelligence is transforming workplaces and work practices across Australia, but while the speed of adoption is rising, so is the risk associated with a lack of security preparedness.
That’s the bottom line of cyber security firm Proofpoint’s 2026 AI and Human Risk Landscape Report, which reveals that while four out of five Australian organisations have deployed AI assistants beyond the pilot stage, 60 per cent are not confident their security controls would catch a compromised AI system.
“We’re already seeing Australian organisations grappling with the threats posed by AI, and particularly agentic adoption,” Adrian Covich, vice president of systems engineering for Proofpoint in APJ, said in a statement.
Covich said the recent exposure of sensitive NSW government agency information via ChatGPT was a perfect example of the kind of risk organisations face on their AI adoption journey.
“Australian organisations are scaling AI quickly, with huge potential for productivity gains; however, this unpreparedness carries real consequences. Without a significant change in the security posture of AI systems, these kinds of breaches are likely to become much more commonplace,” he said.
According to the report, which polled more than 1,400 security professionals across 12 countries, including Australia, the current pace of AI adoption is outstripping government frameworks while also expanding the attack surface.
Email remains the most common attack vector in the country; however, SaaS and cloud applications are catching up, with AI assistants and agents following closely behind.
Even among organisations that do have adequate security controls in place, 44 per cent still reported AI-related incidents. Fifty-one per cent of organisations reported visibility gaps into agent activity, and issues with training and governance across multiple teams were also flagged.
“While AI has introduced new risks, such as prompt injection, its bigger impact has been amplifying the risks we’ve always had. Running untrusted code, mishandling sensitive data, and losing control of credentials are the same challenges that humans have created for decades. AI executes them at machine speed and scale. When organisations hand AI the keys to act on their behalf – across customers, partners, and internal systems – the blast radius of any one of those failures grows dramatically,” Covich said.
“The answer isn’t to treat AI as a novel threat category, but to apply rigorous, proven controls to what AI touches, what it runs, and what it’s allowed to authenticate as. Organisations that get that foundation right early will scale AI confidently. Those that don’t are just automating their own exposure.”
You can read the full 2026 AI and Human Risk Landscape Report here.
David Hollingworth
David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.