
Your staff will click: why cyber security must be engineered, not trained

Phishing click rates in Australia have grown 140 per cent in 12 months. Training alone will not fix the problem. The answer lies in engineering controls that make the click irrelevant.

By Tim Redhead, dotSec Tue, 17 Mar 2026

Every organisation with a compliance obligation has invested in cyber security awareness training. Quarterly modules, lunchroom posters, simulated phishing campaigns. The logic is straightforward: teach people to spot the threat, and they will not click.

The data tells a different story. According to the Netskope Threat Labs Report for Australia (August 2025), the rate of Australian workers clicking on phishing links grew by 140 per cent in the preceding 12 months, from 0.5 per cent to 1.2 per cent of all users per month. In a firm of 200 staff, that translates to roughly two staff clicking a malicious link every month. The Verizon 2024 Data Breach Investigations Report puts the median time from opening a phishing email to entering credentials at under 60 seconds: 21 seconds to click, and a further 28 seconds to hand over the keys.
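The arithmetic behind the 200-staff figure is worth making explicit. A minimal back-of-envelope calculation, using the Netskope rate quoted above:

```python
# Back-of-envelope estimate from the Netskope figures quoted above.
monthly_click_rate = 0.012   # 1.2 per cent of users click a phishing link per month
staff = 200

expected_clicks = staff * monthly_click_rate
print(expected_clicks)       # 2.4 clicks, i.e. roughly two a month
```

Even at the older 0.5 per cent rate, the same firm would still expect one click a month; the control question is what happens after that click, not whether it occurs.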

For practitioners advising regulated entities, those numbers should prompt a direct question: if training is the primary control, and click rates are rising, what is going wrong?

The phishing quality problem

It’s become a cliché: AI has materially changed the economics of phishing. To some extent that’s true, but it’s not the whole story. A 2024 study by Heiding et al. at the Harvard Kennedy School found that fully AI-automated spear phishing emails achieved a 54 per cent click-through rate, compared with 12 per cent for traditional human-written templates. The AI-generated messages were grammatically correct, contextually tailored, and raised fewer of the red flags that awareness training teaches people to look for. The implication is that the observable cues training programmes rely on (broken grammar, suspicious sender addresses, generic greetings) are disappearing from the threat landscape.

But here is the uncomfortable part: as the two screenshots in this article show, organisations do not need to face AI-crafted attacks to be compromised. In practice, users still click on emails with broken grammar, mismatched sender fields, and credential-harvesting pages hosted on obviously suspicious domains. The problem is not solely one of sophistication. It is one of volume, fatigue, and the basic psychology of how people process information under time pressure.

Why training cannot solve the problem alone

Modern employees process hundreds of signals per day across email, messaging platforms, and collaboration tools. Cognitive economy, the brain’s tendency to conserve effort by relying on heuristics rather than deliberate analysis, means that when an email triggers urgency or authority (“Action Required: Overdue Invoice”, “HR: Salary Update”), the emotional response overrides the analytical one. The amygdala fires before the frontal cortex catches up.

Training can improve awareness. It cannot reprogram human neurology. A car manufacturer does not train drivers never to crash; it installs airbags. The same principle applies here.

The cloud does not solve this

A common objection is that cloud-hosted environments shift this risk to the provider. This misunderstands the shared responsibility model. SaaS providers guarantee infrastructure availability within defined limits. They do not guarantee that a compromised account will not be used to encrypt files, exfiltrate data, or delete backups. Industry data indicates that the average downtime following a ransomware attack is now 24 days, much of it attributable to the practical difficulty of restoring large volumes of data through cloud APIs.

Engineering the controls that training cannot provide

If users will click, and the evidence says they will, then the question for boards and compliance teams is whether the environment is engineered to contain the consequences.

Two controls are particularly relevant. Secure web gateways inspect and filter all outbound traffic in real time, blocking connections to newly registered domains, known phishing infrastructure, and pages hosting credential harvesting kits. When a user clicks a malicious link, they see a block page rather than a login form. The mistake is contained before it causes harm.
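The gateway logic described above amounts to a policy decision made on every outbound request. The sketch below is illustrative only: the domain names, blocklist and age threshold are hypothetical, and a real secure web gateway resolves registration dates and known-bad infrastructure from live WHOIS/RDAP and threat-intelligence feeds rather than hard-coded values.

```python
from datetime import date

# Hypothetical known phishing infrastructure; real gateways use live feeds.
BLOCKLIST = {"login-secure-payroll.example"}
# Treat newly registered domains as hostile until proven otherwise.
MIN_DOMAIN_AGE_DAYS = 30

def gateway_verdict(domain: str, registered_on: date, today: date) -> str:
    """Return a block/allow decision for an outbound web request."""
    if domain in BLOCKLIST:
        return "block: known phishing infrastructure"
    if (today - registered_on).days < MIN_DOMAIN_AGE_DAYS:
        return "block: newly registered domain"
    return "allow"

# A blocklisted credential-harvesting host is stopped before the login form loads.
print(gateway_verdict("login-secure-payroll.example", date(2026, 3, 1), date(2026, 3, 17)))
# A long-established domain passes through.
print(gateway_verdict("example.com", date(1995, 8, 14), date(2026, 3, 17)))
```

The user who clicks sees the block page either way; the ordering of the checks simply means known-bad infrastructure is reported as such even when the domain is also new.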

Phishing-resistant authentication, using FIDO2 security keys or Windows Hello for Business, addresses the scenario where a gateway misses the threat and the user reaches a fake login page. These technologies bind the authentication response to the domain the user is actually visiting. If the user is on a spoofed site, the cryptographic handshake fails. There is no password transmitted over the wire, and no session token for an attacker to capture, even if the user actively cooperates with the phishing page.
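The domain-binding property can be illustrated conceptually. In the sketch below, an HMAC stands in for the authenticator's asymmetric signature, and the origin and hostnames are hypothetical; real FIDO2/WebAuthn authenticators sign client data that includes the origin reported by the browser, which is exactly what makes a spoofed site's assertion fail verification.

```python
import hashlib
import hmac

# The legitimate relying party's origin (hypothetical).
RP_ORIGIN = "https://portal.example.com"

def sign_assertion(key: bytes, challenge: bytes, origin_seen_by_browser: str) -> bytes:
    # The browser, not the user, supplies the origin inside the signed client
    # data, so the user cannot be tricked into signing for the wrong site.
    client_data = origin_seen_by_browser.encode() + challenge
    return hmac.new(key, client_data, hashlib.sha256).digest()

def server_verifies(key: bytes, challenge: bytes, signature: bytes) -> bool:
    # The server only ever checks against its own origin.
    expected = hmac.new(key, RP_ORIGIN.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

key, challenge = b"device-secret", b"server-nonce"
genuine = sign_assertion(key, challenge, RP_ORIGIN)
phished = sign_assertion(key, challenge, "https://portal-example.com.evil.example")
print(server_verifies(key, challenge, genuine))  # True
print(server_verifies(key, challenge, phished))  # False
```

Even a fully cooperative user on the phishing page produces an assertion bound to the wrong origin, so the handshake fails and the attacker captures nothing reusable.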

Of course, logging and monitoring, and incident detection, containment and response are also key, but we’ve written volumes about those controls in the past, so we won’t dwell on them further here.

The regulatory dimension

Many of the controls ASIC itemised in the FIIG proceedings (MFA, patch management, vulnerability scanning, tested incident response plans, continuous security event monitoring) are exactly the kinds of controls that reduce the risks associated with a successful phishing attack.

ASIC’s 2026 key issues outlook lists cyber security and operational resilience as explicit enforcement priorities. For APRA-regulated entities, CPS 230 adds a further layer of obligation. The direction of travel is clear: regulators increasingly expect evidence of implemented, maintained and monitored controls, not just policies and training records.

The practical question

The question is not whether staff will click. They will. The question is whether, when they do, the environment is engineered so that the click does not matter. For any organisation holding sensitive client data, the ability to answer that question with documented evidence is rapidly becoming a baseline regulatory expectation.


Written by: Tim Redhead

Tim Redhead is the founder of dotSec, an Australian cyber security consultancy that has been helping organisations to prioritise, implement, manage and monitor cyber security controls for over 25 years. Further information and recommendations regarding phishing risk are available at dotsec.com.
