The new AI will detect patterns in payment and transaction data to identify scams and fraud, then generate a process for disrupting them.
“When suspicious patterns are identified, the system quickly assesses their severity, analyses context, and proposes new detection rules to help intercept them. The new agent goes beyond traditional AI by not only rapidly identifying new threats but also determining how it can seek to disrupt them,” said James Roberts, executive general manager of fraud and scams at CBA.
“The agent operates around the clock, continuously monitoring activity and adapting to emerging threats.”
Roberts said the AI builds on CBA’s existing AI capabilities and advanced fraud protection systems and that the rules generated by the AI are reviewed and approved by CBA’s fraud analytics team before they are implemented.
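The workflow Roberts describes — flag a suspicious pattern, assess its severity, propose a detection rule, then gate it behind human approval — can be sketched roughly as below. This is a minimal illustration only; all class names, fields, and the severity threshold are hypothetical and not CBA's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    signal: str        # e.g. a card payment or digital banking event
    severity: float    # 0.0 (benign) .. 1.0 (critical)
    context: dict = field(default_factory=dict)  # surrounding metadata

@dataclass
class ProposedRule:
    description: str
    approved: bool = False

SEVERITY_THRESHOLD = 0.7  # illustrative cut-off, not a real value

def propose_rules(alerts):
    """For each sufficiently severe alert, draft a candidate detection rule."""
    return [
        ProposedRule(f"intercept pattern observed in {a.signal}")
        for a in alerts
        if a.severity >= SEVERITY_THRESHOLD
    ]

def human_review(rules, approve):
    """Proposed rules only take effect once an analyst approves them."""
    for rule in rules:
        rule.approved = approve(rule)
    return [r for r in rules if r.approved]
```

The key design point mirrored here is the human-in-the-loop gate: the agent can only propose rules, and nothing is deployed until `human_review` passes it.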
CBA has extensive payment data, monitoring over 80 million signals daily across card and online payments, transactions and digital banking channels.
Additionally, the bank processes over 20 million payments on average a day and sends over 40,000 proactive warning notifications to customers through its app.
All of this work has reduced fraud losses by more than 20 per cent in the first half of the 2026 fiscal year compared with the same period the previous year, according to the bank.
CBA said the AI agent has played a role in updating and creating three-quarters of the bank’s card fraud rules.
“The technology allows us to identify unusual events in highly complex patterns of activity at far greater speed and scale, helping us detect emerging threats sooner and update our controls faster,” Roberts said.
The bank, which has pushed heavily for AI implementation over the last 12 months, said the new agent is part of its $1 billion annual commitment to preventing fraud, scams, financial crime and cyber crime.
CBA’s other defensive AI agents
Just last month, CBA announced a pair of AI agents: one to support cyber security teams in threat hunting, and a response agent to collect information and help inform decisions.
At the Gartner Security and Risk Management Summit, CBA general manager of cyber defence operations Andrew Pade said the threat-hunting agent began development roughly a year ago.
While ready-made solutions procured from vendors were usually preferred because they didn't need to be watered and fed, Pade said there was a "gap between an emerging threat" and ready-made products.
“That gap is the area of our greatest risk … we don’t want to encounter an issue that we have to wait for a vendor to provide a solution for,” he said.
“I’m not waiting for someone to solve our problems. We are the ones to solve our own problems.”
The threat-hunting agent generates hypotheses and theories about potential threats before a cyber incident occurs; it then searches applications and environments for evidence of those threats and returns its findings to analysts "for peer review".
The second, which is being referred to as the response agent, helps collate data and context to advise on whether activity and other signs indicate malware or other hacker activity.
“When you think about your blue teams, there’s a general flow of detection, triage, analysis and response,” Pade said.
“I don’t know if people have seen what analysts do, but it’s quite monotonous, and it’s not just packaged beautifully for them to go and do the triage. They have to actually work it out and build that context.
“Our AI response agent builds that for them, and … lays it out for them.”
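The blue-team flow Pade outlines — detection, triage, analysis, response — with the response agent assembling context for the analyst, could be sketched along these lines. All function names, the anomaly heuristic, and the data sources are illustrative assumptions, not a description of CBA's implementation.

```python
def detect(events):
    """Detection: flag events marked anomalous (toy stand-in heuristic)."""
    return [e for e in events if e.get("anomalous")]

def build_context(event, sources):
    """The 'response agent' step: collate related data so the analyst
    doesn't have to build that context by hand."""
    return {
        "event": event,
        "related": [lookup(event) for lookup in sources],
    }

def triage(detections, sources):
    """Triage: package each detection with its context, laid out and
    ready for analysis and response."""
    return [build_context(e, sources) for e in detections]
```

A usage sketch: `triage(detect(events), [dns_lookup, auth_log_lookup])` would hand the analyst one context bundle per detection, which is the monotonous collation work Pade says the agent now does for them.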
Together, Pade said, the two agentic AI bots have reduced response times.