Op-Ed: Software's agentic future is less than 3 years away – Australia’s CISOs can start securing it now

The agentic AI clock is already ticking, but now is the perfect opportunity for Australian organisations to get ahead of the curve – and stay ahead.


A survey of Australian C-level executives by GitLab found that the vast majority (90 per cent) believe agentic AI will become the industry standard for software development within the next three years.

While agentic AI offers significant opportunities, it also presents an industry-wide security challenge, with 85 per cent of respondents warning that the technology will create unprecedented security risks.

CISOs in Australia must navigate the complexity of supporting AI adoption, while simultaneously finding ways to minimise the technology’s emerging security risks. With 90 per cent of executives planning to increase AI investment in software development over the next 18 months, every new AI breakthrough further raises the stakes.

Gaps in AI governance add complexity to agentic adoption

Most security leaders in Australia are well aware that the biggest agentic AI risks include data privacy and security (52 per cent), cyber security threats (42 per cent), and maintaining governance (41 per cent). The landscape and even definitions of these risks are evolving and deeply intertwined.

The problem isn’t awareness; it’s action. Organisations need an AI governance model if their security strategy is to evolve alongside emerging AI risks. Building one is not straightforward, however, because AI cuts across many technology and security domains, from data governance to identity and access management. Nevertheless, a large share of those surveyed admitted their organisation has implemented neither regulatory-aligned governance (47 per cent) nor internal policies (59 per cent) for AI.

The lag in AI governance stems from legitimate industry-wide challenges, which make it difficult for leaders to identify where their time and effort are best invested. The non-deterministic nature of agents means they can behave in unexpected ways that disrupt existing security boundaries. Security complexity is also increasing with the introduction of universal protocols, such as Model Context Protocol and Agent2Agent, which simplify data access and improve agent interoperability across growing agent ecosystems.

But these challenges should not stop security leaders from prioritising AI governance. Anyone waiting for comprehensive best practices for this fast-moving technology will be playing a perpetual game of catch-up, and any organisation that avoids AI adoption altogether will still be exposed to AI risk through its vendors and through shadow AI usage in its environment.

Preparing Australian CISOs for software’s agentic future

The time to prepare for AI agents is now, and CISOs can start by establishing AI observability capable of tracking, auditing, and attributing agentic behaviours across environments. Here are a few steps CISOs can take today to reduce AI risk and improve governance:

Establish identity policies that attribute agent actions
As AI systems proliferate, tracking and securing these non-human identities becomes just as important as managing human user access. One way to achieve this is through composite identities, which link an AI agent’s identity with that of the human user directing it. That way, when an AI agent attempts to access a resource, you can authenticate and authorise the agent while clearly attributing its activity to the responsible human user, as in the sketch below.
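To make the idea concrete, here is a minimal Python sketch of composite-identity attribution. The names (CompositeIdentity, authorise, the scope strings and agent IDs) are illustrative assumptions, not any particular vendor’s API; the point is simply that every agent action carries both the agent identity and the delegating human, and that both appear in the audit trail.

```python
# Illustrative sketch only: a composite identity binds an agent's identity to the
# human principal who delegated it, so every action can be authenticated,
# authorised against an explicit scope, and attributed to a person.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class CompositeIdentity:
    agent_id: str         # non-human identity of the AI agent
    human_principal: str  # the user who directed / delegated to the agent
    scopes: frozenset     # explicit permissions granted to this delegation


def authorise(identity: CompositeIdentity, resource: str, action: str, audit_log: list) -> bool:
    """Allow the action only if the delegation scope covers it, and always record
    who (agent AND human) attempted what, for later attribution."""
    allowed = f"{action}:{resource}" in identity.scopes
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": identity.agent_id,
        "on_behalf_of": identity.human_principal,
        "resource": resource,
        "action": action,
        "allowed": allowed,
    })
    return allowed


# Example: a coding agent acting on behalf of a developer.
audit_log = []
ident = CompositeIdentity(
    agent_id="agent:code-review-bot",
    human_principal="user:jane.doe",
    scopes=frozenset({"read:repo/payments", "comment:merge_requests"}),
)
authorise(ident, "repo/payments", "read", audit_log)   # allowed, attributed to jane.doe
authorise(ident, "prod-database", "write", audit_log)  # denied, but still logged and attributed
```

The design choice to log denied attempts as well as allowed ones matters: unexpected agent behaviour usually shows up first as requests outside the delegated scope.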

Adopt comprehensive monitoring frameworks
Operations, development, and security teams need ways to monitor the activities of AI agents across multiple workflows, processes, and systems. It’s not enough to know what an agent is doing in your codebase. You also need to be able to monitor its activity in both staging and production environments, as well as in the associated databases and any applications it accesses.
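As a rough illustration of the kind of telemetry that makes agent activity auditable across environments, the sketch below uses Python’s standard logging module to emit one structured event per agent action. The record_agent_event helper and its field names are assumptions for illustration, not a reference to any specific monitoring product.

```python
# Illustrative sketch: emit one structured event per agent action, tagged with the
# environment and target system, so activity can be correlated across the codebase,
# staging, production, and any databases or applications the agent touches.
import json
import logging

logger = logging.getLogger("agent_activity")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def record_agent_event(agent_id: str, environment: str, target: str,
                       action: str, human_principal: str, **details) -> None:
    """Write a single structured audit event that downstream monitoring,
    SIEM, or alerting pipelines can ingest and correlate."""
    event = {
        "agent_id": agent_id,
        "on_behalf_of": human_principal,
        "environment": environment,  # e.g. "codebase", "staging", "production"
        "target": target,            # e.g. repo, database, downstream application
        "action": action,
        **details,
    }
    logger.info(json.dumps(event))


# The same agent observed across very different surfaces:
record_agent_event("agent:deploy-helper", "codebase", "repo/payments",
                   "open_merge_request", "user:jane.doe", branch="fix/tls-config")
record_agent_event("agent:deploy-helper", "staging", "orders-db",
                   "run_migration", "user:jane.doe", migration="0042_add_index")
record_agent_event("agent:deploy-helper", "production", "billing-api",
                   "call_endpoint", "user:jane.doe", endpoint="/v1/invoices")
```

Keeping the event shape identical across environments is what lets security teams answer the question the paragraph above poses: what did this agent do everywhere, not just in the repository.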

Upskill technical teams
A culture of security now requires AI literacy. Among respondents in Australia, 35 per cent acknowledged a widening AI skills gap. This gap is likely to grow unless technical leaders prioritise upskilling teams to understand model behaviour, prompt engineering, and how to critically evaluate model inputs and outputs.

Understanding where models are performant versus where their use is suboptimal helps teams avoid unnecessary security risk and technical debt. For example, a model trained on anti-patterns will perform well at detecting those patterns, but will not be effective against logic bugs it has never encountered before. Teams should also recognise that no model can replace human expertise. If the model performs suboptimally in an area a security engineer or developer is less familiar with, they will not be able to identify the security gaps the model has left behind.

CISOs should consider dedicating a portion of learning and development budgets to continuous technical education. This fosters AI security expertise in-house, allowing newly minted AI champions to educate their peers and reinforce best practices.

AI risks are real, but so is the opportunity

When AI is monitored and used in the right way, executives see it improving security: 41 per cent of respondents ranked security as the top area where AI can add value in software development. Used as an accelerant rather than a replacement for expertise, AI can democratise security knowledge across development teams by automating routine security tasks, providing smart coding recommendations, and surfacing security context directly within developers’ workflows. For example, AI can explain a vulnerability so that developers can fix it quickly without waiting for the security team to provide the same context. The net result is improved security outcomes, reduced risk, and the shared understanding needed for better collaboration between developers and their security peers.

The Australian organisations that thrive won’t be the ones that avoid AI, nor the ones that rush in blindly. The advantage will go to those that build security into their AI strategy from day one. Establishing basic controls now, even if imperfect, will help teams adjust quickly as the landscape changes.

If the survey respondents are right, the three-year clock is already ticking. Executives who focus on the right AI use cases will reduce their security risk and gain a competitive edge.
