Jamie Humphrey, General Manager, Infrastructure Solutions Group, Dell Technologies, Australia and New Zealand
As organisations embrace AI, they can fall victim to myths that make securing AI seem more complex than it truly is.
With cyber risk topping the agenda for many Australian organisations, understanding these myths is critical. Recent findings from PwC's 2025 Global Digital Trust Insights report revealed that 67 per cent of Australian organisations identified cyber risk as their number one priority over the next year, compared with 57 per cent of organisations globally.
As AI systems become more embedded across operations, organisations see their attack surface grow and become increasingly difficult to manage. The truth is that safeguarding AI doesn't require a complex overhaul of existing infrastructure or security frameworks. It starts with applying foundational cybersecurity principles and adapting them to the risks and behaviours of AI systems.
Myth 1: AI systems are too complex to secure
Threat actors are using AI to enhance a variety of attack types, from ransomware to zero-day exploits and even distributed denial of service (DDoS) attacks. They can also exploit unprotected AI systems to manipulate outcomes or escalate privileges, resulting in a broader attack surface. There's a misconception that these risks make AI systems too complex to secure.
Truth:
Yes, AI comes with risks, but they can be managed by reinforcing current cybersecurity practices and adapting them to AI-specific threats. Organisations can strengthen their defences by extending proven controls, such as identity management, network segmentation and data protection, to their AI workloads.
Myth 2: No existing tools will secure AI
Organisations may feel they have to adopt new security solutions and tools to secure their AI systems because AI is a newer, rapidly evolving workload. As a result, there’s a misconception that none of an organisation’s existing tools will secure AI.
Truth:
Securing AI systems doesn't require abandoning current cybersecurity investments. AI may be a different workload with unique elements, but it still benefits from foundational security measures like identity management, network segmentation and data protection. Maintaining strong cyber hygiene through regular system patching, access control and vulnerability management remains essential.
To address AI-specific threats like prompt injection or compromised training data, organisations can tailor their current cybersecurity strategies rather than replace them. For example, regularly logging and auditing Large Language Model (LLM) inputs and outputs can help spot unusual activity or malicious use.
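A minimal sketch of what such logging and auditing could look like in practice is below. The LLM client and its complete() method are hypothetical placeholders, and the prompt-injection patterns are illustrative only, not a vetted detection list.

import logging
import re
from datetime import datetime, timezone

# Minimal sketch: wrap calls to a (hypothetical) LLM client with structured
# audit logging and a simple heuristic check for prompt-injection patterns.
audit_log = logging.getLogger("llm_audit")
logging.basicConfig(level=logging.INFO)

SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",   # illustrative pattern only
    r"reveal (the )?system prompt",
]

def flag_suspicious(text: str) -> bool:
    """Return True if the text matches a known prompt-injection pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

def audited_completion(llm_client, user_id: str, prompt: str) -> str:
    """Call the LLM and log both input and output metadata for later review."""
    if flag_suspicious(prompt):
        audit_log.warning("Possible prompt injection from %s: %r", user_id, prompt)

    response = llm_client.complete(prompt)  # hypothetical client method

    audit_log.info(
        "ts=%s user=%s prompt_len=%d response_len=%d",
        datetime.now(timezone.utc).isoformat(), user_id, len(prompt), len(response),
    )
    return response

Logs like these give security teams a trail of inputs and outputs that can be reviewed for unusual activity or malicious use, without replacing any existing monitoring tooling.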
To secure AI, organisations should start by mapping how their existing architecture and tools cover AI workloads. That review highlights where extra capabilities are needed to address AI risks, such as tools to monitor AI outputs, govern decisions and prevent unwanted actions.
Myth 3: Securing AI is only about protecting data
LLMs operate by analysing large volumes of data and generating outputs based on the patterns they learn. Since AI uses and generates so much data, there's a misconception that securing it is just about protecting data.
Truth:
Securing AI goes beyond protecting data alone. While safeguarding inputs and outputs is essential, securing AI involves the entire AI ecosystem, including models, Application Programming Interfaces (APIs), systems and devices. LLMs, for example, are vulnerable to attacks that manipulate input data to produce misleading or harmful outputs. Addressing this risk requires tools and procedures to manage compliance policies and check AI inputs and outputs for safe responses. APIs, which serve as gateways to AI functionality, must be secured with strong authentication to block unauthorised access. And because AI systems continuously generate outputs, organisations need to monitor for anomalies or patterns that could indicate a breach or malfunction. By expanding the focus beyond data, organisations can build a more resilient and trustworthy AI environment.
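As a rough illustration of combining API authentication with output checks, the sketch below gates requests behind a shared key and screens responses against a simple block list. The environment variable name, the blocked terms and the model_call callable are all assumptions for the example, not a production design.

import hmac
import os

# Minimal sketch, not a production gateway: require a shared API key before a
# request reaches the model, and screen responses against a simple block list.
API_KEY = os.environ.get("AI_GATEWAY_API_KEY", "")  # hypothetical env variable

BLOCKED_OUTPUT_TERMS = ["internal-only", "credential"]  # placeholder policy terms

def is_authorised(presented_key: str) -> bool:
    """Constant-time comparison of the caller's key against the gateway key."""
    return bool(API_KEY) and hmac.compare_digest(presented_key, API_KEY)

def passes_output_policy(response: str) -> bool:
    """Reject responses containing terms the compliance policy disallows."""
    lowered = response.lower()
    return not any(term in lowered for term in BLOCKED_OUTPUT_TERMS)

def handle_request(presented_key: str, prompt: str, model_call) -> str:
    if not is_authorised(presented_key):
        raise PermissionError("Unauthorised call to AI endpoint")
    response = model_call(prompt)  # any callable that returns the model's text
    if not passes_output_policy(response):
        return "[response withheld by policy check]"
    return response

In a real deployment, the authentication step would typically sit in an existing API gateway or identity platform, which is consistent with reusing current investments rather than replacing them.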
Myth 4: Agentic AI will ultimately replace the need for human oversight
Agentic AI introduces autonomous agents that make decisions independently. Because of this autonomy, there's a misconception that agentic AI will ultimately replace the need for human oversight.
Truth:
Agentic AI systems, which operate with a degree of autonomy, still need governance to ensure they act ethically, predictably and in line with human values. Without human oversight, these systems risk deviating from assigned goals or exhibiting unintended and potentially harmful behaviours. To prevent misuse and ensure responsible deployment, organisations should set AI boundaries, use layered controls and involve humans in critical decisions. Regular audits and thorough testing are also essential to increase transparency and accountability across AI operations. Human oversight is foundational to safe and effective agentic AI.
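One way to picture those boundaries and layered controls is a simple human-in-the-loop gate, sketched below under the assumption of a hypothetical allow list of low-risk actions; anything outside it is queued for a person to approve before it runs.

# Minimal sketch of a human-in-the-loop gate for agentic AI: actions outside a
# low-risk allow list are queued for human approval before they execute.
from dataclasses import dataclass, field
from typing import Callable

LOW_RISK_ACTIONS = {"read_report", "summarise_document"}  # illustrative boundary

@dataclass
class AgentAction:
    name: str
    execute: Callable[[], str]

@dataclass
class OversightGate:
    pending_review: list = field(default_factory=list)

    def submit(self, action: AgentAction) -> str:
        if action.name in LOW_RISK_ACTIONS:
            return action.execute()          # autonomous path within set boundaries
        self.pending_review.append(action)   # layered control: a human decides
        return f"'{action.name}' queued for human approval"

    def approve(self, index: int) -> str:
        """A reviewer explicitly releases a queued action."""
        return self.pending_review.pop(index).execute()

The design choice here is that autonomy is the exception, granted only inside explicit boundaries, while every higher-risk action leaves an approval record that supports the audits and accountability described above.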
AI-enhanced threats may seem daunting, but the path to securing AI is more familiar than it may appear. Grounding security strategies in cybersecurity principles and adapting them to AI risks helps organisations build confidence and resilience without unnecessary complexity or cost. Many existing tools and practices can extend to protect AI systems, saving time, reducing risk and maximising existing investments.
Debunking these myths isn’t just about correcting misconceptions; it’s about empowering teams to take informed, proactive steps toward responsible AI adoption.
The future of AI is here, and organisations must be prepared to secure it.