AI may be an exciting revolution, but its rapid uptake by individual workers brings with it a fundamental risk.
AI has quickly become a core driver of organisational productivity across Australia, revolutionising how businesses operate, automate processes, and complete tasks.
From copilots that summarise dense content into digestible actions, to chatbots that draft email responses, to LLMs that spark ideation, organisations are implementing the technology to work more efficiently. But as innovation accelerates, governance has struggled to keep up. For organisations that lack strong oversight of employees’ day-to-day work, this has created a new challenge – shadow AI.
Shadow AI refers to the use of artificial intelligence without the knowledge, approval, or regulation of an organisation’s IT or security team. It occurs when employees adopt publicly available AI tools or free online models outside of established controls. Because AI tools are often embedded in everyday software, shadow AI can go unnoticed and grow quickly, creating blind spots for organisations.
While most shadow AI usage isn’t malicious – often it is simply employees trying to be more efficient – it opens the door to a number of cyber risks that employees may not even consider. As such, organisations cannot afford to ignore shadow AI; it must be treated as a risk to be actively managed.
Ignorance is not bliss
The Sophos Future of Cybersecurity in Asia Pacific and Japan 2025 report reveals that nearly one-third (32 per cent) of Australian organisations have reported shadow AI use by employees – a significant share of organisations whose staff are experimenting with powerful technology without proper governance or oversight. Given the report also found that 30 per cent of Australian organisations still don’t have a formal AI strategy, the risk becomes even more pressing.
Because shadow AI tools have not been approved by IT leaders, they also have not been vetted for security, privacy, or data-handling practices, significantly expanding an organisation’s attack surface. Sensitive company data, customer records, or intellectual property can end up being fed into public AI models.
In other cases, shadow AI tools may even be infected with malware, which can then spread into the organisation’s environment. Concerningly, Sophos’ report found that 31 per cent of organisations across APAC had discovered a vulnerability in an AI tool they were using, potentially exposing the organisation.
In highly regulated industries such as finance, telecommunications, the public sector, and critical infrastructure, poor data handling – made more likely by improper AI use – also becomes a compliance challenge. Businesses face increasing scrutiny to uphold strong data protection, and those that fall short can find themselves subject to significant fines, as seen recently when Australian Clinical Labs paid $5.8 million in civil penalties over a preventable data breach.
It is therefore imperative that organisations seeking to improve data handling for employees and customers evaluate shadow AI usage within their business and develop a strategy to mitigate it.
More visibility means more safety
Organisations need to consider how they can strike a balance between AI innovation and governance. To mitigate the cyber risks of shadow AI and ensure responsible care of data, Australian businesses should prioritise:
Improving visibility: A strong AI governance framework needs to follow zero-trust thinking and rely on constant oversight. Organisations must have visibility into who is using AI tools, what data is being accessed, and how that information moves through systems (a simple illustration follows this list). Since AI creates a new and complex attack surface, protection must extend across every layer – including data, identities, endpoints, and user behaviour.
Making AI policies practical: Many companies have drafted AI policies, but those documents alone don’t create real change. What’s needed are awareness programs that do more than outline technical rules. Employees should be equipped and trained to recognise when they’re engaging with external AI tools and understand that data governance is essential to protecting the organisation, not just an administrative requirement.
Leading from the top: Blanket prohibitions rarely work, as they tend to push AI usage out of sight rather than stopping it. Leaders, especially CISOs and technology decision-makers, should instead guide teams toward sanctioned, secure, and properly monitored AI solutions. Shadow AI thrives when innovation is constrained or when IT is viewed as an obstacle. Businesses should reverse this dynamic by encouraging responsible experimentation while still setting firm boundaries and expectations around how AI should be used.
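To make the visibility point concrete, here is a minimal sketch of one way a security team might surface shadow AI usage: reviewing web-proxy logs for traffic to well-known public AI services. The file name, CSV columns, and domain watchlist below are illustrative assumptions rather than any specific vendor’s schema – most secure web gateways can export equivalent data.

```python
# A minimal sketch: count each user's requests to well-known public AI
# services in a web-proxy log export. All names here (proxy_log.csv, the
# 'user'/'domain' columns, the watchlist) are illustrative assumptions.
import csv
from collections import Counter

# Hypothetical watchlist of public AI endpoints; a real deployment would
# maintain and update this list centrally.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests per user to domains on the AI watchlist."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            # Match the watchlisted domain itself or any of its subdomains.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    # "proxy_log.csv" is a placeholder for your gateway's log export.
    for user, count in flag_shadow_ai("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to public AI services")
```

Flagged counts like these are a starting point for conversations and sanctioned alternatives, not a basis for punishment – consistent with the leadership guidance above.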
As Australian businesses venture further into AI adoption, innovation cannot be seen as something that conflicts with security or governance – the two must work hand in hand. Shadow AI won’t disappear; human curiosity and the desire for faster, more efficient processes will continue. The real question is whether organisations are prepared to improve governance and visibility, or whether they will risk leaving the door open to data breaches and potential penalties.
Australian organisations that act now by building actionable frameworks, strengthening oversight, and holding employees accountable will not only reduce risk but also harness AI’s potential to transform processes and operations across the business.
AI is not the enemy; unmonitored AI is.