Authored by the ASD’s Australian Cyber Security Centre (ACSC) in collaboration with the Council of Small Business Organisations Australia (COSBOA) and the New Zealand National Cyber Security Centre (NCSC-NZ), the new publication outlines the risks of adopting AI in a business environment.
“More small businesses are using AI through applications, websites and enterprise systems hosted in the public cloud, such as OpenAI’s ChatGPT, Google Gemini, Anthropic’s Claude, and Microsoft Copilot. AI adoption is growing fast in Australia, and data from the Department of Industry, Science and Resources (DISR) shows it rising every year,” said the ASD.
“As AI becomes part of small-business operations, understanding the related cyber security risks is essential. Small businesses must take proactive steps to protect data, customer privacy and business systems. Having strong cyber security practices is crucial to reducing risks in an evolving and complex emerging technology space.”
The ASD said three of the main risks small businesses face are data leaks and privacy breaches, supply chain vulnerabilities, and manipulated or unreliable AI outputs.
AI tools rely on large volumes of data and, in a business context, may need access to sensitive information to operate, for example, a tool handling rental property applications or healthcare records.
As a result, this data could be disclosed or reused in unexpected contexts, or absorbed into the broader pool of data used by popular generative AI chatbots like OpenAI’s ChatGPT or xAI’s Grok.
“Small businesses should carefully review the configuration settings, terms and conditions, and privacy policies of any AI platform they engage with to understand how their data may be collected, stored, and used,” the ASD said.
This has happened before. In October 2025, the NSW Reconstruction Authority (RA), the state agency responsible for mitigating damage from natural disasters, announced a “data breach” affecting thousands of applicants to the Northern Rivers Resilient Homes Program (RHP), which provides financial assistance to those looking to improve the flood resistance of their homes.
“The breach occurred when a former contractor of the RA uploaded data containing personal information to an unsecured AI tool which was not authorised by the department,” the NSW government said.
“There is no evidence that any information has been made public; however, Cyber Security NSW will continue to monitor the internet and the dark web to see if any of the information is accessible online.”
According to a NSW government release, the former contractor posted 10 columns and over 12,000 rows of data from a Microsoft Excel spreadsheet into ChatGPT.
Based on “early forensic analysis”, as many as 3,000 people may have been impacted, with data exposed to ChatGPT including names, addresses, email addresses, phone numbers, and personal and health data.
ASD advice
The new publication advises that to avoid privacy issues when using AI, businesses should manage risks by reviewing data management practices and securing proprietary information, reviewing the data handling practices of AI vendors, establishing an AI use policy that defines what data can and cannot be uploaded, training staff on responsible AI use, and removing or anonymising personal details before they are entered into an AI application.
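That last step can be partly automated. As a minimal sketch, not taken from the ASD publication, the following Python script shows one way a business might drop obviously personal columns and mask email addresses and phone numbers in a spreadsheet export before any of it is pasted into an AI tool. The column names and regex patterns are illustrative assumptions, and a scrub like this is a safety net, not a substitute for human review.

```python
import csv
import re

# Columns assumed to hold personal details in this hypothetical export;
# real spreadsheets will differ and should be reviewed by a person.
SENSITIVE_COLUMNS = {"name", "address", "email", "phone"}

# Best-effort patterns for emails and Australian-style phone numbers.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(?:\+?61|0)[\d\s-]{8,}")

def redact_cell(value: str) -> str:
    """Mask emails and phone numbers that appear in free-text cells."""
    value = EMAIL_RE.sub("[REDACTED EMAIL]", value)
    return PHONE_RE.sub("[REDACTED PHONE]", value)

def anonymise(in_path: str, out_path: str) -> None:
    """Copy a CSV, dropping known-sensitive columns and scrubbing the rest."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        kept = [c for c in reader.fieldnames if c.lower() not in SENSITIVE_COLUMNS]
        writer = csv.DictWriter(dst, fieldnames=kept)
        writer.writeheader()
        for row in reader:
            writer.writerow({c: redact_cell(row[c] or "") for c in kept})

if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    anonymise("applicants.csv", "applicants_anonymised.csv")
```

Pattern-based redaction will miss personal details written in unexpected formats, which is why the ASD's advice pairs it with an AI use policy and staff training rather than relying on tooling alone.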
When it comes to AI output manipulation, staff should verify AI outputs against third-party sources and flag any biased, unethical, inaccurate or irrelevant answers.
Humans should also remain part of the decision-making process, and AI models should be kept up to date.
The new publication includes a checklist to help small businesses ensure their AI practices are secure and that the technology is used responsibly while managing risk.
Daniel Croft