As beneficial as the productivity gains AI presents are to a business, when not implemented and handled properly, they can create a range of risks and vulnerabilities severe enough to end a business.
AI implementation done properly and safely can create major benefits for an organisation. However, done quickly and without proper thought and plans in place, it creates a number of risks, as Melissa Tan, Partner and Head of Cyber Insurance and Litigation at Lander & Rogers, outlines.
She says that AI implementation can lead to a number of risks including “Unauthorised exposure or disclosure of personal, proprietary and sensitive information outside of the intended environment.
“This can be caused by employees inputting sensitive, non-public data such as confidential customer data into the AI model, which may then be used for training by the AI model beyond the authorised use of the information provided, [or] shadow AI usage, where employees use unauthorised AI tools for work without adequate IT and security safeguards or approvals.
“Sometimes, when a business implements a particular AI product in the business, employees may assume all other AI products beyond the particular AI product adopted by the business can also be used, which results in the use of unauthorised AI tools.”
Alongside this, Tan says that having AI increases the general cyber attack surface by creating new AI-specific attack vectors, such as data poisoning, where a threat actor may inject false, biased or corrupted data into a business's AI model to alter its behaviour, causing it to fail, produce inaccurate results or even open dangerous backdoors.
“Insecure integration or implementation of AI agents within a business can also create more vulnerable entry points for unauthorised access to internal systems by malicious actors,” she added.
Finally, if a business is using a third-party AI, it faces supply-chain risk if the AI developer or provider is not properly vetted.
AI is also being utilised by threat actors, completely changing the threat landscape. This includes deepfakes used for phishing and social engineering, and even malicious GPT-style tools that can create malware.
Making matters worse, the technology is moving faster than the legislation, Tan says, particularly with deepfakes. While Australia and much of the world has deepfake legislation in place, regulatory gaps still remain.
“Advances in AI and machine learning have enabled the manipulation and generation of deepfakes which are extremely realistic and very difficult to detect by the naked eye. While not all AI-generated deepfakes are malicious, the potential for abuse has grown as deepfake tools become more widespread, cheap and accessible,” said Tan.
“When used maliciously, AI-generated deepfakes can potentially cause significant damage. AI-generated deepfakes can be used to create false pornographic material, malicious hoaxes, false news to perpetrate misinformation or political disinformation, identity theft and as part of cyber extortion. This can result in the person targeted experiencing financial loss, reputational damage, humiliation or shame.
“At this stage, there is no standalone AI-specific legislation that provides a holistic legal framework governing the challenges and risks posed by AI in general and AI-generated deepfakes specifically. Rather, the Australian Government has utilised the existing regulatory frameworks to manage risks posed by AI, including the challenges posed by AI-generated deepfakes.
“For instance, the criminal laws in Australia at the federal level have been amended to specifically address sexual deepfakes. The Criminal Code Amendment (Deepfake Sexual Material) Act 2024 (Cth) strengthened the Criminal Code Act 1995 (Cth) by introducing offences for the non-consensual creation and dissemination of sexually explicit material online, including material created or altered using generative AI. Such an offence carries a maximum prison sentence of six years.
“[Additionally], the criminal laws at the state level have been, or are in the process of being, updated to also address sexually explicit deepfakes specifically or AI-generated deepfakes more broadly.
“The NSW Government has introduced amendments to the Crimes Act 1900 (NSW) to make the production of a sexually explicit deepfake designed to be a genuine depiction of a real, identifiable person an offence punishable by up to three years’ jail. Sharing or threatening to share such images, even if the person hasn't created them, is also a crime punishable by up to three years’ jail. Further, the non-consensual creation, recording and distribution of sexually explicit audio, whether real or designed to sound like a real, identifiable person has also been criminalised. Existing court takedown powers will also apply to these offences.
“The South Australian Government has also introduced amendments to the Summary Offences Act 1953 (SA) via the Summary Offences (Humiliating, Degrading or Invasive Depictions) Amendment Act 2025 to make it an offence to use artificial intelligence or other digital technology to create "invasive, humiliating or degrading images" that either closely resemble, or purport to be, a real person. This extends beyond AI-generated sexually explicit material. Those found guilty could face fines of up to $20,000 or four years’ imprisonment.”
Tan adds that the Online Safety Act 2021 (Cth) gives the eSafety Commissioner the power to remove non-consensual, abusive or other harmful deepfakes, which can be used for phishing, ransom and more.
Additionally, the Office of the Australian Information Commissioner (OAIC) has confirmed that, in the context of the Privacy Act 1988 (Cth), “privacy obligations will apply to any personal information input into an AI system, as well as the output data generated by AI, where it contains personal information. Inferred, incorrect or artificially generated information produced by AI models (such as hallucinations and deepfakes), where it is about an identified or reasonably identifiable individual, constitutes personal information and must be handled in accordance with the APPs. The tort of serious invasions of privacy may also apply to AI-generated deepfakes as it deals with the misuse of information and intrusion upon seclusion, both of which may be relevant to the non-consensual creation and distribution of deepfakes.”
Deepfake creators and distributors may also be liable for defamation if they damage a person's reputation, while deepfakes used in marketing and advertising that misrepresent products or amount to false advertising could also breach Australian Consumer Law.
So what can Australian organisations do? On one hand, this technology can bring about massive benefits for an organisation, but how can that be balanced with the risks?
Tan says that while the above is frightening, AI technologies should not be feared.
“Australian companies can definitely leverage AI technologies while staying on the right side of privacy, defamation, and cybersecurity laws in the context of deepfakes. The balance will come from two things: the organisation’s mindset towards implementing AI technologies; and having robust AI governance and compliance frameworks with a good understanding of the AI-related risks involved, particularly deepfakes,” she said.
“The implementation of AI technologies should be approached with a GRC mindset, i.e. as a governance, risk and compliance issue, to ensure safe usage that is scalable and sustainable for business growth.
“It is not a tick-box compliance approach. It requires approaching the risks of AI and deepfakes in a proactive manner.”
This means having appropriate guardrails that outline the safe, acceptable uses of AI and those that are prohibited, embedding privacy-by-design into AI development to ensure personal information is not included, and leaning on the expertise of legal counsel and other experts. It also means managing defamation and reputational risks by “investing in tools and putting in place a process to adequately monitor and detect deepfakes, including AI-generated deepfakes that identify individuals, being transparent and disclosing the use of synthetic media, creating and maintaining a deepfake defence playbook with rapid takedown, correction and mitigation procedures in place which are appropriately documented, and most importantly, training staff and increasing awareness on the risks of deepfakes, how to identify them as best as possible, and the legal implications of creating and publishing deepfakes.”
Tan also advises on the best practices legal teams can implement to detect, document and pursue legal action against perpetrators of AI deepfake attacks and scams.
“Much like the best practices involved in detecting, documenting and pursuing legal action (if possible) against perpetrators of other cyber attacks, the best practices Australian legal teams should implement to detect, document and pursue legal action against perpetrators of AI deepfake attacks or scams are centred around having a legally defensible and evidence-based approach.
“[This means] working closely with the organisation's cybersecurity and IT teams to put in place appropriate, documented processes for monitoring, detecting and escalating suspected AI deepfake attacks or scams through appropriate communication channels. This will also require [that] appropriate awareness training programs are in place to ensure that employees know how to detect and escalate suspicious content. This assists with ensuring evidence is collected early and appropriately to assist in any investigation.”
Legal teams should also have a defensible process to collect and preserve evidence of AI deepfake scams and attacks for forensic investigations, a defence playbook of immediate mitigation measures that can be used during the investigation, including takedown notices and reports to platforms, and appropriate documentation, including incident timeline details, decisions and reasoning, copies of communications with perpetrators, law enforcement and platforms, copies of forensic reports and technical findings, and any evidence of harm.
“Once the evidence is collected and an investigation is underway or completed, the legal teams should have in place a structured approach to assessing the legal risks and exposures arising from the AI deepfake attack or scam, and identify any perpetrators against whom criminal or civil action can be taken if possible, any reports to law enforcement or regulators, or any other mitigation actions which should be taken,” she added.
“All of the above would be critical for any litigation, criminal investigation or insurance claims lodged following the AI deepfake attack or scam.”
Daniel Croft