Well, there used to be.
In Dune, there exists a universal mandate: “Thou shalt not make a machine in the likeness of a human mind.” This commandment arose shortly after a century-long crusade against computers, thinking machines, and conscious robots that almost led to the extinction of mankind across the cosmos.
While we may not be on the verge of our own crusade against AI, battles are being fought behind the scenes to shape domestic AI policy, through bodies such as the Artificial Intelligence Expert Group, part of the National AI Plan.
Here are just some of the major players on the frontline of AI policymaking in Australia.
Lawyers and Legal Experts
If artificial intelligence is a new engine, then the law is the braking system. Right now, lawyers and legal experts like Angus Lang SC and Professor Jeannie Paterson are testing just how responsive those brakes are.
AI does not exist in a legal vacuum. For legal professionals, including those pursuing advanced qualifications such as a Master of Business Law, artificial intelligence is no longer a theoretical concern but an urgent regulatory challenge reshaping corporate governance, compliance, and risk management. In short, AI policy inherently intersects with privacy law, consumer protection, intellectual property, anti-discrimination legislation, corporate governance obligations, and administrative law.
The challenge is that none of these frameworks was written with generative models or autonomous systems in mind. This leaves many open questions unanswered, such as who is to blame when an AI system fails, or whether a company can be held liable for shadow AI behaviour.
Legal experts are doing two things at once: interpreting how existing legislation applies to emerging AI systems and advising governments on where entirely new regulatory structures may be required. In many ways, they are attempting to write rules for a technology that is still evolving in real time, so the work is complex, technical, and often theoretical.
Government Representatives and Federal Workers
Behind closed doors in Canberra, AI is no longer a futuristic talking point but a very real issue. Many government bodies and their representatives, such as the Minister for Industry and Innovation, Senator the Hon Tim Ayres, and the Assistant Minister for Science, Technology, and the Digital Economy, the Hon Dr Andrew Charlton MP, have started tackling the delicate issue of AI in both professional and personal life.
Australia must foster innovation and remain globally competitive, particularly as financial services, mining, defence, health, and education integrate AI into daily operations. But at the same time, policymakers must address risks such as misinformation, algorithmic bias, opaque decision-making, and economic disruption.
Rather than proposing sweeping prohibitions, recent discussions have centred around voluntary AI safety guardrails and risk-based regulatory models. This approach reflects a broader international trend: regulate the harm, not the technology itself.
Policymakers are attempting to ensure that Australia is not left behind while also avoiding a regulatory vacuum.
Industry Leaders and Financial Institutions
Nearly 98% of financial firms now use AI in some capacity, from fraud detection and credit risk modelling to automated trading and customer service chatbots. In this sense, industry is not waiting for regulation before acting; it is already operating at scale, and this places corporate leaders squarely on the frontline.
Many have established internal AI ethics committees or governance frameworks while anticipating tighter regulation in the near future. As such, banks, financial firms, insurers, and superannuation funds must all consider reputational risk, compliance obligations, and operational resilience in the face of changing rules.
Meanwhile, many companies are deeply engaged in policy consultations. They advocate for innovation-friendly regulation, warning that overly restrictive laws could stifle growth and deter investment.
The tension is evident: industry wants clarity and predictability, but not rigidity, while policymakers want safeguards, but without stagnation. The negotiation between these priorities is ongoing and often invisible to the public eye.
Academics and Researchers
While lawyers interpret and politicians debate, researchers quietly help shape the intellectual foundation of AI policy from the sidelines. Ethicists, computer scientists, economists, and sociologists, like Professors Bronwyn Fox, Simon Lucey, and Jeannie Paterson, are increasingly working together to inform national frameworks and lead cultural conversations.
Universities and public research bodies such as CSIRO contribute technical expertise on AI safety, transparency, and risk mitigation. Academic submissions to government consultations frequently influence draft proposals, and research into AI metrics and responsible innovation helps provide policymakers with evidence-based pathways forward.
In a landscape prone to hype and alarmism, academia often acts as a stabilising force, grounding the conversation in data rather than Dune-esque dystopia. And if anyone can take the warnings of science fiction and apply them realistically to our lives, it is our professors.
AI in Australia: Not a Crusade, but a Calibration
In Dune, humanity responded to thinking machines with annihilation. The response was absolute: no artificial intelligence, no exceptions.
But in the real world, Australia’s current approach could not be more different. Rather than waging a crusade against AI, what we are witnessing is instead a calibration, an attempt to balance innovation with accountability and competitiveness with due caution.
The frontline is not a single agency or committee but a web of lawyers, regulators, technologists, academics, and industry leaders, each pulling at different threads of the same complex tapestry.
AI is already deeply embedded in financial markets, healthcare systems, cyber security, education, and public administration. The question is no longer whether Australia will use artificial intelligence, but how responsibly, transparently, and equitably it will govern it.
Unlike the universe of Dune, we probably will not outright forbid machines in the likeness of the human mind. Australian policymaking is attempting something far more intricate: ensuring that as those machines evolve, our laws, institutions and ethics actively evolve with them.