Op-ed: Australia’s National AI Plan looks good on paper, but where are the teeth?

Ryan Zahrai, founder and principal lawyer of Zed Law, questions the government’s failure to include binding obligations in its vaunted plan.

Ryan Zahrai, Founder and Principal Lawyer of Zed Law | Tue, 30 Dec 2025

There is little comfort in yet another national strategy, institute and roadmap that arrives without real teeth.

No new binding legal duties of its own. No clear enforcement powers. There are few signs that the hard-won lessons from failures like Robodebt have been structurally absorbed into the machinery of government.

Guardrails are welcome, but what is missing is enforceable accountability and meaningful human oversight.

What the plan gets right

The plan’s three stated goals – capturing economic opportunity, spreading benefits across sectors and “keeping Australians safe” – follow the now familiar global AI governance script set by the EU, the US, and the UK. Investment in skills, data centre infrastructure, and the creation of an AI Safety Institute to test and monitor emerging systems is, on its face, a rational response to a general-purpose technology that will shape the economy for decades.

The Future Made in Australia agenda, backed by multibillion-dollar commitments and AI Accelerator funding, is designed to funnel capital into higher-value digital industries and AI-enabled jobs. Politically, that is appealing. Economically, it could be defensible if it is executed well.

It is also welcome that the plan explicitly references the inclusion of First Nations peoples, women, people with disability, and regional communities. It signals at least an awareness of the structural bias risks that AI can amplify, but how that acknowledgement translates into tangible fairness or protection remains unclear.

Guardrails without teeth

The core problem is that the plan leans heavily on existing legal frameworks and voluntary guidance rather than creating binding, sector-specific duties or enforceable rights around automated decision making.

Framed as a technical and coordination hub, the AI Safety Institute supports regulators and advises industry. It is not an independent authority with the power to halt, condition or penalise unsafe systems. Financial and prudential regulators hold those powers. The institute does not.

Australia’s regulatory history shows where this often leads: guidelines and “better practice” frameworks tend to be honoured only when convenient. Once implementation becomes politically or fiscally uncomfortable, compliance fades.

Lessons from algorithmic failures

Automated decision-making guidance has existed for years. So have Australia’s AI Ethics Principles. Both were in place during the later period of Robodebt. Yet an unlawful income averaging model was still pushed through against internal legal advice until litigation and a royal commission forced accountability.

Before that institutional reckoning was triggered, the saga cost hundreds of millions of dollars and caused deep psychological and financial harm to hundreds of thousands of people. This failure remains the clearest example of what happens when automation-first thinking meets weak governance and absent human review.

An incorrect legal assumption was embedded into code, reversing the onus of proof. Case officers were sidelined so completely that individuals were left to disprove machine-generated debts, often with no real avenue for effective challenge.
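As a purely illustrative sketch, using hypothetical thresholds and figures rather than the real Centrelink rules or code, the short Python snippet below shows how spreading an annual income total evenly across every fortnight can manufacture a debt for a person who earned nothing during the fortnights they were lawfully on payments.

# Illustrative sketch only: hypothetical figures, not the actual Robodebt system.
# It demonstrates the flawed assumption at the heart of income averaging: that an
# annual ATO income total can be spread evenly across every fortnight, including
# fortnights in which the person earned nothing and was lawfully on payments.

FORTNIGHTS_ON_BENEFITS = 10     # fortnights the person actually received payments
TOTAL_FORTNIGHTS = 26
INCOME_FREE_AREA = 300.0        # hypothetical fortnightly earnings threshold
TAPER_RATE = 0.5                # hypothetical reduction per dollar above the threshold

def raised_debt(income_per_benefit_fortnight: float) -> float:
    """Overpayment 'identified' across the fortnights the person was on payments."""
    excess = max(0.0, income_per_benefit_fortnight - INCOME_FREE_AREA)
    return excess * TAPER_RATE * FORTNIGHTS_ON_BENEFITS

# Reality: nothing earned while on payments; $26,000 earned later in the year.
actual_income_while_on_benefits = 0.0
annual_ato_income = 26_000.0

# The averaging model assumes that annual total was earned evenly all year.
averaged_income = annual_ato_income / TOTAL_FORTNIGHTS   # $1,000 per fortnight

print(f"Debt from actual circumstances:  ${raised_debt(actual_income_while_on_benefits):,.2f}")  # $0.00
print(f"Debt the averaging model raises: ${raised_debt(averaged_income):,.2f}")                  # $3,500.00
# With the onus of proof reversed, the second figure stood unless the individual
# could produce years-old payslips to disprove it.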

These outcomes were policy decisions driven by efficiency targets and cost-cutting goals, not technical inevitabilities. They were enabled by an environment where governance and oversight were treated as optional.

That same pattern continues to surface across sectors, from automated credit scoring and account terminations to risk profiling and limit adjustments. Wherever opaque systems are allowed to act without real human review, the failure Robodebt exposed will recur in new places.

Without enforceable legal obligations behind human-in-the-loop processes, a national AI plan risks repeating this harm at scale.

And the people most affected are those least equipped to challenge errors: casual workers, people with disability, migrants and the financially vulnerable.

International playbooks, local amnesia

Australia is not acting in isolation. The National AI Plan mirrors what other jurisdictions have done by creating an AI Safety Institute and prioritising coordination over legislation. But some jurisdictions have gone much further.

The European Union, through its AI Act, is phasing in tiered risk obligations, mandatory transparency and hard limits on high-risk uses such as social scoring and biometric surveillance, backed by regulators with real investigative and penalty powers.

Australia’s approach, by contrast, resembles earlier waves of digital strategy and cyber security uplift: glossy reports, advisory bodies and working groups, followed by limited real-world enforcement.

We have seen this pattern before. Cyber security strategies looked strong on paper. Evaluations later revealed patchy uptake, under-resourced regulators and heavy reliance on voluntary compliance. It was only after major breaches exposed systemic risk that laws were tightened.

Without a clear path from principle to obligation, the National AI Plan risks joining that archive of well-intentioned frameworks with little practical effect.

The backbone the plan still needs

If the goal is to ensure technology serves Australians rather than the other way around, we need to shift from aspiration to enforceable design.

  • At a minimum, serious regulation of high-impact AI and automated decision making should include statutory duties of explainability and contestability for decisions that materially affect rights, entitlements and livelihoods. A clear right to prompt human review must sit alongside those duties.

  • There must be mandatory impact assessments that examine legality, discrimination risk, consumer harm and systemic effect before AI is deployed in areas such as welfare, credit, insurance, migration, employment screening and essential services.

  • Independent oversight matters just as much. Not an advisory body, but one with investigative power, the ability to impose binding conditions, and the authority to suspend or prohibit unsafe systems.

  • And liability has to be clearly allocated across agencies, vendors and integrators, so harm does not vanish behind procurement chains and third-party complexity.

None of this blocks innovation. These measures are the institutional equivalent of brakes and seatbelts. Without them, public trust will disappear.

The National AI Plan and the AI Safety Institute can serve as starting points. But without enforceable duties, tested escalation pathways and real human oversight inside public and private systems, they remain exactly that. Starting points.

Until then, we are left with another expensive plan that reads well on paper and waits for a spine.
