
Reactions to Labor’s National AI Plan mixed, as Greens call the strategy a betrayal

Industry bodies praise the government’s new AI roadmap, but the strategy is a “toothless tiger”, says David Shoebridge.


The Australian government has released its long-awaited National AI Plan, and the Minister for Industry and Innovation and Minister for Science, Tim Ayres, has been tirelessly touting Labor’s strategy to navigate the artificial intelligence revolution all morning.

Speaking on the Today Show, Ayres (pictured) was asked to lay out the plan’s three main goals.

“They are to make sure that we capture the economic opportunity here in Australia; that we share the benefits, from the central business districts and the tech sector all the way through the suburbs and the regions; and that we keep Australians safe from some of the risks and harms that exist in this new wave of technology,” Ayres said, before going on to pitch the plan and how it works to the wider community.


“We've set up a new Artificial Intelligence Safety Institute right at the heart of government, to make sure that we analyse the new models that come on stream, that we identify the risks, work across government, whether it's the eSafety Commissioner, our intelligence agencies, our police, right through to our financial regulators, to make sure that government's got the capability to deal with challenges, to deal with risks, to communicate with Australians and to give government the best advice that we can to make sure that we're mobile and effective.”

Ayres noted the challenge of keeping up with a technology that is in a constant state of change and evolution, adding that Labor needs to “make sure that the government’s got the capacity to keep Australians safe, but also to capture the enormous benefits here in Australia”.

Speaking to the press at a later event, Ayres said that the new AI Safety Institute would be looking closely at the intersection of artificial intelligence and social media.

“This government is absolutely up to cracking down hard where there’s harms in the digital landscape,” Ayres said.

“The eSafety Commissioner and the government crack down hard on deepfake pornographic images, we cracked down hard on other areas of social media, we’re making sure we’re protecting our kids from social media harms, and we will be watching very closely the interaction of artificial intelligence with social media and other digital platforms because of all of its implications.”

The good

Despite social media being called out as an area to watch, Sunita Bose, Managing Director of the industry association DIGI – which, among other companies, represents Meta, TikTok, and Google – welcomed the government’s National AI Plan.

“DIGI welcomes the release of Australia’s National AI Plan in bringing clarity to the Government’s approach. Realising the significant economic and social opportunities of AI requires thoughtful regulation that strengthens national capability and supports responsible innovation. DIGI welcomes the focus on building a strong and safe AI ecosystem for Australians, including through the establishment of an AI Safety Institute,” Bose said in a statement.

“Collaboration between industry and government will be essential to help drive responsible innovation and strong social and economic outcomes, and DIGI will continue to engage with the Government on Australian policies, industry codes and regulatory settings that support a trusted and thriving AI sector.”

Sovereign Australia AI’s CEO, Simon Kriss, joined in the praise but warned that risks remain.

“The Australian Government has landed in the right place with no sweeping regulation of AI. However, this does not remove the clear and present danger of relying on foreign-made AI models,” Kriss said.

“The announced AI Safety Institute is at risk of becoming a toothless tiger if all our AI is purchased from overseas where they care less about our values and laws. For Australian businesses to begin to trust in and adopt AI, we must be assured that the models we use are built under Australian law and that none of our data ever leaves Australian shores or is processed by servers owned by American companies who are subject to the US CLOUD Act.”

The bad

But a “toothless tiger” is exactly what Greens senator and the party’s Digital Rights & IT spokesperson, David Shoebridge, called the newly announced strategy, pointing to its lack of legislated guardrails.

“Labor has delivered another toothless tiger with their version of an AI Safety Institute. We’ve seen this before with the NACC where there were big promises, no bite,” Shoebridge said.

“This plan betrays us all by abandoning AI guardrails under the guise of a delay, and choosing corporate profits over community rights.”

Shoebridge said that the government’s decision to open “the floodgates to unregulated AI agents” while also banning children from social media over mental health concerns was “cooked”.

“Labor’s plan completely ignores the mental health crisis being driven by AI algorithms,” Shoebridge said.

“Perhaps most disturbing is that Labor wants to feed more Australian data into AI systems without fixing our broken privacy or consent laws first. On top of an intrusive social media ban that will compromise everyone’s privacy this is a deeply dystopian path.”

The ugly

Electronic Frontiers Australia was similarly displeased with Labor’s announcement, warning the strategy “prioritises ‘opportunity-first’ adoption at the expense of citizen safety, fundamental digital rights and basic legal safeguards”.

“The early signals are clear. Many people are unaware that this Big Tech and Big Business-friendly light-touch approach to regulation was also used in the implementation of the National Privacy Principles made under the Federal Privacy Act back in 2000,” EFA Chair John Pane said.

“Those ‘light touch’ privacy principles were an abject failure due to poor design, regulatory capture and fear mongering from both Big Tech and Big Business interests, putting profit and productivity before people and digital rights. And now it looks like history is repeating itself with this National AI plan.

“EFA again calls on the Australian Government to build a human rights-based framework for AI regulation modelled on the European Union AI Act and to prioritise the privacy, safety and rights of its citizens over rubbery short-term economic gains, a large proportion of which will flow out of our country.”

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
