
Survey: Experts excited by Australian AI Safety Institute opportunities

Australia’s position as a middle power could be a boon, but the dangers of a restrictive bureaucracy could scare key talent away from a role in the soon-to-be-launched institute.

Tue, 13 Jan 2026

A raft of Australian AI experts has expressed excitement about the opportunities presented by the nation’s new AI Safety Institute (AISI), but has also warned that too much red tape could keep the best talent from signing up before the institute even opens its doors.

Policy and research charity Good Ancestors polled 139 professionals with AI safety and policy expertise late last year to discover what appealed to them most about the AISI and their hopes for what it might achieve, alongside their most serious concerns.

The chance to do bold new work in the area of AI safety and deliver global impact was rated among the most exciting elements of the government’s proposal, alongside the ability to learn from other international institutions.

Australia’s position as a trusted middle power also caught the attention of many respondents, with the opportunity to create broader international partnerships a key driver.

“What excites me most about an Australian AISI is its potential to break the US-China binary that dominates AI safety discourse,” one respondent said.

“Australia occupies a unique middle-power position, close enough to major developments to be relevant, distant enough to be genuinely independent, and trusted enough in the Indo-Pacific to convene conversations that neither superpower could.”

The survey also asked respondents which opportunities would be most attractive when it came to joining the AISI, with the chance to make strong international connections the top drawcard. Of those polled, 67.9 per cent said this would be a driving factor in attracting them, or others like them, to a role at the institute, while a clear mission focused on identifying catastrophic risks, backed by a mandate to address them, was another key point of attraction.

Respondents were also asked what they thought the AISI’s key mission should be: focusing on catastrophic frontier risks, such as loss of control, or on broader issues such as privacy and AI bias. More than half of those surveyed – 58.2 per cent – suggested focusing on both areas.

Additionally, 30.6 per cent said Australia’s AISI should instead focus on catastrophic risks, while 11.2 per cent would prefer the institute focus solely on broader AI-based harms.

Testing of advanced AI models, hardware governance, and the potential for AI research to contribute to chemical, biological, radiological, or nuclear risks were also recommended as areas of focus.

“Australia has world-class biosecurity infrastructure and is a global biomedical leader,” another respondent said.

“Expertise in agricultural pathogens and quarantine systems … position us uniquely to develop evaluation frameworks for AI systems that pose biological design risks.”

However, while respondents found much to be excited about, many expressed concerns about getting the AISI’s culture right. Ninety per cent of those polled said that an overly bureaucratic culture would be a deal breaker, while more than half said a lack of funding – anything less than $10 million a year – would prevent them from taking a role with the AISI.

You can read the full results of Good Ancestors’ research here.

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
