
ADF outlines its AI responsibilities as it weighs up benefits

The Australian Defence Force (ADF) has outlined its obligations and policy for AI use within Defence in a new release.

Thu, 19 Mar 2026

The release, titled “Policy Settings for Responsible Use of Artificial Intelligence in Defence”, outlines the reasoning behind the ADF’s exploration of AI implementation and how it plans to ensure that the technology’s use is safe and responsible.

“The strategic and operational environments in which the Australian Defence Force (ADF) and the Department of Defence (‘Defence’) operate are becoming increasingly challenging. The 2024 National Defence Strategy recognised that increasing strategic competition is a primary feature of Australia’s security environment,” the ADF said.

“Artificial intelligence (AI) and other new and emerging technologies are becoming an integral feature of this strategic competition. As a general-purpose technology, AI is permeating government and commercial institutions, infrastructure, products and services.

“AI has the potential for immense positive benefit across all sectors of our society and economy. In the national defence context, AI has particular potential for driving greater accuracy, efficiency, speed and safety in defence functions and ADF operations. Public trust in the adoption of AI in defence activities is critical to delivering that potential.”

Like many other militaries worldwide, the ADF plans for AI to support decision making, with the final decision resting with human personnel. It is currently exploring opportunities where harnessing AI could deliver better and faster results in both civilian and military areas.

This includes analysing large amounts of data, taking on repetitive or dangerous tasks, optimising logistics, applying autonomy to operations to make them more effective, accurate and fast, and improving physical and cyber safety.

“These settings apply to research, design, development, deployment, use and decommissioning of AI for use by Defence. It includes all AI technologies and sub-technologies, including frontier models, and any capabilities supported by AI for both warfighting and enabling functions,” the ADF said.

The ADF outlined its principles in three categories: lawfulness, adherence to values-based principles, and proportionate controls.

The first will see the ADF ensure that its AI usage complies with domestic and international law, including international humanitarian law and human rights law. It will also designate a defined “accountable officer” responsible for each AI capability.

“As the AI capability moves through the life cycle, the accountable officer will change; however, all officials will remain accountable for the decisions or contributions they made in any stage and for their contribution to the development or use of the AI,” the ADF said.

“As the accountable officer changes during the technology life cycle, for example, when an AI capability moves from acquisition to introduction to service, the transfer of accountability must be documented.”

Regarding AI in weapons systems, the ADF said it will abide by the Geneva Conventions and conduct legal reviews of all new weapons and means of warfare, a requirement that can extend to AI implementations.

Regarding its values-based principles, the ADF’s priorities are accountability, bias and harm mitigation, explainability of inputs and outputs, reliable and secure use, and a human-centric approach.

“Defence will apply values-based principles to ensure that our use of AI technology is in line with Australia’s high legal and ethical standards and public expectations,” the ADF said.

Finally, the ADF said it will apply “risk-based control measures” proportionate to the potential consequences and unintended outcomes of an AI capability’s use.

“These risk-based controls consist of layers of policies, processes, training, and procedures, including ongoing assurance and after-action evaluation. Defence uses these to identify, assess and mitigate risks. They must be in place at every stage of the technology life cycle,” it said.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music, and spends his time playing in bands around Sydney.