
Defence releases findings from review into ethical AI use

The Department of Defence has published findings from a workshop on the ethics of AI use in the sector.

Charbel Kadib
Thu, 18 Feb 2021

Defence has published a report detailing the findings from a workshop on the ethical use of artificial intelligence (AI), in a bid to support scientific and technical considerations in the potential development of Defence policy, doctrine, research and project management.

The 2019 workshop was attended by over 100 representatives from Defence, other Australian government agencies, industry, academia, international organisations and media.

Participants contributed evidence-based hypotheses to the discussions, with a view to developing a report whose suggestions could serve as a starting point for principles, topics and methods relevant to AI and autonomous systems in Defence contexts, informing military leadership and ethics.

The report, titled A Method for Ethical AI in Defence, summarises the discussions from the workshop, and outlines a ‘Practical Methodology for Ethical AI in Defence’ to serve as a guide for future AI integration projects.

Three tools put forward by participants to assist Defence and industry in developing AI systems were:

  • an ‘AI Checklist’ for the development of ethical AI systems;
  • an ‘Ethical AI Risk Matrix’ to describe identified risks and proposed treatment; and
  • a data item descriptor (DID) requiring contractors on larger programs to develop a formal Legal, Ethical and Assurance Program Plan (LEAPP), to be included in project documentation for AI programs where the ethical risk assessment is above a certain threshold.

“Upfront engagement on AI technologies, and consideration of ethical aspects needs to occur in parallel with technology development,” said Chief Defence Scientist Professor Tanya Monro.

The Science, Technology and Research (STaR) Shots from the More, together: Defence Science and Technology Strategy 2030 are currently exploring the potential for AI adoption to meet the needs of the national security science and technology priorities.

“Defence research incorporating AI and human-autonomy teaming continues to drive innovation, such as work on the Allied IMPACT (AIM) Command and Control (C2) System demonstrated at Autonomous Warrior 2018 and the establishment of the Trusted Autonomous Systems Defence CRC,” Professor Monro added.

Air Vice-Marshal Cath Roberts, Head of Air Force Capability, said artificial intelligence and human-machine teaming would play a pivotal role for air and space power into the future.

“We need to ensure that ethical and legal issues are resolved at the same pace that the technology is developed,” she said.

“This paper is useful in suggesting consideration of ethical issues that may arise to ensure responsibility for AI systems within traceable systems of control.

“Practical application of these tools into projects such as the Loyal Wingman will assist Defence to explore autonomy, AI, and teaming concepts in an iterative, learning and collaborative way.”

Charbel Kadib

News Editor – Defence and Security, Momentum Media

Prior to joining the defence and aerospace team in 2020, Charbel was news editor of The Adviser and Mortgage Business, where he covered developments in the banking and financial services sector for three years. Charbel has a keen interest in geopolitics and international relations, graduating from the University of Notre Dame with a double major in politics and journalism. Charbel has also completed internships with the Australian Department of Communications and the Arts, and public relations agency Fifty Acres.
