
Qld’s AI cameras failed to make ethical checks

The Queensland transport authority has failed to ethically manage the risks arising from its use of AI-powered image-recognition traffic cameras on the state’s roads.


The cameras, developed by Acusensus to detect drivers not wearing seatbelts and drivers using their mobile phones, were first deployed in July 2021 by the Department of Transport and Main Roads (TMR).

While the cameras reduced the need for human review of cases by 98 per cent, an audit of the Mobile Phone and Seatbelt Technology (MPST) program has found that the cameras did not meet ethical expectations.

“The MPST program uses image-recognition AI to detect driving offences, which introduces a range of ethical risks,” the audit report stated.

While the audit acknowledged that some measures had been put in place to manage these risks, it found that TMR had “not yet undertaken a full ethical risk assessment as required by the Queensland government’s AI governance policy”.

“This means it does not know whether all ethical risks for the MPST program are identified and managed.”

The report also acknowledged that the MPST program was launched prior to the Queensland government’s AI governance policy. However, it concluded that TMR “lacks full visibility” over how its AI systems are used, including the Queensland government’s AI assistant, QChat.

“Using this safeguard could improve the accuracy of responses from QChat and reduce the risk of users receiving misleading guidance,” the audit stated.

In response, TMR said it will develop a framework for “centralised visibility of its AI systems” over the next 12 months.

It also said it expects to use the Queensland government’s Foundational Artificial Intelligence Risk Assessment framework by the end of the year.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.