Murdered man addresses killer in court using AI-generated video

A murdered man has addressed his killer in court through an AI-generated video, a move that may represent the future of criminal proceedings.

Christopher Pelkey, who was 37 at the time of his death, addressed his killer, Gabriel Paul Horcasitas, in the Maricopa County Superior Court in Arizona on 1 May.

The video, scripted by his sister, Stacey Wales, depicts Pelkey with a beard and a green shirt and shows him describing the killing as unfortunate.

“It is a shame we encountered each other that day in those circumstances,” said the AI-generated version of Pelkey.

“In another life, we probably could have been friends.”

In the clip, the avatar also discloses that it is AI-generated, something made fairly obvious by its mouth movements not syncing with the audio.

While Wales said she wasn’t ready to forgive Horcasitas, she believed that Pelkey would have been more compassionate.

“You’re told that you cannot react, you cannot emote, you cannot cry,” she said.  “We looked forward to [the sentencing] because we finally were gonna be able to react.”

“The goal was to humanise Chris, to reach the judge, and let him know his impact on this world and that he existed.

“[AI] is just another avenue that you can use to reach somebody.”

This use of AI appears to be one of the first of its kind in the US court system. Courts usually restrict AI in legal proceedings, and a number of lawyers have been sanctioned for citing fake AI-generated cases.

In this case, however, because the video was not presented as evidence, it was allowed during sentencing. Horcasitas was convicted of manslaughter and endangerment and is now facing 10.5 years in prison.

University of Waterloo professor Maura Grossman said that, in her opinion, the use of AI in the court did not raise any major ethical or legal dilemmas.

“Because this is in front of a judge, not a jury, and because the video wasn’t submitted as evidence per se, its impact is more limited,” she told NPR.

Gary Marchant, a professor of law, ethics and emerging technologies at Arizona State University’s Sandra Day O’Connor College of Law, also said that this use of AI was largely harmless to the legal process.

“Victim statements like this that truly try to represent the dead victim’s voice are probably the least objectionable use of AI to create false videos or statements,” he told NPR.

Daniel Croft

Born in the heart of Western Sydney, Daniel Croft is a passionate journalist with an understanding of, and experience writing in, the technology space. Having studied at Macquarie University, he joined Momentum Media in 2022, writing across a number of publications, including Australian Aviation, Cyber Security Connect and Defence Connect. Outside of writing, Daniel has a keen interest in music and spends his time playing in bands around Sydney.