A US federal judge has ruled that Anthropic’s training of its AI on published books without the authors’ consent is legal.
In lawsuits brought by creatives, Anthropic, OpenAI, Meta, and other AI giants have argued that copyright law’s fair use doctrine allows them to train large language models (LLMs) on copyrighted works.
The fair use doctrine, laid out in copyright legislation from 1976 that well predates the internet, let alone artificial intelligence, is open to interpretation by judges.
In the case of Bartz v Anthropic, federal Judge William Alsup ruled in favour of Anthropic training its Claude LLMs on copyrighted books without the permission of the authors.
“The use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use under section 107 of the Copyright Act,” said Alsup.
According to the doctrine, fair use of copyrighted material weighs factors such as whether the use is commercial or for purposes like parody, and how far the new work has transformed the original copyrighted work.
An Anthropic spokesperson said it was pleased that the judge recognised the transformative nature of its training.
“We are pleased that the court recognised that using ‘works to train LLMs was transformative’,” a spokesperson told media, adding that Alsup’s ruling was “consistent with copyright’s purpose in enabling creativity and fostering scientific progress”.
While Alsup’s ruling sets a precedent on the fair use argument, it is unclear whether other judges will follow the decision.
Additionally, Alsup ruled against Anthropic’s development of its “central library”, for which it pirated, copied and stored over 7 million books that it believed would be the best material for training its AI.
“We will have a trial on the pirated copies used to create Anthropic’s central library and the resulting damages,” said Alsup.
“That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for theft, but it may affect the extent of statutory damages.”
According to US copyright law, willful copyright infringement can earn statutory damage fines of up to US$150,000 per work.
Governments continue to prioritise AI development over creatives
The ruling closely follows the UK’s passing of a bill that would allow AI developers to train their products using copyrighted data without informing the creator.
After failing to pass the House of Lords four times, the Data (Use and Access) Bill has gone through both houses after the government proposed some amendments that improve transparency as to what data is being used.
Most of the bill’s detail will be laid out in future legislation, so it will not take effect until at least 2026. The government is adamant that AI training practices should be regulated not through this bill but through a dedicated AI bill, which may not appear until next year.
“The deadlock has now cleared – but it leaves a regulatory gap. The government has made clear it will not be drawn into regulating AI training practices via fragmented amendments. Instead, it remains committed to introducing a ‘comprehensive’ AI bill in the next parliamentary session – though that could be as late as 2026,” said the managing associate in the commercial disputes team at Addleshaw Goddard, Rebecca Newman.
“This outcome leaves the question of whether AI developers must ensure their models are trained in accordance with UK copyright law unresolved – but given the ongoing Getty trial, the answer will likely be shaped first by the courts, not Parliament.”
British musician and icon Sir Elton John slammed the government for the bill, saying he felt “betrayed” by the current government and prime minister, whom he has otherwise supported.
“The government are just being absolute losers, and I’m very angry about it,” he said.
“The danger is for young artists, they haven’t got the resources to keep checking or fight big tech. It’s criminal, and I feel incredibly betrayed.
“A machine ... doesn’t have a soul, doesn’t have a heart, it doesn’t have human feeling, it doesn’t have passion. Human beings, when they create something, are doing it ... to bring pleasure to lots of people.”
Elton John and other musicians, including Paul McCartney, Dua Lipa, Ed Sheeran, Florence Welch, and Coldplay, echoed the charge that the bill sanctions industry-scale theft in a letter asking the government to update copyright laws. The artists say they are not against AI, but want their copyrighted material protected.
“Creative copyright is the lifeblood of the creative industries. It recognises the moral authority we have over our work and provides an income stream for 2.4 million people across the four nations of the United Kingdom. The fight to defend our creative industries has been joined by scores of UK businesses, including those who use and develop AI. We are not against progress or innovation,” said the letter.
“The creative industries have always been early adopters of technology. Indeed, many of the world’s greatest inventions, from the lightbulb to AI itself, have been a result of UK creative minds grappling with technology.”
The government’s push is likely fuelled by pressure on tech giants and the race to lead the AI industry. These tech companies believe that AI developers should have access to all data unless creators opt out.
Meta’s former president of global affairs, Sir Nick Clegg, believes that asking permission from every single copyright holder would “kill the AI industry in this country”.