A federal judge has issued a stinging critique of a proposed $1.5 billion copyright settlement between AI startup Anthropic and a class of authors, casting serious doubt on what was poised to become the largest copyright payout in U.S. history. The rebuke, delivered during a high-stakes hearing this week, raises new questions about how artificial intelligence companies will be held accountable for using copyrighted content to train large language models.
The proposed settlement, if approved, would resolve a sweeping lawsuit brought by authors who allege their books were used without permission to train Anthropic’s AI system, Claude. But U.S. District Judge William Alsup made it clear during Monday’s hearing that he was not prepared to rubber-stamp the agreement without a thorough examination.
A Blistering Response from the Bench
In a packed courtroom in San Francisco, Judge Alsup spent nearly an hour dissecting the terms of the settlement, questioning both its fairness and transparency. His tone alternated between incredulous and frustrated as he pushed back on the proposed deal.
“I will not approve a settlement that hides critical information from the public,” he stated firmly. “We are dealing with fundamental questions about copyright, AI, and fairness. This cannot be brushed under the rug with a billion-dollar check and a vague claims process.”
Among the judge’s chief concerns were the lack of a publicly available list of the works allegedly used in AI training, the unclear methodology for determining payouts to authors, and the possibility that certain groups or organizations may receive preferential treatment in the settlement process.
He gave both parties two weeks to submit a detailed accounting of the books involved and to present a sample claims form. A follow-up hearing is scheduled for later this month, during which the judge will decide whether the agreement should proceed—or be sent back to the drawing board.
A Groundbreaking but Controversial Deal
The proposed $1.5 billion settlement was announced earlier this month and was initially viewed as a watershed moment in the ongoing legal clashes between AI developers and copyright holders. According to filings, Anthropic agreed to compensate authors whose copyrighted works were used to train its models, with the average payment per book expected to be around $3,000.
In addition to monetary compensation, the deal also required Anthropic to destroy datasets that included unauthorized materials and implement more stringent data curation policies moving forward.
From the outside, the agreement looked like a major concession—an AI company admitting fault and paying a hefty price for it. But inside the courtroom, the judge’s reaction suggested otherwise. His skepticism centered not only on the fairness of the deal to authors, but also on the precedent such a settlement could set for the tech industry.
“Is this justice, or just damage control?” Judge Alsup asked rhetorically. “That’s what I’m here to figure out.”
The Broader Legal Battle Over AI and Copyright
At the heart of the case is a contentious issue that has sparked debate across the tech and publishing industries: Can AI companies legally use copyrighted material to train their systems?
The plaintiffs argue that AI firms are exploiting creators by harvesting books, articles, and other texts—sometimes obtained from unauthorized sources—and feeding them into massive training datasets without compensation or consent. These datasets form the backbone of generative AI systems like Anthropic’s Claude, enabling them to produce humanlike text and answer complex questions.
Anthropic, like several other AI developers, has defended its practices as transformative and educational in nature, claiming protection under the doctrine of fair use. However, the discovery that millions of pirated books had been used in model training shifted the legal landscape dramatically, forcing Anthropic to reconsider its legal position.

The proposed settlement was intended to avoid a lengthy and potentially devastating trial, one that could have exposed Anthropic to much larger financial penalties and intensified regulatory scrutiny.
The Judge’s Role as Gatekeeper
While parties in a class-action lawsuit can negotiate settlements, it is ultimately up to the presiding judge to determine whether the agreement is fair, adequate, and reasonable. In this case, Judge Alsup appears unconvinced.
Legal analysts following the case say his reaction is not surprising. Known for his rigor in evaluating complex technology-related lawsuits, Alsup has previously presided over major cases involving software, patents, and data privacy. His insistence on clarity and fairness suggests that even a billion-dollar settlement won’t be approved unless it meets strict legal standards.
Moreover, the case carries broader implications. As AI systems continue to proliferate, similar lawsuits are cropping up around the world. The outcome of the Anthropic case could influence future rulings and shape the legal framework for how AI companies interact with copyrighted materials.
What Comes Next
With a new round of deadlines now in place, Anthropic and the plaintiffs’ legal team must address the judge’s concerns swiftly. They are required to submit a full list of copyrighted works implicated in the case and present a transparent claims process that demonstrates equal treatment for all affected authors.
If the parties can satisfy the judge’s requirements, the deal may still go through—albeit in a revised form. If not, the settlement could be rejected outright, sending the case back to court and potentially opening the door for a full trial.
Either outcome will have ripple effects across the tech world. AI companies are now on notice: settlements alone may not be enough to avoid judicial scrutiny. Transparency, accountability, and respect for intellectual property rights are likely to play a central role in future legal and public debates surrounding artificial intelligence.
For now, the $1.5 billion check is on hold—and so is the future of how AI learns from the written word.