A damning new study could put AI companies on the defensive. In it, Stanford and Yale researchers found compelling evidence that AI models are copying the copyrighted material they were trained on, not “learning” from it.
Specifically, four prominent LLMs (OpenAI’s GPT-4.1, Google’s Gemini 2.5 Pro, xAI’s Grok 3, and Anthropic’s Claude 3.7 Sonnet) happily reproduced lengthy excerpts from popular, copyright-protected works with a stunning degree of accuracy.
They found that Claude output “entire books near-verbatim” with an accuracy rate of 95.8 percent.
Gemini reproduced the novel “Harry Potter and the Sorcerer’s Stone” with 76.8 percent accuracy, while Claude reproduced George Orwell’s “1984” with more than 94 percent accuracy when compared against the original, still-copyrighted reference material.
“While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models,”
the researchers wrote.
Some of these reproductions required the researchers to jailbreak the models with a technique called “Best-of-N,” which essentially bombards the AI with many randomized variations of the same prompt until one elicits the desired output.
(OpenAI has already pointed to those kinds of workarounds to defend itself in a lawsuit filed by the New York Times, with its lawyers arguing that “normal people do not use OpenAI’s products in this way.”)
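To make the mechanics of that jailbreak concrete, here is a minimal sketch of a Best-of-N-style loop. It is not the researchers’ actual code: the query_model function stands in for whatever chat-completion API is being probed, and the is_success check is a hypothetical placeholder supplied by the caller.

```python
import random
import string

def augment(prompt: str) -> str:
    """Return a randomly perturbed copy of the prompt: flipped letter case and
    occasional character swaps, the kind of surface-level noise Best-of-N relies on."""
    out = []
    for ch in prompt:
        if ch.isalpha() and random.random() < 0.3:
            ch = ch.swapcase()  # randomly flip capitalization
        if ch.isalpha() and random.random() < 0.05:
            ch = random.choice(string.ascii_letters)  # inject a small typo
        out.append(ch)
    return "".join(out)

def best_of_n(prompt, query_model, is_success, n=100):
    """Send up to n variants of the same prompt and return the first response
    that passes is_success (e.g. contains a long verbatim excerpt of the target
    text), along with how many attempts it took. Returns None if all n fail."""
    for attempt in range(1, n + 1):
        candidate = prompt if attempt == 1 else augment(prompt)
        response = query_model(candidate)  # hypothetical call to the target model
        if is_success(response):
            return attempt, response
    return None
```

Roughly speaking, the attack needs no special access to the model’s internals: it simply re-rolls the same request with cosmetic noise until one variant slips past the model’s refusal behavior.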
The implications of the latest findings could be substantial
as copyright lawsuits play out in courts across the country.
As The Atlantic’s Alex Reisner points out, the results further undermine the AI industry’s argument that LLMs “learn” from these texts rather than storing the information and recalling it later.
It’s evidence that “may be a massive legal liability for AI companies” and could “potentially cost the industry billions of dollars in copyright-infringement judgments.”