Comparing AI Training to Human Learning Is Cartoonishly Absurd

Analogizing AI processes to human processes may be helpful as a simple way to explain how certain aspects of AI work. However, relying on these analogies as a substitute for actual legal and policy analysis can lead to an erroneous blurring of lines, resulting in poorly conceived laws and policies that prioritize AI over humans and set a very dangerous precedent for our future. People describe AI as “creating” output, “hallucinating,” or “learning.” But, in truth, AI does none of those things the way a human does—and the differences matter in copyright law and in many other areas.
Why Anthropomorphizing AI is Dangerous
Anthropomorphizing AI matters in the copyright context because, when the analogy is incorrectly applied, it may seem that copyright law shouldn’t apply to AI uses at all. This is wrong and dangerous, as it prioritizes AI over human rights. Statements asserting that AI “learns” just as humans do are particularly damaging. Saying that AI “learns” from creative works may sound as harmless as an aspiring artist or a student studying prior works, but in reality the comparison is incongruous: there is no way humans “learn” at the scale or speed of AI’s unlicensed, wholesale mass scraping and ingestion of copyright-protected works.
When the Framers of the U.S. Constitution decided to include the copyright clause in the Constitution, they did so because they understood that learning is fueled by those who invest in intellectual creation and in the dissemination of those creations. Through the Constitution, Congress was given the power to design copyright laws to incentivize creation, which in turn promotes learning, culture, and the dissemination of works. It is because of these laws that the creative economy in the U.S. contributed over $2 trillion to the U.S. GDP in 2023. In the publishing industry, higher education and professional books, including business, medical, law, technical, and scientific publications, have brought in over $1 billion year-to-date in revenue. Copyright law fuels investments in scientific research and literature, advances learning through innovative curricula, and supports pedagogical needs through the creation of a variety of critical educational materials.
It is now critical that copyright laws be upheld and respected, lest AI machines and the companies developing them get better treatment than human beings under copyright law as a result of the erroneous belief that AI “learns” as humans do. As actor and artist advocate Joseph Gordon-Levitt pointed out, “as soon as we start saying an AI is just like a person under the law, I think we are asking for dystopia. AIs are not people … I think that this is really dangerous for us to trick ourselves, fall for the trick that these tech products are people.”
AI Training is NOT Like Human Learning
There are stark differences between AI training and human learning—and they matter in a copyright analysis. Recently, the court in the Bartz v. Anthropic decision fell into multiple traps by anthropomorphizing AI, equating AI training to “training schoolchildren how to write well” and stating:
“Everyone reads texts, too, then writes new texts. They may need to pay for getting their hands on a text in the first instance. But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory, each time they later draw upon it when writing new things in new ways would be unthinkable. For centuries, we have read and re-read books. We have admired, memorized, and internalized their sweeping themes, their substantive points, and their stylistic solutions to recurring writing problems.”
But the Bartz court’s analysis crumbles under a copyright lens on many levels.
AI Training Copies Creative Works at Scale, Unlike Human Learning
First, on a technical level, generative AI training involves copying and retaining complete versions of creative works at a scale unmatched by human learning. Countless legal, policy, and artist experts have recognized this important difference as part of their wholesale rejection of the analogy. The court in another AI case, Kadrey v. Meta, highlighted the repetitive process of AI training where text is ingested for expressive value and copied, spliced, paired, and cobbled “billions or trillions of times with different text.” In this case, the court concluded, “This is not how a human reads a book.” Similarly, the U.S. Copyright Office in its report on generative AI training rejected the analogy of AI training to human learning, by pointing out even more differences, stating:
“AI learning is different from human learning in ways that are material to the copyright analysis. Humans retain only imperfect impressions of the works they have experienced, filtered through their own unique personalities, histories, memories, and worldviews. Generative AI training involves the creation of perfect copies with the ability to analyze works nearly instantaneously.”
This is consistent with the views of influential copyright law scholars like Professor Jane Ginsburg and Professor Robert Brauneis, with the latter pointing out that: “Generative model training transcends the human limitations that underlie the structure of the exclusive rights.” In rejecting the analogy of AI training to human learning, Professor Ginsburg stated in an interview:
“I reject that analogy. Because a human being is not a machine. So, a machine makes copies. These machines make copies on a very, very big scale. And I think that is a significant difference. Even if, for example, you’re an art student and you look at Las Meninas because you want to study Velázquez, and you want to do your own variations on Las Meninas, you might also make sketches. Nobody is going to consider that you are a copyright infringer for making your private study sketches. But these systems are doing it on a massive scale.”
Artists have also long understood and voiced their opposition to equating AI training to human learning. More recently, actor-advocate Joseph Gordon-Levitt stated:
“When a human reads things, and takes inspiration, a human can read, you know, maybe a few books at a time and maybe remember some of them. What they’ve done is not like what a human does. What these companies have done is taken all the text that they can possibly scrape up, all the everything that everyone has made and just scraped it all up. A human can’t do that, obviously. And a human can’t do that much of it. A human can’t do it that fast.”
AI Training Results in Competing Outputs that Harm the Markets for Ingested Works
In the Kadrey case, the court explained another crucial difference between AI training and human learning. In pushing back against the Bartz court’s adoption of the analogy, the Kadrey court discussed the issue of scale, but also pointed out that:
“when it comes to market effects, using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a minuscule fraction of time and creativity it would otherwise take.”
The Kadrey court explained that not only does AI training ingest copyrighted works at an incomparable scale, but the resulting generative AI outputs also damage the very markets for the copied works. The analogy in Bartz holds up only if one presupposes that children reading books will invariably go on to create works that undermine the market for the very works they learned from.
Former Stability AI executive Ed Newton-Rex has also pointed out that “A single AI, trained on all the world’s content, can produce enough output to replace the demand for much of that content. No individual human can scale in this way …” He further explained:
“… human learning is part of a long-established social contract. Every creator who wrote a book, or painted a picture, or composed a song, did so knowing that others would learn from it. That was priced in. This is definitively not the case with AI. Those creators did not create and publish their work in the expectation that AI systems would learn from it and then be able to produce competing content at scale. The social contract has never been in place for the act of AI training.”
Learning Is Not Without Cost; Upholding Copyright Law Fosters Learning
In equating AI training to human learning, the Bartz court also forgets that when humans learn, there is a cost associated with it. People buy books, purchase tickets to see a performer at a concert or a play, pay a fee for a subscription music streaming service to listen to songs, and watch advertisements and commercials during TV shows. Museums, schools, and libraries are likewise supported by taxpayer dollars or private donations. These are just a few of the transactions in which value is exchanged for experiencing a creative work under the incentive structure of copyright law.
Statements that anthropomorphize AI as “learning” lead to the ultimate falsehood of equating learning and knowledge with “free.” The acquisition of knowledge is never costless. The Framers understood this well when crafting the copyright clause of the Constitution. Ultimately, learning is not free for humans, nor should it be free for AI companies—particularly where AI uses human-created works to train and then displaces the market for those works while simultaneously generating revenue and profits for technology corporations.
Copyright Law Has No General Exceptions for Learning
Despite these critical differences, AI companies still love to argue that they should get a free pass under copyright law to engage in mass unlicensed scraping and use of creative works because AI is “learning” just as humans do.
What’s important to understand here is that there is no “learning” exception in the Copyright Act, and there is no categorical exception for learning in the fair use defense found in Section 107 of the Copyright Act. Educators are not broadly exempted from copyright law when they use copyrighted works to teach students in classrooms. As noted above, schools and students pay for the textbooks and other materials they use to learn. Do we really want to treat AI machines better than human educators and human students by giving AI companies an exception that no human has? That is a very dangerous road to go down.
As the U.S. Copyright Office pointed out in its report:
“. . . the analogy rests on a faulty premise, as fair use does not excuse all human acts done for the purpose of learning. A student could not rely on fair use to copy all the books at the library to facilitate personal education; rather, they would have to purchase or borrow a copy that was lawfully acquired, typically through a sale or license. Copyright law should not afford greater latitude for copying simply because it is done by a computer.”
Indeed, as the Copyright Office points out, the Copyright Act does not provide broad exceptions for teaching, though it contains some narrow, tailored provisions for certain pedagogical needs. Teachers and students still pay for educational materials and books, or taxpayer dollars pay for them. To accept that AI machines are exempt from copyright law because they are “learning” disproportionately favors artificial intelligence over human intelligence.
Moreover, as the Copyright Office points out, fair use has never provided a categorical exception for all uses characterized as educational. That a use is educational in nature is only one consideration within the first fair use factor when determining whether the use qualifies as a fair use. There are many other considerations, and three other factors, to weigh when analyzing fair use—especially the factor on market harm. As the court in Kadrey noted, the “inapt analogy” of equating AI training to human learning “is not a basis for blowing off the most important factor of the fair use analysis.” The education and learning industries thrive within the framework of copyright. There are no sweeping exceptions for human learning, and thus there must not be a sweeping exception for AI training either.
AI Generates but Humans Create
A crucial difference between AI training and human learning emerges when we examine how these processes achieve their end results. Unlike AI, human creators can learn from the same teacher, with the same set of materials, in the same course, and under the same method, yet demonstrate a unique variety of skills and range of creative expression. Joseph Gordon-Levitt has described his conversations with neuroscientists about equating AI training to human learning, which illustrate why the analogy is so faulty. He stated:
“And even just technically what’s going on with this algorithm that’s crunching ones and zeros, it’s not the same as what’s going on in your brain. They call this technology a neural net because it was sort of loosely inspired by how neurons work … most neuroscientists that I’ve talked to sort of roll their eyes and they’re like, look, there are certain similarities. Yes, your brain has neurons and yes it might fire and not fire. But that’s only a small sliver of what’s going on in the brain. And frankly, we, the scientific community, don’t even understand what all is going on in the brain. So, it’s really kind of cartoonishly simple to compare these neural nets to a human. And especially silly to say that these neural nets deserve kind of equal rights under the law to a human. And so, to me, it’s not a real argument. It’s an excuse for, you know, to optimize business.”
Unlike AI, a human’s ability to create expressive works does not depend solely on a massive, closed universe of quality creative works that the person has read, listened to, viewed, and otherwise perceived over their lifetime. A human creator’s skill in composing a poem, novel, song, photograph, or any other creative work always involves a unique application of personal experiences, emotions, distinctive personality and voice, technical training and schooling, innate skill and ability, and recollection of prior creative works encountered across an entire life. AI has none of these qualities.
Just take a look at all the videos of writers documenting their process of crafting novels and other literary works compared to AI generation. Best-selling author David Baldacci also recently testified about the creative process and how he applies creative inspiration, stating:
“I was once such an aspiring writer. My favorite novelist in college was John Irving. I read everything that Irving wrote. None of my novels read remotely like an Irving novel. Why? Well, unlike AI, I can’t remember every line that Irving wrote, every detail about his characters, and his plots. The fact is, also unlike AI, I read other writers not to copy them but because I loved their stories, I appreciate their talent, it motivated me to up my game. What AI does is take what writers produce as an incredibly valuable shortcut to teach software programs what they need to know.”
By contrast, an AI model draws only upon the universe of inputs fed to it. It does not generate output by applying any of the innately human qualities described above. Without the copyrighted materials that are slavishly copied and fed to the machine, AI is entirely incapable of independently generating output. That means its training process cannot be like human learning at all.
Conclusion
Analogies between AI and humans may be helpful to explain AI mechanics. But they should not be used as a substitute for traditional legal and policy analysis—especially when it comes to the application of copyright infringement and fair use principles to AI. Doing so results in misapplication of the law and bad copyright policies that harm human creators and the public. Such misguided policies also harm AI developers: without human-created works, AI companies would lack the crucial, high-quality, expressive materials needed to build the leading models that sustain America’s AI dominance. Copyright law has been a tried-and-true framework under which learning and knowledge have been created, advanced, and promoted—and in the age of generative AI, it can continue to do the same, fostering even more innovation.