A recent panel discussion at UC Berkeley considered current legal challenges for developers of generative AI (GenAI), as well as economic impacts of the technology.
AI-powered chatbots such as ChatGPT, Claude and Llama have attracted rapidly growing user bases in recent years. The companies developing these GenAI products have also been hit with lawsuits that challenge, among other concerns, the training of proprietary large language models (LLMs) on massive amounts of data that includes copyrighted content.
The conversation was hosted by the Berkeley Committee of the American Academy of Arts and Sciences. Expert panelists included: Pamela Samuelson, professor at Berkeley School of Law; Jennifer Chayes, dean of the College of Computing, Data Science, and Society; and Abhishek Nagaraj, associate professor at Haas School of Business.
A game-changing technology
Chayes presented an overview of GenAI technology and the key technical advancements that made it possible, including the transformer model – a neural network architecture that has accelerated AI training and prediction – and the development of LLMs based on generative pre-trained transformers (GPTs). She also touched on the extension of LLMs to multimodal models that include images, audio and video as well as text, and described how GenAI models are both pre-trained and post-trained with datasets.
“This really is a game-changing technology,” Chayes said. “I deeply believe that this is the most empowering technology in our lifetimes. It can be used for harm, and so we have to mitigate those harms, but it can also be used for good in leveling the playing field and providing access to benefits.”
As an example, Chayes highlighted her collaboration with Omar Yaghi, a winner of the 2025 Nobel Prize in Chemistry, in applying GenAI to the design and synthesis of metal-organic frameworks and covalent organic frameworks. Their recent work using LLMs and diffusion models for chemistry and materials science has sped up the design and synthesis of these materials by a factor of 50, Chayes said. The team is currently working to synthesize and scale materials that can capture carbon from the air, with applications to climate change mitigation.
Legal considerations: fair use and transformative use
The potential of AI as a positive force may lie in the eyes of the beholder. Samuelson said 59 lawsuits against GenAI developers were currently pending in the United States. “There were three new ones just this week,” she said at the Nov. 11 event.
According to Samuelson, plaintiffs are asserting infringement on the grounds that developers make copies of copyrighted works when training the foundation models that underpin GenAI systems. The type of copyrighted content varies across lawsuits, ranging from books, song lyrics, recorded music and news stories to visual art, photography and movie characters.
Samuelson suggested the application of fair use exceptions is what's really being debated by legal scholars and judges considering these U.S. copyright law cases. “Fair use is definitely a limit that provides breathing room for next generation creation, and fair use has also become a very important way for copyright to adapt in a time of rapid technological change,” she said.
As an example, a federal appeals court ruled in favor of Google’s “highly transformative” development of an online searchable database of copyrighted works through scanning and digitization, known as Google Books.
In that 2015 ruling, the court specifically referenced the public benefit of providing new information about the books, such as keywords and other data, that augmented existing knowledge and helped users discover books without providing the full copied text. The court determined that there was no harm to the market for the original books at that time.
Samuelson suggested that, similarly, GenAI developers could argue their use of copyrighted works is “highly transformative” with a different intended purpose than the original work. “Developers could say: ‘We don’t care about the expression of the work, we are only interested in the work’s data. We’re interested in how words are in relation to each other, how sentences are constructed. We’re doing this for statistical analysis purposes; we’re not doing it to consume the expression,’” Samuelson said.
While she expects it will be five to 10 years before current legal challenges are fully resolved in the U.S., Samuelson acknowledged several other countries – including Japan, Singapore, Israel and members of the European Union – already have laws that provide broad exceptions for AI. If U.S. courts rule against developers, she said, they may decide that business conditions are more favorable elsewhere.
Economic impacts of AI
Nagaraj provided a perspective on the economic implications of AI for copyright policy during the November panel discussion. He contributed to a report, released earlier this year by the U.S. Copyright Office, considering whether outputs of AI systems should be copyrightable. The report also explored whether the training of systems on copyrighted content should be considered legal, and by what standard.
“A main debate in economics is what we call the amount of augmentation versus automation,” said Nagaraj. He described how the field of animation moved from the creation of many duplicative hand drawings to its increased use of digital automation during the late 1980s and 1990s.
“I don't think any of us would say that the field of animation has become less creative now that people aren’t learning these skills by hand – maybe some people,” Nagaraj said. “It’s not clear to me whether the use of AI in creative arts will actually harm the production of original and innovative works.”
After presentations, the panel invited written questions from the audience. Several questions referenced economic competitiveness related to AI model developers based in China.
“It’s a relatively limited number of people who are developing these models in the U.S., and the companies developing them are expecting to make huge profits from the models,” Chayes said. “China has a very different attitude: building open-weight models so people can go in and change the weights, which means the models can be easily modified.”
“I think the copyright and intellectual property elements, as well as the decision to build open-weight models – which requires more [federal] government support than we are giving in the U.S. – will mean that China has a chance to become the strongest contributor to generative AI,” she said.
Chayes was part of an expert working group on frontier AI models convened by California Governor Gavin Newsom last year. In examining the best available evidence on AI, the working group published a report proposing key policy principles to help inform the use, assessment and governance of frontier AI in California. This fall, Gov. Newsom signed SB 53 – touted as the first AI safety law in the United States – and cited the working group report.