Stuart Russell, a computer science professor at UC Berkeley, testified on July 25 at the U.S. Senate hearing titled “Oversight of A.I.: Principles for Regulation,” hosted by the Senate Committee on the Judiciary’s Subcommittee on Privacy, Technology, and the Law.
Russell said artificial general intelligence – a milestone at which AI could independently learn and complete tasks as human beings do – could offer significant benefits to the public, such as spurring economic growth and improving healthcare and education. However, it also presents serious risks “up to and including human extinction,” he said.
Russell offered suggestions for regulating these technologies, ranging from creating a regulatory agency to enforcing rigorous safety requirements for AI systems. Watch his full testimony starting at 50:55. (Video courtesy of C-SPAN.)
Read the beginning of Russell’s written testimony
Thank you, Chair Blumenthal, Ranking Member Hawley, and members of the Subcommittee, for the invitation to speak today. I am primarily an AI researcher, with over 40 years of experience in the field. I am motivated by the potential for AI to amplify the benefits of civilization for all of humanity. My research over the last decade has focused on the problem of control: how do we maintain power, forever, over entities that will eventually become more powerful than us? How do we ensure that AI systems are safe and beneficial for humans? These are not purely technological questions. In both the short term and the long term, regulation has a huge role to play in answering them. For this reason, I and many other AI researchers have greatly appreciated the Subcommittee's serious commitment to addressing the regulatory issues of AI and the bipartisan way in which its work has been conducted.
Executive summary
- Artificial intelligence has a long history and draws on well-developed mathematical theories in several areas. It is not a single technology.
- Many current systems, including large language models, are opaque in the sense that their internal principles of operation are unknown, leading to severe problems for safety and regulation.
- Progress on AI capabilities is extremely rapid and many researchers feel that artificial general intelligence (AGI) is on the horizon, possibly exceeding human capabilities in every relevant dimension.
- The potential benefits of (safe) AGI are enormous; this is already creating massive investment flows, which are only likely to increase as the goal gets closer.
- Given our current lack of understanding of how to control AGI systems and to ensure with absolute certainty that they remain safe and beneficial to humans, achieving AGI would present potentially catastrophic risks to humanity, up to and including human extinction.
- It is essential to create a regulatory framework capable of adapting to these increasing risks while responding to present harms. A number of measures are proposed, including basic safety requirements whose violation should result in removal from the market.
Please see Russell’s full written testimony.
For more information
- U.S. Senate Committee on the Judiciary: Oversight of A.I.: Principles for Regulation
- The Financial Times: We must slow down the race to God-like AI
- CDSS News: Stuart Russell calls for new approach for AI, a ‘civilization-ending’ technology