Lecture | April 5 | 12-1 p.m. | Sutardja Dai Hall
Stuart Russell, UC Berkeley
CITRIS and the Banatao Institute
It is reasonable to expect that artificial intelligence (AI) capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Alan Turing and others have suggested? Will we lose control over our future? Or will AI complement and augment human intelligence in beneficial ways? It turns out that both views are correct, but they are talking about completely different forms of AI. To achieve the positive outcome, a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. Russell will argue that this is possible as well as necessary. The new approach to AI opens up many avenues for research and brings into sharp focus several questions at the foundations of moral philosophy.
Daisy Hernandez, daisyh@berkeley.edu, 510-829-2250