The Future Of Life
Rohin Shah on the State of AGI Safety Research in 2021
- Author: Various
- Narrator: Various
- Publisher: Podcast
- Duration: 1:43:50
Synopsis
Rohin Shah, Research Scientist on DeepMind's technical AGI safety team, joins us to discuss: AI value alignment; how an AI researcher might decide whether to work on AI safety; and why we don't know that AI systems won't lead to existential risk.

Topics discussed in this episode include:
- Inner Alignment versus Outer Alignment
- Foundation Models
- Structural AI Risks
- Unipolar versus Multipolar Scenarios
- The Most Important Thing That Impacts the Future of Life

You can find the page for the podcast here: https://futureoflife.org/2021/11/01/rohin-shah-on-the-state-of-agi-safety-research-in-2021

Watch the video version of this episode here: https://youtu.be/_5xkh-Rh6Ec

Follow the Alignment Newsletter here: https://rohinshah.com/alignment-newsletter/

Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT

Timestamps:
- 0:00 Intro
- 00:02:22 What is AI alignment?
- 00:06:00 How has your perspective of this problem changed over the past year?
- 00:06:28 I