The Future of Life

Information:

Synopsis

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges. Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI's opinions or views.

Episodes

  • Alan Robock on Nuclear Winter, Famine, and Geoengineering

    20/10/2022 Duration: 41min

    Alan Robock joins us to discuss nuclear winter, famine, and geoengineering. Learn more about Alan's work: http://people.envsci.rutgers.edu/robock/ Follow Alan on Twitter: https://twitter.com/AlanRobock Timestamps: 00:00 Introduction 00:45 What is nuclear winter? 06:27 A nuclear war between India and Pakistan 09:16 Targets in a nuclear war 11:08 Why does the world have so many nuclear weapons? 19:28 Societal collapse in a nuclear winter 22:45 Should we prepare for a nuclear winter? 28:13 Skepticism about nuclear winter 35:16 Unanswered questions about nuclear winter

  • Brian Toon on Nuclear Winter, Asteroids, Volcanoes, and the Future of Humanity

    13/10/2022 Duration: 49min

    Brian Toon joins us to discuss the risk of nuclear winter. Learn more about Brian's work: https://lasp.colorado.edu/home/people/brian-toon/ Read Brian's publications: https://airbornescience.nasa.gov/person/Brian_Toon Timestamps: 00:00 Introduction 01:02 Asteroid impacts 04:20 The discovery of nuclear winter 13:56 Comparing volcanoes and asteroids to nuclear weapons 19:42 How did life survive the asteroid impact 65 million years ago? 25:05 How humanity could go extinct 29:46 Nuclear weapons as a great filter 34:32 Nuclear winter and food production 40:58 The psychology of nuclear threat 43:56 Geoengineering to prevent nuclear winter 46:49 Will humanity avoid nuclear winter?

  • Philip Reiner on Nuclear Command, Control, and Communications

    06/10/2022 Duration: 47min

    Philip Reiner joins us to talk about nuclear command, control, and communications systems. Learn more about Philip’s work: https://securityandtechnology.org/ Timestamps: [00:00:00] Introduction [00:00:50] Nuclear command, control, and communications [00:03:52] Old technology in nuclear systems [00:12:18] Incentives for nuclear states [00:15:04] Selectively enhancing security [00:17:34] Unilateral de-escalation [00:18:04] Nuclear communications [00:24:08] The CATALINK System [00:31:25] AI in nuclear command, control, and communications [00:40:27] Russia's war in Ukraine

  • Daniela and Dario Amodei on Anthropic

    04/03/2022 Duration: 02h01min

    Daniela and Dario Amodei join us to discuss Anthropic: a new AI safety and research company that's working to build reliable, interpretable, and steerable AI systems. Topics discussed in this episode include: -Anthropic's mission and research strategy -Recent research and papers by Anthropic -Anthropic's structure as a "public benefit corporation" -Career opportunities You can find the page for the podcast here: https://futureoflife.org/2022/03/04/daniela-and-dario-amodei-on-anthropic/ Watch the video version of this episode here: https://www.youtube.com/watch?v=uAA6PZkek4A Careers at Anthropic: https://www.anthropic.com/#careers Anthropic's Transformer Circuits research: https://transformer-circuits.pub/ Follow Anthropic on Twitter: https://twitter.com/AnthropicAI microCOVID Project: https://www.microcovid.org/ Follow Lucas on Twitter: https://twitter.com/lucasfmperry Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:44

  • Anthony Aguirre and Anna Yelizarova on FLI's Worldbuilding Contest

    09/02/2022 Duration: 33min

    Anthony Aguirre and Anna Yelizarova join us to discuss FLI's new Worldbuilding Contest. Topics discussed in this episode include: -Motivations behind the contest -The importance of worldbuilding -The rules of the contest -What a submission consists of -Due date and prizes Learn more about the contest here: https://worldbuild.ai/ Join the discord: https://discord.com/invite/njZyTJpwMz You can find the page for the podcast here: https://futureoflife.org/2022/02/08/anthony-aguirre-and-anna-yelizarova-on-flis-worldbuilding-contest/ Watch the video version of this episode here: https://www.youtube.com/watch?v=WZBXSiyienI Follow Lucas on Twitter here: twitter.com/lucasfmperry Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:30 What is "worldbuilding" and FLI's Worldbuilding Contest? 6:32 Why do worldbuilding for 2045? 7:22 Why is it important to practice worldbuilding? 13:50 What are the rules of the contest? 19:53 What does a submission consist of?

  • David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy

    26/01/2022 Duration: 01h42min

    David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy. Topics discussed in this episode include: -Virtual reality as genuine reality -Why VR is compatible with the good life -Why we can never know whether we're in a simulation -Consciousness in virtual realities -The ethics of simulated beings You can find the page for the podcast here: https://futureoflife.org/2022/01/26/david-chalmers-on-reality-virtual-worlds-and-the-problems-of-philosophy/ Watch the video version of this episode here: https://www.youtube.com/watch?v=hePEg_h90KI Check out David's book and website here: http://consc.net/ Follow Lucas on Twitter here: https://twitter.com/lucasfmperry Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:43 How this book fits into David's philosophical journey 9:40 David's favorite part(s) of the book 12:04 What is the thesis of the book?

  • Rohin Shah on the State of AGI Safety Research in 2021

    02/11/2021 Duration: 01h43min

    Rohin Shah, Research Scientist on DeepMind's technical AGI safety team, joins us to discuss: AI value alignment; how an AI Researcher might decide whether to work on AI Safety; and why we don't know that AI systems won't lead to existential risk. Topics discussed in this episode include: - Inner Alignment versus Outer Alignment - Foundation Models - Structural AI Risks - Unipolar versus Multipolar Scenarios - The Most Important Thing That Impacts the Future of Life You can find the page for the podcast here: https://futureoflife.org/2021/11/01/rohin-shah-on-the-state-of-agi-safety-research-in-2021 Watch the video version of this episode here: https://youtu.be/_5xkh-Rh6Ec Follow the Alignment Newsletter here: https://rohinshah.com/alignment-newsletter/ Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 00:02:22 What is AI alignment? 00:06:00 How has your perspective on this problem changed over the past year? 00:06:28 I

  • Future of Life Institute's $25M Grants Program for Existential Risk Reduction

    18/10/2021 Duration: 24min

    Future of Life Institute President Max Tegmark and our grants team, Andrea Berman and Daniel Filan, join us to announce a $25M multi-year AI Existential Safety Grants Program. Topics discussed in this episode include: - The reason Future of Life Institute is offering AI Existential Safety Grants - Max speaks about how receiving a grant changed his career early on - Daniel and Andrea provide details on the fellowships and future grant priorities Check out our grants programs here: https://grants.futureoflife.org/ Join our AI Existential Safety Community: https://futureoflife.org/team/ai-exis... Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

  • Filippa Lentzos on Global Catastrophic Biological Risks

    01/10/2021 Duration: 58min

    Dr. Filippa Lentzos, Senior Lecturer in Science and International Security at King's College London, joins us to discuss the most pressing issues in biosecurity, big data in biology and life sciences, and governance in biological risk. Topics discussed in this episode include: - The most pressing issue in biosecurity - Stories from when biosafety labs failed to contain dangerous pathogens - The lethality of pathogens being worked on at biolaboratories - Lessons from COVID-19 You can find the page for the podcast here: https://futureoflife.org/2021/10/01/filippa-lentzos-on-emerging-threats-in-biosecurity/ Watch the video version of this episode here: https://www.youtube.com/watch?v=I6M34oQ4v4w Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:35 What are the least understood aspects of biological risk? 8:32 Which groups are interested in biotechnologies that could be used for harm? 16:30 Why countries may pursue the devel

  • Susan Solomon and Stephen Andersen on Saving the Ozone Layer

    16/09/2021 Duration: 01h44min

    Susan Solomon, internationally recognized atmospheric chemist, and Stephen Andersen, leader of the Montreal Protocol, join us to tell the story of the ozone hole and their roles in helping to bring us back from the brink of disaster. Topics discussed in this episode include: -The industrial and commercial uses of chlorofluorocarbons (CFCs) -How we discovered the atmospheric effects of CFCs -The Montreal Protocol and its significance -Dr. Solomon's, Dr. Farman's, and Dr. Andersen's crucial roles in helping to solve the ozone hole crisis -Lessons we can take away for climate change and other global catastrophic risks You can find the page for this podcast here: https://futureoflife.org/2021/09/16/susan-solomon-and-stephen-andersen-on-saving-the-ozone-layer/ Check out the video version of the episode here: https://www.youtube.com/watch?v=7hwh-uDo-6A&ab_channel=FutureofLifeInstitute Check out the story of the ozone hole crisis here: https://undsci.berkeley.edu/article/0_0_0/ozone_depletion_01 Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

  • James Manyika on Global Economic and Technological Trends

    07/09/2021 Duration: 01h38min

    James Manyika, Chairman and Director of the McKinsey Global Institute, joins us to discuss the rapidly evolving landscape of the modern global economy and the role of technology in it. Topics discussed in this episode include: -The modern social contract -Reskilling, wage stagnation, and inequality -Technology-induced unemployment -The structure of the global economy -The geographic concentration of economic growth You can find the page for this podcast here: https://futureoflife.org/2021/09/06/james-manyika-on-global-economic-and-technological-trends/ Check out the video version of the episode here: https://youtu.be/zLXmFiwT0-M Check out the McKinsey Global Institute here: https://www.mckinsey.com/mgi/overview Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:14 What are the most important problems in the world today? 4:30 The issue of inequality 8:17 How the structure of the global economy is changing 10:21 How does the r

  • Michael Klare on the Pentagon's view of Climate Change and the Risks of State Collapse

    30/07/2021 Duration: 01h35min

    Michael Klare, Five College Professor of Peace & World Security Studies, joins us to discuss the Pentagon's view of climate change, why it's distinctive, and how this all ultimately relates to the risks of great-power conflict and state collapse. Topics discussed in this episode include: -How the US military views and takes action on climate change -Examples of existing climate-related difficulties and what they tell us about the future -Threat multiplication from climate change -The risks of climate-change-catalyzed nuclear war and major conflict -The melting of the Arctic and the geopolitical situation which arises from that -Messaging on climate change You can find the page for this podcast here: https://futureoflife.org/2021/07/30/michael-klare-on-the-pentagons-view-of-climate-change-and-the-risks-of-state-collapse/ Check out the video version of the episode here: https://www.youtube.com/watch?v=bn57jxEoW24 Check out Michael's website here: http://michaelklare.com/ Apply for the Podcast Producer position here: https://futureoflife.org/job-postings/

  • Avi Loeb on UFOs and if they're Alien in Origin

    09/07/2021 Duration: 40min

    Avi Loeb, Professor of Science at Harvard University, joins us to discuss unidentified aerial phenomena and a recent US Government report assessing their existence and threat. Topics discussed in this episode include: -Evidence for the natural, human, and extraterrestrial origins of UAPs -The culture of science and how it deals with UAP reports -How humanity should respond if we discover UAPs are alien in origin -A project for collecting high-quality data on UAPs You can find the page for this podcast here: https://futureoflife.org/2021/07/09/avi-loeb-on-ufos-and-if-theyre-alien-in-origin/ Apply for the Podcast Producer position here: futureoflife.org/job-postings/ Check out the video version of the episode here: https://www.youtube.com/watch?v=AyNlLaFTeFI&ab_channel=FutureofLifeInstitute Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:41 Why is the US Government report on UAPs significant? 7:08 Multiple differen

  • Avi Loeb on 'Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures

    09/07/2021 Duration: 02h04min

    Avi Loeb, Professor of Science at Harvard University, joins us to discuss a recent interstellar visitor, whether we've already encountered alien technology, and whether we're ultimately alone in the cosmos. Topics discussed in this episode include: -Whether 'Oumuamua is alien or natural in origin -The culture of science and how it affects fruitful inquiry -Looking for signs of alien life throughout the solar system and beyond -Alien artefacts and galactic treaties -How humanity should handle a potential first contact with extraterrestrials -The relationship between what is true and what is good You can find the page for this podcast here: https://futureoflife.org/2021/07/09/avi-loeb-on-oumuamua-aliens-space-archeology-great-filters-and-superstructures/ Apply for the Podcast Producer position here: https://futureoflife.org/job-postings/ Check out the video version of the episode here: https://www.youtube.com/watch?v=qcxJ8QZQkwE&ab_channel=FutureofLifeInstitute See our second interview with Avi here: https://

  • Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI

    01/06/2021 Duration: 01h08min

    Nicolas Berggruen, investor and philanthropist, joins us to explore the dynamics of power, wisdom, technology, and ideas in the 21st century. Topics discussed in this episode include: -What wisdom consists of -The role of ideas in society and civilization -The increasing concentration of power and wealth -The technological displacement of human labor -Democracy, universal basic income, and universal basic capital -Living an examined life You can find the page for this podcast here: https://futureoflife.org/2021/05/31/nicolas-berggruen-on-the-dynamics-of-power-wisdom-technology-and-ideas-in-the-age-of-ai/ Check out Nicolas' thoughts archive here: www.nicolasberggruen.com Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:45 The race between the power of our technology and the wisdom with which we manage it 5:19 What is wisdom? 8:30 The power of ideas 11:06 Humanity’s investment in wisdom vs the power of our technology 15:39

  • Bart Selman on the Promises and Perils of Artificial Intelligence

    20/05/2021 Duration: 01h41min

    Bart Selman, Professor of Computer Science at Cornell University, joins us to discuss a wide range of AI issues, from autonomous weapons and AI consciousness to international governance and the possibilities of superintelligence. Topics discussed in this episode include: -Negative and positive outcomes from AI in the short, medium, and long term -The perils and promises of AGI and superintelligence -AI alignment and AI existential risk -Lethal autonomous weapons -AI governance and racing to powerful AI systems -AI consciousness You can find the page for this podcast here: https://futureoflife.org/2021/05/20/bart-selman-on-the-promises-and-perils-of-artificial-intelligence/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:35 Futures that Bart is excited about 4:08 Positive futures in the short, medium, and long term 7:23 AGI timelines 8:11 Bart’s research on “planning” through the game of Sokoban 13:10 If

  • Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century

    21/04/2021 Duration: 01h26min

    Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century. Topics discussed in this episode include: -Intelligence and coordination -Existential risk from AI, synthetic biology, and unknown unknowns -AI adoption as a delegation process -Jaan's investments and philanthropic efforts -International coordination and incentive structures -The short-term and long-term AI safety communities You can find the page for this podcast here: https://futureoflife.org/2021/04/20/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:29 How can humanity improve? 3:10 The importance of intelligence and coordination 8:30 The bottlenecks of input and output bandwidth as well as processing speed betwe

  • Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

    01/04/2021 Duration: 01h38min

    Joscha Bach, Cognitive Scientist and AI researcher, as well as Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures. Topics discussed in this episode include: -Understanding the universe through digital physics -How human consciousness operates and is structured -The path to aligned AGI and bottlenecks to beneficial futures -Incentive structures and collective coordination You can find the page for this podcast here: https://futureoflife.org/2021/03/31/joscha-bach-and-anthony-aguirre-on-digital-physics-and-moving-towards-beneficial-futures/ You can find FLI's three new policy-focused job postings here: futureoflife.org/job-postings/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 3:17 What is truth and knowledge? 11:39 What is subjectivity and objectivity? 14:32 What is the universe ultimately? 19:22 Is the unive

  • Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

    20/03/2021 Duration: 01h12min

    Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety. Topics discussed in this episode include: -Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI -The relationship between AI safety, control, and alignment -Virtual worlds as a proposal for solving multi-multi alignment -AI security You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/ You can find FLI's three new policy-focused job postings here: https://futureoflife.org/job-postings/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:35 Roman’s primary research interests 4:09 How theoretical proofs help AI safety research 6:23 How impossibility results constrain AI safety

  • Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Autonomous Weapons

    25/02/2021 Duration: 01h39min

    Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, WMD and drone swarms expert, join us to discuss the highest-risk and most destabilizing aspects of lethal autonomous weapons. Topics discussed in this episode include: -The current state of the deployment and development of lethal autonomous weapons and swarm technologies -Drone swarms as a potential weapon of mass destruction -The risks of escalation, unpredictability, and proliferation with regard to autonomous weapons -The difficulty of attribution, verification, and accountability with autonomous weapons -Autonomous weapons governance as norm setting for global AI issues You can find the page for this podcast here: https://futureoflife.org/2021/02/25/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/ You can check out the new lethal autonomous weapons website here: https://autonomousweapons.org/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

page 4 of 11