The Future of Life

Information:

Synopsis

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges. Among our objectives is to inspire discussion and the sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI's opinions or views.

Episodes

  • Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI

    01/06/2021 Duration: 01h08min

    Nicolas Berggruen, investor and philanthropist, joins us to explore the dynamics of power, wisdom, technology and ideas in the 21st century. Topics discussed in this episode include: -What wisdom consists of -The role of ideas in society and civilization  -The increasing concentration of power and wealth -The technological displacement of human labor -Democracy, universal basic income, and universal basic capital  -Living an examined life You can find the page for this podcast here: https://futureoflife.org/2021/05/31/nicolas-berggruen-on-the-dynamics-of-power-wisdom-technology-and-ideas-in-the-age-of-ai/ Check out Nicolas' thoughts archive here: www.nicolasberggruen.com Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps:  0:00 Intro 1:45 The race between the power of our technology and the wisdom with which we manage it 5:19 What is wisdom?  8:30 The power of ideas  11:06 Humanity’s investment in wisdom vs the power of our technology  15:39

  • Bart Selman on the Promises and Perils of Artificial Intelligence

    20/05/2021 Duration: 01h41min

    Bart Selman, Professor of Computer Science at Cornell University, joins us to discuss a wide range of AI issues, from autonomous weapons and AI consciousness to international governance and the possibilities of superintelligence. Topics discussed in this episode include: -Negative and positive outcomes from AI in the short, medium, and long term -The perils and promises of AGI and superintelligence -AI alignment and AI existential risk -Lethal autonomous weapons -AI governance and racing to powerful AI systems -AI consciousness You can find the page for this podcast here: https://futureoflife.org/2021/05/20/bart-selman-on-the-promises-and-perils-of-artificial-intelligence/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:35 Futures that Bart is excited about 4:08 Positive futures in the short, medium, and long term 7:23 AGI timelines 8:11 Bart’s research on “planning” through the game of Sokoban 13:10 If

  • Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century

    21/04/2021 Duration: 01h26min

    Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century. Topics discussed in this episode include: -Intelligence and coordination -Existential risk from AI, synthetic biology, and unknown unknowns -AI adoption as a delegation process -Jaan's investments and philanthropic efforts -International coordination and incentive structures -The short-term and long-term AI safety communities You can find the page for this podcast here: https://futureoflife.org/2021/04/20/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 1:29 How can humanity improve? 3:10 The importance of intelligence and coordination 8:30 The bottlenecks of input and output bandwidth as well as processing speed betwe

  • Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

    01/04/2021 Duration: 01h38min

    Joscha Bach, cognitive scientist and AI researcher, as well as Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures. Topics discussed in this episode include: -Understanding the universe through digital physics -How human consciousness operates and is structured -The path to aligned AGI and bottlenecks to beneficial futures -Incentive structures and collective coordination You can find the page for this podcast here: https://futureoflife.org/2021/03/31/joscha-bach-and-anthony-aguirre-on-digital-physics-and-moving-towards-beneficial-futures/ You can find FLI's three new policy-focused job postings here: futureoflife.org/job-postings/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 3:17 What is truth and knowledge? 11:39 What is subjectivity and objectivity? 14:32 What is the universe ultimately? 19:22 Is the unive

  • Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

    20/03/2021 Duration: 01h12min

    Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety. Topics discussed in this episode include: -Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI -The relationship between AI safety, control, and alignment -Virtual worlds as a proposal for solving multi-multi alignment -AI security You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/ You can find FLI's three new policy-focused job postings here: https://futureoflife.org/job-postings/ Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT Timestamps: 0:00 Intro 2:35 Roman’s primary research interests 4:09 How theoretical proofs help AI safety research 6:23 How impossibility results constrain

  • Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Autonomous Weapons

    25/02/2021 Duration: 01h39min

    Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, an expert on WMDs and drone swarms, join us to discuss the highest-risk and most destabilizing aspects of lethal autonomous weapons. Topics discussed in this episode include: -The current state of the deployment and development of lethal autonomous weapons and swarm technologies -Drone swarms as a potential weapon of mass destruction -The risks of escalation, unpredictability, and proliferation with regards to autonomous weapons -The difficulty of attribution, verification, and accountability with autonomous weapons -Autonomous weapons governance as norm setting for global AI issues You can find the page for this podcast here: https://futureoflife.org/2021/02/25/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/ You can check out the new lethal autonomous weapons website here: https://autonomousweapons.org/ Have any feedback about the podcast? You can share your though

  • John Prendergast on Non-dual Awareness and Wisdom for the 21st Century

    09/02/2021 Duration: 01h46min

    John Prendergast, former adjunct professor of psychology at the California Institute of Integral Studies, joins Lucas Perry for a discussion about the experience and effects of ego-identification, how to shift to new levels of identity, the nature of non-dual awareness, and the potential relationship between waking up and collective human problems. This is not an FLI Podcast, but a special release where Lucas shares a direction he feels has an important relationship with AI alignment and existential risk issues. Topics discussed in this episode include: -The experience of egocentricity and ego-identification -Waking up into heart awareness -The movement towards and qualities of non-dual consciousness -The ways in which the condition of our minds collectively affect the world -How waking up may be relevant to the creation of AGI You can find the page for this podcast here: https://futureoflife.org/2021/02/09/john-prendergast-on-non-dual-awareness-and-wisdom-for-the-21st-century/ Have any feedback about the

  • Beatrice Fihn on the Total Elimination of Nuclear Weapons

    22/01/2021 Duration: 01h17min

    Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons (ICAN) and Nobel Peace Prize recipient, joins us to discuss the current risks of nuclear war, policies that can reduce the risks of nuclear conflict, and how to move towards a nuclear-weapons-free world. Topics discussed in this episode include: -The current nuclear weapons geopolitical situation -The risks and mechanics of accidental and intentional nuclear war -Policy proposals for reducing the risks of nuclear war -Deterrence theory -The Treaty on the Prohibition of Nuclear Weapons -Working towards the total elimination of nuclear weapons You can find the page for this podcast here: https://futureoflife.org/2021/01/21/beatrice-fihn-on-the-total-elimination-of-nuclear-weapons/ Timestamps: 0:00 Intro 4:28 Overview of the current nuclear weapons situation 6:47 The 9 nuclear weapons states, and accidental and intentional nuclear war 9:27 Accidental nuclear war and human systems 12:08 The risks of nuclear war in 202

  • Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year

    08/01/2021 Duration: 01h41s

    Max Tegmark and members of the FLI core team come together to discuss favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021. Topics discussed in this episode include: -FLI's perspectives on 2020 and hopes for 2021 -What our favorite projects from 2020 were -The biggest lessons we've learned from 2020 -What we see as crucial and needed in 2021 to ensure and make improvements towards existential safety You can find the page for this podcast here: https://futureoflife.org/2021/01/08/max-tegmark-and-the-fli-team-on-2020-and-existential-risk-reduction-in-the-new-year/ Timestamps: 0:00 Intro 00:52 First question: What was your favorite project from 2020? 1:03 Max Tegmark on the Future of Life Award 4:15 Anthony Aguirre on AI Loyalty 9:18 David Nicholson on the Future of Life Award 12:23 Emilia Javorsky on being a co-champion for the UN Secretary-General's effort on digital cooperation 14:03 Jared Brown on developing comments on

  • Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox

    11/12/2020 Duration: 01h54min

    The recipients of the 2020 Future of Life Award, William Foege, Michael Burkinsky, and Victor Zhdanov Jr., join us on this episode of the FLI Podcast to recount the story of smallpox eradication, William Foege's and Victor Zhdanov Sr.'s involvement in the eradication, and their personal experience of the events.  Topics discussed in this episode include: -William Foege's and Victor Zhdanov's efforts to eradicate smallpox -Personal stories from Foege's and Zhdanov's lives -The history of smallpox -Biological issues of the 21st century You can find the page for this podcast here: https://futureoflife.org/2020/12/11/future-of-life-award-2020-saving-200000000-lives-by-eradicating-smallpox/ You can watch the 2020 Future of Life Award ceremony here: https://www.youtube.com/watch?v=73WQvR5iIgk&feature=emb_title&ab_channel=FutureofLifeInstitute You can learn more about the Future of Life Award here: https://futureoflife.org/future-of-life-award/ Timestamps:  0:00 Intro 3:13 Part 1: How William Foege got into

  • Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress

    02/12/2020 Duration: 01h30min

    Sean Carroll, theoretical physicist at Caltech, joins us on this episode of the FLI Podcast to comb through the history of human thought, the strengths and weaknesses of various intellectual movements, and how we are to situate ourselves in the 21st century given progress thus far. Topics discussed in this episode include: -Important intellectual movements and their merits -The evolution of metaphysical and epistemological views over human history -Consciousness, free will, and philosophical blunders -Lessons for the 21st century You can find the page for this podcast here: https://futureoflife.org/2020/12/01/sean-carroll-on-consciousness-physicalism-and-the-history-of-intellectual-progress/ Timestamps: 0:00 Intro 2:06 The problem of beliefs and the strengths and weaknesses of religion 6:40 The Age of Enlightenment and importance of reason 10:13 The importance of humility and the is-ought gap 17:53 The advantages of religion and mysticism 19:50 Materialism and Newtonianism 28:00 Duality, self, sufferi

  • Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat on Academic Integrity

    17/11/2020 Duration: 01h22min

    Mohamed Abdalla, PhD student at the University of Toronto, joins us to discuss how Big Tobacco and Big Tech work to manipulate public opinion and academic institutions in order to maximize profits and avoid regulation. Topics discussed in this episode include: -How Big Tobacco uses its wealth to obfuscate the harm of tobacco and appear socially responsible -The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation -How Big Tech and Big Tobacco work to influence universities, scientists, researchers, and policy makers -How to combat the problem of ethics-washing in Big Tech You can find the page for this podcast here: https://futureoflife.org/2020/11/17/mohamed-abdalla-on-big-tech-ethics-washing-and-the-threat-on-academic-integrity/ The Future of Life Institute AI policy page: https://futureoflife.org/AI-policy/ Timestamps: 0:00 Intro 1:55 How Big Tech actively distorts the academic landscape and what counts as Big Tech 6:00 How Big Tobacco has shaped industry resea

  • Maria Arpa on the Power of Nonviolent Communication

    02/11/2020 Duration: 01h12min

    Maria Arpa, Executive Director of the Center for Nonviolent Communication, joins the FLI Podcast to share the ins and outs of the powerful needs-based framework of nonviolent communication. Topics discussed in this episode include: -What nonviolent communication (NVC) consists of -How NVC is different from normal discourse -How NVC is composed of observations, feelings, needs, and requests -NVC for systemic change -Foundational assumptions in NVC -An NVC exercise You can find the page for this podcast here: https://futureoflife.org/2020/11/02/maria-arpa-on-the-power-of-nonviolent-communication/ Timestamps: 0:00 Intro 2:50 What is nonviolent communication? 4:05 How is NVC different from normal discourse? 18:40 NVC’s four components: observations, feelings, needs, and requests 34:50 NVC for systemic change 54:20 The foundational assumptions of NVC 58:00 An exercise in NVC This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable,

  • Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism

    15/10/2020 Duration: 01h39min

    Stephen Batchelor, a Secular Buddhist teacher and former monk, joins the FLI Podcast to discuss the project of awakening, the facets of human nature which contribute to extinction risk, and how we might better embrace existential threats. Topics discussed in this episode include: -The projects of awakening and growing the wisdom with which to manage technologies -What might be possible from embarking on the project of waking up -Facets of human nature that contribute to existential risk -The dangers of the problem-solving mindset -Improving the effective altruism and existential risk communities You can find the page for this podcast here: https://futureoflife.org/2020/10/15/stephen-batchelor-on-awakening-embracing-existential-risk-and-secular-buddhism/ Timestamps: 0:00 Intro 3:40 Albert Einstein and the quest for awakening 8:45 Non-self, emptiness, and non-duality 25:48 Stephen's conception of awakening, and making the wise more powerful vs the powerful more wise 33:32 The importance of insight 49:45 Th

  • Kelly Wanser on Climate Change as a Possible Existential Threat

    30/09/2020 Duration: 01h45min

    Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change. Topics discussed in this episode include: - The risks of climate change in the short term - Tipping points and tipping cascades - Climate intervention via marine cloud brightening and releasing particles in the stratosphere - The benefits and risks of climate intervention techniques - The international politics of climate change and weather modification You can find the page for this podcast here: https://futureoflife.org/2020/09/30/kelly-wanser-on-marine-cloud-brightening-for-mitigating-climate-change/ Video recording of this podcast here: https://youtu.be/CEUEFUkSMHU Timestamps: 0:00 Intro 2:30 What is SilverLining’s mission? 4:27 Why is climate change thought to be very risky in the next 10-30 years? 8:40 Tipping points and tipping cascades 13:25 Is climate change an existential risk? 17:39 Earth systems that help to stabilize the climate 21:23 Days wh

  • Andrew Critch on AI Research Considerations for Human Existential Safety

    16/09/2020 Duration: 01h51min

    In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety and the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as being the most likely source of existential risk: the possibility of externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives.  Topics discussed in this episode include: - The mainstream computer science view of AI existential risk - Distinguishing AI safety from AI existential safety  - The need for more precise terminology in the field of AI existential safety and alignment - The concept of prepotent AI systems

  • Iason Gabriel on Foundational Philosophical Questions in AI Alignment

    03/09/2020 Duration: 01h54min

    In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely explicitly enter the picture. In the realm of AI alignment, however, the normative and technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI brings its own normative and metaethical beliefs that will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and technical in AI alignment and to discuss his recent paper Artificial Intelligence, Values and Alignment.     Topics discussed in this episode include: -How moral philosophy and political theory are deeply related to AI alignment -The p

  • Peter Railton on Moral Learning and Metaethics in AI Systems

    18/08/2020 Duration: 01h41min

    From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to successfully navigate our way through complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become more and more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics. Topics discussed in this episode include: -Moral epistemology -The potential relevance of metaethics to AI alignment -The importance of moral learning in AI sy

  • Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

    01/07/2020 Duration: 01h37min

    It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want.  Human preferences and values are inevitably left out and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences for human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer looks like yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, as well as to evaluate three proposals for building safe advanced AI.  Topics discussed

  • Barker - Hedonic Recalibration (Mix)

    26/06/2020 Duration: 43min

    This is a mix by Barker, Berlin-based music producer, that was featured on our last podcast: Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix). We hope that you'll find inspiration and well-being in this soundscape. You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/ Tracklist: Delta Rain Dance - 1 John Beltran - A Different Dream Rrose - Horizon Alexandroid - lvpt3 Datassette - Drizzle Fort Conrad Sprenger - Opening JakoJako - Wavetable#1 Barker & David Goldberg - #3 Barker & Baumecker - Organik (Intro) Anthony Linell - Fractal Vision Ametsub - Skydroppin’ Ladyfish\Mewark - Comfortable JakoJako & Barker - [unreleased] Where to follow Sam Barker : Soundcloud: @voltek Twitter: twitter.com/samvoltek Instagram: www.instagram.com/samvoltek/ Website: www.voltek-labs.net/ Bandcamp: sambarker.bandcamp.com/ Where to follow Sam's label,

Page 1 of 7