The Future of Life

Information:

Synopsis

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges. Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI's opinions or views.

Episodes

  • Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)

    24/06/2020 Duration: 01h42min

    Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam creates euphoric soundscapes inspired by the writings of David Pearce, exemplified most fully in his latest album, aptly named "Utility." Sam's artistry, motivated by blissful visions of the future, and David's philosophical and technological writings on the potential for the biological domestication of heaven make for a natural fusion of artistic, moral, and intellectual excellence. This podcast explores what significance Sam found in David's work, how it informed his music production, and Sam and David's optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content. Topics discussed in this episode include: -The relationship between Sam's music and David's writing -Existential hope -Ideas from The Hedonistic Imperative -Sam's albums …

  • Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI

    15/06/2020 Duration: 01h52min

    Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, bestselling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more. Topics discussed in this episode include: -The historical and intellectual foundations of AI -How AI systems achieve or do not achieve intelligence in the same way as the human mind …

  • Sam Harris on Global Priorities, Existential Risk, and What Matters Most

    01/06/2020 Duration: 01h32min

    Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power, and to the need to cultivate the wisdom with which to steer it toward beneficial outcomes. If we're serious, both as individuals and as a species, about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today, and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them. Topics discussed in this episode include: -The problem of communication -Global priorities -Existential risk -Animal suffering in both wild animals and factory farmed animals -Global poverty -Artificial general intelligence risk and AI alignment -Ethics -Sam's book, The Moral Landscape

  • FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church

    15/05/2020 Duration: 01h13min

    Progress in synthetic biology and genetic engineering promises to bring advances in human health by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents that could pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing knowledge of synthetic biology, its role in the future of life, and what we can do to make sure it remains beneficial. Will our wisdom keep pace with our expanding capabilities? Topics discussed in this episode include: -Existential risk -Computational substrates and AGI -Genetics and aging -Risks of synthetic biology -Obstacles to space colonization -Great Filters, consciousness, and eliminating suffering You can find the page for this podcast here: https://futureoflife.org/2020/05/15/on-the-future-of-computation-synth

  • FLI Podcast: On Superforecasting with Robert de Neufville

    30/04/2020 Duration: 01h20min

    Essential to our assessment of risk and our ability to plan for the future is our understanding of the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effectively. Predicting the future is, obviously, far from easy, and yet a community of "superforecasters" is attempting to do just that. Not only are they trying, but these superforecasters are reliably outperforming subject matter experts at making predictions in their own fields. Robert de Neufville joins us on this episode of the FLI Podcast to explain what superforecasting is, how it's done, and the ways it can help us with crucial decision making. Topics discussed in this episode include: -What superforecasting is and what the community looks like -How superforecasting is done and its potential use in decision making -The challenges of making predictions -Predictions about and lessons from COVID-19 …

  • AIAP: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

    15/04/2020 Duration: 02h21min

    Just a year ago we released a two-part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation provided details on the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned systems. Much has happened in the past twelve months, so we've invited Rohin, along with fellow researcher Buck Shlegeris, back for a follow-up conversation. Today's episode focuses especially on the state of current research efforts for beneficial AI, as well as Buck's and Rohin's thoughts about the varying approaches and the difficulties we still face. This podcast thus serves as a non-exhaustive overview of how the field of AI alignment has updated and how thinking is progressing. Topics discussed in this episode include: -Rohin's and Buck's optimism and pessimism about different approaches to aligned AI -Traditional arguments for AI as an x-risk -Modeling agents as expected utility maximizers -Ambitious value learning …

  • FLI Podcast: Lessons from COVID-19 with Emilia Javorsky and Anthony Aguirre

    09/04/2020 Duration: 01h26min

    The global spread of COVID-19 has put tremendous stress on humanity's social, political, and economic systems. The breakdowns triggered by this sudden stress indicate areas where national and global systems are fragile, and where preventative and preparedness measures may be insufficient. The COVID-19 pandemic thus serves as an opportunity to reflect on the strengths and weaknesses of human civilization and on what we can do to help make humanity more resilient. The Future of Life Institute's Emilia Javorsky and Anthony Aguirre join us on this special episode of the FLI Podcast to explore the lessons that might be learned from COVID-19 and the perspective this gives us on global catastrophic and existential risk. Topics discussed in this episode include: -The importance of taking expected value calculations seriously -The need for making accurate predictions -The difficulty of taking probabilities seriously -Human psychological bias around estimating and acting on risk -The massive online prediction …

  • FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

    01/04/2020 Duration: 01h10min

    Toby Ord's "The Precipice: Existential Risk and the Future of Humanity" has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. "The Precipice" thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time. Topics discussed in this episode include: -An overview of Toby's new book -What it means to be standing at the precipice …

  • AIAP: On Lethal Autonomous Weapons with Paul Scharre

    16/03/2020 Duration: 01h16min

    Lethal autonomous weapons represent the novel miniaturization and integration of modern AI and robotics technologies for military use. This emerging technology thus represents a potentially critical inflection point in the development of AI governance. Whether we allow AI to make the decision to take human life, and where we draw lines around the acceptable and unacceptable uses of this technology, will set precedents for future international AI collaboration and governance. Such regulation efforts, or the lack thereof, will also shape the kinds of weapons technologies that proliferate in the 21st century. On this episode of the AI Alignment Podcast, Paul Scharre joins us to discuss autonomous weapons, their potential benefits and risks, and the ongoing debate around the regulation of their development and use. Topics discussed in this episode include: -What autonomous weapons are and how they may be used -The debate around acceptable and unacceptable uses of autonomous weapons …

  • FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O'Keefe

    28/02/2020 Duration: 01h04min

    As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, or will it become increasingly concentrated in the hands of the few who wield AI technologies? Cullen O'Keefe joins us on this episode of the FLI Podcast for a conversation about the Windfall Clause, a mechanism that attempts to ensure that the abundance and wealth created by transformative AI benefits humanity globally. Topics discussed in this episode include: -What the Windfall Clause is and how it might function -The need for such a mechanism given AGI-generated economic windfall -Problems the Windfall Clause would help to remedy -The mechanism for distributing windfall profit and the function for defining such profit -The legal permissibility of the Windfall Clause

  • AIAP: On the Long-term Importance of Current AI Policy with Nicolas Moës and Jared Brown

    18/02/2020 Duration: 01h11min

    From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance- and policy-related solutions become an attractive area of consideration. But just what can anyone do in the present-day policy sphere to help ensure that powerful AI systems remain beneficial and aligned with human values? Do today's AI policies matter at all for AGI risk? Jared Brown and Nicolas Moës join us on today's podcast to explore these questions and the importance of involving people concerned about AGI risk in present-day AI policy discourse. Topics discussed in this episode include: -The importance of current AI policy work for long-term AI risk -Where we currently stand in the process of forming AI policy -Why people worried about existential risk should care about present-day AI policy -AI and the global community …

  • FLI Podcast: Identity, Information & the Nature of Reality with Anthony Aguirre

    31/01/2020 Duration: 01h45min

    Our perceptions of reality are based on the physics of interactions ranging from millimeters to miles in scale. But when it comes to the very small and the very massive, our intuitions often fail us. Given the extent to which modern physics challenges our understanding of the world around us, how wrong could we be about the fundamental nature of reality? And given our failure to anticipate the counterintuitive nature of the universe, how accurate are our intuitions about metaphysical and personal identity? Just how seriously should we take our everyday experiences of the world? Anthony Aguirre, cosmologist and FLI co-founder, returns for a second episode to offer his perspective on these complex questions. This conversation explores the view that reality fundamentally consists of information and examines its implications for our understandings of existence and identity. Topics discussed in this episode include: - Views on the nature of reality - Quantum mechanics and the implications of quantum uncertainty …

  • AIAP: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson

    16/01/2020 Duration: 02h03min

    In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of light. There, a similar machine recreates your exact atomic structure and composition using locally available resources. Have you just traveled, Parfit asks, or have you committed suicide? Would you step into this machine? Is the person who emerges on Mars really you? Questions like these, which explore the nature of personal identity and challenge our commonly held intuitions about it, are becoming increasingly important in the face of 21st century technology. Emerging technologies empowered by artificial intelligence will increasingly give us the power to change what it means to be human. AI-enabled bio-engineering will allow for human-species divergence via upgrades, and as we arrive at AGI and beyond we may see a world where …

  • On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark

    31/12/2019 Duration: 01h58s

    Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society, and humanity's future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise, spanning physics, artificial intelligence, history, philosophy, and anthropology, to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies, and the role of myths and stories in fostering societal collaboration and meaning. We hope that you'll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us. Topics discussed include: -Max and Yuval's views and intuitions about consciousness …

  • FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team

    28/12/2019 Duration: 01h39min

    As 2019 comes to an end and the opportunities of 2020 begin to emerge, it's a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that could lead to the extinction of Earth-originating intelligent life, or to a permanent and drastic curtailing of its potential. While this is important and useful, much has been done at FLI and in the broader world to address these issues in service of the common good. It is worthwhile to reflect on this progress to see how far we've come, to develop hope for the future, and to map out our path ahead. This podcast is a special end-of-year episode focused on meeting and introducing the FLI team, discussing what we've accomplished and are working on, and sharing our feelings and reasons for existential hope going into 2020 and beyond. Topics discussed include: -Introductions to the FLI team and our work -Motivations for our projects and existential risk mitigation efforts

  • AIAP: On DeepMind, AI Safety, and Recursive Reward Modeling with Jan Leike

    16/12/2019 Duration: 58min

    Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind's technical AGI group; each focuses on a different aspect of ensuring that advanced AI systems are aligned and beneficial. Jan's journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empirical AI safety research: why empirical safety research is important, and how it has led him to his work on recursive reward modeling. We also discuss research directions he's optimistic will lead to safely scalable systems, more facets of his own thinking, and other work being done at DeepMind. Topics discussed in this episode include: -Theoretical and empirical AI safety research -Jan's and DeepMind's approaches to AI safety -Jan's work and thoughts on recursive reward modeling -AI safety benchmarking at DeepMind

  • FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert

    02/12/2019 Duration: 58min

    We could all be more altruistic and effective in our service of others, but what exactly is stopping us? What are the biases and cognitive failures that prevent us from properly acting in response to existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at the University of Oxford's Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psychology and philosophy. This conversation explores the steps that researchers like Stefan are taking to better understand psychology in service of doing the most good we can. Topics discussed include: -The psychology of existential risk, longtermism, effective altruism, and speciesism -Stefan's study "The Psychology of Existential Risks: Moral Judgements about Human Extinction" -Various works and studies Stefan Schubert has co-authored in these spaces -How this enables us to be more altruistic …

  • Not Cool Epilogue: A Climate Conversation

    27/11/2019 Duration: 04min

    In this brief epilogue, Ariel reflects on what she's learned during the making of Not Cool, and the actions she'll be taking going forward.

  • Not Cool Ep 26: Naomi Oreskes on trusting climate science

    26/11/2019 Duration: 51min

    It's the Not Cool series finale, and by now we've heard from climate scientists, meteorologists, physicists, psychologists, epidemiologists, and ecologists. We've gotten expert opinions on everything from mitigation and adaptation to security, policy, and finance. Today, we're tackling one final question: why should we trust them? Ariel is joined by Naomi Oreskes, Harvard professor and author of seven books, including the newly released "Why Trust Science?" Naomi lays out her case for why we should listen to experts, how we can identify the best experts in a field, and why we should be open to the idea of more than one type of "scientific method." She also discusses industry-funded science, scientists' misconceptions about the public, and the role of the media in proliferating bad research. Topics discussed include: -Why Trust Science? -5 tenets of reliable science -How to decide which experts to trust -Why non-scientists can't debate science -Industry disinformation -How to communicate science -Fact-value distinction

  • Not Cool Ep 25: Mario Molina on climate action

    21/11/2019 Duration: 35min

    Most Americans believe in climate change, yet far too few are taking part in climate action, and many aren't even sure what effective climate action should look like. On Not Cool episode 25, Ariel is joined by Mario Molina, Executive Director of Protect Our Winters, a non-profit aimed at increasing climate advocacy within the outdoor sports community. In this interview, Mario looks at climate activism more broadly: he explains where advocacy has fallen short, why it's important to hold corporations responsible before individuals, and what it would look like for the US to be a global leader on climate change. He also discusses the reforms we should be implementing, the hypocrisy allegations sometimes leveled at the climate advocacy community, and the misinformation campaign undertaken by the fossil fuel industry in the '90s. Topics discussed include: -Civic engagement and climate advocacy -Recent climate policy rollbacks -Local vs. global action -Energy and transportation reform -Agricultural reform -Overcoming …
