The Future Of Life

Information:

Synopsis

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges. Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI's opinions or views.

Episodes

  • The Byzantine Generals' Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi

    07/02/2019 Duration: 50min

    Three generals are voting on whether to attack or retreat from their siege of a castle. One of the generals is corrupt and two of them are not. What happens when the corrupt general sends different answers to the other two generals? A Byzantine fault is "a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine Generals' Problem", developed to describe this condition, where actors must agree on a concerted strategy to avoid catastrophic system failure, but some of the actors are unreliable." The Byzantine Generals' Problem and the associated issues in maintaining reliable distributed computing networks are illuminating both for AI alignment and for modern networks we interact with, like YouTube, Facebook, or Google. By exploring this space, we are shown the limits of reliable distributed computing, the safety concerns and threats in this space,
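    The following is a minimal sketch (our illustration, not material from the episode) of the three-general scenario described above: each loyal general decides by a simple majority over its own vote and the votes it receives, and a single traitor who sends conflicting messages can drive the two loyal generals to opposite decisions. The names and the single-round majority rule are illustrative assumptions.

    ```python
    # Minimal sketch of the three-general scenario above (illustrative only).
    # With three generals and one Byzantine (arbitrarily faulty) traitor, a single
    # round of vote exchange plus majority voting cannot guarantee agreement;
    # classic results show that tolerating f such faults requires at least 3f + 1 generals.

    def decide(own_vote, received_votes):
        """A loyal general's rule: majority over its own vote and the votes it received."""
        votes = [own_vote] + received_votes
        return "attack" if votes.count("attack") > len(votes) / 2 else "retreat"

    # The two loyal generals, A and B, are initially split on the plan.
    a_vote, b_vote = "attack", "retreat"

    # The traitor, C, tells each loyal general whatever that general already prefers.
    a_decision = decide(a_vote, [b_vote, "attack"])   # A hears: B says retreat, C says attack
    b_decision = decide(b_vote, [a_vote, "retreat"])  # B hears: A says attack, C says retreat

    print(a_decision, b_decision)  # -> attack retreat: the loyal generals disagree
    ```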

  • AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy

    31/01/2019 Duration: 01h02min

    Every January, we like to look back over the past 12 months at the progress that’s been made in the world of artificial intelligence. Welcome to our annual “AI breakthroughs” podcast, 2018 edition. Ariel was joined for this retrospective by researchers Roman Yampolskiy and David Krueger. Roman is an AI Safety researcher and professor at the University of Louisville. He also recently published the book, Artificial Intelligence Safety & Security. David is a PhD candidate in the Mila lab at the University of Montreal, where he works on deep learning and AI safety. He's also worked with safety teams at the Future of Humanity Institute and DeepMind and has volunteered with 80,000 Hours. Roman and David shared their lists of 2018’s most promising AI advances, as well as their thoughts on some major ethical questions and safety concerns. They also discussed media coverage of AI research, why talking about “breakthroughs” can be misleading, and why there may have been more progress in the past year than it seems.

  • Artificial Intelligence: American Attitudes and Trends with Baobao Zhang

    25/01/2019 Duration: 32min

    Our phones, our cars, our televisions, our homes: they’re all getting smarter. Artificial intelligence is already inextricably woven into everyday life, and its impact will only grow in the coming years. But while this development inspires much discussion among members of the scientific community, public opinion on artificial intelligence has remained relatively unknown. Artificial Intelligence: American Attitudes and Trends, a report published earlier in January by the Center for the Governance of AI, explores this question. Its authors relied on an in-depth survey to analyze American attitudes towards artificial intelligence, from privacy concerns to beliefs about U.S. technological superiority. Some of their findings--most Americans, for example, don’t trust Facebook--were unsurprising. But much of their data reflects trends within the American public that have previously gone unnoticed. This month Ariel was joined by Baobao Zhang, lead author of the report, to talk about these findings. Zhang is a

  • AIAP: Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell (Beneficial AGI 2019)

    17/01/2019 Duration: 51min

    What motivates cooperative inverse reinforcement learning? What can we gain from recontextualizing our safety efforts from the CIRL point of view? What possible role can pre-AGI systems play in amplifying normative processes? Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell is the eighth podcast in the AI Alignment Podcast series, hosted by Lucas Perry, and was recorded at the Beneficial AGI 2019 conference in Puerto Rico. For those of you that are new, this series covers and explores the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, Lucas will speak with technical and non-technical researchers across areas such as machine learning, governance, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your pref

  • Existential Hope in 2019 and Beyond

    21/12/2018 Duration: 02h06min

    Humanity is at a turning point. For the first time in history, we have the technology to completely obliterate ourselves. But we’ve also created boundless possibilities for all life that could enable just about any brilliant future we can imagine. Humanity could erase itself with a nuclear war or a poorly designed AI, or we could colonize space and expand life throughout the universe: As a species, our future has never been more open-ended. The potential for disaster is often more visible than the potential for triumph, so as we prepare for 2019, we want to talk about existential hope, and why we should actually be more excited than ever about the future. In this podcast, Ariel talks to six experts--Anthony Aguirre, Max Tegmark, Gaia Dempsey, Allison Duettmann, Josh Clark, and Anders Sandberg--about their views on the present, the future, and the path between them. Anthony and Max are both physics professors and cofounders of FLI. Gaia is a tech enthusiast and entrepreneur, and with her newest venture, 7

  • AIAP: Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah

    18/12/2018 Duration: 01h07min

    What role does inverse reinforcement learning (IRL) have to play in AI alignment? What issues complicate IRL and how does this affect the usefulness of this preference learning methodology? What sort of paradigm of AI alignment ought we to take up given such concerns? Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah is the seventh podcast in the AI Alignment Podcast series, hosted by Lucas Perry. For those of you that are new, this series is covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, governance, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following us or subscribing to our podcasts on YouTube, SoundCloud, or your prefe

  • Governing Biotechnology: From Avian Flu to Genetically-Modified Babies With Catherine Rhodes

    30/11/2018 Duration: 32min

    A Chinese researcher recently made international news with claims that he had created the first gene-edited human babies using CRISPR. In doing so, he violated international ethics standards, and he appears to have acted without his funders or his university knowing. But this is only the latest example of biological research triggering ethical concerns. A few years ago, gain-of-function research that made avian flu more virulent also sparked controversy when scientists tried to publish their work. And there’s been extensive debate globally about the ethics of human cloning. As biotechnology and other emerging technologies become more powerful, the dual-use nature of research -- that is, research that can have both beneficial and risky outcomes -- is increasingly important to address. How can scientists and policymakers work together to ensure that regulations and governance of technological development will enable researchers to do good with their work, while decreasing the threats? On this month’s podcast, Ariel spoke wi

  • Avoiding the Worst of Climate Change with Alexander Verbeek and John Moorhead

    31/10/2018 Duration: 01h21min

    “There are basically two choices. We're going to massively change everything we are doing on this planet, the way we work together, the actions we take, the way we run our economy, and the way we behave towards each other and towards the planet and towards everything that lives on this planet. Or we sit back and relax and we just let the whole thing crash. The choice is so easy to make, even if you don't care at all about nature or the lives of other people. Even if you just look at your own interests and look purely through an economical angle, it is just a good return on investment to take good care of this planet.” - Alexander Verbeek On this month’s podcast, Ariel spoke with Alexander Verbeek and John Moorhead about what we can do to avoid the worst of climate change. Alexander is a Dutch diplomat and former strategic policy advisor at the Netherlands Ministry of Foreign Affairs. He created the Planetary Security Initiative where representatives from 75 countries meet annually on the climate change-secur

  • AIAP: On Becoming a Moral Realist with Peter Singer

    18/10/2018 Duration: 51min

    Are there such things as moral facts? If so, how might we be able to access them? Peter Singer started his career as a preference utilitarian and a moral anti-realist, and then over time became a hedonic utilitarian and a moral realist. How does such a transition occur, and which positions are more defensible? How might objectivism in ethics affect AI alignment? What does this all mean for the future of AI? On Becoming a Moral Realist with Peter Singer is the sixth podcast in the AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, w

  • On the Future: An Interview with Martin Rees

    11/10/2018 Duration: 53min

    How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks? In this special podcast episode, Ariel speaks with cosmologist Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity’s existential risks and the role that technology plays in determining our collective future. Topics discussed in this episode include:
    - Why Martin remains a technical optimist even as he focuses on existential risks
    - The economics and ethics of climate change
    - How AI and automation will make it harder for Africa and the Middle East to economically develop
    - How high expectations for health care and quality of life also put society at risk
    - Why growing inequality could be our most underappreciated global risk
    - Martin’s view that biotechnology poses greater risk than AI
    - Earth’s carrying capacity and the dangers of overpopulati

  • AI and Nuclear Weapons - Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

    28/09/2018 Duration: 51min

    On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official, and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is a professor of political science at the University of Pennsylvania, and the author of The Diffusion of Military Power: Causes and Consequences for International Politics. Topics discussed in this episode include:
    - The sophisticated military robots developed by the Soviets during the Cold War
    - How technology shapes human decision-making in war
    - “Automation bias” and why having a “human in the loop” is much trickier than it sounds
    - The United States’ stance on automation with nuclear weapons
    - Why weaker countries might have more incentive to build AI into warfare
    - How the US and Russia perc

  • AIAP: Moral Uncertainty and the Path to AI Alignment with William MacAskill

    18/09/2018 Duration: 56min

    How are we to make progress on AI alignment given moral uncertainty?  What are the ideal ways of resolving conflicting value systems and views of morality among persons? How ought we to go about AI alignment given that we are unsure about our normative and metaethical theories? How should preferences be aggregated and persons idealized in the context of our uncertainty? Moral Uncertainty and the Path to AI Alignment with William MacAskill is the fifth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope th

  • AI: Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins

    31/08/2018 Duration: 44min

    Experts predict that artificial intelligence could become the most transformative innovation in history, eclipsing both the development of agriculture and the industrial revolution. And the technology is developing far faster than the average bureaucracy can keep up with. How can local, national, and international governments prepare for such dramatic changes and help steer AI research and use in a more beneficial direction? On this month’s podcast, Ariel spoke with Allan Dafoe and Jessica Cussins about how different countries are addressing the risks and benefits of AI, and why AI is such a unique and challenging technology to effectively govern. Allan is the Director of the Governance of AI Program at the Future of Humanity Institute, and his research focuses on the international politics of transformative artificial intelligence. Jessica is an AI Policy Specialist with the Future of Life Institute, and she's also a Research Fellow with the UC Berkeley Center for Long-term Cybersecurity, where she conducts

  • The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce

    16/08/2018 Duration: 01h45min

    What role does metaethics play in AI alignment and safety? How might paths to AI alignment change given different metaethical views? How do issues in moral epistemology, motivation, and justification affect value alignment? What might be the metaphysical status of suffering and pleasure?  What's the difference between moral realism and anti-realism and how is each view grounded?  And just what does any of this really have to do with AI? The Metaethics of Joy, Suffering, and AI Alignment is the fourth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial

  • Six Experts Explain the Killer Robots Debate

    31/07/2018 Duration: 02h12s

    Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it’s complicated. In this month’s podcast, Ariel spoke with experts from a variety of perspectives on the current status of LAWS, where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre, artificial intelligence professor Toby Walsh, Article 36 founder Richard Moyes, Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty, and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro. If you don't have time to listen to the podcast in full, or if you want to skip around through the interviews, each interview starts at the timestamp below:
    - Paul Scharre: 3:40
    - Toby Walsh: 40:50
    - Richard Moyes: 53:30
    - Mary Wareham & Bonnie Docherty

  • AIAP: AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy

    16/07/2018 Duration: 01h22min

    What role does cyber security play in alignment and safety? What is AI completeness? What is the space of mind design and what does it tell us about AI safety? How does the possibility of machine qualia fit into this space? Can we leak-proof the singularity to ensure we are able to test AGI? And what is computational complexity theory anyway? AI Safety, Possible Minds, and Simulated Worlds is the third podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are new, this series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope that you will join in the conversations by following

  • Mission AI - Giving a Global Voice to the AI Discussion With Charlie Oliver and Randi Williams

    29/06/2018 Duration: 52min

    How are emerging technologies like artificial intelligence shaping our world and how we interact with one another? What do different demographics think about AI risk and a robot-filled future? And how can the average citizen contribute not only to the AI discussion, but also to AI's development? On this month's podcast, Ariel spoke with Charlie Oliver and Randi Williams about how technology is reshaping our world, and how their new project, Mission AI, aims to broaden the conversation and include everyone's voice. Charlie is the founder and CEO of the digital media strategy company Served Fresh Media, and she's also the founder of Tech 2025, which is a platform and community for people to learn about emerging technologies and discuss the implications of emerging tech on society. Randi is a doctoral student in the personal robotics group at the MIT Media Lab. She wants to understand children's interactions with AI, and she wants to develop educational platforms that empower non-experts to develop their own AI systems

  • AIAP: Astronomical Future Suffering and Superintelligence with Kaj Sotala

    14/06/2018 Duration: 01h14min

    In the classic taxonomy of risks developed by Nick Bostrom, existential risks are characterized as risks which are both terminal in severity and transgenerational in scope. If we were to maintain the scope of a risk as transgenerational and increase its severity past terminal, what would such a risk look like? What would it mean for a risk to be transgenerational in scope and hellish in severity? In this podcast, Lucas spoke with Kaj Sotala, an associate researcher at the Foundational Research Institute. He has previously worked for the Machine Intelligence Research Institute, and has publications on AI safety, AI timeline forecasting, and consciousness research. Topics discussed in this episode include:
    - The definition of and a taxonomy of suffering risks
    - How superintelligence has special leverage for generating or mitigating suffering risks
    - How different moral systems view suffering risks
    - What is possible of minds in general and how this plays into suffering risks
    - The probability of suffering risks
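    To make the two-axis framing above concrete, here is a minimal sketch (our illustration, not from the episode): a toy classifier over the scope and severity labels used in the description, in which transgenerational scope plus terminal severity marks an existential risk, and pushing severity past terminal to hellish marks a suffering risk. The label set and the function are illustrative assumptions; Bostrom's full chart uses finer-grained categories.

    ```python
    # Toy sketch (illustrative, not from the episode) of the two-axis risk taxonomy
    # described above: a risk is placed by its scope and its severity.
    #   transgenerational scope + terminal severity -> existential risk
    #   transgenerational scope + hellish severity  -> suffering risk (s-risk)

    def classify_risk(scope: str, severity: str) -> str:
        """Classify a risk by the (scope, severity) labels used in the description."""
        if scope == "transgenerational":
            if severity == "terminal":
                return "existential risk"
            if severity == "hellish":
                return "suffering risk (s-risk)"
        return "other risk"

    print(classify_risk("transgenerational", "terminal"))  # existential risk
    print(classify_risk("transgenerational", "hellish"))   # suffering risk (s-risk)
    print(classify_risk("global", "endurable"))            # other risk
    ```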

  • Nuclear Dilemmas, From North Korea to Iran with Melissa Hanham and Dave Schmerler

    31/05/2018 Duration: 42min

    With the U.S. pulling out of the Iran deal and canceling (and potentially un-canceling) the summit with North Korea, nuclear weapons have been front and center in the news this month. But will these disagreements lead to a world with even more nuclear weapons? And how did the recent nuclear situations with North Korea and Iran get so tense? To learn more about the geopolitical issues surrounding North Korea’s and Iran’s nuclear situations, as well as to learn how nuclear programs in these countries are monitored, Ariel spoke with Melissa Hanham and Dave Schmerler on this month's podcast. Melissa and Dave are both nuclear weapons experts with the Center for Nonproliferation Studies at Middlebury Institute of International Studies, where they research weapons of mass destruction with a focus on North Korea. Topics discussed in this episode include: the progression of North Korea's quest for nukes, what happened and what’s next regarding the Iran deal, how to use open-source data to monitor nuclear

  • What are the odds of nuclear war? A conversation with Seth Baum and Robert de Neufville

    30/04/2018 Duration: 57min

    What are the odds of a nuclear war happening this century? And how close have we been to nuclear war in the past? Few academics focus on the probability of nuclear war, but many leading voices, like former US Secretary of Defense William Perry, argue that the threat of nuclear conflict is growing. On this month's podcast, Ariel spoke with Seth Baum and Robert de Neufville from the Global Catastrophic Risk Institute (GCRI), who recently coauthored a report titled A Model for the Probability of Nuclear War. The report examines 60 historical incidents that could have escalated to nuclear war and presents a model for determining the odds that we could have some type of nuclear war in the future.
