The Future Of Life

Information:

Synopsis

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges. Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI's opinions or views.

Episodes

  • Not Cool Ep 3: Tim Lenton on climate tipping points

    05/09/2019 Duration: 38min

    What is a climate tipping point, and how do we know when we’re getting close to one? On Episode 3 of Not Cool, Ariel talks to Dr. Tim Lenton, Professor and Chair in Earth System Science and Climate Change at the University of Exeter and Director of the Global Systems Institute. Tim explains the shifting system dynamics that underlie phenomena like glacial retreat and the disruption of monsoons, as well as their consequences. He also discusses how to deal with low-certainty, high-stakes risks, what types of policies we most need to be implementing, and how humanity’s unique self-awareness impacts our relationship with the Earth. Topics discussed include: climate tipping points, their impacts, and early warning signals; evidence that the climate is nearing a tipping point; IPCC warming targets; risk management under uncertainty; climate policies; social, economic, and technological human tipping points; and the Gaia Hypothesis.
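
    One of the warning signals Tim studies is "critical slowing down." As a rough illustration (a toy model for this write-up, not anything from the episode), the Python sketch below simulates a system drifting toward a fold-type tipping point and shows the lag-1 autocorrelation of its fluctuations rising as the tipping point approaches:

      import numpy as np

      # Toy model of a system near a fold (saddle-node) tipping point:
      #   dx/dt = c - x^2 + noise
      # Stable state at x* = sqrt(c); the tipping point sits at c = 0.
      rng = np.random.default_rng(0)
      dt, sigma = 0.01, 0.05

      for c in np.linspace(1.0, 0.05, 5):      # ramp the control parameter toward 0
          x, xs = np.sqrt(c), []
          for step in range(50_000):
              x += (c - x**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
              if step % 10 == 0:               # subsample so lag 1 spans 0.1 time units
                  xs.append(x)
          xs = np.array(xs[1_000:])            # discard the transient
          f = xs - xs.mean()
          ac1 = np.corrcoef(f[:-1], f[1:])[0, 1]
          print(f"c = {c:.2f}  lag-1 autocorrelation = {ac1:.3f}")
      # The autocorrelation climbs toward 1 as c -> 0: recovery from small
      # perturbations slows down, and that slowdown is measurable before the
      # system actually tips.

    In real climate records, the same statistic is computed over sliding windows of observational data to look for an approaching transition.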

  • Not Cool Ep 2: Joanna Haigh on climate modeling and the history of climate change

    03/09/2019 Duration: 28min

    On the second episode of Not Cool, Ariel delves into some of the basic science behind climate change and the history of its study. She is joined by Dr. Joanna Haigh, an atmospheric physicist whose work has been foundational to our current understanding of how the climate works. Joanna is a Fellow of the Royal Society and recently retired as Co-Director of the Grantham Institute on Climate Change and the Environment at Imperial College London. Here, she gives a historical overview of the field of climate science and the major breakthroughs that moved it forward. She also discusses her own work on the stratosphere, radiative forcing, solar variability, and more. Topics discussed include: the history of the study of climate change; an overview of climate modeling; radiative forcing; what’s changed in climate science in the past few decades; how to distinguish between natural climate variation and human-induced global warming; and solar variability, sun spots, and the effect of the sun on the climate.
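
    For readers new to the term: radiative forcing is the change in net energy flux at the top of the atmosphere caused by a perturbation such as added CO2. A standard back-of-the-envelope approximation (the widely cited Myhre et al. 1998 fit, not a formula quoted in the episode) is:

      % Simplified CO2 radiative forcing (Myhre et al., 1998):
      %   \Delta F : forcing in W m^{-2}
      %   C, C_0   : current and reference CO2 concentrations
      \Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}

    Doubling CO2 gives roughly 5.35 ln 2, about 3.7 W m^{-2}; multiplied by a typical climate sensitivity parameter of about 0.8 K per W m^{-2}, that corresponds to roughly 3 K of equilibrium warming.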

  • Not Cool Ep 1: John Cook on misinformation and overcoming climate silence

    03/09/2019 Duration: 36min

    On the premiere of Not Cool, Ariel is joined by John Cook: psychologist, climate change communication researcher, and founder of SkepticalScience.com. Much of John’s work focuses on misinformation related to climate change, how it’s propagated, and how to counter it. He offers a historical analysis of climate denial and the motivations behind it, and he debunks some of its most persistent myths. John also discusses his own research on perceived social consensus, the phenomenon he’s termed “climate silence,” and more. Topics discussed include: the history of the study of climate change; climate denial, its history and motivations; persistent climate myths; how to overcome misinformation; how to talk to climate deniers; and perceived social consensus and climate silence.

  • Not Cool Prologue: A Climate Conversation

    03/09/2019 Duration: 03min

    In this short trailer, Ariel Conn talks about FLI's newest podcast series, Not Cool: A Climate Conversation. Climate change, to state the obvious, is a huge and complicated problem. But unlike with the threats posed by artificial intelligence, biotechnology, or nuclear weapons, you don’t need an advanced science degree or a high-ranking government position to start having a meaningful impact on your own carbon footprint. Each of us can begin making lifestyle changes today that will help. We started this podcast because the news about climate change seems to get worse with each new article and report, but the solutions, at least as reported, remain vague and elusive. We wanted to hear from the scientists and experts themselves to learn what’s really going on and how we can all come together to solve this crisis.

  • FLI Podcast: Beyond the Arms Race Narrative: AI and China with Helen Toner and Elsa Kania

    30/08/2019 Duration: 49min

    Discussions of Chinese artificial intelligence often center around the trope of a U.S.-China arms race. On this month’s FLI podcast, we’re moving beyond this narrative and taking a closer look at the realities of AI in China and what they really mean for the United States. Experts Helen Toner and Elsa Kania, both of Georgetown University’s Center for Security and Emerging Technology, discuss China’s rise as a world AI power, the relationship between the Chinese tech industry and the military, and the use of AI in human rights abuses by the Chinese government. They also touch on Chinese-American technological collaboration, technological difficulties facing China, and what may determine international competitive advantage going forward. Topics discussed in this episode include: the rise of AI in China; the escalation of tensions between the U.S. and China in the AI realm; Chinese AI development plans and policy initiatives; the AI arms race narrative and the problems with it; civil-military fusion in China vs. the U.S.; …

  • AIAP: China's AI Superpower Dream with Jeffrey Ding

    16/08/2019 Duration: 01h12min

    "In July 2017, The State Council of China released the New Generation Artificial Intelligence Development Plan. This policy outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030. This officially marked the development of the AI sector as a national priority and it was included in President Xi Jinping’s grand vision for China." (FLI's AI Policy - China page) In the context of these developments and an increase in conversations regarding AI and China, Lucas spoke with Jeffrey Ding from the Center for the Governance of AI (GovAI). Jeffrey is the China lead for GovAI where he researches China's AI development and strategy, as well as China's approach to strategic technologies more generally. Topics discussed in this episode include: -China's historical relationships with technology development -China's AI goals and some recently released principles -Jeffrey Ding's work, Deciphering China's AI Dream -The central driv

  • FLI Podcast: The Climate Crisis as an Existential Threat with Simon Beard and Haydn Belfield

    01/08/2019 Duration: 01h09min

    Does the climate crisis pose an existential threat? And is that even the best way to formulate the question, or should we be looking at the relationship between the climate crisis and existential threats differently? In this month’s FLI podcast, Ariel was joined by Simon Beard and Haydn Belfield of the University of Cambridge’s Centre for the Study of Existential Risk (CSER), who explained why, despite the many unknowns, it might indeed make sense to study climate change as an existential threat. Simon and Haydn broke down the different systems underlying human civilization and the ways climate change threatens these systems. They also discussed our species’ unique strengths and vulnerabilities, and the ways in which technology has heightened both, with respect to the changing climate.

  • AIAP: On the Governance of AI with Jade Leung

    22/07/2019 Duration: 01h14min

    In this podcast, Lucas spoke with Jade Leung from the Center for the Governance of AI (GovAI). GovAI strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. The center focuses on the political challenges arising from transformative AI, and they seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, where her research focuses on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict. Topics discussed in this episode include: the landscape of AI governance; the Center for the Governance of AI’s research agenda and priorities; aligning government and companies with ideal governance and the common good; norms and efforts in the AI alignment community in this space; technical AI alignment vs. AI governance vs. malicious use cases; lethal autonomous weapons; …

  • FLI Podcast: Is Nuclear Weapons Testing Back on the Horizon? With Jeffrey Lewis and Alex Bell

    28/06/2019 Duration: 37min

    Nuclear weapons testing is mostly a thing of the past: the last nuclear weapon test explosion on US soil was conducted over 25 years ago. But how much longer can nuclear weapons testing remain a taboo that almost no country will violate? In an official statement from the end of May, the Director of the U.S. Defense Intelligence Agency (DIA) expressed the belief that both Russia and China were preparing for explosive tests of low-yield nuclear weapons, if not already testing. Such accusations could potentially be used by the U.S. to justify a breach of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). This month, Ariel was joined by Jeffrey Lewis, Director of the East Asia Nonproliferation Program at the Center for Nonproliferation Studies and founder of armscontrolwonk.com, and Alex Bell, Senior Policy Director at the Center for Arms Control and Non-Proliferation. Lewis and Bell discuss the DIA’s allegations, the history of the CTBT, why it’s in the U.S. interest to ratify the treaty, and more.

  • FLI Podcast: Applying AI Safety & Ethics Today with Ashley Llorens & Francesca Rossi

    31/05/2019 Duration: 38min

    In this month’s podcast, Ariel spoke with Ashley Llorens, the Founding Chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory, and Francesca Rossi, the IBM AI Ethics Global Leader at the IBM T.J. Watson Research Center and an FLI board member, about developing AI that will make us safer, more productive, and more creative. Too often, Rossi points out, we build our visions of the future around our current technology. Here, Llorens and Rossi take the opposite approach: let's build our technology around our visions for the future.

  • AIAP: On Consciousness, Qualia, and Meaning with Mike Johnson and Andrés Gómez Emilsson

    23/05/2019 Duration: 01h26min

    Consciousness is a concept at the forefront of much scientific and philosophical thinking. At the same time, there is wide disagreement over what consciousness exactly is and whether it can be fully captured by science or is best explained away by a reductionist understanding. Some believe consciousness to be the source of all value, while others take it to be a kind of delusion or confusion generated by algorithms in the brain. The Qualia Research Institute takes consciousness to be something substantial and real in the world that they expect can be captured by the language and tools of science and mathematics. To understand this position, we will have to unpack the philosophical motivations which inform this view, the intuition pumps which lend themselves to these motivations, and then explore the scientific process of investigation which is born of these considerations. Whether you take consciousness to be something real or illusory, the implications of these possibilities certainly have tremendous …

  • The Unexpected Side Effects of Climate Change with Fran Moore and Nick Obradovich

    30/04/2019 Duration: 51min

    It’s not just about the natural world. The side effects of climate change remain relatively unknown, but we can expect a warming world to impact every facet of our lives. In fact, as recent research shows, global warming is already affecting our mental and physical well-being, and this impact will only increase. Climate change could decrease the efficacy of our public safety institutions. It could damage our economies. It could even impact the way that we vote, potentially altering our democracies themselves. Yet even as these effects begin to appear, we’re already growing numb to the changing climate patterns behind them, and we’re failing to act. In honor of Earth Day, this month’s podcast focuses on these side effects and what we can do about them. Ariel spoke with Dr. Nick Obradovich, a research scientist at the MIT Media Lab, and Dr. Fran Moore, an assistant professor in the Department of Environmental Science and Policy at the University of California, Davis. They study the social and economic impacts of climate change.

  • AIAP: An Overview of Technical AI Alignment with Rohin Shah (Part 2)

    25/04/2019 Duration: 01h06min

    The space of AI alignment research is highly dynamic, and it's often difficult to get a bird's-eye view of the landscape. This podcast is the second of two parts attempting to partially remedy this by providing an overview of technical AI alignment efforts. In particular, this episode seeks to continue the discussion from Part 1 by going into more depth on the specific approaches to AI alignment. In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th-year PhD student at UC Berkeley with the Center for Human-Compatible AI, working with Anca Dragan, Pieter Abbeel, and Stuart Russell. Every week, he collects and summarizes recent progress relevant to AI alignment in the Alignment Newsletter. Topics discussed in this episode include: embedded agency; the field of "getting AI systems to do what we want"; ambitious value learning; corrigibility, including iterated amplification, debate, and factored cognition; AI boxing and impact measures; robustness through verification, adversarial ML, and …

  • AIAP: An Overview of Technical AI Alignment with Rohin Shah (Part 1)

    11/04/2019 Duration: 01h16min

    The space of AI alignment research is highly dynamic, and it's often difficult to get a bird's-eye view of the landscape. This podcast is the first of two parts attempting to partially remedy this by providing an overview of the organizations participating in technical AI research, their specific research directions, and how these approaches all come together to make up the state of technical AI alignment efforts. In this first part, Rohin moves sequentially through the technical research organizations in this space and carves through the field by its varying research philosophies. We also dive into the specifics of many different approaches to AI safety, explore where they disagree, discuss what properties varying approaches attempt to develop/preserve, and hear Rohin's take on these different approaches. You can take a short (3 minute) survey to share your feedback about the podcast here: https://www.surveymonkey.com/r/YWHDFV7 In this podcast, Lucas spoke with Rohin Shah. Rohin is a 5th-year PhD student at UC Berkeley with the Center for Human-Compatible AI.

  • Why Ban Lethal Autonomous Weapons

    03/04/2019 Duration: 49min

    Why are we so concerned about lethal autonomous weapons? Ariel spoke to four experts (one physician, one lawyer, and two human rights specialists), all of whom offered their most powerful arguments on why the world needs to ensure that algorithms are never allowed to make the decision to take a life. The episode was even recorded at the United Nations Convention on Certain Conventional Weapons, where a ban on lethal autonomous weapons was under discussion. We've compiled their arguments, along with many of our own, and now we want to turn the discussion over to you. We’ve set up a comments section on the FLI podcast page (www.futureoflife.org/whyban), and we want to know: which argument(s) do you find most compelling? Why?

  • AIAP: AI Alignment through Debate with Geoffrey Irving

    07/03/2019 Duration: 01h10min

    See full article here: https://futureoflife.org/2019/03/06/ai-alignment-through-debate-with-geoffrey-irving/ "To make AI systems broadly useful for challenging real-world tasks, we need them to learn complex human goals and preferences. One approach to specifying complex goals asks humans to judge during training which agent behaviors are safe and useful, but this approach can fail if the task is too complicated for a human to directly judge. To help address this concern, we propose training agents via self play on a zero sum debate game. Given a question or proposed action, two agents take turns making short statements up to a limit, then a human judges which of the agents gave the most true, useful information... In practice, whether debate works involves empirical questions about humans and the tasks we want AIs to perform, plus theoretical questions about the meaning of AI alignment." (AI safety via debate: https://arxiv.org/pdf/1805.00899.pdf) Debate is something that we are all familiar with. Usually …
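
    To make the game structure concrete, here is a minimal sketch of the debate protocol as the abstract describes it (the class layout, names, and toy judge interface are illustrative assumptions for this write-up, not Irving et al.'s implementation):

      from dataclasses import dataclass, field
      from typing import Callable, List

      Agent = Callable[[List[str]], str]        # transcript so far -> next statement
      Judge = Callable[[str, List[str]], int]   # (question, transcript) -> 0 if the
                                                # first agent won, 1 otherwise

      @dataclass
      class DebateGame:
          question: str
          agent_a: Agent          # in self-play training, agent_a and agent_b
          agent_b: Agent          # are copies of the same model playing both sides
          judge: Judge            # stand-in for the human judge
          max_turns: int = 6      # "short statements up to a limit"
          transcript: List[str] = field(default_factory=list)

          def run(self) -> int:
              # Agents alternate short statements until the turn limit is reached.
              for turn in range(self.max_turns):
                  speaker = self.agent_a if turn % 2 == 0 else self.agent_b
                  self.transcript.append(speaker(self.transcript))
              # The judge picks whichever side gave the most true, useful
              # information; the game is zero-sum, so a win for one side is a
              # loss for the other.
              return self.judge(self.question, self.transcript)

    The zero-sum structure is the point of the proposal: each agent is incentivized to expose flaws in the other's claims, which is what lets a limited judge supervise debaters more capable than themselves.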

  • Part 2: Anthrax, Agent Orange, and Yellow Rain With Matthew Meselson and Max Tegmark

    28/02/2019 Duration: 51min

    In this special two-part podcast, Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Part Two focuses on three major incidents in the history of biological weapons: the 1979 anthrax outbreak in Russia, the use of Agent Orange and other herbicides in Vietnam, and the Yellow Rain controversy of the early 1980s. Dr. Meselson led the investigations into all three and solved some perplexing scientific mysteries along the way.

  • Part 1: From DNA to Banning Biological Weapons With Matthew Meselson and Max Tegmark

    28/02/2019 Duration: 56min

    In this special two-part podcast, Ariel Conn is joined by Max Tegmark for a conversation with Dr. Matthew Meselson, biologist and Thomas Dudley Cabot Professor of the Natural Sciences at Harvard University. Dr. Meselson began his career with an experiment that helped prove Watson and Crick’s hypothesis on the structure and replication of DNA. He then got involved in disarmament, working with the US government to halt the use of Agent Orange in Vietnam and developing the Biological Weapons Convention. From the cellular level to that of international policy, Dr. Meselson has made significant contributions not only to the field of biology, but also to the mitigation of existential threats. In Part One, Dr. Meselson describes how he designed the experiment that helped prove Watson and Crick’s hypothesis, and he explains why this type of research is uniquely valuable to the scientific community. He also recounts his introduction to biological weapons, his reasons for opposing them, and the efforts he undertook …

  • AIAP: Human Cognition and the Nature of Intelligence with Joshua Greene

    21/02/2019 Duration: 37min

    See the full article here: https://futureoflife.org/2019/02/21/human-cognition-and-the-nature-of-intelligence-with-joshua-greene/ "How do we combine concepts to form thoughts? How can the same thought be represented in terms of words versus things that you can see or hear in your mind's eyes and ears? How does your brain distinguish what it's thinking about from what it actually believes? If I tell you a made-up story (yesterday I played basketball with LeBron James), maybe you'd believe me, and then I say, oh, I was just kidding, it didn't really happen. You still have the idea in your head, but in one case you're representing it as something true, in another case you're representing it as something false, or maybe you're representing it as something that might be true and you're not sure. For most animals, the ideas that get into their heads come in through perception, and the default is just that they are beliefs. But humans have the ability to entertain all kinds of ideas without believing them. You can believe …"

  • The Byzantine Generals' Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi

    07/02/2019 Duration: 50min

    Three generals are voting on whether to attack or retreat from their siege of a castle. Two of the generals are loyal, and one is corrupt. What happens when the corrupt general sends different answers to the other two? A Byzantine fault is "a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the 'Byzantine Generals' Problem', developed to describe this condition, where actors must agree on a concerted strategy to avoid catastrophic system failure, but some of the actors are unreliable." The Byzantine Generals' Problem and the associated issues in maintaining reliable distributed computing networks are illuminating for both AI alignment and the modern networks we interact with, like YouTube, Facebook, or Google. By exploring this space, we are shown the limits of reliable distributed computing, and the safety concerns and threats in this space …
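
    The three-generals story can be made concrete in a few lines of Python (a toy illustration with hypothetical names, not a real Byzantine-fault-tolerant protocol):

      # One corrupt general (C) "equivocates", sending a different vote to each
      # loyal general (A and B).
      def corrupt_general(recipient: str) -> str:
          return "attack" if recipient == "A" else "retreat"

      # Round 1: each loyal general votes "attack" and hears C's vote directly.
      heard_by_a = {"A": "attack", "C": corrupt_general("A")}
      heard_by_b = {"B": "attack", "C": corrupt_general("B")}

      # Round 2: A and B relay to each other what C told them.
      heard_by_a["C via B"] = heard_by_b["C"]   # "retreat"
      heard_by_b["C via A"] = heard_by_a["C"]   # "attack"

      # Each loyal general now sees C claiming both "attack" and "retreat", but
      # cannot tell whether C lied or the relayer did. With n = 3 generals and
      # f = 1 traitor, agreement is provably impossible: Lamport, Shostak, and
      # Pease showed that tolerating f Byzantine faults requires n >= 3f + 1.
      print("A's view:", heard_by_a)
      print("B's view:", heard_by_b)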
