80,000 Hours Podcast With Rob Wiblin

AGI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani

Information:

Synopsis

How AI interacts with nuclear deterrence may be the single most important question in geopolitics, one that may define the stakes of today's AI race. Nuclear deterrence rests on a state's capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability: for example, by locating and destroying all of an adversary's nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, whichever country got those capabilities first could wield unprecedented coercive power.

Today's guests, Nikita Lalwani and Sam Winter-Levy of the Carnegie Endowment for International Peace, assess how advances in AI might threaten nuclear deterrence:

Would AI be able to locate nuclear submarines hiding in a vast, opaque ocean?
Would road-mobile launchers still be able to hide in tunnels and under netting?
Would missile defence become so accurate that the United States