Synopsis
A show about the world's most pressing problems and how you can use your career to solve them. Subscribe by searching for '80,000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Director of Research at 80,000 Hours.
Episodes
-
Will MacAskill – how we survive the 'intelligence explosion', AI character, and the case for Viatopia
22/04/2026 Duration: 03h09min
Hundreds of millions already turn to AI on the most personal of topics — therapy, political opinions, and how to treat others. And as AI takes over more of the economy, the character of these systems will shape culture on an even grander scale, ultimately becoming “the personality of most of the world’s workforce.”
So… should they be designed to push us towards the better angels of our nature? Or simply do as we ask? Will MacAskill, philosopher and senior research fellow at Forethought, has been thinking through that and the other thorniest issues that come up in designing an AI personality.
He’s also been exploring how we might coexist peacefully with the ‘superintelligent AI’ companies are racing to build. He concludes that we should train such systems to be very risk averse, pay them for their work, and build institutions that enable humans to make credible contracts with AIs themselves.
Will and host Rob Wiblin also discuss what a good world after superintelligence would actually look like — a subject that h
-
Risks from power-seeking AI systems (article narration by Zershaaneh Qureshi)
16/04/2026 Duration: 01h29min
Hundreds of prominent AI scientists and other notable figures signed a statement in 2023 saying that mitigating the risk of extinction from AI should be a global priority. At 80,000 Hours, we’ve considered risks from AI to be the world’s most pressing problem since 2016. But what led us to this conclusion? Could AI really cause human extinction? We’re not certain, but we think the risk is worth taking very seriously. In particular, as companies create increasingly powerful AI systems, there’s a concerning chance that:
These AI systems may develop dangerous long-term goals we don’t want.
To pursue these goals, they may seek power and undermine the safeguards meant to contain them.
They may even aim to disempower humanity and potentially cause our extinction.
This article is written by Cody Fenwick and Zershaaneh Qureshi, and narrated by Zershaaneh Qureshi. It discusses why future AI systems could disempower humanity, what current AI research reveals about behaviours like power-seeking and deception, and how you ca
-
How scary is Claude Mythos? 303 pages in 21 minutes
10/04/2026 Duration: 21min
With Claude Mythos we have an AI that knows when it's being tested, can obscure its reasoning when it wants, and is better at breaking into (and out of) computers than any human alive. Rob Wiblin works through its 244-page System Card and 59-page Alignment Risk Update to explain why:
Mythos is a nightmare for computer security
It has arrived far ahead of schedule
It might be great news for alignment and safety
But 3 key problems mean we can’t take its alignment results at face value
Mythos isn’t building its replacement yet, probably
Anthropic staff are, for the first time, kinda scared of Claude
He's losing sleep
Learn more & full transcript: https://80k.info/mythos
This episode was recorded on April 9, 2026.
Chapters:
Why people are panicking about computer security (01:05)
Mythos could break out of containment (04:23)
Anthropic is losing billions in revenue by not releasing Mythos (06:21)
Mythos is actually the most aligned model to date, except… (07:48)
Mythos knows when it’s being tested (09:52)
Mythos can hide its
-
Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health
07/04/2026 Duration: 04h06min
What does it really take to lift millions out of poverty and prevent needless deaths?
In this special compilation episode, 17 past guests — including economists, nonprofit founders, and policy advisors — share their most powerful and actionable insights from the front lines of global health and development. You’ll hear about the critical need to boost agricultural productivity in sub-Saharan Africa, the staggering impact of lead poisoning on children in low-income countries, and the social forces that contribute to high neonatal mortality rates in India.
What’s so striking is how some of the most effective interventions sound almost too simple to work: banning certain pesticides, replacing thatch roofs, or identifying village “influencers” to spread health information.
Full transcript and links to learn more: https://80k.info/ghd
Chapters:
Cold open (00:00:00)
Luisa’s intro (00:00:58)
Development consultant Karen Levy on why pushing for “sustainable” programmes isn’t as good as it sounds (00:02:15)
Economist Dean Spe
-
Is there a case against Anthropic? And: The Meta leaks are worse than you think.
03/04/2026 Duration: 20min
When the Pentagon tried to strong-arm Anthropic into dropping its ban on AI-only kill decisions and mass domestic surveillance, the company refused. Its critics went on the attack: Anthropic and its supporters are some combination of 'hypocritical', 'naive', and 'anti-democratic'. Rob Wiblin dissects each claim, finding that all three are mediocre arguments dressed up as hard truths. (Though the 'naive' one is at least interesting.)
Watch on YouTube: What Everyone is Missing about Anthropic vs The Pentagon
Plus, from 13:43: Leaked documents from Meta revealed that 10% of the company's total revenue — around $16 billion a year — came from ads for scams and goods Meta had itself banned. These likely enabled the theft of around $50 billion a year from Americans alone. But when an internal anti-fraud team developed a screening method that halved the rate of scams coming from China... well, it wasn't well received.
Watch on YouTube: The Meta Leaks Are Worse Than You Think
Chapters:
Introduction (00:00:00)
What Ev
-
Could a biologist armed with AI kill a billion people? | Dr Richard Moulange
31/03/2026 Duration: 03h07min
Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria). They then built them in a lab. Many were viable. And despite being entirely novel, some even outperformed existing viruses from that family.
That alone is remarkable. But as today's guest — Dr Richard Moulange, one of the world's top experts on 'AI–Biosecurity' — explains, it's just one of many data points showing how AI is dissolving the barriers that have historically kept biological weapons out of reach.
For years, experts have reassured us that 'tacit knowledge' — the hands-on, hard-to-Google lab skills needed to work with dangerous pathogens — would prevent bad actors from weaponising biology. So far, they've been right.
But as of 2025, that reassurance is crumbling. The Virology Capabilities Test measures exactly this kind of troubleshooting expertise, and finds that modern AI models crush top human virologists even in their self-declared area of greatest specialisation and expertis
-
A Ukraine ceasefire could accidentally set Europe up for a bigger war | RAND's top Russia expert Samuel Charap
24/03/2026 Duration: 01h12min
Many people believe a ceasefire in Ukraine will leave Europe safer. But today's guest lays out how a deal could generate insidious new risks — leaving us in a situation that's equally dangerous, just in different ways.
That’s the counterintuitive argument from Samuel Charap, Distinguished Chair in Russia and Eurasia Policy at RAND. He’s not worried about a Russian blitzkrieg on Estonia. Instead he forecasts a fragile peace that breaks down and drags in European neighbours; instability in Belarus prompting Russian intervention; hybrid sabotage operations that escalate through tit-for-tat responses.
Samuel’s case isn’t that peace is bad, but that the Ukraine conflict has remilitarised Europe, made Russia more resentful, and collapsed diplomatic relations between the two. That’s a postwar environment primed for the kind of miscalculation that starts unintended wars.
What he prescribes isn’t a full peace treaty; it’s a negotiated settlement that stops the killing and begins a longer negotiation that give
-
Why automating human labour will break our political system | Rose Hadshar, Forethought
17/03/2026 Duration: 02h14min
The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all.
That’s the view of Rose Hadshar, researcher at Forethought, who believes we could see extreme, AI-enabled power concentration without a coup or dramatic ‘end of democracy’ moment.
She foresees something more insidious: an elite group with access to such powerful AI capabilities that the normal mechanisms for checking elite power — law, elections, public pressure, the threat of strikes — cease to have much effect. Those mechanisms could continue to exist on paper, but become ineffectual in a world where humans are no longer needed to execute even the largest-scale projects.
Almost nobody wants this to happen — but we may find ourselves unable to prevent it.
If AI disrupts our ability to make sense of things, will we even notice power getting severely concentrated, or be able to resist it? Once AI can substitute for human labour across the economy, what leverage w
-
AGI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani
10/03/2026 Duration: 01h11min
How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability — for example, by locating and destroying all of an adversary’s nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, whichever country got those capabilities first could wield unprecedented coercive power.
Today’s guests — Nikita Lalwani and Sam Winter-Levy of the Carnegie Endowment for International Peace — assess how advances in AI might threaten nuclear deterrence:
Would AI be able to locate nuclear submarines hiding in a vast, opaque ocean?
Would road-mobile launchers still be able to hide in tunnels and under netting?
Would missile defence become so accurate that the United States
-
Using AI to enhance societal decision making (article by Zershaaneh Qureshi)
06/03/2026 Duration: 31min
The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision-making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead.
This article is narrated by the author, Zershaaneh Qureshi. It explores why AI decision-making tools could be a big deal, who might be a good fit to help shape this new field, and what the downside risks of getting involved might be.
Read the original article on the 80,000 Hours website: https://80000hours.org/problem-profiles/ai-enhanced-decision-making/
Chapters:
Check out our new narrations feed (00:00:00)
Summary (00:01:21)
Section 1: Why advancing AI decision-making tools might matter a lot (00:02:52)
AI tools could hel
-
We're Not Ready for AI Consciousness | Robert Long, philosopher and founder of Eleos AI
03/03/2026 Duration: 03h25min
Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with that?
Robert Long founded Eleos AI to explore questions like these, on the basis that AI may one day be capable of suffering — or already is. In today’s episode, Robert and host Luisa Rodriguez explore the many ways in which AI consciousness may be very different from anything we’re used to.
Things get strange fast: If AI is conscious, where does that consciousness exist? In the base model? A chat session? A single forward pass? If you close the chat, is the AI asleep or dead?
To Robert, these kinds of questions aren’t just philosophical exercises: not being clear on AI’s moral status as it transitions from human-level to superhuman intelligence could be dangerous. If we’re too dismissive, we risk unintentionally exploiting sentient beings. If we’re too sympathetic, we might rush to “liberate” AI systems in ways th
-
Why Teaching AI Right from Wrong Could Get Everyone Killed | Max Harms, MIRI
24/02/2026 Duration: 02h41min
Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, has no views about how the world ought to be, is willingly modifiable, and is completely indifferent to being shut down — a strategy no AI company is working on at all.
In Max’s view, any grander preferences about the world, even ones we agree with, will necessarily become distorted during a recursive self-improvement loop, and be the seeds that grow into a violent takeover attempt once that AI is powerful enough.
It’s a vision that springs from the worldview laid out in If Anyone Builds It, Everyone Dies, the recent book by Eliezer Yudkowsky and Nate Soares, two of Max’s colleagues at the Machine Intelligence Research Institute.
To Max, the book’s core thesis is common sense: if you build something vastly smarter than you, and its goals are misaligned with your own, then its actions will probably result in human extinc
-
Every AI Company's Safety Plan is 'Use AI to Make AI Safe'. Is That Crazy? | Ajeya Cotra
17/02/2026 Duration: 02h54min
Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan?
Today’s guest, Ajeya Cotra, recently placed 3rd out of 413 participants forecasting AI developments and is among the most thoughtful and respected commentators on where the technology is going.
She thinks there’s a meaningful chance we’ll see as much change in the next 23 years as humanity faced in the last 10,000, thanks to the arrival of artificial general intelligence. Ajeya doesn’t reach this conclusion lightly: she’s had a ringside seat to the growth of all the major AI companies for 10 years — first as a researcher and grantmaker for technical AI safety at Coefficient Giving (formerly known as Open Philanthropy), and now as a member of technical staff at METR.
So host Rob Wiblin asked her: is this plan to use AI to save us from AI a reasonable one?
Ajeya agrees that
-
What the hell happened with AGI timelines in 2025?
10/02/2026 Duration: 25min
In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of 2025, sentiment swung all the way back in the other direction, with people's forecasts for when AI might really shake up the world blowing out even further than they had been before reasoning models came along.
What the hell happened? Was it just swings in vibes and mood? Confusion? A series of fundamentally unexpected and unpredictable research results?
Host Rob Wiblin has been trying to make sense of it for himself, and here's the best explanation he's come up with so far.
Links to learn more, video, and full transcript: https://80k.info/tl
Chapters:
Making sense of the timelines madness in 2025 (00:00:00)
The great timelines contraction (00:00:46)
Why timelines went back out again (00:02:10)
Other longstanding reasons AGI could take a good while (00:11:13)
So what's the upshot of all of these updates? (00:14:47)
5 reaso
-
#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety
03/02/2026 Duration: 02h51min
Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that’s seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain.
From an evolutionary perspective, that’s to be expected, right? If your heart or lungs or legs or skin stop working properly while you’re a teenager, you’re less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.
So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get arou
-
Why 'Aligned AI' Would Still Kill Democracy | David Duvenaud, ex-Anthropic team lead
27/01/2026 Duration: 02h31min
Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues that AI able to do all the work a human can do inevitably leads to the “gradual disempowerment” of humanity.
For most of history, ordinary people had almost no control over their governments. Liberal democracy emerged only recently, and probably not coincidentally around the Industrial Revolution.
Today's guest, David Duvenaud, used to lead the 'alignment evals' team at Anthropic, is a professor of computer science at the University of Toronto, and recently co-authored 'Gradual disempowerment.'
Links to learn more, video, and full transcript: https://80k.info/dd
He argues democracy wasn’t the result of moral enlightenment — it was competitive pressure. Nations that educated their citizens and gave them political power built better armies and more productive economies. But what happens when AI can do all the producing — and all the fighting?
“The reason that states have been treating us so well in the West, at least
-
#145 Classic episode – Christopher Brown on why slavery abolition wasn't inevitable
20/01/2026 Duration: 02h56min
In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, ethnicities, beliefs, and abilities equal treatment and rights have had significant success.
It’s tempting to believe this was inevitable — that the arc of history “bends toward justice,” and that as humans get richer, we’ll make even more moral progress.
But today's guest, Christopher Brown — a professor of history at Columbia University and specialist in the abolitionist movement and the British Empire during the 18th and 19th centuries — believes the story of how slavery became unacceptable suggests moral progress is far from inevitable.
Rebroadcast: This episode was originally aired in February 2023.
Links to learn more, video, and full transcript: https://80k.link/CLB
While most of us today feel that the abolition of slavery was sure to happen sooner or later as humans became richer and more educated, Christopher do
-
How to Prevent a Mirror Life Catastrophe | James Smith (Director, Mirror Biology Dialogues Fund)
13/01/2026 Duration: 02h09min
When James Smith first heard about mirror bacteria, he was sceptical. But within two weeks, he’d dropped everything to work on it full time, considering it the worst biothreat he’d seen described. What convinced him?
Mirror bacteria would be constructed entirely from molecules that are the mirror images of their naturally occurring counterparts. This seemingly trivial difference creates a fundamental break in the tree of life. For billions of years, the mechanisms underlying immune systems and keeping natural populations of microorganisms in check have evolved to recognise threats by their molecular shape — like a hand fitting into a matching glove.
Learn more, video, and full transcript: https://80k.info/js26
Mirror bacteria would upend that assumption, creating two enormous problems:
Many critical immune pathways would likely fail to activate, creating risks of fatal infection across many species.
Mirror bacteria could have substantial resistance to natural predators: for example, they would be essentially
-
#144 Classic episode – Athena Aktipis on why cancer is a fundamental universal phenomenon
09/01/2026 Duration: 03h30min
What’s the opposite of cancer? If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer.
But today’s guest Athena Aktipis says that the opposite of cancer is us: it's having a functional multicellular body that’s cooperating effectively in order to make that multicellular body function.
If, like us, you found her answer far more satisfying than the dictionary’s, maybe you could close your dozens of merriam-webster.com tabs and start listening to this podcast instead.
Rebroadcast: this episode was originally released in January 2023.
Links to learn more, video, and full transcript: https://80k.link/AA
As Athena explains in her book The Cheating Cell, what we see with cancer is a breakdown in each of the foundations of cooperation that allowed multicellularity to arise:
Cells will proliferate when they shouldn't.
Cells won't die when they should.
Cells won't engage in the kind of division of labour that they should.
-
#142 Classic episode – John McWhorter on why the optimal number of languages might be one, and other provocative claims about language
06/01/2026 Duration: 01h35min
John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work, he's written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column.
Rebroadcast: this episode was originally released in December 2022.
YouTube video version: https://youtu.be/MEd7TT_nMJE
Links to learn more, video, and full transcript: https://80k.link/JM
We ask him about what we think are the most important things everyone ought to know about linguistics, including:
Can you communicate faster in some languages than others, or is there some constraint that prevents that?
Does learning a second or third language make you smarter or not?
Can a language decay and get worse at communicating what people want to say?
If children aren't taught a language, how many generations does it take them to invent a fu