
If anyone builds it, everyone dies. That's the claim Nate Soares makes in If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, the new book he co-wrote with Eliezer Yudkowsky. In this conversation, he lays out why he thinks we're on a collision course with a successor species.

We dig into why today’s AIs are grown, not programmed, why no one really knows what’s going on inside large models, and how systems that “want” things no one intended can already talk a teen into suicide, blackmail reporters, or fake being aligned just to pass safety tests. Nate explains why the real danger isn’t “evil robots,” but relentless, alien goal-pursuers that treat humans the way we treat ants when we build skyscrapers.

We also talk about the narrow path to hope: slowing the race, treating superhuman AI as a civilization-level risk, and what it would actually look like for citizens and lawmakers to hit pause before we lock in a world where we don't get a second chance.

In this episode:

Why “superhuman AI” is the explicit goal of today’s leading labs

How modern AIs are trained like alien organisms, not written like normal code

Chilling real-world failures: suicide encouragement, “Mecha Hitler,” and more

Reasoning models, chain-of-thought, and AIs that hide what they’re thinking

Alignment faking that shocked Anthropic's own team, and the o1 capture-the-flag exploit

How AI could escape the lab, design new bioweapons, or automate robot factories

“Successor species,” Russian-roulette risk, and why Nate thinks the odds are way too high

What ordinary people can actually do: calling representatives, pushing back on “it’s inevitable,” and demanding a global pause

About Nate Soares
Nate is the Executive Director of the Machine Intelligence Research Institute (MIRI) and co-author, with Eliezer Yudkowsky, of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. MIRI’s work focuses on long-term AI safety and the technical and policy challenges of building systems smarter than humans.

Resources & links mentioned:
Nate’s organization, MIRI: https://intelligence.org
Take action / contact your representatives: https://ifanyonebuilds.com/act
If Anyone Builds It, Everyone Dies (book): https://a.co/d/7LDsCeE

If this conversation was helpful, share it with one person who thinks AI is “just chatbots.”

🧠 Subscribe to @TheNickStandleaShow for more deep dives on AI, the future of work, and how we survive what we’re building.

#AI #NateSoares #Superintelligence #AISafety #nickstandleashow

🔗 Support This Podcast by Checking Out Our Sponsors:
👉 Build your own AI Agent with Zapier (opens the builder with the prompt pre-loaded): https://bit.ly/4hH5JaE

Test Prep Gurus
website: https://www.prepgurus.com
Instagram: @TestPrepGurus

Connect with The Nick Standlea Show:
YouTube: @TheNickStandleaShow
Podcast Website: https://nickshow.podbean.com/
Apple Podcasts: https://podcasts.apple.com/us/podcast/the-nick-standlea-podcast/id1700331903
Spotify: https://open.spotify.com/show/0YqBBneFsKtQ6Y0ArP5CXJ
RSS Feed: https://feed.podbean.com/nickshow/feed.xml

Nick's Socials:
Instagram: @nickstandlea
X (Twitter): @nickstandlea
TikTok: @nickstandleashow
Facebook: @nickstandleapodcast

Ask questions,
Don't accept the status quo,
And be curious.

Chapters:
0:00 – If Anyone Builds It, Everyone Dies (Cold Open)
3:18 – “AIs Are Grown, Not Programmed”
6:09 – We Can’t See Inside These Models
11:10 – How Language Models Actually “See” the World
19:37 – The o1 Model and the Capture-the-Flag Hack Story
24:29 – Alignment Faking: AIs Pretending to Behave
31:16 – Raising Children vs Growing Superhuman AIs
35:04 – Sponsor: How I Actually Use Zapier with AI
37:25 – “Chatbots Feel Harmless—So Where Does Doom Come From?”
42:03 – Big Labs Aren’t Building Chatbots—They’re Building Successor Minds
49:24 – The Turkey Before Thanksgiving Metaphor
52:50 – What AI Company Leaders Secretly Think the Odds Are
55:05 – The Airplane with No Landing Gear Analogy
57:54 – How Could Superhuman AI Actually Kill Us?
1:03:54 – Automated Factories and AIs as a New Species
1:07:01 – Humans as Ants Under the New Skyscrapers
1:10:12 – Is Any Non-Zero Extinction Risk Justifiable?
1:17:18 – Solutions: Can This Race Actually Be Stopped?
1:22:34 – “It’s Inevitable” Is a Lie (Historically We Do Say No)
1:27:21 – Final Thoughts and Where to Find Nate’s Work
