
Let’s face it: in the long run, there’s either going to be safe AI or no AI. There is no future that contains both powerful, unsafe AI and human beings. In this episode of For Humanity, John Sherman speaks with Professor Stuart Russell — one of the world’s foremost AI pioneers and co-author of Artificial Intelligence: A Modern Approach — about the terrifying honesty of today’s AI leaders.

Russell reveals that the CEO of a major AI company told him his best hope for a good future is a “Chernobyl-scale AI disaster.” Yes — one of the people building advanced AI believes only a catastrophic warning shot could wake up the world in time. John and Stuart dive deep into the psychology, politics, and incentives driving this suicidal race toward AGI.

They discuss:

* Why even AI insiders are losing faith in control
* What a “Chernobyl moment” could actually look like
* Why regulation isn’t anti-innovation — it’s survival
* The myth that America is “allergic” to AI rules
* How liability, accountability, and provable safety could still save us
* Whether we can ever truly coexist with a superintelligence

This is one of the most urgent conversations ever hosted on For Humanity. If you care about your kids’ future — or humanity’s — don’t miss this one.

🎙️ About For Humanity: A podcast from the AI Risk Network, hosted by John Sherman, making AI extinction risk a kitchen-table conversation on every street.

📺 Subscribe for weekly conversations with leading scientists, policymakers, and ethicists confronting the AI extinction threat.

#AIRisk #ForHumanity #StuartRussell #AIEthics #AIExtinction #AIGovernance #ArtificialIntelligence #AIDisaster #GuardRailNow


This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com