What are the best strategies for addressing extreme risks from artificial superintelligence? In this 4-hour conversation, decision theorist Eliezer Yudkowsky and computer scientist Mark Miller explore the cruxes of their disagreement.
They examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world.
Moderated by Christine Peterson, the discussion searches for the least risky strategy for reaching a preferred future amid the dangers of superintelligent AI. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves.
The conversation spans AI collaboration, secure operating frameworks, cryptographic separation, and lessons from nuclear non-proliferation. Despite their differences, both aim for a future where AI benefits humanity without posing existential threats.