AI expert Connor Leahy on superintelligence and the threat of human extinction
Many of the brightest minds in artificial intelligence believe models that are smarter than a human in every way will be built within a few years. Whether it turns out to be two years or 10, the changes will be epoch-making. Life will never be the same.
Today’s guest, Connor Leahy, is one of many AI experts who believe that, far from ushering in an era of utopian abundance, superintelligent AI could kill us all. Connor is CEO of the firm Conjecture AI, a prominent advocate for AI safety, and the lead author of the AI Compendium, which lays out how rapidly advancing AI could become an existential threat to humanity.
He discusses the Compendium’s thesis, the question of whether AGI will necessarily form its own goals, the risks of so-called autonomous AI agents, which are increasingly a focus of the major AI labs, the need to align AI with human values, and the merits of forming a global Manhattan Project to achieve that alignment. He also talks about the incentives created by the commercial and geopolitical races to reach AGI, and the need for a grassroots movement of ordinary people raising AI risks with their elected representatives.
Control AI report on briefing UK MPs: https://leticiagarciamartinez.substack.com/p/what-we-learned-from-briefing-70
The AI Compendium is available here: https://www.thecompendium.ai/