Content provided by the Australian Strategic Policy Institute (ASPI).

Many of the brightest minds in artificial intelligence believe that models smarter than humans in every way will be built within a few years. Whether it turns out to be two years or ten, the changes will be epoch-making. Life will never be the same.

Today’s guest, Connor Leahy, is one of many AI experts who believe that, far from ushering in an era of utopian abundance, superintelligent AI could kill us all. Connor is CEO of the firm Conjecture AI, a prominent advocate for AI safety, and the lead author of the AI Compendium, which lays out how rapidly advancing AI could become an existential threat to humanity.

He discusses the Compendium’s thesis, the question of whether AGI will necessarily form its own goals, the risks of so-called autonomous AI agents (an increasing focus of the major AI labs), the need to align AI with human values, and the merits of forming a global Manhattan Project to achieve this task. He also talks about the incentives created by the commercial and geopolitical races to reach AGI, and the need for a grassroots movement of ordinary people raising AI risks with their elected representatives.

Control AI report on briefing UK MPs: https://leticiagarciamartinez.substack.com/p/what-we-learned-from-briefing-70

The AI Compendium is available here: https://www.thecompendium.ai/
