Content provided by Gus Docker and Future of Life Institute.

David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.

You can learn more about David's work at ARIA here:
https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/

Timestamps:

00:00 What is Safeguarded AI?
16:28 Implementing Safeguarded AI
22:58 Can we trust Safeguarded AIs?
31:00 Formalizing more of the world
37:34 The performance cost of verified AI
47:58 Changing attitudes towards AI
52:39 Flexible Hardware-Enabled Guarantees
01:24:15 Mind uploading
01:36:14 Lessons from David's early life
