London Futurists

The case for a conditional AI safety treaty, with Otto Barten

Duration: 38:12
 

How can a binding international treaty be agreed and put into practice when many parties are strongly tempted to break its rules for commercial or military advantage, and when cheating may be hard to detect? That’s the dilemma we’ll examine in this episode, concerning possible treaties to govern the development and deployment of advanced AI.

Our guest is Otto Barten, Director of the Existential Risk Observatory, which is based in the Netherlands but operates internationally. In November last year, Time magazine published an article by Otto advocating what his organisation calls a Conditional AI Safety Treaty. In March this year, these ideas were expanded into a 34-page preprint, which we’ll be discussing today: “International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty”.

Before co-founding the Existential Risk Observatory in 2021, Otto had roles as a sustainable energy engineer, data scientist, and entrepreneur. He has a BSc in Theoretical Physics from the University of Groningen and an MSc in Sustainable Energy Technology from Delft University of Technology.

Selected follow-ups:

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Promoguy Talk Pills
Agency in Amsterdam dives into topics like Tech, AI, digital marketing, and more drama...
Listen on: Apple Podcasts, Spotify

Digital Disruption with Geoff Nielson
Discover how technology is reshaping our lives and livelihoods.
Listen on: Apple Podcasts, Spotify


Chapters

1. The case for a conditional AI safety treaty, with Otto Barten (00:00:00)

2. [Ad] Promoguy Talk Pills (00:11:50)

3. (Cont.) The case for a conditional AI safety treaty, with Otto Barten (00:12:24)

4. [Ad] Digital Disruption with Geoff Nielson (00:19:26)

5. (Cont.) The case for a conditional AI safety treaty, with Otto Barten (00:20:07)
