Content provided by Google DeepMind and Hannah Fry. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Google DeepMind and Hannah Fry or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.
The Ethics of AI Assistants with Iason Gabriel

43:58
 

Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants — and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.

In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.

Timecodes:

  • 00:00 Intro
  • 01:13 Definition of AI assistants
  • 04:05 A utopic view
  • 06:25 Iason’s background
  • 07:45 The Ethics of Advanced AI Assistants paper
  • 13:06 Anthropomorphism
  • 14:07 Turing perspective
  • 15:25 Anthropomorphism continued
  • 20:02 The value alignment question
  • 24:54 Deception
  • 27:07 Deployed at scale
  • 28:32 Agentic inequality
  • 31:02 Unfair outcomes
  • 34:10 Coordinated systems
  • 37:10 A new paradigm
  • 38:23 Tetradic value alignment
  • 41:10 The future
  • 42:41 Reflections from Hannah

Thanks to everyone who made this possible, including but not limited to:

  • Presenter: Professor Hannah Fry
  • Series Producer: Dan Hardoon
  • Editor: Rami Tzabar, TellTale Studios
  • Commissioner & Producer: Emma Yousif
  • Music composition: Eleni Shaw
  • Camera Director and Video Editor: Daniel Lazard
  • Audio Engineer: Perry Rogantin
  • Video Studio Production: Nicholas Duke
  • Video Editor: Bilal Merhi
  • Video Production Design: James Barton
  • Visual Identity and Design: Eleanor Tomlinson
  • Production support: Mo Dawoud
  • Commissioned by Google DeepMind

Please leave us a review on Spotify or Apple Podcasts if you enjoyed this episode. We always want to hear from our audience, whether that's feedback, a new idea, or a guest recommendation!


38 episodes
