“Training AGI in Secret would be Unsafe and Unethical” by Daniel Kokotajlo

Subtitle: Bad for loss-of-control risks, bad for concentration-of-power risks
I’ve had this sitting in my drafts for the last year. I wish I’d been able to release it sooner, but on the bright side, it’ll make a lot more sense to people who have already read AI 2027.
  1. There's a good chance that AGI will be trained before this decade is out.
    1. By AGI I mean “An AI system at least as good as the best human X’ers, for all cognitive tasks/skills/jobs X.”
    2. Many people seem to be dismissing this hypothesis ‘on priors’ because it sounds crazy. But actually, a reasonable prior should conclude that this is plausible.[1]
    3. For more on what this means, what it might look like, and why it's plausible, see AI 2027, especially the Research section.
  2. If so, by default the existence of AGI will be a closely guarded [...]
The original text contained 8 footnotes, which were omitted from this narration.
---
First published: April 18th, 2025
Source: https://www.lesswrong.com/posts/FGqfdJmB8MSH5LKGc/training-agi-in-secret-would-be-unsafe-and-unethical-1
---
Narrated by TYPE III AUDIO.