13 - First Principles of AGI Safety with Richard Ngo

Duration: 1:33:53
How should we think about artificial general intelligence (AGI), and the risks it might pose? What constraints exist on technical solutions to the problem of aligning superhuman AI systems with human intentions? In this episode, I talk to Richard Ngo about his report analyzing AGI safety from first principles, and recent conversations he had with Eliezer Yudkowsky about the difficulty of AI alignment.

Topics we discuss, and timestamps:

- 00:00:40 - The nature of intelligence and AGI

- 00:01:18 - The nature of intelligence

- 00:06:09 - AGI: what and how

- 00:13:30 - Single vs collective AI minds

- 00:18:57 - AGI in practice

- 00:18:57 - Impact

- 00:20:49 - Timing

- 00:25:38 - Creation

- 00:28:45 - Risks and benefits

- 00:35:54 - Making AGI safe

- 00:35:54 - Robustness of the agency abstraction

- 00:43:15 - Pivotal acts

- 00:50:05 - AGI safety concepts

- 00:50:05 - Alignment

- 00:56:14 - Transparency

- 00:59:25 - Cooperation

- 01:01:40 - Optima and selection processes

- 01:13:33 - The AI alignment research community

- 01:13:33 - Updates from the Yudkowsky conversation

- 01:17:18 - Corrections to the community

- 01:23:57 - Why others don't join

- 01:26:38 - Richard Ngo as a researcher

- 01:28:26 - The world approaching AGI

- 01:30:41 - Following Richard's work

The transcript: axrp.net/episode/2022/03/31/episode-13-first-principles-agi-safety-richard-ngo.html

Richard on the Alignment Forum: alignmentforum.org/users/ricraz

Richard on Twitter: twitter.com/RichardMCNgo

The AGI Safety Fundamentals course: eacambridge.org/agi-safety-fundamentals

Materials that we mention:

- AGI Safety from First Principles: alignmentforum.org/s/mzgtmmTKKn5MuCzFJ

- Conversations with Eliezer Yudkowsky: alignmentforum.org/s/n945eovrA3oDueqtq

- The Bitter Lesson: incompleteideas.net/IncIdeas/BitterLesson.html

- Metaphors We Live By: en.wikipedia.org/wiki/Metaphors_We_Live_By

- The Enigma of Reason: hup.harvard.edu/catalog.php?isbn=9780674237827

- Draft report on AI timelines, by Ajeya Cotra: alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines

- More is Different for AI: bounded-regret.ghost.io/more-is-different-for-ai/

- The Windfall Clause: fhi.ox.ac.uk/windfallclause

- Cooperative Inverse Reinforcement Learning: arxiv.org/abs/1606.03137

- Imitative Generalisation: alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1

- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit

- Draft report on existential risk from power-seeking AI, by Joseph Carlsmith: alignmentforum.org/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai

- The Most Important Century: cold-takes.com/most-important-century
