
17 - Training for Very High Reliability with Daniel Ziegler

1:00:59
 


Sometimes, people talk about making AI systems safe by taking examples where they fail and training them to do well on those. But how can we actually do this well, especially when we can't use a computer program to say what a 'failure' is? In this episode, I speak with Daniel Ziegler about his research group's efforts to try doing this with present-day language models, and what they learned.
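
For a concrete picture of the setup before listening, here is a minimal sketch of the kind of adversarial training loop the episode discusses: train a classifier, have adversaries hunt for inputs it gets wrong, fold the human-labelled failures back into the training data, and repeat. The toy bag-of-words model and the find_failures callback below are illustrative assumptions, not the pipeline from the paper.

```python
# Minimal sketch of an adversarial training loop for a text classifier.
# The model choice and helper names are assumptions for illustration only.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

# Fixed-size hashing vectorizer, so no vocabulary refit is needed as new
# failure examples arrive.
vectorizer = HashingVectorizer(n_features=2**12)

def train(data):
    """Ordinary supervised training on a list of (text, label) pairs."""
    texts, labels = zip(*data)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.transform(texts), list(labels))
    return clf

def adversarial_training(seed_data, find_failures, rounds=5):
    """Alternate between training the classifier and folding in newly
    discovered, human-labelled failure examples."""
    data = list(seed_data)
    clf = train(data)
    for _ in range(rounds):
        # find_failures is assumed to run the adversarial search (human or
        # tool-assisted) against the current classifier and return correctly
        # labelled (text, label) pairs that the classifier got wrong.
        failures = find_failures(clf, vectorizer)
        if not failures:
            break  # no new failures found this round
        data.extend(failures)
        clf = train(data)
    return clf
```

In the paper itself the classifier is a fine-tuned language model flagging injurious story completions and the adversaries are tool-assisted humans, but the overall loop has this shape.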

Listeners beware: this episode contains a spoiler for the Animorphs franchise around minute 41 (in the 'Fanfiction' section of the transcript).

Topics we discuss, and timestamps:

- 00:00:40 - Summary of the paper

- 00:02:23 - Alignment as scalable oversight and catastrophe minimization

- 00:08:06 - Novel contributions

- 00:14:20 - Evaluating adversarial robustness

- 00:20:26 - Adversary construction

- 00:35:14 - The task

- 00:38:23 - Fanfiction

- 00:42:15 - Estimators to reduce labelling burden

- 00:45:39 - Future work

- 00:50:12 - About Redwood Research

The transcript: axrp.net/episode/2022/08/21/episode-17-training-for-very-high-reliability-daniel-ziegler.html

Daniel Ziegler on Google Scholar: scholar.google.com/citations?user=YzfbfDgAAAAJ

Research we discuss:

- Daniel's paper, Adversarial Training for High-Stakes Reliability: arxiv.org/abs/2205.01663

- Low-stakes alignment: alignmentforum.org/posts/TPan9sQFuPP6jgEJo/low-stakes-alignment

- Red Teaming Language Models with Language Models: arxiv.org/abs/2202.03286

- Uncertainty Estimation for Language Reward Models: arxiv.org/abs/2203.07472

- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
