Episode 7: The Alignment Problem

22:40
 

There’s a question keeping the scientists up at night.

Are we aligned?

You’ve most certainly heard of alignment before. Maybe from an auto mechanic talking about your tires. Maybe you heard your chiropractor mutter something about aligning your spine before cracking your neck. Or maybe you’ve got some core childhood memories of your mother, eyebrows raised, asking “are we aligned?” at the end of a stern talking-to.

Well, the ‘alignment problem’, as it’s known in scientific circles, probably resembles that last context of stern parenting best, but with a dash of auto mechanic and an extra helping of profound existential dread.

The short of it is this: if we develop a super-powered artificial general intelligence (AGI) that is not aligned with humanity’s values, wants, and needs, we stand to risk total destruction of the human species. The long and dry of it is this proper definition: “alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.”

The alignment problem is often articulated with a story about paper clips. A super-powered AGI is given the seemingly benign task to ‘manufacture as many paper clips as possible’. Given that simple set of instructions, it would arguably consume all available matter, including human flesh, as a means to achieve its one end goal: to ‘manufacture as many paper clips as possible.’ We should have known it would be Clippy to bring about humanity’s doom in the end. It was always Clippy. The alignment problem was always there as a warning every time we tried to resize an image in Microsoft Word.
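
To make the logic of the thought experiment concrete, here is a minimal toy sketch (my illustration, not anything from the episode; every name and number in it is invented) of an objective that scores nothing but clip count. Because the score has no term for anything else, a naive optimizer spares nothing:

```python
# Toy illustration of a misspecified objective: the score counts paper clips
# and nothing else, so every resource is worth more to the optimizer as clips.
# All names and numbers are invented for illustration.

world = {
    "steel stockpile": 1_000_000,  # clips this resource could be turned into
    "factories": 250_000,
    "forests": 400_000,
    "people": 300_000,
}

def clip_score(clips_made: int) -> int:
    """The entire objective: more clips means a higher score. Nothing else counts."""
    return clips_made

def maximize_clips(resources: dict) -> int:
    """Greedy optimizer: convert anything that raises the score.

    There is no term in clip_score() for what gets destroyed along the way,
    so nothing is spared.
    """
    clips = 0
    for resource, yield_in_clips in resources.items():
        clips += yield_in_clips  # the objective never says "except this one"
    return clips

print(clip_score(maximize_clips(world)))  # 1950000 clips, and an empty world
```

The point isn’t the code; it’s that an ‘aligned’ objective would have to carry the things we actually care about, not just the clip count.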

Anyways. This is a real problem! It’s one that has quite a lot of the brightest minds in the scientific community darkened by deep, urgent concern. That concern is quite sensible given the daily yield of new headlines from the rapid acceleration of AI technology: a march of progress propelled by developers whose profit motivations match, perhaps exceed, researchers’ concerns. One technology spanning two communities at the spearhead of human development. One moves at the speed of business growth, the other at the speed of scientific certainty, which leads me to what I believe is the true core of this issue:

Alignment is a technology problem second and a culture problem first.

How can we build AI to be aligned with humanity when humanity can’t even align with itself?

