Content provided by Craig S. Smith. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Craig S. Smith or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.

#248 Pedro Domingos: How Connectionism Is Reshaping the Future of Machine Learning

Duration: 59:56
 

This episode is sponsored by Indeed.

Stop struggling to get your job posting seen on other job sites. Indeed's Sponsored Jobs help you stand out and hire fast. With Sponsored Jobs, your post jumps to the top of the page for relevant candidates, so you can reach the people you want faster.

Get a $75 Sponsored Job Credit to boost your job’s visibility! Claim your offer now: https://www.indeed.com/EYEONAI

In this episode, renowned AI researcher Pedro Domingos, author of The Master Algorithm, takes us deep into the world of Connectionism—the AI tribe behind neural networks and the deep learning revolution.

From the birth of neural networks in the 1940s to the explosive rise of transformers and ChatGPT, Pedro unpacks the history, breakthroughs, and limitations of connectionist AI. Along the way, he explores how supervised learning continues to quietly power today’s most impressive AI systems—and why reinforcement learning and unsupervised learning are still lagging behind.

We also dive into:

  • The tribal war between Connectionists and Symbolists

  • The surprising origins of Backpropagation

  • How transformers redefined machine translation

  • Why GANs and generative models exploded (and then faded)

  • The myth of modern reinforcement learning (DeepSeek, RLHF, etc.)

  • The danger of AI research narrowing too soon around one dominant approach

Whether you're an AI enthusiast, a machine learning practitioner, or just curious about where intelligence is headed, this episode offers a rare deep dive into the ideological foundations of AI—and what’s coming next.

Don’t forget to subscribe for more episodes on AI, data, and the future of tech.

Stay Updated:

Craig Smith on X: https://x.com/craigss

Eye on A.I. on X: https://x.com/EyeOn_AI

(00:00) What Are Generative Models?

(03:02) AI Progress and the Local Optimum Trap

(06:30) The Five Tribes of AI and Why They Matter

(09:07) The Rise of Connectionism

(11:14) Rosenblatt’s Perceptron and the First AI Hype Cycle

(13:35) Backpropagation: The Algorithm That Changed Everything

(19:39) How Backpropagation Actually Works

(21:22) AlexNet and the Deep Learning Boom

(23:22) Why the Vision Community Resisted Neural Nets

(25:39) The Expansion of Deep Learning

(28:48) NetTalk and the Baby Steps of Neural Speech

(31:24) How Transformers (and Attention) Transformed AI

(34:36) Why Attention Solved the Bottleneck in Translation

(35:24) The Untold Story of Transformer Invention

(38:35) LSTMs vs. Attention: Solving the Vanishing Gradient Problem

(42:29) GANs: The Evolutionary Arms Race in AI

(48:53) Reinforcement Learning Explained

(52:46) Why RL Is Mostly Just Supervised Learning in Disguise

(54:35) Where AI Research Should Go Next
