#248 Pedro Domingos: How Connectionism Is Reshaping the Future of Machine Learning
This episode is sponsored by Indeed.
Stop struggling to get your job post seen on other job sites. Indeed's Sponsored Jobs help you stand out and hire fast. With Sponsored Jobs, your post jumps to the top of the page for relevant candidates, so you can reach the people you want faster.
Get a $75 Sponsored Job Credit to boost your job’s visibility! Claim your offer now: https://www.indeed.com/EYEONAI
In this episode, renowned AI researcher Pedro Domingos, author of The Master Algorithm, takes us deep into the world of Connectionism—the AI tribe behind neural networks and the deep learning revolution.
From the birth of neural networks in the 1940s to the explosive rise of transformers and ChatGPT, Pedro unpacks the history, breakthroughs, and limitations of connectionist AI. Along the way, he explores how supervised learning continues to quietly power today’s most impressive AI systems—and why reinforcement learning and unsupervised learning are still lagging behind.
We also dive into:
The tribal war between Connectionists and Symbolists
The surprising origins of Backpropagation
How transformers redefined machine translation
Why GANs and generative models exploded (and then faded)
The myth of modern reinforcement learning (DeepSeek, RLHF, etc.)
The danger of AI research narrowing too soon around one dominant approach
Whether you're an AI enthusiast, a machine learning practitioner, or just curious about where intelligence is headed, this episode offers a rare deep dive into the ideological foundations of AI—and what’s coming next.
Don’t forget to subscribe for more episodes on AI, data, and the future of tech.
Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI
(00:00) What Are Generative Models?
(03:02) AI Progress and the Local Optimum Trap
(06:30) The Five Tribes of AI and Why They Matter
(09:07) The Rise of Connectionism
(11:14) Rosenblatt’s Perceptron and the First AI Hype Cycle
(13:35) Backpropagation: The Algorithm That Changed Everything
(19:39) How Backpropagation Actually Works
(21:22) AlexNet and the Deep Learning Boom
(23:22) Why the Vision Community Resisted Neural Nets
(25:39) The Expansion of Deep Learning
(28:48) NetTalk and the Baby Steps of Neural Speech
(31:24) How Transformers (and Attention) Transformed AI
(34:36) Why Attention Solved the Bottleneck in Translation
(35:24) The Untold Story of Transformer Invention
(38:35) LSTMs vs. Attention: Solving the Vanishing Gradient Problem
(42:29) GANs: The Evolutionary Arms Race in AI
(48:53) Reinforcement Learning Explained
(52:46) Why RL Is Mostly Just Supervised Learning in Disguise
(54:35) Where AI Research Should Go Next