
World Models vs LLMs

Generative AI 101


Large Language Models might sound smart, but can they predict what happens when a cat sees a cucumber? In this episode, host Emily Laird throws LLMs into the philosophical ring with World Models, AI systems that learn from watching, poking, and pushing stuff around (kind of like toddlers). Meta’s Yann LeCun isn’t impressed by chatbots, and honestly, he might have a point. We break down why real intelligence might need both brains and brawn—or at least a good sense of gravity.
Join the AI Weekly Meetups
Connect with Us: If you enjoyed this episode or have questions, reach out to Emily Laird on LinkedIn. Stay tuned for more insights into the evolving world of generative AI. And remember, you now know more about world models vs LLMs, and that's pretty cool.

Connect with Emily Laird on LinkedIn
