Sanyam Bhutani: LLM Experimentation, Podcasting Insights, and AI Innovations - AI Portfolio Podcast

1:21:46
 
Sanyam Bhutani is a leading figure in the data science community: a Sr. Data Scientist at H2O.ai, with a previous tenure at Weights & Biases, and an International Fellow at fast.ai. As a Kaggle Grandmaster, his contributions to the field are widely recognized and highly respected.
Sanyam delves into the nuances of fine-tuning and optimizing Large Language Models (LLMs). He provides a detailed exploration of the current state and future potential of LLMs, breaking down their architecture and functionality in a way that's accessible to both newcomers and seasoned data scientists. Sanyam discusses the importance of fine-tuning in enhancing the performance and applicability of LLMs, providing practical insights and strategies for effective implementation.
πŸ“² Radek Osmulski Socials:
LinkedIn: https://www.linkedin.com/in/sanyambhutani/
Twitter: https://x.com/bhutanisanyam1?lang=en
πŸ“² Mark Moyou, PhD Socials:
LinkedIn: https://www.linkedin.com/in/markmoyou/
Twitter: https://twitter.com/MarkMoyou
πŸ“— Chapters
00:00 Intro
02:46 200 days of LLMs
06:16 Venture Capital
08:40 Setting Goals in Public
09:45 Fine-tuning Experiment
14:02 Kaggle Grandmasters Team
15:55 Doing Challenges & Reading Research Papers
17:47 Hardest topic to learn in AI
19:05 Are you afraid to ask stupid questions?
20:43 Learning how LLMs work
22:54 Academic vs Product First Mindset
27:51 Training or Inference on LLMs
29:15 Favorite LLM Agent
32:10 How to go about learning LLMs?
36:55 Open Source LLMs on Research Papers
37:41 Capability of Modern GPUs
45:48 Journey to H2O.ai
50:07 Why Sanyam Stopped Podcasting
56:25 Podcasting Experience
58:39 Top Data Scientists
01:00:19 Advice for New Podcasters
01:03:32 Breaking into Data Science
01:12:23 Career Optimization Function
01:14:02 Making Progress Every Day
01:15:05 Advice for New Professionals
01:17:00 Book Recommendations
01:18:04 Rapid Round
