Episode 21: Deploying LLMs in Production: Lessons Learned

Duration: 1:08:11
 
Hugo speaks with Hamel Husain, a machine learning engineer who loves building machine learning infrastructure and tools 👷. Hamel leads and contributes to many popular open-source machine learning projects. He also has extensive experience (20+ years) as a machine learning engineer across various industries and companies, including Airbnb and GitHub. At GitHub, he led CodeSearchNet, a large language model for semantic search that was a precursor to Copilot. Hamel is the founder of Parlance Labs, a research and consulting firm focused on LLMs.

They talk about generative AI, large language models, the business value they can generate, and how to get started.

They delve into:

  • Where Hamel is seeing the most business interest in LLMs (spoiler: the answer isn’t only tech);
  • Common misconceptions about LLMs;
  • The skills you need to work with LLMs and GenAI models;
  • Tools and techniques, such as fine-tuning, RAG, LoRA, hardware, and more;
  • Vendor APIs vs. OSS models (see the short sketch after this list).
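
To make the last two bullets concrete, here is a minimal, illustrative sketch (not code from the episode) of the two deployment routes they name: calling a hosted vendor API versus running an open-weights model yourself. The prompt, the model names, and the use of the `openai` and Hugging Face `transformers` libraries are assumptions chosen for illustration.

```python
# Illustrative only: the prompt, model names, and libraries are placeholders,
# not recommendations from the episode.

prompt = "List three common failure modes when deploying LLMs in production."

# --- Route 1: hosted vendor API (assumes the `openai` package and OPENAI_API_KEY set) ---
from openai import OpenAI

client = OpenAI()
vendor_reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder vendor model name
    messages=[{"role": "user", "content": prompt}],
)
print(vendor_reply.choices[0].message.content)

# --- Route 2: self-hosted open-weights model (assumes `transformers` and a suitable GPU) ---
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder open-weights model
    device_map="auto",  # place model weights automatically across available devices
)
local_reply = generator(prompt, max_new_tokens=200)
print(local_reply[0]["generated_text"])
```

A common framing of that choice (not a claim about the episode's conclusions): the vendor route is simpler to operate but ties you to a provider's pricing, terms, and model roadmap, while the open-source route gives you control over weights, fine-tuning (e.g. with LoRA), and hardware at the cost of running the serving infrastructure yourself.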

LINKS
