Episode 21: Deploying LLMs in Production: Lessons Learned
Hugo speaks with Hamel Husain, a machine learning engineer who loves building machine learning infrastructure and tools 👷. Hamel leads and contributes to many popular open-source machine learning projects and has extensive experience (20+ years) as a machine learning engineer across various industries, including large tech companies like Airbnb and GitHub. At GitHub, he led CodeSearchNet, a large language model for semantic search that was a precursor to Copilot. Hamel is the founder of Parlance Labs, a research and consulting firm focused on LLMs.
They talk about generative AI, large language models, the business value they can generate, and how to get started.
They delve into:
- Where Hamel is seeing the most business interest in LLMs (spoiler: the answer isn’t only tech);
- Common misconceptions about LLMs;
- The skills you need to work with LLMs and GenAI models;
- Tools and techniques, such as fine-tuning, RAG, LoRA, and hardware (a short, illustrative LoRA sketch follows this list);
- Vendor APIs vs. open-source (OSS) models.
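As a rough illustration of the kind of fine-tuning discussed in the episode, here is a minimal sketch of attaching LoRA adapters to an open-source model with Hugging Face transformers and peft. The base model, target modules, and hyperparameters are illustrative assumptions, not settings prescribed in the episode.

```python
# Minimal LoRA sketch (illustrative only): train a small set of low-rank
# adapter matrices instead of all model weights. Assumes you have access to
# the gated Llama 2 weights on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumed open-source base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```

From here, the adapted model can be trained on an instruction dataset with a standard training loop, along the lines of the instruction-tuning guide linked below.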
LINKS
- Our upcoming livestream, "LLMs, OpenAI Dev Day, and the Existential Crisis for Machine Learning Engineering," with Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs): sign up for free!
- Our recent livestream, "Data and DevOps Tools for Evaluating and Productionizing LLMs," with Hamel and Emil Sedgh, lead AI engineer at Rechat. In it, we showcase an actual industrial use case that Hamel and Emil are working on with Rechat, a real estate CRM, taking you through LLM workflows and tools.
- "Extended Guide: Instruction-tune Llama 2" by Philipp Schmid
- The livestream recording of this episode
- Hamel on Twitter