Episode 43: Tales from 400+ LLM Deployments: Building Reliable AI Agents in Production
Hugo speaks with Alex Strick van Linschoten, Machine Learning Engineer at ZenML and creator of a comprehensive LLMOps database documenting over 400 deployments. Alex's extensive research into real-world LLM implementations gives him unique insight into what actually works, and what doesn't, when deploying AI agents in production.
In this episode, we dive into:
- The current state of AI agents in production, from successes to common failure modes
- Practical lessons learned from analyzing hundreds of real-world LLM deployments
- How companies like Anthropic, Klarna, and Dropbox are using patterns like ReAct, RAG, and microservices to build reliable systems
- The evolution of LLM capabilities, from expanding context windows to multimodal applications
- Why most companies still prefer structured workflows over fully autonomous agents
We also explore real-world case studies of production hurdles, including cascading failures, API misfires, and hallucination challenges. Alex shares concrete strategies for integrating LLMs into your pipelines while maintaining reliability and control.
Whether you're scaling agents or building LLM-powered systems, this episode offers practical insights for navigating the complex landscape of LLMOps in 2025.
LINKS