Everything Hard About Building AI Agents Today
Willem Pienaar and Shreya Shankar discuss the challenge of evaluating agents in production where "ground truth" is ambiguous and subjective user feedback isn't enough to improve performance.
The discussion breaks down the three "gulfs" of human-AI interaction—Specification, Generalization, and Comprehension—and their impact on agent success.
Willem and Shreya cover the necessity of moving the human "out of the loop" for feedback, creating faster learning cycles through implicit signals rather than direct, manual review.
The conversation details practical evaluation techniques, including analyzing task failures with heat maps and the trade-offs of using simulated environments for testing.
Willem and Shreya address the reality of a "performance ceiling" for AI and the importance of categorizing problems into those your agent can solve, can learn to solve, or will likely never be able to solve.
// Bio
Shreya Shankar
Shreya Shankar is a PhD student researching data management for machine learning.
Willem Pienaar
Willem Pienaar, CTO of Cleric, is a builder with a focus on LLM agents, MLOps, and open source tooling. He is the creator of Feast, an open source feature store, and contributed to the creation of both the feature store and MLOps categories.
Before starting Cleric, Willem led the open source engineering team at Tecton and established the ML platform team at Gojek, where he built high-scale ML systems for the Southeast Asian decacorn.
// Related Links
https://cleric.ai/
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Shreya on LinkedIn: /shrshnk
Connect with Willem on LinkedIn: /willempienaar
Timestamps:
[00:00] Trust Issues in AI Data
[04:49] Cloud Clarity Meets Retrieval
[09:37] Why Fast AI Is Hard
[11:10] Fixing AI Communication Gaps
[14:53] Smarter Feedback for Prompts
[19:23] Creativity Through Data Exploration
[23:46] Helping Engineers Solve Faster
[26:03] The Three Gaps in AI
[28:08] Alerts Without the Noise
[33:22] Custom vs General AI
[34:14] Sharpening Agent Skills
[40:01] Catching Repeat Failures
[43:38] Rise of Self-Healing Software
[44:12] The Chaos of Monitoring AI