Content provided by Fiddler AI. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Fiddler AI or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.
Tracking Drift to Monitor LLM Performance

11:50
 

In this episode, we discuss how to monitor the performance of large language models (LLMs) in production environments. We explore common enterprise approaches to LLM deployment and why monitoring the quality of LLM responses over time matters. We also cover strategies for "drift monitoring" (tracking changes in both input prompts and output responses), which enables proactive troubleshooting and improvement through techniques such as fine-tuning or augmenting data sources.
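The drift-monitoring idea described above can be sketched as comparing the embedding distribution of a recent window of prompts (or responses) against a reference window. The sketch below is a minimal illustration, not Fiddler's actual method: the centroid cosine-distance metric, the window sizes, and the simulated embeddings are all assumptions chosen for demonstration.

```python
import numpy as np

def drift_score(baseline: np.ndarray, current: np.ndarray) -> float:
    """Cosine distance between the mean embeddings of two windows.

    baseline, current: (n, d) arrays of prompt or response embeddings.
    Returns a value in [0, 2]; 0 means the window centroids coincide.
    """
    b = baseline.mean(axis=0)
    c = current.mean(axis=0)
    cos = float(np.dot(b, c) / (np.linalg.norm(b) * np.linalg.norm(c)))
    return 1.0 - cos

# Simulated example: a later window whose embedding distribution has
# shifted relative to the reference window.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(500, 8))  # reference window
shifted = rng.normal(0.5, 1.0, size=(500, 8))   # later window, mean shift

score = drift_score(baseline, shifted)
# In practice the alert threshold would be tuned on historical windows.
```

In a real deployment this score would be computed on a rolling schedule, with a sustained increase triggering investigation (e.g. fine-tuning or refreshing data sources, as the episode discusses).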

Read the article by Fiddler AI and explore additional resources on how AI observability can help developers build trust into AI services.
