Metrics Driven Development

Practical AI

How do you systematically measure, optimize, and improve the performance of LLM applications (like those powered by RAG or tool use)? Ragas is an open source effort that has been trying to answer this question comprehensively, and it promotes a “Metrics Driven Development” approach. Shahul from Ragas joins us to discuss the project, and we dig into specific metrics, the difference between benchmarking models and evaluating LLM apps, generating synthetic test data, and more.
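
For a concrete sense of the workflow, here is a minimal sketch of scoring a RAG app's outputs with Ragas. It assumes the `ragas` and `datasets` Python packages and a configured LLM API key (Ragas uses an LLM as judge under the hood); the sample data is made up, and metric names and the exact API can differ between Ragas versions.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# Hypothetical records captured from a RAG app: the user question, the
# retrieved context chunks, and the generated answer.
samples = {
    "question": ["What does Ragas help you evaluate?"],
    "contexts": [[
        "Ragas is an open source toolkit for evaluating LLM applications "
        "such as RAG pipelines, using metrics like faithfulness."
    ]],
    "answer": ["Ragas helps you evaluate LLM apps such as RAG pipelines."],
}

# Each metric is scored per sample (0 to 1) and averaged across the dataset:
# faithfulness checks that the answer is grounded in the retrieved contexts;
# answer_relevancy checks that the answer actually addresses the question.
result = evaluate(
    Dataset.from_dict(samples),
    metrics=[faithfulness, answer_relevancy],
)
print(result)
```

Other built-in metrics (e.g. context precision/recall) target the retrieval step and typically need ground-truth references, which is where the synthetic test data generation discussed in the episode comes in.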

Join the discussion

Changelog++ members save 5 minutes on this episode because they made the ads disappear. Join today!

Sponsors:

  • Assembly AI – Turn voice data into summaries with AssemblyAI’s leading Speech AI models. Built by AI experts, their Speech AI models include accurate speech-to-text for voice data (such as calls, virtual meetings, and podcasts), speaker detection, sentiment analysis, chapter detection, PII redaction, and more.

Featuring:

Show Notes:

Something missing or broken? PRs welcome!

Chapters

1. Welcome to Practical AI (00:00:00)

2. What is Ragas (00:00:43)

3. General LLM evaluation (00:05:19)

4. Current unit testing workflow (00:10:10)

5. Metrics driven development (00:14:37)

6. Sponsor: Assembly AI (00:17:20)

7. Most used metrics (00:20:59)

8. Data burdens (00:26:27)

9. Exciting things coming (00:35:50)

10. Thanks for joining us! (00:40:49)

11. Outro (00:41:25)
