Most experiments fail. Kristi Angel shares her expertise on scaling experimentation and avoiding common A/B testing pitfalls. Learn five things that can boost test velocity, how to design impactful experiments, and how to leverage knowledge repos. (Chapters below)

Kristi Angel’s LinkedIn: https://www.linkedin.com/in/kristiangel/

Subscribe to Daliana’s newsletter at www.dalianaliu.com for more on data science and careers.

Daliana’s Twitter: https://twitter.com/DalianaLiu

Daliana’s LinkedIn: https://www.linkedin.com/in/dalianaliu/

(00:00:00) Intro

(00:01:26) Why do most experiments fail?

(00:07:05) Mistakes in choosing metrics

(00:10:05) Is revenue a good metric?

(00:13:18) Split metrics in three ways

(00:15:10) Daliana's story with too many category breakdowns

(00:16:59) What makes the best data science team?

(00:19:24) Data scientists working in silos vs. in a data science team

(00:21:15) Building a knowledge center

(00:23:40) Example of a knowledge center; nuances of experimentation

(00:26:09) How many metrics and variants?

(00:30:56) How to reduce noise - CUPED (sketch after the chapter list)

(00:33:01) Future of A/B testing

(00:38:33) Q&A: Low statistical power
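
For the CUPED technique mentioned in the (00:30:56) chapter, here is a minimal sketch of the core idea, assuming a per-user metric and the same metric measured before the experiment as the covariate (the function name and toy numbers are illustrative, not from the episode):

```python
import numpy as np

def cuped_adjust(y, x):
    """CUPED: remove the part of metric y explained by pre-experiment covariate x."""
    theta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # slope of y on x
    return y - theta * (x - x.mean())

# Toy data: pre-period behaviour (x) predicts in-experiment behaviour (y),
# so subtracting the predictable part shrinks the metric's variance.
rng = np.random.default_rng(0)
x = rng.normal(10, 2, size=10_000)            # pre-experiment metric per user
y = 0.8 * x + rng.normal(0, 1, size=10_000)   # in-experiment metric per user
print(np.var(y, ddof=1), np.var(cuped_adjust(y, x), ddof=1))  # adjusted variance is smaller
```

The same adjustment is applied to both treatment and control before the usual significance test, which is what lets smaller effects be detected with the same traffic.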
