
Send us a text

Why does it take three years to deploy a digital pathology tool that only took three weeks to build? That’s the reality no one talks about, but one every lab feels each time it deploys a new tool...

In this episode, I sit down with Andrew Janowczyk, Assistant Professor at Emory University and one of the leading voices in computational pathology, to unpack the practical, messy, real-world truth behind deploying, validating, and accrediting digital pathology tools in the clinic.

We walk through Andrew’s experience building and implementing an H. pylori detection algorithm at Geneva University Hospital—a project that exposed every hidden challenge in the transition from research to a clinical-grade tool.

From algorithmic hardening, multidisciplinary roles, usability studies, and ISO 15189 accreditation to the constant tug-of-war between research ambition and clinical reality, this conversation is a roadmap for anyone building digital tools that actually need to work in practice.

Episode Highlights

  • [00:00–04:20] Why multidisciplinary collaboration is the non-negotiable cornerstone of clinical digital pathology deployment
  • [04:20–08:30] Real-world insight: The H. pylori detection tool and how it surfaces the “top 20” most likely regions for pathologist review (a small illustrative sketch follows this list)
  • [08:30–12:50] The painful truth: Algorithms take weeks to build—but years to deploy, validate, and accredit
  • [12:50–17:40] Why curated research datasets fail in the real world (and how to fix it with unbiased data collection)
  • [17:40–23:00] Algorithmic hardening: turning fragile research code into production-ready clinical software
  • [23:00–28:10] Why every hospital is a snowflake: no standard workflows, no copy-paste deployments
  • [28:10–33:00] The 12 validation and accreditation roles every lab needs to define (EP, DE, QE, IT, etc.)
  • [33:00–38:15] Validation vs. accreditation—what they are, how they differ, and when each matters
  • [38:15–43:40] Version locking, drift prevention, and why monitoring is as important as deployment
  • [43:40–48:55] Deskilling concerns: how AI changes perception and what pathologists need before adoption
  • [48:55–55:00] Usability testing: why naive users reveal the truth about your UI
  • [55:00–61:00] Scaling to dozens of algorithms: bottlenecks, documentation, and the future of clinical digital pathology and AI workflows
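To make the triage idea from the second highlight concrete, here is a minimal, hypothetical sketch in Python. It is not the Geneva tool discussed in the episode; the tile format, the `predict_proba` stand-in, and the `rank_regions` helper are assumptions made purely for illustration. The point is simply that the classifier scores every tile of a whole-slide image and the highest-probability regions are surfaced for the pathologist to review first.

```python
# Hypothetical sketch of "top 20 regions" triage: score every tile of a
# whole-slide image with a classifier and surface the highest-scoring
# regions for pathologist review. Not the actual clinical tool.

from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

import numpy as np


@dataclass
class RegionScore:
    x: int               # tile origin (pixels) on the slide
    y: int
    probability: float   # model's estimated probability the tile contains H. pylori


def rank_regions(
    tiles: Iterable[Tuple[int, int, np.ndarray]],
    predict_proba: Callable[[np.ndarray], float],
    top_k: int = 20,
) -> List[RegionScore]:
    """Score each (x, y, image) tile and return the top_k highest-scoring regions."""
    scores = [RegionScore(x, y, float(predict_proba(img))) for x, y, img in tiles]
    scores.sort(key=lambda s: s.probability, reverse=True)
    return scores[:top_k]


if __name__ == "__main__":
    # Stand-in data and model, just to show the shape of the workflow.
    rng = np.random.default_rng(0)
    fake_tiles = [
        (x, y, rng.random((256, 256, 3)))
        for x in range(0, 2048, 256)
        for y in range(0, 2048, 256)
    ]
    fake_model = lambda img: float(img.mean())  # placeholder for a trained classifier
    for region in rank_regions(fake_tiles, fake_model):
        print(f"tile at ({region.x}, {region.y}) -> p={region.probability:.3f}")
```

The returned list drives a review queue rather than replacing the pathologist's read of the slide, and turning even a simple ranker like this into clinical-grade software is exactly where the hardening, version locking, and monitoring discussed later in the episode come in.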


Key Takeaways

  • Algorithm creation is the easy part—deployment is the mountain.
  • Clinical algorithms require multidisciplinary ownership across 12 institutional roles.
  • Real-world data is messy—and that’s exactly why algorithms must be trained on it.
  • No two hospitals are alike; every deployment requires local adaptation.
  • Usability matters as much as accuracy—naive users expose real workflow constraints.
  • Patho

Support the show

Get the "Digital Pathology 101" FREE E-book and join us!
