Content provided by Aleksandra Zuraw, DVM, PhD, Aleksandra Zuraw, and DVM. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Aleksandra Zuraw, DVM, PhD, Aleksandra Zuraw, and DVM or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.
132: Ethical and Bias Considerations in Artificial Intelligence/Machine Learning

51:06
 

In this episode of the Digital Pathology Podcast, I explore the ethical and bias considerations in AI and machine learning through the lens of pathology. This is part six of our special seven-part series based on the landmark Modern Pathology review co-authored by the UPMC group, including Matthew Hanna, Liam Pantanowitz, and Hooman Rashidi.

From data bias and algorithmic bias to labeling, sampling, and representation issues, I break down where biases in AI can arise—and what we, as medical data stewards, must do to recognize, mitigate, and avoid them.

🔬 Key Topics Covered:

  • [00:00:00] Introduction and post-USCAP 2025 reflections
  • [00:03:00] Overview of AI and ethics paper from Modern Pathology
  • [00:06:00] What it means to be a “data steward” in pathology
  • [00:08:00] Core ethical principles: autonomy, beneficence, justice & more
  • [00:13:00] Types of bias in AI systems: data, sampling, algorithmic, labeling
  • [00:22:00] Temporal and feedback loop bias examples in pathology
  • [00:29:00] FDA involvement and global guidelines for ethical AI
  • [00:34:00] Bias mitigation: from diverse datasets to ongoing monitoring
  • [00:43:00] The FAIR principles for responsible data use
  • [00:49:00] AI development & reporting frameworks: QUADAS, CONSORT, STARD

🩺 Why This Episode Matters:
If we want to deploy AI ethically and reliably in pathology, we must check our bias—not just once, but at every stage of AI development. This episode gives you practical tools, frameworks, and principles for building responsible AI workflows from the ground up.

🎧 Listen now and become a more conscious and capable digital pathology data steward.

👉 Get the Paper here: Ethical and Bias Considerations in Artificial Intelligence/Machine Learning

📘 Explore more on this topic: https://digitalpathologyplace.com

Support the show

Become a Digital Pathology Trailblazer, get the free "Digital Pathology 101" e-book, and join us!

137 episodes