
What if a convincing medical article you read online—citing peer-reviewed journals and quoting real-sounding experts—was entirely fabricated by AI?

In this episode, we dive into the unsettling world of AI-generated health disinformation. Researchers recently built custom GPT-based chatbots instructed, via system prompts, to spread health myths. The result? Persuasive narratives full of fabricated studies, misleading statistics, and plausible-sounding jargon, convincing enough to sway even savvy readers.

We break down how these AI systems were created, why today’s safeguards failed to stop them, and what this means for public health. With disinformation spreading faster than truth on social media, even a single viral post can lead to real-world consequences: lower vaccination rates, delayed treatments, or widespread mistrust in medical authorities.

But there’s hope. Using a four-pronged approach of fact-checking, digital literacy, communication design, and policy, we explore how society can fight back. This episode is a call to action: to become vigilant readers, ethical technologists, and thoughtful citizens in a world where convincing falsehoods can be generated on demand.

References:

How to Combat Health Misinformation: A Psychological Approach
Jon Roozenbeek & Sander van der Linden
American Journal of Health Promotion, 2022

Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance: Weapons of Mass Disinformation
Bradley D. Menz, Natansh D. Modi, Michael J. Sorich, Ashley M. Hopkins
JAMA Internal Medicine, 2024

Current Safeguards, Risk Mitigation, and Transparency Measures of Large Language Models Against the Generation of Health Disinformation
Bradley D. Menz et al.
BMJ, 2024

Urgent Need for Standards and Safeguards for Health-Related Generative Artificial Intelligence
Reed V. Tuckson & Brinleigh Murphy-Reuter
Annals of Internal Medicine, 2025

Assessing the System-Instruction Vulnerabilities of Large Language Models to Malicious Conversion Into Health Disinformation Chatbots
Natansh D. Modi, Bradley D. Menz, et al.
Annals of Internal Medicine, 2025

Credits:

Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0
https://creativecommons.org/licenses/by/4.0/

Chapters

1. Understanding Health Disinformation (00:00:00)

2. Sunscreen Myth Case Study (00:04:40)

3. Public Health Consequences of Disinformation (00:09:28)

4. How LLMs Generate Convincing Falsehoods (00:12:12)

5. Creating Disinformation Chatbots Without Coding (00:16:20)

6. Fighting the Information Epidemic (00:25:53)

7. Solutions and Sobering Conclusions (00:31:12)
