Content provided by Darshan Kulkarni. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Darshan Kulkarni or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.
Should You Use AI to Draft Informed Consent?

12:39
 
Drawing on their combined legal and regulatory expertise, Darshan Kulkarni and Edye Edens discuss the potential and pitfalls of using AI—like ChatGPT—to draft informed consent documents in clinical research. They explore how AI could save time, whether it fits institutional IRB requirements, and the real-world value (or lack thereof) for different types of organizations.

Key Takeaways:

  • Drafting vs. Final Use: AI can be useful as a first-draft tool, especially for high-volume sponsors. But using it for final documents without oversight is risky.

  • IRB Templates Matter: Many IRBs (especially academic or VA-affiliated) require strict templates—limiting AI's value unless those are integrated upfront.

  • Regulatory Landscape: AI-drafted consents must meet not just FDA standards, but also OHRP requirements. Compliance and clarity are non-negotiable.

  • Customization Is Key: Most current AI tools are just wrappers over ChatGPT. Real ROI comes from domain-specific models trained for clinical research.

  • Data & IP Risks: Feeding protocols into AI raises confidentiality concerns. And ownership of the output remains unclear, given unresolved copyright questions around training data.

  • Why ROI Falls Short: Companies often reassign internal staff to AI projects instead of building bespoke solutions. Without a clear use case and strategic planning, the results disappoint.

AI shouldn’t replace people—it should support them. Darshan and Edye agree: if you're using AI to generate a first draft of informed consent documents, it could help streamline high-volume workflows. But expecting it to generate a compliant final version is unrealistic. Most current tools feel more like flashy "wrappers" around existing models and often lack a strong ROI. And let’s not forget the hidden risks—like IP concerns and exposing proprietary protocols.

Bottom line? AI has potential, but without strategic investment and oversight, it’s just another overhyped shortcut. Use it wisely, or not at all.

Support the show
