Content provided by Eva Johnson and Emily Brady. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Eva Johnson and Emily Brady or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://staging.podcastplayer.com/legal.

In this episode, we’re diving into the world of AI and how it’s showing up in speech-language pathology. We looked at two articles—one on using AI to rate dysarthria severity, and another on using ChatGPT to help make therapy materials. We’ll break down the basics of machine learning and deep learning, talk about what works (and what’s still kind of clunky), and share how we’ve been using these tools in real-life sessions. Whether you’re AI-curious or already experimenting, this one’s for you.

You’ll learn:

  • The difference between machine learning and deep learning in speech assessment

  • How AI models can rate dysarthria severity with up to 90% accuracy

  • Why acoustic features like pitch, jitter, and shimmer are key inputs in AI analysis

  • How SLPs can use ChatGPT to generate therapy prompts for speech, language, and cognition

  • The limitations of AI, including hallucinated references and lack of language comprehension

  • Practical ideas for applying AI-generated content to your caseload

  • Why AI won’t replace SLPs—but can absolutely make our jobs easier
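If you're curious what "jitter" and "shimmer" actually measure before listening, here's a minimal sketch in Python. The numbers are made up for illustration (real values would come from a pitch-tracking tool like Praat), but the formulas shown are the standard local-jitter and local-shimmer definitions: average cycle-to-cycle variation in glottal period and amplitude, relative to their means.

```python
import numpy as np

# Hypothetical per-cycle measurements from a sustained vowel.
# In practice these come from acoustic analysis software,
# not hand-typed values.
periods = np.array([0.0100, 0.0103, 0.0098, 0.0101, 0.0099])  # seconds
amplitudes = np.array([0.80, 0.78, 0.82, 0.79, 0.81])

# Local jitter: mean absolute difference between consecutive
# periods, relative to the mean period (as a percentage).
jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods) * 100

# Local shimmer: the same idea applied to cycle-to-cycle amplitude.
shimmer = np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes) * 100

print(f"jitter: {jitter:.2f}%  shimmer: {shimmer:.2f}%")
```

Higher jitter and shimmer generally track with rougher, less stable phonation, which is why they make useful input features for the dysarthria-severity models discussed in the episode.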

Get in Touch: [email protected]

Or Visit Us At: www.SpeechTalkPod.com

Instagram: @speechtalkpod

Part of the Human Content Podcast Network

Learn more about your ad choices. Visit megaphone.fm/adchoices
