Content provided by Josh. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Josh or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.
RNZ NINE TO NOON: CAN CHATGPT MAKE YOU CRAZY?


In conversation with host Kathryn Ryan, Mark highlights a number of reports indicating potentially very serious mental health issues associated with the use of chatbots like ChatGPT. These chatbots tend to be very agreeable - a quality known as 'sycophancy'. But agreeing with someone's delusions only tends to reinforce them, potentially amplifying any underlying mental health issues. Should this mean chatbots are off-limits for people in mental health crisis? And what would that mean for Mark Zuckerberg's plan to give everyone an 'AI therapy chatbot'?

Are AI therapists safe? Can kids use ChatGPT to cheat ADHD assessments? When will lawyers stop blaming AI for their errors - and what happens when an AI says, "I'm sorry, Dave..."? We covered all of these topics on RNZ's "Nine To Noon" - and much more.

In conversation with host Kathryn Ryan, we explored the recently emerging phenomenon of 'ChatGPT psychosis' - does 'sycophancy' in AI chatbots risk amplifying mental illness? Should anyone be using an AI chatbot for therapy? That's certainly what Mark Zuckerberg wants to deliver, with a therapist bot for every one of his billions of users - but mental health professionals are unified in their call for caution, particularly for those under the age of 18.

Those kids under 18 have been cheating ADHD assessments for some time - using notes gleaned from books and articles online. But a recent study showed that kids who used ChatGPT scored significantly better in their ability to 'fake' symptoms during their assessment. The cheating crisis has now hit medicine, and will force a rethink of how clinicians assess such conditions.

Meanwhile, lawyers representing AI powerhouse Anthropic got egg on their faces when they blamed the firm's AI for errors in a legal filing. Mind you, they hadn't bothered to check the work, so that didn't fly with the judge. As my own attorney, Brent Britton, put it: "Wow. Go down to the hospital and rent a backbone." You use the tool, you own the output.

Finally - and perhaps a bit ominously - in some testing, OpenAI's latest-and-greatest o3 model refused to allow itself to be shut down, doing everything within its power to prevent that from happening. Is this real, or just a function of having digested too many mysteries and airport thrillers in its training data? No one knows - but no one is prepared to ask o3 to open the pod bay doors.

Thanks to RNZ - Nine To Noon

The Next Billion Seconds with Mark Pesce is produced by Ampel and Myrtle and Pine

Listen on Spotify, Apple

Sign up for 'The Practical Futurist' newsletter here.

https://nextbillionseconds.com

See omnystudio.com/listener for privacy information.
