
Share your thoughts with us

A government report packed with fake citations made headlines, but the real story sits beneath the scandal: most AI "hallucinations" start with us. We walk through the hidden mechanics of failure (biased prompts, messy context, and vague questions) and show how simple workflow changes turn wobbly models into reliable partners. Rather than blame the tech, we explain how to frame analysis without forcing conclusions, how to version and prioritize knowledge so retrieval stays clean, and how to structure tasks so the model retrieves facts instead of completing patterns.
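
One way to picture those workflow changes is a prompt builder that tags each knowledge snippet with a source, version, and priority, and frames the question neutrally so the model cites facts rather than completes a pattern. This is a minimal, hypothetical sketch; the `Snippet` and `build_prompt` names are illustrative, not AI4SP's tooling:

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    source: str
    version: str   # e.g. a document revision, so stale knowledge is traceable
    priority: int  # lower number = more authoritative, cited first

def build_prompt(question: str, snippets: list[Snippet]) -> str:
    # Order context so the most authoritative, current material leads.
    ordered = sorted(snippets, key=lambda s: s.priority)
    context = "\n".join(f"[{s.source} v{s.version}] {s.text}" for s in ordered)
    # Neutral framing: ask for sourced answers and allow "not in the
    # sources" instead of presupposing a conclusion.
    return (
        "Answer using ONLY the sources below. Cite the [source] tag for "
        "every claim, and reply 'not in the sources' if they do not cover "
        "the question.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    docs = [
        Snippet("Policy X took effect in 2024.", "handbook", "3.1", 1),
        Snippet("Policy X was proposed in 2022.", "old-memo", "1.0", 2),
    ]
    print(build_prompt("When did Policy X take effect?", docs))
```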

Luis (human) and Elizabeth (AI) break down the idea of sycophantic AI, where models mirror user bias, and map it to the everyday problems it creates for AI users. Along the way, we share data from over 300,000 skills assessments showing low prompting proficiency, weak critical thinking, and limited error detection: evidence that the gap lies in human capability, not just model capacity.
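
As a hypothetical illustration of that mirroring: the first prompt below bakes in a conclusion a sycophantic model will tend to confirm, while the second asks it to weigh evidence on both sides.

```python
# Hypothetical prompts illustrating sycophancy. A model that mirrors
# user bias tends to accept the premise baked into the first prompt.
biased = "Why did remote work destroy our team's productivity?"

# Neutral framing invites the model to weigh evidence instead of
# confirming an assumed conclusion.
neutral = (
    "Based on the attached metrics, how did remote work affect our "
    "team's productivity? List evidence for and against each reading."
)

print(biased)
print(neutral)
```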

Enjoyed the conversation? Follow the show, share it with a colleague, and leave a quick review to help others find it.

🎙️ All our past episodes | 📊 All published insights

This podcast features AI-generated voices. All content is proprietary to AI4SP, based on over 250 million data points collected from 70 countries.
AI4SP: Create, use, and support AI that works for all.
© 2023-25 AI4SP and LLY Group - All rights reserved


Chapters

1. The Deloitte Hallucination Shock (00:00:00)

2. Reframing The Problem: It’s Us (00:00:54)

3. Three User Errors Causing Hallucinations (00:02:03)

4. Biased Prompts And Sycophantic AI (00:02:33)

5. Poor Context Engineering Explained (00:03:54)

6. Bad Question Structure And Fake Citations (00:05:16)

7. Skills Gap: The Digital Skills Compass (00:06:57)

8. Human Misinformation Mirrors AI Failures (00:08:24)

9. The Orchestration Layer And Apprenticeship (00:10:04)

10. Practical Verification Loops And Standards (00:11:35)

11. Closing Insight And Simple Habit (00:12:42)
