The loudest voices say AI will fix everything; the quieter truth is that behaviour, incentives, and trust decide whether it works at all. We sit down with CPO Brian Parkes to get past the noise and talk about what actually moves an organisation from AI theatre to real outcomes.

Fear, imposter syndrome, and quarterly pressure make leaders cling to control, yet value shows up only when teams collaborate across silos and redesign workflows end to end.

TL;DR:

  • Hype versus behavioural reality of AI
  • Executive fear, imposter syndrome, and incentives
  • Gap between C-suite intent and frontline capacity
  • Reverse mentoring and small experiments
  • Board-level questions on mission, analysts, and ethics
  • Reframing AI as augmentation, not replacement

We dig into the gap between C-suite intent and frontline reality—where overloaded teams are asked to learn a “second job” just to keep up. Instead of rolling out tech first and patching people later, we map a different path: start with a precise problem, anchor it to mission, and assemble a cross-functional owner group with shared accountability.

Reverse mentoring becomes a zero-budget unlock that lets senior leaders learn from practitioners already using AI. Small, time-boxed experiments replace 18-month slide decks. And partner selection shifts from brand comfort to proven speed and scars, because the right three-month pilot often beats the wrong long programme.

Brian also offers three board-level questions to stop “faster crap” before it starts: What problem are we solving and how does it align to mission? Which analyst concerns or customer pains does this address? What are the ethical and risk implications?

The throughline is culture: trust enables decentralised decisions, and decentralised decisions let AI cut through politics and deliver across workflows.

The final nudge is a reframe—treat AI like the jump from dial-up to broadband. It felt awkward until it didn’t. Respect the risks, invest in people as much as platforms, and let human intelligence amplify artificial intelligence.

If this resonated, follow the show, share it with a colleague who’s wrestling with AI adoption, and leave a review to help others find it.

And if you need help implementing AI in your business, let's chat about getting you the help you need - https://calendly.com/kierangilmurray/executive-leadership-and-development

Support the show

𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ [email protected]
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK

Chapters

1. Cutting Through AI Hype (00:00:00)

2. Fear, Imposter Syndrome, And Cost (00:00:39)

3. Leadership Gap Versus Org Reality (00:03:21)

4. From Tech Project To Shared Mission (00:07:08)

5. Decentralisation Versus Hierarchy (00:10:58)

6. Trust, Culture, And Collective Ownership (00:14:45)

7. Board-Level Questions And Ethics (00:18:15)

8. Reframing AI And Closing (00:21:18)
