
What does it actually take to build agentic AI applications that hold up in the real world? In this episode, Ashok sits down with Austin, founder of Focused, who shares field stories and hard-won lessons from building AI systems that go beyond flashy demos. From legal assistants to government transparency tools, Austin breaks down concrete criteria for identifying where AI makes sense — and where it doesn't.

They unpack how to find the right starting point for your first agentic app, why integration with legacy systems is the real hurdle, and the engineering must-haves that keep AI behavior safe and reliable. You'll hear practical guidance on designing eval frameworks, using abstraction layers like LangChain, and letting observability shape your development roadmap, just as it does in traditional software. Whether you're a product leader or a CTO, this conversation will help you distinguish hype from real opportunity in AI.

Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge.

Inside the episode...

  • A practical checklist for identifying your first AI-powered app

  • The hidden cost of "AI for AI's sake" and where traditional software is better

  • Why repetitive knowledge work is prime territory for automation

  • How Focused helped Hamlet build an AI for parsing government meeting data

  • Where read-only data access gives you a safe starting point

  • Why integration is often more complex than the AI itself

  • The importance of eval frameworks and test-driven LLM development

  • How to use observability to continuously improve AI agent behavior

  • Speed vs. believability: surprising lessons from Groq-powered inference

  • Using multiple models in one system and LLMs to QA each other
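The eval-framework idea discussed in the episode can be pictured as a unit-test-style harness for model outputs. This is a minimal illustrative sketch, not Focused's actual tooling: `fake_llm`, `EVAL_CASES`, and the keyword check are hypothetical stand-ins, and in practice the model call would hit a real LLM API.

```python
# Minimal sketch of test-driven LLM development: each eval case pairs a
# prompt with a programmatic check, like a unit test for model behavior.

def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call; returns a canned answer.
    canned = {
        "What year did the meeting take place?": "The meeting took place in 2023.",
    }
    return canned.get(prompt, "I don't know.")

def contains_keywords(output: str, keywords: list[str]) -> bool:
    # Simple check: every expected keyword appears in the model's output.
    return all(k.lower() in output.lower() for k in keywords)

EVAL_CASES = [
    {"prompt": "What year did the meeting take place?", "keywords": ["2023"]},
]

def run_evals(model, cases) -> list[bool]:
    # Run every case through the model and record pass/fail per check.
    return [contains_keywords(model(c["prompt"]), c["keywords"]) for c in cases]

print(run_evals(fake_llm, EVAL_CASES))
```

Running the suite on every prompt or model change, the way you would a test suite, is what turns ad-hoc prompt tweaking into the kind of repeatable development loop described above.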

Mentioned in this episode

  • Focused

  • Hamlet

  • LangChain

  • Groq

  • Integral's Product Success Lab: integral.io/convergence

Subscribe to the Convergence podcast wherever you get your podcasts, and catch video episodes on YouTube at youtube.com/@convergencefmpodcast to stay up to date on the other crucial conversations we'll be posting.

Learn something? Give us a 5-star review and like the podcast on YouTube. It's how we grow.

Follow the Pod

LinkedIn: https://www.linkedin.com/company/convergence-podcast/

X: https://twitter.com/podconvergence

Instagram: @podconvergence
