Large language models have captured headlines, but they represent only a fraction of what AI can accomplish. Talbot West co-founders Jacob Andra and Stephen Karafiath explore the fundamental limitations of LLMs and why neurosymbolic AI offers a more robust path forward for enterprise applications.
LLMs sometimes display remarkable contextual awareness, like when ChatGPT proactively noticed specific tile flooring in a photo's background and offered unsolicited cleaning advice. These moments suggest genuine intelligence. But as Jacob and Stephen explain, push these systems harder and the cracks appear.
The hosts examine specific failure modes that emerge when deploying LLMs at scale. Jacob documents persistent formatting errors where models swing between extremes—overusing lists, then refusing to use them at all, even when instructions explicitly define appropriate use cases. These aren't random glitches. They reveal systematic overcorrection behaviors where LLMs bounce off guardrails rather than operating within defined bounds.
More troubling are the logical inconsistencies. When working with large corpora of information, LLMs demonstrate what Jacob calls cognitive fallacies: errors that mirror human reasoning failures but stem from different causes. The models cannot maintain complex instructions across extended tasks. They hallucinate citations, fabricate data, and contradict themselves when context windows stretch too far. Even the latest reasoning models cannot shake certain habits, like the infamous em-dash overuse, no matter how explicitly you prompt against them.
Stephen introduces the deny-affirm construction as another persistent pattern: "It's not X, it's Y" formulations that plague AI-generated content. Tell the model to avoid this construction and watch it appear anyway, sometimes in the very next paragraph. These aren't bugs to be patched. They're symptoms of fundamental architectural limitations.
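The deny-affirm pattern is regular enough that you can flag it mechanically in generated text. Below is a minimal illustrative sketch (the regex and function name are our own, not anything discussed in the episode) that catches common "It's not X, it's Y" phrasings:

```python
import re

# Rough heuristic for the "deny-affirm" construction in AI-generated
# text: a negation clause followed by an affirmation clause.
# This pattern is illustrative and will miss many variants.
DENY_AFFIRM = re.compile(
    r"\b(?:it'?s|this is|that'?s)\s+not\s+[^.,;]+[,;]\s*(?:it'?s|this is|that'?s)\b",
    re.IGNORECASE,
)

def find_deny_affirm(text: str) -> list[str]:
    """Return each matched deny-affirm phrase found in the text."""
    return [m.group(0) for m in DENY_AFFIRM.finditer(text)]
```

A linter like this can only flag the symptom after the fact; as the hosts note, prompting alone does not remove the underlying tendency.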
The solution lies in neurosymbolic AI, which combines neural networks with symbolic reasoning systems. Jacob and Stephen use an extended biological analogy: LLMs are like organisms without skeletons. A paramecium works fine at microscopic scale, but try to build something elephant-sized from the same squishy architecture and it collapses under its own weight. The skeleton—knowledge graphs, structured data, formal logic—provides the rigid structure necessary for complex reasoning at scale.
Learn more about neurosymbolic approaches: https://talbotwest.com/ai-insights/what-is-neurosymbolic-ai
About the hosts:
Jacob Andra is CEO of Talbot West and serves on the board of 47G, a Utah-based public-private aerospace and defense consortium. He pushes the limits of what AI can accomplish in high-stakes use cases and publishes extensively on AI, enterprise transformation, and policy, covering topics including explainability, responsible AI, and systems integration.
Stephen Karafiath is co-founder of Talbot West, where he architects and deploys AI solutions that bridge the gap between theoretical capabilities and practical business outcomes. His work focuses on identifying the specific failure modes of AI systems and developing robust approaches to enterprise implementation.
About Talbot West:
Talbot West delivers Fortune 500-level AI consulting and implementation to midmarket and enterprise organizations. The company specializes in practical AI deployment through its proprietary APEX (AI Prioritization and Execution) framework and Cognitive Hive AI (CHAI) architecture, which emphasizes modular, explainable AI systems over monolithic black-box models.
Visit talbotwest.com to learn how we help organizations cut through AI hype and implement AI that delivers.