A key challenge in designing AI agents is that large language models are stateless and have limited context windows, so maintaining continuity and reliability across sequential LLM interactions takes careful engineering. To perform well, agents need fast systems for storing and retrieving short-term conversation history, summaries, and long-term facts.
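
As a rough illustration of the short-term side (not taken from the episode), the sketch below uses the redis-py client to keep a rolling window of conversation turns in a Redis list. The key pattern, turn limit, and TTL are illustrative assumptions, not a prescribed design.

```python
import json
import redis

# Assumes a local Redis instance and the redis-py client; key names,
# the turn limit, and the TTL below are illustrative choices.
r = redis.Redis(decode_responses=True)

def append_turn(session_id: str, role: str, content: str, max_turns: int = 20) -> None:
    """Store one conversation turn and keep only the most recent max_turns."""
    key = f"session:{session_id}:messages"
    r.rpush(key, json.dumps({"role": role, "content": content}))
    r.ltrim(key, -max_turns, -1)   # drop turns that no longer fit the window
    r.expire(key, 3600)            # let idle sessions age out after an hour

def recent_turns(session_id: str) -> list[dict]:
    """Fetch the short-term history to prepend to the next LLM call."""
    key = f"session:{session_id}:messages"
    return [json.loads(m) for m in r.lrange(key, 0, -1)]
```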

Redis is an open‑source, in‑memory data store widely used for high‑performance caching, analytics, and message brokering. Recent advances have extended Redis’ capabilities to vector search and semantic caching, which has made it an increasingly popular part of the agentic application stack.
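
To make the vector-search piece concrete, here is a minimal, hypothetical sketch using redis-py against a Redis instance with the search module enabled (for example, Redis Stack). The index name, key prefix, and 384-dimension embedding size are assumptions, and the random vectors stand in for real embeddings from a model.

```python
import numpy as np
import redis
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

r = redis.Redis(decode_responses=True)

# Index long-term memories under the "memory:" prefix with an HNSW vector field.
# DIM=384 assumes a small sentence-embedding model; adjust to your embedder.
schema = (
    TextField("text"),
    VectorField("embedding", "HNSW", {
        "TYPE": "FLOAT32", "DIM": 384, "DISTANCE_METRIC": "COSINE",
    }),
)
r.ft("memory_idx").create_index(
    schema,
    definition=IndexDefinition(prefix=["memory:"], index_type=IndexType.HASH),
)

# Write one memory (a random embedding stands in for a real one).
r.hset("memory:1", mapping={
    "text": "User prefers concise answers.",
    "embedding": np.random.rand(384).astype(np.float32).tobytes(),
})

# Retrieve the 3 memories closest to a query embedding.
query = (
    Query("*=>[KNN 3 @embedding $vec AS score]")
    .sort_by("score")
    .return_fields("text", "score")
    .dialect(2)
)
query_vec = np.random.rand(384).astype(np.float32).tobytes()
for doc in r.ft("memory_idx").search(query, query_params={"vec": query_vec}).docs:
    print(doc.text, doc.score)
```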

Andrew Brookins is a Principal Applied AI Engineer at Redis. He joins the show with Sean Falconer to discuss the challenges of building AI agents, the role of memory in agents, hybrid search versus vector-only search, the concept of world models, and more.

Full Disclosure: This episode is sponsored by Redis.

Sean’s been an academic, startup founder, and Googler. He has published works covering a wide range of topics, from AI to quantum computing. Currently, Sean is an AI Entrepreneur in Residence at Confluent, where he works on AI strategy and thought leadership. You can connect with Sean on LinkedIn.

Please click here to see the transcript of this episode.

Sponsorship inquiries: [email protected]

The post Redis and AI Agent Memory with Andrew Brookins appeared first on Software Engineering Daily.