BONUS Building Reliable Software with Unreliable AI Tools With Lada Kesseler
Scrum Master Toolbox Podcast: Agile storytelling from the trenches
In this special episode, Lada Kesseler shares her journey from AI skeptic to pioneer in AI-assisted development. She explores the spectrum from careful, test-driven development to quick AI-driven experimentation, revealing practical patterns, anti-patterns, and the critical role of judgment in modern software engineering.
From Skeptic to Pioneer: Lada's AI Coding Journey
"I got a new skill for free!"
Lada's transformation began when she discovered Anthropic's Claude Projects. Despite being skeptical about AI tools throughout 2023, she found herself learning Angular frontend development with AI—a technology she had no prior experience with. This breakthrough moment revealed something profound: AI could serve as an extension of her existing development skills, enabling her to acquire new capabilities without the traditional learning curve. The journey evolved through Windsurf and Claude Code, each tool expanding her understanding of what's possible when developers collaborate with AI.
Understanding Vibecoding vs. AI-Assisted Development
"AI assisted coding requires judgment, and it's never been as important to exercise judgment as now."
Lada introduces the concept of "vibecoding" as one extreme on a new dimension in software development—the spectrum from careful, test-driven development to quick, AI-driven experimentation. The key insight isn't that one approach is superior, but that developers must exercise judgment about which approach fits their context. She warns against careless AI coding for production systems: "You just talk to a computer, you say, do this, do that. You don't really care about code... For some systems, that's fine. When the problem arises is when you put the stuff to production and you really care about your customers. Please, please don't do that." This wisdom highlights that with great power comes great responsibility—AI accelerates both good and bad practices.
The Answer Injection Anti-Pattern When Working With AI
"You're limiting yourself without knowing, you're limiting yourself just by how you formulate your questions. And it's so hard to detect."
One of Lada's most important discoveries is the "answer injection" anti-pattern—when developers unconsciously constrain AI's responses by how they frame their questions. She experienced this firsthand when she asked an AI about implementing a feature using a specific approach, only to realize later that she had prevented the AI from suggesting better alternatives. The solution? Learning to ask questions more openly and reformulating problems to avoid self-imposed limitations. As she puts it, "Learn to ask the right way. This is one of the powers this year that's been kind of super cool." This skill of question formulation has become as critical as any technical capability.
Answer injection is when we, sometimes unknowingly, ask a leading question that also embeds a possible answer. It's an anti-pattern because LLMs have access to far more information than we do. Lada's advice: "just ask for anything you need"; the LLM might have a better answer than the one you assumed.
Never Trust a Single LLM: Multi-Agent Collaboration
"Never trust the output of a single LLM. When you ask it to develop a feature, and then you ask the same thing to look at that feature, understand the code, find the issues with it—it suddenly finds improvements."
Lada shares her experiments with swarm programming—using multiple AI instances that collaborate and cross-check each other's work. She created specialized agents (architect, developer, tester) and even built systems using AppleScript and Tmux to make different AI instances communicate with each other. This approach revealed a powerful pattern: AI reviewing AI often catches issues that a single instance would miss. The practical takeaway is simple but profound—always have one AI instance review another's work, treating AI output with the same healthy skepticism you'd apply to any code review.
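The cross-check pattern can be sketched in a few lines. This is a minimal illustration, not Lada's actual AppleScript/Tmux setup: `cross_check` and the prompts are assumptions, and `call` stands in for whatever function sends a prompt to your LLM client and returns its text.

```python
from typing import Callable


def cross_check(task: str, call: Callable[[str], str]) -> dict:
    """Have one LLM pass produce work and a second pass review it.

    `call` is any function that sends a prompt to an LLM and returns
    its text response (hypothetical; wire up your own client).
    """
    # First instance: act as the developer.
    draft = call(f"Implement the following and return only code:\n{task}")
    # Second instance: act as a skeptical reviewer of the first's output.
    review = call(
        "You are a skeptical code reviewer. Find bugs, missing tests, "
        f"and design issues in this code:\n{draft}"
    )
    return {"draft": draft, "review": review}


# Usage with a stub in place of a real LLM client:
responses = iter(["def add(a, b): return a + b", "Missing type hints and tests."])
result = cross_check("an add function", lambda prompt: next(responses))
```

The same structure extends to specialized roles (architect, developer, tester) by varying the system prompt for each `call`.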
Code Quality Matters MORE with AI
"This thing is a monkey, and if you put it in a good codebase, like any developer, it's gonna replicate what it sees. So it behaves much better in the better codebase, so refactor!"
Lada emphasizes that code quality becomes even more critical when working with AI. Her systems "work silently" and "don't make a lot of noise, because they don't break"—a result of maintaining high standards even when AI makes rapid development tempting. She uses a memorable metaphor: AI is like a monkey that replicates what it sees. Put it in a clean, well-structured codebase, and it produces clean code. Put it in a mess, and it amplifies that mess. This insight transforms refactoring from a nice-to-have into a strategic necessity—good architecture and clean code directly improve AI's ability to contribute effectively.
Managing Complexity: The Open Question
"If I just let it do things, it'll just run itself to the wall at crazy speeds, because it's really good at running. So I have to be there managing complexity for it."
One of the most honest insights Lada shares is the current limitation of AI: complexity management. While AI excels at implementing features quickly, it struggles to manage the growing complexity of systems over time. Lada finds herself acting as the complexity manager, making architectural decisions and keeping the system maintainable while AI handles implementation details. She poses a critical question for the future: "Can it manage complexity? Can we teach it to manage complexity? I don't know the answer to that." This honest assessment reminds us that fundamental software engineering skills—architecture, refactoring, testing—remain as vital as ever.
Context is Everything: Highway vs. Parking Lot
"You need to be attuned to the environment. You can go faster or slow, and sometimes going slow is bad, because if you're on a highway, you're gonna get hurt."
Lada introduces a powerful metaphor for choosing development speed: highway versus parking lot. When learning or experimenting with non-critical systems, you can go fast, don't worry about perfection, and leverage AI's speed fully. But when building production systems where reliability matters, different rules apply. The key is matching your development approach to the risk level and context. She emphasizes safety nets: "In one project, we used AI, and we didn't pay attention to the code, as it wasn't important, because at any point, we could actually step back and refactor. We were not unsafe." This perspective helps developers make better judgment calls about when to accelerate and when to slow down.
The Era of Discovery: We've Only Just Begun
"We haven't even touched the possibilities of what is there out there right now. We're in the era of gentleman scientists—newbies can make big discoveries right now, because nobody knows what AI really is capable of."
Perhaps most exciting is Lada's perspective on where we stand in the AI-assisted development journey: we're at the very beginning. Even the creators of these tools are figuring things out as they go. This creates unprecedented opportunities for practitioners at all levels to experiment, discover patterns, and share learnings with the community. Lada has documented her discoveries in an interactive patterns and anti-patterns website, a Calgary Software Crafters presentation, and her Substack blog—contributing to the collective knowledge base that's being built in real-time.
Resources For Further Study
Video of Lada's talk: https://www.youtube.com/watch?v=_LSK2bVf0Lc&t=8654s
Lada's Patterns and Anti-patterns website: https://lexler.github.io/augmented-coding-patterns/
Lada's Substack: https://lexler.substack.com/
AI Assisted Coding episode with Dawid Dahl
About Lada Kesseler
Lada Kesseler is a passionate software developer specializing in the design of scalable, robust software systems. With a focus on best development practices, she builds applications that are easy to maintain, adapt, and support. Lada combines technical expertise with a keen eye for clean architecture and sustainable code, driving innovation in modern software engineering.
Currently exploring how these values translate to AI-assisted development and figuring out what it takes to build reliable software with unreliable tools.
You can link with Lada Kesseler on LinkedIn.