Ever had your AI pair programmer stop helping and start breaking everything? I did—and this time, the data proves it wasn’t just me.
Claude fell off.
TypeScript that wouldn’t compile, migrations stuck in loops, refactors that went completely sideways. Turns out Anthropic’s own postmortem revealed three separate bugs causing degraded output: context routing issues, output corruption, and an obscure XLA compiler miscompilation.
Let's dive in.
Shameless Plugs
Free 5-day email course to go from HTML to AI
Got a question you want answered on the pod? Drop it here
Apply for 1 of 12 spots at Parsity - Learn to build complex software, work with LLMs and launch your career.
AI Bootcamp (NEW) - for software developers who want to be the expert on their team when it comes to integrating AI into web applications.
Chapters
1. Setting The Stage: Claude Felt Off (00:00:00)
2. Real-World Failures And Frustration (00:01:03)
3. Postmortem Overview And Trust (00:03:43)
4. Rumors vs Reality Of Downgrades (00:04:34)
5. The Three Bugs Explained Simply (00:05:33)
6. Non‑Determinism And Sampling Pitfalls (00:09:17)
7. Black Boxes And Developer Risk (00:11:02)
8. Practical Guardrails For Using AI (00:12:45)
9. Culture, Context, And Code Quality (00:15:06)