When Amazon Web Services went down on October 20, 2025, the impact rippled around the world. The outage knocked out Slack messages, paused financial trades, grounded flights, and even stopped people from charging their electric cars. From Coinbase to college classrooms, from food delivery apps to smart homes, millions discovered just how deeply their lives depend on a single cloud provider.

In this episode, Sherri Davidoff and Matt Durrin break down what really happened inside AWS’s US-East-1 region, why a glitch in the DynamoDB database service cascaded across the globe, and what it teaches us about the growing risk from invisible “fourth-party” dependencies that lurk deep in our digital supply chains.

Key Takeaways

  1. Map and monitor your vendor ecosystem — Identify both third- and fourth-party dependencies and track their health (see the sketch after this list).
  2. Require vendors to disclose key dependencies — Request a “digital bill of materials” that identifies their critical cloud and service providers.
  3. Diversify critical workloads — Don’t rely on a single hyperscaler region or platform for mission-critical services.
  4. Integrate vendor outages into incident response playbooks — Treat SaaS and cloud downtime as security events with defined response paths.
  5. Test your resilience under real-world conditions — Simulate large-scale SaaS or cloud failures in tabletop exercises.
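
As a concrete illustration of the first takeaway, here is a minimal sketch of a vendor health check: it polls the public status endpoints of critical third- and fourth-party providers so an outage like the AWS one surfaces in your own monitoring rather than through customer complaints. The vendor names and URLs below are hypothetical placeholders, not endpoints mentioned in the episode; substitute the dependencies from your own map.

```python
import urllib.error
import urllib.request

# Hypothetical dependency map: third parties you contract with directly, plus
# the fourth-party cloud regions they run on. Names and URLs are placeholders.
VENDOR_STATUS_ENDPOINTS = {
    "payments-saas (third party)": "https://status.example-payments.com/api/v2/status.json",
    "aws-us-east-1 (fourth party)": "https://example.com/aws-us-east-1-health",
}


def vendor_is_reachable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the vendor's status endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False


if __name__ == "__main__":
    for name, url in VENDOR_STATUS_ENDPOINTS.items():
        state = "reachable" if vendor_is_reachable(url) else "UNREACHABLE, open an incident"
        print(f"{name}: {state}")
```

In practice you would point this at the real status or health-check URLs your vendors publish, run it on a schedule, and route failures into the incident response paths described in takeaway 4.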

Resources:

#cybersecurity #thirdpartyrisk #riskmanagement #infosec #ciso #cyberaware #fourthpartyrisk #cybersidechats #lmgsecurity #aws #awsoutage
