Guest:
- Ari Herbert-Voss, CEO at RunSybil
Topics:
- The market already has Breach and Attack Simulation (BAS) for testing known TTPs. You're calling this 'AI-powered' red teaming. Is this just a fancy LLM stringing together known attacks, or is there a genuine agent here that can discover a truly novel attack path that a human hasn't scripted for it?
- Let's talk about the 'so what?' problem. Pentest reports are famous for becoming shelf-ware. How do you turn a complex AI finding into an actionable ticket for a developer, and more importantly, how do you help a CISO decide which of the thousand 'criticals' to actually fix first?
- You're asking customers to unleash a 'hacker AI' in their production environment. That's terrifying. What are the 'do no harm' guardrails? How do you guarantee your AI won't accidentally rm -rf a critical server or cause a denial of service while it's 'exploring'?
- You mentioned the AI is particularly good at finding authentication bugs. Why that specific category? What's the secret sauce there, and what's the reaction from customers when you show them those types of flaws?
- Is this AI meant to replace a human red teamer, or make them better? Does it automate the boring stuff so experts can focus on creative business logic attacks, or is the ultimate goal to automate the entire red team function away?
- So, is this just about finding holes, or are you closing the loop for the blue team? Can the attack paths your AI finds be automatically translated into high-fidelity detection rules? Is the end goal a continuous purple team engine that's constantly training our defenses?
- Also, what about fixing? What makes your findings more fixable than those in a typical pentest report?
- What will happen to red team testing in 2-3 years if this technology gets better?
Resources:
- Kim Zetter's Zero Day blog
- EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google
- EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?
- EP68 How We Attack AI? Learn More at Our RSA Panel!
- EP71 Attacking Google to Defend Google: How Google Does Red Team