
Guest:

Topic:

  • Where do you see a gap between the "promise" of LLMs for security and how they are actually used in the field to solve customer pains?
  • I know you use LLMs for anomaly detection. How does that "trick" work? What is it good for? How effective do you think it will be?
  • Can you compare this to other anomaly detection methods? Also, won't this be costly? How do you keep inference costs under control at scale?
  • SOC teams often grapple with the tradeoff between "seeing everything" so they never miss an attack and drowning in noise. What are you seeing emerge in cloud D&R to address this challenge?
  • We hear from folks who have developed an automated approach to handle a review queue previously handled by people. Inevitably, even when precision and recall can be shown to be superior, a false negative (or a flood of false positives) brings harsh executive or customer backlash. Have you seen this phenomenon, and if so, what have you learned about handling it?
  • What other barriers need to be overcome for LLMs to push the envelope further in improving security?
  • So, from your perspective, in whose favor are LLMs going to tip the scales: cybercriminals or defenders?

Resource:
