Federal Tech Podcast: for innovators, entrepreneurs, and CEOs who want to increase reach and improve brand awareness
Ep. 234 Generative AI and the Federal Cybersecurity Challenge
Connect to John Gilroy on LinkedIn https://www.linkedin.com/in/john-gilroy/
Want to listen to other episodes? www.Federaltechpodcast.com
Artificial Intelligence can be applied to code generation, predictive analytics, and what is called "generative" AI. Generative means the AI can draw on a large library of information (a Large Language Model) and create text or images that provide some value.
Because the results can be so dazzling, many forget that the starting point, the LLM, can itself be compromised.
Just because LLMs are relatively new does not mean they are not being attacked. Generative AI expands the federal government's attack surface. Malicious actors are trying to poison data, leak data, and even exfiltrate secure information.
Today, we sit down with Elad Schulman from Lasso Security to examine ways to ensure the origin of your AI is secure. He begins the interview by outlining the challenges federal agencies face in locking down LLMs.
For example, a Generative AI system can produce results, but you may not know their origin. It's like a black box that produces a list, but you have no idea where the list came from.
Elad Schulman suggests that observability should be a key element when using Generative AI, and he contrasts observability from a week ago with observability in real time.
What good is a security alert if a federal leader cannot react promptly?
Understanding the provenance of data, and how Generative AI will be infused into future federal systems, means federal leaders should become familiar with LLM security practices.