Content provided by The Oakmont Group and John Gilroy. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Oakmont Group and John Gilroy or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.
Ep. 234 Generative AI and the Federal Cybersecurity Challenge

20:59
 

Connect with John Gilroy on LinkedIn: https://www.linkedin.com/in/john-gilroy/

Want to listen to other episodes? Visit www.Federaltechpodcast.com

Artificial Intelligence can be applied to code generation, predictive analytics, and what is called “generative” AI. Generative means the AI can draw on a library of information (a Large Language Model) and create text or images that provide value.

Because the results can be so dazzling, many forget to consider the ways the starting point, the LLM, can be compromised.

Just because LLMs are relatively new does not mean they are not being attacked. Generative AI expands the federal government's attack surface. Malicious actors are trying to poison data, leak data, and even exfiltrate secure information.
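To make one of these risks concrete, here is a minimal, illustrative sketch (not from the episode, and not Lasso Security's product) of screening an LLM response for sensitive patterns before it leaves an agency boundary. The pattern names and regexes are hypothetical examples; a real deployment would use an agency-approved detection list.

```python
import re

# Hypothetical patterns for data that should never appear in an LLM response.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # U.S. Social Security number
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),     # secret-key-style token
}

def screen_llm_output(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in an LLM response."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def is_safe(text: str) -> bool:
    """True if the response contains none of the flagged patterns."""
    return not screen_llm_output(text)
```

A gateway sitting between users and the model could call `is_safe` on every response and block or redact anything that matches, which is one simple way to limit exfiltration through model output.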

Today, we sit down with Elad Schulman from Lasso Security to examine ways to ensure the origin of your AI is secure. He begins the interview by outlining the challenges federal agencies face in locking down LLMs.

For example, a Generative AI system can produce results, but you may not know their origin. It's like a black box that produces a list, but you have no idea where the list came from.

Elad Schulman suggests that observability should be a key element when using Generative AI, and he contrasts observability that lags by a week with observability in real time.
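The real-time observability idea can be sketched in a few lines: log every prompt/response pair as it happens so security teams can review activity immediately rather than a week later. This is an illustrative toy, not the approach described in the episode; the injection-detection heuristic is a deliberately naive placeholder.

```python
import time
from dataclasses import dataclass, field

@dataclass
class LLMObserver:
    """Records every prompt/response exchange with a timestamp,
    flagging suspicious prompts at the moment they occur."""
    events: list = field(default_factory=list)

    def record(self, prompt: str, response: str) -> dict:
        event = {
            "ts": time.time(),
            "prompt": prompt,
            "response": response,
            # Naive placeholder heuristic for prompt-injection attempts.
            "flagged": "ignore previous instructions" in prompt.lower(),
        }
        self.events.append(event)
        return event
```

Because each exchange is flagged as it is recorded, an alert can fire while the analyst can still act on it, which is the difference between real-time and week-old observability.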

What good is a security alert if a federal leader cannot react promptly?

Understanding the provenance of data, and how Generative AI will be infused into future federal systems, means federal leaders should understand LLM security practices.


234 episodes
