Search a title or topic

Over 20 million podcasts, powered by 

Player FM logo
Artwork

Content provided by Chatcyberside. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Chatcyberside or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.
Player FM - Podcast App
Go offline with the Player FM app!

The AI Insider Threat: EchoLeak and the Rise of Zero-Click Exploits

13:54
 
Share
 

Manage episode 490553458 series 3625301
Content provided by Chatcyberside. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Chatcyberside or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.

Can your AI assistant become a silent data leak? In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down EchoLeak, a zero-click exploit in Microsoft 365 Copilot that shows how attackers can manipulate AI systems using nothing more than an email. No clicks. No downloads. Just a cleverly crafted message that turns your AI into an unintentional insider threat.

They also share a real-world discovery from LMG Security’s pen testing team: how prompt injection was used to extract system prompts and override behavior in a live web application. With examples ranging from corporate chatbots to real-world misfires at Samsung and Chevrolet, this episode unpacks what happens when AI is left untested—and why your security strategy must adapt.

Key Takeaways

  1. Limit and review the data sources your LLM can access—ensure it doesn’t blindly ingest untrusted content like inbound email, shared docs, or web links.
  1. Audit AI integrations for prompt injection risks—treat language inputs like code and include them in standard threat models.
  1. Add prompt injection testing to every web app and email flow assessment, even if you’re using trusted APIs or cloud-hosted models.
  1. Red-team your LLM tools using subtle, natural-sounding prompts—not just obvious attack phrases.
  1. Monitor and restrict outbound links from AI-generated content, and validate any use of CSP-approved domains like Microsoft Teams.

Resources

#EchoLeak #Cybersecurity #Cyberaware #CISO #Microsoft #Microsoft365 #Copilot #AI #GenAI #AIsecurity #RiskManagement

  continue reading

25 episodes

Artwork
iconShare
 
Manage episode 490553458 series 3625301
Content provided by Chatcyberside. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Chatcyberside or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.

Can your AI assistant become a silent data leak? In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down EchoLeak, a zero-click exploit in Microsoft 365 Copilot that shows how attackers can manipulate AI systems using nothing more than an email. No clicks. No downloads. Just a cleverly crafted message that turns your AI into an unintentional insider threat.

They also share a real-world discovery from LMG Security’s pen testing team: how prompt injection was used to extract system prompts and override behavior in a live web application. With examples ranging from corporate chatbots to real-world misfires at Samsung and Chevrolet, this episode unpacks what happens when AI is left untested—and why your security strategy must adapt.

Key Takeaways

  1. Limit and review the data sources your LLM can access—ensure it doesn’t blindly ingest untrusted content like inbound email, shared docs, or web links.
  1. Audit AI integrations for prompt injection risks—treat language inputs like code and include them in standard threat models.
  1. Add prompt injection testing to every web app and email flow assessment, even if you’re using trusted APIs or cloud-hosted models.
  1. Red-team your LLM tools using subtle, natural-sounding prompts—not just obvious attack phrases.
  1. Monitor and restrict outbound links from AI-generated content, and validate any use of CSP-approved domains like Microsoft Teams.

Resources

#EchoLeak #Cybersecurity #Cyberaware #CISO #Microsoft #Microsoft365 #Copilot #AI #GenAI #AIsecurity #RiskManagement

  continue reading

25 episodes

All episodes

×
 
Loading …

Welcome to Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Copyright 2025 | Privacy Policy | Terms of Service | | Copyright
Listen to this show while you explore
Play