
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM
👉 Full Show Notes
https://www.microsoftinnovationpodcast.com/768

Agentic AI is transforming enterprise technology by moving beyond content generation to autonomous action. In this episode of the Copilot Show, Mehrnoosh Sameki explores the risks, guardrails, and governance frameworks needed to deploy AI agents safely and effectively.
🎙️ What you’ll learn

  • How agentic AI differs from generative AI and why it matters
  • Key risks: task misalignment, prohibited actions, sensitive data leakage
  • Practical guardrails and evaluation strategies for AI agents
  • How to manage agent sprawl with Microsoft Foundry Control Plane
  • Why red teaming and observability are critical for AI safety

Highlights

  • “Everything that I hear at work is about agentic AI.”
  • “Agents don’t just output text or image. They take actions.”
  • “Task alignment and staying on task is a huge one.”
  • “Sensitive data leakage is more and more important.”
  • “Bad actors could overwrite those information with different techniques.”
  • “If you don’t know how many agents are out there, huge safety risk.”
  • “We released something called Foundry Control Plane.”
  • “Each agent gets a unique identity to suspend, quarantine, or stop.”
  • “You can set org-wide policies against your agents.”
  • “Red teaming is huge for identifying the risks.”
  • “Our AI red teaming agent gives you a scorecard of vulnerabilities.”

✅ Keywords
agentic ai, generative ai, responsible ai, guardrails, observability, task misalignment, sensitive data leakage, agent hijacking, foundry control plane, entra, red teaming, ai governance

Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact so that Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption

Support the show

If you want to get in touch with me, you can message me here on LinkedIn.
Thanks for listening 🚀 - Mark Smith


Chapters

1. Agentic AI: Why It’s the Next Big Shift in Tech (00:00:00)

2. The Shift to Agentic AI (00:04:36)

3. New Safety Challenges (00:06:17)

4. Top Risks for Enterprises (00:08:51)

5. Managing Agent Sprawl (00:12:18)

6. Building Robust Guardrails (00:15:51)

7. Red Teaming for AI Systems (00:20:14)

8. Humans + AI: Augmentation, Not Replacement (00:24:33)
