Principles, agents, and the chain of accountability in AI systems

Duration: 46:26
 
Dr. Michael Zargham provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between the principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often misunderstood concept of agents in AI by grounding it in established engineering principles rather than treating agents as mysterious or elusive entities.
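
To make the principal-agent framing concrete, here is a minimal Python sketch (our illustration, not code from the episode; the class names and capabilities are hypothetical). The agent can act only under authority its principal has explicitly delegated, and every action is logged against the accountable principal, so authority and accountability stay aligned:

```python
from dataclasses import dataclass, field

@dataclass
class Principal:
    """The party who deploys the agent and owns its outcomes."""
    name: str
    delegated_capabilities: set[str]

@dataclass
class Agent:
    """Acts only within the authority its principal has delegated."""
    principal: Principal
    audit_log: list[str] = field(default_factory=list)

    def act(self, capability: str, payload: str) -> str:
        # Refuse structurally: an action nobody delegated cannot occur.
        if capability not in self.principal.delegated_capabilities:
            raise PermissionError(
                f"{capability!r} was never delegated by {self.principal.name}"
            )
        # Every action is attributable to the accountable principal.
        self.audit_log.append(f"{self.principal.name} -> {capability}: {payload}")
        return f"executed {capability}"

ops = Principal("ops-team", {"send_report"})
agent = Agent(ops)
agent.act("send_report", "weekly metrics")  # allowed and logged
# agent.act("transfer_funds", "...")        # raises PermissionError
```

The point of raising PermissionError is that the chain of accountability is enforced by construction rather than by convention: an action with no delegating principal simply cannot happen.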

Show highlights
• Agents should be understood through the lens of the principal-agent relationship, with clear lines of accountability
• True validation of AI systems means ensuring outcomes match intentions, not just optimizing loss functions
• LLMs by themselves are "high-dimensional word calculators," not agents; agents are more complex systems in which LLMs are components
• Guardrails provide deterministic constraints ("musts" or "shalls"), in contrast to constitutional AI's softer guidance ("shoulds"); see the first sketch after this list
• Systems engineering approaches from civil engineering and materials science offer valuable frameworks for AI development
• Authority and accountability must align: people shouldn't be held responsible for systems they lack the authority to control
• The shift from static input-output mappings to closed-loop dynamical systems marks the move toward truly agentic behavior, as in the second sketch after this list
• Robust agent systems require both an exploration phase (lab work) and an exploitation phase (hardened deployment), each held to different standards
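
The guardrails-versus-constitution distinction can be illustrated with a short Python sketch (again ours, with illustrative rules, not the episode's code): hard "shall" constraints are deterministic checks enforced outside the model, while constitutional-style "shoulds" are soft guidance folded into the prompt:

```python
from typing import Callable

# Hard constraints ("musts"/"shalls"): deterministic checks applied outside
# the model, so they hold no matter what the LLM emits. These specific
# rules are assumptions for illustration.
GUARDRAILS: list[Callable[[str], bool]] = [
    lambda text: "ssn" not in text.lower(),  # must not mention raw PII fields
    lambda text: len(text) <= 2_000,         # output shall fit the channel
]

# Constitutional-style guidance ("shoulds"): soft steering folded into the
# prompt; the model is asked, not forced, to comply.
CONSTITUTION = "You should be concise and should avoid speculative claims."

def generate(prompt: str, llm: Callable[[str], str]) -> str:
    """Call an LLM, then deterministically enforce the hard constraints."""
    draft = llm(f"{CONSTITUTION}\n\n{prompt}")
    if all(check(draft) for check in GUARDRAILS):
        return draft
    # A "must" failed: refuse deterministically rather than hoping the
    # model self-corrects.
    return "[response withheld: guardrail violation]"

# Usage with a stand-in model:
print(generate("Summarize the meeting.", llm=lambda p: "Short summary."))
```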
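
And the static-versus-closed-loop distinction, sketched under the assumption of a toy scalar plant with a PI-style controller (our example, not the episode's): a static system maps input to output once, whereas a closed-loop system feeds its actions back into what it observes next and carries internal state across steps:

```python
import random

def static_map(observation: float) -> float:
    """Static input-output system: one pass, no state, no feedback."""
    return 0.5 * observation  # a fixed function of the input, nothing more

def closed_loop(setpoint: float, steps: int = 50) -> float:
    """Closed-loop dynamical system: each action changes the world the
    agent observes next, and internal state persists across steps."""
    state, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - state          # compare outcome to intention
        integral += error                 # internal state: memory of errors
        action = 0.4 * error + 0.05 * integral
        state += action + random.gauss(0.0, 0.01)  # plant dynamics + noise
    return state

print(static_map(1.0))   # always 0.5, regardless of history
print(closed_loop(1.0))  # settles near the setpoint despite noise
```

The closed-loop version is what makes behavior "agentic" in the dynamical-systems sense: outcomes are continually compared to intentions (the setpoint) and corrected over time.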

Explore Dr. Zargham's work

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Chapters

1. Introduction to Dr. Michael Zargham (00:00:00)

2. Defining agents and principals (00:01:20)

3. Robotics example: Arctic Rover (00:03:45)

4. LLMs vs agents: Key distinctions (00:07:40)

5. Systems engineering perspective (00:13:20)

6. Constraints and guardrails (00:21:40)

7. Engineering with uncertainty (00:31:44)

8. Accountability and responsibility (00:37:42)

9. Final thoughts and conclusion (00:44:00)
