How to Trust AI on the Battlefield

From the Crows' Nest

In this episode of From the Crows’ Nest, host Ken Miller unpacks one of the key challenges of using artificial intelligence and machine learning (AI/ML) in combat: How can human operators trust AI in a live, complex military operation?

Jeff Druce, Senior Scientist, Human-Centered AI at Charles River Analytics, is at the heart of trying to answer this question. Jeff says that neural networks are inherently opaque: a system can perform millions of computations in seconds while leaving the user in the dark about how it arrived at a particular recommendation or action. He tells Ken that the RELAX (Reinforcement Learning with Adaptive Explainability) research effort at Charles River Analytics aims to give AI systems ways to explain their decision making to human operators.

Jeff says that efforts to improve the transparency and trustworthiness of these AI tools are critical, arguing that the bottleneck for AI use may soon come not from the technology plateauing but from operators who are unprepared and ill-equipped to use it effectively.

To learn more about today’s topics or to stay updated on EMSO and EW developments, visit our homepage.
