
Defining AI safety

The Turing Podcast


Ed and David chat with Professor Ibrahim Habli, Research Director of the Centre for Assuring Autonomy at the University of York and Director of the UKRI Centre for Doctoral Training in Safe AI Systems. The conversation covers how to define and contextualise AI safety and risk, given the established safety practices of other industries. Ibrahim has collaborated with The Alan Turing Institute on the "Trustworthy and Ethical Assurance platform", or "TEA" for short, an open-source tool for developing and communicating structured assurance arguments that show how data science and AI technologies adhere to ethical principles.
