
Kevin Werbach speaks with Trey Causey about the precarious state of the responsible AI (RAI) field. Causey argues that while the mission is critical, the current organizational structures for many RAI teams are struggling. He highlights a fundamental conflict between business objectives and governance intentions, compounded by the fact that RAI teams' successes (preventing harm) are often invisible, while their failures are highly visible.

Causey makes the case that for RAI teams to be effective, they must possess deep technical competence to build solutions and gain credibility with engineering teams. He also explores the idea of "epistemic overreach," where RAI groups have been tasked with an impossibly broad mandate they lack the product-market fit to fulfill. Drawing on his experience in the highly regulated employment sector at Indeed, he details the rigorous, science-based approach his team took to defining and measuring bias, emphasizing the need to move beyond simple heuristics and partner with legal and product teams before analysis even begins.

Trey Causey is a data scientist who most recently served as the Head of Responsible AI for Indeed. His background is in computational sociology, where he used natural language processing to answer social questions.

Transcript

Responsible AI Is Dying. Long Live Responsible AI
