
Andrew Clearwater is a Partner in Dentons' Privacy and Cybersecurity practice and a recognized authority in privacy and AI governance. Formerly a founding leader at OneTrust, he oversaw privacy and AI initiatives, contributed to key data protection standards, and holds more than 20 patents. Andrew advises businesses on responsible technology implementation, helping them navigate global regulations in AI, data privacy, and cybersecurity. A frequent speaker, he offers insight into emerging compliance challenges and ethical technology use.

In this episode…

Many companies are diving into AI without first putting governance in place. They often move forward without defined goals, leadership, or alignment across privacy, security, and legal teams. This leads to confusion about how AI is being used, what risks it creates, and how to manage those risks. Without coordination and structure, programs lose momentum, transactions are delayed, and expectations become harder to meet. So how can companies build a responsible AI governance program?

Building an effective AI governance program starts with knowing what AI is in use, why it's in use, what data those tools and systems collect, what risks they create, and how to manage those risks. Standards like ISO 42001 and the NIST AI Risk Management Framework give companies a structure for this process. ISO 42001 offers the benefit of certification and supports cross-functional consistency, while the NIST framework may be a better fit for organizations already using NIST frameworks in related areas. Both help companies define the scope of their AI use cases, understand the risks, and inform policies before jumping into controls. Conducting data inventories and leveraging existing risk management processes are also essential for identifying shadow AI introduced by employees or third-party vendors.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Andrew Clearwater, Partner at Dentons, about how companies can build responsible AI governance programs. Andrew explains how standards and legal frameworks support consistent AI governance implementation and how to encourage alignment among privacy, security, legal, and ethics teams. He also discusses why monitoring shadow AI across third-party vendors matters and outlines practical steps companies can take to structure their AI governance programs effectively.
