As companies rush to implement AI and automated decision-making tools, they may be walking into a legal minefield. On this episode of Today in Tech, host Keith Shaw speaks with attorney Rob Taylor from Carstens, Allen & Gourley about the growing legal risks tied to agentic AI, automated hiring, and the rise of automated decision-making (ADM) regulations.

Rob breaks down:

* Why AI tools used in hiring and insurance may trigger liability

* How companies are getting ADM compliance wrong

* What laws already apply even without new AI regulations

* Real-world examples like credit scoring, job screening, and sentiment analysis

* Why disclosure, explainability, and data retention are essential

* Who’s liable: the company or the AI developer?

Chapters

00:00 Legal risks in AI and ADM

01:00 Common mistakes companies make

06:00 High-risk use cases: hiring, credit, insurance

10:00 Disclosure and consent pitfalls

15:00 Explainability and record-keeping laws

20:00 Unintentional bias in hiring algorithms

28:00 Who is liable: developer or deployer?

34:00 What future lawsuits might target

37:00 Fixing flawed AI governance

41:00 Litigation as the great teacher
