
What if your robot vacuum accidentally leaked naked photos of you onto Facebook—and that was just the tip of the iceberg for how your data trains AI?

In this episode of The People’s AI, presented by Vana, we kick off Season 3 with a deep-dive primer on the real stakes of AI and data: in our homes, in our work, and across society. We start with a jaw-dropping story from MIT Technology Review senior reporter Eileen Guo, who uncovered how images from “smart” robot vacuums—including a woman on a toilet—ended up in a Facebook group for overseas gig workers labeling training data.

From there, we zoom out: what did this investigation reveal about how AI systems are actually trained, who’s doing the invisible labor of data labeling, and how consent quietly gets stretched (or broken) along the way? We hear from Professor Alan Rubel about how seemingly mundane data—from smart devices to license-plate readers—feeds powerful surveillance infrastructures and tests the limits of long-standing privacy protections.

Then we move into the workplace. Partners Jennifer Maisel and Steven Lieberman of Rothwell Figg walk us through the New York Times’ landmark lawsuit against OpenAI and Microsoft, and why they see it as a fight over whether copyrighted work—and the broader creative economy—can simply be ingested as free raw material for AI. We explore what this means not just for journalists, but for anyone whose job involves producing text, images, music, or other digital output.

Finally, we widen the lens with Michael Casey, chairman of the Advanced AI Society, who argues that control of our data is now inseparable from individual agency itself. If a small number of AI companies own the data that defines us, what does that mean for democracy, power, and the risk of a “digital feudalism”?

We cover:

  • How a robot vacuum’s “beta testing” led to intimate photos being shared with gig workers abroad
  • Why data labeling and annotation work—often done by low-paid workers in crisis-hit regions—is a critical but opaque part of the AI supply chain
  • How consent language like “product improvement” quietly expands to include AI training
  • The New York Times’ legal theory against OpenAI and Microsoft, and what’s at stake for copyright, fair use, and the creative class
  • How AI-generated “slop” can flood the internet, dilute original work, and undercut creators’ livelihoods
  • Why everyday workplace output—emails, docs, Slack messages, meeting transcripts—may become fuel for corporate AI systems
  • The emerging risks of pervasive data capture, from license-plate tracking to always-on devices, and the pressure this puts on Fourth Amendment protections
  • Michael Casey’s argument that data control is a fundamental human right in the digital age—and what a more decentralized, user-owned future might look like

Guests

  • Eileen Guo – Senior Reporter, MIT Technology Review
  • Professor Alan Rubel – Director, Information School, University of Wisconsin
  • Jennifer Maisel – Partner, Rothwell Figg, counsel to The New York Times
  • Steven Lieberman – Partner, Rothwell Figg, lead counsel in the NYT v. OpenAI/Microsoft case
  • Michael Casey – Chairman, Advanced AI Society

The People’s AI is presented by Vana, which is supporting the creation of a new internet rooted in data sovereignty and user ownership. Vana’s mission is to build a decentralized data ecosystem where individuals—not corporations—govern their own data and share in the value it creates. Learn more at vana.org.
