28. Susie Alegre on the Algorithmic Assault on Human Rights: How AI Threatens Our Core Freedoms

39:58
 
Content provided by Humanise Live. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Humanise Live or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.

AI technologies pose significant threats to fundamental human rights, reinforcing historical biases and power imbalances. This week, we are joined by Susie Alegre, international human rights lawyer and author, to explore the impact of generative AI on gender and racial equality, labour markets, and information ecosystems.

Susie has worked for international NGOs like Amnesty International and organisations including the UN, the EU and the Council of Europe.
Susie has published two books on the critical issue of technology's impact on human rights: “Freedom to Think” (2022), a Financial Times Technology Book of the Year 2022 and shortlisted for the Royal Society of Literature Christopher Bland Prize 2023, and “Human Rights, Robot Wrongs: Being Human in the Age of AI” (2024).
The episode covers:

  • How AI systems, like ChatGPT, perpetuate gender and racial biases
  • The "Pygmalion" pattern in AI design
  • Potential long-term effects on skills, education and social interactions
  • The rise of "ultra-processed information" and its consequences for the internet
  • Legal risks and the role of effective regulation
  • The role of enforcement in addressing AI's human rights risks
  • When AI applications may be valuable—and when they are not

📅 To learn more about Susie Alegre’s work, visit:
🔗 susiealegre.com
🔗 alegre.ai
🔗 CIGI Profile

📩 [email protected]
📷 @susiealegre
💼 LinkedIn: Susie Alegre

Send us a text

Support the show

Support us on Patreon
Advertising opportunities

Click here to submit questions, nominate guests & topics.

Follow Humanism Now @HumanismNowPod

Humanism Now is produced by Humanise Live

Contact us at [email protected]

Chapters

1. 28. Susie Alegre on the Algorithmic Assault on Human Rights: How AI Threatens Our Core Freedoms (00:00:00)

2. When ChatGPT wrote me out of my own book (00:01:35)

3. The Pygmalion effect: AI's gender problem (00:05:10)

4. Can AI be de-biased or made more inclusive? (00:06:45)

5. Automating the future of work: Productivity myth (00:11:25)

6. Impact of automation & mechanisation in care jobs (00:18:30)

7. Dangers for skills and professional qualifications (00:20:12)

8. Will we reach an AI content fatigue tipping point? (00:23:06)

9. Legal liability and regulation (00:27:25)

10. Balancing risks and rewards of transformative technologies (00:32:12)

11. Finding genuine positives in AI (00:34:35)

12. Changing minds (00:38:32)
