Content provided by Antonio Santos, Debra Ruh, and Neil Milliken.

Lia Raquel Neves, founder of EITIC Consulting, offers a thought-provoking exploration of the ethical dimensions of artificial intelligence and its profound implications for accessibility and inclusion. Drawing on her background in philosophy and bioethics, Lia challenges the common assumption that technology is neutral, arguing instead that our creations inherently reflect our values, biases, and blind spots.
The conversation delves into crucial gaps between AI regulations and accessibility requirements. Lia points out that the European AI Act doesn't explicitly define disability as a risk factor, meaning systems that significantly impact disabled users might not be classified as high-risk. "This is not just a legal oversight," she explains, "it's an ethical failure." Without structural requirements prioritizing accessibility, technologies from virtual assistants to facial recognition systems continue to exclude people with disabilities.
When discussing data ethics, Lia confronts the uncomfortable reality of historical bias. Training AI on decades-old data inevitably reproduces historical patterns of discrimination and inequality. While diversity in datasets helps, Lia emphasizes it's insufficient alone: "We must actively detect offensive or discriminatory language and prevent models from amplifying harmful content." She advocates for continuous human oversight, transparency, and creating mechanisms for people to challenge automated outcomes.
Perhaps most powerful is Lia's reflection on representation: "Digital accessibility is still seen as a technical requirement when it is, in fact, a matter of social justice." She notes how the invisibility of people with disabilities in media, business, and technology perpetuates exclusion, creating a cycle where decision-makers don't prioritize what they rarely encounter. True inclusion means asking who's missing from the data, who's excluded by design, and who's absent when systems are being developed.
Ready to dive deeper into creating ethical, inclusive technology? Connect with Lia on LinkedIn and join the conversation about building technology that truly serves everyone.

Support the show

Follow axschat on social media.
Bluesky:
Antonio https://bsky.app/profile/akwyz.com

Debra https://bsky.app/profile/debraruh.bsky.social

Neil https://bsky.app/profile/neilmilliken.bsky.social

axschat https://bsky.app/profile/axschat.bsky.social

LinkedIn
https://www.linkedin.com/in/antoniovieirasantos/
https://www.linkedin.com/company/axschat/
Vimeo
https://vimeo.com/akwyz
X (Twitter)
https://twitter.com/axschat
https://twitter.com/AkwyZ
https://twitter.com/neilmilliken
https://twitter.com/debraruh


Chapters

1. Introduction to Lia and EITIC Consulting (00:00:00)

2. Lia's Journey into AI Ethics (00:01:41)

3. Technology is Not Neutral (00:04:15)

4. Bias and Progress in AI Ethics (00:08:50)

5. Historical Data Problems and Solutions (00:13:18)

6. Human Oversight and Ethical AI (00:18:25)

7. Visibility and Representation in Technology (00:21:13)

8. Data, Inclusion and Final Thoughts (00:25:10)
