How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.
Special Guest: Sina Fazelpour.
Links:
- Sina Fazelpour's Website
- AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)
- Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)
- Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)
- Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)
- Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)
- Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)
- Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)
- Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf & Kristian Kersting (2022)
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)
- In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)