Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52.
In this episode, we talk about:
- The basic case for working on existential risk from AI
- How to begin figuring out what to do to reduce the risks
- Threat models for the risks of advanced AI
- 'Theories of victory' for how the world mitigates the risks
- 'Intermediate goals' in AI governance
- What useful (and less useful) research looks like for reducing AI x-risk
- Practical advice for usefully contributing to efforts to reduce existential risk from AI
- Resources for getting started and finding job openings
Key links:
- Apply to be a Compute Governance Researcher or Research Assistant at Rethink Priorities (applications open until June 12, 2023)
- Rethink Priorities' survey on intermediate goals in AI governance
- The Rethink Priorities newsletter
- The Rethink Priorities tab on the Effective Altruism Forum
- Some AI Governance Research Ideas compiled by Markus Anderljung & Alexis Carlier
- Strategic Perspectives on Long-term AI Governance by Matthijs Maas
- Michael's posts on the Effective Altruism Forum (under the username "MichaelA")
- The 80,000 Hours job board