Audio versions of essays by Joe Carlsmith. Philosophy, futurism, and other topics. Text versions at joecarlsmith.com.
Joe Carlsmith Podcasts
What should be the trajectory of intelligence beyond humanity? The Trajectory covers realpolitik on artificial general intelligence and the posthuman transition - by asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.
Hear This Idea is a podcast showcasing new thinking in philosophy, the social sciences, and effective altruism. Each episode has an accompanying write-up at www.hearthisidea.com/episodes.
Joe Carlsmith - A Wiser, AI-Powered Civilization is the “Successor” (Worthy Successor, Episode 15)
1:52:41
This new installment of the Worthy Successor series is an interview with Joe Carlsmith, a senior advisor at Open Philanthropy, whose work spans AI alignment, moral uncertainty, and the philosophical foundations of value. In this conversation, Joe joins us in his personal capacity, not representing any brand or company, and offering his own thoughtf…
How human-like do safe AI motivations need to be?
1:23:32
AIs with alien motivations can still follow instructions safely on the inputs that matter. Text version here: https://joecarlsmith.com/2025/11/12/how-human-like-do-safe-ai-motivations-need-to-be/
By Joe Carlsmith
Leaving Open Philanthropy, going to Anthropic
32:09
On a career move, and on AI-safety-focused people working at AI companies. Text version here: https://joecarlsmith.com/2025/11/03/leaving-open-philanthropy-going-to-anthropic/
By Joe Carlsmith
#84 – Dean Spears on the Case for People
1:43:24
Dean Spears is an Economic Demographer, Development Economist, and Associate Professor of Economics at the University of Texas at Austin. With Michael Geruso, Dean is the co-author of After the Spike: Population, Progress, and the Case for People. You can see a full transcript and a list of resources on the episode page on our website. We're bac…
Blaise Agüera y Arcas - AGI Symbiosis and the Arrow of Intelligence (Worthy Successor, Episode 14)
1:23:58
This new installment of the Worthy Successor series is an interview with Blaise Agüera y Arcas, Vice President and Fellow at Google, and CTO of Technology & Society. In this conversation, Blaise talks about how life and intelligence seem to move in a single direction - toward greater complexity and deeper interdependence. Blaise describes this move…
Brad Carson - AGI Competition with Civility and Understanding (US-China AGI Relations, Episode 5)
1:04:25
This is an interview with Brad Carson, who served as a U.S. Congressman and as Under Secretary of the Army. Later, he served as the Acting Under Secretary of Defense for Personnel & Readiness, and now serves as President of Americans for Responsible Innovation (ARI). You might expect someone with deep roots in national security to see AGI through a…
Irakli Beridze - Can the UN Help with Global AGI Governance? (AGI Governance, Episode 11)
52:52
Joining us in the eleventh episode of our AGI Governance series on The Trajectory is Irakli Beridze, Director of the UNICRI Centre for Artificial Intelligence and Robotics under the United Nations mandate. In this conversation, Irakli draws a stark contrast between yesterday’s arms-control templates and tomorrow’s AI. Chemical weapons were narrow, …
Dean Xue Lan - A Multi-Pronged Approach to Pre-AGI Coordination (AGI Governance, Episode 10)
37:01
Joining us in the tenth episode of our AGI Governance series on The Trajectory is Dean Xue Lan, longtime scholar of public policy and global governance, whose recent work centers on AI safety and international coordination. In this episode, Xue stresses that AGI governance must evolve as an adaptive network. The UN can set frameworks among nations,…
On boxing AIs, and on making deals with them. Text version here: https://joecarlsmith.com/2025/09/29/controlling-the-options-ais-can-pursue
By Joe Carlsmith
RAND’s Joel Predd - Competitive and Cooperative Dynamics of AGI (US-China AGI Relations, Episode 4)
1:09:41
This is an interview with Joel Predd, a senior engineer at the RAND Corporation and co-author of RAND’s work on “five hard national security problems from AGI.” In this conversation, Joel lays out a sober frame for leaders: treat AGI as technically credible but deeply uncertain; assume it will be transformational if it arrives; and recognize that …
Drew Cukor - AI Adoption as a National Security Priority (US-China AGI Relations, Episode 3)
51:15
USMC Colonel Drew Cukor spent 25 years in uniform and helped spearhead early Department of Defense AI efforts, eventually leading projects including the Pentagon’s Project Maven. After government service, he’s led AI initiatives in the private sector, first with JP Morgan and now with TWG Global. Drew argues that when it comes to the US-C…
Stuart Russell - Avoiding the Cliff of Uncontrollable AI (AGI Governance, Episode 9)
1:04:32
Joining us in the ninth episode of our AGI Governance series on The Trajectory is Stuart Russell, Professor of Computer Science at UC Berkeley and author of Human Compatible. In this episode, Stuart explores why current AI race dynamics resemble a prisoner’s dilemma, why governments must establish enforceable red lines, and how international coordi…
Craig Mundie - Co-Evolution with AI: Industry First, Regulators Later (AGI Governance, Episode 8)
36:45
Joining us in the eighth episode of our AGI Governance series on The Trajectory is Craig Mundie, former Chief Research and Strategy Officer at Microsoft and longtime advisor on the evolution of digital infrastructure, AI, and national security. In this episode, Craig and I explore how bottom-up governance could emerge from commercial pressures and …
Jeremie and Edouard Harris - What Makes US-China Alignment Around AGI So Hard (US-China AGI Relations, Episode 2)
1:33:06
This is an interview with Jeremie and Edouard Harris, Canadian researchers with backgrounds in AI governance and national security consulting, and co-founders of Gladstone AI. In this episode, Jeremie and Edouard explain why trusting China on AGI is dangerous, highlight ongoing espionage in Western labs, explore verification tools like tamper-proof…
Ed Boyden - Neurobiology as a Bridge to a Worthy Successor (Worthy Successor, Episode 13)
1:19:27
This new installment of the Worthy Successor series features Ed Boyden, an American neuroscientist and entrepreneur at MIT, widely known for his work on optogenetics and brain simulation - his breakthroughs have helped shape the frontier of neurotechnology. In this episode, we explore Ed’s vision for what kinds of posthuman intelligences deserve to…
A four-step picture. Text version here: https://joecarlsmith.com/2025/08/18/giving-ais-safe-motivations
By Joe Carlsmith
Roman Yampolskiy - The Blacker the Box, the Bigger the Risk (Early Experience of AGI, Episode 3)
1:28:56
This is an interview with Roman V. Yampolskiy, a computer scientist at the University of Louisville and a leading voice in AI safety. Everyone has heard Roman's p(doom) arguments; that isn't the focus of our interview. We instead talk about Roman's "untestability" hypothesis, and the fact that there may be untold, human-incomprehensible powers alrea…
Toby Ord - Crucial Updates on the Evolving AGI Risk Landscape (AGI Governance, Episode 7)
1:24:49
Joining us in the seventh episode of our AGI Governance series on The Trajectory is Toby Ord, Senior Researcher at Oxford University’s AI Governance Initiative and author of The Precipice: Existential Risk and the Future of Humanity. Toby is one of the world’s most influential thinkers on long-term risk - and one of the clearest voices on how advan…
Martin Rees - If They’re Conscious, We Should Step Aside (Worthy Successor, Episode 12)
1:17:06
This new installment of the Worthy Successor series is an interview with the brilliant Martin Rees - British cosmologist, astrophysicist, and 60th President of the Royal Society. In this interview we explore his belief that humanity is just a stepping stone between Darwinian life and a new form of intelligent design - not divinely ordained, but con…
Emmett Shear - AGI as "Another Kind of Cell" in the Tissue of Life (Worthy Successor, Episode 11)
1:30:42
This is an interview with Emmett Shear - CEO of SoftMax, co-founder of Twitch, former interim CEO of OpenAI, and one of the few public-facing tech leaders who seems to take both AGI development and AGI alignment seriously. In this episode, we explore Emmett’s vision of AGI as a kind of living system, not unlike a new kind of cell, joining the tissu…
Joshua Clymer - Where Human Civilization Might Crumble First (Early Experience of AGI - Episode 2)
1:51:37
This is an interview with Joshua Clymer, AI safety researcher at Redwood Research, and former researcher at METR. Joshua has spent years focused on institutional readiness for AGI, especially the kinds of governance bottlenecks that could become breaking points. His thinking is less about far-off futures and more about near-term institutional failu…
Peter Singer - Optimizing the Future for Joy, and the Exploration of the Good [Worthy Successor, Episode 10]
1:25:55
This is an interview with Peter Singer, one of the most influential moral philosophers of our time. Singer is best known for his groundbreaking work on animal rights, global poverty, and utilitarian ethics, and his ideas have shaped countless conversations about the moral obligations of individuals, governments, and societies. This interview is our…
David Duvenaud - What are Humans Even Good For in Five Years? [Early Experience of AGI - Episode 1]
1:55:59
This is an interview with David Duvenaud, Assistant Professor at the University of Toronto, co-author of the Gradual Disempowerment paper, and former researcher at Anthropic. This is the first episode in our new “Early Experience of AGI” series - where we explore the early impacts of AGI on our work and personal lives. This episode referred to the foll…
Kristian Rönn - A Blissful Successor Beyond Darwinian Life [Worthy Successor, Episode 9]
1:47:40
This is an interview with Kristian Rönn, author, successful startup founder, and now CEO of Lucid, an AI hardware governance startup based in SF. This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. This episode referred to the f…
On seeing and not seeing souls. Text version here: https://joecarlsmith.com/2025/05/21/the-stakes-of-ai-moral-status/
By Joe Carlsmith
Jack Shanahan - Avoiding an AI Race While Keeping America Strong [US-China AGI Relations, Episode 1]
1:41:56
This is an interview with Jack Shanahan, a three-star General and former Director of the Joint AI Center (JAIC) within the US Department of Defense. This is the first installment of our "US-China AGI Relations" series - where we explore pathways to achieving international AGI cooperation while avoiding conflicts and arms races. This episode referred t…
Can we safely automate alignment research?
1:29:38
It's really important; we've got a real shot; there are a ton of ways to fail. Text version here: https://joecarlsmith.com/2025/04/30/can-we-safely-automate-alignment-research/. There's also a video and transcript of a talk I gave on this topic here: https://joecarlsmith.com/2025/04/30/video-and-transcript-of-talk-on-automating-alignment-research/…
Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]
1:46:15
This is an interview with Richard Ngo, AGI researcher and thinker - with extensive stints at both OpenAI and DeepMind. This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. This episode referred to the following other essays and re…
Yi Zeng - Exploring 'Virtue' and Goodness Through Posthuman Minds [AI Safety Connect, Episode 2]
1:14:19
This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing Institute for AI Safety and Governance (among many other accolades). Over a year ago when I asked Jaan Tallinn "who within the UN advisory group on AI has good ideas about AGI and …
Max Tegmark - The Lynchpin Factors to Achieving AGI Governance [AI Safety Connect, Episode 1]
26:06
This is an interview with Max Tegmark, MIT professor, Founder of the Future of Life Institute, and author of Life 3.0. This interview was recorded on-site at AI Safety Connect 2025, a side event of the AI Action Summit in Paris. See the full article from this episode: https://danfaggella.com/tegmark1 Listen to the full podcast episode: https:…
Michael Levin - Unfolding New Paradigms of Posthuman Intelligence [Worthy Successor, Episode 7]
1:16:35
This is an interview with Dr. Michael Levin, a pioneering developmental biologist at Tufts University. This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. Listen to this episode on The Trajectory Youtube Channel: https://www.yout…
We should try extremely hard to use AI labor to help address the alignment problem. Text version here: https://joecarlsmith.com/2025/03/14/ai-for-ai-safety
By Joe Carlsmith
#83 – Max Smeets on Barriers To Cyberweapons
1:36:19
Max Smeets is a Senior Researcher at ETH Zurich's Center for Security Studies and Co-Director of Virtual Routes. You can find links and a transcript at www.hearthisidea.com/episodes/smeets. In this episode we talk about: the different types of cyber operations that a nation state might launch; how international norms formed around what kind of cyber a…
On the structure of the path to safe superintelligence, and some possible milestones along the way. Text version here: https://joecarlsmith.substack.com/p/paths-and-waystations-in-ai-safety
By Joe Carlsmith
When should we worry about AI power-seeking?
46:54
Examining the conditions required for rogue AI behavior. Text version here: https://joecarlsmith.substack.com/p/when-should-we-worry-about-ai-power
By Joe Carlsmith
What is it to solve the alignment problem?
40:13
Also: to avoid it? Handle it? Solve it forever? Solve it completely? Text version here: https://joecarlsmith.substack.com/p/what-is-it-to-solve-the-alignment
By Joe Carlsmith
Introduction to a series of essays about paths to safe and useful superintelligence. Text version here: https://joecarlsmith.substack.com/p/how-do-we-solve-the-alignment-problem
By Joe Carlsmith
When the line pulls at your hand. Text version here: https://joecarlsmith.com/2025/01/28/fake-thinking-and-real-thinking/
By Joe Carlsmith
Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]
1:14:45
This is an interview with Eliezer Yudkowsky, AI Researcher at the Machine Intelligence Research Institute. This is the sixth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence. Watch this episode on The Trajectory Youtube Channel: ht…
Connor Leahy - Slamming the Brakes on the AGI Arms Race [AGI Governance, Episode 5]
1:45:12
This is an interview with Connor Leahy, the Founder and CEO of Conjecture. This is the fifth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence. Watch this episode on The Trajectory Youtube Channel: https://youtu.be/1j--6JYRLVk See t…
Andrea Miotti - A Human-First AI Future [AGI Governance, Episode 4]
1:41:09
This is an interview with Andrea Miotti, the Founder and Executive Director of ControlAI. This is the fourth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence. Watch this episode on The Trajectory Youtube Channel: https://youtu.be/L…
Takes on "Alignment Faking in Large Language Models"
1:27:54
What can we learn from recent empirical demonstrations of scheming in frontier models? Text version here: https://joecarlsmith.com/2024/12/18/takes-on-alignment-faking-in-large-language-models/
By Joe Carlsmith
#82 – Tom Kalil on Institutions for Innovation (with Matt Clancy)
1:17:37
Tom Kalil is the CEO of Renaissance Philanthropy. He also served in the White House under two presidents (Obama and Clinton), where he helped establish incentive prizes in government through challenge.gov, in addition to dozens of science and tech programs. More recently Tom served as the Chief Innovation Officer at Schmidt Futures, where he hel…
Stephen Ibaraki - The Beginning of AGI Global Coordination [AGI Governance, Episode 3]
50:25
This is an interview with Stephen Ibaraki, the Founder of the ITU's (part of the United Nations) AI for Good initiative, and Chairman of REDDS Capital. This is the third installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence. This episode r…
Mike Brown - AI Cooperation and Competition Between the US and China [AGI Governance, Episode 2]
1:09:51
This is an interview with Mike Brown, Partner at Shield Capital and the Former Director of the Defense Innovation Unit at the U.S. Department of Defense. This is the second installment of our "AGI Governance" series - where we explore how important AGI governance is, what it should achieve, and how it should be implemented. Watch this episode on Th…
#81 – Cynthia Schuck on Quantifying Animal Welfare
1:37:16
Dr Cynthia Schuck-Paim is the Scientific Director of the Welfare Footprint Project, a scientific effort to quantify animal welfare to inform practice, policy, investing and purchasing decisions. You can find links and a transcript at www.hearthisidea.com/episodes/schuck. We discuss: How to begin thinking about quantifying animal experiences in a cr…
Sébastien Krier - Keeping a Pulse on AGI's Takeoff [AGI Governance, Episode 1]
1:33:40
This is an interview with Sebastien Krier, who works in Policy Development and Strategy at Google DeepMind. This is the first installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence. Watch this episode on The Trajectory YouTube channel: https://youtu.be/SKl7…
#80 – Dan Williams on How Persuasion Works
1:48:43
Dan Williams is a Lecturer in Philosophy at the University of Sussex and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge. You can find links and a transcript at www.hearthisidea.com/episodes/williams. We discuss: If reasoning is so useful, why are we so bad at it? Do some bad ideas re…
Joscha Bach - Building an AGI to Play the Longest Games [Worthy Successor, Episode 6]
2:01:04
This is an interview with Joscha Bach, cognitive scientist, AI researcher, and AI Strategist at Liquid AI. This is the sixth installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. Watch this episode on The Trajectory YouTube channel: https://yo…
Jeff Hawkins - Building a Knowledge-Preserving AGI to Live Beyond Us (Worthy Successor, Episode 5)
1:14:18
This is an interview with Jeff Hawkins, Founder of Numenta and author of “A Thousand Brains.” This is the fifth installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. Watch this episode on The Trajectory YouTube channel: https://youtu.be/pfqsbT0cW0o This…