This new installment of the Worthy Successor series is an interview with Joe Carlsmith, a senior advisor at Open Philanthropy whose work spans AI alignment, moral uncertainty, and the philosophical foundations of value. Joe joins us in his personal capacity, not representing any organization, and offers his own thoughtful perspectives.
In this episode, we explore Joe's reflections on what a "worthy successor" might really mean: not a single entity, but a civilization guided toward greater wisdom. His framing is rare among alignment thinkers: he treats AI not as a replacement for humanity, but as a set of tools that might help us become clearer thinkers, better philosophers, and truer to what is actually good.
The interview is our fifteenth installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
This episode refers to the following essays:
-- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy/
-- Our Final Imperatives - Humanity's Last Goals at the Dawn of AI: https://danfaggella.com/imperatives
Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/bsTwkg6eYCs
See the full article from this episode: https://danfaggella.com/carlmsith1
...
There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai