Joe Carlsmith
Hear This Idea

Fin Moorhouse and Luca Righetti

Monthly
 
Hear This Idea is a podcast showcasing new thinking in philosophy, the social sciences, and effective altruism. Each episode has an accompanying write-up at www.hearthisidea.com/episodes.
 
Max Smeets is a Senior Researcher at ETH Zurich's Center for Security Studies and Co-Director of Virtual Routes. You can find links and a transcript at www.hearthisidea.com/episodes/smeets. In this episode we talk about: The different types of cyber operations that a nation state might launch; How international norms formed around what kind of cyber a…
 
Tom Kalil is the CEO of Renaissance Philanthropy. He also served in the White House under two presidents (Obama and Clinton), where he helped establish incentive prizes in government through challenge.gov, in addition to dozens of science and tech programs. More recently, Tom served as the Chief Innovation Officer at Schmidt Futures, where he hel…
 
Dr Cynthia Schuck-Paim is the Scientific Director of the Welfare Footprint Project, a scientific effort to quantify animal welfare to inform practice, policy, investing and purchasing decisions. You can find links and a transcript at www.hearthisidea.com/episodes/schuck. We discuss: How to begin thinking about quantifying animal experiences in a cr…
 
Dan Williams is a Lecturer in Philosophy at the University of Sussex and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge. You can find links and a transcript at www.hearthisidea.com/episodes/williams. We discuss: If reasoning is so useful, why are we so bad at it? Do some bad ideas re…
 
Extended audio from my conversation with Dwarkesh Patel. This part focuses on the basic story about AI takeover. Transcript available on my website here: https://joecarlsmith.com/2024/09/30/part-2-ai-takeover-extended-audio-transcript-from-my-conversation-with-dwarkesh-patel By Joe Carlsmith
 
Extended audio from my conversation with Dwarkesh Patel. This part focuses on my series "Otherness and control in the age of AGI." Transcript available on my website here: https://joecarlsmith.com/2024/09/30/part-1-otherness-extended-audio-transcript-from-my-conversation-with-dwarkesh-patel/
 
Tamay Besiroglu is a researcher working at the intersection of economics and AI. He is currently the Associate Director of Epoch AI, a research institute investigating key trends and questions that will shape the trajectory and governance of AI. You can find links and a transcript at www.hearthisidea.com/episodes/besiroglu. In this episode we talked…
 
Jacob Trefethen oversees Open Philanthropy’s science and science policy programs. He was a Henry Fellow at Harvard University, and has a B.A. from the University of Cambridge. You can find links and a transcript at www.hearthisidea.com/episodes/trefethen. In this episode we talked about the risks and benefits of open source AI models. We…
 
Elizabeth Seger is the Director of Technology Policy at Demos, a cross-party UK think tank with a program on trustworthy AI. You can find links and a transcript at www.hearthisidea.com/episodes/seger. In this episode we talked about the risks and benefits of open source AI models. We talk about: What ‘open source’ really means; What is (a…
 
Second half of the full audio for my series on how agents with different values should relate to one another, and on the ethics of seeking and sharing power. First half here: https://joecarlsmithaudio.buzzsprout.com/2034731/15266490-first-half-of-full-audio-for-otherness-and-control-in-the-age-of-agi PDF of the full series here: https://jc.gatspres…
 
First half of the full audio for my series on how agents with different values should relate to one another, and on the ethics of seeking and sharing power. Second half here: https://joecarlsmithaudio.buzzsprout.com/2034731/15272132-second-half-of-full-audio-for-otherness-and-control-in-the-age-of-agi PDF of the full series here: https://jc.gatspre…
 
Garden, campfire, healing water. Text version here: https://joecarlsmith.com/2024/06/18/loving-a-world-you-dont-trust This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief text summaries of the essays that have been release…
 
Examining a certain kind of meaning-laden receptivity to the world. Text version here: https://joecarlsmith.com/2024/03/25/on-attunement This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief text summaries of the essays tha…
 
Examining a philosophical vibe that I think contrasts in interesting ways with "deep atheism." Text version here: https://joecarlsmith.com/2024/03/21/on-green This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief text summa…
 
Joe Carlsmith is a writer, researcher, and philosopher. He works as a senior research analyst at Open Philanthropy, where he focuses on existential risk from advanced artificial intelligence. He also writes independently about various topics in philosophy and futurism, and holds a doctorate in philosophy from the University of Oxford. You can find …
 
Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside. His main interests include connections between empirical psychology and philosophy of mind, and the nature of belief. His book The Weirdness of the World can be found here. We talk about: The possibility of digital consciousness; Policy ideas for avoiding major…
 
What does it take to avoid tyranny towards the future? Text version here: https://joecarlsmith.com/2024/01/18/on-the-abolition-of-man This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief text summaries of the essays tha…
 
Let's be the sort of species that aliens wouldn't fear the way we fear paperclippers. Text version here: https://joecarlsmith.com/2024/01/16/being-nicer-than-clippy/ This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief tex…
 
Who isn't a paperclipper? Text version here: https://joecarlsmith.com/2024/01/11/an-even-deeper-atheism This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that individual essays can be read fairly well on their own, but see here for brief summaries of the essays that have been released thus far: https:/…
 
Examining Robin Hanson's critique of the AI risk discourse. Text version here: https://joecarlsmith.com/2024/01/09/does-ai-risk-other-the-ais This essay is part of a series of essays called "Otherness and control in the age of AGI." I'm hoping the individual essays can be read fairly well on their own, but see here for brief summaries of the essays…
 
On the connection between deep atheism and seeking control. Text version here: https://joecarlsmith.com/2024/01/08/when-yang-goes-wrong This essay is part of a series of essays called "Otherness and control in the age of AGI." I'm hoping the individual essays can be read fairly well on their own, but see here for brief summaries of the essays that …
 
On a certain kind of fundamental mistrust towards Nature. Text version here: https://joecarlsmith.com/2024/01/04/deep-atheism-and-ai-risk This is the second essay in my series “Otherness and control in the age of AGI.” I’m hoping that the individual essays can be read fairly well on their own, but see here for brief summaries of the essays released …
 
AIs as fellow creatures. And on getting eaten. Link: https://joecarlsmith.com/2024/01/02/gentleness-and-the-artificial-other This is the first essay in a series of essays that I’m calling “Otherness and control in the age of AGI.” See here for more about the series as a whole: https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-…
 
Sonia Ben Ouagrham-Gormley is an associate professor at George Mason University and Deputy Director of its Biodefence Programme. In this episode we talk about: Where the belief that 'bioweapons are easy to make' came from and why it has been difficult to change; Why transferring tacit knowledge is so difficult -- and the particular challenges that …
 
In this bonus episode we are sharing an episode from another podcast: How I Learned To Love Shrimp. It is co-hosted by Amy Odene and James Ozden, who together are "showcasing innovative and impactful ways to help animals". In this interview they speak to David Coman-Hidy, who is the former President of The Humane League, one of the largest farm anim…
 
Michelle Lavery is a Program Associate with Open Philanthropy’s Farm Animal Welfare team, with a focus on the science and study of animal behaviour & welfare. You can see more links and a full transcript at hearthisidea.com/episodes/lavery. In this episode we talk about: How do scientists study animal emotions in the first place? How is a "science" …
 
This is sections 4.4 through 4.7 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here…
 
This is section 3 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https://joec…
 
This is section 2.2.4.3 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https:…
 
This is section 2.3.1.1 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https:…
 
This is section 4.3 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https://jo…
 
This is sections 4.1 and 4.2 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: h…
 
This is section 2.3.2 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https://…
 
This is section 2.3.1.2 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https:…
 
This is sections 2.2.4.1-2.2.4.2 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary her…
 
This is section 6 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https://joec…
 
This is section 5 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https://joec…
 
This is section 2.2.1 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https://…
 
This is section 2.1 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https://jo…
 
This is section 2.2.2 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https://…
 
This is section 2.2.3 of my report “Scheming AIs: Will AIs fake alignment during training in order to get power?” Text of the report here: https://arxiv.org/abs/2311.08379 Summary of the report here: https://joecarlsmith.com/2023/11/15/new-report-scheming-ais-will-ais-fake-alignment-during-training-in-order-to-get-power Audio summary here: https://…
 