Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
LessWrong Podcasts
A conversational podcast for aspiring rationalists.
We live in a world where our civilization and daily lives depend upon institutions, infrastructure, and technological substrates that are _complicated_ but not _unknowable_. Join Patrick McKenzie (patio11) as he discusses how decisions, technology, culture, and incentives shape our finance, technology, government, and more, with the people who built (and build) those Complex Systems.
Welcome to the Heart of the Matter, a series in which we share conversations with inspiring and interesting people and dive into the core issues or motivations behind their work, their lives, and their worldview. Coming to you from somewhere in the technosphere with your hosts Bryan Davis and Jay Kannaiyan.
"Recent LLMs can use filler tokens or problem repeats to improve (no-CoT) math performance" by ryan_greenblatt
36:52
Prior results have shown that LLMs released before 2024 can't leverage 'filler tokens'—unrelated tokens prior to the model's final answer—to perform additional computation and improve performance.[1] I did an investigation on more recent models (e.g. Opus 4.5) and found that many recent LLMs improve substantially on math problems when given filler t…
Hello and happy holiday season to you all! Eneasz is back and we’re here to say hi and give a shoutout to Skyler’s awesome annual LessWrong survey and Lighthaven’s fundraising event. See the links below for more details. LINKS The LessWrong survey will remain open from now until at least January 7th, 2026. Lighthaven is once again seeking support. …
"Turning 20 in the probable pre-apocalypse" by Parv Mahajan
5:03
Master version of this on https://parvmahajan.com/2025/12/21/turning-20.html I turn 20 in January, and the world looks very strange. Probably, things will change very quickly. Maybe, one of those things is whether or not we’re still here. This moment seems very fragile, and perhaps more than most moments will never happen again. I want to capture a…
"Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment" by Cam, Puria Radmard, Kyle O’Brien, David Africa, Samuel Ratnam, andyk
20:57
TL;DR: LLMs pretrained on data about misaligned AIs themselves become less aligned. Luckily, pretraining LLMs with synthetic data about good AIs helps them become more aligned. These alignment priors persist through post-training, providing alignment-in-depth. We recommend labs pretrain for alignment, just as they do for capabilities. Website: align…
"Dancing in a World of Horseradish" by lsusr
8:29
Commercial airplane tickets are divided up into coach, business class, and first class. In 2014, Etihad introduced The Residence, a premium experience above first class. The Residence isn't very popular. The reason The Residence isn't very popular is because of economics. A Residence flight is almost as expensive as a private charter jet. Private j…
"Contradict my take on OpenPhil’s past AI beliefs" by Eliezer Yudkowsky
5:50
At many points now, I've been asked in private for a critique of EA / EA's history / EA's impact and I have ad-libbed statements that I feel guilty about because they have not been subjected to EA critique and refutation. I need to write up my take and let you all try to shoot it down. Before I can or should try to write up that take, I need to fac…
"Opinionated Takes on Meetups Organizing" by jenn
15:53
Screwtape, as the global ACX meetups czar, has to be reasonable and responsible in his advice giving for running meetups. And the advice is great! It is unobjectionably great. I am here to give you more objectionable advice, as another organizer who's run two weekend retreats and a cool hundred rationality meetups over the last two years. As the ad…
TL;DR: In 2025, we were in the 1-4 hour range, which has only 14 samples in METR's underlying data. The topic of each sample is public, making it easy to game METR horizon length measurements for a frontier lab, sometimes inadvertently. Finally, the “horizon length” under METR's assumptions might be adding little information beyond benchmark accura…
"Activation Oracles: Training and Evaluating LLMs as General-Purpose Activation Explainers" by Sam Marks, Adam Karvonen, James Chua, Subhash Kantamneni, Euan Ong, Julian Minder, Clément Dumas, Owain_Evans ...
20:15
TL;DR: We train LLMs to accept LLM neural activations as inputs and answer arbitrary questions about them in natural language. These Activation Oracles generalize far beyond their training distribution, for example uncovering misalignment or secret knowledge introduced via fine-tuning. Activation Oracles can be improved simply by scaling training d…
"Scientific breakthroughs of the year" by technicalities
5:55
A couple of years ago, Gavin became frustrated with science journalism. No one was pulling together results across fields; the articles usually didn’t link to the original source; they didn't use probabilities (or even report the sample size); they were usually credulous about preliminary findings (“...which species was it tested on?”); and they es…
"A high integrity/epistemics political machine?" by Raemon
19:04
I have goals that can only be reached via a powerful political machine. Probably a lot of other people around here share them. (Goals include “ensure no powerful dangerous AI get built”, “ensure governance of the US and world are broadly good / not decaying”, “have good civic discourse that plugs into said governance.”) I think it’d be good if ther…
"How I stopped being sure LLMs are just making up their internal experience (but the topic is still confusing)" by Kaj_Sotala
52:20
How it started: I used to think that anything that LLMs said about having something like subjective experience or what it felt like on the inside was necessarily just a confabulated story. And there were several good reasons for this. First, something that Peter Watts mentioned in an early blog post about LaMDa stuck with me, back when Blake Lemoine…
“My AGI safety research—2025 review, ’26 plans” by Steven Byrnes
22:06
Previous: 2024, 2022. “Our greatest fear should not be of failure, but of succeeding at something that doesn't really matter.” –attributed to DL Moody[1] 1. Background & threat model The main threat model I’m working to address is the same as it's been since I was hobby-blogging about AGI safety in 2019. Basically, I think that: The “secret sauce” o…
“Weird Generalization & Inductive Backdoors” by Jorio Cocola, Owain_Evans, dylan_f
17:32
This is the abstract and introduction of our new paper. Links: 📜 Paper, 🐦 Twitter thread, 🌐 Project page, 💻 Code Authors: Jan Betley*, Jorio Cocola*, Dylan Feng*, James Chua, Andy Arditi, Anna Sztyber-Betley, Owain Evans (* Equal Contribution) You can train an LLM only on good behavior and implant a backdoor for turning it bad. How? Recall that the…
“Insights into Claude Opus 4.5 from Pokémon” by Julian Bradshaw
17:41
Credit: Nano Banana, with some text provided. You may be surprised to learn that ClaudePlaysPokemon is still running today, and that Claude still hasn't beaten Pokémon Red, more than half a year after Google proudly announced that Gemini 2.5 Pro beat Pokémon Blue. Indeed, since then, Google and OpenAI models have gone on to beat the longer and more…
“The funding conversation we left unfinished” by jenn
4:54
People working in the AI industry are making stupid amounts of money, and word on the street is that Anthropic is going to have some sort of liquidity event soon (for example possibly IPOing sometime next year). A lot of people working in AI are familiar with EA, and are intending to direct donations our way (if they haven't started already). Peopl…
“The behavioral selection model for predicting AI motivations” by Alex Mallen, Buck
36:07
Highly capable AI systems might end up deciding the future. Understanding what will drive those decisions is therefore one of the most important questions we can ask. Many people have proposed different answers. Some predict that powerful AIs will learn to intrinsically pursue reward. Others respond by saying reward is not the optimization target, …
In this episode, Patrick McKenzie (patio11) walks through how perpetual futures work, from funding rates to liquidations to the surprise of automatic deleveraging. Perps are the dominant trading mechanism in crypto (6-8X larger than spot volume) and exist primarily to let exchanges and market makers run casinos more capital-efficiently. He explains…
252 – The 12 Virtues of Rationality, with Alex and David
1:52:21
Join me as I sit down with Alex and David, both previous guests on the show and both cofounders of the Guild of the Rose. Together, we go over the core – the heart – of what we consider to be the Rationalist tradition. The 12 Virtues are an awesome distillation of what the rest of the sequences build on. Be sure to check out the Guild of the Rose. …
I believe that we will win. An echo of an old ad for the 2014 US men's World Cup team. It did not win. I was in Berkeley for the 2025 Secular Solstice. We gather to sing and to reflect. The night's theme was the opposite: ‘I don’t think we’re going to make it.’ As in: Sufficiently advanced AI is coming. We don’t know exactly when, or what form it w…
“A Pragmatic Vision for Interpretability” by Neel Nanda
1:03:58
Executive Summary: The Google DeepMind mechanistic interpretability team has made a strategic pivot over the past year, from ambitious reverse-engineering to a focus on pragmatic interpretability: trying to directly solve problems on the critical path to AGI going well[1]; carefully choosing problems according to our comparative advantage; measuring…
This is the editorial for this year's "Shallow Review of AI Safety". (It got long enough to stand alone.) Epistemic status: subjective impressions plus one new graph plus 300 links. Huge thanks to Jaeho Lee, Jaime Sevilla, and Lexin Zhou for running lots of tests pro bono and so greatly improving the main analysis. tl;dr Informed people disagree ab…
“Eliezer’s Unteachable Methods of Sanity” by Eliezer Yudkowsky
16:13
"How are you coping with the end of the world?" journalists sometimes ask me, and the true answer is something they have no hope of understanding and I have no hope of explaining in 30 seconds, so I usually answer something like, "By having a great distaste for drama, and remembering that it's not about me." The journalists don't understand that ei…
“An Ambitious Vision for Interpretability” by leogao
8:49
The goal of ambitious mechanistic interpretability (AMI) is to fully understand how neural networks work. While some have pivoted towards more pragmatic approaches, I think the reports of AMI's death have been greatly exaggerated. The field of AMI has made plenty of progress towards finding increasingly simple and rigorously-faithful circuits, incl…
The economics of discovery, with Ben Reinhardt
47:38
In this episode, Patrick McKenzie (patio11) is joined by Ben Reinhardt, founder of Speculative Technologies, to examine how science gets funded in the United States and why the current system leaves much to be desired. They dissect the outdated taxonomy of basic, applied, and development research, categories encoded into law that fail to capture ho…
“6 reasons why ‘alignment-is-hard’ discourse seems alien to human intuitions, and vice-versa” by Steven Byrnes
32:39
TL;DR: AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school-of-thought argues that we should expect future powerful AIs to be power-seeking ruthless consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter g…
“Three things that surprised me about technical grantmaking at Coefficient Giving (fka Open Phil)”
9:45
Coefficient Giving's (formerly Open Philanthropy) Technical AI Safety team is hiring grantmakers. I thought this would be a good moment to share some positive updates about the role that I’ve made since I joined the team a year ago. tl;dr: I think this role is more impactful and more enjoyable than I anticipated when I started, and I think more people shoul…