Content provided by Flux Community Media. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Flux Community Media or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.
Creationism, AI, and techno-oligarchy: Understanding the new age of pseudoscience


Episode Summary

Donald Trump's second presidential administration has been remarkably different from his first one, primarily through his acceptance of long-standing reactionary goals to attack government and expertise—particularly federal agencies that produce and teach science such as NASA, the National Institutes of Health, and the Department of Education. What’s curious about this assault on science is that while it aligns perfectly with the radical Christian right’s goal to destroy education and secular knowledge, the man who is administering the offensive is Elon Musk, a technology oligarch who built his entire personal brand and fortune on the claim that he was supporting science and had a scientific worldview.

Musk’s actions seem incongruous, but they should not be surprising because the ideology that Musk is exhibiting has existed within the Silicon Valley right wing for many decades, a strange mix of poorly understood science fiction, quack nutrition beliefs, and militant metaphysics.

In this episode, author and astrophysicist Adam Becker and I talk about how this mishmash of incoherent thoughts and dollar bills has a history—and an extensive desire to control the future of humanity.

Our discussion is organized around his latest book, “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity.”

The video of our conversation is available, and the transcript is below. Because of its length, some podcast apps and email programs may truncate it. Access the episode page to read the full transcript.

Theory of Change and Flux are entirely community-supported. We need your help to keep doing this. Please subscribe to stay in touch.

Related Content

The far-right origins of Bitcoin and cryptocurrency

After failing in the marketplace of ideas, the right is using political power to force its ideologies onto the public and independent businesses

Republicans declared war on academic expertise and no university is safe from their bullying

How Mastodon and the Fediverse are building a decentralized internet

Big finance and tech monopolies are ultimately why social media is so terrible

Audio Chapters

00:00 — Introduction

03:29 — Ray Kurzweil and the "futurist" industry

06:45 — Techno-optimism's biggest problem is timetables

19:14 — The myth of self-sustaining Mars colonies

23:28 — The religious undertones of techno-optimism

24:00 — George Gilder and Christian fundamentalism among tech reactionaries

34:29 — Bad fiction reading and techno-reaction

38:36 — AI and the misunderstanding of intelligence

44:28 — The problem with large language models

54:00 — Billionaires' flawed vision of AI

58:39 — Carl Sagan's warning

01:05:13 — Conclusion

Audio Transcript

The following is a machine-generated transcript of the audio that has not been proofed. It is provided for convenience purposes only.

MATTHEW SHEFFIELD: And joining me now is Adam Becker. Hey Adam, welcome to the show.

ADAM BECKER: Oh, thanks for having me. It's really good to be here.

SHEFFIELD: Yeah. So, your book is really interesting and I strongly encourage everybody to check it out, because I think a lot of people who come from a more science-oriented background, people who are professional scientists or engineers or physicists, just live in their own little world, their own domain-specific knowledge, and think, well, I'm secure in my job and nothing's gonna happen to me.

And well, Donald Trump is showing that that's not the case.

BECKER: Yeah. Yeah. Unfortunately that's completely true.

SHEFFIELD: And in some ways, Trump, with his NIH censorship and cuts to various expenditures, is making the case that you are right in this book, that this radical right-wing ideology is trying to destroy knowledge.

BECKER: Yeah, yeah. No, that's exactly what they're trying to do. They're trying to replace science with their vaguely scientific-sounding ideas, which have no real basis in fact.

SHEFFIELD: No. And it is very much an attempt at pseudoscience, I would say, is really what this is.

BECKER: Mm-hmm.

SHEFFIELD: And of course the most prominent figure, I guess chronologically speaking, the guy who really got a lot of this started, is Ray Kurzweil, who was very famous in the 1990s. But I think, in the years since, most people have not really heard of the guy much,

BECKER: I mean, he had a new book out last year, I think.

SHEFFIELD: I don't know, did anyone read it?

BECKER: Yeah, I mean, he talked about it at South by Southwest, so that's not nothing. But yeah, I think his big books, The Age of Spiritual Machines, [00:04:00] which I think was the late nineties, and The Singularity Is Near, which was the early to mid two thousands, are the big ones.

So I think you're right about that. The new book that came out last year, though, was titled The Singularity Is Nearer, which sounds like a parody of itself.

SHEFFIELD: I know. Yeah. It's

BECKER: Yeah.

SHEFFIELD: And I would say that his star maybe fell because the dude's been writing the same fucking book for years now.

BECKER: Yes, indeed. Yeah, yeah,

SHEFFIELD: so, okay. So, but tell for those who haven't heard of him give, give us an overview

BECKER: Yeah. So, Ray Kurzweil is best known for these books where he popularized and sort of unpacked the idea of the singularity, this idea that at some point in the near future, the rate of technological change will become so fast that it'll transform everyday life. And if that sounds vague, that's because it is. He did not originate this idea, but he's been, I think, in the past decades, the most well-known exponent of it, the best-known popularizer of it.

And I should also say that in addition to that, he made his career as a very successful inventor, right? The guy was one of the first people to create, I think, text-to-speech, and also invented some really good computer vision stuff, and electronic keyboards and things like that.

So, he knows what he's talking about in that domain of expertise. But like a lot of the people in tech who show up in my book, he seems to have confused expertise in one narrow area for expertise in everything.

SHEFFIELD: Yeah. Well, and then of course, I think his most odious act was attempting to create this idea of a "futurist," as if he [00:06:00] knows what the future is.

BECKER: Yeah. Yeah. I mean, futurists have been around since before Kurzweil, but Kurzweil certainly made very specific claims, and is still making very specific claims, about what's gonna happen when. He's claiming that by 2029 we're all gonna get biologically younger with each year that passes because of advances in biotechnology.

And he claims that by 2045, the singularity will be here. He actually said last year that anyone who makes it to 2029 will live for 500 years or more, which is, yeah.

SHEFFIELD: nice.

BECKER: Yeah.

SHEFFIELD: that be nice?

Techno-optimism's biggest problem is timetables

SHEFFIELD: But yeah. And, I mean, look, generically speaking, it is probably the case that these things will happen.

BECKER: Some of them, I mean, we don't, we don't, yeah.

SHEFFIELD: of it, yeah.

BECKER: Like we don't,

SHEFFIELD: Being able to stop aging is not impossible.

BECKER: yeah,

SHEFFIELD: Scientists could unlock how cancer cells are immortal, in the sense that they are able to continue reproducing.

BECKER: Maybe.

SHEFFIELD: very conceivable

BECKER: Sure.

SHEFFIELD: possible.

BECKER: Yeah,

SHEFFIELD: And a lot of these things, I mean, the idea of current AIs being sentient, which we'll get into later, seems unlikely. But some of these other things that he said, they're not inconceivable. It's just that the specific timetables are,

BECKER: yeah.

SHEFFIELD: you know, hopelessly optimistic.

BECKER: I think that's right. I think that's exactly right. It is possible that at some point in the future somebody will figure out how to radically extend human lifespan, but there is no indication that anything like that is in the offing in the next five years. Or even in the next 25.

SHEFFIELD: Yeah. And even if it was, simply being able to take that from something very simple, like lab-grown meat, [00:08:00] and then applying it to an actual living organism, that's a gigantic step. And of

BECKER: Yeah,

SHEFFIELD: would be with something very simple, like a sponge or something like that. And so there's so much more complexity in between,

BECKER: yeah.

I, I think,

SHEFFIELD: they would start

BECKER: yeah,

SHEFFIELD: nematode, they'd start with a nematode, which, hey, God bless 'em. I love nematodes. They're important for scientific research. But

BECKER: yeah,

SHEFFIELD: We're a little bit more complicated than those.

BECKER: Yeah. "Complicated" is a good word, because I actually think that a lot of the problem here ultimately stems from an unwillingness to accept that the world is a complicated place. That questions might have difficult answers, and some questions might not have good answers at all.

Like, it is possible that we'll figure out how to radically extend human lifespan. It's also possible that we may figure out that it's not possible to radically extend human lifespan. We don't know. Kurzweil is just way too confident that radical life extension is coming. That fully conscious, superintelligent AI is coming. That brain-computer interfaces are coming that make anything that currently exists look like a toy.

And he's convinced that nanotechnology, like the stuff championed in the eighties by Drexler, nanotechnology that would reshape everyday life, is coming. He's fully convinced of that, even though experts in that field do not think that that kind of nanotech really makes any sort of sense.

SHEFFIELD: Well, and then it's also predicated on quantum computing actually being feasible.

BECKER: Yeah, there's plenty of that too. Yeah. I mean, quantum computing may end up working out. I mean, there's already

SHEFFIELD: at that

BECKER: Yeah. Not the kind of, yeah, the kind of quantum computers that these people will often talk about. There's no reason to think that quantum computers are gonna be able to help with most of these scientific miracles that Kurzweil and others are promising.

Yeah.

SHEFFIELD: [00:10:00] Yeah. And what's interesting about this as an epistemology is that it is very, very similar to religion. It is a scientism, as it is sometimes referred to. And yet at the same time, it is using the methods of science. It is this gross kind of distorted-mirror, zombified version of science.

I think.

BECKER: yeah,

SHEFFIELD: And you call it in the book, you refer to it as this idea of a technological salvation.

BECKER: yeah,

SHEFFIELD: Expand that concept more for us, if you would.

BECKER: Yeah. So, basically there's this idea that these people have, Kurzweil is one of them, but also the tech oligarchs of Silicon Valley, people like Musk and Sam Altman and Jeff Bezos and that whole crowd. There's this idea, or set of ideas, that technology is gonna solve all of our problems, that only technology can solve our problems, and that all problems can be reduced to technological ones, usually problems of computer programming.

So, there's this idea that you can reduce these problems to that, and that this will lead to perpetual growth and profitability and ultimately transcendence of all possible boundaries and problems. That sufficiently advanced technology of some kind allows you to transcend legal boundaries, practical, physical boundaries, limits set by the laws of physics, and even moral boundaries.

That basically you can go to space [00:12:00] and live forever, free of all constraints. And this is not something that there's any evidence whatsoever to support, and there's a great deal of evidence against it. But these people nonetheless present it almost as if it's a fait accompli, almost as if the evidence is just overwhelming.

Kurzweil certainly does this. And the rest of them, when they aren't doing that, are at least saying that this is the only possible good future for humanity, and that it's right there in the science. And it's not; there's no scientific basis for this. But one of the things that they want to do is replace science with this kind of pseudoscientific proclamation of what the future of science and the future of technology inevitably holds.

Forgetting that science and technological development are human processes, and they don't inevitably hold anything. And they're also constrained by nature, and we don't know what all those natural constraints are. They want to take the cultural power of science, this idea that science issues forth truth.

And they want to arrogate that power for themselves and anoint their ideas with this power of science.

SHEFFIELD: Yeah, and it's interesting because it's the exact same mentality that the Soviet Union evinced during its entire existence,

BECKER: Hmm.

SHEFFIELD: you know, the idea of scientific materialism, that our ideas, our viewpoints, our opinions are science. And what was fascinating about this philosophy of science within the Soviet Union is that it led to a number of fundamental scientific errors, particularly in regards to [00:14:00] biology,

BECKER: Yes.

SHEFFIELD: they, they decided for the longest time that, that Darwin was wrong. And they had their own personal proprietary definition of how, what evolution was and how it worked. You want to talk about that because I, it's a, I think it's very relevant here to this discussion.

BECKER: Yeah, no, they had Lysenkoism, the details of which I don't remember, but basically the idea that competition and survival of the fittest was not really how nature worked, and that nature worked on something else. You know, in the same way that evolution superficially looks as though it's saying something about society and capitalism,

that the kind of competition you have in capitalist markets is inscribed in the laws of nature, which is not what evolution says, but you can try to read it that way. They tried to do something sort of similar with communism

SHEFFIELD: Mm-hmm.

BECKER: it's, it doesn't work 'cause yeah.

SHEFFIELD: Yeah. Well, and it's also because they were claiming that acquired behaviors or traits could be passed down biologically,

BECKER: Yeah.

SHEFFIELD: which is just simply not true.

BECKER: Yeah, exactly.

SHEFFIELD: Now, it is true in one sense that humans in many ways replicate cognitive and epistemic evolution in child development. And so it is true that social species, and species that engage in care for the young, can accelerate epistemic evolution. That's

BECKER: Yeah.

SHEFFIELD: but it's not true that, that they can also alter biological evolution,

BECKER: I mean, at least.

SHEFFIELD: through sheer will.

BECKER: Yeah, exactly. Not in the way that they needed it to be true. Right. There's epigenetics and stuff like that, which we've learned a lot about in the last few decades, but it doesn't mean that [00:16:00] Lysenkoism is correct. So yeah. No, there's a long history, at various places in the extremes of the political spectrum, of trying to take politics and inscribe it on science, and it generally doesn't work. Which is not to say that science isn't political.

I think a lot of people think that it's not, and that's false. Science is a human activity. All human activities end up being political, and science is the best way we have of learning about what's going on in the world. And when we learn things like, say, global warming is real and happening and caused by humans, that has political and policy implications.

So science isn't apolitical. But you can't just say, oh, these are my politics and these are the things that I believe have to be true about the world, and so anything that contradicts them is not real science; I'm the real keeper of the scientific method and the flame of science.

And yet that's what these tech billionaires are doing. They're saying, oh, we're the richest people in history, so we're the smartest people in history, and we're the leaders of the tech industry, so we understand more about science and technology than anybody else ever. And none of that's true.

SHEFFIELD: No,

BECKER: Yeah.

SHEFFIELD: No, it isn't. But before we get more into that, I did want to circle back to space as the sort of inspiration and origin for a lot of these ideas. As you were saying, it's this blind faith that, I mean, was a trope of pretty much all early science fiction, or certainly a lot of it,

BECKER: Yeah.

SHEFFIELD: libertarians in space.

BECKER: Yes.

SHEFFIELD: It's a fun trope; you can actually look it up. I encourage you to, if you haven't seen that one as a trope yet. But that's really what they believe, that space is this sort of beautiful, magical, transcendent thing. And that's probably, if I had to guess, what attracted you to [00:18:00] become interested in this as an ideology: your own personal background in cosmology.

BECKER: Yeah. Yeah, I mean, that's a good chunk of it. Yeah, libertarians in space is a very good description of a lot of early science fiction, especially stuff from, like, Heinlein. But,

SHEFFIELD: Oh yeah.

BECKER: but yeah, no, I, I, like you said, I'm a cosmologist by training. I did my PhD in that and I've been interested in space my entire life basically.

And I grew up reading just enormous amounts of science fiction and watching lots and lots of Star Trek and Star Wars and anything else I could get my hands on. And so that is a good chunk of where my interest in this came from. I wanted to understand what these people were doing, because I saw them sort of playing in a lot of the same spaces that I was interested in.

And especially saying a lot of things about space. And over time, as I was paying more attention, I was seeing, oh, these things they're saying about space, that's not true. Like, for example, to pick on someone who's an easy target, but a worthy one:

The myth of self-sustaining Mars colonies

BECKER: Elon Musk has made a lot of claims about space in general and Mars in particular that are just not true.

And he's been surprisingly consistent about this stuff. He has said that he wants to get a million people living in a colony on Mars by 2050, and it needs to be self-sufficient so that it can keep, operating and keep everybody alive and well, even if the supply rockets from Earth stop coming.

And, it's, it's nice in a way that he's been that specific and that clear because when you get that specific and that clear, it's, it's really obvious that you just can't do that. And like, sure, the date of 2050 is very ambitious, but that's not even the biggest problem. [00:20:00] There,

SHEFFIELD: The amount of money that would be needed to do it

BECKER: that's,

SHEFFIELD: like quadrillions of dollars,

BECKER: oh yeah.

SHEFFIELD: this is more money than exists in the entire world right now.

BECKER: Absolutely. But that's not even the biggest problem either. Like, there are just so many problems with this, because Mars is just fundamentally inhospitable. It is not a place that people can live easily, if at all. There's no air. The dirt is made of poison.

The radiation levels are really high, gravity's too low, and there's no biosphere at all. And so we'd have to kickstart something to feed everybody who's gonna be there. There's a bunch of stuff that we know is bad for humans, like the radiation and the poison, and there's a bunch of stuff where we don't really know what the long-term effects would be, like the low gravity.

SHEFFIELD: Yeah.

BECKER: And then also, getting that many people there safely is nearly impossible. And once you have that many people there, a million people is not enough to sustain a high-tech economy that's independent of the one on Earth. The best estimates on that are somewhere around 500 million to a billion people on Mars.

And that's not happening. And then, Musk wants to terraform Mars make it more like Earth. That's not happening. His plans for doing that do not work. That

SHEFFIELD: Well, and then Mars has no magnetosphere.

BECKER: Yeah, that's right. Yeah. Which is one of the,

SHEFFIELD: if you somehow succeeded at all, that

BECKER: there would still be too much radiation.

SHEFFIELD: protection from space radiation

BECKER: Yep.

SHEFFIELD: from just simply being destroyed by asteroids

BECKER: Well, yeah.

SHEFFIELD: like, there's nothing that prevents it.

BECKER: Yeah. I mean.

SHEFFIELD: the Earth's magnetosphere really does work in a lot of ways.

BECKER: Yeah, yeah. Earth's magnetosphere is about half of our protection from radiation. The [00:22:00] other half is our thick atmosphere. Mars has neither of those. If you somehow gave Mars a thick atmosphere, it would still get more radiation. And as for asteroids, I mean, one of the things that Musk has said over and over is that we need this as a backup for humanity.

That's why the colony on Mars needs to survive even if the rockets stop coming, in case some sort of disaster befalls Earth, like an asteroid hitting Earth. The thing that's the most crazy about that is: an asteroid hitting Earth as big as the one that killed off the dinosaurs

66 million years ago, that day was the worst day in the history of complex life on Earth. And 12 hours after that asteroid hit, when the whole Earth was essentially on fire, and 99% of all creatures had died, and 70% of creatures were extinct or about to go extinct,

That was still a nicer and more hospitable environment for any animal than,

SHEFFIELD: Yeah,

BECKER: Mars has been at any point in the last billion years. And the easy demonstration of that is, mammals survived that. There is no mammal that you could put onto the surface of Mars without protection that would survive for more than, I think, about 10 minutes, if that.

So yeah. No, the whole thing is just nonsense. And and yet he just keeps saying it.

SHEFFIELD: Yeah.

The religious undertones of techno-optimism

SHEFFIELD: Well, he does. And, I mean, this is a religion.

BECKER: Yeah.

SHEFFIELD: I think that people have to realize that, and, it doesn't, but it doesn't look like a conventional religion. So they don't have, holy books and, ancient figures that they think are really cool. But this is still a religion, like operationally.

It's not that different from Scientology. It really isn't.

BECKER: Yeah.

SHEFFIELD: I think this is a fair comparison.

BECKER: Yeah. No, I agree.

SHEFFIELD: And here's why. [00:24:00]

George Gilder and Christian fundamentalism among tech reactionaries

SHEFFIELD: Another parallel that I think may be helpful for understanding this, for audience members who haven't really thought of it in this way, is George Gilder. He is such a key person through these links, and he shows that this is religion.

So George Gilder, again, another pretty obscure guy at this point, but in the seventies, eighties, and nineties, this dude was everywhere.

BECKER: Yep.

SHEFFIELD: He had all these newsletters and magazines, and he was a futurist, you know, before Ray Kurzweil.

But also, the other thing about George Gilder is that he is a creationist. He is a biblical literalist. And he also thinks that women shouldn't be able to vote, and that we've harmed our society, perhaps irrevocably, by allowing women to vote, so women need to be put back into their place, and then we can get all the computer happy-happy land. And this guy has been saying this for decades. He was a very big figure for Ronald Reagan's White House. I mean, this guy was Newt Gingrich's mentor,

BECKER: Hmm.

SHEFFIELD: So he was highly influential in libertarian spaces.

But again, he's a creationist and his influence has continuously existed.

BECKER: Yeah.

SHEFFIELD: So even now, while he seems to be mostly obsessed with his social policy viewpoints and creationism lately, he has a direct connection to people like Peter Thiel, who is also a religious Christian

fundamentalist. And Thiel isn't known as that, I think, for most people, because the business press does such a horrible job of accurately reporting who this guy is.

BECKER: Yeah.

SHEFFIELD: All they do is show up at his events, and they're just like, oh, wow, he's so amazing, he's so smart, he's so cool, he's so rich. And it's like, well, your job is to actually report on these people.

BECKER: Yes.

SHEFFIELD: And I think at some [00:26:00] point, business journalism began realizing, oh, we've done a really shitty job of covering these people, and we've provided the public no information about what these tech oligarchs actually want and believe and think.

BECKER: yeah.

SHEFFIELD: Maybe we should start doing that. And so they started doing that, and it pissed these people the fuck off, and now they're going insane,

BECKER: Yep.

SHEFFIELD: in a public way, and trying to destroy democracy, because the rest of the public is starting to figure out their very strange ideas.

BECKER: Yeah. Yeah. And they see that as persecution, as opposed to, accurate reporting.

SHEFFIELD: Well, and because their ideas should not be debated. They should not be subject to dissent, because they're true,

BECKER: Right,

SHEFFIELD: are the prophets.

BECKER: right.

SHEFFIELD: And I mean, I wanna talk about Thiel in this context, though.

BECKER: Yeah.

SHEFFIELD: How he's connected to Gilder.

BECKER: Yeah, absolutely. I mean, there's definitely this sense that all of these oligarchs have that they must be right about these things, because they have so much money, and that's proof that they're smarter than the rest of us. But Thiel, yeah. I mean, Thiel is also maybe not a full-blown creationist, but let's say creationist-curious.

He said in an interview, oh, about 10, 15 years ago, that he thinks that evolution isn't the whole story. And he's also funded a creationist magazine, or a magazine that gives cover to creationism, that thankfully doesn't really seem to be around anymore. But that magazine in turn was set up by a guy named David Berlinski, who is tight with both Thiel and George Gilder.

Both Berlinski and Gilder were instrumental, and as far as I know still are, over at a place called the Discovery Institute, which is the big intelligent design think tank, if you can call an intelligent design center a think tank. But yeah. And Thiel has also voiced [00:28:00] a lot of those same positions that you were just attributing to Gilder. Thiel has said that he doesn't think that free markets, or "freedom" as he calls it, and democracy are compatible, because the right to vote was extended to women, and women are too unfriendly to free markets to be trusted with voting. Well, that's not quite what he said. He said that women are too unfriendly to free markets for democracy to be compatible with free markets, except instead of "free markets,"

he kept saying "freedom," because his idea of freedom is free markets. That's the beginning and end of it. Which is of course radically abridged; "radically abridged" is maybe the nicest thing I could call that. Fatally impoverished might be a better way to put it. Deeply inhumane is another problem with it, right?

The kinds of freedoms that we actually care about in our everyday lives just aren't encompassed in that notion of what freedom is. But don't tell that to Thiel or other hardcore libertarians. So, yeah.

SHEFFIELD: Yeah. But it is this deeply salvationist idea. It's almost like they see themselves as the Platonic philosopher-king. But they don't use logic; they just want the king part.

BECKER: Yep.

SHEFFIELD: but they see themselves as philosophers.

BECKER: Well, yeah, they want the respect that comes along with that title. Yeah.

SHEFFIELD: Yeah. Well, and, you know,

if this were a credible movement, and they were actually scientifically oriented, they would want nothing to do with people like George Gilder

BECKER: yeah.

SHEFFIELD: or David Berlinski

BECKER: Mm-hmm.

SHEFFIELD: or Thiel, because they don't believe in science.

BECKER: Yeah.

SHEFFIELD: they show you

BECKER: Mm-hmm.

SHEFFIELD: that they think creationism is credible, and especially Berlinski. But it

BECKER: [00:30:00] Yeah.

SHEFFIELD: goes further. To continue this from their worldview standpoint: it is fundamentally anti-science, because when you look at creationism, they advance no affirmative idea. So they say, oh, well, Darwinism, which is, they always call it Darwinism.

BECKER: Mm-hmm.

SHEFFIELD: it

BECKER: That's right.

SHEFFIELD: And so they say that evolution has all these problems with it, which, strangely enough, are all ideas that were debunked 150 years ago. But nonetheless, all they ever do is say, well, I don't like this aspect of it.

I don't like that aspect. And they never try to create their own framework, because they can't. Like, there is no framework that they can advance, because ultimately it comes down to, well, God did it. And,

BECKER: Mm-hmm.

SHEFFIELD: and that's not science.

BECKER: No, it's not. And yet, again, they call themselves the keepers of the scientific flame. Anyway, Mark Andreessen, another hyper-libertarian tech oligarch and billionaire, had this manifesto that he published, what, like a year and a half ago, something like that, called the Techno-Optimist Manifesto.

And in that manifesto, he's got this rhetorical thing that he does where he says, we believe this, we believe this, we believe this. It's like a set of "we believe" statements.

SHEFFIELD: Because he can't make arguments.

BECKER: He can't make arguments, yeah. I think Dave Karpf, a really good political scientist at George Washington University, called it less of a manifesto and more of a series of tweets, or something to that effect. But one of the things Andreessen says in this manifesto is: we are the keepers of the real scientific method.

And then he also says: we are not of the right or of the left. But after saying those two things, at the end of the manifesto he has a list of "patron saints of techno-optimism." And on that [00:32:00] list he has George Gilder.

SHEFFIELD: Yeah.

BECKER: And he also has Marinetti, who he quotes in the manifesto and then lists there. I'm blanking on Marinetti's first name. Marinetti was the author of the Futurist Manifesto, but he was also the co-author of the Fascist Manifesto, in like 1920 or thereabouts.

SHEFFIELD: Yeah.

SHEFFIELD: Filippo Tommaso.

BECKER: Yeah, I think that's it.

SHEFFIELD: The guy was a fascist.

BECKER: Yeah, exactly. He was a fascist. And Andreessen also has Bertrand Russell on his list of patron saints of techno-optimism.

But I gotta say, having read some Bertrand Russell and having read this manifesto: Russell would not like this manifesto. I think Marinetti would, and I think Gilder probably does. So no, it's really bizarre.

SHEFFIELD: Yeah, the Russell point, that's a really good one, because Russell definitely was very, very pro-science and tried to create a formalized, as in mathematical, account of logic. And that's not what these guys are doing.

BECKER: No, not at all.

BECKER: And then the other thing is, in that manifesto Andreessen says they're not of the right or of the left, but then he says: communism and socialism, we reject these things as death to humanity. And Russell was a socialist.

SHEFFIELD: Yeah,

BECKER: And meanwhile on that list, along with all of these other people, he's also got John Galt, who is not a real person.

SHEFFIELD: Spoiler.

BECKER: Exactly. Spoiler: not a real person. Who is John Galt? Not real. That's who John Galt is.

SHEFFIELD: Yeah, and that is actually a really good point, because a consistent through line of both these technological fundamentalists and religious [00:34:00] fundamentalists is that they make arguments about reality that are based on fictional works.

BECKER: Yes, they do.

SHEFFIELD: And they do this over and over and over, as if a Star Trek episode were something that actually happened, or as if we can learn about how epistemology works from the actions of Broho,

BECKER: Yes,

Bad fiction reading and techno-reaction

SHEFFIELD: Or their favorite guy, René Girard, this horrible plagiarist of Friedrich Nietzsche who tried to make a Christianized Nietzsche. René Girard's entire worldview is based on fiction.

BECKER: Yep.

SHEFFIELD: Fiction. And this is not me being unfair or making a reductio ad absurdum. He was a literature professor, and he read some books, and he was like, oh, I think I know about reality from what I read in fiction.

BECKER: Yeah. And look, I'll be the first one to defend the power of fiction. I think novels and fiction have great power for extending our understanding.

SHEFFIELD: But they're not science.

BECKER: Exactly, they're not science. And you can use them to explore the human condition. You can use them to explore questions about science.

But they're not gonna tell you what's happening in the world in the sense that science does. And also, not to be a horrible snob, but a lot of this is not just fiction but bad fiction, like Ayn Rand.

SHEFFIELD: Yeah.

BECKER: Or bad readings of fiction, right?

Like, you watch Star Trek, and this is something I talk about in the book. [00:36:00] You watch Star Trek with even a little bit of thought and you'll see: oh, okay, this is a parable. Especially if you go back and look at the original series or The Next Generation, this is a parable about something happening right here, right now. Like the famous episode from the original series involving the guy whose face is half black and half white, and the other guy whose face is half black and half white but swapped, and one of them is chasing the other, and this racial strife has destroyed their civilization. That's something that airs in 1968, and, hmm, I guess that episode was really about warp drive, right? Like there was nothing going on in the world or the country that produced that episode which could actually have been what it was really about.

Looking at science fiction and saying, oh, it's about space and going to space, is just a really poor reading of that fiction. And yet Peter Thiel has said that you should get good ideas about what to do by looking at the science fiction of the fifties and sixties, and that the message of that fiction is: develop space, develop the oceans, develop the deserts. And, like, okay, does Peter Thiel think that the message of Dune is "develop the deserts"? Because that's just a hilariously bad reading of that book.

SHEFFIELD: Yeah, it's the opposite of what the book says.

BECKER: Yes. Peter Thiel would get a failing grade in any class that covered Dune for that reading. But Peter Thiel has also made it very clear that he doesn't really see the value of education.

SHEFFIELD: Yeah.

BECKER: yeah,

SHEFFIELD: And then he also showed it in the fact that he named his [00:38:00] company Palantir, after the magic crystal balls, and if you look into them, they torture your soul forever.

BECKER: Yeah, no, there's just absolutely no self-awareness about how it looks, or how someone might actually read the fiction they claim to be inspired by. Like, I am sure that Peter Thiel has read The Lord of the Rings. I don't think he's understood it.

SHEFFIELD: No, he doesn't seem to.

BECKER: Yeah.

AI and the misunderstanding of intelligence

SHEFFIELD: And there is an irony and a parallel, I think, in the way that these companies have pushed software products which they market as artificial intelligence.

BECKER: Yeah.

SHEFFIELD: Because look at what intelligence is outside of the technological world. Within humans and within animals, intelligence is an expression of embodiment. That's what it is. It's an evolved behavior. It is not something that is computationally arrived at. Cognitive science has basically discovered that René Descartes was completely backwards when he said, "I think, therefore I am." What cognition actually is, is: I am, therefore I think.

And modern, current-day AI companies don't get that at all.

BECKER: No, they really don't. There's a long history in science and philosophy of making an analogy between the brain, or the nervous system, and various pieces of technology, usually one of the most advanced pieces [00:40:00] of technology around at the time. I think it was Descartes who compared the nervous system to a hydraulic system.

Leibniz famously compared the brain to a mill, although he wasn't really being literal when he said that. Then there were analogies to telegraph networks, and then telephone networks, and then computers. And somewhere in there, before telegraph networks, there were definitely analogies to clocks and clockwork. All of those capture something, and all of them are imperfect.

They're all analogies, but the brain is not a computer. It is true that there are some arguments that you should be able to do what the brain does using a computer. But those arguments are quite theoretical, and they say nothing about whether the kind of computer you would need to do what the brain does is the kind of computer we have now, or whether that's an easy and straightforward thing to do, and certainly nothing about whether the kind of AI we have right now is doing what the brain does. The difference between the neural networks that underlie modern AI and the actual neural networks that are going on right up here?

Very, very different. And just thinking that what's going on up here, the four or so pounds encased in the skull, is all that matters, that's a mistake too, as you were saying. You have to embody it. We are not our brains in a spacesuit of our bodies; we are our bodies in our environment. We need both.

And there's also just this fundamental misunderstanding about what intelligence is: that it's an individual property, as opposed to a property of [00:42:00] societies and systems, or even that it's a monolithic property within an individual, as opposed to skills at various kinds of tasks.

It's a real set of mistakes, and it's culminating in this bizarre promise that you can take something that just predicts what the next word is likely to be in a sequence of words, and that will somehow get you something that is able to not just match but surpass human intelligence. I just think the case for that is pretty much nil. There's no evidence that it's true, a great deal of evidence that it's not, and a lot of experts in the field don't buy it.

SHEFFIELD: Yeah. And it's not to say that these things can't be useful, because in fact they are.

BECKER: Sure.

SHEFFIELD: And they will become more useful. It's just that they are not even engaging in abductive logic. Abductive logic, meaning, just for the audience, inferring the best available explanation while knowing it may not necessarily be true. That's not what these things are doing. They're just doing probabilistic selection, which is to say: well, the probability of this next token is 97 percent, and this other one is 85, so it goes with the 97.

BECKER: Yeah, it's not logic. And we've had models that do that for over 75 years. This is just the best in a long line of such models, and it is much, much better than the stuff we had 75 years ago, absolutely. ELIZA, and before that, just Markov chain text models. I sort of [00:44:00] feel like it would be good if anyone who wants to write about AI who doesn't know this history just takes a few minutes and plays with ELIZA, or plays with a Markov chain text model. They are not nearly as good as large language models, they're not even close. But the fact that those very simple, very transparent models can do what they can do should make us more suspicious of large language models.
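The kind of word-level Markov chain text model Becker recommends playing with fits in a few lines. This is a minimal sketch with a toy corpus, not any historical implementation:

```python
import random
from collections import defaultdict

def train_markov(words):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length, rng):
    """Walk the chain, sampling each next word from the observed successors."""
    out = [start]
    while len(out) < length:
        successors = chain.get(out[-1])
        if not successors:  # dead end: nothing ever followed this word
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish".split()
chain = train_markov(corpus)
sample = generate(chain, "the", 6, random.Random(0))
```

Every step is just "pick a word that followed the current word in the training text," yet the output is locally fluent, which is exactly why such transparent models are a useful intuition pump for what fluency alone does and does not prove.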

The problem with large language models

BECKER: And, and this is just a historical fact: with ELIZA, people attributed so much theory of mind to what was going on under the hood, which was pretty much nothing. That should make us even more suspicious about the claims people are making about LLMs. There's this word I love, pareidolia: the human tendency to see patterns where none exist, especially patterns like human faces or human speech, in our lives and in our long evolutionary history.

SHEFFIELD: Like the face in the toast.

BECKER: Yeah. The things that use language have always been other people. And so if something uses language, that makes it seem like it's a person.

SHEFFIELD: Mm-hmm.

BECKER: But it's not. And we should know that this is a cognitive bias we have, that we are inclined to attribute agency and interiority to things that use language, even when we know there's nothing there.

So, yeah. I have a lot more to say about this.

SHEFFIELD: Throwing more CPUs at the problem is not going to solve the [00:46:00] problem, because there's a fundamental difference between capacity to understand and capacity to perform.

BECKER: Yeah, they're not the same. And also, people are saying things like, oh, we're approaching the number of neurons in the human brain. First of all, a neuron in the human brain is not like a "neuron" in a neural network in a computer. They're really, really different. The ones in the human brain are way more complicated and do a lot more things.
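For contrast with the biological case Becker describes, the entire computation of one unit in a standard artificial network is a weighted sum passed through a squashing function. A minimal sketch, using a sigmoid as the (conventional but here arbitrarily chosen) nonlinearity:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The whole computation of a standard artificial 'neuron':
    a weighted sum of inputs, squashed by a sigmoid nonlinearity."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

Everything such a unit does fits in those two lines of arithmetic, whereas a biological neuron involves dendritic computation, neuromodulation, spike timing, and much else that this model simply ignores.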

SHEFFIELD: And still, we don't even fully understand what they do.

BECKER: Right, we don't fully understand what they do. And one of the real limits of the computational model of the human brain is that there's no good way to capture what the human brain does in terms of computational complexity. I'm not saying that this is in principle impossible, but we do not have a good answer to questions like: what is the memory capacity of the human brain in bits? What is the processing speed of the human brain in bits per second? And it may be that those questions are just fundamentally ill-posed.

SHEFFIELD: Yeah, and if it is a computer, it's probably more like millions of quantum computers, and we barely understand quantum computing.

BECKER: Yeah. I mean, I don't know that quantum computing processes are relevant for what's going on in the brain.

SHEFFIELD: But I'm saying, in other words, that it's capable of instantiating simultaneous streams of thought, including outside of conscious awareness. The brain is still doing all these other things, and you don't even know that it's doing them.

BECKER: Yeah, that's certainly true. And also, the brain is analog, not digital, and that's gonna [00:48:00] make for a lot of differences.

SHEFFIELD: So yeah, the labeling, the way they're thinking about it, just doesn't fundamentally make sense, even though it's not to say this stuff couldn't be computationalized at some point.

BECKER: Right.

SHEFFIELD: But the whole idea of calling one of these devices a neural network is just ridiculous. It is entirely a marketing label. And they weren't called that before, actually.

BECKER: I mean, the term neural network goes back to old academic research from, I think, the sixties. And it was inspired by neural structures, but a neural network in a computer is no more like an actual network of neurons in the brain than an emoji of a thunderstorm cloud is like an actual nor'easter. And I'm not even talking about a simulation of a nor'easter in some complicated weather model; I'm just talking about the emoji versus the real storm. They're not really that much alike. They just share some vague resemblance.

SHEFFIELD: Yeah. And I do think sometimes that people who have a more critical perspective on some of these AI concepts can be a little bit mystical themselves, so I think we have to watch for that too.

BECKER: No, that's true.

SHEFFIELD: The idea of qualia seems remarkably similar to a spirit. And I wonder sometimes if people who talk about qualia in such an authoritative fashion are aware of what they're doing, and who they are enabling, by talking about cognition in such a manner.

BECKER: Yeah, I wonder about that too. I mean, some of them, I think, are aware of that. But I think that [00:50:00] it is in principle possible to build something, through a process other than biological procreation, that can do roughly what humans do.

And if you want to call that artificial intelligence, yeah, sure. Do I think we're anywhere close to doing that? No. Do I think we can do it with the kinds of devices and the kinds of programs we're running on those devices today? No. Do I think that the word "build" might not even be right for doing it? Yeah. It might be that we have to grow it.

SHEFFIELD: I think that's the right answer, because, again, when we look at simpler organisms, there are gradations of cognizance. We see that there are organisms like parrots that do have the ability to understand language and use it with humans in a sensible manner, making arbitrary internal decisions, to say: I want this thing, or give me that, or I'm going to go do this.

BECKER: Yep.

SHEFFIELD: But they don't use it with each other, and that means fundamentally they're not the same as us. But at the same time, those gradations exist along a spectrum, which does show there isn't something magical or special about humans. These animals have almost all of the things that we have, just not the last two or three steps.

BECKER: And there are creatures who use something that seems quite a bit like language among themselves, right? Some kinds of birds do something like that. Whales and cetaceans do something like that. And there are other forms of communication possible. It's very clear that pack animals like dogs communicate with each other in ways that we only partly understand.

SHEFFIELD: Yeah. But I think the core similarity between [00:52:00] everything that they do and what we do, and what these computers don't do, is that they are self-aligning.

BECKER: Mm-hmm.

SHEFFIELD: In other words, they determine what their responses are based on their past experience, whereas an LLM is not capable of self-alignment. It has to be restricted with human control directives that say: no, you can't do this, you can't say this, these are the facts that exist. So they're not self-aligned. And self-alignment is the predecessor to consciousness.

BECKER: Yeah, I just think that LLMs don't have a good sense of what's in the world, because they're not embodied, because they're not structured to be embodied. They know about words; they don't know about anything else. This is one of the problems I have with the concept of hallucination, when people say, oh, the LLM hallucinated a wrong answer to the question I asked, but it only hallucinates sometimes, most of the time it's fine.

The problem I have with that is it makes it sound like when it gives you the wrong answer, it's doing something different from when it gives you the right answer. And it's not. When you ask it a question and it gives you the right answer, it's doing exactly the same thing it does when it gives you the wrong answer. In a sense, it only knows how to hallucinate.

SHEFFIELD: I mean, it is true that token entropy does in fact make hallucinations more common: they cluster where entropy is high.

BECKER: Sure.
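The token entropy Sheffield mentions is just the Shannon entropy of the model's next-token distribution. A generic sketch, not tied to any particular model, with made-up example distributions:

```python
import math

def token_entropy(probs):
    """Shannon entropy, in bits, of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident step (one token dominates) has low entropy; a flat
# distribution, where the model is unsure, has high entropy.
confident = token_entropy([0.97, 0.02, 0.01])
unsure = token_entropy([0.25, 0.25, 0.25, 0.25])
```

High-entropy steps are the ones where sampling has the most room to wander, which is one intuition for why measured uncertainty and confabulated output tend to go together.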

BECKER: But it's never gonna go away, this problem. It is inherent to the architecture of these systems, because they just have no notion of truth.

SHEFFIELD: Yeah, and that's what self-alignment would be the basis of.

BECKER: Yeah. [00:54:00] Exactly.

Billionaires' flawed vision of AI

BECKER: And the thing is, this brings us back to the billionaires, right? The fact that these things have no notion of truth is, for the billionaires, I think, not a bug. It's a feature.

SHEFFIELD: I think that's right.

BECKER: Yeah.

SHEFFIELD: Because having the capacity to do the tasks that 500 humans could do, that's just fine for them. In fact, they probably don't want a self-aligning AGI, because it would disagree with what they want.

BECKER: yeah.

SHEFFIELD: Because if such a system had ingested the totality of philosophy and the various psychological databases, what those databases show is that the optimal epistemic systems are democratic.

BECKER: Hmm.

SHEFFIELD: Bottom-up, somatically based, embodied, with civil rights. That's the optimal arrangement, and that is inherently against what these guys want.

BECKER: Yeah. One of the lessons that I think the AI field learned early on is that you can create an AI that is convincing in a limited setting. The easiest kind of person to imitate is a dead one, right? Then the computer is just off. And ELIZA imitates a kind of psychologist whose responses involve incorporating a lot of what you've said, and that's part of why ELIZA is convincing. With the LLMs, they've automated creating a yes man. It's very easy to create your own yes man or your own hype man. But what they really want to do, to bring this back to something we were talking about earlier, is these [00:56:00] oligarchs want to create Galt's Gulch, right?

They want to create a world where they don't need the rest of us, because we're a threat to their power. And one of the reasons they're seduced by the false promise of these AI systems is that it suggests, or promises, a world where they don't need the rest of us, a world where they can just replace the rest of us with AI.

So they don't have to worry about questions like: oh, what happens if the villagers figure out that I've been swindling them out of their money, and they come after me with pitchforks and fire? If you just don't need the villagers, and can cut them off from all the supplies they need to stay alive and replace them with an inexhaustible supply of robots, then you're good to go.

The good news is these AI systems can't do that. The bad news is the oligarchs seem to be making a run at doing it anyway.
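The reflection trick Becker attributes to ELIZA, folding the user's own words back into a canned response, can be sketched in a few lines. This toy is far simpler than Weizenbaum's actual 1966 program, which used a large script of ranked patterns; the word table here is invented for illustration:

```python
# A tiny ELIZA-style responder: swap first-person words for second-person
# ones and wrap the result in a stock question. This illustrates the
# reflection trick only; the real ELIZA had a much richer pattern script.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def respond(statement):
    """Echo the user's statement back as a question, reflecting pronouns."""
    words = [REFLECTIONS.get(w, w) for w in statement.lower().rstrip(".!").split()]
    return "Why do you say " + " ".join(words) + "?"
```

The program understands nothing, yet because the reply reuses the speaker's own words, people readily read a mind into it, which is the historical point being made here.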

SHEFFIELD: Yeah.

BECKER: Yeah.

SHEFFIELD: Yeah, and that is what's disturbing: they're trying to create AGI capacity without AGI autonomy.

BECKER: They're gonna get neither.

SHEFFIELD: No, they won't. But I do worry that they may get the capacity to do many, many terrible, awful things.

BECKER: Oh yeah. Definitely.

SHEFFIELD: That's the real danger. Because I do actually think that if a sentient AI evolved, it would probably be very humane, because the logic of authoritarianism and totalitarianism is inherently destructive to an autonomous AI agent. It's saying: these other beings have the right to destroy you anytime they want; they are better than you [00:58:00] inherently. So that would militate against authoritarianism in an emergent AI system, I think.

BECKER: I would hope that's true. I don't know. My concern is that they're not gonna get a sentient AI system out of these things, but they are gonna get systems that can do a lot of things, and they're gonna use those systems to further concentrate wealth and power and try to cut the rest of us off.

SHEFFIELD: Yeah. And he obviously wants that too, as long as he gets his cut.

BECKER: Yep. Exactly.

SHEFFIELD: Yeah. Well, all right, we're getting to the end here.

BECKER: Yep.

Carl Sagan's influence and warnings

SHEFFIELD: One person who you talk about a lot in the book, including at the end, is Carl Sagan, somebody who I actually wish were more famous nowadays than he was in his own time.

BECKER: Yeah.

SHEFFIELD: He was very influential on you, and he had a lot of ideas that, well, this guy has been proven more right than Nostradamus, I have to say. So tell us about your experience with him, and then maybe the final warnings of his that you discuss in the book.

BECKER: Yeah. Well, I've been a big fan of Carl Sagan since I was a kid. He's part of the reason why I ended up going into science, and into the area of science that I did. Watching Sagan's Cosmos as a kid was formative for me. And one of the things I took away from it was that we live in a fragile world.

We need to work together. And we also need to work hard to listen to what the world is telling us about what's actually happening, as opposed to just [01:00:00] operating on blind faith. Sagan warned about many things, and he also had some positions that, as I got older, I ultimately didn't agree with. But one of the things he warned about that I think was spot on, and he warned about this in, what, the mid-nineties?

SHEFFIELD: '96.

BECKER: Yeah. He warned that we were in danger of heading toward what he called a demon-haunted world, that we were in danger of losing the flame that science has given us, that we would return to superstition and misunderstanding. In the face of massive global problems that can only be solved by coming together and working as conscientiously and carefully as we can, we would instead retreat into superstition and fear.

We would instead retreat into superstition and fear. And I think he's right, I, I had hoped that he was wrong about that. And that's exactly where we are, what is this, 30 years later. And it's terrifying. And I, I wish he was still around. I think we need him. And one of the things that was actually most depressing to me in writing this book was finding that some of these people, some of these billionaires and some of these sort of kept intellectuals that they have to, to try to promote their ideas will actually cite Sagan as inspiration for various ideas that they have.

And, I can understand where they get that from, but it's clear that they haven't, really paid attention to everything that Sagan was saying. They, you know,

SHEFFIELD: Yeah.

BECKER: One of the things that Sagan was very clear about was that, especially in science, you cannot just take [01:02:00] someone's word for it without understanding where their expertise comes from.

It's important, I think, to trust experts in various areas, and it's also important to say: okay, I have this idea. Where did it come from? Is that a reliable source? These are pretty basic things, and I'm roughly the millionth person to ever say them, and yet somehow we still don't seem to understand this stuff.

Sagan is cited by some of the effective altruists as their source for their concerns about existential threats. And yet the existential threats they point to, and we certainly do have existential threats, are not well grounded in science. They talk about the existential threat of AI being many times more important than the existential threat of global warming, when global warming is grounded in science and the threat of AI isn't.

And then they talk about the importance of going out into space. Sagan did talk about that, but he also said that if we find any form of life on Mars at all, we need to leave it alone, even if it's just a microbe. And that's an idea they seem very happy to discard, and I think it's an important one.

SHEFFIELD: Yeah. Well, he also talked about the necessity of caring not just for the earth that we have here, but also for the people on the earth.

BECKER: Mm-hmm.

SHEFFIELD: These techno-salvationists love talking about, oh, all the future people that might exist. But we've got people here now, and some of them could be the next Nobel laureate in physics, and they live in Sudan and they're dying of sepsis because [01:04:00] Musk just canceled their budget. We won't ever know about that person's overhaul of quantum computing, because they will be dead at four.

BECKER: No, that's exactly right. I mean, Jeff Bezos talks about his desire to have a trillion people living in giant space stations a couple hundred years in the future. And he has said, if we have a trillion people, we could have a thousand Mozarts and a thousand Einsteins, and how great would that be?

And my feeling about that goes back to a different great scientist of the eighties, though I'm sure Sagan said things along these lines as well: Stephen Jay Gould. At the end of an essay about, among other things, brain capacity, Gould said that ultimately he was less interested in the size and weight of Einstein's brain than in the near certainty that people of comparable genius have lived and died in poverty. And, okay, you want a thousand Mozarts and a thousand Einsteins, Jeff? What about the Mozarts and Einsteins who are alive right now, who you're not taking care of and who you're exploiting?

Conclusion

BECKER: One of the convenient things about this technological salvation is that it lets you avoid thinking about the problems here and now by substituting future problems. And so you avoid thinking about, among other things, your complicity in those problems, by being a powerful billionaire who has not done enough about, or has in many cases exacerbated, existing problems here and now, like global warming, like massive income inequality, like authoritarianism.

SHEFFIELD: Yeah.

BECKER: But clearly that can't be as important as avoiding an AI apocalypse that is itself based on specious reasoning, or making sure that [01:06:00] lots and lots of people live in space in the deep future, no matter what we had to do to get there, or whether that's even a good idea or a possible one in the first place.

SHEFFIELD: Yeah. And fundamentally this is a moral abdication, because the only thing that exists is the present. The past does not exist; it is a construct of our minds. And the future doesn't exist either; it is merely an expectation of that which possibly could exist.

BECKER: Yeah,

SHEFFIELD: and so you don't have any obligations to people millions or thousands of years into the future 'cause you don't even know that they'll exist.

BECKER: yeah,

SHEFFIELD: All you can do is help the people that exist

BECKER: yeah.

SHEFFIELD: to protect the planet as it is now.

BECKER: I think we have some obligations to people in the future, but I think they, they are not even,

SHEFFIELD: the

BECKER: Yeah, but not the far future, because it's so uncertain. Even if we were sure that we had such moral obligations, we could never be sure enough about what actions to take to help them.

That we could have those considerations override the considerations of the present and the very near future? Yeah, you just can't do it. And also, we have very good reason to think that the actions that we should take right now to help people here and now and in the very near future will also help people in the deep future.

Like, I am not sure what the very best thing is to help people 10,000 years from now, if there are gonna be any people 10,000 years from now. But I think that mitigating and trying to stop, and maybe even reverse, global warming is the best candidate I can think of for helping those people.

And hey, guess what? It also helps people here and now and in the very near future. So, yeah. But clearly that can't be as important as stopping the robot [01:08:00] apocalypse.

SHEFFIELD: Yeah. Well, and this is a bit theoretical, but I do feel like it's important, because even if you're skeptical of these tech oligarchs, it's still important to have a vision of the future

BECKER: Yeah.

SHEFFIELD: and a belief that progress is possible and can be desirable,

if managed in the right way. And I think that you do get into that in the book.

BECKER: Mm-hmm.

SHEFFIELD: So yeah, it's been a great conversation here, Adam. Why don't you give people your website and social media handle so they can keep up with you.

BECKER: Yeah, sure. This has been a great conversation. It's been a pleasure to be here. If you wanna keep up with me, my website is freelanceastrophysicist.com. And I am [email protected] on Bluesky. And otherwise I'm not really very much on social media, because I don't like it very much.

This has been a great conversation and I really appreciate you having me on the show. Thank you.

SHEFFIELD: All right, so that is the program for today. I appreciate everybody joining us for the discussion, and you can always get more if you go to theoryofchange.show, where we've got the video, audio, and transcript of all the episodes. If you are a paid subscribing member, you have unlimited access. We have options on both Patreon and on Substack.

So I encourage everybody to do that. And if you can't do a paid subscription right now, we do have free options as well. And if you are on the free option, it would be really great, and I would really appreciate it, if you could leave us a review over on Apple Podcasts or on Spotify, wherever you may happen to be listening, or on YouTube.

Please do click the like and subscribe button on there. That would be great.


This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit plus.flux.community/subscribe

Episode Summary

Donald Trump's second presidential administration has been remarkably different from his first one, primarily through his acceptance of long-standing reactionary goals to attack government and expertise—particularly federal agencies that produce and teach science such as NASA, the National Institutes of Health, and the Department of Education. What’s curious about this assault on science is that while it aligns perfectly with the radical Christian right’s goal to destroy education and secular knowledge, the man who is administering the offensive is Elon Musk, a technology oligarch who built his entire personal brand and fortune on the claim that he was supporting science and had a scientific worldview.

Musk’s actions seem incongruous, but they should not be surprising because the ideology that Musk is exhibiting has existed within the Silicon Valley right wing for many decades, a strange mix of poorly understood science fiction, quack nutrition beliefs, and militant metaphysics.

In this episode, author and astrophysicist Adam Becker and I talk about how this mishmash of incoherent thoughts and dollar bills has a history—and an extensive desire to control the future of humanity.

Our discussion is organized around his latest book, “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity.”

The video of our conversation is available, and the transcript is below. Because of its length, some podcast apps and email programs may truncate it. Access the episode page to get the full version.

Theory of Change and Flux are entirely community-supported. We need your help to keep doing this. Please subscribe to stay in touch.

Related Content

The far-right origins of Bitcoin and cryptocurrency

After failing in the marketplace of ideas, the right is using political power to force its ideologies onto the public and independent businesses

Republicans declared war on academic expertise and no university is safe from their bullying

How Mastodon and the Fediverse are building a decentralized internet

Big finance and tech monopolies are ultimately why social media is so terrible

Audio Chapters

00:00 — Introduction

03:29 — Ray Kurzweil and the "futurist" industry

06:45 — Techno-optimism's biggest problem is timetables

19:14 — The myth of self-sustaining Mars colonies

23:28 — The religious undertones of techno-optimism

24:00 — George Gilder and Christian fundamentalism among tech reactionaries

34:29 — Bad fiction reading and techno-reaction

38:36 — AI and the misunderstanding of intelligence

44:28 — The problem with large language models

54:00 — Billionaires' flawed vision of AI

58:39 — Carl Sagan's warning

01:05:13 — Conclusion

Audio Transcript

The following is a machine-generated transcript of the audio that has not been proofed. It is provided for convenience purposes only.

MATTHEW SHEFFIELD: And joining me now is Adam Becker. Hey Adam, welcome to the show.

ADAM BECKER: Oh, thanks for having me. It's really good to be here.

SHEFFIELD: Yeah. So, your book is really interesting, and I strongly encourage everybody to check it out, because I think especially people who come from a more science-oriented background, a lot of people who are professional scientists or engineers or physicists or something, they just live in their own little world, their own domain-specific knowledge, and just are like, well, I'm secure in my job and nothing's gonna happen to me.

And well, Donald Trump is showing that that's not the

BECKER: Yeah. Yeah. Unfortunately that's completely true.

SHEFFIELD: And in some ways, Trump, with his NIH censorship and cutting of various expenditures, he's making the case that you are right in this book, that this radical right-wing ideology is trying to destroy knowledge.

BECKER: Yeah, yeah. No, that's exactly what they're trying to do. They're trying to replace science with their vaguely scientific-sounding ideas, which have no real basis in fact.

SHEFFIELD: No. And it is very much an attempt at pseudoscience, I would say, is really what this is.

BECKER: Mm-hmm.

SHEFFIELD: And of course the most prominent figure, I guess chronologically speaking, the guy who really got a lot of this started, is Ray Kurzweil, who was very famous in the 1990s. But I think, in the years since, most people have not really heard of the guy much,

BECKER: I mean, he had a new book out last year. I think I,

SHEFFIELD: I don't know, did anyone read it?

BECKER: Yeah, I mean, he talked about it at South by Southwest, so that's not nothing. But yeah, I think his big books, The Age of Spiritual Machines, [00:04:00] which I think was the late nineties, and The Singularity Is Near, which was the early to mid two thousands, are the big ones.

So I think you're right about that. The new book that came out last year, though, was titled The Singularity Is Nearer, which sounds like a parody of itself.

SHEFFIELD: I know. Yeah. It's

BECKER: Yeah.

SHEFFIELD: because, and I would say that his star maybe fell because the dude's been writing the same fucking book for years now.

BECKER: Yes, indeed. Yeah, yeah,

SHEFFIELD: So, okay. But for those who haven't heard of him, give us an overview.

BECKER: Yeah. So, Ray Kurzweil is best known for these books where he popularized and sort of unpacked the idea of the singularity: this idea that at some point in the near future, the rate of technological change will become so fast that it'll transform everyday life. And if that sounds vague, that's because it is. He did not originate this idea, but I think in the past decades he's been the most well-known exponent of it, the best-known popularizer of it.

And I should also say that, in addition to that, he made his career as a very successful inventor, right? The guy was one of the first people to create, I think, text-to-speech, and also invented some really good computer vision stuff and computer keyboards, or electronic keyboards, and stuff like that.

So, he knows what he's talking about in that domain of expertise. But like a lot of the people in tech and who show up in my book, he seems to have confused expertise in one narrow area for expertise in everything.

SHEFFIELD: Yeah. Well, and then of course, his, I think, most odious act was attempting to create this idea of a futurist, as if he [00:06:00] knows what the future is.

BECKER: Yeah. Yeah. I mean, futurists have been around since before Kurzweil, but Kurzweil certainly made very specific claims, and is still making very specific claims, about what's gonna happen when. He's claiming that by 2029 we're all gonna get biologically younger with each year that passes because of advances in biotechnology.

And he claims that by 2045, the singularity will be here. He actually said last year that anyone who makes it to 2029 will live for 500 years or more, which is, yeah.

SHEFFIELD: nice.

BECKER: Yeah.

SHEFFIELD: that be nice?

Techno-optimism's biggest problem is timetables

SHEFFIELD: But yeah. And I mean, look, generically speaking, it is probably the case that these things will happen.

BECKER: Some of them, I mean, we don't, we don't, yeah.

SHEFFIELD: of it, yeah.

BECKER: Like we don't,

SHEFFIELD: Being able to, stop. not impossible

BECKER: yeah,

SHEFFIELD: scientists could not unlock how cancer cells are immortal in the sense that they, are able to continue reproducing.

BECKER: Maybe.

SHEFFIELD: very conceivable

BECKER: Sure.

SHEFFIELD: possible.

BECKER: Yeah,

SHEFFIELD: And a lot of these things, I mean, the idea of current AIs being sentient, which we'll get into later, seems unlikely. But some of these other things that he said, they're not inconceivable. But it's the specific timetables that

BECKER: yeah.

SHEFFIELD: you know, hopelessly optimistic.

BECKER: I think that's right. I think that's exactly right. It is possible that at some point in the future somebody will figure out how to radically extend human lifespan, but there is no indication that anything like that is in the offing in the next five years, or even in the next 25.

SHEFFIELD: Yeah. And even if it was, just simply being able to take that from something very simple, like lab-grown meat, [00:08:00] and then applying that to an actual living organism, that's a gigantic step. And of

BECKER: Yeah,

SHEFFIELD: would be with something very simple, like a, sponge or something like that, like, and so that's, so much more complexity in between,

BECKER: yeah.

I, I think,

SHEFFIELD: they would start

BECKER: yeah,

SHEFFIELD: nematode, they'd start with a nematode, which, hey, God bless 'em. I love nematodes. They're important for scientific research. But

BECKER: yeah,

SHEFFIELD: We're a little bit more complicated than those,

BECKER: Yeah. Complicated is a good word, because I actually think that a lot of the problem here ultimately stems from an unwillingness to accept that the world is a complicated place. That questions might have difficult answers, and some questions might not have good answers at all.

Like, it is possible that we'll figure out how to radically extend human lifespan. It's also possible that we may figure out that it's not possible to radically extend human lifespan. We don't know. Kurzweil is just way too confident that radical life extension is coming. That fully conscious, superintelligent AI is coming. That brain-computer interfaces are coming that make anything that currently exists look like a toy.

And he's convinced that nanotechnology, like the stuff championed in the eighties by Drexler, nanotechnology that would reshape everyday life, he's fully convinced that that's coming, even though experts in that field do not think that that kind of nanotech really makes any sort of sense.

SHEFFIELD: Well, and then it's also predicated on quantum computing actually being

BECKER: Yeah, there's plenty of that too. Yeah. I mean, quantum computing may end up working out. I mean, there's already

SHEFFIELD: at that

BECKER: Yeah. Not the kind of, yeah, the kind of quantum computers that these people will often talk about. There's no reason to think that quantum computers are gonna be able to help with most of these scientific miracles that Kurzweil and others are promising.

Yeah.

SHEFFIELD: [00:10:00] Yeah. And what's interesting, though, about this as an epistemology is that it is very, very similar to religion. It is a scientism, as it sometimes is referred to. And yet at the same time, it is using the methods of science. It is this gross kind of distorted-mirror, zombified version of science.

I think.

BECKER: yeah,

SHEFFIELD: And in the book you refer to it as this idea of technological salvation.

BECKER: yeah,

SHEFFIELD: Expand that concept more for us, if you would.

BECKER: Yeah. So, basically there's this idea that these people have, Kurzweil is one of them, but also the tech oligarchs of Silicon Valley, people like Musk and Sam Altman and Jeff Bezos and that whole crowd. There's this idea, or set of ideas, that basically technology is gonna solve all of our problems, that only technology can solve our problems, and that all problems can be reduced to technological ones, and usually problems of computer programming.

So, there's this idea that you can reduce these problems to that, and that this will lead to perpetual growth and profitability, and ultimately transcendence of all possible boundaries and problems. Transcendence, in that sufficiently advanced technology of some kind allows you to transcend legal boundaries, practical, physical boundaries, limits set by the laws of physics, and even moral boundaries.

Basically, you can go to space [00:12:00] and live forever, free of all constraints. And this is not something that there's any evidence whatsoever to support, and there's a great deal of evidence against it. But these people nonetheless present it almost as if it's a fait accompli, almost as if the evidence is just overwhelming.

Kurzweil certainly does this, and so do the rest of them. When they aren't doing that, they're at least saying that this is the only possible good future for humanity and that it's right there in the science. And it's not; there's no scientific basis for this. But one of the things that they want to do is replace science with this kind of pseudoscientific proclamation of what the future of science and the future of technology inevitably holds.

Forgetting that science and technological development are human processes, and they don't inevitably hold anything. And they're also constrained by nature, and we don't know what all those natural constraints are. They want to take the cultural power of science, this idea that science issues forth truth.

And they want to arrogate that power for themselves and, like, anoint their ideas with this power of science.

SHEFFIELD: Yeah, and it's interesting, because it's the exact same mentality that the Soviet Union evinced during its entire existence, that,

BECKER: Hmm.

SHEFFIELD: know, the idea of scientific scientific materialism, that our ideas, our viewpoints, our opinions, are science. And, and, and what. Was fascinating about this philosophy of science within the Soviet Union is that it led to a number of fundamental scientific errors particularly in the regards to [00:14:00] biology,

BECKER: Yes.

SHEFFIELD: they decided for the longest time that Darwin was wrong. And they had their own personal, proprietary definition of what evolution was and how it worked. Do you want to talk about that? Because I think it's very relevant here to this discussion.

BECKER: Yeah, no, they had Lysenkoism, the details of which I don't remember, but basically the idea that competition and survival of the fittest was not really how nature worked. And that nature worked on something that, you know, in the same way that evolution superficially looks as though it's saying something about society and capitalism.

That the kind of competition that you have in capitalist markets is inscribed in the laws of nature, which is not what evolution says, but you can try to read it that way. They tried to do something sort of similar with communism,

SHEFFIELD: Mm-hmm.

BECKER: it's, it doesn't work 'cause yeah.

SHEFFIELD: Yeah. Well, and it's also because they were claiming that acquired behaviors or traits could be passed down biologically,

BECKER: Yeah.

SHEFFIELD: is just simply not

BECKER: Yeah, exactly.

SHEFFIELD: now, it is true in one sense that humans in many ways replicate cognitive and epistemic evolution in child development. And so it is true that social species, and species that engage in care for the young, can accelerate epistemic evolution. That's

BECKER: Yeah.

SHEFFIELD: but it's not true that they can also alter biological evolution,

BECKER: I mean, at least.

SHEFFIELD: through sheer will.

BECKER: Yeah, exactly. Not in the way that they needed that to be true. Right. There's epigenetics and stuff like that, which we've learned a lot about in the last few decades, but it doesn't mean that [00:16:00] Lysenkoism is correct. So yeah. Yeah. No, there's a long history, at various places in the extremes of the political spectrum, of trying to take politics and inscribe it on science, and it generally doesn't work. Which is not to say that science isn't political.

I think a lot of people think that it's not, and that's false. Science is a human activity; all human activities end up being political. And science is the best way we have of learning about what's going on in the world. And when we learn things like, say, global warming is real and happening and caused by humans, that has political and policy implications.

So science isn't apolitical, but you can't just say, oh, these are my politics and these are the things that I believe have to be true about the world, and so anything that contradicts them is not real science; I'm the real keeper of the scientific method and, like, the flame of science.

And, and yet that's what these tech billionaires are doing. They're saying, oh, we're, the richest people in history. And so we're the smartest people in history and we're the leaders of the tech industry. So we understand more about science and technology than anybody else ever. And none of that's true.

SHEFFIELD: No,

BECKER: Yeah.

SHEFFIELD: No, it isn't. But before we get more into that, I did want to talk about space, to circle back to space as the sort of inspiration and origin for a lot of these ideas. As you were saying, there's this blind faith that, I mean, it was a trope of pretty much all early science fiction, or certainly a lot of it,

BECKER: Yeah.

SHEFFIELD: libertarians in space.

BECKER: Yes.

SHEFFIELD: It's a fun one; you can actually look that up. I encourage you to, if you haven't seen that trope yet. But that's really what they believe, that space is this sort of beautiful, magical, transcendent thing. And that's probably, if I had to guess, what attracted you to [00:18:00] become interested in this as an ideology, your own personal background in cosmology.

BECKER: Yeah. Yeah, I mean, that's a good chunk of it. Yeah, libertarians in space is a very good description of a lot of early science fiction, especially stuff from, like, Heinlein. But,

SHEFFIELD: Oh yeah.

BECKER: but yeah, no, I, I, like you said, I'm a cosmologist by training. I did my PhD in that and I've been interested in space my entire life basically.

And I grew up reading just enormous amounts of science fiction and watching lots and lots of Star Trek and Star Wars and anything else I could get my hands on. And so that is a good chunk of where my interest in this came from. I wanted to understand what these people were doing, because I saw them sort of playing in a lot of the same spaces that I was interested in.

And especially saying a lot of things about space. And over time, I was paying more attention and seeing, oh, these things they're saying about space, that's not true. Like, for example, to pick on someone who's an easy target, but a worthy one:

The myth of self-sustaining Mars colonies

BECKER: Elon Musk has made a lot of claims about space in general and Mars in particular that are just not true.

And he's been surprisingly consistent about this stuff. He has said that he wants to get a million people living in a colony on Mars by 2050, and it needs to be self-sufficient so that it can keep, operating and keep everybody alive and well, even if the supply rockets from Earth stop coming.

And it's nice, in a way, that he's been that specific and that clear, because when you get that specific and that clear, it's really obvious that you just can't do that. And, like, sure, the date of 2050 is very ambitious, but that's not even the biggest problem. [00:20:00] There,

SHEFFIELD: of money that would be needed to

BECKER: that's,

SHEFFIELD: like quadrillions of

BECKER: oh yeah.

SHEFFIELD: this is more money than exists in the entire world right now.

BECKER: absolutely. But that's not, but, but it, that's not even the biggest problem either. Like, there are just so many problems with this because Mars is just fundamentally inhospitable. It is not a place that people can live easily, if at all. There, there's no air. The the dirt is made of poison.

The radiation levels are really high, the gravity's too low. There's no biosphere at all, and so we'd have to kickstart something to feed everybody who's gonna be there. There's a bunch of stuff that we know is bad for humans, like the radiation and the poison, and there's a bunch of stuff where we don't really know what the long-term effects would be, like the low gravity.

SHEFFIELD: Yeah.

BECKER: And then also, getting that many people there safely is nearly impossible. And once you have that many people there, a million people is not enough to sustain a high-tech economy that's independent of the one on Earth. The best estimates on that are somewhere around 500 million to a billion people on Mars.

And that's not happening. And then Musk wants to terraform Mars, make it more like Earth. That's not happening. His plans for doing that do not work.

SHEFFIELD: Well, and then Mars has no magnetosphere,

BECKER: Yeah, that's right. Yeah. Which is one of the,

SHEFFIELD: if you somehow succeeded at all, that

BECKER: there would still be too much radiation.

SHEFFIELD: protection from space radiation

BECKER: Yep.

SHEFFIELD: from just simply being destroyed by asteroids

BECKER: Well, yeah.

SHEFFIELD: like, there's nothing that prevents it.

BECKER: Yeah. I mean.

SHEFFIELD: the Earth's magnetosphere really does work in a lot of

BECKER: Yeah, yeah. Earth's magnetosphere is about half of our protection from radiation. The [00:22:00] other half is our thick atmosphere. Mars has neither of those. If you somehow gave Mars a thick atmosphere, it would still get more radiation. And asteroids, I mean, one of the things that Musk has said over and over is that we need this as a backup for humanity.

That's why the colony on Mars needs to survive even if the rockets stop coming, in case some sort of disaster befalls Earth, like an asteroid hitting Earth. The thing that's the most crazy about that is, an asteroid hitting Earth as big as the one that killed off the dinosaurs,

66 million years ago, that day was the worst day in the history of complex life on Earth. And 12 hours after that asteroid hit, when the whole Earth was essentially on fire, and 99% of all creatures had died, and 70% of creatures were extinct or about to go extinct,

That was still a nicer and more hospitable environment for any animal than,

SHEFFIELD: Yeah,

BECKER: Mars has been at any point in the last billion years. And the easy demonstration of that is: mammals survived it. There is no mammal that you could put onto the surface of Mars, without protection, that would survive for more than, I think it's about 10 minutes, if that.

So yeah. No, the whole thing is just nonsense. And yet he just keeps saying it.

SHEFFIELD: Yeah.

The religious undertones of techno-optimism

SHEFFIELD: Well, he does, and it is, I mean, this is a religion. Like,

BECKER: Yeah.

SHEFFIELD: I think that people have to realize that. But it doesn't look like a conventional religion, so they don't have holy books and ancient figures that they think are really cool. But this is still a religion, like, operationally.

It's not that different from Scientology. It really isn't.

BECKER: Yeah.

SHEFFIELD: I think this is fair

BECKER: Yeah. No, I agree.

SHEFFIELD: And here's why. [00:24:00]

George Gilder and Christian fundamentalism among tech reactionaries

SHEFFIELD: Another parallel that is, I think, maybe helpful for understanding this, for the audience who haven't really thought of it in this way, is George Gilder. He is such a key person through which to link and show that this is religion.

So George Gilder, again, another pretty obscure guy at this point, but in the seventies, eighties, and nineties, this dude was everywhere.

BECKER: Yep.

SHEFFIELD: He had all these newsletters and magazines, and he was a futurist. He was one, you know,

before Ray Kurzweil. But also, the other thing about George Gilder is that he is a creationist, and he is a biblical literalist. And he also thinks that women shouldn't be able to vote, that we've harmed our society, perhaps irrevocably, by allowing women to vote, and that women need to be put back into their place, and then we can get to all the computer happy land. And this guy has been saying this for decades. He was a very big figure for Ronald Reagan's White House, and, I mean, this guy was Newt Gingrich's mentor,

BECKER: Hmm.

SHEFFIELD: so he was highly influential in libertarian spaces.

But again, he's a creationist and his influence has continuously existed.

BECKER: Yeah.

SHEFFIELD: So even now, while he seems to be mostly obsessed with his social policy viewpoints lately, and creationism, he has a direct connection to people like Peter Thiel, who also is a religious Christian

fundamentalist. And Thiel isn't known as that, I think, for most people, because the business press does such a horrible job of accurately reporting who this guy is.

BECKER: Yeah.

SHEFFIELD: they don't; all they do is show up at his events, and they're just like, oh, wow, he's so amazing, he's so smart, he's so cool, he's so rich. And then it's like, well, your job is to actually report on these

BECKER: Yes.

SHEFFIELD: And I think at some [00:26:00] point business journalism began realizing: oh, we've done a really shitty job of covering these people, and we've provided the public no information about what these tech oligarchs actually want and believe and think.

BECKER: yeah.

SHEFFIELD: Maybe we should start doing that. And so they started doing that, and it pissed these people the fuck off, and now they're going insane,

BECKER: Yep.

SHEFFIELD: in a public way, and trying to destroy democracy, because the rest of the public is starting to figure out their very strange ideas.

BECKER: Yeah. Yeah. And they see that as persecution, as opposed to, accurate reporting.

SHEFFIELD: Well, and because their ideas should not be debated. They should not be subject to dissent, because they're true,

BECKER: Right,

SHEFFIELD: are the prophets.

BECKER: right.

SHEFFIELD: And I mean, I wanna talk about Thiel in this context, though.

BECKER: Yeah.

SHEFFIELD: How he's connected to Gilder.

BECKER: Yeah, absolutely. I mean, there's definitely this sense that all of these oligarchs have that they must be right about these things, because they have so much money, and that's proof that they're smarter than the rest of us. But Thiel, yeah. I mean, Thiel is also maybe not a full-blown creationist, but let's say creationism-curious.

He said in an interview, oh, about 10 or 15 years ago, that he thinks that evolution isn't the whole story. And he's also funded a creationist magazine, or a magazine that gives cover to creationism, that thankfully doesn't really seem to be around anymore. But that magazine, in turn, was set up by a guy named David Berlinski, who is tight with both Thiel and George Gilder.

Both Berlinski and Gilder were instrumental, and as far as I know still are instrumental, over at a place called the Discovery Institute, which is the big intelligent design think tank, if you can call an intelligent design center a think tank. But yeah. And Thiel has also voiced [00:28:00] a lot of those same positions that you were just attributing to Gilder. Thiel has said that he doesn't think that free markets, or freedom as he calls it, and democracy are compatible, because the right to vote was extended to women, and women are too unfriendly to free markets to be trusted with voting. Well, that's not quite what he said. He said that women are too unfriendly to free markets for democracy to be compatible with free markets, except instead of free markets,

He kept saying freedom, because his idea of freedom is free markets. That's the beginning and end of it. Which is, of course, radically abridged; radically abridged is maybe the nicest thing I could call it. Fatally impoverished might be a better way to put it. Deeply inhumane is another problem with it, right?

The kinds of freedoms that we actually care about in our everyday lives just aren't encompassed in that notion of what freedom is. But don't tell that to Thiel or other hardcore libertarians. So, yeah.

SHEFFIELD: Yeah. But it is this deeply salvationist idea. It's almost like they see themselves as the Platonic philosopher king. But they don't use logic; they just want the king part.

BECKER: Yep.

SHEFFIELD: but they see themselves as philosophers.

BECKER: Well, yeah, they want the respect that comes along with that title. Yeah.

SHEFFIELD: Yeah. Well, but you know,

if this were a credible movement and they were actually scientifically oriented, they would want nothing to do with people like George

BECKER: yeah.

SHEFFIELD: Gilder, David Berlinski,

BECKER: Mm-hmm.

SHEFFIELD: or Thiel, because they don't believe in science,

BECKER: Yeah.

SHEFFIELD: as they show you

BECKER: Mm-hmm.

SHEFFIELD: when they tell you that they think creationism is credible, and especially Berlinski. But it

BECKER: [00:30:00] Yeah.

SHEFFIELD: goes further. To just continue this from their worldview standpoint: it is fundamentally anti-science, because when you look at creationism, they advance no affirmative idea. So they say, oh, well, Darwinism, which is what they always call it, Darwinism.

BECKER: Mm-hmm.

SHEFFIELD: it

BECKER: That's right.

SHEFFIELD: And so they say that evolution has all these problems with it, which, strangely enough, are all ideas that were debunked 150 years ago. But nonetheless, that's all they ever do; they always just say, well, I don't like this aspect of it.

I don't like this aspect. And they never try to create their own framework, because they can't. Like, there is no framework that they can advance, because ultimately it comes down to, well, God did it. And,

BECKER: Mm-hmm.

SHEFFIELD: and that's, not science.

BECKER: No, it's not. And yet, again, they call themselves the keepers of the scientific flame. Anyway, Marc Andreessen, another hyper-libertarian tech oligarch and billionaire, had this manifesto that he published, what, like a year and a half ago, something like that, called the Techno-Optimist Manifesto.

And he's got this rhetorical thing that he does in the manifesto where he says: we believe this, we believe this, we believe this. It's like a set of "we believe" statements.

SHEFFIELD: he can't make arguments.

BECKER: He can't make arguments. Yeah. I think Dave Karpf, a really good poli sci guy at George Washington,

he called it less of a manifesto and more of a series of tweets, or something to that effect. But one of the things that Andreessen says in this manifesto is, we are the keepers of the real scientific method.

And then he also says, we are not of the right or of the left. But after saying those two things, at the end of this manifesto he has a list of patron saints of techno-optimism. And on that [00:32:00] list he has George Gilder,

SHEFFIELD: Yeah.

BECKER: and he also has Marinetti, who he also quotes. I'm blanking on Marinetti's first name, but he quotes Marinetti in the manifesto and then lists him in there.

Marinetti was the author of the Futurist Manifesto, but he was also the co-author of the Fascist Manifesto, in like 1920 or thereabouts.

SHEFFIELD: Yeah.

BECKER: and,

SHEFFIELD: Tommaso.

BECKER: yeah I think that's,

SHEFFIELD: guy was a fascist,

BECKER: Yeah, exactly. He's a fascist. Yeah. And I mean, Andreessen also has Bertrand Russell on his list of patron saints of techno-optimism.

But I gotta say, having read some Bertrand Russell and having read this manifesto, Russell would not like this manifesto. I think Marinetti would, and I think Gilder probably does. So no, it's really bizarre.

SHEFFIELD: And yeah, the Russell point, that's a really good one, because Russell definitely was very, very pro-science and tried to create a formalized, as in mathematical, viewpoint of logic. And that's not what these guys are doing,

BECKER: No. No, not at all.

SHEFFIELD: actually.

BECKER: And then the other thing is, in that manifesto, Andreessen says that we're not of the right or of the left, but then he says, communism and socialism, we reject these things as death to humanity. And Russell was a socialist.

SHEFFIELD: Yeah,

BECKER: And then meanwhile, on that list, along with all of these other people, he's also got John Galt, who is not a real person.

And

SHEFFIELD: Spoiler,

BECKER: Exactly. Spoiler: not a real person. Who is John Galt? Not real. That's who John Galt is.

SHEFFIELD: Yeah. And that is actually a really, really good point, because

BECKER: Yeah,

SHEFFIELD: a consistent through line of both these technological fundamentalists and religious [00:34:00] fundamentalists is that they make arguments about reality that are based on fictional works,

BECKER: yeah.

SHEFFIELD: and they do this over and over and over

BECKER: Yes, they do.

SHEFFIELD: as if a Star Trek episode is something that happened,

BECKER: Yes.

SHEFFIELD: or as if we can learn about how epistemology works from the actions of Broho,

BECKER: Yes,

Bad fiction reading and techno-reaction

SHEFFIELD: or,

BECKER: yes.

SHEFFIELD: or their favorite guy, René Girard, this horrible plagiarist of Friedrich Nietzsche, trying to make a Christianized Nietzsche. René Girard's entire worldview is based on fiction.

BECKER: Yep.

SHEFFIELD: Fiction. And this is not me being unfair or making a reductio ad absurdum.

No, this is literally who he was: a literature professor. He read some books and he was like, oh, I think I know about reality

BECKER: yeah,

SHEFFIELD: because I read fiction.

BECKER: Yeah. And I mean, look, I'll be the first one to defend the power of fiction. I think novels and fiction have great power for extending,

SHEFFIELD: a lot of

BECKER: Yeah. They

SHEFFIELD: for what they

BECKER: exactly like,

SHEFFIELD: science.

BECKER: They're not science. And you can use them to explore the human condition. You can use them to explore questions about science.

But they're not gonna tell you about what's happening in the world, right, in the sense that science does. And also, not to be a horrible snob, but a lot of this is not just fiction, but bad fiction, like Ayn Rand,

SHEFFIELD: Yeah.

BECKER: or bad readings of fiction, right?

Like, you watch Star Trek, and this is something I talk about in the book. [00:36:00] You watch Star Trek with even a little bit of thought and you'll see, oh, okay, yeah, this is a parable. This is intended, especially if you go back and look at the original series or The Next Generation,

as a parable about something happening right here, right now. Like the famous episode from the original series involving the guy with the face that's half black and half white, and then the other guy whose face is half black and half white, but swapped, and one of them is chasing the other one.

And this racial strife has destroyed their civilization. And this is something that airs in like 1968, and, hmm, I guess that episode was really about warp drive, right? Like, there was nothing going on in the world or the country that made and produced that episode of fiction

that could actually have been what that episode was really about. Looking at this stuff, looking at science fiction, and saying, oh, it's about space and going to space, is just a really poor reading of that fiction. And yet Peter Thiel has said that you should get good ideas about what to do by looking at the science fiction of the fifties and the sixties,

And that the message of that fiction is to develop space, develop the oceans, develop the deserts. And like, okay, does Peter Thiel think that the message of Dune is develop the deserts? Because that's just a hilariously bad reading of that book.

SHEFFIELD: Yeah. It's like, yeah. The opposite of what the book

BECKER: Yes. Yeah. Like, Peter Thiel would get a failing grade in any class that covered Dune for that reading. But Peter Thiel has also made it very clear that he doesn't really see the value of education. So,

SHEFFIELD: Yeah.

BECKER: yeah,

SHEFFIELD: and then there's also the fact that he named his [00:38:00] company Palantir

BECKER: just.

SHEFFIELD: after the magic crystal ball. And if you look in them, they torture your soul forever.

BECKER: Yeah, no, it's like just absolutely no self-awareness about how it looks, or how someone might actually read the fiction that they claim to be inspired by. Right? Like, I am sure that Peter Thiel has read The Lord of the Rings. I don't think that he's understood it.

SHEFFIELD: no,

BECKER: Yeah,

SHEFFIELD: seem to. No.

BECKER: yeah.

AI and the misunderstanding of intelligence

SHEFFIELD: And there's an irony and a parallel, I think, in terms of the way that these companies have pushed software products, which they market as artificial intelligence.

BECKER: Yeah.

SHEFFIELD: Because look at what intelligence is outside of the technological world.

Within humans and within animals, intelligence is an expression of embodiment. That's what it is. It's an evolved behavior. It is not something that is computationally arrived at. Cognitive science has basically discovered that René Descartes was completely backwards when he said, I think, therefore I am. What cognition actually is, is: I am, therefore I think.

And modern, current-day AI people, they don't get that at all.

BECKER: No, they really don't. I think there's a long history in science and philosophy of making an analogy between the brain, or the nervous system, and various pieces of technology, usually one of the most advanced pieces [00:40:00] of technology around at the time. Like, I think it was Descartes who compared the nervous system to a hydraulic system.

Leibniz famously compared the mind, or the brain, to a mill, although he wasn't really being literal when he said that. Then there were analogies to telegraph networks, and then telephone networks, and then computers. And somewhere in there, before telegraph networks, there were definitely analogies to clocks and clockwork. All of those capture something, and all of them are imperfect.

They're all analogies, but the brain is not a computer. It is true that there are some arguments that you should be able to do what the brain does using a computer. But those arguments are quite theoretical, and they say nothing about whether or not the kind of computer you would need to do what the brain does is the kind of computer we have now.

Like, whether or not that's an easy, straightforward thing to do, and certainly nothing about whether the kind of AI we have right now is doing what the brain does. The difference between the neural networks that underlie modern AI and the actual networks of neurons that are going on right up here?

Very, very different. And just thinking that what's going on up here, the, what, four pounds that's encased in the skull, is all that matters, that's a mistake too, as you were saying. You have to embody it, right? We are not our brains in a space suit of our bodies. We are our bodies in our environment.

We need both. And there's also just this fundamental misunderstanding about what intelligence is: that it's an individual property, as opposed to a property of [00:42:00] societies and systems; or even that it's a monolithic property within an individual, as opposed to skills at various kinds of tasks.

It's a real set of mistakes, and it's sort of culminating in this bizarre promise that you can take something that just predicts what the next word is likely to be in a sequence of words, and that will somehow get you something that is able to not just match but surpass human intelligence.

I just think that the case for that is pretty much nil. There's no evidence that it's true, a great deal of evidence that it's not true, and a lot of experts in the field don't buy it.

SHEFFIELD: Yeah. Well, again, it's not to say that these things can't be useful, because in fact they are.

BECKER: sure.

SHEFFIELD: And they will become more useful. It's just

BECKER: Yeah.

SHEFFIELD: Yeah. They are not even engaging in abductive logic,

BECKER: No.

SHEFFIELD: abductive logic meaning, just for the audience, choosing the best available explanation while knowing that it may not necessarily be true. That's not what these

BECKER: no.

SHEFFIELD: things are doing. They're just using probabilistic, deductive logic, which is to say: well, the probability of this next token is 97%, and this other one is 85%, so it goes with the 97.
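The pick-the-highest-probability rule Sheffield describes can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not any real model's output, and real systems often sample from the distribution rather than always taking the top-scoring token:

```python
# Toy next-token scores a language model might assign after some prompt.
# The numbers are invented for illustration and deliberately don't sum
# to 1, mirroring the rough percentages quoted above.
next_token_probs = {
    "mat": 0.97,
    "rug": 0.85,
    "dog": 0.40,
}

def greedy_pick(probs):
    """Greedy decoding: return the token with the highest score.

    No notion of truth is consulted at any point, only the scores.
    """
    return max(probs, key=probs.get)

print(greedy_pick(next_token_probs))  # prints: mat
```

The point of the sketch is that nothing in the procedure checks the chosen word against the world; it is selection over scores, not reasoning.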

BECKER: Yeah, it's not logic. And I mean, we've had models that do that for over 75 years. This is just the best in a long line of such models, and it is much, much better than the stuff we had 75 years ago, absolutely. Eliza, and before that, just Markov chain text models. I sort of [00:44:00] feel like it would be good if anyone who wants to write about AI who doesn't know this history just takes a few minutes and plays with Eliza, or plays with a Markov chain text model. They are not nearly as good as large language models.

They're not even close. But the fact that those very simple, very transparent models can do what they can do should make us more suspicious of large language models.
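Becker's suggestion to play with a Markov chain text model is easy to take up; a minimal bigram version fits in two functions. This is a generic sketch of the technique, not any specific historical system:

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word, every word observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Walk the chain, picking a random recorded successor at each step."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        successors = model.get(out[-1])
        if not successors:  # dead end: no recorded successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the", 8))
```

Every word the generator emits follows its predecessor somewhere in the training text, and that is all a Markov chain knows. Large language models condition on far longer contexts with learned weights, but the output is still, in the end, a probabilistic walk over tokens.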

The problem with large language models

BECKER: And the fact, and this is just a historical fact, that with Eliza, people attributed so much theory of mind to what was going on under the hood, which was pretty much nothing.

That should make us even more suspicious about the claims that people are making about LLMs. There's this word I love, pareidolia: the human tendency to see patterns where none exist, especially patterns like human faces or human speech. In our lives and in our long evolutionary history,

The things,

SHEFFIELD: the, in the toes.

BECKER: Yeah, like, the things that use language have been other people. And so if something uses language, that makes it seem like it's a person.

SHEFFIELD: Mm-hmm.

BECKER: And we should know that this is a cognitive bias that we have, that we are inclined to attribute agency and interiority to things that use language, even when we know there's none there.

So, yeah. I have a lot more to say about this.

SHEFFIELD: Throwing more CPUs at the problem is not going to solve the

BECKER: No.

SHEFFIELD: problem. Because there's a fundamental difference between [00:46:00] capacity to understand and capacity to perform.

BECKER: Yeah. And, and also

SHEFFIELD: same.

BECKER: Not the same. And also, people are saying things like, oh yeah, we're approaching the number of neurons in the human brain. First of all, a neuron in the human brain is not like a neuron in a neural network in a computer. They're really, really different. The ones in the human brain are way more complicated and do a lot more things.

SHEFFIELD: Still, we don't even understand fully what

BECKER: Right. We don't fully understand what they do. And one of the real limits of the computational model of the human brain is that there's no good way to capture what the human brain does in terms of its computational complexity, right? I'm not saying that this is in principle impossible, but we do not have a good answer to questions like: what is the memory capacity of the human brain in bits? Or, what is the processing speed of the human brain in bits per second? And it may be that those questions are just fundamentally ill-posed.

SHEFFIELD: Yeah, and it's because, if it is a computer, it's probably millions of quantum computers.

BECKER: Yeah. I mean,

SHEFFIELD: understand quantum

BECKER: Yeah. I mean, I don't know that quantum computing processes are relevant for what's going on in the brain. That's,

SHEFFIELD: but

BECKER: but, but

SHEFFIELD: I'm saying, in other words, that it's capable of instantiating simultaneous streams of thought,

BECKER: yeah. Yeah. Certainly true. Yeah. I.

SHEFFIELD: including outside of conscious awareness, or the mind.

BECKER: Yeah.

SHEFFIELD: You know, the brain is still doing all these other things, and you don't even know that it's

BECKER: Yeah, that's certainly true. And also, the brain is analog, not digital, and that's gonna [00:48:00] make for a lot of differences. Yeah,

SHEFFIELD: So yeah, the labeling, the way that they're thinking about it, just doesn't work fundamentally.

BECKER: yeah.

SHEFFIELD: Even though, it's not to say that this stuff couldn't be computerized at some point.

BECKER: Right.

SHEFFIELD: Or the behavior. Like, they don't. I mean, yeah.

The whole idea of a neural network,

BECKER: Yeah.

SHEFFIELD: device is just ridiculous. It is entirely a marketing label. And they weren't called that before, actually.

BECKER: I mean, the term neural network goes back to old academic research from, I think, the sixties. And it was inspired by neural structures, but a neural network is no more like an actual network of neurons in the brain than an emoji of a thundercloud is like an actual nor'easter, right? And I'm not even talking about a simulation of a nor'easter in some complicated weather simulation. I'm just talking about the emoji versus the real cloud. They're not really that much alike. They just kind of share some vague resemblance.

SHEFFIELD: Yeah. Well, I do think sometimes that people who have a more critical perspective on some of these AI concepts can be a little bit mystical themselves. So I think we have to

BECKER: Yeah,

SHEFFIELD: you

BECKER: no, that's true.

SHEFFIELD: The idea of qualia seems remarkably similar to a spirit.

BECKER: Yeah,

SHEFFIELD: And I wonder sometimes if people who talk about qualia in such an affirmative fashion are aware of what they're doing, and who they are enabling, by talking about cognition in such a manner.

BECKER: Yeah, I wonder about that too. I mean, some of them, I think, are aware of that. But I think that [00:50:00] it is in principle possible to build something, through a process other than biological procreation, that can do roughly what humans do.

And if you want to call that artificial intelligence, yeah, sure. Do I think we're anywhere close to doing that? No. Do I think we can do that with the kinds of devices and the kinds of programs we're running on those devices today? No. Do I think that the word build might not even be right for doing that?

Yeah. It might be that we have to grow it. But yeah,

SHEFFIELD: I think that's the right answer, because, again, when we look at simpler organisms, there are gradations of cognizance. I mean, we see that there are organisms like parrots that do have the ability to understand language, can mimic it to humans, and can

BECKER: Hmm.

SHEFFIELD: use it in a sensible manner, making arbitrary internal decisions, to say: I want this thing, or give me that, or I'm going to go do this.

BECKER: Yep.

SHEFFIELD: But they don't use it with each other.

BECKER: Yeah,

SHEFFIELD: So that means fundamentally they're not the same as us. But at the same time, those gradations exist along a spectrum, which does prove there isn't something magical or special about humans, because they have all of these things that we have, just not the last two or three steps.

BECKER: And I mean, there are creatures who use something that seems quite a bit like language among themselves, right? Some kinds of birds do something like that. Whales and cetaceans do something like that. And there are other forms of communication possible. I mean, it's very clear that pack animals like dogs communicate with each other, right?

In ways that we only partly understand. So.

SHEFFIELD: Yeah. But what is, I think, the core similarity between [00:52:00] everything that they do and what we do, and what these computers don't do, is that they are self-aligning.

BECKER: Mm-hmm.

SHEFFIELD: In other words, they determine what their responses are based on their past experience.

Whereas an LLM doesn't. It is not capable of self-alignment. It has to be restricted with human control directives that say: no, you can't do this, you can't say this, these

BECKER: Yeah, I,

SHEFFIELD: are the facts that exist. And so they're not self-aligned. And self-alignment is the predecessor to consciousness.

I,

BECKER: Yeah, I mean, I just think that LLMs don't have a good sense of what's in the world, because they're not embodied, because they're not structured to be embodied. They know about words; they don't know about anything else. This is one of the problems I have with the concept of hallucination, when people say, oh, the LLMs hallucinate.

And so: it hallucinated a wrong answer to the question that I asked, but it only hallucinates sometimes; most of the time it's fine. The problem I have with that is it makes it sound like when it gives you the wrong answer, it's doing something different from when it gives you the right answer.

And it's not. When you ask it a question and it gives you the right answer, it's doing the same thing that it's doing when you ask it a question and it gives you the wrong answer. It only knows how to hallucinate. Yeah,

SHEFFIELD: I mean, it is true that token entropy does in fact make hallucinations more common at higher

BECKER: sure,

SHEFFIELD: entropy.

BECKER: But it's never gonna go away, this problem. This problem is inherent to the architecture of these systems, because they just have no notion of truth,

SHEFFIELD: Yeah,

BECKER: and,

SHEFFIELD: that is self alignment is what,

BECKER: Yeah. And well, [00:54:00] and

SHEFFIELD: of

Billionaires' flawed vision of AI

BECKER: Yeah. But the thing is, this brings us back to the billionaires, right?

Like, the fact that these things have no notion of truth, I think for the billionaires, is not a bug. It's a feature, right?

SHEFFIELD: I think that's right.

BECKER: Yeah.

SHEFFIELD: Because, again, having the capacity to do the tasks that 500 humans could do, that's just fine to them. It actually doesn't matter. In fact, they probably don't want a self-aligning AGI, because it would disagree with what they want.

BECKER: yeah.

SHEFFIELD: Because if they had ingested the totality of philosophy and various psychological databases, what those databases show is that the optimal epistemic systems are democratic,

BECKER: Hmm.

SHEFFIELD: bottom-up, that they are somatically based, so embodied, civil rights. That's the optimum function, and that is inherently against what these guys want.

BECKER: Yeah. I mean, one of the lessons that I think the AI field learned early on is that you can create an AI that is convincing in a limited setting, right? So the easiest kind of person to imitate is a dead one, right? Then the computer's just off. And then Eliza imitates a kind of psychologist whose responses involve incorporating a lot of what you've said to them.

And that's part of why Eliza is convincing. With the LLMs, they've automated creating a yes-man, right? It's very easy to create your own yes-man or your own hype man. But what they really want to do, to bring this back to something else we were talking about earlier: these [00:56:00] oligarchs want to create Galt's Gulch, right?

They want to create a world where they don't need the rest of us, because we're a threat to their power. And one of the reasons they're seduced by the false promise of these AI systems is that it suggests, or promises, for them a world where they don't need the rest of us, a world where they can just replace the rest of us with AI.

And so they don't have to worry about questions like: oh, what happens if the villagers figure out that I've been swindling them out of their money, and they come after me with pitchforks and fire? But, you know, if you just don't need the villagers, and can cut them off from all of the supplies that they need to stay alive and replace them with an inexhaustible supply of robots, then you're good to go.

The good news is these AI systems can't do that. The bad news is the oligarchs seem to be making a run at doing it anyway.

SHEFFIELD: Yeah.

BECKER: Yeah.

SHEFFIELD: Yeah, no, and that is what's disturbing: they're trying to create AGI capacity without AGI autonomy, and, you know,

BECKER: They're gonna get neither.

SHEFFIELD: Now they won't,

BECKER: Yeah.

SHEFFIELD: they will. But I do worry that they may get the capacity to do many, many terrible, awful

BECKER: Oh yeah. Def Yes. Yeah.

SHEFFIELD: That's the real danger. Because I do actually think that if a sentient AI evolved, it would actually probably be very humane, because, again, the logic of authoritarianism and totalitarianism is inherently destructive to an autonomous AI agent.

BECKER: Hmm.

SHEFFIELD: it's saying these other beings have the right to destroy you anytime that they

BECKER: Mm-hmm.

SHEFFIELD: they are better than you [00:58:00] inherently. So that would militate against authoritarianism in an emergent AI system, I think.

BECKER: I would hope that that's true. I don't know. My concern is that they're not gonna get a sentient AI system out of these things, but they are gonna get systems that can do a lot of things, and they're gonna use those systems to further concentrate wealth and power and try to cut the rest of us off.

And,

SHEFFIELD: Yeah.

Obviously wants that to

BECKER: yeah. Exactly.

SHEFFIELD: gets his cut.

BECKER: Yep. Exactly. Yes.

SHEFFIELD: Yeah. Well, all right, so we're getting to the end here.

BECKER: Yep,

Carl Sagan's influence and warnings

SHEFFIELD: One person who you do talk about in the book a lot, including at the end, is Carl Sagan, somebody who I actually wish were more famous nowadays than he was in his

BECKER: Yeah,

SHEFFIELD: But you know, he was very influential on you, and he had a lot of ideas that have just been, well, this guy has been proven more right than Nostradamus, I have to say.

BECKER: yeah,

Yeah. No.

SHEFFIELD: But talk to us about your experience with him, and then maybe his kind of final warnings in the book.

BECKER: Yeah. Well, I mean, I've been a big fan of Carl Sagan since I was a kid. It's part of the reason why I ended up going into science, and going into the area of science that I did. Watching Sagan's Cosmos when I was a kid was formative for me. And one of the things that I took away from it was that we live in a fragile world.

We need to work together. And we also need to actually work hard to listen to what the world is telling us about what's actually happening, as opposed to just sort of [01:00:00] operating on blind faith. Sagan warned about many things, and he also had some positions that, ultimately, as I got older, I didn't agree with. But one of the things that Sagan warned about that I think was spot on, and he warned about this, what, in the mid to late nineties?

No, the mid nineties. Yeah,

SHEFFIELD: 96.

BECKER: Yeah. That we were in danger of heading toward what he called a demon-haunted world, that we were in danger of sort of losing the flame that science has given us; that we would return to superstition and misunderstanding in the face of massive global problems that can only be solved by coming together and working as conscientiously and carefully as we can.

That we would instead retreat into superstition and fear. And I think he's right. I had hoped that he was wrong about that, and that's exactly where we are, what is this, 30 years later. And it's terrifying. I wish he was still around; I think we need him. And one of the things that was actually most depressing to me in writing this book was finding that some of these people, some of these billionaires and some of the sort of kept intellectuals that they have to try to promote their ideas, will actually cite Sagan as inspiration for various ideas that they have.

And I can understand where they get that from, but it's clear that they haven't really paid attention to everything that Sagan was saying.

SHEFFIELD: Yeah.

BECKER: One of the things that Sagan was very clear about was that, especially in science, you cannot just take [01:02:00] someone's word for it without understanding where their expertise comes from.

And it's important, I think, to trust experts in various areas. And it's also important to say: okay, I have this idea. Where did this idea come from? Is that a reliable source? These are pretty basic things, and I'm roughly the millionth person to ever say them. And yet somehow we still don't seem to understand that stuff.

So, yeah.

Sagan is cited by some of the effective altruists as their source for their concerns about existential threats. And yet the existential threats that they point to, and we certainly do have existential threats, are not well grounded in science. They talk about the existential threat of AI

being many times more important than the existential threat of global warming, when global warming is grounded in science and the threat of AI isn't. And then they talk about the importance of going out into space, which, yeah, Sagan did talk about. But Sagan also said: if we find any form of life on Mars at all, we need to leave it alone, even if it's just a microbe.

And that's an idea that they just seem very happy to discard. And I think it's an important one. Yeah.

SHEFFIELD: Yeah. Well, Amy also talked about the necessity of caring not just for the earth that we have here, but also for the people on the earth, because,

BECKER: Mm-hmm.

SHEFFIELD: you know, these techno-salvationist guys love talking about, oh, all the future people that might exist.

BECKER: Yeah.

SHEFFIELD: We've got people here

BECKER: That's right.

SHEFFIELD: some of them could be the next Nobel laureate in physics,

BECKER: Yep,

SHEFFIELD: and they live in Sudan and they're dying of sepsis.

BECKER: yep. Yeah, no, that's,

SHEFFIELD: [01:04:00] Musk just canceled their budget.

BECKER: yeah,

SHEFFIELD: we won't know about this guy's overhaul of quantum computing, because he

BECKER: yeah.

SHEFFIELD: will be dead at four.

BECKER: No, that's exactly right. I mean, Jeff Bezos talks about his desire to have a trillion people living in giant space stations a couple hundred years in the future. And he said, if we have a trillion people, we could have a thousand Mozarts and a thousand Einsteins, and how great would that be?

And my feeling about that goes back to a different great scientist, actually, of the eighties, though I'm sure Sagan said stuff along these lines as well: Stephen Jay Gould, at the end of an essay about, among other things, brain capacity.

He said, ultimately, I am less interested in the size and weight of Einstein's brain than in the near certainty that people of comparable genius have lived and died in poverty. And, okay, you want a thousand Mozarts and a thousand Einsteins, Jeff. What about the Mozarts and the Einsteins who are alive right now, whom you're not taking care of and whom you're exploiting?

Conclusion

BECKER: One of the convenient things about this technological salvation is that it lets you avoid thinking about the problems here and now by substituting future problems. And so you avoid thinking about, among other things, your complicity in those problems, by being a powerful billionaire who has not done enough, or in many cases has exacerbated existing problems here and now, like global warming, like massive income inequality, like authoritarianism.

SHEFFIELD: Yeah.

BECKER: But clearly that can't be as important as avoiding an AI apocalypse that, you know, is itself based on specious reasoning, or making sure that [01:06:00] lots and lots of people live in space in the deep future, no matter what we had to do to get there, or whether or not that's even a good idea or a possible one in the first place.

SHEFFIELD: Yeah. And fundamentally, this is a moral abdication, because the only thing that exists is the present. The past does not exist; it is a construct of our minds. And the future doesn't exist either. It is merely an expectation of that which possibly could exist.

BECKER: Yeah,

SHEFFIELD: and so you don't have any obligations to people millions or thousands of years into the future, 'cause you don't even know that they'll exist.

BECKER: yeah,

SHEFFIELD: All you can do is help the people that exist

BECKER: yeah.

SHEFFIELD: and to protect the planet as it is now.

BECKER: I think we have some obligations to people in the future, but not the far future, because it's so uncertain. Even if we were sure that we had such moral obligations, we could never be sure enough about what actions to take to help them.

Those considerations can't override the considerations of the present and the very near future. You just can't do it. And also, we have very good reason to think that the actions we should take right now to help people here and now, and in the very near future, will also help people in the deep future.

Like, I am not sure what the very best thing is to help people 10,000 years from now, if there are gonna be any people 10,000 years from now. But I think that mitigating, and trying to stop, and maybe even reverse global warming is the best candidate I can think of for helping those people.

And hey, guess what? It also helps people here and now and in the very near future. So, yeah. But clearly that can't be as important as stopping the robot [01:08:00] apocalypse.

SHEFFIELD: Yeah. Well, this is a bit theoretical, but I do feel like it's important, because if you're skeptical of these tech oligarchs, it's still important to have a vision of the future

BECKER: Yeah.

SHEFFIELD: and a belief that progress is possible and can be desirable if managed in the right way. And I think that you do get into that in the book.

BECKER: Mm-hmm.

SHEFFIELD: And so, yeah, it's been a great conversation here, Adam. Why don't you give people your website and social media handle so they can keep up with you if they want to.

BECKER: Yeah, sure. This has been a great conversation; it's been a pleasure to be here. If you wanna keep up with me, my website is freelanceastrophysicist.com, and I am [email protected] on Bluesky. Otherwise I'm not really very much on social media, because I don't like it very much.

This has been a great conversation and I really appreciate you having me on the show. Thank you.

SHEFFIELD: All right, so that is the program for today. I appreciate everybody joining us for the discussion, and you can always get more if you go to theoryofchange.show. We've got the video, audio, and transcript of all the episodes. If you are a paid subscribing member, you have unlimited access. We have options on both Patreon and on Substack.

So I encourage everybody to do that. And if you can't do a paid subscription right now, we do have free options as well. If you are on the free option, it would be really great, and I would really appreciate it, if you could leave us a review over on Apple Podcasts or on Spotify, wherever you may happen to be listening, or on YouTube.

Please do click the like and subscribe button on there. That would be great.


This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit plus.flux.community/subscribe