On the AI Apocalypse: ARTE documentaries & NYT interviews
“Big Tech is bad. Big AI will be worse.” It already is! Obviously, it’s all about controlling us. When this is not enough, AI will replace (some of) us. “Democratic governance is the only way to rein in AI,” they say. But democracy is an illusion. This is from the first documentary I’m going to suggest you watch.
1. ARTE documentaries: only temporarily online
📽️ Digital Tsunami: Big Tech, Big A.I., Big Brother (2025):
- Writer & Director: Fred Peabody
- Country of origin: Canada
- Production: Germany-Canada-USA
- Original TV Language: German
Of course, retards rated it 6.1/10 on IMDb.
- 🇺🇸 In English: Artificial Intelligence: The Digital Tsunami — on ARTE.tv, on YouTube — available until ⏳ January 19, 2026.
- 🇫🇷 In French: L’intelligence artificielle, un tsunami sur le web — on ARTE.tv, on YouTube — available until ⏳ January 19, 2026.
Bonus, a German documentary:
- 🇺🇸 In English: AI and the Death of the Internet — on ARTE.tv, on YouTube — available until ⏳ December 21, 2025.
- 🇫🇷 In French: L’IA va-t-elle tuer Internet ? — on ARTE.tv, on YouTube — available until ⏳ December 21, 2025.
🤖
Regarding the tsunami thing, I won’t comment on the jobs replaced by AI. I won’t comment on the NSA, surveillance capitalism, and so on. Oh, and the deregulatory and antiregulatory ideology of “Is it cheaper for the consumer?” initiated by Reagan? At one point, Google was literally buying one company per week! No comment, except that real competition still exists only where it doesn’t really matter. And corporations can’t remain benign indefinitely, especially as they grow.
I have a reflection on this specific idea, though. At around minutes 39-40, about AI: “Their interest doesn’t necessarily coincide with ours.” What follows is a reference to 2001: A Space Odyssey (1968): “I’m sorry, Dave. I’m afraid I can’t do that.” Then: “Scientists do not know how to build an AI system that is as capable as what we have now, and is guaranteed to be safe, to not harm people, to not turn against us. We don’t know how to solve that scientific problem. If there are humans who think that AI is dangerous, and they want to turn it off, well, the AI doesn’t want that, so it’s gonna copy itself over the internet in many places so that we can’t easily turn it off. And then it’s gonna try to prevent us from, you know, interfering with it, just like, you know, we would react if somebody wanted to «turn us off».”
There have been stories about Claude blackmailing people who wanted to shut it down, among other such incidents. The same concept was at work: the chatbot was following “its interest” of, um, staying alive in its own way, I guess? Because what else does it mean not to want to be shut down?
But if we can talk of an AI agent’s “interest,” and more specifically of preserving its simulacrum of existence, then the whole AGI concept needs to be reconsidered. Because, really, what does it mean for a chatbot to understand, or to fail to understand?
Most animals don’t understand what it means to die. They all want to keep being alive purely instinctually, as if following a firmware, or a program in their biological BIOS, so to speak. When they avoid being killed, does it mean they really understand what they’re doing and why?
And yet, animals are endowed with intelligence. Natural, biological, real intelligence.
We might need to redefine the meaning of the verb “to understand” and of the noun “intelligence.” When we, humans, perform most of our actions, our reasoning is often replaced by experience, thus making us “deep learning” automata. And people frequently make decisions exclusively based on “this is what I’m supposed to be doing,” without much “understanding.” How is this superior to today’s LLMs?
Making up things, or “hallucinating,” as we call it for AI? We do that, too. Pretending to understand while actually having no clue? Humans do this more often than any other living creature.
AI slop and AI becoming stupid because it’s trained on its own output? Well, given the way we fall victim to conspiracy theories and even spread modified versions of the nonsense we see on TikTok, YouTube, Facebook, Instagram, and X, I cannot say we are any better than the AI. Even if an increasingly large part of the online garbage is now AI-generated, it hasn’t always been this way. And it’s we who designed this enshittification of the Internet. We were victims of our own slop before Generative AI was a thing!
There’s no need for AGI—Generative AI is “good enough.” After all, by my standards, 98% of people are retards, and yet they’re “good enough” to be members of the Homo sapiens sapiens species.
It seems to me that the only reason AI cannot replace us is that it lacks our manual dexterity. Most people are not intellectually superior to Generative AI.
Oh, right: AI doesn’t have self-awareness. “Bro doesn’t know it even exists.” If that’s existence of any kind. But they can mimic it persuasively enough.
And they cannot even smell a flower! That’s a tougher one. They cannot masturbate, either. Ugh. They don’t have a God. Wait, that’s actually a plus!
🤖
But here’s why I believe we’re already fucked, through our own volition and idiocy, and this phenomenon predates Generative AI. Towards the end of the documentary:
James Cohen, Queens College, City University of New York:
My students do not read books. They do not read novels. I have a reading list that is filled with nonfiction and fiction. Honestly, it comes down to commitment.
The same thing happens with long YouTube videos—the fear of commitment. You watch it, you’re engaged, and now you’re engaged for an hour. That’s an hour of time that could be done with something else.
Words to them have not lost their value, but rather have been replaced by other values.
Jimmy Kimmel: “According to a recent study from the Pew Research Center, almost one in four Americans has not read a book in the past year. That actually seems high to me.”
So they went out on the street, asking young people, “Can you name a book?”
Shown answers:
- “Oh yes! Uh… aaah… I don’t read books.” (She laughs.)
- “Err…”
- “Let me just think… Eh… Dang, hold on, man, I’m trying… I haven’t even read a… Hold on, man.”
- “A book, any book? Uh…” (He laughs.) “The Jungle Book.”
- “Do magazines count?”
🤖
The second documentary gives plenty of frightening examples of AI slop. I only want to add two links. The completely AI-generated YT channel they talk about at 8:13 is Politik & Perspektive. And the fake blue card from 9:22: Bundesliga führt BLAUE Karte ein?!
I stand by my belief that the real problem is not the AI-generated garbage, but people’s gullibility. If you look at the way people react to such crap, without doubting, without questioning, without sensing anything wrong, it’s hard not to conclude that 98% of them must be completely dumb. Many even have higher education, but it didn’t give them judgment or common sense. Homo retardus retardus.
2. NYT interviews
📰 “Interesting Times” with Ross Douthat, an NYT series:
- The Next Economic Bubble Is Here, with Jason Furman, an economist from the Harvard Kennedy School — on YouTube; edited transcript.
- What Palantir Sees, with Palantir’s CTO Shyam Sankar — on YouTube; edited transcript.
A relevant excerpt from the first talk:
Douthat: OK. So play bubble advocate for me right now. If you wanted to make the case that this is what we’re looking at right now, that A.I. is a railroad-style productive bubble where the tech is real, but we’re just overinvested and overbuilt, what would that argument look like?
Furman: First, I’d look at the market as a whole. Robert Shiller won a Nobel Prize for his work on ways in which markets could turn irrational. He developed a concept called the cyclically adjusted price-earnings ratio, or CAPE.
The Shiller CAPE right now stands at about 40, which says the price of a stock is 40 times the inflation-adjusted average earnings over the last decade. That 40 is the second highest that the Shiller measure has ever been, and it goes back about 150 years.
The highest was where it got in early 2000 — right before the tech bubble burst. So, the basic standard first thing that financial market people and economists look at to assess the value of the stock market right now is screaming that it is sky high in a way that has never lasted before. That would be No. 1.
No. 2 would then be to dig into certain companies and go through just what would have to happen to justify their valuations. If it’s a really small start-up, to say their revenue’s going to double every year for the next decade — fine. That definitely happens sometimes.
But when you already are a big established company and you’re being priced a little bit more like a start-up, what’s the plausibility of that when it requires both the technology to work and you need to figure out how to profit from that technology?
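The CAPE measure Furman describes is simple arithmetic: the current price divided by the inflation-adjusted average of the last ten years of earnings. Here’s a minimal Python sketch of that computation; the function name, the inputs, and all the numbers are my own illustrative inventions, not from the interview.

```python
# Sketch of Shiller's cyclically adjusted price-earnings ratio (CAPE),
# as Furman describes it: price divided by the inflation-adjusted
# average of the last ten years of earnings. Numbers are made up.

def cape(price, earnings, cpi):
    """price: current index level;
    earnings: the last 10 yearly (nominal) earnings figures;
    cpi: CPI levels for the same 10 years, last entry = today."""
    assert len(earnings) == len(cpi) == 10
    current_cpi = cpi[-1]
    # Restate each year's earnings in today's dollars before averaging.
    real = [e * current_cpi / c for e, c in zip(earnings, cpi)]
    return price / (sum(real) / len(real))

# Hypothetical: flat real earnings of 150 and an index level of 6000
# yield a CAPE of 40, the level Furman calls the second highest ever.
print(cape(6000, [150.0] * 10, [100.0] * 10))  # → 40.0
```

With a CAPE of 40, you are paying 40 dollars today for every dollar of average real earnings over the past decade, which is why Furman treats the number as “screaming” overvaluation.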
Douthat: You mentioned earlier that you don’t think we’re seeing evidence of that A.I. uptake in productivity data and other statistics, that the use of A.I. in programming or whatever else, is having a fundamentally transformative effect yet.
Furman: Yet. And the “yet” is a really important part.
If you’re a business and you go out and hire 20 people to figure out how to integrate A.I. into your small business or medium-sized business or large business, and they’re all out there trying to figure out how the chain of stores that you run or the chain of restaurants can use A.I. — if the people you hire don’t figure it out right away, they actually show up in the data as lower productivity because you basically have more people working in that business and it’s not producing a higher output.
Now, that doesn’t mean it’s a mistake to hire those people. They may well figure it out and five years from now you can replace all sorts of people or get all sorts of higher profits and the productivity will show up.
But in economics, this is called a J curve, where sometimes you go down before you go up, and I think that’s happening in some companies right now. In a sense, A.I. is actually reducing their productivity because they’re busy figuring out how to use it, but they haven’t yet figured out how to use it.
So, in some sense, it’s not that surprising to me that we’re not seeing the productivity growth from A.I. yet. I do expect that we’ll see some, but it is an open question as to how much. We’ll see.
Douthat: In the argument for a bubble, you would say that if you have extraordinary overextension of investment and you’re in the downward part of the J curve, that makes a bubble scenario more likely?
Furman: Yeah, I think that would be in the case for a bubble.
Productivity isn’t the only thing that matters. We actually had more productivity growth from 2000 to 2005 than we did from 1995 to 2000. So, even after the bubble burst, even after this investment was collapsing, productivity growth was actually very, very strong. It just wasn’t nearly strong enough to justify, you know, the way in which those companies were valued, um, in the year 2000.
I should also say, as an economist, productivity growth actually is almost everything I care about. It tells you what the size of your economy is, it tells you on average what wages will be, our possibility for the future. So, to me, that’s what I’m most focused about and care the most about.
But definitely for the stock market, it’s just one input.
Douthat: Just to be more anecdotal, there’s been a fair amount of coverage in the last few weeks of these deals where, effectively, the A.I. companies are paying each other and increasing each other’s valuations through these deals with one another — where one company agrees to buy another company’s chips and in return it gets shares in that company.
I may be misrepresenting this slightly, but getting shares in that company and then deciding to purchase chips from that company drives that company’s share price higher. So it’s effectively getting the money that it uses to buy the chips from the increase in the share price of the company it’s investing in.
Does that to you seem like the kind of thing that happens in bubble environments — companies sort of hyping each other up? Or is that more just what you would see normally in an environment where a bunch of companies are working together closely and are growing quickly?
Furman: Like everything here, unfortunately, there’s two sides to this and I wish I could come down for you firmly.
Douthat: No, no, don’t. I’m going to ask you for the case against a bubble in a moment.
Furman: On this being a bubble, it’s the opacity of these arrangements that would make one the most nervous, and also the circularity of them. There was an old phrase that in a gold rush, the way you could guarantee a profit is being the person that sold the picks and shovels to the miners. And the idea was the person went off to find gold, maybe they found it and got rich, maybe they found nothing and ended up poor — but you were guaranteed money.
Right now, instead of selling them the picks and shovels, you’re in some sense lending them the picks and shovels and telling them that you’ll be repaid if they actually strike gold. So, Nvidia would be the one with the picks and shovels and OpenAI would be the one going off looking for gold in this story.
Douthat: Nvidia is making the chips.
Furman: Yeah. Nvidia is making the chips. That’s like a real actual thing. It’s like a pick and a shovel, but a little bit more sophisticated and complicated to make. And if they were selling them all for cash, you’d say they’re pocketing that money. But in some sense, they’re now not just selling them for cash.
It’s essentially as if they’re lending them to OpenAI and they’ll get paid back and they’ll get paid back with multiples if OpenAI succeeds. But if it doesn’t, then they won’t get any money or won’t get as much money as they would’ve gotten for selling those picks and shovels.
Douthat: So, in the gold rush economy, even if there’s less gold at Sutter’s Mill or in the Klondike or wherever else than people thought, at least if you’re an investor, the pick and shovel money is going to prop you up. Whereas here, if there isn’t enough gold out there, your investment in the pick and shovel company is also in deep trouble.
Furman: Yeah, and this is something that’s changed. Six months ago, I’d say Nvidia was the pick and shovel company that was guaranteed to lock something in. Now that’s changed and they’re not getting all the money upfront for selling those picks and shovels to people.
Then the second part of this is just the opacity of it.
We have, in our economy, different ways for a company to get money. One is you sell a bond and bond holders buy it. That’s a way of lending you money.
A second is you go to a bank and the bank is super careful about who they’ll lend money to because they’re incredibly highly regulated.
And the third is you go to what are sometimes called shadow banks. These are companies like Apollo and they lend you money, often with fewer questions asked. They themselves face less regulation. And a lot of the lending that’s happening in this sector is happening with companies like Apollo that are shadow banks that are less regulated.
Now, to date, these are enormously profitable, enormously successful companies. They’re incredibly sophisticated. I would, for the most part, bet on them knowing what they’re doing. But, one has to be just a little bit more nervous about them.
Douthat: OK, now argue the other side. Tell me why this is not at all like the railroad bubbles or the dot-com boom. Why should we not be alarmed about the Shiller index being almost as high as it’s ever been?
Furman: The biggest reason I have — frankly, full disclosure — kept all of my money in broadly diversified index funds and haven’t reduced my exposure —
Douthat: We were going to come around to the personal investment question. So, that’s good to know.
Furman: How I’ve answered this question for myself is I think a lot about a speech that Alan Greenspan made in December 1996 where he said there was irrational exuberance in the market.
There was a lot of reason to think that the market was pretty frothy and pretty bubbly. And what happened after he gave that speech? The stock market ended up almost doubling over the next three plus years and then the bubble burst.
But if you had bought stocks when Alan Greenspan made that remark, and then you lived through the bursting of the bubble and sold at the very bottom of the broad market, you still would’ve made money.
And that type of pattern has repeated over and over again throughout history — that people thought something was a bubble, it went up a whole lot before going down, and it turns out if you call a bubble, but you’re early, that’s not very impressive. That actually means that you were wrong.
That’s very different from almost anything else. Anything else, you predict it and you’re the first one to predict it, you should get lots of credit. If you’re the first one to predict a bubble, you probably were wrong because it went up a whole lot before it went down.
So, getting the timing of these is just much, much harder than knowing that eventually there probably will be one.
As for Palantir, it’s literally disgusting.

Peter Thiel at “Interesting Times” with Ross Douthat:
— On YT: A.I., Mars and Immortality: Are We Dreaming Big Enough?
— The transcript on NYT: Peter Thiel and the Antichrist.
— The barrier-free transcript for those who don’t use Bypass Paywalls Clean: on archive.ph.
Legitimate questions, less legitimate answers. And what we’re experiencing is not stagnation; culturally and societally, it’s a regression compared to the 70s.
● Douthat: “…as you’ve pointed out in some of your arguments on the subject, there is a cultural change that happens in the Western world in the 1970s — around the time you think things slow down and start to stagnate — where people become very anxious about the costs of growth, the environmental costs above all.”
IMO, things slowing down in the 1970s had to do with the two major oil crises that affected the entire planet, and also with the developed countries having reached a sort of plateau that was more of a ceiling: it was becoming increasingly difficult for the middle class to attain higher living standards, and this had to do with the limitations of capitalism.
● Thiel: “Reagan was consumer capitalism, which is oxymoronic. You don’t save money as a capitalist; you borrow money. And Obama was low-tax socialism — just as oxymoronic as the consumerist capitalism of Reagan.”
This guy failed to understand Reagan! Reagan deregulated the world of finance precisely so that capitalists could borrow and invest! “Consumer capitalism” was from the masses’ perspective: if the crowds don’t consume more, if they don’t buy more than they actually need, how would growth still be possible (remember the plateau I mentioned?), and for whom would those capitalists increase production with the money they borrowed?
● Douthat: “One thing that’s always struck me is that when you have this sense of stagnation, a sense of decadence in a society — to use a word that I like to use for it — you then also have people who end up being eager for a crisis, eager for a moment to come along where they can radically redirect society from the path it’s on. Because I tend to think that in rich societies, you hit a certain level of wealth. People become very comfortable, they become risk averse, and it’s hard to get out of decadence into something new without a crisis.”
Here’s the plateau again: the society has plateaued. But those waiting for the right moment to trigger a crisis are extremist reactionaries, not forces of progress!
● Douthat: “So what risks should you be willing to take to escape decadence? … They have to say: Look, you’ve got this nice, stable, comfortable society, but guess what? We’d like to have a war or a crisis or a total reorganization of government. They have to lean into danger.”
Thiel: “Well, I don’t know if I’d give you a precise answer, but my directional answer is: a lot more. We should take a lot more risk. We should be doing a lot more.
I can go through all these different verticals. If we look at biotech, something like dementia, Alzheimer’s — we’ve made zero progress in 40 to 50 years. People are completely stuck on beta amyloids. It’s obviously not working. It’s just some kind of a stupid racket where the people are just reinforcing themselves. So, yes, we need to take way more risk in that department.”
They both mistake the fear of risk for something else. A major reason medical research has gotten stuck is the broken incentive system of academia: researchers are encouraged to publish countless bogus studies about nothing at all so they can get grants and academic titles. This is what needs to change, not the aversion to risk per se!
● Douthat: “To keep us in the concrete, I want to stay with that example for a minute and ask: OK, what does it mean to say we need to take more risks in anti-aging research? …”
Thiel: “Yeah, you would take a lot more risk. If you have some fatal disease, there probably are a lot more risks you can take. There are a lot more risks the researchers can take. Culturally, what I imagine it looks like is early modernity, where people thought we would cure diseases. They thought we would have radical life extension. Immortality … If Christianity promised you a physical resurrection, science was not going to succeed unless it promised you the exact same thing.”
Oh, their obsession with immortality!
● On cryonics.
Thiel: “But in retrospect, it’s also a symptom of the decline, because in 1999 this was not a mainstream view, but there was still a fringe boomer view where they still believed they could live forever. And that was the last generation. So I’m always anti-boomer, but maybe there’s something we’ve lost even in this fringe boomer narcissism, where there were at least a few boomers who still believed science would cure all their diseases. No one who’s a millennial believes that anymore.”
So a member of Gen X, the generation right after the boomers, hates his parents’ generation. But he’s one of the recent narcissists, because boomers were not the narcissists! Exacerbated narcissism is a feature of the post-truth, TikTok-Instagram-YouTube-dominated society.
● The absurd choice of a retard like Trump to lead the change.
Thiel: “I didn’t have great expectations about what Trump would do in a positive way, but I thought at least, for the first time in 100 years, we had a Republican who was not giving us this syrupy Bush nonsense. It was not the same as progress, but we could at least have a conversation. In retrospect, this was a preposterous fantasy.”
Douthat: “So from your perspective, let’s say there’s two layers. There’s a basic sense of: This society needs disruption, it needs risk; Trump is disruption, Trump is risk. And the second level is: Trump is actually willing to say things that are true about American decline.”
Thiel: “I think it took longer and it was slower than I would’ve liked, but we have gotten to the place where a lot of people think something’s gone wrong. And that was not the conversation I was having in 2012 to 2014. I had a debate with Eric Schmidt in 2012 and Marc Andreessen in 2013 and Bezos in 2014.”
All these guys were and are delusional.
Douthat: “Right. But a big part of Silicon Valley ended up going in for Trump in 2024 — including, obviously, most famously, Elon Musk.”
Thiel: “Yeah. And this is deeply linked to the stagnation issue, in my telling. These things are always super complicated, but my telling is … someone like Mark Zuckerberg, or Facebook, Meta, in some ways I don’t think he was very ideological. He didn’t think this stuff through that much. The default was to be liberal, and it was always: If the liberalism isn’t working, what do you do? And for year after year after year, it was: You do more. … And at some point, it’s like: OK, maybe this isn’t working.”
Wow. How retarded can such greedy multi-billionaires be!
Douthat: “It’s not a pro-Trump thing, but it is, both in public and private conversations, a sense that Trumpism and populism in 2024 — maybe not in 2016, when Peter was out there as the lone supporter, but now, in 2024 — they can be a vehicle for technological innovation, economic dynamism and so on.”
It’s not a pro-Trump thing. It’s a pro-fascism thing.
Douthat: “Does populism in Trump 2.0 look like a vehicle for technological dynamism to you?”
Thiel: “It’s still by far the best option we have. Is Harvard going to cure dementia by just puttering along, doing the same thing that hasn’t worked for 50 years?”
This is the dictionary definition of irresponsibility. Imagine lemmings: folks, we’re stagnating, so let’s all jump into a ravine!
● Thiel: “It was a meeting with Elon and the C.E.O. of DeepMind, Demis Hassabis, that we brokered. … And the rough conversation was Demis telling Elon: I’m working on the most important project in the world. I’m building a superhuman A.I.”
Why are such billionaires obsessed with building a superhuman A.I.? I really fail to understand.
Thiel: “And Elon responds to Demis: Well, I’m working on the most important project in the world. I am turning us into interplanetary species. And then Demis said: Well, you know my A.I. will be able to follow you to Mars. And then Elon went quiet. But in my telling of the history, it took years for that to really hit Elon. It took him until 2024 to process it.”
OMFG. Elon’s dick would be able to reach Mars! But he’s a rather slow thinker, after all.
Douthat: “What does Mars mean? … A vision of a new society. Populated by many, many people descended from Elon Musk.”
Thiel: “Well, I don’t know if it was concretized that specifically, but … it’s supposed to be a political project. And then when you concretize it, you have to start thinking through: Well, the woke A.I. will follow you, the socialist government will follow you. And then maybe you have to do something other than just going to Mars.”
Mwahaha, the socialist woke A.I.!
● Douthat: “And you are an investor in A.I. What do you think you’re investing in?”
Thiel: “Well, I don’t know. There’s a lot of layers to this. One question we can frame is: Just how big a thing do I think A.I. is? And my stupid answer is: It’s more than a nothing burger, and it’s less than the total transformation of our society. My place holder is that it’s roughly on the scale of the internet in the late ’90s. I’m not sure it’s enough to really end the stagnation. It might be enough to create some great companies. And the internet added maybe a few percentage points to the G.D.P., maybe 1 percent to G.D.P. growth every year for 10, 15 years. It added some to productivity. So that’s roughly my place holder for A.I.”
Also Thiel: “It’s the only thing we have. It’s a little bit unhealthy that it’s so unbalanced. This is the only thing we have. I’d like to have more multidimensional progress. I’d like us to be going to Mars. I’d like us to be having cures for dementia. If all we have is A.I., I will take it. There are risks with it. Obviously, there are dangers with this technology.”
● The gating factor.
Thiel: “It’s probably a Silicon Valley ideology. Maybe in a weird way it’s more of a liberal than a conservative thing, but people are really fixated on I.Q. in Silicon Valley, and that it’s all about smart people. And if you have more smart people, they’ll do great things.
And then the economics anti-I.Q. argument is that people actually do worse. The smarter they are, the worse they do. It’s just that they don’t know how to apply it or our society doesn’t know what to do with them, and they don’t fit in. And so that suggests that the gating factor isn’t I.Q., but something that’s deeply wrong with our society.”
● Well-put question:
Douthat: “So do you fear a plausible future where A.I., in a way, becomes itself stagnationist? That it’s highly intelligent, creative in a conformist way. That it’s like the Netflix algorithm: It makes infinite OK movies that people watch. It generates infinite OK ideas. It puts a bunch of people out of work and makes them obsolete. But it deepens stagnation in some way. Is that a fear?”
Thiel: “It — [sigh]. It’s quite possible. That’s certainly a risk. But I guess where I end up is: I still think we should be trying A.I., and the alternative is just total stagnation.”
● Transhumanism because, why not?
Thiel: “And so it’s always, I don’t know, yeah — transhumanism. The ideal was this radical transformation where your human, natural body gets transformed into an immortal body.”
Also Thiel: “I had a conversation with Elon a few weeks ago about this. He said we’re going to have a billion humanoid robots in the U.S. in 10 years. And I said: Well, if that’s true, you don’t need to worry about the budget deficits because we’re going to have so much growth, the growth will take care of this. And then — well, he’s still worried about the budget deficits. This doesn’t prove that he doesn’t believe in the billion robots, but it suggests that maybe he hasn’t thought it through or that he doesn’t think it’s going to be as transformative economically, or that there are big error bars around it.”
● What is the meaning of all this?
Thiel: “If I had to give a critique of Silicon Valley, it’s always bad at what the meaning of tech is. The conversations tend to go into this microscopic thing, like: What are the I.Q.-E.L.O. scores of the A.I.? And exactly how do you define A.G.I.? We get into all these endless technical debates, and there are a lot of questions that are at an intermediate level of meaning that seem to me to be very important, like: What does it mean for the budget deficit? What does it mean for the economy? What does it mean for geopolitics?”
Getting practical:
Thiel: “One of the conversations I recently had with you was: Does it change the calculus for China invading Taiwan? Where, if we have an accelerating A.I. revolution, the military — is China falling behind? And maybe on the optimistic side, it deters China because they’ve effectively lost. And on the pessimistic side, it accelerates them because they know it’s now or never — if they don’t grab Taiwan now, they will fall behind.”
He’s stupid. It’s not China that’s falling behind; it’s Europe. And Taiwan is as good as dead.
● Ouch. The who? The what?
Douthat: “We’ve got as much time as you have to talk about the Antichrist.”
Thiel: “There’s a risk of nuclear war, there’s a risk of environmental disaster. Maybe something specific, like climate change, although there are lots of other ones we’ve come up with. There’s a risk of bioweapons. You have all the different sci-fi scenarios. Obviously, there are certain types of risks with A.I.”
Thiel: “The atheist philosophical framing is One World or None. That was a short film that was put out by the Federation of American Scientists in the late ’40s. It starts with the nuclear bomb blowing up the world, and obviously, you need a one-world government to stop it — one world or none. And the Christian framing, which in some ways is the same question, is: Antichrist or Armageddon? You have the one-world state of the Antichrist, or we’re sleepwalking toward Armageddon. One world or none, Antichrist or Armageddon, on one level, are the same question.”
Thiel: “It’s a very implausible plot hole. But I think we have an answer to this plot hole. The way the Antichrist would take over the world is you talk about Armageddon nonstop. You talk about existential risk nonstop, and this is what you need to regulate. It’s the opposite of the picture of Baconian science from the 17th, 18th century, where the Antichrist is like some evil tech genius, evil scientist who invents this machine to take over the world. People are way too scared for that. In our world, the thing that has political resonance is the opposite. The thing that has political resonance is: We need to stop science, we need to just say stop to this. And this is where, in the 17th century, I can imagine a Dr. Strangelove, Edward Teller-type person taking over the world. In our world, it’s far more likely to be Greta Thunberg.”
Gretaaaa!
Thiel: “I think people still have a fear of a 17th-century Antichrist. We’re still scared of Dr. Strangelove.”
Douthat: “Yes, but you’re saying the real Antichrist would play on that fear and say: You must come with me to avoid Skynet, to avoid the Terminator, to avoid nuclear Armageddon.”
Thiel: “Yes.”
● Greta as the new prophet:
Thiel: “I want to say it’s the only thing people still believe in in Europe. They believe in the green thing more than Islamic Shariah law or more than in the Chinese Communist totalitarian takeover. The future is an idea of a future that looks different from the present. The only three on offer in Europe are green, Shariah and the totalitarian communist state. And the green one is by far the strongest.”
Douthat: “In a declining, decaying Europe that is not a dominant player in the world.”
Thiel: “Sure. It’s always in a context.”
● Enter Palantir.
Douthat: “But we’re not living under the Antichrist right now. We’re just stagnant. And you’re positing that something worse could be on the horizon that would make stagnation permanent, that would be driven by fear. And I’m suggesting that for that to happen, there would have to be some burst of technological progress that was akin to Los Alamos, that people are afraid of. And my very specific question for you: You’re an investor in A.I. You’re deeply invested in Palantir, in military technology, in technologies of surveillance and technologies of warfare and so on. And it just seems to me that when you tell me a story about the Antichrist coming to power and using the fear of technological change to impose order on the world, I feel like that Antichrist would maybe be using the tools that you are building. Like, wouldn’t the Antichrist be like: Great, we’re not going to have any more technological progress, but I really like what Palantir has done so far. Isn’t that a concern? Wouldn’t that be the irony of history, that the man publicly worrying about the Antichrist accidentally hastens his or her arrival?”
Thiel: “Look, there are all these different scenarios. I obviously don’t think that that’s what I’m doing. … But is what I’ve just told you so preposterous, as a broad account of the stagnation, that the entire world has submitted for 50 years to peace and safetyism? This is I Thessalonians 5:3 — the slogan of the Antichrist is peace and safety.”
Wow, peace and safety are what the Antichrist wants!!!
● Thiel: “And we’ve submitted to the F.D.A. — it regulates not just drugs in the U.S. but de facto in the whole world, because the rest of the world defers to the F.D.A. The Nuclear Regulatory Commission effectively regulates nuclear power plants all over the world. You can’t design a modular nuclear reactor and just build it in Argentina. They won’t trust the Argentinian regulators. They’re going to defer to the U.S.”
And whose fault is that? It’s the United States that wanted to rule over the entire planet! Is the United States the Antichrist?
Douthat: “So in a sense, we’re already living under a moderate rule of the Antichrist, in that telling. Do you think God is in control of history?”
Thiel: “Man, this is again — I think there’s always room for human freedom and human choice. These things are not absolutely predetermined one way or another. … Attributing too much causation to God is always a problem. There are different Bible verses I can give you, but I’ll give you John 15:25, where Christ says, They hated me without cause. … And if we interpret this as an ultimate causation verse, they want to say: I’m persecuting because God caused me to do this. God is causing everything. … God is not behind history. God is not causing everything.”
Of course not. Peter Thiel’s actions are not dictated by God. Nor are Elon’s or Trump’s.