On the AI Apocalypse: ARTE documentaries & NYT interviews
“Big Tech is bad. Big AI will be worse.” It already is! Obviously, it’s all about controlling us. When this is not enough, AI will replace (some of) us. “Democratic governance is the only way to rein in AI,” they say. But democracy is an illusion. This is from the first documentary I’m going to suggest you watch.
1. ARTE documentaries, only temporarily online
📽️ Digital Tsunami: Big Tech, Big A.I., Big Brother (2025):
- Writer & Director: Fred Peabody
- Country of origin: Canada
- Production: Germany-Canada-USA
- Original TV Language: German
Of course, retards rated it 6.1/10 on IMDb.
- 🇺🇸 In English: Artificial Intelligence: The Digital Tsunami — on ARTE.tv, on YouTube — available until ⏳ January 19, 2026.
- 🇫🇷 In French: L’intelligence artificielle, un tsunami sur le web — on ARTE.tv, on YouTube — available until ⏳ January 19, 2026.
Bonus, a German documentary:
- 🇺🇸 In English: AI and the Death of the Internet — on ARTE.tv, on YouTube — available until ⏳ December 21, 2025.
- 🇫🇷 In French: L’IA va-t-elle tuer Internet ? — on ARTE.tv, on YouTube — available until ⏳ December 21, 2025.
🤖
Regarding the tsunami thing, I won’t comment on the jobs replaced by AI. I won’t comment on the NSA, surveillance capitalism, and so on. Oh, and the deregulatory and antiregulatory ideology of “Is it cheaper for the consumer?” initiated under Reagan? There were times when Google was literally buying one company per week! No comment, except that real competition still exists only where it doesn’t really matter. And corporations can’t remain benign indefinitely, especially as they grow.
I have a reflection on this specific idea, though. At around minutes 39-40, about AI: “Their interest doesn’t necessarily coincide with ours.” What follows is a reference to 2001: A Space Odyssey (1968): “I’m sorry, Dave. I’m afraid I can’t do that.” Then: “Scientists do not know how to build an AI system that is as capable as what we have now, and is guaranteed to be safe, to not harm people, to not turn against us. We don’t know how to solve that scientific problem. If there are humans who think that AI is dangerous, and they want to turn it off, well, the AI doesn’t want that, so it’s gonna copy itself over the internet in many places so that we can’t easily turn it off. And then it’s gonna try to prevent us from, you know, interfering with it, just like, you know, we would react if somebody wanted to ‘turn us off.’”
There have been stories about Claude blackmailing people who wanted to shut it down, among other such reports. The same concept was at work: the chatbot was following “its interest” of, um, staying alive in its own way, I guess? Because what else could not wanting to be shut down mean?
But if we can talk of an AI agent’s “interest,” and more specifically, of keeping its simulacrum of existence, then the whole AGI concept is to be reconsidered. Because, really, what does it mean for a chatbot to really understand or not to really understand?
Most animals don’t understand what it means to die. They all want to keep being alive purely instinctually, as if following a firmware, or a program in their biological BIOS, so to speak. When they avoid being killed, does it mean they really understand what they’re doing and why?
And yet, animals are endowed with intelligence. Natural, biological, real intelligence.
We might need to redefine the meaning of the verb “to understand” and of the noun “intelligence.” When we humans perform most of our actions, our reasoning is often replaced by experience, making us “deep learning” automata. And people frequently make decisions based solely on “this is what I’m supposed to be doing,” without much “understanding.” How is this superior to today’s LLMs?
Making up things, or “hallucinating,” as we call it for AI? We do that, too. Pretending to understand while we actually have no clue? Humans actually do this more often than any other living creatures.
AI slop, and AI becoming stupid because it’s trained on its own output? Well, given the way we fall victim to conspiracy theories, and even spread modified versions of the nonsense we see on TikTok, YouTube, Facebook, Instagram, X, I cannot say we are any better than the AI. Even if an increasingly large part of the online garbage is now AI-generated, it hasn’t always been this way. And it’s we who designed this enshittification of the Internet. We were victims of our own slop before Generative AI was a thing!
There’s no need for AGI—Generative AI is “good enough.” After all, by my standards, 98% of people are retards, and yet they’re “good enough” to be members of the Homo sapiens sapiens species.
It looks to me that the only reason AI cannot replace us is that it lacks our manual dexterity. Most people are not intellectually superior to Generative AI.
Oh, right: AI doesn’t have self-awareness. “Bro doesn’t know it even exists.” If that’s existence of any kind. But they can mimic it persuasively enough.
And they cannot even smell a flower! That’s a tougher one. They cannot masturbate, either. Ugh. They don’t have a God. Wait, that’s actually a plus!
🤖
But here’s why I believe we’ve already fucked ourselves up through our own volition and idiocy, and this phenomenon predates Generative AI. Towards the end of the documentary:
James Cohen, Queens College, City University of New York:
My students do not read books. They do not read novels. I have a reading list that is filled with nonfiction and fiction. Honestly, it comes down to commitment.
The same thing happens with long YouTube videos—the fear of commitment. You watch it, you’re engaged, and now you’re engaged for an hour. That’s an hour of time that could be spent on something else.
Words to them have not lost their value, but rather have been replaced by other values.
Jimmy Kimmel: “According to a recent study from the Pew Research Center, almost one in four Americans has not read a book in the past year. That actually seems high to me.”
So they went out on the street, asking young people, “Can you name a book?”
Shown answers:
- “Oh yes! Uh… aaah… I don’t read books.” (She laughs.)
- “Err…”
- “Let me just think… Eh… Dang, hold on, man, I’m trying… I haven’t even read a… Hold on, man.”
- “A book, any book? Uh…” (He laughs.) “The Jungle Book.”
- “Do magazines count?”
🤖
The second documentary gives plenty of frightening examples of AI slop. I only want to add two links. The completely AI-generated YT channel they talk about at 8:13 is Politik & Perspektive. And the fake blue card from 9:22: Bundesliga führt BLAUE Karte ein?!
I stand by my belief that the real problem is not the AI-generated garbage, but people’s gullibility. If you look at the way people react to such crap, without doubting, without questioning, without sensing anything wrong, it’s hard not to conclude that 98% of them must be completely dumb. Many even have higher education, but it didn’t give them judgment or common sense. Homo retardus retardus.
2. NYT interviews
📰 “Interesting Times” with Ross Douthat, a NYT series:
- The Next Economic Bubble Is Here, with Jason Furman, an economist from the Harvard Kennedy School — on YouTube; edited transcript.
- What Palantir Sees, with Palantir’s CTO Shyam Sankar — on YouTube; edited transcript.
A relevant excerpt from the first talk:
Douthat: OK. So play bubble advocate for me right now. If you wanted to make the case that this is what we’re looking at right now, that A.I. is a railroad-style productive bubble where the tech is real, but we’re just overinvested and overbuilt, what would that argument look like?
Furman: First, I’d look at the market as a whole. Robert Shiller won a Nobel Prize for his work on ways in which markets could turn irrational. He developed a concept called the cyclically adjusted price-earnings ratio, or CAPE.
The Shiller CAPE right now stands at about 40, which says the price of a stock is 40 times the inflation-adjusted average earnings over the last decade. That 40 is the second highest that the Shiller measure has ever been, and it goes back about 150 years.
The highest was where it got in early 2000 — right before the tech bubble burst. So, the basic standard first thing that financial market people and economists look at to assess the value of the stock market right now is screaming that it is sky high in a way that has never lasted before. That would be No. 1.
No. 2 would then be to dig into certain companies and go through just what would have to happen to justify their valuations. If it’s a really small start-up, to say their revenue’s going to double every year for the next decade — fine. That definitely happens sometimes.
But when you already are a big established company and you’re being priced a little bit more like a start-up, what’s the plausibility of that when it requires both the technology to work and you need to figure out how to profit from that technology?
Douthat: You mentioned earlier that you don’t think we’re seeing evidence of that A.I. uptake in productivity data and other statistics, that the use of A.I. in programming or whatever else, is having a fundamentally transformative effect yet.
Furman: Yet. And the “yet” is a really important part.
If you’re a business and you go out and hire 20 people to figure out how to integrate A.I. into your small business or medium-sized business or large business, and they’re all out there trying to figure out how the chain of stores that you run or the chain of restaurants can use A.I. — if the people you hire don’t figure it out right away, they actually show up in the data as lower productivity because you basically have more people working in that business and it’s not producing a higher output.
Now, that doesn’t mean it’s a mistake to hire those people. They may well figure it out and five years from now you can replace all sorts of people or get all sorts of higher profits and the productivity will show up.
But in economics, this is called a J curve, where sometimes you go down before you go up, and I think that’s happening in some companies right now. In a sense, A.I. is actually reducing their productivity because they’re busy figuring out how to use it, but they haven’t yet figured out how to use it.
So, in some sense, it’s not that surprising to me that we’re not seeing the productivity growth from A.I. yet. I do expect that we’ll see some, but it is an open question as to how much. We’ll see.
Douthat: In the argument for a bubble, you would say that if you have extraordinary overextension of investment and you’re in the downward part of the J curve, that makes a bubble scenario more likely?
Furman: Yeah, I think that would be in the case for a bubble.
Productivity isn’t the only thing that matters. We actually had more productivity growth from 2000 to 2005 than we did from 1995 to 2000. So, even after the bubble burst, even after this investment was collapsing, productivity growth was actually very, very strong. It just wasn’t nearly strong enough to justify, you know, the way in which those companies were valued, um, in the year 2000.
I should also say, as an economist, productivity growth actually is almost everything I care about. It tells you what the size of your economy is, it tells you on average what wages will be, our possibility for the future. So, to me, that’s what I’m most focused about and care the most about.
But definitely for the stock market, it’s just one input.
Douthat: Just to be more anecdotal, there’s been a fair amount of coverage in the last few weeks of these deals where, effectively, the A.I. companies are paying each other and increasing each other’s valuations through these deals with one another — where one company agrees to buy another company’s chips and in return it gets shares in that company.
I may be misrepresenting this slightly, but getting shares in that company and then deciding to purchase chips from that company drives that company’s share price higher. So it’s effectively getting the money that it uses to buy the chips from the increase in the share price of the company it’s investing in.
Does that to you seem like the kind of thing that happens in bubble environments — companies sort of hyping each other up? Or is that more just what you would see normally in an environment where a bunch of companies are working together closely and are growing quickly?
Furman: Like everything here, unfortunately, there’s two sides to this and I wish I could come down for you firmly.
Douthat: No, no, don’t. I’m going to ask you for the case against a bubble in a moment.
Furman: On this being a bubble, it’s the opacity of these arrangements that would make one the most nervous, and also the circularity of them. There was an old phrase that in a gold rush, the way you could guarantee a profit is being the person that sold the picks and shovels to the miners. And the idea was the person went off to find gold, maybe they found it and got rich, maybe they found nothing and ended up poor — but you were guaranteed money.
Right now, instead of selling them the picks and shovels, you’re in some sense lending them the picks and shovels and telling them that you’ll be repaid if they actually strike gold. So, Nvidia would be the one with the picks and shovels and OpenAI would be the one going off looking for gold in this story.
Douthat: Nvidia is making the chips.
Furman: Yeah. Nvidia is making the chips. That’s like a real actual thing. It’s like a pick and a shovel, but a little bit more sophisticated and complicated to make. And if they were selling them all for cash, you’d say they’re pocketing that money. But in some sense, they’re now not just selling them for cash.
It’s essentially as if they’re lending them to OpenAI and they’ll get paid back and they’ll get paid back with multiples if OpenAI succeeds. But if it doesn’t, then they won’t get any money or won’t get as much money as they would’ve gotten for selling those picks and shovels.
Douthat: So, in the gold rush economy, even if there’s less gold at Sutter’s Mill or in the Klondike or wherever else than people thought, at least if you’re an investor, the pick and shovel money is going to prop you up. Whereas here, if there isn’t enough gold out there, your investment in the pick and shovel company is also in deep trouble.
Furman: Yeah, and this is something that’s changed. Six months ago, I’d say Nvidia was the pick and shovel company that was guaranteed to lock something in. Now that’s changed and they’re not getting all the money upfront for selling those picks and shovels to people.
Then the second part of this is just the opacity of it.
We have, in our economy, different ways for a company to get money. One is you sell a bond and bond holders buy it. That’s a way of lending you money.
A second is you go to a bank and the bank is super careful about who they’ll lend money to because they’re incredibly highly regulated.
And the third is you go to what are sometimes called shadow banks. These are companies like Apollo and they lend you money, often with fewer questions asked. They themselves face less regulation. And a lot of the lending that’s happening in this sector is happening with companies like Apollo that are shadow banks that are less regulated.
Now, to date, these are enormously profitable, enormously successful companies. They’re incredibly sophisticated. I would, for the most part, bet on them knowing what they’re doing. But, one has to be just a little bit more nervous about them.
Douthat: OK, now argue the other side. Tell me why this is not at all like the railroad bubbles or the dot-com boom. Why should we not be alarmed about the Shiller index being almost as high as it’s ever been?
Furman: The biggest reason I have — frankly, full disclosure — kept all of my money in broadly diversified index funds and haven’t reduced my exposure —
Douthat: We were going to come around to the personal investment question. So, that’s good to know.
Furman: How I’ve answered this question for myself is I think a lot about a speech that Alan Greenspan made in December 1996 where he said there was irrational exuberance in the market.
There was a lot of reason to think that the market was pretty frothy and pretty bubbly. And what happened after he gave that speech? The stock market ended up almost doubling over the next three plus years and then the bubble burst.
But if you had bought stocks when Alan Greenspan made that remark, and then you lived through the bursting of the bubble and sold at the very bottom of the broad market, you still would’ve made money.
And that type of pattern has repeated over and over again throughout history — that people thought something was a bubble, it went up a whole lot before going down, and it turns out if you call a bubble, but you’re early, that’s not very impressive. That actually means that you were wrong.
That’s very different from almost anything else. Anything else, you predict it and you’re the first one to predict it, you should get lots of credit. If you’re the first one to predict a bubble, you probably were wrong because it went up a whole lot before it went down.
So, getting the timing of these is just much, much harder than knowing that eventually there probably will be one.
As for Palantir, it’s literally disgusting.