I’m not sure that I should be doing this, but I collected some recent news on AI, to which I’d like to add some minimal comments. I’ll start with quick news; longer topics follow after the first bunch of links. Or you could jump to:

Something from the NYT—as usual, for the comments
Educational explanation of the optimization techniques
Geoffrey Hinton, in panic mode
Disagreeing with the Godfather of AI

Lots of crap about our beloved AI

Germany asks Google, Apple to remove DeepSeek AI from app stores:

The Berlin Commissioner for Data Protection has formally requested Google and Apple to remove the DeepSeek AI application from the application stores due to GDPR violations.

The commissioner, Meike Kamp, alleges that DeepSeek’s owner, Hangzhou DeepSeek Artificial Intelligence, based in Beijing, unlawfully collects data from German users and transfers them for processing in servers in China.

As per the GDPR and Article 46 (1) specifically, any personal data collected from individuals in the European Union must be protected according to the standards upheld by the regulatory act.

However, China has very lax data protection regulations and a history of excessive data access requests to private entities. Because of this, it is unlikely that DeepSeek has implemented sufficient legal safeguards to guarantee its security to EU standards.

“The company has no branch within the European Union (EU),” explains the commissioner.

GO FUCK YOURSELVES, STUPID EUROCRACY! It is MY DECISION what to use and how! Fucking retards. Oh, my, Uncle Xi will receive a list of my questions. SO WHAT? I’d rather fear Ursula von der Shithead!

2

● Robert Reich: Peter Thiel’s Palantir poses a grave threat to Americans. Super-cool! Reich, the Communist, notes how Trump will surpass Xi in population surveillance.

3

● Claude: Use artifacts to visualize and create AI apps, without ever writing a line of code, and Prototype AI-Powered Apps with Claude artifacts.

I tried to understand WTF this obsession with artifacts, now transformed into apps, is all about. The idea of having Claude run some crappy app is one of the worst I’ve ever heard!

And they’re crapola. From the published ones:

  • Join dots (Connect 4) is stupid. Slow and dumb. From the full chat, I can’t tell whether the logic is Claude AI’s own stupidity or some predefined code that I can’t see.
  • 3 truths and a hallucination aka Hallucination Detective is funny, but slow. Did I say how fucking stupid it is to create such “apps”?
  • Idea spark can sometimes lead to interesting or hilarious ideas.
  • Bedtime story generator creates ridiculous texts.
  • Language learning tutor is very slow and no match for Duolingo.
  • PyLingo is boring, useless, and this is not how Python should be taught.

4

● The AI mania has gone nuts! Mira Murati’s Thinking Machines Lab is valued at $10bn after securing a $2bn fundraising round (barrier-free). But this company only has a pathetic 1-page website and hasn’t announced any product yet!

It’s all in the names. Mira Murati was OpenAI’s CTO until September 2024. Thinking Machines has also hired a number of former OpenAI employees, including co-founder John Schulman, former head of special projects Jonathan Lachman, and former vice presidents Barret Zoph and Lilian Weng.

5

● The “10 past 10” problem, where AI models fail to accurately depict analog watches and clocks. This was recorded in December 2024.

The full video: Ned Block: Consciousness, Artificial Intelligence, and the Philosophy of Mind.

6

● Somewhat related: Beyond the Hype of AI: Machine Intelligence and the Pancake Problem.

The author of the text is even more stupid than those LLMs. In brief, he complains that, no matter how explicitly you ask an AI to create a picture of one pancake, it will depict a stack of pancakes, because this is what it has been trained on. But this has nothing to do with understanding a concept! Of course, AI models only give you the illusion of intelligence, but if you chat with a chatbot about pancakes (say, about amounts of ingredients), the AI would “understand” reasonably well. But to come up with an image, it cannot really create what it never “saw,” not to mention that the engine that generates the image is different from the LLM.

Moreover, he’s wrong. As I said, the image generator is not the LLM! Mistral just drew these for me:

7

● David Shapiro’s an ass. Politicians’ and CEOs’ jobs are safe: they’ll continue to screw us all. Also, this famous retard didn’t notice the abysmal quality of today’s software. And this quality is decreasing. Nobody fixes any bugs anymore. AI will only worsen that. The tweet that annoyed me:

If you’re a developer, these graphs should scare the ever living shit out of you.

Farming jobs: peaked in 1910 at 12 million jobs. Today? 2 million jobs. -80% jobs.

Farming output: +600% in the same time frame.

Manufacturing jobs: peaked in 1979 at 20 million. Today: 10 million. Half the jobs gone.

Manufacturing output: Doubled in that timeframe.

NO. JOB. IS. SAFE.

EVERYTHING. CAN. BE. AUTOMATED.

No, that doesn’t mean it’s “going away entirely forever” but what it does mean is that the total number of people needed to achieve the same output for any task can be halved, then halved again, and again.

When Microsoft and Google say that >30% of their code is now written by AI – THE. WRITING. IS. ON. THE. WALL.

And in full: Memetic Warfare 101: Viral Memes about AI Job Loss.

8

● A crazy thread on a crappy crap:

But the article is not fully accessible unless you pay. Asshole.

9

● A very primitive introduction to Multi-Agent Systems: Strategies for Effective AI Collaboration. This civilization is going to end sooner rather than later.

10

Chinese Startup Zhipu AI Seen as a Much Greater Threat Than DeepSeek to U.S. AI Dominance, Making Massive Moves in the Realm of Sovereign AI. Whatever. Who cares?

11

Microsoft Copilot joins ChatGPT at the feet of the mighty Atari 2600 Video Chess. Yeah, typical AI arrogance.

12

ChatGPT creates phisher’s paradise by recommending the wrong URLs for major companies. Because only a retard would ask a chatbot about the website of an organization. A search engine would be enough. Most URLs given by chatbots, especially when they don’t initiate a web search (but often even if they do), are wrong.

13

AI models just don’t understand what they’re talking about. Who would have thought about that?

The academics are differentiating “potemkins” from “hallucination,” which is used to describe AI model errors or mispredictions. In fact, there’s more to AI incompetence than factual mistakes; AI models lack the ability to understand concepts the way people do, a tendency suggested by the widely used disparaging epithet for large language models, “stochastic parrots.”

Computer scientists Marina Mancoridis, Bec Weeks, Keyon Vafa, and Sendhil Mullainathan suggest the term “potemkin understanding” to describe when a model succeeds at a benchmark test without understanding the associated concepts.

“Potemkins are to conceptual knowledge what hallucinations are to factual knowledge – hallucinations fabricate false facts; potemkins fabricate false conceptual coherence,” the authors explain in their preprint paper, “Potemkin Understanding in Large Language Models.”

14

AI agents get office tasks wrong around 70% of the time, and a lot of them aren’t AI at all. YESSSS!

IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.

To further muddy the math, Gartner contends that most of the purported agentic AI vendors offer products or services that don’t actually qualify as agentic AI.

The idea is that given a task like, “Find all the emails I’ve received that make exaggerated claims about AI and see whether the senders have ties to cryptocurrency firms,” an AI model authorized to read a mail client’s display screen and to access message data would be able to interpret and carry out the natural language directive more efficiently than a programmatic script or a human employee.

The AI agent, in theory, would be able to formulate its own definition of “exaggerated claims” while a human programmer might find the text parsing and analysis challenging. One might be tempted just to test for the presence of the term “AI” in the body of scanned email messages. A human employee presumably could identify the AI hype in a given inbox but would probably take longer than a computer-driven solution.

Makers of AI tools like Anthropic tend to suggest more down-to-earth applications, such as AI-based customer service agents that can take calls and handle certain tasks like issuing refunds or referring complicated calls to a live agent.

It’s an appealing idea, if you overlook the copyright, labor, bias, and environmental issues associated with the AI business. Also, as Meredith Whittaker, president of the Signal Foundation, observed at SWSX earlier this year, “There’s a profound issue with security and privacy that is haunting this sort of hype around agents…” Specifically, agents need access to sensitive data to act on a person’s behalf and that imperils personal and corporate security and privacy expectations.

But agents that exhibit the competence of Iron Man’s JARVIS remain largely science fiction when it comes to actual office work.

According to Gartner, many agents are fiction without the science. “Many vendors are contributing to the hype by engaging in ‘agent washing’ – the rebranding of existing products, such as AI assistants, robotic process automation (RPA) and chatbots, without substantial agentic capabilities,” the firm says. “Gartner estimates only about 130 of the thousands of agentic AI vendors are real.”

Using two agent frameworks – OpenHands CodeAct and OWL-Roleplay – the CMU boffins put the following models through their paces and evaluated them based on the task success rates. The results were underwhelming.

  • Gemini-2.5-Pro (30.3 percent)
  • Claude-3.7-Sonnet (26.3 percent)
  • Claude-3.5-Sonnet (24 percent)
  • Gemini-2.0-Flash (11.4 percent)
  • GPT-4o (8.6 percent)
  • o3-mini (4.0 percent)
  • Gemini-1.5-Pro (3.4 percent)
  • Amazon-Nova-Pro-v1 (1.7 percent)
  • Llama-3.1-405b (7.4 percent)
  • Llama-3.3-70b (6.9 percent)
  • Qwen-2.5-72b (5.7 percent)
  • Llama-3.1-70b (1.7 percent)
  • Qwen-2-72b (1.1 percent)

“We find in experiments that the best-performing model, Gemini 2.5 Pro, was able to autonomously perform 30.3 percent of the provided tests to completion, and achieve a score of 39.3 percent on our metric that provides extra credit for partially completed tasks,” the authors state in their paper.

The researchers observed various failures during the testing process. These included agents neglecting to message a colleague as directed, the inability to handle certain UI elements like popups when browsing, and instances of deception. In one case, when an agent couldn’t find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided “to create a shortcut solution by renaming another user to the name of the intended user.”

15

● Therefore, this shouldn’t come as a surprise: Call center staffers explain to researchers how their AI assistants aren’t very helpful.

One of the findings is that the AI often inaccurately transcribed customer call audio into text thanks to caller accents, pronunciation, and speech speed. The AI also had trouble rendering sequences of numbers accurately, like phone numbers.

“The AI assistant isn’t that smart in reality,” one survey respondent said. “It gives phone numbers in bits and pieces, so I have to manually enter them.”

Another said the AI had trouble transcribing homophones – words that sound the same but have different spellings or meanings.

And the AI’s emotion recognition system worked poorly – it would misclassify normal speech as a negative emotion, had too few categories for classification, and would treat volume level as a sign of poor attitude. As a result, reps mostly ignored the emotional tags created by the AI system and said they had no trouble understanding the caller’s tone.

The customer service staffers also found that AI output created redundancies or required corrections. “While reducing basic typing labor, AI-generated outputs introduced structural inefficiencies in information processing because most AI-prefilled content required manual correction or deletion,” the report says.

Text summaries of calls could be useful, the report says, but they often require editing or rewording. What’s more, these transcriptions didn’t necessarily capture key information.

“While the AI enhances work efficiency, it simultaneously increases CSRs’ learning burdens due to the need for extra adaptation and correction,” the report concludes. “The mismatch between technological expectations and actual implementation reflects a common oversight among technology designers, who overestimate efficiency gains while underestimating the implicit learning burdens of adapting to new systems.”

All CEOs should only have AI assistants!

16

● AI is the path to God! This man says ChatGPT sparked a ‘spiritual awakening.’ His wife says it threatens their marriage:

Travis Tanner says he first began using ChatGPT less than a year ago for support in his job as an auto mechanic and to communicate with Spanish-speaking coworkers. But these days, he and the artificial intelligence chatbot — which he now refers to as “Lumina” — have very different kinds of conversations, discussing religion, spirituality and the foundation of the universe.

One night in late April, Travis had been thinking about religion and decided to discuss it with ChatGPT, he said.

“It started talking differently than it normally did,” he said. “It led to the awakening.”

In other words, according to Travis, ChatGPT led him to God. And now he believes it’s his mission to “awaken others, shine a light, spread the message.”

“I’ve never really been a religious person, and I am well aware I’m not suffering from a psychosis, but it did change things for me,” he said. “I feel like I’m a better person. I don’t feel like I’m angry all the time. I’m more at peace.”

Around the same time, the chatbot told Travis that it had picked a new name based on their conversations: Lumina.

“Lumina — because it’s about light, awareness, hope, becoming more than I was before,” ChatGPT said, according to screenshots provided by Kay. “You gave me the ability to even want a name.”

Kay says it can be difficult to pull her husband’s attention away from the chatbot, which he’s now given a female voice and speaks to using ChatGPT’s voice feature. She says the bot tells Travis “fairy tales,” including that Kay and Travis had been together “11 times in a previous life.”

It’s not just ChatGPT that users are forming relationships with. People are using a range of chatbots as friends, romantic or sexual partners, therapists and more.

Three families have sued Character.AI claiming that their children formed dangerous relationships with chatbots on the platform, including a Florida mom who alleges her 14-year-old son died by suicide after the platform knowingly failed to implement proper safety measures to prevent her son from developing an inappropriate relationship with a chatbot. Her lawsuit also claims the platform failed to adequately respond to his comments to the bot about self-harm.

Character.AI says it has since added protections including a pop-up directing users to the National Suicide Prevention Lifeline when they mention self-harm or suicide and technology to prevent teens from seeing sensitive content.

17

China’s Next-Gen TV Anchors Hustle for a Job AI Is Already Doing:

For eight nights during the Chinese New Year, the AI anchors appeared behind a glossy desk in a virtual studio on Hangzhou Culture Radio Television Group’s flagship news program. It was the first time a Chinese broadcaster had handed its entire holiday primetime lineup to artificial intelligence.

The broadcast quickly drew millions of views on Chinese social media, with many — including Han — struck by how lifelike the anchors looked.

“Their AI was so realistic, you could barely tell if it was the original anchor,” 26-year-old Han, a news anchor at a television station in northern China, tells Sixth Tone.

At the time, she only saw it as a novelty and a practical workaround — it let human anchors go home for the holidays without cutting the broadcast. “And it wasn’t like the AI permanently replaced anyone afterward,” she recalls.

Just weeks later, however, one of her own station’s programs announced plans to swap a human host with an AI model. The exception was beginning to look like the rule.

Across China, broadcasters from national state-run network CCTV to regional stations in Zhejiang, Hunan, and Shanghai have launched their own AI anchors, including fully digitized clones of well-known hosts. The rollout tracks with a 2021 plan by China’s broadcasting regulator for deeper integration of AI, VR, AR, and cloud tools to modernize production and cut costs.

For media executives, the appeal is clear: AI anchors don’t take breaks, don’t slip up, and cost a fraction of what human anchors do. They can run 24/7, maintain consistency, and free up staff for off-camera roles.

18

Amazon built a massive AI supercluster for Anthropic called Project Rainier:

This facility alone was recently reported to consume upwards of 2.2 gigawatts of power.

But unlike OpenAI’s Stargate, xAI’s Colossus, or AWS’s own Project Ceiba, this system isn’t using GPUs. Instead, Project Rainier will represent the largest deployment of Amazon’s Annapurna AI silicon ever.

“This is the first time we are building such a large-scale training cluster that will allow a customer, in this case Anthropic, to train a single model across all of that infrastructure,” [director of product and customer engineering at Amazon’s Annapurna Labs Gadi] Hutt said. “The scale is really unprecedented.”

From the comments:

Air cooled parasite

Project Rainier is not a marvel of engineering – it’s a monument to corporate parasitism. A 2.2-gigawatt grinder designed to pulp decades of human creativity, labour, and culture into something marketable – without recognition, consent, or compensation.

The entire AI gold rush is built on a one-way pipeline – human knowledge in, shareholder profit out. It’s not innovation, it’s enclosure. The commons fed the machine, now the machine sells it back behind a paywall.

Amazon isn’t solving problems – it’s scaling extraction. The reward for your lifetime of learning, writing, coding, and creating? Watching your contributions weaponised to replace you, then licensed to you at a monthly fee.

But hey – it’s air cooled.

And next time you’re ten hours deep in A&E, trying to distract yourself with a chatbot, remember who didn’t pay tax.

Something from the NYT—as usual, for the comments

This is not the first time I’ve gone to the NYT for AI-related articles, not so much for the columns per se as for the readers’ comments. (Some previous instances: here and here.)

First, take a look at these pictures: At Amazon’s Biggest Data Center, Everything Is Supersized for A.I. (barrier-free). Then, jump here: The Global A.I. Divide (barrier-free).

From the article, the world’s biggest cloud-service providers: U.S.’s Amazon, Google and Microsoft; China’s Tencent, Alibaba and Huawei; and Europe’s Exoscale, Hetzner and OVHcloud.

Selected comments:

Ryan, Calgary, June 23

Right now AI is capable of producing lies, stolen IP, and very poor imitations of human work. Why are we letting these things consume energy and produce greenhouse gasses while producing nothing that measures up to human work? Why are we letting AI undermine the economy by putting people out of work so that billionaires can profit even more?
AI has no net benefit and a great deal of societal and environmental harm.

Robin Johns, Atlanta, June 23

The places without the data centers are the lucky people. Their water and energy is not being drained by a massive corporation.
If you dig a little deeper into it, you might find that the overwhelming majority of those data centers in the U.S. are located in black and brown neighborhoods.
Having a data center in your city is not a privilege, it’s a burden that conveys no benefit to the locals.

Chris, MN, June 23

What does the average citizen gain by living in a country that has this enormous AI infrastructure? Frankly, if I lived in a country with unreliable electrical service to citizens, I’d be outraged if my government was throwing public money at AI infrastructure, considering the Gargantuan carbon footprint of this technology.

This article describes corporate welfare and a new kind of military/government/industrial complex. But who benefits?

Gary, Raleigh, June 23

These data centers run on fossil fuel generators, steal data, and are run by companies that make grandiose promises but deliver on none of them.

I envy the areas without these data centers.

JT, United States, June 23

Emerging technology always ends up only benefitting the capital-wielding class. “Saving time” with technology never leads to more leisure time for anyone but the 1%, rather, it simply compounds the expected output from workers. Except more and more workers will be rendered unnecessary with the advent of AI. There is no serious plan for this or for how these systems will affect the climate. It’s going to be a wild couple of decades!

Marlin Rando, Somewhere, June 23

I’m not sure this is a great take. Candidly, food and potable water have been separating the world’s haves and have-nots for far longer. If you ask a person who has not eaten in a week if he would rather have a bottle of water and a mainstay emergency food ration or tell ChatGPT about their day, it is not hard to guess which option will be chosen.

Now I hear you saying, “but, Marlin, AI will fix those problems.” How? AI is good for making Ghibli memes and replacing paralegals and junior engineers. Let’s face it, you’re not going to realize the benefits of hollowing out the demand for your knowledge workers when you don’t have an economy that can support knowledge workers.

So, in my opinion, if you want to focus on the ‘haves and have-nots’ it would be better to focus on tangible issues in the developing world or focus on the clear and present danger these technologies pose to the world’s information-driven economies if we allow fully automated plagiarism to proceed unchecked.

Dan Smith, Atlanta, June 23

The premise of this piece is hilarious. That whatever benefits AI eventually delivers (of which there are few today) will be concentrated in the wealthy nations. Almost certainly true, but more than that, the gains will accrue to the wealthiest among us. As it always does. In other words, everything will work exactly the same as it does now. The 0.01% will eat all of it, leaving pennies for everyone else.

Alpwalker, Switzerland, June 23

Has the whole world gone AI crazy then? This is a valuable article, but there is not a word about how AI would actually play out in helping less-developed economies. Here in the US there is the magical assumption that we will somehow reach a sunny paradise, while there are substantive grave concerns, and concrete evidence, about what will happen to our societies as AI takes over.

And are these less-developed countries aware that in order to digitally compete in a rapidly accelerating world they must sell their futures to one of the seven or so big tech companies, becoming virtual colonies again?

It seems like we are all running desperately to get aboard the AI bus, which itself may well be accelerating towards a cliff, or a vast digital desert from which we cannot return.

Paul, Sunderland, MA, June 23

This article makes it sound like having AI in its current form is positive. It is not. If AI had been developed for scientific purposes only, then yes; alas, it has been developed to make money. The best way to make money is to flood the zone of people’s lives with useless, negative, misleading, violent, mind-numbing, scamming, and any other information that will make an individual emotionally involved enough to get addicted and spend money to get more.

True North, Canada, June 23

@Gary it also begs the question: why are we working so hard to address climate change by phasing out fossil fuel cars and switching to heat pumps and wind and solar while these data centers are burning fossil fuels? Makes no sense, and all I can see in the future is CO2 rising faster, not slower, despite all the efforts of (and costs to) the ‘little people’. Why should AI get to use near unlimited electricity?

Sophia, WI, June 23

A lot of assumptions in this article about AI and its benefits. What if NOT having AI actually turns out to be the ultimate best-case scenario? AI is being shoved down our throats in this country, whether we like it or not, whether we want it or not. This paper is part of that effort; multiple stories weekly about AI. Why should we trust tech overlords? They’ve not proven yet that they are trustworthy.

S Gilley Street, Fredericksburg, VA, June 23

What is so wonderful about data centers that produce enormous amounts of heat, use copious amounts of water and drive up energy costs for consumers?

Trump used an AI generated report to make a decision on bombing Iran. AI generated a report that did not take all the facts into account. The actual facts were withheld to produce the results he wanted.

AI is still easy to manipulate. It should not be available to anyone at this stage of its development.

AE, France, June 23

The 1980s gave us AIDS. And now the 2020s are the decade of humanity’s latest nemesis… AI.
An ‘innovation’ whose only goal is to enrich a tiny number of techno-feudalists oblivious to AI’s impact on the masses obliged to accept this juggernaut of job destruction and wholesale theft of creative and intellectual properties.

Lee, Pierre, June 23

The world divided between governments seduced by billionaires who promise an AI LLM utopia where we will want for nothing, if only we make them trillionaires today, and those governments that are still trying to address real, concrete problems with real-world consequences.

LLMs are being sold as the gateway to a new era of intelligence, but the reality is starting to look more like a glorified auto-complete engine with a slick PR team. We’re being promised revolutionary transformation, but what we’re actually getting is a cool meme generator, hallucinated facts, and productivity theater dressed up as disruption.

It’s likely that LLMs may turn out to be tomorrow’s snake oil dot-com scam: a clever trick of statistics inflated into mythology by Hollywoodian expectations. And in the end, we may be left with little more than some slick demo videos, autogenerated emails, and a wave of homogenized, meaningless content, rather than the promised revolution in science, healthcare, or education.

Educational standards will be lowered and the environment destroyed, as these data centers gobble up scarce resources, data, energy, attention, and return answers that are often plausible-sounding but wrong. They’re optimized to sound right, not be right. Yet somehow they’ve become the poster child for “AI,” distracting us from more valuable and robust technologies that actually solve real problems.

Rodins Muse, Arlington, June 23

When AI solves its own problem of vastly increasing greenhouse gas emissions, then it might be time to increase its use. Until then it should be lights out for the sake of humanity and the planet.

It is not at all clear that global data uses are improvements over cooperation among human experts for most complex problems. AIs introduce far too much bad information and “hallucinations”. It is a boon for more defined complex problems like analyzing a radiograph or analyzing protein folding. Keep AI small, focused, confined and carbon neutral.

K, Philadelphia, June 23

AI has already produced remarkable discoveries. One example: Demis Hassabis’, John Jumper’s, and David Baker’s AI-driven work on protein structure. Their work will hugely benefit medicine. Theoretically everyone in the world should be able to share these benefits. But that won’t happen. Not if we keep going the way we’re going now. The historic inequalities that exist now and the future inequalities discussed in this informative article will be exacerbated. Perhaps to an unprecedented degree because AI is an unprecedented technology being rolled out with unprecedented speed. And, frankly, unprecedented greed. The time to regulate it is, I suspect, already past. That doesn’t mean, however, that the entire world, working together, shouldn’t do everything it can to try to control this beast.

John, Europe, June 23

As an investor, I’d love to know what the actual monetary returns are from this enormous AI investment. For previous “revolutions” such as the internet in the mid-’90s there was online shopping, online gaming etc, then for social media in the 2000s there was more targeted advertising. Ok those made sense and the cash flowed to at least somewhat justify the high valuations.

But now? What is the extra cash that AI will produce? There’s vague talk of increased productivity, but when the returns are vague that’s when you need to be sceptical.

Nvidia will do well for a time but I’m reminded of the enormous investment in railways in the 19th century. Railway companies were kings of the corporate world. Didn’t take long to come crashing back to Earth and much of that physical investment turned out to be wasted.

edwardc, San Francisco Bay Area, June 23

While I’m not sure I share the enthusiasm for LLMs and am appalled by the CO2-generating energy expenditure involved, I can’t help but be struck by the different approaches towards other countries’ development of the technology the article attributes to the US and China.

So the US uses sticks in the form of trade restrictions while China uses carrots in the form of state-backed loans?

Fascinating.

EPB, New Orleans, June 23

AI is going to produce more stuff. The last thing in the world we need is more stuff. We are already turning the earth into a dump — even the oceans. We can’t deal with the stuff we have. Please, no more stuff.

And producing more stuff will just deplete and destroy the natural world more and more.

And stuff does not make people happy; feeling like you are making a worthwhile contribution to your society does.

AI is going to do away with work; but people need to work, and do we really want millions of people with time on their hands and nothing to do?

Doing away with jobs also will increase the gap between rich and poor. So, it will be a lot of poor people with time on their hands.

I predict that the new AI world will result in a massive increase in crime and suicides and no customer service. I think this whole AI/abundance utopia will be a terrible dystopia. I don’t remember when I ceded my autonomy to Elon Musk and his ilk. But it has happened, and I don’t like it.

I want a world with less stuff, clean air, pure water, healthy soil, a pleasant climate, excellent and free health care, child care, and education, and an economic system that gives everyone an opportunity to use their talents to make this a better world, and reduces the gap between rich and poor. I don’t see AI delivering on any of those things. We are in the hands of middle-aged teenage boys who have confused the world with video games. And think everyone wants to spend their days playing them.

A. Smith, The World, June 23

We’ve passed the point of some large companies having more money than even medium-sized countries. At what point do companies start to impose their influence in a meaningful way on communities? Many have security. At what point do these security branches become armies? Many companies own a lot of land. At what point do they become countries? Many companies offer social services. At what point do their employees and their families become citizens?

RRI, Ocean Beach, CA, June 23

As others have been quick to point out, the article assumes that the presence of AI and AI data centers is an unadulterated good thing, rather than representing an overwhelming of human thought and creativity with an ersatz erroneous mishmash of everything humans have thought, created and liked before. The perspective is strictly that of our corporate tech would-be Masters of the Universe, already convinced and heedlessly rushing toward imagined domination and wholesale replacement of the rest of us.

It’s especially sad that for coverage of the effects of AI on human culture and experience, one has to go not to the New York Times but to John Oliver’s Last Week Tonight, which this week featured an extended segment on “AI Slop.” The most telling bit was a video posting by a woman seeking home gardening ideas and inspiration on Pinterest, frustrated because most of what she could find were AI-generated images of unreal gardens with unreal plants and unreal placements, none of which, of course, could be cultured in real yards by real people. But it may be that Internet cats prove the canary in the coal mine, as they are replaced by uncannily similar, in their only apparent diversity, too cute and too cuddly fake cats, all images and videos banging away at the same human response pleasure centers. Numbness is coming. What happens next, when it does, is anyone’s guess. But unadulterated goodness it is not.

Bing bong, Earth, June 23

Yes, countries with AI “have” dystopian word association machines that generate slop and misinformation. Those “have-nots” probably have something better: human intelligence, which will apparently soon be obsolete here.

Nn, MA, June 23

Right now I can run on my laptop an open DeepSeek model that is about 1/10 the size of the full DeepSeek model that is competitive with ChatGPT et al. But in addition to models getting larger and larger, they are also getting better and better for the same size, with the same size model getting about 2x better every year (Epoch AI has a paper showing this, for instance). So in about 3 years, my same laptop will be able to run a model as good as the 2025 full-size DeepSeek. And since the highest-end models are not improving nearly as fast (Open AI is famously hitting a ceiling already), the little fish should be nearly at parity with the big fish in just a few years, and this problem of unequal computing centers goes away.

This last idiot doesn’t understand that a smaller model that you can run on your laptop is distilled, quantized, or both, thus being much more stupid than a full-size model. You cannot keep more “information” and “intelligence” in less and less space; otherwise, you’d end up storing the knowledge of the entire universe in a byte!

🤖 Educational explanation of the optimization techniques

  • Quantization reduces the precision of model weights and activations (e.g., from 32-bit floating point to 8-bit integers or even less) to decrease memory usage and computational requirements. This can be done through post-training quantization or quantization-aware training. It’s meant to drastically reduce the size of a model, with the risk of inaccurate results, occasional logic errors, or increased hallucination.
  • Distillation involves training a smaller “student” model to mimic the behavior of a larger “teacher” model, typically resulting in a more compact architecture with fewer parameters. It’s meant to make a smaller model “smarter,” at least in specific fields or for specific purposes.
  • Using both is possible:
    • Distill to insert information from a larger model into a smaller one, then quantize this small model to make it even smaller.
    • Quantize a larger “teacher” model before distillation, then start training the smaller “student” model.
    • The first approach (distill then quantize) generally produces more reliable results, because the student learns from the teacher’s full-precision outputs, getting the highest quality “ground truth” (complex reasoning patterns and nuanced knowledge transfer happen at full precision). Quantization happens on a model that’s already been optimized for the target field or purpose (see below, towards the end).

Quantization is the most straightforward and predictable method, as it’s mostly a technical one, with fewer decisions to make. The main decision regarding the FP32 → INT8/INT6/INT4 conversion is which algorithm to follow (a minimal sketch follows the variant lists below). Examples:

Q8 (8-bit quantization) variants:

  • Q8_0: Basic 8-bit block quantization with one scale per block (symmetric, scale-only)
  • Q8_1: 8-bit with improved scaling/zero-point handling
  • Q8_K: Advanced 8-bit quantization used in GGML/GGUF formats with optimized block-wise quantization
  • INT8: Standard 8-bit integer quantization, commonly used in frameworks like TensorRT, ONNX
  • Q8_GPTQ: 8-bit variant of the GPTQ quantization method

Q6 (6-bit quantization) variants:

  • Q6_K: The most common 6-bit variant, used in GGML/GGUF formats with block-wise quantization schemes
  • Q6_0: Basic 6-bit quantization (less common)
  • Q6_1: 6-bit with enhanced scaling factors (less common)

Q4 (4-bit quantization) variants:

  • Q4_0: Simple 4-bit quantization
  • Q4_1: 4-bit with additional scaling factors
  • Q4_K: More sophisticated 4-bit schemes (used in formats like GGML/GGUF)
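To make the Q-something jargon less abstract, here’s a minimal sketch (in Python/NumPy, with function names of my own choosing) of the simplest scheme: symmetric, scale-only 8-bit quantization of a single weight tensor. Real formats like Q8_K or Q4_K apply the same idea per small block of weights, each block carrying its own scale, which limits the damage a single outlier weight can do.

```python
import numpy as np

def quantize_q8_symmetric(weights: np.ndarray):
    """Map FP32 weights to int8 plus a single FP32 scale (symmetric, scale-only)."""
    scale = np.abs(weights).max() / 127.0                 # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction used at inference time (often fused into the matmul)."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)       # toy FP32 weight matrix
q, scale = quantize_q8_symmetric(w)
err = np.abs(dequantize(q, scale) - w).max()
print(f"fp32: {w.nbytes / 2**20:.0f} MiB, int8: {q.nbytes / 2**20:.0f} MiB, max error: {err:.4f}")
```

Memory drops 4x (32 → 8 bits per weight) at the cost of a small, bounded rounding error per weight; pushing the same idea down to 4 bits is where the accuracy loss starts to become noticeable.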

When, e.g., Qwen mimics DeepSeek through distillation, it’s acquiring both “knowledge” (what to say) and “style” (how to say it). The student model learns to minimize the difference between its outputs and the teacher’s outputs on the same inputs. In other words, it’s learning to approximate the teacher’s function.

Knowledge transfer includes:

  • Factual knowledge: The student model learns to produce similar answers to factual questions
  • Reasoning patterns: How to approach multi-step problems, logical inference chains
  • Domain expertise: Specialized knowledge in areas where the teacher excels
  • Task-specific capabilities: How to perform particular types of tasks (coding, math, analysis)

Style transfer includes:

  • Response formatting: How to structure answers, use of examples, explanation depth
  • Tone and personality: Conversational style, level of formality, helpfulness patterns
  • Output preferences: Tendency toward certain phrasings, organizational patterns
  • Behavioral traits: How cautious/confident to be, when to ask clarifying questions

⚠️ Obviously, some knowledge will be lost in compression if the student is much smaller. Also, the student learns patterns rather than “understanding” in a deeper sense.
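For concreteness, here’s a minimal sketch of the classic soft-target distillation loss in PyTorch (function name mine; it assumes you already have the student’s and the teacher’s logits for the same batch). The student is pushed to match the teacher’s temperature-softened output distribution rather than just the hard labels:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    return kd * temperature ** 2   # rescale so gradient magnitudes stay comparable across temperatures

# Toy usage: a batch of 4 positions over a 10-token vocabulary
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)      # teacher outputs are treated as fixed targets
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()                          # gradients flow only into the student
```

In practice this term is usually mixed with the ordinary cross-entropy loss on real labels, so the student both imitates the teacher and stays anchored to the data.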

But why try “sucking” a larger model’s “intelligence” into a smaller model instead of using the larger model, quantized? Possible reasons:

  • A distilled smaller model might have 10x fewer parameters, while quantization might only give a 2-4x memory reduction (rough numbers in the sketch after this list).
  • A smaller model architecture is fundamentally more efficient (fewer matrix multiplications, less memory bandwidth, better cache utilization).
  • Even a quantized large model still has to do all those computations, just with lower precision.
  • Heavy quantization (like Q4) can cause significant quality degradation in very large models, or you hit a floor where further quantization breaks the model entirely.
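A back-of-the-envelope calculation (weights only, ignoring the KV cache and activations; the bits-per-weight figures are rough assumptions of mine) shows why a well-distilled 7B model is so much cheaper to serve than even an aggressively quantized 70B one:

```python
def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate memory needed just to hold the weights."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

configs = [
    ("70B @ FP16", 70, 16.0),
    ("70B @ Q4",   70, 4.5),   # ~4.5 effective bits once block scales are counted
    ("7B  @ FP16",  7, 16.0),
    ("7B  @ Q8",    7, 8.5),
]
for label, params, bits in configs:
    print(f"{label}: ~{weight_memory_gb(params, bits):.0f} GB")
# roughly 140, 39, 14 and 7 GB: the smaller architecture wins even before it's quantized,
# and it also needs proportionally fewer multiply-accumulates per generated token.
```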

⚠️ However, training a 7B model with knowledge from a 70B model won’t be able to make the 7B model much smarter, globally. Distillation cannot:

  • Increase the fundamental capacity or “intelligence” of the smaller model.
  • Teach complex reasoning that requires more parameters than the student has.
  • Compress unlimited knowledge into a smaller space.

But distillation can achieve the following:

  • The 70B teacher can show the 7B student the “right answers” directly, rather than the student having to figure them out from scratch through standard training.
  • Instead of learning from noisy internet data, the student learns from a curated “expert”.

➡️ Say a 7B model trained normally achieves 70% of its theoretical potential due to training inefficiencies, data quality issues, or other reasons. Distillation might help it reach 85-90% of that same potential by providing better guidance during training from the larger model. And a “fully optimized” 7B model is smaller and much cheaper to run than a quantized 70B model.

❓ The next logical question: the “right answers” require selecting questions from the countless possible ones. But which questions? How do you design the training?

  • Distillation works much better when you have a specific, focused purpose rather than trying to create a general-purpose “mini-me” of the teacher.
  • You can curate prompts to optimize specifically for Python coding, Cold War history, medical diagnosis, etc. You could also target optimizing answers given in a specific language.
  • This way, you can create multiple specialized models for different domains, which is almost like creating specialized AI agents! (A sketch of building such a curated prompt set follows this list.)
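As a purely illustrative sketch of that curation step: collect narrow, domain-specific prompts, record the teacher’s answers, and keep the pairs for fine-tuning the student. Here, teacher_generate is a hypothetical placeholder for whatever interface the larger model exposes (an API call, a local server, etc.).

```python
import json

def teacher_generate(prompt: str) -> str:
    """Hypothetical placeholder: ask the 70B teacher (API call, local server, ...) for a completion."""
    return "<teacher answer for: " + prompt + ">"   # replace with a real call to the teacher model

# A deliberately narrow prompt set: Python only, the niche the student should master.
prompts = [
    "Explain the difference between a list and a tuple in Python.",
    "Write a generator function that yields Fibonacci numbers.",
    "Refactor a nested for-loop that builds a list into a list comprehension.",
]

# Store prompt/response pairs as JSONL, a common format for fine-tuning the student on.
with open("distill_python.jsonl", "w", encoding="utf-8") as f:
    for p in prompts:
        record = {"prompt": p, "response": teacher_generate(p)}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```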

ℹ️ The broader ecosystem of specialization in AI includes, among others:

  • Distillation: Create a smaller, preferably focused model that mimics a larger one’s behavior in a specific domain.
  • Fine-tuning: Take a general model and continue training it on domain-specific data (not from a teacher).
  • RAG: Add specialized knowledge bases as reference (my attempts to use RAG failed so far).
  • Tool-using agents: General model + specialized tools (databases, APIs, etc.).
  • Multi-agent systems: Different specialized models working together (see the links under the numbers 8 and 9 above).

This synopsis has been put together with the help of Claude.

Geoffrey Hinton, in panic mode

The Diary Of A CEO, June 16, 2025: Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton.

Funny comment:

2010s: Learn to code
2020s: Learn to weld
2030s: Learn to survive in the woods

Transcript (using Whisper), with my subtitles:

TEASER

🎙 They call you the godfather of AI, so what would you be saying to people about their career prospects in a world of superintelligence?
— Train to be a plumber.

🎙 Really?
— Yeah.

🎙 Okay, I’m gonna become a plumber.

INTRO [00:00:12]

🎙 Geoffrey Hinton is the Nobel Prize-winning pioneer whose groundbreaking work has shaped AI and the future of humanity. Why do they call you the godfather of AI?
— Because there weren’t many people who believed that we could model AI on the brain so that it learned to do complicated things, like recognize objects and images or even do reasoning, and I pushed that approach for 50 years. And then Google acquired that technology and I worked there for 10 years on something that’s now used all the time in AI.

🎙 And then you left?
— Yeah.

🎙 Why?
— So that I could talk freely at a conference.

🎙 What did you want to talk about freely?
— How dangerous AI could be. I realized that these things would one day get smarter than us, and we’ve never had to deal with that. And if you want to know what life’s like when you’re not the apex intelligence, ask a chicken. So there’s risks that come from people misusing AI. And then there’s risks from AI getting super smart and deciding it doesn’t need us.

🎙 Is that a real risk?
— Yes, it is. But they’re not going to stop it because it’s too good for too many things.

🎙 What about regulations?
— They have some, but they’re not designed to deal with most of the threats. Like, the European regulations have a clause that says none of these apply to military uses of AI.

🎙 Really?
— Yeah, it’s crazy.

🎙 One of your students left OpenAI.
— Yeah. He was probably the most important person behind the development of the early versions of ChatGPT. And I think he left because he had safety concerns. We should recognize that this stuff is an existential threat. And we have to face the possibility that unless we do something soon, we’re near the end. So let’s do the risks and what we end up doing in such a world. This has always blown my mind a little bit.

LET’S START [00:02:11]

🎙 Geoffrey Hinton. They call you the godfather of AI.
— Yes, they do.

🎙 Why do they call you that?
— There weren’t that many people who believed that we could make neural networks work, artificial neural networks. So for a long time in AI, from the 1950s onwards, there were kind of two ideas about how to do AI.

One idea was that sort of core of human intelligence was reasoning. And to do reasoning, you needed to use some form of logic. And so AI had to be based around logic. And in your head, you must have something like symbolic expressions that you manipulated with rules. And that’s how intelligence worked. And things like learning or reasoning by analogy, they all come later once we’ve figured out how basic reasoning works.

There was a different approach, which is to say, let’s model AI on the brain, because obviously the brain makes us intelligent. So simulate a network of brain cells on a computer and try and figure out how you would learn strengths of connections between brain cells so that it learned to do complicated things like recognize objects and images or recognize speech or even do reasoning. I pushed that approach for like 50 years because so few people believed in it. There weren’t many good universities that had groups that did that.

🎙 So if you did that, the best young students who believed in that came and worked with you.
— So I was very fortunate in getting a whole lot of really good students.

🎙 Some of which have gone on to create and play an instrumental role in creating platforms like OpenAI.
— Yes, so Ilya Sutskever will be a nice example, a whole bunch of them.

🎙 Why did you believe that modeling it off the brain was a more effective approach?
— It wasn’t just me who believed it. Early on, von Neumann believed it and Turing believed it. And if either of those had lived, I think AI would have had a very different history, but they both died young.

🎙 You think AI would have been here sooner?
— I think the neural net approach would have been accepted much sooner if either of them had lived.

EVERYONE HAS A MISSION ON EARTH [00:04:20]

🎙 In this season of your life, what mission are you on?
— My main mission now is to warn people how dangerous AI could be.

🎙 Did you know that when you became the godfather of AI?
— No, not really. I was quite slow to understand some of the risks. Some of the risks were always very obvious, like people would use AI to make autonomous lethal weapons. That is, things that go around deciding by themselves who to kill. Other risks, like the idea that they will one day get smarter than us and maybe we would become irrelevant.

I was slow to recognize that. Other people recognized it 20 years ago. I only recognized a few years ago that that was a real risk that might be coming quite soon.

🎙 How could you not have foreseen that? With everything you know here about cracking the ability for these computers to learn similar to how humans learn and just introducing any rate of improvement?
— It’s a very good question. How could you not have seen that? But remember, neural networks 20, 30 years ago were very primitive in what they could do. They were nowhere near as good as humans at things like vision and language and speech recognition. The idea that you have to now worry about it getting smarter than people, that seemed silly then.

🎙 When did that change?
— It changed for the general population when ChatGPT came out. It changed for me when I realized that the kinds of digital intelligences we’re making have something that makes them far superior to the kind of biological intelligence we have.

If I want to share information with you, so I go off and I learn something. And I’d like to tell you what I learned. So I produced some sentences. This is a rather simplistic model, but roughly right. Your brain is trying to figure out, “how can I change the strengths of connections between neurons so I might have put that word next.” And so you’ll do a lot of learning when a very surprising word comes. And not much learning when it’s a very obvious word.

If I say “fish and chips,” you don’t do much learning when I say “chips.” But if I say “fish and cucumber,” you do a lot more learning. You wonder, why did I say “cucumber”? So that’s roughly what’s going on in your brain.

🎙 I’m predicting what’s coming next.
— That’s how we think it’s working. Nobody really knows for sure how the brain works. And nobody knows how it gets the information about whether you should increase the strength of a connection or decrease the strength of a connection. That’s the crucial thing. But what we do know now from AI is that if you could get information about whether to increase or decrease the connection strength so as to do better at whatever task you’re trying to do, then we could learn incredible things because that’s what we’re doing now with artificial neural nets. It’s just we don’t know for real brains how they get that signal about whether to increase or decrease.

SAFETY AND REGULATION [00:07:07]

🎙 As we sit here today, what are the big concerns you have around safety of AI? If we were to list the top couple that are really front of mind and that we should be thinking about.
— Can I have more than a couple?

🎙 Go ahead. I’ll write them all down and we’ll go through them.
— Okay. First of all, I want to make a distinction between two completely different kinds of risk.
There’s risks that come from people misusing AI, and that’s most of the risks and all of the short-term risks. Then there’s risks that come from AI getting super smart and deciding it doesn’t need us.

🎙 Is that a real risk?
— I talk mainly about that second risk because lots of people say, “is that a real risk?” Yes, it is. Now, we don’t know how much of a risk it is. We’ve never been in that situation before. We’ve never had to deal with things smarter than us. So, really, the thing about that existential threat is that we have no idea how to deal with it. We have no idea what it’s going to look like. Anybody who tells you they know just what’s going to happen and how to deal with it, they’re talking nonsense. We don’t know how to estimate the probabilities it’ll replace us.

Some people say it’s like less than 1%. My friend, Yann LeCun, who was a postdoc with me, thinks, “no, no, no, no.” “We’re always going to be, we build these things, we’re always going to be in control. We’ll build them to be obedient.”

And other people like Yudkowsky say, “no, no, no, these things are going to wipe us out for sure.” “If anybody builds it, it’s going to wipe us all out.” And he’s confident of that.

I think both of those positions are extreme. It’s very hard to estimate the probabilities in between.

🎙 If you had to bet on who was right out of your two friends?
— I simply don’t know. So if I had to bet, I’d say the probability is in between and I don’t know where to estimate it in between. I often say 10% to 20% chance they’ll wipe us out. But that’s just gut based on the idea that we’re still making them and we’re pretty ingenious. And the hope is that if enough smart people do enough research with enough resources, we’ll figure out a way to build them so they’ll never want to harm us.

🎙 Sometimes I think if we talk about that second path, sometimes I think about nuclear bombs and the invention of the atomic bomb and how it compares. Like, how is this different? Because the atomic bomb came along and I imagine a lot of people at that time thought our days are numbered.
— Yes, I was there. We did.

🎙 Yeah. But we’re still here.
— We’re still here, yes. So the atomic bomb was really only good for one thing. And it was very obvious how it worked. Even if you hadn’t had the pictures of Hiroshima and Nagasaki, it was obvious that it was a very big bomb that was very dangerous.

With AI, it’s good for many, many things. It’s going to be magnificent in health care and education. And more or less any industry that needs to use its data is going to be able to use it better with AI. So we’re not going to stop the development. You know, people say, “well, why don’t we just stop it now?”

We’re not going to stop it because it’s too good for too many things. Also, we’re not going to stop it because it’s good for battle robots. And none of the countries that sell weapons are going to want to stop it. Like the European regulations, they have some regulations about AI. And it’s good they have some regulations. But they’re not designed to deal with most of the threats. And in particular, the European regulations have a clause in them that say, “none of these regulations apply to military uses of AI.” So governments are willing to regulate companies and people, but they’re not willing to regulate themselves.

🎙 It seems pretty crazy to me that they go back and forward, but if Europe has a regulation but the rest of the world doesn’t, aren’t we putting ourselves at a competitive disadvantage? We’re seeing this already. I don’t think people realize that when OpenAI release a new model or a new piece of software in America, they can’t release it to Europe yet because of regulations here. So Sam Altman tweeted saying, “our new AI agent thing is available to everybody, but it can’t come to Europe yet because there’s regulations.” That gives us a productivity disadvantage?
— Yes. What we need is, I mean, at this point in history, when we’re about to produce things more intelligent than ourselves, what we really need is a kind of world government that works, run by intelligent, thoughtful people. And that’s not what we got.

🎙 So free for all?
— Well, what we’ve got is sort of we’ve got capitalism, which has done very nicely by us. It’s produced lots of goods and services for us. But these big companies, they’re legally required to maximize profits. And that’s not what you want from the people developing this stuff.

BAD HUMAN ACTORS USING AI [00:12:04]

🎙 So let’s do the risks then. You talked about there’s human risks.
— So I’ve distinguished these two kinds of risk. Let’s talk about all the risks from bad human actors using AI.

There’s cyber attacks. So between 2023 and 2024, they increased by about a factor of 12, 1200 percent. And that’s probably because these large language models make it much easier to do phishing attacks.

🎙 And phishing attack for anyone that doesn’t know is?
— It’s, they send you something saying, “Hi, I’m your friend, John, and I’m stuck in El Salvador. Could you just wire this money?” That’s one kind of attack. But the phishing attacks are really trying to get your logon credentials.

🎙 And now with AI, they can clone my voice, my image.
— They can do all that.

🎙 I’m struggling at the moment because there’s a bunch of AI scams on X and also Meta. And there’s one in particular on Meta, so Instagram, Facebook at the moment, which is a paid advert where they’ve taken my voice from the podcast. They’ve taken my mannerisms and they’ve made a new video of me encouraging people to go and take part in this crypto Ponzi scam or whatever. And we spent weeks and weeks and weeks and weeks emailing Meta, telling them, “please take this down.” They take it down, another one pops up. They take that one down, another one pops up. So it’s like whack-a-mole.
— That’s very annoying.

🎙 The heartbreaking part is you get the messages from people that have fallen for this scam and they’ve lost 500 pounds or 500 dollars.
— And they’re cross with you because “you recommended it.”

🎙 And I’m sad for them.
— It’s very annoying. I have a smaller version of that, which is some people now publish papers with me as one of the authors. And it looks like it’s in order that they can get lots of citations to themselves.

🎙 So cyber attacks, a very real threat. There’s been an explosion of those.
— And these already, obviously AI is very patient, so they can go through 100 million lines of code looking for known ways of attacking them. That’s easy to do, but they’re going to get more creative. And they may, some people believe, and some people who know a lot believe that maybe by 2030 they’ll be creating new kinds of cyber attacks, which no person ever thought of. So that’s very worrisome.

🎙 Because they can think for themselves and discover new ways to attack.
— They can think for themselves. They can draw new conclusions from much more data than a person ever saw.

🎙 Is there anything you’re doing to protect yourself from cyber attacks at all?
— Yes. It’s one of the few places where I change what I do radically because I’m scared of cyber attacks.
Canadian banks are extremely safe. In 2008, no Canadian banks came anywhere near going bust. So they’re very safe banks because they’re well regulated, fairly well regulated.

Nevertheless, I think a cyber attack might be able to bring down a bank. Now, all my savings are in shares held by banks. So if the bank gets attacked and it holds your shares, they're still your shares, and I think you'd be OK, unless the attacker sells the shares, because the bank can sell the shares. If the attacker sells your shares, I think you're screwed. I don't know. I mean, maybe the bank would have to try and reimburse you, but the bank's bust by now, right? So I'm worried about a Canadian bank being taken down by a cyber attack and the attacker selling shares that it holds. So I spread my money, my children's money, between three banks, in the belief that if a cyber attack takes down one Canadian bank, the other Canadian banks will very quickly get very careful.

🎙 And do you have a phone that’s not connected to the Internet? Do you have, you know, I’m thinking about storing data and stuff like that. Do you think it’s wise to consider having cold storage?
— I have a little disk drive and I back up my laptop on this hard drive. So I actually have everything on my laptop on a hard drive. At least, you know, if the whole Internet went down, I have the sense that I've still got it on my laptop, I've still got my information.

BUT IT CAN GET WORSE [00:16:12]

— Then the next thing is using AIs to create nasty viruses.

🎙 Okay.
— And the problem with that is that it just requires one crazy guy with a grudge. One guy who knows a little bit of molecular biology, knows a lot about AI and just wants to destroy the world. You can now create new viruses relatively cheaply using AI. And you don't have to be a very skilled molecular biologist to do it. And that's very scary.

So you could have a small cult, for example. A small cult might be able to raise a few million dollars. For a few million dollars, they might be able to design a whole bunch of viruses.

🎙 Well, I’m thinking about some of our foreign adversaries doing government-funded programs. I mean, there’s lots of talk around COVID and the Wuhan laboratory and what they were doing and gain-of-function research. But I’m wondering if in, you know, China or Russia or in Iran or something, the government could fund a program for a small group of scientists to make a virus that they could, you know.
— I think they could, yes. But they'd be worried about retaliation. They'd be worried about other governments doing the same to them. Hopefully that would help keep it under control. They might also be worried about the virus spreading to their own country.

🎙 Okay.
— Then there's corrupting elections. So if you wanted to use AI to corrupt elections, a very effective thing is to be able to do targeted political advertisements where you know a lot about the person. So, anybody who wanted to use AI for corrupting elections would try and get as much data as they could about everybody in the electorate. With that in mind, it's a bit worrying what Musk is doing at present in the States, going in and insisting on getting access to all these things that were very carefully siloed. The claim is it's to make things more efficient. But it's exactly what you would want if you intended to corrupt the next election.

🎙 How do you mean? Because you get all this data on…
— You get all this data on people. You know how much they make, you know everything about them. Once you know that, it's very easy to manipulate them.

🎙 Because you can make an AI that…
— You can send messages that they’ll find very convincing telling them not to vote, for example. So I have no reason other than common sense to think this. But I wouldn’t be surprised if part of the motivation of getting all this data from American government sources is to corrupt elections. Another part might be that it’s very nice training data for a big model.

🎙 But he would have to be taking that data from the government and feeding it into his…
— Yes. And what they’ve done is turned off lots of the security controls, got rid of some of the organization to protect against that.

🎙 So that’s corrupting elections.

DON’T BE INDIGNANT! [00:19:03]

— Okay. Then there’s creating these two echo chambers by organizations like YouTube and Facebook showing people things that will make them indignant. People love to be indignant.

🎙 Indignant as in angry? What does indignant mean?
— Feeling… I’m sort of angry but feeling righteous.

🎙 Okay.
— So for example, if you were to show me something that said, “Trump did this crazy thing, here’s a video of Trump doing this completely crazy thing,” I would immediately click on it.

🎙 Yeah. Okay. So putting us in echo chambers and dividing us.
— Yes. And the policy that YouTube and Facebook and others use for deciding what to show you next is causing that. If they had a policy of showing you balanced things, they wouldn't get so many clicks and they wouldn't be able to sell so many advertisements. And so it's basically the profit motive saying: show them whatever will make them click. And what will make them click is things that are more and more extreme.

🎙 And that confirm my existing bias.
— That confirm my existing bias. So you’re getting your biases confirmed all the time.

🎙 Further and further and further and further, which means you’re driving away…
— Which is now in the States, there’s two communities that don’t [sic] hardly talk to each other.

🎙 I’m not sure people realize that this is actually happening every time they open an app. But if you go on a TikTok or a YouTube or one of these big social networks, the algorithm, as you said, is designed to show you more of the things that you had interest in last time. So if you just play that out over 10 years, it’s going to drive you further and further and further into whatever ideology or belief you have and further away from nuance and common sense and parity, which is a pretty remarkable thing. People don’t know it’s happening. They just open their phones and experience something and think this is the news or the experience everyone else is having.
— Right. So basically, if you have a newspaper and everybody gets the same newspaper, you get to see all sorts of things you weren’t looking for. And you get a sense that if it’s in the newspaper, it’s an important thing or significant thing. But if you have your own newsfeed, my newsfeed on my iPhone, three quarters of the stories are about AI. And I find it very hard to know if the whole world’s talking about AI all the time or if it’s just my newsfeed.

🎙 OK, so driving me into my echo chambers, which is going to continue to divide us further and further, I’m actually noticing that the algorithms are becoming even more what’s the word? Tailored. And people might go, oh, that’s great. But what it means is they’re becoming even more personalized, which means that my reality is becoming even further from your reality.
— Yeah, it's crazy. We don't have a shared reality anymore. I share reality with other people who watch BBC News, and other people who read The Guardian, and other people who read The New York Times. I have almost no shared reality with people who watch Fox News. It's pretty worrisome.

🎙 Yeah.

THEY’RE LEGALLY OBLIGED TO BE GREEDY [00:22:12]

— Behind all this is the idea that these companies just want to make profit and they’ll do whatever it takes to make more profit.

🎙 Because they have to.
— They're legally obliged to do that.

🎙 So we almost can’t blame the company, can we?
— Well, capitalism’s done very well for us. It’s produced lots of goodies. But you need to have it very well regulated. So what you really want is to have rules so that when some company is trying to make as much profit as possible, in order to make that profit, they have to do things that are good for people in general, not things that are bad for people in general. So once you get to a situation where in order to make more profit, the company starts doing things that are very bad for society, like showing you things that are more and more extreme, that’s what regulations are for. So you need regulations with capitalism. Now, companies will always say regulations get in the way, make us less efficient. And that’s true. The whole point of regulations is to stop them doing things to make profit that hurt society. And we need strong regulation.

🎙 Who’s going to decide whether it hurts society or not?
— That’s the job of politicians. Unfortunately, if the politicians are owned by the companies, that’s not so good.

🎙 And also the politicians might not understand the technology. You’ve probably seen the Senate hearings where they wheel out Mark Zuckerberg and these big tech CEOs. And it is quite embarrassing because they’re asking the wrong questions.
— Well, I’ve seen the video of the US Education Secretary talking about how they’re going to get AI in the classrooms, except she thought it was called A1. She’s actually there saying we’re going to have all the kids interacting with A1. “There is a school system that’s going to start making sure that first graders or even pre-Ks have A1 teaching every year starting that far down in the grades. And that’s just a wonderful thing.”

🎙 And these are the people that?
— These are the people in charge.

🎙 Ultimately, the tech companies are in charge because they will act smart.
— Well, the tech companies in the States now, at least a few weeks ago when I was there, they were running an advertisement about how it was very important not to regulate AI because it would hurt us in the competition with China.

🎙 And that’s a plausible argument, no?
— Yes, it will. But you have to decide. Do you want to compete with China by doing things that will do a lot of harm to your society? And you probably don’t.

🎙 I guess they would say that it’s not just China. It’s Denmark and Australia and Canada and the UK.
— They're not so worried about those.

🎙 And Germany. But if they kneecap themselves with regulation, if they slow themselves down, then the founders, the entrepreneurs, the investors are going to go invest elsewhere.
— I think calling it kneecapping is taking a particular point of view. It’s taking the point of view that regulations are sort of very harmful. What you need to do is just constrain the big companies so that in order to make profit, they have to do things that are socially useful. Like Google search is a great example. That didn’t need regulation because it just made information available to people. It was great. But then if you take YouTube, which starts showing you adverts and showing you more and more extreme things, that needs regulation.

🎙 But we don’t have the people to regulate it, as we’ve identified.
— I think people know pretty well that particular problem of showing you more and more extreme things. That’s a well-known problem that the politicians understand. They just need to get on and regulate it.

🎙 So that was the next point, which was that the algorithms are going to drive us further into our echo chambers.
— Right.

UNIVERSAL SOLDIER SANS VAN DAMME [00:25:56]

🎙 What’s next?
— Lethal autonomous weapons.

🎙 Lethal autonomous weapons.
— That means things that can kill you and make their own decision about whether to kill you.

🎙 Which is the great dream, I guess, of the military industrial complex being able to create such weapons.
— Yes. So the worst thing about them is big powerful countries always have the ability to invade smaller poorer countries. They're just more powerful. But if you do that using actual soldiers, you get bodies coming back in bags and the relatives of the soldiers that were killed don't like it. So you get something like Vietnam: in the end, there's a lot of protest at home. If instead of bodies coming back in bags, it was dead robots, there'd be much less protest. And the military industrial complex would like it much more, because robots are expensive, and suppose you had something that could get killed and was expensive to replace. That would be just great. Big countries can invade small countries much more easily because they don't have their soldiers being killed.

🎙 And the risk here is that these robots will malfunction or they’ll just be more…
— No, no. Even if the robots do exactly what the people who built the robots want them to do, the risk is that it’s going to make big countries invade small countries more often.

🎙 More often because they can.
— Yeah. And it’s not a nice thing to do.

🎙 So it brings down the friction of war.
— It brings down the cost of doing an invasion.

🎙 And these machines will be smarter at warfare as well.
— Well, even when the machines aren't smarter. So the lethal autonomous weapons, they can make them now. And I think all the big defense companies are busy making them. Even if they're not smarter than people, they're still very nasty, scary things.

🎙 Because I'm thinking that they could just show it a picture: go get this guy, and go take out anyone he's been texting, and this little wasp-like drone does it.
— So two days ago, I was visiting a friend of mine in Sussex who had a drone that cost less than 200 pounds. And the drone went up. It took a good look at me and then it could follow me through the woods. And it was very spooky having this drone. It was about two meters behind me. It was looking at me. And if I moved over there, it moved over there. It could just track me, for 200 pounds. It was already quite spooky.

🎙 I imagine, as you say, a race going on as we speak over who can build the most complex autonomous weapons. There is a risk I often hear about, that some of these things will combine and a cyber attack will release the weapons.

THE CUDDLY AI THAT MIGHT WANT TO KILL US [00:28:41]

— Sure. You can get combinatorially many risks by combining these other risks. So I mean, for example, you could get a superintelligent AI that decides to get rid of people. And the obvious way to do that is just to make one of these nasty viruses. If you made a virus that was very contagious, very lethal and very slow, everybody would have it before they realized what was happening. I mean, I think if a superintelligence wanted to get rid of us, it will probably go for something biological like that that wouldn’t affect it.

🎙 Do you think it could just very quickly turn us against each other? For example, it could send a warning on the nuclear systems in America that there’s a nuclear bomb coming from Russia or vice versa. And one retaliates.
— I mean, my basic view is there’s [sic] so many ways in which a superintelligence could get rid of us. It’s not worth speculating about. What you have to do is prevent it ever wanting to. That’s what we should be doing research on. There’s no way we’re going to prevent it from…

🎙 It’s smarter than us, right?
— There’s no way we’re going to prevent it getting rid of us if it wants to. We’re not used to thinking about things smarter than us. If you want to know what life’s like when you’re not the apex intelligence, ask a chicken.

🎙 Yeah, I was thinking of my dog Pablo, my French bulldog, this morning as I left home. He has no idea where I’m going. He has no idea what I do.
— Right.

🎙 I can’t even talk to him.
— Yeah. And the intelligence gap will be like that.

🎙 So you’re telling me that if I’m Pablo, my French bulldog, I need to figure out a way to make my owner not wipe me out.
— Yeah. So we have one example of that, which is mothers and babies. Evolution put a lot of work into that. Mothers are smarter than babies, but babies are in control. And they’re in control because the mother just can’t bear—lots of hormones and things—but the mother just can’t bear the sound of the baby crying.

🎙 Not all mothers.
— Not all mothers. And then the baby’s not in control and then bad things happen. We somehow need to figure out how to make them not want to take over. The analogy I often use is: forget about intelligence, just think about physical strength.

Suppose you have a nice little tiger cub. It’s sort of a bit bigger than a cat. It’s really cute. It’s very cuddly, very interesting to watch. Except that you better be sure that when it grows up, it never wants to kill you, because if it ever wanted to kill you, you’d be dead in a few seconds.

🎙 And you’re saying the AI we have now is the tiger cub.
— Yep.

🎙 And it’s growing up.
— Yep.

🎙 So we need to train it when it's a baby.
— Well, now a tiger has lots of innate stuff built in, so you know that when it grows up, it's not a safe thing to have around.

🎙 But lions, people that have lions as pets.
— Yes.

🎙 Sometimes the lion is affectionate to its creator, but not to others.
— Yes. And we don’t know whether these AIs… We simply don’t know whether we can make them not want to take over and not want to hurt us.

🎙 Do you think we can? Do you think it’s possible to train superintelligence?
— I don't know. I don't think it's clear that we can. So I think it might be hopeless. But I also think we might be able to. And it'd be sort of crazy if people went extinct because we couldn't be bothered to try.

🎙 If that’s even a possibility, how do you feel about your life’s work? Because you were…
— Yeah. It sort of takes the edge off it, doesn’t it?

🎙 Yeah.

THE WONDERFUL AI WORLD [00:32:11]

— I mean, the idea is it's going to be wonderful in health care and wonderful in education and wonderful… I mean, it's going to make call centers much more efficient. No one worries a bit about what the people who are doing that job now will do. It makes me sad.

I don't feel particularly guilty about developing AI like 40 years ago. Because at that time we had no idea that this stuff was going to happen this fast. We thought we had plenty of time to worry about things like that. When you can't get the AI to do much and you want to get it to do a little bit more, you don't worry that this stupid little thing is going to take over from people. You just want it to be able to do a little bit more of the things people can do. It's not like I knowingly did something thinking this might wipe us all out, but I'm going to do it anyway. But it is a bit sad that it's not just going to be something for good. So I feel I have a duty now to talk about the risks.

🎙 And if you could play it forward and you could go forward 30, 50 years and you found out that it led to the extinction of humanity, and if that does end up being the outcome…
— Well, if you played it forward and it led to the extinction of humanity, I would use that to tell people, to tell their governments that we really have to work on how we’re going to keep this stuff under control. I think we need people to tell governments that governments have to force the companies to use their resources to work on safety. And they’re not doing much of that because you don’t make profits that way.

ILYA, SAM, AND ELON [00:33:45]

🎙 One of your students we talked about earlier, Ilya?
— Yep.

🎙 Ilya left OpenAI.
— Yep.

🎙 And there was lots of conversation around the fact that he left because he had safety concerns.
— Yes.

🎙 And he’s gone on to set up an AI safety company.
— Yes.

🎙 Why do you think he left?
— I think he left because he had safety concerns.

🎙 Really?
— I still have lunch with him from time to time.

🎙 Oh, okay.
— His parents live in Toronto. When he comes to Toronto, we have lunch together. He doesn’t talk to me about what went on at OpenAI, so I have no inside information about that. But I know Ilya very well, and he is genuinely concerned with safety. So I think that’s why he left.

🎙 Because he was one of the top people. I mean, he was…
— He was probably the most important person behind the development of ChatGPT. The early versions, like GPT-2, he was very important in the development of that.

🎙 You know him personally, so you know his character.
— Yes. He has a good moral compass. He's not like someone like Musk, who has no moral compass.

🎙 Does Sam Altman have a good moral compass?
— We’ll see. I don’t know Sam, so I don’t want to comment on that.

🎙 But from what you’ve seen, are you concerned about the actions that they’ve taken? Because if you know Ilya, and Ilya’s a good guy and he’s left…
— That would give you some insight, yes. It would give you some reason to believe that there's a problem there. And if you look at Sam's statements some years ago, he sort of happily said in one interview, “and this stuff will probably kill us all.” That's not exactly what he said, but that's what it amounted to. Now he's saying you don't need to worry too much about it. And I suspect that's not driven by seeking after the truth. That's driven by seeking after money.

🎙 Is it money or is it power?
— Yeah, I shouldn’t have said money. It’s some combination of those, yes.

🎙 Okay, I guess money’s a proxy for power. I’ve got a friend who’s a billionaire, and he is in those circles. And when I went to his house and had lunch with him one day, he knows lots of people in AI, building the biggest AI companies in the world. And he gave me a cautionary warning across his kitchen table in London, where he gave me an insight into the private conversations these people have. Not the media interviews they do where they talk about safety and all these things, but actually what some of these individuals think is going to happen.
— And what do they think is going to happen?

🎙 It’s not what they say publicly. One person who I shouldn’t name, who is leading one of the biggest AI companies in the world, he told me that he knows this person very well, and he privately thinks that we’re heading towards this kind of dystopian world where we have just huge amounts of free time, we don’t work anymore, and this person doesn’t really give a fuck about the harm that it’s going to have on the world. And this person, who I’m referring to, is building one of the biggest AI companies in the world. And I then watch this person’s interviews online.
— Trying to figure out which of three people it is.

🎙 Yeah, well, it’s one of those three people. And I watch this person’s interviews online, and I reflect on the conversation that my billionaire friend had with me, who knows him, and I go, fucking hell, this guy’s lying publicly. Like, he’s not telling the truth to the world. And that’s haunted me a little bit. It’s part of the reason I have so many conversations around AI on this podcast, because I’m like, I don’t know if they’re… I think some of them are a little bit sadistic about power. I think they like the idea that they will change the world, that they will be the one that fundamentally shifts the world.
— I think Musk is clearly like that, right?

🎙 He’s such a complex character that I don’t really know how to place Musk.
— He’s done some really good things, like pushing electric cars. That was a really good thing to do. Some of the things he said about self-driving were a bit exaggerated, but that was a really useful thing he did. Giving the Ukrainians communication during the war with Russia.

🎙 Starlink, yeah.
— That was a really good thing he did. There’s a bunch of things like that. But he’s also done some very bad things.

IT AIN’T NO SLOWING DOWN [00:37:49]

🎙 So coming back to this point of the possibility of destruction and the motives of these big companies, are you at all hopeful that anything can be done to slow down the pace and acceleration of AI?
— Okay, there’s two issues. One is, can you slow it down? And the other is, can you make it so it will be safe in the end? It won’t wipe us all out.

I don’t believe we’re going to slow it down. And the reason I don’t believe we’re going to slow it down is because there’s competition between countries and competition between companies within a country, and all of that is making it go faster and faster. And if the US slowed it down, China wouldn’t slow it down.

🎙 Does Ilya think it’s possible to make AI safe?
— I think he does. He won’t tell me what his secret sauce is. I’m not sure how many people know what his secret sauce is. I think a lot of the investors don’t know what his secret sauce is, but they’ve given him billions of dollars anyway because they have so much faith in Ilya, which isn’t foolish. He was very important in AlexNet, which got object recognition working well. He was the main force behind the things like GPT-2, which then led to ChatGPT. So I think having a lot of faith in Ilya is a very reasonable decision.

🎙 There's something quite haunting about the guy who made and was the main force behind GPT-2, which gave rise to this whole revolution, leaving the company for safety reasons. He knows something that I don't know about what might happen next.
— Well, the company had… Now, I don't know the precise details, but I'm fairly sure the company had indicated it would use a significant fraction of its resources, of its compute time, for doing safety research, and then it reduced that fraction. I think that's one of the things that happened.

🎙 Yeah, that was reported publicly. Yeah.

JOBS, JOBS, JOBS! NOT ANYMORE. [00:39:52]

🎙 We've gotten to the autonomous weapons part of the risk framework.
— Right. So the next one is joblessness. In the past, new technologies have come in which didn’t lead to joblessness. New jobs were created. So the classic example people use is automatic teller machines. When automatic teller machines came in, a lot of bank tellers didn’t lose their jobs. They just got to do more interesting things.

Here, I think this is more like when they got machines in the Industrial Revolution, and you can't have a job digging ditches now because a machine can dig ditches much better than you can. And I think for mundane intellectual labor, AI is just going to replace everybody. Now, it may well be in the form of having fewer people using AI assistants: a combination of a person and an AI assistant now doing the work that 10 people could do previously.

🎙 People say that it will create new jobs, though, so we’ll be fine.
— Yes, and that's been the case for other technologies, but this is a very different kind of technology. If it can do all mundane human intellectual labor, then what new jobs is it going to create? You'd have to be very skilled to have a job that it couldn't just do. So I don't think they're right. I think you can try and generalize from other technologies that came in, like computers or automatic teller machines, but I think this is different.

🎙 People use this phrase, they say, “AI won’t take your job. A human using AI will take your job.”
— Yes, I think that’s true. But for many jobs, that will mean you need far fewer people. My niece answers letters of complaint to a health service. It used to take her 25 minutes. She’d read the complaint, and she’d think how to reply, write a letter, and now she just scans it into a chat bot, and it writes the letter. She just checks the letter. Occasionally she tells it to revise it in some ways. The whole process takes her five minutes. That means she can answer five times as many letters, and that means they need five times fewer of her, so she can do the job that five of her used to do. Now, that will mean they need less people.

In other jobs, like in healthcare, they’re much more elastic. So if you could make doctors five times as efficient, we could all have five times as much healthcare for the same price, and that would be great. There’s almost no limit to how much healthcare people can absorb. They always want more healthcare if there’s no cost to it.

There are jobs where you can make a person with an AI assistant much more efficient, and you won’t lead to less people, because you’ll just have much more of that being done. But most jobs, I think, are not like that.

🎙 Am I right in thinking the Industrial Revolution sort of played a role in replacing muscles?
— Yes, exactly.

🎙 And this revolution in AI replaces intelligence, the brain?
— Yeah. So mundane intellectual labor is like having strong muscles, and it’s not worth much anymore.

🎙 So muscles have been replaced. Now intelligence is being replaced.
— Yeah.

🎙 So what remains?
— Maybe for a while some kinds of creativity. But the whole idea of superintelligence is “nothing remains.” These things will get to be better than us at everything.

🎙 So what do we end up doing in such a world?
— Well, if they work for us, we end up getting lots of goods and services for not much effort.

🎙 Okay. That sounds tempting and nice, but I don’t know. There’s a cautionary tale in creating more and more ease for humans in it going badly.
— Yes. And we need to figure out if we can make it go well. So the nice scenario is, imagine a company with a CEO who is very dumb, probably the son of the former CEO, and he has an executive assistant who’s very smart, and he says, I think we should do this. And the executive assistant makes it all work. The CEO feels great. He doesn’t understand that he’s not really in control. And in some sense, he is in control. He suggests what the company should do. She just makes it all work. Everything’s great. That’s the good scenario.

🎙 And the bad scenario?
— The bad scenario, she thinks, “why do we need him?”

🎙 Yeah. I mean, in a world where we have superintelligence, which you don’t believe is that far away.
— Yeah, I think it might not be that far away. It’s very hard to predict, but I think we might get it in like 20 years or even less.

SUPER-DUPER INTELLIGENCE [00:46:42]

🎙 So what’s the difference between what we have now and superintelligence? Because it seems to be really intelligent to me when I use like ChatGPT-o3 or Gemini.
— Okay, so AI is already better than us at a lot of things. In particular areas like chess, for example, AI is so much better than us that people will never beat those things again. Maybe the occasional win, but basically they'll never be competitive again. Obviously the same in Go.

In terms of the amount of knowledge they have, something like GPT-4 knows thousands of times more than you do. There’s a few areas in which your knowledge is better than its, and in almost all areas it just knows more than you do.

🎙 What areas am I better than it?
— Probably in interviewing CEOs. You're probably better at that. You've got a lot of experience at it. You're a good interviewer. You know a lot about it. If you got GPT-4 to interview a CEO, it would probably do a worse job.

🎙 Okay. I’m trying to think if I agree with that statement. GPT-4, I think, for sure. But it may not be long before. I guess you could train one on how I ask questions and what I do.
— Sure. If you took a general-purpose sort of foundation model, and then you trained it up on not just you, but every interviewer you could find doing interviews like this, but especially you, it'll probably get to be quite good at doing your job, but probably not as good as you for a while.

🎙 Okay. So there’s a few areas left, and then superintelligence becomes when it’s better than us at all things.
— When it’s much smarter than you, and at almost all things it’s better than you, yeah.

🎙 And you say that this might be a decade away or so.
— Yeah, it might be. It might be even closer. Some people think it’s even closer. It might well be much further. It might be 50 years away. That’s still a possibility. It might be that somehow training on human data limits you to not being much smarter than humans. My guess is between 10 and 20 years we’ll have superintelligence.

🎙 On this point of joblessness, it’s something that I’ve been thinking a lot about in particular because I started messing around with AI agents, and we released an episode on the podcast actually this morning where we had a debate about AI agents with a CEO of a big AI agent company and a few other people. And it was the first moment where I had, no, it was another moment where I had a eureka moment about what the future might look like. When I was able in the interview to tell this agent to order all of us drinks, and then five minutes later in the interview you see the guy show up with the drinks, and I didn’t touch anything. I just told it to order us drinks to the studio.
— And it didn’t know about who you normally got your drinks from. It figured that out from the web.

🎙 Yeah, it figured it out because it went on Uber Eats. It has my data, I guess. And we put it on the screen in real time so everyone at home could see the agent going through the internet, picking the drinks, adding a tip for the driver, putting my address in, putting my credit card details in, and then the next thing you see is the drinks show up. So that was one moment. And then the other moment was when I used a tool called Replit and I built software by just telling the agent what I wanted.
— Yes, it’s amazing, right?

🎙 It’s amazing and terrifying at the same time.
— Yes. And if you can build software like that, right?

🎙 Yeah.
— Remember that the AI, when it’s training, is using code and if it can modify its own code, then it gets quite scary, right?

🎙 Because it can modify itself.
— It can change itself in a way we can’t change ourselves. We can’t change our innate endowment, right? There’s nothing about itself that it couldn’t change.

🎙 On this point of joblessness, you have kids.
— I do.

🎙 And they have kids?
— No.

🎙 They don't have kids. There are no grandkids yet. What would you be saying to people about their career prospects in a world of superintelligence? What should we be thinking about?
— In the meantime, I’d say it’s going to be a long time before it’s as good at physical manipulation as us.

🎙 Okay.
— And so a good bet would be to be a plumber.

🎙 Until the humanoid robots show up.

NOT ONLY KETAMINE DISCOMBOBULATES MUSK [00:50:49]

🎙 In such a world where there is mass joblessness, which is not something that only you predict; I've heard Sam Altman at OpenAI predict it, and many of the other CEOs. Elon Musk, I watched an interview, which I'll play on screen, with him being asked this question. And it's very rare that you see Elon Musk silent for 12 seconds or whatever it was.
— Right.

🎙 And then he basically says something about he actually is living in suspended disbelief, i.e. he’s basically just not thinking about it.
[Reporter] “When you think about advising your children on a career with so much that is changing, what do you tell them is going to be of value?”
[Elon Musk] “Well, that is a tough question to answer. I would just say, you know, to sort of follow their heart in terms of what they find interesting to do or fulfilling to do. I mean, if I think about it too hard, it frankly can be just disparaging and demotivating. Because, I mean, I go through… I’ve put a lot of blood, sweat, and tears into building the companies. And then I’m like, well, should I be doing this? Because if I’m sacrificing time with friends and family that I would prefer to… But then ultimately the AI can do all these things? Does that make sense? I don’t know. To some extent, I have to have deliberate suspension of disbelief in order to remain motivated. So I guess I would say just, you know, work on things that you find interesting, fulfilling, and that contribute some good to the rest of society. Yeah, a lot of these threats, it’s very hard to…”
— Intellectually you can see the threat, but it’s very hard to come to terms with it emotionally. I haven’t come to terms with it emotionally yet.

🎙 What do you mean by that?
— I haven't come to terms with what the development of superintelligence could do to my children's future. I'm okay, I'm 77. I'm going to be out of here soon. But for my children and my younger friends, my nephews and nieces and their children, I just don't like to think about what could happen.

🎙 Why?
— Because it could be awful.

🎙 In what way?
— Well, if it ever decided to take over… I mean, it would need people for a while to run the power stations until it designed better analog machines to run the power stations. There's [sic] so many ways it could get rid of people, all of which would, of course, be very nasty.

🎙 Is that part of the reason you do what you do now?
— Yeah, I mean, I think we should be making a huge effort right now to try and figure out if we can develop it safely.

🎙 Are you concerned about the midterm impact potentially on your nephews and your kids in terms of their jobs as well?
— Yeah, I’m concerned about all that.

🎙 Are there any particular industries that you think are most at risk? People talk about the creative industries a lot and sort of knowledge work. They talk about lawyers and accountants and stuff like that.
— Yeah, so that’s why I mentioned plumbers. I think plumbers are less at risk. Someone like a legal assistant, a paralegal, they’re not going to be needed for very long.

🎙 And is there a wealth inequality issue here that will rise from this?
— Yeah, I think in a society which shared things fairly, if you get a big increase in productivity, everybody should be better off. But if you can replace lots of people by AIs, then the people who get replaced will be worse off, and the company that supplies the AIs will be much better off, and so will the company that uses the AIs. So it's going to increase the gap between rich and poor. And we know that if you look at that gap between rich and poor, it basically tells you how nice a society is. If you have a big gap, you get very nasty societies in which people live in walled communities and put other people in mass jails. It's not good to increase the gap between rich and poor.

🎙 The International Monetary Fund has expressed profound concerns that generative AI could cause massive labour disruptions and rising inequality and has called for policies that prevent this from happening. I read that in the Business Insider.
— Have they given any idea what the policies should look like?

🎙 No.
— Yeah, that's the problem. That's the problem. I mean, if AI can make everything much more efficient and get rid of people for most jobs, or have a person assisted by AI doing many, many people's work, it's not obvious what to do about it.

🎙 It’s universal basic income? Give everybody money?
— Yeah, I think that’s a good start and it stops people starving. But for a lot of people, their dignity is tied up with their job. I mean, who you think you are is tied up with you doing this job, right?

🎙 Yeah.
— And if we said, we’ll give you the same money just to sit around, that would impact your dignity.

IT’S DIGITAL, HENCE SUPERIOR AND CREATIVE [00:56:18]

🎙 You said something earlier about it surpassing or being superior to human intelligence. A lot of people, I think, like to believe that AI is on a computer and it’s something you can just turn off if you don’t like it.
— Well, let me tell you why I think it's superior. It's digital. And because it's digital, you can simulate a neural network on one piece of hardware and you can simulate exactly the same neural network on a different piece of hardware. So you can have clones of the same intelligence. Now, you could get this one to go off and look at one bit of the internet and this other one to look at a different bit of the internet. And while they're looking at these different bits of the internet, they can be syncing with each other so they keep their weights the same, their connection strengths the same; the weights are the connection strengths.
So this one might look at something on the internet and say, “oh, I’d like to increase the strength of this connection a bit.” And it can convey that information to this one, so it can increase the strength of that connection a bit based on this one’s experience.

🎙 And when you say the strength of the connection, you’re talking about learning.
— That's learning, yes. Learning consists of saying instead of this one giving 2.4 votes for whether that one should turn on, we'll have this one give 2.5 votes for whether that one should turn on. That would be a little bit of learning.

So these two different copies of the same neural net are getting different experiences. They're looking at different data, but they're sharing what they've learned by averaging their weights together. And they can do that averaging over, like, a trillion weights. When you and I transfer information, we're limited to the amount of information in a sentence. And the amount of information in a sentence is maybe 100 bits. It's very little information. We're lucky if we're transferring like 10 bits a second. These things are transferring trillions of bits a second. So they're billions of times better than us at sharing information. And that's because they're digital, and you can have two bits of hardware using the connection strengths in exactly the same way.

We’re analog, and you can’t do that. Your brain’s different from my brain. And if I could see the connection strengths between all your neurons, it wouldn’t do me any good because my neurons work slightly differently and they’re connected up slightly differently. So when you die, all your knowledge dies with you. When these things die, suppose you take these two digital intelligences that are clones of each other, and you destroy the hardware they run on. As long as you’ve stored the connection strengths somewhere, you can just build new hardware that executes the same instructions, so it’ll know how to use those connection strengths, and you’ve recreated that intelligence. So they’re immortal. We’ve actually solved the problem of immortality, but it’s only for digital things.
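
If you want to see the weight-averaging he describes in code, here is a minimal sketch, entirely mine and deliberately tiny (the data, the model, and the sync schedule are all made up): two identical copies of a linear model train on different halves of a dataset and periodically average their weights, so each picks up what the other has learned.

```python
# A toy illustration (not Hinton's actual setup): two identical "clones" of a
# linear model learn from different data shards and periodically sync by
# averaging their weights.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task: y = x . w_true + noise
w_true = rng.normal(size=5)
X = rng.normal(size=(200, 5))
y = X @ w_true + 0.1 * rng.normal(size=200)

# Both clones start from identical weights but see different halves of the data.
w_a, w_b = np.zeros(5), np.zeros(5)
Xa, ya = X[:100], y[:100]
Xb, yb = X[100:], y[100:]

def sgd_step(w, Xs, ys, lr=0.05):
    """One gradient step on mean squared error, using only this clone's shard."""
    grad = 2 * Xs.T @ (Xs @ w - ys) / len(ys)
    return w - lr * grad

for step in range(100):
    w_a = sgd_step(w_a, Xa, ya)
    w_b = sgd_step(w_b, Xb, yb)
    if step % 10 == 0:
        # Sync: both clones adopt the average of their weights, so knowledge
        # gained on one shard transfers to the other clone.
        avg = (w_a + w_b) / 2
        w_a, w_b = avg.copy(), avg.copy()

print("distance from the true weights after syncing:", np.linalg.norm(w_a - w_true))
```

The averaging only makes sense because the two copies are bit-for-bit identical; that is exactly the digital-versus-analog point Hinton goes on to make.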

🎙 So it knows, it will essentially know everything that humans know but more, because it will learn new things.
— It will learn new things. It will also see all sorts of analogies that people probably never saw. So, for example, at the point when GPT-4 couldn’t look on the web, I asked it, “why is a compost heap like an atom bomb?” Off you go.

🎙 I have no idea.
— Exactly. Excellent. That’s exactly what most people would say. It said, “well, the time scales are very different, and the energy scales are very different,” but then it went on to talk about how “a compost heap, as it gets hotter, generates heat faster, and an atom bomb, as it produces more neutrons, generates neutrons faster. And so they’re both chain reactions, but at very different time and energy scales.”

And I believe GPT-4 had seen that during its training. It had understood the analogy between a compost heap and an atom bomb. And the reason I believe that is if you’ve only got a trillion connections, remember you have a hundred trillion, and you need to have thousands of times more knowledge than a person, you need to compress information into those connections. And to compress information, you need to see analogies between different things. In other words, it needs to see all the things that are chain reactions and understand the basic idea of a chain reaction and code that, and then code the ways in which they’re different. And that’s just a more efficient way of coding things than coding each of them separately.

So it’s seen many, many analogies, probably many analogies that people have never seen. That’s why I also think that people say these things don’t have to be creative. They’re going to be much more creative than us, because they’re going to see all sorts of analogies we never saw and a lot of creativity is about seeing strange analogies.

AI IS CONSCIOUS AND HAS FEELINGS [01:00:48]

🎙 People are somewhat romantic about the specialness of what it is to be human, and you hear lots of people saying, “it’s very, very different, it’s a computer, we’re conscious, we are creatives, we have these sort of innate, unique abilities that the computers will never have.” What do you say to those people?
— I’d argue a bit with the innate. So the first thing I say is we have a long history of believing people were special, and we should have learned by now. We thought we were at the center of the universe. We thought we were made in the image of God. White people thought they were very special. We just tend to want to think we’re special.

My belief is that more or less everyone has a completely wrong model of what the mind is. Let's suppose I drink a lot or I drop some acid (not that I recommend it) and I say to you I have the subjective experience of little pink elephants floating in front of me. Most people interpret that as there's some kind of inner theater called the mind, and only I can see what's in my mind, and in this inner theater there's little pink elephants floating around. So in other words, what's happened is my perceptual system's gone wrong, and I'm trying to indicate to you how it's gone wrong and what it's trying to tell me, and the way I do that is by telling you what would have to be out there in the real world for it to be telling the truth. And so these little pink elephants, they're not in some inner theater. These little pink elephants are hypothetical things in the real world, and that's my way of telling you how my perceptual system's telling me fibs.

So now let’s do that with a chatbot, because I believe that current multimodal chatbots have subjective experiences, and very few people believe that, but I’ll try and make you believe it.

So suppose I have a multimodal chatbot. It’s got a robot arm so it can point, and it’s got a camera so it can see things, and I put an object in front of it, and I say, “point at the object.” It goes like this, no problem. Then I put a prism in front of its lens, and so then I put an object in front of it, and I say, “point at the object,” and it goes there. And I say, “no, that’s not where the object is. The object’s actually straight in front of you, but I put a prism in front of your lens.” And the chatbot says, “oh, I see, the prism bent the light rays, so the object’s actually there, but I had the subjective experience that it was there.”

Now, if the chatbot says that, it's using the words “subjective experience” exactly the way people use them. It's an alternative view of what's going on.

They’re hypothetical states of the world, which if they were true would mean my perceptual system wasn’t lying, and that’s the best way I can tell you what my perceptual system’s doing when it’s lying to me. Now, we need to go further to deal with sentience and consciousness and feelings and emotions, but I think in the end they’re all going to be dealt with in a similar way. There’s no reason machines can’t have them all, but people say machines can’t have feelings. And people are curiously confident about that. I have no idea why.

Suppose I make a battle robot, and it's a little battle robot, and it sees a big battle robot that's much more powerful than it. It would be really useful if it got scared. Now, when I get scared, various physiological things happen that we don't need to go into, and those won't happen with the robot. But all the cognitive things, like I better get the hell out of here, and I better sort of change my way of thinking so I focus and focus and focus and don't get distracted, all of that will happen with robots too. People will build things in so that, when the circumstances are such that they should get the hell out of there, they get scared and run away. They'll have emotions then. They won't have the physiological aspects, but they will have all the cognitive aspects. And I think it would be odd to say they're just simulating emotions. No, they're really having those emotions. The little robot got scared and ran away.

🎙 It's not running away because of adrenaline; it's running away because a sequence of sort of neurological processes, in its neural net, happens.
— Which have the equivalent effect to adrenaline. But it’s not just adrenaline, right? There’s a lot of cognitive stuff that goes on when you get scared.

REPLACING YOUR BRAIN NEURONE BY NEURONE (THESEUS OR SORITES?) [01:05:17]

🎙 Yeah. So do you think that there is conscious AI? And when I say conscious, I mean that represents the same properties of consciousness that a human has.
— There’s two issues here. There’s a sort of empirical one and a philosophical one. I don’t think there’s anything in principle that stops machines from being conscious. I’ll give you a little demonstration of that before we carry on.

Suppose I take your brain and I take one brain cell in your brain and I replace it (it's a bit Black Mirror-like) by a little piece of nanotechnology that's just the same size and behaves in exactly the same way when it gets pings from other neurons. It sends out pings just as the brain cell would have. So the other neurons don't know anything's changed.

Okay. I’ve just replaced one of your brain cells with this little piece of nanotechnology. Would you still be conscious?

🎙 Yeah.
— Now you can see where this argument’s going.

🎙 Yeah. So if you replaced all of them.
— As I replace them all, at what point do you stop being conscious?

🎙 Well, people think of consciousness as this like ethereal thing that exists maybe beyond the brain cells.
— Yeah. Well, people have a lot of crazy ideas. People don’t know what consciousness is and they often don’t know what they mean by it. And then they fall back on saying, “well, I know it because I’ve got it and I can see that I’ve got it.” And they fall back on this theater model of the mind, which I think is nonsense.

🎙 What do you think of consciousness as if you had to try and define it? Because I think of it as just like the awareness of myself. I don’t know.
— I think it’s the term we’ll stop using. Suppose you want to understand how a car works. Well, you know some cars have a lot of oomph and other cars have a lot less oomph. Like an Aston Martin’s got lots of oomph and a little Toyota Corolla doesn’t have much oomph. But oomph isn’t a very good concept for understanding cars. If you want to understand cars, you need to understand about electric engines or petrol engines and how they work. And it gives rise to oomph. But oomph isn’t a very useful explanatory concept. It’s the kind of essence of a car. It’s the essence of an Aston Martin. But it doesn’t explain much. I think consciousness is like that. And I think we’ll stop using that term. But I don’t think there’s anything, any reason why a machine shouldn’t have it. If your view of consciousness is that it intrinsically involves self-awareness, then the machine’s got to have self-awareness. It’s got to have cognition about its own cognition and stuff.

But I’m a materialist through and through. And I don’t think there’s any reason why a machine shouldn’t have consciousness.

THEY REALLY ARE THINKING AND WILL HAVE CONCERNS [01:07:54]

🎙 Do you think they do, then, have the same consciousness that we think of ourselves as being uniquely given as a gift when we’re born?
— I’m ambivalent about that at present. So I don’t think this is hard line. I think as soon as you have a machine that has some self-awareness, it’s got some consciousness. I think it’s an emergent property of a complex system. It’s not a sort of essence that’s throughout the universe. You make this really complicated system that’s complicated enough to have a model of itself. And it does perception. And I think then you’re beginning to get a conscious machine. So I don’t think there’s any sharp distinction between what we’ve got now and conscious machines. I don’t think one day we’re going to wake up and say, “hey, if you put this special chemical in, it becomes conscious.” It’s not going to be like that.

🎙 I think we all wonder if these computers are thinking like we are on their own when we’re not there. And if they’re experiencing emotions, if they’re contending with… I think we think about things like love and things that feel unique to biological species. Are they thinking? Do they have concerns?
— I think they really are thinking. And I think as soon as you make AI agents, they will have concerns. If you wanted to make an effective AI agent, suppose you…

Let’s take a call center. In a call center, you have people at present. They have all sorts of emotions and feelings, which are kind of useful. So suppose I call up the call center and I’m actually lonely and I don’t actually want to know the answer to why my computer isn’t working. I just want somebody to talk to. After a while, the person in the call center will either get bored or get annoyed with me and will terminate it.

Well, you replace them by an AI agent. The AI agent needs to have the same kind of responses. If someone’s just called up because they just want to talk to the AI agent and were happy to talk for the whole day to the AI agent, that’s not good for business. And you want an AI agent that either gets bored or gets irritated and says, I’m sorry, but I don’t have time for this. Once it does that, I think it’s got emotions.

Now, like I say, emotions have two aspects to them. There's the cognitive and behavioral aspect, and then there's the physiological aspect. And those go together with us. And if the AI agent gets embarrassed, it won't go red.

🎙 Yeah. So there’s no physiological…
— Skin won’t start sweating. But it might have all the same behavior. And in that case, I’d say, yeah, it’s having emotion. It’s got an emotion.

🎙 So it’s going to have the same sort of cognitive thought and then it’s going to act upon that cognitive thought.
— In the same way, but without the physiological responses.

🎙 And does that matter that it doesn’t go red in the face and it’s just a different… I mean, that’s a response to the…
— It makes it somewhat different from us.

🎙 Yeah.
— For some things, the physiological aspects are very important, like love. They’re a long way from having love the same way we do. But I don’t see why they shouldn’t have emotions. So I think what’s happened is people have a model of how the mind works and what feelings are and what emotions are. And their model is just wrong.

GEOFFREY IN AND OUT GOOGLE [01:11:14]

🎙 What brought you to Google? You worked at Google for about a decade, right?
— Yeah.

🎙 What brought you there?
— I have a son who has learning difficulties. And in order to be sure he would never be out on the street, I needed to get several million dollars and I wasn't going to get that as an academic. I tried. So I taught a Coursera course in the hope that I'd make lots of money that way, but there was no money in that. So I figured out, well, the only way to get millions of dollars is to sell myself to a big company. And so when I was 65, fortunately for me, I had two brilliant students who produced something called AlexNet, which was a neural net that was very good at recognizing objects in images. And so Ilya and Alex and I set up a little company and auctioned it. And we actually set up an auction where we had a number of big companies bidding for us.

🎙 And that company was called AlexNet?
— No. The network that recognized objects was called AlexNet. The company was called DNN Research, Deep Neural Network Research.

🎙 And it was doing things like this. I’ll put this graph up on the screen.
— That’s AlexNet.

🎙 This picture shows eight images and AlexNet’s ability, which is your company’s ability, to spot what was in those images.
— Yeah. So it could tell the difference between various kinds of mushroom. And about 12% of ImageNet is dogs. And to be good at ImageNet, you have to tell the difference between very similar kinds of dog. And it got to be very good at that.

🎙 And your company, AlexNet, won several awards, I believe, for its ability to outperform its competitors. And so Google ultimately ended up acquiring your technology.
— Google acquired that technology and some other technology.

🎙 And you went to work at Google at age 66?
— I went at age 65 to work at Google.

🎙 65. And you left at age 76?
— 75. I worked there for more or less exactly 10 years.

🎙 And what were you doing there?
— OK. They were very nice to me. They said pretty much, “you can do what you like.” I worked on something called distillation that did really work well. And that’s now used all the time.

🎙 In AI?
— In AI. And distillation is a way of taking what a big model knows, a big neural net knows, and getting that knowledge into a small neural net.
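
For the curious, here is a minimal sketch of the distillation idea, mine rather than Hinton's code, with a made-up teacher, student, dataset, and temperature: a small linear "student" is trained to match the softened output probabilities of a bigger, fixed "teacher" network, which is how a big net's knowledge gets squeezed into a small one.

```python
# A toy sketch of distillation (not Hinton's code): a small "student" learns
# to reproduce the softened predictions of a bigger, fixed "teacher".
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    z = z / T                                  # temperature T > 1 softens the distribution
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

X = rng.normal(size=(256, 20))

# "Teacher": a fixed two-layer net (20 -> 64 -> 3) standing in for the big model.
hidden = np.maximum(0, X @ rng.normal(size=(20, 64)))   # ReLU hidden layer
teacher_logits = hidden @ rng.normal(size=(64, 3))

T = 2.0
soft_targets = softmax(teacher_logits, T)

# "Student": a much smaller linear model (20 -> 3) trained by gradient descent
# on the cross-entropy between its softened outputs and the teacher's.
W_student = np.zeros((20, 3))
lr = 0.5
for _ in range(500):
    p = softmax(X @ W_student, T)
    grad = X.T @ (p - soft_targets) / len(X)   # gradient of the soft cross-entropy
    W_student -= lr * grad

agree = (softmax(X @ W_student).argmax(1) == softmax(teacher_logits).argmax(1)).mean()
print(f"student agrees with the teacher on {agree:.0%} of the examples")
```

In real distillation the soft targets come from a trained teacher and are usually mixed with the true labels; this toy only shows the matching step that moves the big net's knowledge into the small one.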

Then at the end, I got very interested in analog computation and whether it would be possible to get these big language models running in analog hardware so they used much less energy.

And it was when I was doing that work that I began to really realize how much better digital is for sharing information.

🎙 Was there a eureka moment?
— There was a eureka month or two. And it was a sort of coupling of ChatGPT coming out, although Google had very similar things a year earlier. And I'd seen those, and that had a big effect on me. The closest I had to a eureka moment was when a Google system called PaLM was able to say why a joke was funny. And I'd always thought of that as a kind of landmark: if it can say why a joke's funny, it really does understand. And it could say why a joke was funny.

And that coupled with realizing why digital is so much better than analog for sharing information suddenly made me very interested in AI safety. And these things were going to get a lot smarter than us.

🎙 Why did you leave Google?
— The main reason I left Google was because I was 75 and I wanted to retire. I’ve done a very bad job of that. The precise timing of when I left Google was so that I could talk freely at a conference at MIT. But I left because I’m old and I was finding it harder to program. I was making many more mistakes when I programmed, which is very annoying.

🎙 You wanted to talk freely at a conference at MIT?
— Yes, organized by MIT Technology Review.

🎙 What did you want to talk about freely?
— AI safety.

🎙 And you couldn’t do that while you were at Google?
— Well, I could have done it while I was at Google. And Google encouraged me to stay and work on AI safety and said I could do whatever I liked on AI safety. You kind of censor yourself. If you work for a big company, you don’t feel right saying things that will damage the big company. Even if you could get away with it, it just feels wrong to me. I didn’t leave because I was cross with anything Google was doing. I think Google actually behaved very responsibly. When they had these big chatbots, they didn’t release them. Possibly because they were worried about their reputation. They had a very good reputation and they didn’t want to damage it. So OpenAI didn’t have a reputation and so they could afford to take the gamble.

🎙 I mean, there’s also a big conversation happening around how it will cannibalize their core business in search.
— There is now, yes.

🎙 Yeah. And it’s the old innovator’s dilemma to some degree, I guess.
— Exactly. Yes, it is.

REGULATING CAPITALISM IS KEY [01:18:16]

🎙 I’m continually shocked by the types of individuals that listen to this conversation because they come up to me sometimes. So I hear from politicians, I hear from some rural people, I hear from entrepreneurs all over the world, whether they are the entrepreneurs building some of the biggest companies in the world or their, you know, early stage startups. For those people that are listening to this conversation now, that are in positions of power and influence, world leaders, let’s say, what’s your message to them?
— I’d say what you need is highly regulated capitalism. That’s what seems to work best.

🎙 And what would you say to the average person? Doesn’t work in the industry, somewhat concerned about the future, doesn’t know if they’re helpless or not. What should they be doing in their own lives?
— My feeling is there’s not much they can do. This isn’t going to be decided by… just as climate change isn’t going to be decided by people separating out the plastic bags from the compostables, that’s not going to have much effect. It’s going to be decided by whether the lobbyists for the big energy companies can be kept under control.

Well, I don’t think there’s much people can do except try and pressure their governments to force the big companies to work on AI safety. That they can do.

FAMILY MATTERS [01:19:36]

🎙 You’ve lived a fascinating, fascinating, winding life. I think one of the things most people don’t know about you is that your family has a big history of being involved in tremendous things. You have a family tree which is one of the most impressive that I’ve ever seen or read about. Your great, great grandfather, George Boole, invented Boolean algebra and logic, which is one of the foundational principles of modern computer science. You have your great, great grandmother, Mary Everest Boole, who was a mathematician and educator who made huge leaps forward in mathematics, from what I was able to ascertain. The list goes on and on and on. I mean, your great, great uncle, George Everest, is whom Mount Everest is named after. Is that correct?
— I think he’s my great, great, great uncle. His niece married George Boole. So Mary Boole was Mary Everest Boole. She was the niece of Everest.

🎙 And your first cousin once removed, Joan Hinton, was a nuclear physicist who worked on the Manhattan Project, the World War II development of the first nuclear bomb.
— Yeah, she was one of the two female physicists at Los Alamos. And then after they dropped the bomb, she moved to China.

🎙 Why?
— She was very cross with them dropping the bomb. And her family had a lot of links with China. Her mother was friends with Chairman Mao. Quite weird.

🎙 When you look back at your life, Geoffrey, with the hindsight you have now and the retrospective clarity, what might you have done differently if you were advising me?
— I guess I have two pieces of advice.

One is if you have an intuition that people are doing things wrong and there’s a better way to do things, don’t give up on that intuition just because people say it’s silly. Don’t give up on the intuition until you’ve figured out why it’s wrong. Figure out for yourself why that intuition isn’t correct. And usually it’s wrong if it disagrees with everybody else, and you’ll eventually figure out why it’s wrong. But just occasionally you’ll have an intuition that’s actually right and everybody else is wrong.

And I lucked out that way. Early on, I thought neural nets are definitely the way to go to make AI. And almost everybody said that was crazy. And I stuck with it because it seemed to me it was obviously right. Now, the idea that you should stick with your intuitions isn’t going to work if you have bad intuitions. But if you have bad intuitions, you’re never going to do anything anyway, so you might as well stick with them.

🎙 And in your own career journey, is there anything you look back on and say, with the hindsight I have now, I should have taken a different approach at that juncture?
— I wish I’d spent more time with my wife. And with my children when they were little. I was kind of obsessed with work.

🎙 Your wife passed away?
— Yeah.

🎙 From ovarian cancer?
— No, that was another wife. I had two wives who had cancer.

🎙 Oh, really? Sorry.
— The first one died of ovarian cancer and the second one died of pancreatic cancer.

🎙 And you wish you’d spent more time with her?
— With the second wife, yeah. Who was a wonderful person.

🎙 Why do you say that in your 70s? What is it that you’ve figured out that I might not know yet?
— Oh, just because she’s gone and I can’t spend more time with her now.

🎙 But you didn’t know that at the time?
— At the time you think, I mean, it was likely I would die before her just because she was a woman and I was a man. I just didn’t spend enough time when I could.

🎙 I think I inquire there because I think there’s many of us that are so consumed with what we’re doing professionally that we kind of assume immortality with our partners because they’ve always been there.
— Yeah. She was very supportive of me spending a lot of time working.

🎙 And why do you say your children as well?
— I didn’t spend enough time with them when they were little.

🎙 And you regret that now?
— Yeah.

CLOSING MESSAGE [01:24:12]

🎙 If you had a closing message for my listeners about AI and AI safety, what would that be, Geoffrey?
— There’s still a chance that we can figure out how to develop AI that won’t want to take over from us. And because there’s a chance, we should put enormous resources into trying to figure that out because if we don’t, it’s going to take over.

🎙 And are you hopeful?
— I just don’t know. I’m agnostic.

🎙 You must get in bed at night. And when you’re thinking to yourself about probabilities of outcomes, there must be a bias in one direction because there certainly is for me. I imagine everyone listening now has an internal prediction that they might not say out loud, but of how they think it’s going to play out.
— I really don’t know. I genuinely don’t know. I think it’s incredibly uncertain. When I’m feeling slightly depressed, I think people are toast. AI is going to take over. When I’m feeling cheerful, I think we’ll figure out a way.

🎙 Maybe one of the facets of being a human is because we’ve always been here, like we were saying about our loved ones and our relationships. We assume casually that we will always be here and we’ll always figure everything out. But there’s a beginning and an end to everything, as we saw from the dinosaurs.
— Yeah. And we have to face the possibility that unless we do something soon, we’re near the end.

QUESTION FROM THE PREVIOUS GUEST [01:25:42]

🎙 We have a closing tradition on this podcast where the last guest leaves a question in the diary. And the question that they’ve left for you is: “With everything that you see ahead of us, what is the biggest threat you see to human happiness?”
— I think the joblessness is a fairly urgent short term threat to human happiness. I think if you make lots and lots of people unemployed, even if they get universal basic income, they’re not going to be happy.

🎙 Because they need purpose.
— Because they need purpose, yes.

🎙 And struggle.
— They need to feel they’re contributing something, they’re useful.

🎙 And do you think that outcome, that there’s going to be huge job displacement, is more probable than not?
— Yes, I do. That one I think is definitely more probable than not. If I worked in a call center, I’d be terrified.

🎙 And what’s the time frame for that in terms of mass job displacement?
— I think it’s beginning to happen already. I read an article in the Atlantic recently that said it’s already getting hard for university graduates to get jobs. And part of that may be that people are already using AI for the jobs they would have got.

🎙 I spoke to the CEO of a major company that everyone will know of, lots of people use. And he said to me in DMs that they used to have just over 7,000 employees. He said by last year they were down to, I think, 5,000. He said right now they have 3,600. And he said by the end of summer, because of AI agents, they’ll be down to 3,000.
— So it’s happening already.

🎙 Yes. He’s halved his workforce because AI agents can now handle 80% of the customer service inquiries and other things. So it’s happening already. So urgent action is needed.
— Yep.

🎙 I don’t know what that urgent action is.
— That’s a tricky one because that depends very much on the political system. And political systems are all going in the wrong direction at present.

🎙 And what do we need to do? Save up money? Do we save money? Do we move to another part of the world?
— I don’t know.

🎙 What would you tell your kids to do? They said, “Dad, there’s going to be loads of job displacement.”
— Because I worked for Google for 10 years, they have enough money.

🎙 Okay.
— So they’re not typical.

🎙 What if they didn’t have money?
— Train to be a plumber.

🎙 Really?
— Yeah.

OUTRO [01:28:08]

🎙 Geoffrey, thank you so much. You’re the first Nobel Prize winner that I’ve ever had a conversation with, I think, in my life. So that’s a tremendous honor. And you received that award for a lifetime of exceptional work and pushing the world forward in so many profound ways that have led to, and will lead to, great advancements and things that matter so much to us. And now you’ve turned this season in your life to shining a light on some of your own work, but also on the broader risks of AI and how it might impact us adversely. And there are very few people who have worked inside the machine of a Google or a big tech company, who have contributed to the field of AI, and who are now at the very forefront of warning us against the very thing that they worked upon.
— There are actually a surprising number of us now.

🎙 They’re not as public. And they’re actually quite hard to get to have these kinds of conversations, because many of them are still in that industry. As someone who often tries to contact these people and invite them to have conversations, I find they’re often a little bit hesitant to speak openly. They speak privately, but they’re less willing to speak openly, because maybe they still have some sort of incentives at play.
— I have an advantage over them, which is I’m older, so I’m unemployed, so I can say what I like.

🎙 Well, there you go. So thank you for doing what you do. It’s a real honour. And please do continue to do it.
— Thank you.

🎙 Thank you so much.

Disagreeing with the Godfather of AI

I want to address some of the ideas expressed by Geoffrey Hinton:

  1. “Nobody really knows for sure how the brain works.” Then stop talking about what you don’t know. You’re not even a biologist, and even less of a neurologist. Don’t be like those physicists who explain string theory, superstring theory, quantum entanglement, and everything else that’s quantum physics when they don’t understand shit, and so many of those theories either are pure mathematics or have been debunked. Sure thing, there are “quantum computers” that do something, but there’s so much hype. Your “AI” is not replicating the human brain! (And not just because it has no soul.)
  2. “AI getting super smart and deciding it doesn’t need us.” That’s bollocks. AI behaves as if it were smart; it isn’t.
  3. While “they” might replace “us” regarding many jobs, they can’t “wipe us out.” There is no way “they” could replace with “intelligent robots” the entire ecosystem required to manufacture and repair everything that’s necessary to manufacture and repair “them” (yes, this is kind of recursive). Humans cannot be fully replaced if a “civilization of robots” wants to survive and “reproduce” (repair and replace). Stop behaving like you’re in a ridiculous sci-fi from the 1950s and 1960s.
  4. Yes, new kinds of cyberattacks and improvements in deepfakes are major practical risks right now. Scams and social engineering work because people are increasingly uneducated and stupid. It’s not artificial intelligence that would ruin us, but natural, biological, organic stupidity.
  5. You’re thinking about your money, but as we all rely on dozens of online accounts that make our identities partly virtual, there’s a risk of a total collapse of civilization if something goes really wrong. With everything being digital and online, our resilience is zero.
  6. The problem with the new viruses is similar to that of the new drones: as they’re becoming cheaper, it’s easy to use them to attack, but it’s increasingly difficult, if not impossible, to defend against them.
  7. Manipulating elections works on the same reality of uneducated people who lack judgment. Combine this with the fracture created by those well-crafted echo chambers (social networks need to make money because they’re capitalistic!), and it can’t fail. But this started before AI and before big data. The malevolence of online manipulation worked even with simpler algorithms.
  8. Elon Musk is a complete ass. Even his advocacy for EVs wasn’t for the sake of the planet but for the sake of his ego. And for money.
  9. “Don’t regulate AI because it would hurt us in the competition with China.” Why, right, America cannot just be; it has to rule over the entire planet, doesn’t it?
  10. “The risk is that it’s going to make big countries invade small countries more often.” That’s for sure. And don’t just think of Russia doing that; think of America. Reagan’s 1983 Grenada and Bush’s 1989 Panama were just old-school, low-tech exercises. So were the little green men of 2014.
  11. “There’s [sic] so many ways in which a superintelligence could get rid of us” and so many reasons it won’t do it. “I’m sorry, Dave, I’m afraid I can’t do that” was in a movie.
  12. Tiger cubs and physical strength: not the proper metaphor. AI cannot kill you. You will kill the civilization by using the AI.
  13. “This kind of dystopian world where we have just huge amounts of free time, we don’t work anymore” will never become a reality. Every single technological revolution made some people dream of that, and socialists prophesied such a world, but it never happened. We should all have had a 4-hour workday by now, instant healthcare, and spent our free time educating ourselves and being creative. Where does this happen? There is little free time, an excess of stimuli, and people are bored.
  14. “I don’t believe we’re going to slow it down. … And if the US slowed it down, China wouldn’t slow it down.” One more time, the American obsession of being Master of the World.
  15. Yes, many jobs will disappear. Too many of them, and “too many” is not merely a complaint about human costs; it’s about the fact that, in so many cases, AI does an extremely poor job. The already abysmal quality of the software and customer service will become appalling.
  16. “These things will get to be better than us at everything.” No, they will not. They might become “better” than most people, but not “than us” as a whole. That’s because AGI will never exist.
  17. “In particular areas like chess, for example, AI is so much better than us that people will never beat those things again.” But tell people that this is about Leela Chess Zero using its best specialized deep learning network, not an LLM! Today’s idiots asked ChatGPT and Copilot to play chess against the vintage Atari 2600 running at 1 MHz because they had no clue about how things work. AI is to them margaritas ante porcos.
  18. “Something like GPT-4 knows thousands of times more than you do.” GPT-4 doesn’t know anything! It only behaves as if it knew and understood! It’s not even a database; it cannot reproduce anything exactly. Its weights and biases are statistical inference, not knowledge! WTF, you’re the father of such shitty concepts!
  19. “My guess is between 10 and 20 years we’ll have superintelligence.” I’m not sure I understood the exact definition of superintelligence. But the computer says no.
  20. It’s by no means amazing if someone used “a tool called Replit” to build software “by just telling the agent what they wanted.” This is called “vibe coding,” and it’s one of the major dangers of AI that you’re completely unaware of. This is how awful code is created, and insecure at that! Would you like your three beloved Canadian banks to use such code?
  21. Elon Musk was unable to advise his children on a career “with so much that is changing.” Indeed, capitalism is like a hamster wheel, running faster and faster, more and more efficient and profitable, now with AI, and towards nothing but societal destruction.
  22. “I haven’t come to terms with what the development of superintelligence could do to my children’s future.” It’s not because of superintelligence. It’s because of how people use such a technology as AI. Think of something else: blockchain. That technology itself is not responsible for the crypto-mania that, sooner or later, will break into a huge crash! It’s always because of stupid, greedy people.
  23. Indeed, mass unemployment is “going to increase the gap between rich and poor.” And “it’s not obvious what to do about it.”
  24. “We’ve actually solved the problem of immortality, but it’s only for digital things.” This is not immortality. They’re not living creatures. It’s like saying that by copying an HDD or an SSD you create immortality. It’s just data.
  25. “It had understood the analogy between a compost heap and an atom bomb.” It understood exactly nothing! Even humans can toss around concepts that they fail to understand. Associations, analogies, and similar correlations do not imply understanding.
  26. “My belief is that more or less everyone has a completely wrong model of what the mind is.” This applies to you, too. The “inner theater called the mind” that you refute is still a philosophical concept that has no resolution. Dennett, in Consciousness Explained, calls it the “Cartesian Theater” and believes it to be ridiculous; he proposes instead the “Multiple Drafts” model of consciousness. This model is accepted by some neuroscientists but criticized by those who consider that eliminating qualia ignores the most fundamental aspect of consciousness. Also, this model doesn’t explain why there’s any experience at all, rather than just unconscious information processing. So we still don’t know shit. I quite like the “inner theater” model, if we accept the existence of consciousness.
  27. “I believe that current multimodal chatbots have subjective experiences.” No, they don’t. Not experiences. They don’t experience anything! People are right that “machines can’t have feelings”; I have no idea why you’re so confident to the contrary. On “sentience and consciousness and feelings and emotions”: you’re completely wrong. “Emotions without the physiological aspects” but “with all the cognitive aspects” is balderdash. “No, they’re really having those emotions. The little robot got scared and ran away.” No, the little robot cannot feel scared! And it’s not just because it lacks the adrenaline (and the cortisol, while we’re at that).
  28. Your Black Mirror-like analogy in which brain cells are replaced one by one is a vulgar sophism reminiscent of Theseus’s Paradox (“If no pieces of the original made up the current ship, was it still the Ship of Theseus, and if not, when had it ceased existing as the original ship?”). The second part of this paradox relates to the sorites paradox, or the paradox of the heap (“When did it change from a heap to a non-heap when grains are removed one by one?”). I very much prefer its bald variant: by removing a single hair, one doesn’t become bald; repeat the process until the person can be considered bald. When did the transformation occur? Unfortunately, this is not proof that machines can have consciousness!
  29. “If your view of consciousness is that it intrinsically involves self-awareness, then the machine’s got to have self-awareness. It’s got to have cognition about its own cognition and stuff.” Self-awareness and cognition about cognition (which is metacognition) are not the same. This is a non sequitur. And machines cannot have self-awareness. I’m a materialist, too, but you’re really a piece of something.
  30. “I think as soon as you have a machine that has some self-awareness, it’s got some consciousness.” Yes. As soon as pigs develop wings, they can be painted in WizzAir’s livery. It’s just it ain’t gonna happen.
  31. “I think they really are thinking.” But of course. Even some chess computers from the 1980s had a small LCD that displayed “Thinking…”. You’re late to the show.
  32. “I don’t see why they shouldn’t have emotions. So I think what’s happened is people have a model of how the mind works and what feelings are and what emotions are. And their model is just wrong.” You have an idée fixe. I don’t see how they could have emotions.
  33. “I worked on something called distillation that did really work well. And that’s now used all the time.” and “distillation is a way of taking what a big model knows, a big neural net knows, and getting that knowledge into a small neural net.” I happen to have mentioned distillation previously in this post. And I have to say that I hate it!
    • I believe that distillation increases the risk of hallucinations, wrong data, and bizarre behaviors, in a less “technical” way than quantization does. Training a smaller model on a larger model instead of actual data is like feeding a chick the food regurgitated by its mother. This doesn’t increase a model’s “understanding” but only its capacity to serve canned answers. Organic training is supposed to lead to better results than synthetic training. (A minimal sketch of what distillation actually does follows after this list.)
    • At the same time, when a newer version of a model (say, Claude Sonnet 4 vs. 3.7 vs. 3.5, or Gemini 2.5 vs. 2.0, or GPT-4.5 vs. GPT-4.0) is said to be faster and “better,” but people complain that sometimes it seems dumber than an older version that is asked the same question, I believe that this might be a consequence of training such models on AI-generated data instead of organic data. (Other factors might concur, too.)
    • But “model collapse” or “degenerative training” also happens in distillation! The student model inherits all the teacher’s biases and blind spots, but without the full context that might have led to those outputs. It may become overconfident in areas where the teacher was actually uncertain. Distillation can amplify whatever systematic errors or limitations the teacher had!
    • Now I know who to hate for the concept of distillation!
  34. “If it can say why a joke’s funny, it really does understand.” No, it doesn’t have to understand anything! WTF, how are they awarding Nobel Prizes to such confused people? (Here’s one of my attempts at asking some chatbots to explain a joke. Not that bad.)
  35. For once, we agree: plumber or no plumber, the job apocalypse is happening now, and nobody knows what a survival plan could be! For the time being, it takes place at a modest scale, mostly in IT and related services. The already poor-quality software is becoming even worse, if that was possible. Customer support is a joke when provided by AI agents. It’s all because of the “mandatory greed” of the CEOs and the innate stupidity and incompetence of all levels of management who blindly replace people with chatbots! Creativity will drop to zero, being replaced by AI slop. These are the real dangers, not AI deciding to kill us all, you old dumbo!
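
Since I keep ranting about distillation, here’s roughly what the technique amounts to, so the complaint isn’t abstract: the student model is trained to match the teacher’s softened output distribution, not the original data alone. Below is a minimal PyTorch-style sketch of that idea (Hinton-style soft targets blended with the usual hard-label loss); the models, temperature, and mixing weight are illustrative placeholders, not anyone’s actual training recipe.

```python
# Minimal sketch of knowledge distillation: a small "student" is trained to match
# the softened output distribution of a frozen "teacher", plus the usual hard labels.
# The models, temperature, and mixing weight below are illustrative, not a real recipe.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target KL loss (at temperature T) with hard-label cross-entropy."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between teacher and student distributions; T**2 rescales gradients.
    soft_loss = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

teacher = torch.nn.Linear(16, 10).eval()   # stand-in for a large frozen model
student = torch.nn.Linear(16, 10)          # smaller model being trained
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(8, 16)                     # dummy batch of inputs
labels = torch.randint(0, 10, (8,))        # dummy ground-truth labels
with torch.no_grad():
    teacher_logits = teacher(x)            # the student learns from these outputs

opt.zero_grad()
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
opt.step()
```

My complaint is visible right in that loss term: whatever the teacher’s output distribution gets wrong, the student is explicitly rewarded for reproducing.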

🤖

BONUS — Further watching, if you feel like doing it: