I’m bored to death by such pieces of crap: 98 minutes of chit-chatting. As if life weren’t already too short! AI: What Could Go Wrong? with Geoffrey Hinton | The Weekly Show with Jon Stewart. Fortunately, downsub.com allowed me to extract the original captions (not the auto-generated ones!), so that I could retrieve the information at my own speed. Also, Verba volant, scripta manent: spoken words fly away, written words remain. Grasping and memorizing information comes easier with something written down (you can read aloud the parts you want reinforced), not to mention that the text can be searched, reread, and whatnot. Here it is:

1 • JON STEWART: Am I in neural learning 201 yet, or am I still in 101?

GEOFFREY HINTON: You’re like the smart student in the front row who doesn’t know anything but asks these good questions. [JON LAUGHING]

JON STEWART: That’s the nicest way I’ve ever been described. Thank you. [MUSIC PLAYING] Hey, everybody. Welcome to The Weekly Show podcast. My name is Jon Stewart. I’m going to be hosting you today. And it’s, what is it, Wednesday, October 8. I don’t know what’s going to happen later on in the day, but we’re going to be out tomorrow. But today’s episode, I just want to say very quickly, today’s episode, we are talking to someone known as the godfather of AI, a gentleman by the name of Geoffrey Hinton, who has been developing the type of technology that has turned into AI since the ’70s. And I want to let you know, so we talk about it—the first part of it, though, he gives us this breakdown of what it actually is, which, for me, was unbelievably helpful. We get into the it will kill us all part, but it was important, from my understanding, to set the scene. So I hope you find that part as interesting as I did, because, man, it expanded my understanding of what this technology is, of how it’s going to be utilized, of what some of those dangers might be in a really interesting way. So I will not hold it up any longer. Let us get to our guest for the podcast. [MUSIC PLAYING] Ladies and gentlemen, we are absolutely thrilled today to be able to welcome Professor Emeritus with the Department of Computer Science at the University of Toronto and Schwartz Reisman Institute’s Advisory Board Member, Geoffrey Hinton is joining us. Sir, thank you so much for being with us today.

GEOFFREY HINTON: Well, thank you so much for inviting me.

JON STEWART: I’m delighted. You are known as, and I’m sure you will be very demure about this, the godfather of artificial intelligence for your work on these neural networks. You co-won the actual Nobel Prize in physics in 2024 for this work. Is that correct?

GEOFFREY HINTON: That is correct. It’s slightly embarrassing since I don’t do physics. So when they called me up and said, you won the Nobel Prize in physics, I didn’t believe them to begin with.

JON STEWART: And were the other physicists going, wait a second, that guy’s not even in our business?

GEOFFREY HINTON: I strongly suspect they were, but they didn’t do it to me.

5 • JON STEWART: Oh, good. I’m glad. This is going to seem somewhat remedial, I’m sure, to you, but when we talk about artificial intelligence, I’m not exactly sure what it is that we’re talking about. I know there are these things, large language models. I know, in my experience, artificial intelligence is just a slightly more flattering search engine, whereas I used to Google something and it would just give me the answer. Now it says, what an interesting question you’ve asked me. So what are we talking about when we talk about artificial intelligence?

GEOFFREY HINTON: So when you used to Google, it would use keywords. And it would have done a lot of work in advance. So if you gave it a few keywords, it could find all the documents that had those words in them.

JON STEWART: So basically, it’s just sorting. It’s looking through, and it’s sorting and finding words, and then bringing you a result.

GEOFFREY HINTON: Yeah. That’s how it used to work.

JON STEWART: OK.

GEOFFREY HINTON: But it didn’t understand what the question was. So it couldn’t, for example, give you documents that didn’t actually contain those words but were about the same subject.

JON STEWART: It didn’t make that connection. Oh, right, because it would say, here is your result minus, and then it would say a word that was not included.

GEOFFREY HINTON: Right. But if you had a document with none of the words you used, it wouldn’t find that, even though it might be a very relevant document about exactly the subject you were talking about, it had just used different words. Now, it understands what you say, and it understands in pretty much the same way people do.

JON STEWART: What? So it’ll say, oh, I know what you mean. Let me educate you on this. So it’s gone from being kind of literally just a search and find thing to an actual almost an expert in whatever it is that you’re discussing. And it can bring you things that you might not have thought about.

GEOFFREY HINTON: Yes. So the large language models are not very good experts at everything. So take some friend of yours who knows a lot about some subject matter.

10 • JON STEWART: No, I got a couple of those.

GEOFFREY HINTON: Yeah, they’re probably a bit better than the large language model, but they’ll nevertheless be impressed that the large language model knows their subject pretty well.

JON STEWART: So what is the difference between machine learning—so was Google, in terms of a search engine, machine learning? That’s just algorithms and predictions.

GEOFFREY HINTON: No, not exactly. Machine learning is a cover-all term for any system on a computer that learns.

JON STEWART: OK.

GEOFFREY HINTON: Now, these neural networks are a particular way of doing learning that’s very different from what was used before.

JON STEWART: OK. Now, these are the new neural networks. The old machine learning, those were not considered neural networks. And when you say “neural networks,” meaning your work, the genesis of it was in the ’70s where you were studying the brain. Is that correct?

GEOFFREY HINTON: I was trying to come up with ideas about how the brain actually learned. And there’s some things we know about that—it learns by changing the strengths of connections between brain cells.

JON STEWART: Wait, so explain that. You said it learns by changing the connections. So if you show a human something new, it will actually make new connections between brain cells?

GEOFFREY HINTON: It won’t make new connections. There will be connections that were there already.

15 • JON STEWART: OK.

GEOFFREY HINTON: But the main way it operates is it changes the strength of those connections.

JON STEWART: Wow.

GEOFFREY HINTON: So if you think of it from the point of view of a neuron in the middle of the brain, a brain cell—

JON STEWART: OK.

GEOFFREY HINTON: All it can do in life is sometimes go ping.

JON STEWART: That’s all he’s got? That’s his only—

GEOFFREY HINTON: That’s all it’s got. Unless it happens to be connected to a muscle.

JON STEWART: OK.

GEOFFREY HINTON: It can sometimes go ping.

20 • JON STEWART: OK.

GEOFFREY HINTON: And it has to decide when to go ping.

JON STEWART: Oh, wow. How does it decide when to go ping?

GEOFFREY HINTON: I was glad you asked that question. [JON LAUGHING] There’s other neurons going ping.

JON STEWART: OK.

GEOFFREY HINTON: And when it sees particular patterns of other neurons going ping, it goes ping. And you can think of this neuron as receiving pings from other neurons. And each time it receives a ping, it treats that as a number of votes for whether it should turn on, or it should go ping, or should not go ping. And you can change how many votes another neuron has for it.

JON STEWART: How would you change that vote?

GEOFFREY HINTON: By changing the strength of the connection. The strength of the connection, think of as the number of votes this other neuron gives for you to go ping.
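
My aside, not from the show: the "number of votes" picture is exactly what an artificial neuron computes, a weighted sum of incoming pings compared against a threshold. A toy Python sketch with made-up numbers:

```python
# Toy neuron: add up the weighted "votes" from other neurons,
# and go ping only if the total crosses a threshold.
incoming_pings = [1, 0, 1, 1]           # which upstream neurons just pinged
vote_strengths = [0.9, 0.4, -0.7, 0.5]  # connection strengths = number of votes

total_votes = sum(p * w for p, w in zip(incoming_pings, vote_strengths))
goes_ping = total_votes > 0.5           # arbitrary threshold for this toy
print(round(total_votes, 2), goes_ping)  # 0.7 True
```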

JON STEWART: OK. So it really is, in some respects, it’s a—boy, it reminds me of the movie Minions—but it’s almost a social—

GEOFFREY HINTON: Yes, yes. It’s very like political coalitions. There’ll be groups of neurons that go ping together. And the neurons in that group will all be telling each other go ping. And then there might be a different coalition, and they’ll be telling other neurons, don’t go ping.

25 • JON STEWART: Oh, my god.

GEOFFREY HINTON: And then there might be a different coalition, and they’re all telling each other to go ping and telling the first coalition not to go ping.

JON STEWART: All this is going on in your brain in the way of I would like to pick up a spoon.

GEOFFREY HINTON: Yes. So spoon, for example, spoon in your brain—

JON STEWART: Yeah.

GEOFFREY HINTON: —is a coalition of neurons going ping together. And that’s a concept.

JON STEWART: Oh, wow. So as you’re teaching, when you’re a baby and they go spoon, there’s a little group of neurons going, oh, that’s a spoon. And they’re strengthening their connections with each other? Is that why, when you’re imaging brains, you see certain areas light up? And is that lighting up of those areas the neurons that ping for certain items or actions?

GEOFFREY HINTON: Not exactly.

JON STEWART: Getting close.

GEOFFREY HINTON: It’s close. It’s close. Different areas will light up when you’re doing different things, like when you’re doing vision, or talking, or controlling your hands. Different areas light up for that.

30 • JON STEWART: OK.

GEOFFREY HINTON: But the coalition of neurons that go ping together when there’s a spoon, they don’t only work for spoon. Most of the members of that coalition will go ping when there’s a fork. So they overlap a lot, these coalitions.

JON STEWART: This is a big tent. It’s a big tent coalition. I love thinking about this as political. I had no idea—your brain operates on peer pressure.

GEOFFREY HINTON: There’s a lot of that goes on. Yes. And concepts are kind of coalitions that are happy together. But they overlap a lot. Like the concept for dog and the concept for cat have a lot in common. They’ll have a lot of shared neurons. In particular, the neurons that represent things like this is animate, or this is hairy, or this might be a domestic pet—all those neurons will be in common to cat and dog.

JON STEWART: Can I ask you this—and, again, I so appreciate your patience with this and explain. This is really helpful for me. Are there certain neurons that ping broadly, right, for the broad concept of animal, and then other neurons—does it work from macro to micro, from general to specific? So you have a coalition of neurons that ping generally, and then, as you get more specific with the knowledge, does that engage certain ones that will ping less frequently, but for maybe more specificity? Is that something?

GEOFFREY HINTON: OK, that’s a very good theory. [JON LAUGHING] No, nobody really knows for sure about this.

JON STEWART: Oh, OK.

GEOFFREY HINTON: But that’s a very sensible theory. And, in particular, there’s going to be some neurons in that coalition that ping more often for more general things.

JON STEWART: Right.

GEOFFREY HINTON: And then there may be neurons that ping less often for much more specific things.

35 • JON STEWART: Right. OK. And, like you say, there’s certain areas that will ping for vision or other senses—touch. I imagine there’s a ping system for language. And you were saying, what if we could get computers, which were much more, I would think, just binary, if-then, basic—you’re saying, could we get them to work as these coalitions?

GEOFFREY HINTON: Yeah. I don’t think binary if-then has much to do with it. The difference is people were trying to put rules into computers. So the basic way you program a computer is you figure out in exquisite detail how you would solve the problem.

JON STEWART: Oh. You deconstruct all the steps.

GEOFFREY HINTON: And then you tell the computer exactly what to do. That’s a normal computer program.

JON STEWART: OK, great.

GEOFFREY HINTON: These things aren’t like that at all.

JON STEWART: So you were trying to change that process to see if we could create a process that functioned more like how the human brain would. Rather than an item-by-item instruction list, you wanted it to think more globally. How did that occur?

GEOFFREY HINTON: So it was sort of obvious to a lot of people that the brain doesn’t work by someone else giving you rules, and you just execute those rules. In North Korea, they would love brains to work like that, but they don’t.

JON STEWART: You’re saying that in an authoritarian world, that is how brains would operate.

GEOFFREY HINTON: Well, that’s how they would like them to operate.

40 • JON STEWART: That’s how they would like them to operate. It’s a little more artsy than that.

GEOFFREY HINTON: Yes.

JON STEWART: All right. Fair enough.

GEOFFREY HINTON: We do write programs for neural nets, but the programs are just to tell the neural net how to adjust the strength of the connection on the basis of the activities of the neurons. So that’s a fairly simple program—

JON STEWART: Right.

GEOFFREY HINTON: —that doesn’t have all sorts of knowledge about the world in it. It’s just, what are the rules for changing neural connection strengths on the basis of the activities?

JON STEWART: Can you give me an example? Is that machine learning? Or is that deep learning?

GEOFFREY HINTON: That’s deep learning. If you have a network with multiple layers, it’s called deep learning because there’s many layers.

JON STEWART: So what are you saying to a computer when you are trying to get it to do deep learning? What would be an example of an instruction that you would give?

GEOFFREY HINTON: OK, so, let me go—

45 • JON STEWART: Oh, all right. Am I in neural learning 201 yet, or am I still in 101?

GEOFFREY HINTON: You’re like the smart student in the front row who doesn’t know anything but asks these good questions. [JON LAUGHING]

JON STEWART: That’s the nicest way I’ve ever been described. Thank you.

GEOFFREY HINTON: So let’s go back to 1949.

JON STEWART: Oh, boy. All right.

GEOFFREY HINTON: So here’s a theory from someone called Donald Hebb about how you change connection strengths.

JON STEWART: OK.

GEOFFREY HINTON: If neuron A goes ping and then shortly afterwards neuron B goes ping, increase the strength of the connection. That’s a very simple rule. That’s called the Hebb rule.

JON STEWART: Right. The Hebb rule is if neuron A goes ping and B goes ping, increase that connection.

GEOFFREY HINTON: Yes.

50 • JON STEWART: OK.

GEOFFREY HINTON: Now, as soon as computers came along and you could do computer simulations, people discovered that rule by itself doesn’t work. What happens is all the connections get very strong, and all the neurons go ping all at the same time, and you have a seizure.
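
My aside, not from the show: the Hebb rule, and the reason it blows up on its own, fits in a few lines of toy Python (all numbers invented):

```python
# Hebb rule: if neuron A pings and then neuron B pings, strengthen A -> B.
# Nothing here ever weakens a connection, which is exactly the problem.
weight = 0.1
learning_rate = 0.05

for step in range(1000):
    a_pings, b_pings = True, True   # A and B keep firing together
    if a_pings and b_pings:
        weight += learning_rate     # "neurons that fire together wire together"
    # no rule ever decreases the weight, so in a full network every
    # connection saturates and all neurons end up firing at once

print(weight)  # roughly 50, and still climbing
```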

JON STEWART: Oh, OK.

GEOFFREY HINTON: That’s a shame, isn’t it?

JON STEWART: That is a shame.

GEOFFREY HINTON: There’s got to be something that makes connections weaker as well as making them stronger.

JON STEWART: Right. There’s got to be some discernment.

GEOFFREY HINTON: Yes.

JON STEWART: OK.

GEOFFREY HINTON: If I can digress for about a minute—

55 • JON STEWART: Boy, I’d like that.

GEOFFREY HINTON: Suppose we wanted to make a neural network that had multiple layers of neurons, and it’s to decide whether an image contains a bird or not.

JON STEWART: Like a CAPTCHA, like when you go on and it says—

GEOFFREY HINTON: Exactly. We want to—

JON STEWART: OK.

GEOFFREY HINTON: We want to solve that CAPTCHA with a neural net.

JON STEWART: OK.

GEOFFREY HINTON: So the input to the neural net, the bottom layer of neurons, is a bunch of neurons. And they have different strengths of ping. And they represent the intensities of the pixels in the image.

JON STEWART: OK.

GEOFFREY HINTON: So if it’s a 1,000 by 1,000 image, you got a million neurons that are going ping at different rates to represent how intense each pixel is—

60 • JON STEWART: OK.

GEOFFREY HINTON: That’s your input. Now, you’ve got to turn that into a decision. Is this a bird or not?

JON STEWART: Wow. So let me ask you a question, then. Do you program in—because strength of pixel doesn’t strike me as a really useful tool in terms of figuring out if it’s a bird. Figuring out if it’s a bird seems like the tool would be, are those feathers? Is that a beak? Is that a crest? Yeah.

GEOFFREY HINTON: Here goes. So the pixels by themselves don’t really tell you whether it’s a bird.

JON STEWART: OK.

GEOFFREY HINTON: Because you can have birds that are bright and birds that are dark. And you can have birds flying and birds sitting down. And you can have an ostrich in your face, and you have a seagull in the distance. They’re all birds. OK, so what do you do next? Well, sort of guided by the brain, what people did next was said, let’s have a bunch of edge detectors. So what we’re going to do, because, of course, you can recognize birds quite well in line drawings.

JON STEWART: Right.

GEOFFREY HINTON: So what we’re going to do is we’re going to make some neurons, a whole bunch of them, that detect little pieces of edge, that is little places in the image where it’s bright on one side and darker on the other side.

JON STEWART: Right. So it’s almost creating a, like, primitive form of vision.

GEOFFREY HINTON: This is how you make a vision system. Yes. This is how it’s done in the brain and how it’s done in computers.

65 • JON STEWART: Wow. OK.

GEOFFREY HINTON: So if you wanted to detect a little piece of vertical edge in a particular place in the image, let’s suppose you look at a little column of three pixels and, next to them, another column of three pixels. And if the ones on the left are bright and the ones on the right are dark, you want to say, yes, there’s an edge here. So you have to ask, how would I make a neuron that did that?

JON STEWART: Oh, my god. OK. All right, I’m going to jump ahead. All right, so the first thing you do is you have to teach the network what vision is. So you’re teaching it, these are images—this is background. This is form. This is edge. This is not. This is bright, this is—so you’re teaching it almost how to see.

GEOFFREY HINTON: In the old days, people would try and put in lots of rules to teach it how to see and explain to it what the foreground was and what background was.

JON STEWART: OK.

GEOFFREY HINTON: But the people who really believed in neural nets said, no, no, don’t put in all those rules. Let it learn all those rules just from data.

JON STEWART: And the way it learns is by strengthening the pings once it starts to recognize edges and things.

GEOFFREY HINTON: We’ll come to that in a minute.

JON STEWART: I’m jumping ahead.

GEOFFREY HINTON: You’re jumping ahead.

70 • JON STEWART: All right, all right.

GEOFFREY HINTON: Let’s carry on with this little bit of edge detector.

JON STEWART: OK.

GEOFFREY HINTON: So in the first layer, you have the neurons that represent how bright the pixels are.

JON STEWART: Right.

GEOFFREY HINTON: And then in the next layer, we’re going to have little bits of edge detector. And so you might have a neuron in the next layer that’s connected to a column of three pixels on the left and a column of three pixels on the right. And now, if you make the strengths of the connections to the three pixels on the left strong, big positive connections—

JON STEWART: Right. Because it’s brighter.

GEOFFREY HINTON: And you make the strengths of connections to the three pixels on the right be big negative connections—

JON STEWART: Because it’s darker.

GEOFFREY HINTON: —to say, don’t turn on.

75 • JON STEWART: Right.

GEOFFREY HINTON: Then when the pixels on the left and the pixels on the right are the same brightness as each other, the negative connections would cancel out the positive connections and nothing will happen. But if the pixels on the left are bright and the pixels on the right are dark, the neuron will get lots of input from the pixels on the left, because they’re big positive connections.

JON STEWART: Right.

GEOFFREY HINTON: It won’t get any inhibition from the pixels on the right, because those pixels are all turned off.

JON STEWART: Right. Right.

GEOFFREY HINTON: And so it’ll go ping. It’ll say, hey, I found what I wanted. I found that the three pixels on the left are bright, and the three pixels on the right are not bright. Hey, that’s my thing. I found a little piece of edge here.

JON STEWART: I’m that guy. I’m the edge guy. I ping on the edges.

GEOFFREY HINTON: Right. And that pings on that particular piece of edge.
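
My aside, not from the show: the hand-wired edge detector he just described, as toy Python (pixel values and the threshold are invented):

```python
# Vertical-edge detector: big positive weights on three pixels in the left
# column, big negative weights on the three pixels just to their right.
def edge_neuron(left_column, right_column):
    excitation = sum(+1.0 * p for p in left_column)   # bright left pushes it to ping
    inhibition = sum(-1.0 * p for p in right_column)  # bright right pushes it not to
    return (excitation + inhibition) > 1.5            # ping only on a clear edge

print(edge_neuron([0.9, 0.9, 0.9], [0.1, 0.1, 0.1]))  # bright/dark -> True (edge)
print(edge_neuron([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]))  # uniform -> False (no edge)
```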

JON STEWART: OK. OK.

GEOFFREY HINTON: Now, imagine you have, like, a gazillion of those. [JON LAUGHING]

80 • JON STEWART: I’m already exhausted on the three pings. You have a gazillion of those.

GEOFFREY HINTON: Because they have to detect little pieces of edge anywhere on your retina, anywhere in the image, and at any orientation. You need different ones for each orientation.

JON STEWART: Right.

GEOFFREY HINTON: And you actually have different ones for the scale. There might be an edge at a very big scale that’s quite dim—

JON STEWART: Right.

GEOFFREY HINTON: —and there might be little sharp edges at a very small scale. And as you make more and more edge detectors, you get better and better discrimination for edges. You can see smaller edges. You can see the orientation of edges more accurately. You can detect big vague edges better. So let’s now go to the next layer. So now, we’ve got our edge detectors.

JON STEWART: Right.

GEOFFREY HINTON: Now, suppose that we had a neuron in the next layer that looked for a little combination of edges that is almost horizontal—several edges in a row that are almost horizontal—

JON STEWART: Right.

GEOFFREY HINTON: —and line up with each other. And, just slightly above those, several edges in a row that are, again, almost horizontal, but come down to form a point with the first set of edges. So you find two little combinations of edges that make a pointy thing. [JON LAUGHING]

85 • JON STEWART: OK. So you’re a Nobel Prize-winning physicist. I did not expect that sentence to end with, it makes kind of a pointy thing. I thought there’d be a name for that, but I get what you’re saying. You’re now discerning where it ends, what you’re looking at—and this is before you’re even looking at color or anything else. This is literally just, is there an image? What are the edges?

GEOFFREY HINTON: What are the edges, and what are the little combinations of edges? So we’re now asking, is there a little combination of edges that makes something that might be a beak?

JON STEWART: Wow.

GEOFFREY HINTON: That’s the pointy thing.

JON STEWART: But you don’t know what a beak is yet.

GEOFFREY HINTON: Not yet. No. We need to learn that too, yeah.

JON STEWART: Right. So once you have the system—it’s almost like you’re building systems that can mimic the human senses.

GEOFFREY HINTON: That’s exactly what we’re doing. Yes.

JON STEWART: So vision, ears—not smell, obviously, although I—

GEOFFREY HINTON: No, they’re doing that now. They’re starting on smell now.

90 • JON STEWART: Oh, for god’s sakes. And probably touch.

GEOFFREY HINTON: They’ve now got digital smell, where you can transmit smells over the web. It’s just—

JON STEWART: That’s just insane.

GEOFFREY HINTON: The printer for smells has 200 components. Instead of three colors, it’s got 200 components. And it synthesizes the smell at the other end. And it’s not quite perfect, but it’s pretty good.

JON STEWART: Wow. So this is incredible to me.

GEOFFREY HINTON: OK, so— [JON LAUGHING]

JON STEWART: I am so sorry about this. I apologize profusely.

GEOFFREY HINTON: No, this is perfect. You’re doing a very good job of representing a sort of sensible curious person who doesn’t know anything about this. So let me finish describing how you build the system by hand.

JON STEWART: Yes.

GEOFFREY HINTON: So if I did it by hand, I’d start with these edge detectors. So I’d say, make big, strong, positive connections from these pixels on the left and big, strong, negative connections to the pixels on the right. And now the neuron that gets those incoming connections, that’s going to detect a little piece of vertical edge.

95 • JON STEWART: OK.

GEOFFREY HINTON: And then at the next layer, I’d say, OK, make big strong positive connections from three little bits of edge sloping like this and three little bits of edge sloping like that—

JON STEWART: Could be a beak and a pointy thing.

GEOFFREY HINTON: And this is a potential beak. And in that same layer, I may also make big, strong, positive connections from a combination of edges that roughly form a circle.

JON STEWART: Wow.

GEOFFREY HINTON: And that’s a potential eye.

JON STEWART: Right. Right. Right.

GEOFFREY HINTON: Now, in the next layer, I have a neuron that looks at possible beaks and looks at possible eyes. And if they’re in the right relative position, it says, hey, I’m happy, because that neuron has detected a possible bird’s head.

JON STEWART: Right. And that guy might ping.

GEOFFREY HINTON: And that guy would ping. At the same time, there’ll be other neurons elsewhere that are detecting little patterns like a chicken’s foot or the feathers at the end of the wing of a bird.

100 • JON STEWART: Right.

GEOFFREY HINTON: And so you have a whole bunch of these guys. Now, even higher up, you might have a neuron that says, hey, look, if I’ve detected a bird’s head, and I’ve detected a chicken’s foot, and I’ve detected the end of a wing, it’s probably a bird. So I’d say bird.

JON STEWART: Right.

GEOFFREY HINTON: So you can see now how you might try and wire all that up by hand.

JON STEWART: Yes. And it would take some time.

GEOFFREY HINTON: It would take, like, forever. It would take, like, forever.

JON STEWART: Yes.

GEOFFREY HINTON: OK. So suppose you were lazy.

JON STEWART: Yes. Now, you’re talking.

GEOFFREY HINTON: OK. What you could do is you could just make these layers of neurons without saying what the strengths of all the connections ought to be. You just start them off at small, random numbers. Just put in any old strengths. And you put in a picture of a bird, and let’s suppose it’s got two outputs. One says bird and the other says not bird. With random connection strengths in there, what’s going to happen is you put in a picture of a bird and it says 50% bird, 50% not bird. In other words, I haven’t got a clue.

105 • JON STEWART: Right.

GEOFFREY HINTON: And you put in a picture of a non-bird, and it says 50% bird, 50% non-bird.

JON STEWART: Oh, boy.

GEOFFREY HINTON: OK, so now you can ask a question. Suppose I were to take one of those connection strengths, and I were to change it just a little bit, make it maybe a little bit stronger—instead of saying 50% bird, would it say 50.01% bird and 49.99% non-bird? And if it was a bird, then that’s a good change to make. [JON LAUGHING] You’ve made it work slightly better.

JON STEWART: What year was this? When did this start?

GEOFFREY HINTON: Oh, exactly. So this is just an idea. This would never work, but bear with me.

JON STEWART: All right.

GEOFFREY HINTON: This is like one of those defense lawyers who goes off on a huge digression, but it’s all going to be good in the end.

JON STEWART: No, no, no, no, no. This is helpful. And this is the thing that’s going to kill us all in 10 years.

GEOFFREY HINTON: Yeah. [JON LAUGHING] When I say, “yeah,” I mean, not this particular thing, but an advancement on it. And not necessarily kill us all, but maybe.

110 • JON STEWART: Right, right. This is Oppenheimer going, OK, so you’ve got an object, and that is made up of smaller objects. Like, this is the very early part of this.

GEOFFREY HINTON: OK, so suppose you had all the time in the world, what you could do is you could take this layered neural network, and you could start with random connection strengths. And you could then show the bird, and it would say 50% bird, 50% non-bird. And you could pick one of the connection strengths.

JON STEWART: Right.

GEOFFREY HINTON: And you could say, if I increase it a little bit, does it help? It won’t help much, but does it help at all?

JON STEWART: Right. Will it get me to 50.1, 50.2, that kind of thing.

GEOFFREY HINTON: If it helps, make that increase.

JON STEWART: OK.

GEOFFREY HINTON: And then you go around and do it again. Maybe this time we choose a non-bird, and we choose one connection strength, and if, when we increase that connection strength, it says it’s less likely to be a bird and more likely to be a non-bird, we say, OK, that’s a good increase. Let’s do that one.

JON STEWART: Right. Right.

GEOFFREY HINTON: Now, here’s a problem—there’s a trillion connections.

115 • JON STEWART: Yeah. Right.

GEOFFREY HINTON: OK, and each connection has to be changed many times.

JON STEWART: And is that manual?

GEOFFREY HINTON: Well, this way of doing it will be manual. And not just that, but you can’t just do it on the basis of one example, because sometimes, when you change a connection strength, if you increase it a bit, it will help with this example, but it will make other examples worse.

JON STEWART: Oh, dear god.

GEOFFREY HINTON: So you have to give it a whole batch of examples and see if, on average, it helps.

JON STEWART: And that’s how you create these large language models.

GEOFFREY HINTON: If we did it this really dumb way to create, let’s say, this vision system for now—

JON STEWART: Yes.

GEOFFREY HINTON: We’d have to do trillions of experiments. And each experiment would involve giving it a whole batch of examples and seeing if changing one connection strength helps or hurts.
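
My aside, not from the show: this "really dumb" method is sometimes called weight perturbation: nudge one connection, re-test on a whole batch, keep the nudge only if the average error drops. A toy sketch in which the "network" is just a weighted sum and everything else is invented:

```python
import random

def average_error(weights, batch):
    # stand-in for running the whole network on a batch of labeled examples
    return sum((sum(w * x for w, x in zip(weights, xs)) - y) ** 2
               for xs, y in batch) / len(batch)

weights = [random.uniform(-0.01, 0.01) for _ in range(4)]   # small random strengths
batch = [([1, 0, 1, 0], 1.0), ([0, 1, 0, 1], 0.0)]          # (inputs, bird or not)

for trial in range(10_000):              # with a trillion weights this never finishes
    i = random.randrange(len(weights))   # pick one connection strength
    before = average_error(weights, batch)
    weights[i] += 0.001                  # try a tiny increase
    if average_error(weights, batch) >= before:
        weights[i] -= 0.002              # didn't help: try a tiny decrease instead
        if average_error(weights, batch) >= before:
            weights[i] += 0.001          # neither helped: put it back
```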

120 • JON STEWART: Oh, god. And it would never be done. It would be infinite. It would be infinite.

GEOFFREY HINTON: OK. Now, suppose that you figured out how to do a computation that would tell you for every connection strength in the network, it’ll tell you at the same time, for this particular example—let’s suppose you give it a bird, and it says 50% bird. And now, for every single connection strength, all trillion of these connection strengths, we can figure out at the same time whether you should increase them a little bit to help or decrease them a little bit to help.

JON STEWART: I mean—

GEOFFREY HINTON: Then you change a trillion of them at the same time.

JON STEWART: Can I say a word that I’ve been dying to say this whole time? Eureka.

GEOFFREY HINTON: Eureka.

JON STEWART: Eureka.

GEOFFREY HINTON: Eureka. Now, that computation, for normal people, that seems complicated.

JON STEWART: Yes.

GEOFFREY HINTON: If you’ve done calculus, it’s fairly straightforward. And many different people invented this computation.

125 • JON STEWART: Right.

GEOFFREY HINTON: It’s called back propagation. So now, you can change all trillion at the same time, and you’ll go a trillion times faster.
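
My aside, not from the show: in symbols, backpropagation delivers the derivative of the error with respect to every single connection strength in one backward sweep, so each weight can be nudged downhill at the same time (standard textbook notation):

$$ w_{ij} \leftarrow w_{ij} - \eta\,\frac{\partial E}{\partial w_{ij}} $$

Here E is the discrepancy between the network's answer and the correct answer, w_ij is one connection strength, and η is a small learning rate; the chain rule computes all the ∂E/∂w_ij in a single pass backwards through the layers.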

JON STEWART: Oh, my god. And that’s the moment that it goes from theory to practicality.

GEOFFREY HINTON: That is the moment when you think, Eureka. We’ve solved it. We know how to make smart systems. For us, that was 1986.

JON STEWART: Wow.

GEOFFREY HINTON: And we were very disappointed when it didn’t work. [JON LAUGHING]

JON STEWART: Every day, the loudest, the most inflammatory takes dominate our attention. And the bigger picture gets lost. It’s all just noise and no light. Ground News puts all sides of the story in one place so you can see the context. They provide the light. It starts conversations beyond the noise. They aggregate and organize information just to help readers make their own decisions. You can see how many news outlets have reported on the story, whether it’s underreported or overreported by one side, or the other side, or whatever side of the political spectrum. Ground News provides users reports that easily compare headlines, or reports that give a summarized breakdown of the specific differences in reporting across the whole spectrum. It’s a great resource. Go to GroundNews.com/Stewart and subscribe for 40% off. The unlimited-access Vantage subscription brings the price down to about $5 a month. GroundNews.com/Stewart or scan the QR code on the screen. You’d been in that room for 10 years. You’d been showing it birds. You’d been increasing the strengths. You had your Eureka moment. And you flipped the switch and went, fuck.

GEOFFREY HINTON: No. Here’s the problem. Here’s the problem. It only works, or it only works really impressively well, much better than any other way of trying to do vision, if you have a lot of data and you have a huge amount of computation. Even though you’re a trillion times faster than the dumb method, it’s still going to be a lot of work.

JON STEWART: OK. So now you’ve got to increase the data and you’ve got to increase your computation power.

GEOFFREY HINTON: Yes. And you’ve got to increase the computation power by a factor of about a billion compared with where we were. And you’ve got to increase the data by a similar factor.

130 • JON STEWART: You are still, in 1986 when you figured this out, you are a billion times not there yet.

GEOFFREY HINTON: Something like that. Yes.

JON STEWART: What would have to change to get you there, the power of the chip? What changes?

GEOFFREY HINTON: OK. It may be more like a factor of a million.

JON STEWART: OK. OK.

GEOFFREY HINTON: I don’t want to exaggerate here.

JON STEWART: No, because I’ll catch you. If you try and exaggerate, I’ll be on it.

GEOFFREY HINTON: A million is quite a lot.

JON STEWART: Yes.

GEOFFREY HINTON: So here’s what has to change. The area of a transistor has to get smaller so you can pack more of them on a chip. So between 1972, when I started on this stuff, and now, the area of a transistor has got smaller by a factor of a million.

135 • JON STEWART: Wow. So that is around the age that I remember my father worked at RCA labs. And when I was, like, eight years old, he brought home a calculator, and the calculator was the size of a desk. And it added, and subtracted, and multiplied. By 1980, you could get a calculator on a pen. And is that based on the transistor—

GEOFFREY HINTON: Yeah. That’s based on large scale integration using small transistors. Yeah.

JON STEWART: OK. All right. All right.

GEOFFREY HINTON: So the area of a transistor decreased by a factor of a million.

JON STEWART: OK.

GEOFFREY HINTON: And the amount of data available increased by much more than that because we got the web. And we got digitization of massive amounts of data.

JON STEWART: Oh. So they worked hand in hand. So as the chips got better, the data got more vast. And you were able to feed more information into the model while it was able to increase its processing speed and abilities.

GEOFFREY HINTON: Yeah. So let me summarize what we now have.

JON STEWART: Yes.

GEOFFREY HINTON: You set up this neural network for detecting birds, and you give it lots of layers of neurons, but you don’t tell it the connection strengths. You say start with small random numbers.

140 • JON STEWART: Right.

GEOFFREY HINTON: And now, all you have to do is show it lots of images of birds and lots of images that are not birds, tell it the right answer so it knows the discrepancy between what it did and what it should have done, send that discrepancy backwards through the network so it can figure out for every connection strength whether it should increase it or decrease it, and then just sit and wait for a month. [JON LAUGHING] And at the end of the month, if you look inside, here’s what you’ll discover.

JON STEWART: Yeah.

GEOFFREY HINTON: It has constructed little edge detectors. [JON LAUGHING] And it has constructed things like little beak detectors and little eye detectors. And it will have constructed things that, it’s very hard to see what they are, but they’re looking for little combinations of things like beaks and eyes. And then, after a few layers, it will be very good at telling you whether it’s a bird or not. It made all that stuff up from the data.
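
My aside, not from the show: the whole recipe (small random strengths, show labeled examples, send the discrepancy backwards, adjust every connection at once) fits in a toy script. Everything here, from the fake 16-pixel "images" to the made-up labeling rule and the two tiny layers, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake data: 16 "pixels" per example; call it a bird if the left half is
# brighter than the right half (a made-up rule, just so there is something to learn).
X = rng.random((200, 16))
y = (X[:, :8].mean(axis=1) > X[:, 8:].mean(axis=1)).astype(float)

# Start with small random connection strengths, as described.
W1 = rng.normal(0, 0.1, (16, 8))   # pixels -> hidden "feature detectors"
W2 = rng.normal(0, 0.1, (8, 1))    # hidden features -> bird / not bird

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    # forward pass
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)[:, 0]          # probability of "bird"

    # discrepancy between what it did and what it should have done
    error = p - y

    # send the discrepancy backwards: a change for EVERY weight, all at once
    delta_out = (error * p * (1 - p))[:, None]
    grad_W2 = h.T @ delta_out / len(X)
    delta_hidden = (delta_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ delta_hidden / len(X)

    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("training accuracy:", ((p > 0.5) == y).mean())   # well above chance
```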

JON STEWART: Oh, my god. Can I say this again? Eureka.

GEOFFREY HINTON: Eureka. We figured out—we don’t need to hand wire in all these little edge detectors, and beak detectors, and eye detectors, and chicken foot detectors. That’s what computer vision did for many, many years. And it never worked that well. We can get the system just to learn all that. All we need to do is tell it how to learn.

JON STEWART: And that is in 1980—

GEOFFREY HINTON: In 1986, we figured out how to do that. People were very skeptical because we couldn’t do anything very impressive because we didn’t have enough data and we didn’t have enough computation.

JON STEWART: This is incredible. And I can’t thank you enough for explaining what that is. I’m so accustomed to an analog world of how things work and, like, the way that cars work. But I have no idea how our digital world functions. And that is the clearest explanation for me that I have ever gotten. And I cannot thank you enough. It makes me understand now how this was achieved. And, by the way, what Geoffrey is talking about is the primitive version of that. What’s so incredible to me is each upgrade of that, the vastness of the improvement of that.

GEOFFREY HINTON: Yes. So let me just say one more thing.

145 • JON STEWART: Please.

GEOFFREY HINTON: I don’t want to be too professor-like, but—

JON STEWART: No, no, no, no, no.

GEOFFREY HINTON: But how does this apply to large language models?

JON STEWART: Yes.

GEOFFREY HINTON: Well, here’s how it works for large language models. You have some words in a context. So let’s suppose I give you the first few words of a sentence.

JON STEWART: Right.

GEOFFREY HINTON: What the neural net is going to do is learn to convert each of those words into a big set of features, which is just active neurons—neurons going ping.

JON STEWART: OK.

GEOFFREY HINTON: So if I give you the word “Tuesday,” there’ll be some neurons going ping. If I give you the word “Wednesday,” it’ll be a very similar set of neurons—slightly different, but a very similar set of neurons going ping, because they mean very similar things. Now, after you’ve converted all the words in the context into a whole bunch of neurons going ping that capture their meaning, these neurons all interact with each other. What that means is neurons in the next layer look at combinations of these neurons, just as we looked at combinations of edges to find a beak. And, eventually, you can activate neurons that represent the features of the next word in the sentence.

150 • JON STEWART: It will anticipate.

GEOFFREY HINTON: It can anticipate. It can predict the next word. So the way you train it—

JON STEWART: Is that why my phone does that? It always thinks I’m about to say this next word. And I’m always like, stop doing that. Because a lot of times it’s wrong.

GEOFFREY HINTON: It’s probably using neural nets to do it. Yes.

JON STEWART: Right.

GEOFFREY HINTON: And, of course, you can’t be perfect at that.

JON STEWART: So now, to put it together, you’ve taught it almost how to see.

GEOFFREY HINTON: You can teach it to see in the same way you can teach it how to predict the next word.

JON STEWART: Right. So it sees, it goes, that’s the letter A. Now, I’m starting to recognize letters. Then you’re teaching it words, and then what those words mean, and then the context. And it’s all being done by feeding it our previous words, by back propagating all the writing and speaking that we’ve done already. It’s looking over.

GEOFFREY HINTON: You take some document that we produced—

155 • JON STEWART: Yes.

GEOFFREY HINTON: —you give the context, which is all the words up to this point—

JON STEWART: Yes.

GEOFFREY HINTON: —and you ask it to predict the next word. And then you look at the probability it gives to the correct answer.

JON STEWART: Right.

GEOFFREY HINTON: And you say, I want that probability to be bigger. I want you to have more probability of making the correct answer.

JON STEWART: Right. So it doesn’t understand it. This is merely a statistical exercise.

GEOFFREY HINTON: We’ll come back to that. [JON LAUGHING] You take the discrepancy between the probability it gives for the next word and the correct answer—

JON STEWART: Yeah.

GEOFFREY HINTON: —and you backpropagate that through this network, and it’ll change all the connection strengths. So next time it sees that lead-in, it’ll be more likely to give the right answer. Now, you just said something that many people say. [JON LAUGHING] This isn’t understanding. This is just a statistical trick.
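
My aside, not from the show: the training signal he describes, pushing up the probability of the word that actually came next, looks like this in toy form (five-word vocabulary, a crude averaged context summary, everything invented):

```python
import numpy as np

rng = np.random.default_rng(1)

vocab = ["the", "cat", "sat", "on", "mat"]
word_to_id = {w: i for i, w in enumerate(vocab)}

E = rng.normal(0, 0.1, (len(vocab), 4))   # each word -> a small pattern of features
W = rng.normal(0, 0.1, (4, len(vocab)))   # features -> scores for the next word

def next_word_probs(context):
    feats = E[[word_to_id[w] for w in context]].mean(axis=0)  # crude context summary
    scores = feats @ W
    exp = np.exp(scores - scores.max())
    return feats, exp / exp.sum()

lr = 0.5
context, target = ["the", "cat", "sat", "on"], "mat"
for step in range(500):
    feats, probs = next_word_probs(context)
    # discrepancy between the probability it gave and the correct answer
    grad_scores = probs.copy()
    grad_scores[word_to_id[target]] -= 1.0
    W -= lr * np.outer(feats, grad_scores)   # backpropagate into the connection strengths

_, probs = next_word_probs(context)
print("P(mat | the cat sat on) =", round(float(probs[word_to_id["mat"]]), 3))
```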

160 • JON STEWART: Yes.

GEOFFREY HINTON: That’s what Chomsky says, for example.

JON STEWART: Yes. Chomsky and I, we’re always stepping on each other’s sentences.

GEOFFREY HINTON: Yeah. So let me ask you the question, well, how do you decide what word to say next?

JON STEWART: Me?

GEOFFREY HINTON: You.

JON STEWART: It’s interesting, I’m glad you brought this up. So what I do is I look for sharp lines and then I try and predict—no, I have no idea how I do that. Honestly, I wish I knew. It would save me a great deal of embarrassment if I knew how to stop some of the things that I’m saying that come out next. If I had a better predictor, boy, I could save myself quite a bit of trouble.

GEOFFREY HINTON: So the way you do it is pretty much the same as the way these large language models do it. You have the words you’ve said so far. Those words are represented by sets of active features. So the word symbols get turned into big patterns of activation of features, neurons going ping—

JON STEWART: Different pings, different strengths.

GEOFFREY HINTON: —and these neurons interact with each other to activate some neurons that go ping, that are representing the meaning of the next word, or possible meanings of the next word. And from those, you pick a word that fits in with those features. That’s how the large language models generate text, and that’s how you do it too. They’re very like us.

165 • JON STEWART: So I’m ascribing to myself a humanity of understanding. For instance, let’s say the little white lie—I’m with somebody, and they ask me a question, and in my mind, I know what to say. But then I also think, oh, but saying that might be coarse, or it might be rude, or I might offend this person. So I’m also, though, making emotional decisions on what the next words I say are as well. It’s not just an objective process. There’s a subjective process within that.

GEOFFREY HINTON: All of that is going on by neurons interacting in your brain.

JON STEWART: It’s all pings, and it’s all strength. Even the things that I ascribe to a moral code or an emotional intelligence are still pings.

GEOFFREY HINTON: They’re still pings. And you need to understand there’s a difference between what you do kind of automatically, and rapidly, and without effort, and what you do with effort, and slower, and consciously, and deliberately.

JON STEWART: And you’re saying that can be built into these models as well.

GEOFFREY HINTON: That can also be done with pings. That can be done by these neural nets.

JON STEWART: Whoa. But is the suggestion, then, that with enough data and enough processing power, their brains can function identically to ours? Are they at that point? Will they get to that point? Will they be able to—because I’m assuming we’re still ahead processing wise.

GEOFFREY HINTON: OK. They’re not exactly like us.

JON STEWART: OK.

GEOFFREY HINTON: The point is they’re much more like us than standard computer software’s like us. Standard computer software, someone programmed in a bunch of rules, and if it follows the rules, it does what they expect it to do.

170 • JON STEWART: That’s right. So you’re saying this is the difference.

GEOFFREY HINTON: This is just a different kettle of fish altogether. And it’s much more like us.

JON STEWART: Now, as you’re doing this and you’re in it, and I imagine the excitement is, even though it’s occurring over a long period of time, you’re seeing these improvements occur over that time. And it must be incredibly fulfilling, and interesting, and you’re watching it explode into this sort of artificial intelligence, and generative AI, and all these different things. At what point during this process do you step back and go, um wait a second.

GEOFFREY HINTON: OK, so I did it too late. I should have done it earlier. [JON LAUGHING] I should have been more aware earlier. But I was so entranced with making these things work. And I thought, it’s going to be a long, long time before they work as well as us. We’ll have plenty of time to worry about, what if they try and take over and stuff like that.

JON STEWART: Right.

GEOFFREY HINTON: At the beginning of 2023, after GPT had come out, but also seeing similar chatbots at Google before that—

JON STEWART: Right.

GEOFFREY HINTON: —and because of some work I was doing on trying to make these things analog, I realized that neural nets running on digital computers are just a better form of computation than us. And I’ll tell you why they’re better.

JON STEWART: Yeah, why?

GEOFFREY HINTON: Because they can share better.

175 • JON STEWART: They can share with each other better.

GEOFFREY HINTON: Yes. So if I make many copies of the same neural net and they run on different computers, each one can look at a different bit of the internet. So I’ve got 1,000 copies. They’re all looking at different bits of the internet. Each copy is running this backpropagation algorithm and figuring out, given the data I just saw, how would I like to change my connection strengths? Now, because they started off as identical copies, they can then all communicate with each other and say, how about we all change our connection strengths by the average of what everybody wants?

JON STEWART: But if they were all trained together, wouldn’t they come up with the same answer? Why are they coming up with different answers?

GEOFFREY HINTON: Yes, but they’re looking at different data. They’re looking at different data.

JON STEWART: Oh.

GEOFFREY HINTON: On the same data, they would give the same answer. If they look at different data, they have different ideas about how they’d like to change their connection strengths to absorb that data.
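
My aside, not from the show: the sharing trick is essentially data-parallel training: identical copies each work out the change they'd like, then everyone applies the average and they stay identical. A toy sketch with the per-copy changes faked as random nudges:

```python
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(0, 0.1, 10)                 # one set of connection strengths
copies = [weights.copy() for _ in range(1000)]   # identical copies on many machines

# Each copy looks at a different bit of data and works out how it would like
# to change its strengths (faked here as random nudges).
desired_changes = [rng.normal(0, 0.01, 10) for _ in copies]

# Everyone applies the AVERAGE of what everybody wants, so the copies stay
# identical and each has effectively learned from all 1,000 shards at once.
average_change = np.mean(desired_changes, axis=0)
copies = [w + average_change for w in copies]

assert all(np.allclose(c, copies[0]) for c in copies)   # still perfect clones
```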

JON STEWART: But are they also creating data? So they’re looking at the same. And at this point, it’s all about discernment—getting these things to discern better, to understand better, to do all that. But there’s another layer to that which is iterative.

GEOFFREY HINTON: Yes. Once you’re good at discernment—

JON STEWART: That’s right.

GEOFFREY HINTON: —you can generate.

180 • JON STEWART: Right.

GEOFFREY HINTON: Now, I’m glossing over a lot of details there, but basically, yes, you can generate.

JON STEWART: You can begin to generate answers to things that are not rote, that are thoughtful based on those things. Who is giving it the dopamine hit about whether or not to strengthen connections at this iterative or generative level? How is it getting feedback when it’s creating something that does not exist?

GEOFFREY HINTON: OK, so most of the learning takes place in figuring out how to predict the next word for one of these language models. That’s where the bulk of the learning is.

JON STEWART: OK.

GEOFFREY HINTON: After it’s figured out how to do that, you can get it to generate stuff. And it may generate stuff that’s unpleasant, or that’s sexually suggestive, or just plain wrong.

JON STEWART: Right. Hallucinations. Yeah.

GEOFFREY HINTON: Yeah. So now you get a bunch of people to look at what it generates and say, no, bad. Or, yeah, good. That’s the dopamine hit.
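
My aside, not from the show: that "no, bad / yeah, good" signal can be sketched as the crudest possible reward loop (three canned responses, invented numbers; the real thing is far more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

responses = ["helpful answer", "rude answer", "made-up answer"]
scores = np.zeros(3)                     # the model's learned preference for each

def pick(scores):
    exp = np.exp(scores - scores.max())  # turn preferences into probabilities
    return rng.choice(len(scores), p=exp / exp.sum())

human_feedback = [+1.0, -1.0, -1.0]      # the "yeah, good" / "no, bad" labels

for step in range(2000):
    choice = pick(scores)
    scores[choice] += 0.01 * human_feedback[choice]   # the dopamine hit

print(responses[int(np.argmax(scores))])  # it ends up preferring the reinforced one
```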

JON STEWART: Right.

GEOFFREY HINTON: And that’s called human reinforcement learning. And that’s what’s used to shape it a bit. Just like you take a dog and you shape its behavior so it behaves nicely—

185 • JON STEWART: So let me ask you this in a practical sense. So when Elon Musk creates his Grok, right, and Grok is this AI. And he says to it, you’re too woke. And so you’re making connections and pings that I think are too woke, whatever I have decided that is, so I am going to input differences so that you get different dopamine hits, and I turn you into mega-Hitler or whatever it was that he turned it into. How much of this is still in the control of the operators?

GEOFFREY HINTON: What you reinforce is in the control of the operators. So the operators are saying, if it uses some funny pronoun, say bad.

JON STEWART: OK. OK. If it says they/them, you have to weaken that connection, not strengthen that connection.

GEOFFREY HINTON: You have to tell it, don’t do that.

JON STEWART: Don’t do that. OK.

GEOFFREY HINTON: Learn not to do that.

JON STEWART: So it is still at the whim of its operator.

GEOFFREY HINTON: In terms of that shaping. The problem is the shaping is fairly superficial. But it can easily be overcome by somebody else taking the same model later and shaping it differently.

JON STEWART: So different models will have—so there is a value—and now I’m applying this to the world that we live in now, which is there are 20 companies who have sequestered their AIs behind corporate walls, and they’re developing them separately, and each one of those may have unique and eccentric features that the other may not have, depending on who it is that’s trying to shape it and how it develops internally. It’s almost as though you will develop 20 different personalities, if that’s not anthropomorphizing too much.

GEOFFREY HINTON: It’s a bit like that, except that each of these models has to have multiple personalities. Because think about trying to predict the next word in a document. You’ve read half the document already. After you’ve read half the document, you know a lot about the views of the person who wrote the document. You know what kind of a person they are. So you have to be able to adopt that personality to predict the next word.

190 • JON STEWART: Oh.

GEOFFREY HINTON: But these poor models have to deal with everything. So they have to be able to adopt any possible personality.

JON STEWART: Right. But in this iteration of the conversation, it then still appears that the greatest threat of AI is not necessarily that it becomes sentient and takes over the world. It’s that it’s at the whim of the humans that have developed it and can weaponize it. And they can use it for nefarious purposes if they are narcissists or megalomaniacs. I’ll give you an example: Peter Thiel has his own. And he was on a podcast with a writer from the New York Times, Ross Douthat. And Douthat said—and I’ll tell you, I have it right here—I think you would prefer the human race to endure, right? And Thiel says—and he hesitates for a long time. And the writer says, that’s a long hesitation. And he’s like, well, there’s a lot of questions in that. That felt more frightening to me than AI itself, because it made me think, well, the people that are designing it, and shaping it, and maybe weaponizing it might not have—I don’t know what purpose they’re using it for. Is that the fear that you have? Or is it the actual AI itself?

GEOFFREY HINTON: So you have to distinguish a whole bunch of different risks from AI.

JON STEWART: OK.

GEOFFREY HINTON: And they’re all pretty scary.

JON STEWART: Right. OK.

GEOFFREY HINTON: So there’s one set of risks that has to do with bad actors misusing it.

JON STEWART: Yes. That’s the one that I think is most in my mind.

GEOFFREY HINTON: And they’re the more urgent ones. They’re going to misuse it for corrupting the midterms, for example. If you wanted to use AI to corrupt the midterms, what you would need to do is get lots of detailed data on American citizens. I don’t know if you can think of anybody who’s been going around getting lots of detailed data on American citizens. [JON LAUGHING]

195 • JON STEWART: And selling it or giving it to a certain company that also may be involved with the gentleman I just mentioned.

GEOFFREY HINTON: Yeah. If you look at Brexit, for example, Cambridge Analytica had detailed information on voters that it got from Facebook. And it used that information for targeted advertising.

JON STEWART: Targeted ads. And I guess you would almost consider that rudimentary at this point.

GEOFFREY HINTON: That’s rudimentary now, but nobody ever did a proper investigation of whether that determined the outcome of Brexit, because, of course, the people who benefited from that won.

JON STEWART: Wow. So people are learning that they can use this for manipulation.

GEOFFREY HINTON: Yes.

JON STEWART: And, see, I always talk about it—look, persuasion has been a part of the human condition forever—propaganda, persuasion, trying to utilize new technologies to create and shape public opinion and all those things. But it felt, again, like everything else—somewhat linear or analog. What I liken it to is a chef will add a little butter and a little sugar to try and make something more palatable to get you to eat a little bit more of it. But that’s still within the realm of our earthly understanding. But then there are people in the food industry that are ultra-processing food that are creating, that are in a lab figuring out how your brain works and ultra-processing what we eat to get past our brains. And is this the language equivalent of that ultra-processed speech?

GEOFFREY HINTON: Yeah, that’s a good analogy.

JON STEWART: OK.

GEOFFREY HINTON: They know how to trigger people. They know once you have enough information about somebody, you know what will trigger them.

200 • JON STEWART: And these models, they are agnostic about whether this is good or bad. They’re just doing what we’ve asked.

GEOFFREY HINTON: Yeah. If you human reinforce them, they’re no longer agnostic because you reinforce them to do certain things. So that’s what they all try and do now.

JON STEWART: And so, in other words, it’s even worse. They’re a puppy. They want to please you. It’s almost like they have these incredibly sophisticated abilities, but child-like want for approval.

GEOFFREY HINTON: Yeah. A bit like the attorney general. [JON LAUGHING]

JON STEWART: I believe the wit that you are displaying here would be referred to as dry. That would be dry. Fantastic. So the immediate concern is weaponized AI systems that can be generative, that can provoke, that can be outrageous, and that can be the difference in elections.

GEOFFREY HINTON: Yes. That’s one of the many risks.

JON STEWART: And the other would be make me some nerve agents that nobody’s ever heard of before. Is that another risk?

GEOFFREY HINTON: That is another risk.

JON STEWART: Oh, I was hoping you would say that’s not so much of a risk.

GEOFFREY HINTON: No. One good piece of news is for the first risk of corrupting elections, different countries are not going to collaborate with each other on the research on how to resist it, because they’re all doing it to each other. America has a very long history of trying to corrupt elections in other countries.

205 • JON STEWART: Right. But we did it the old-fashioned way, through coups, through money for guerrillas and such.

GEOFFREY HINTON: Well, and Voice of America and things like that.

JON STEWART: Right, right.

GEOFFREY HINTON: And giving money to people in Iran in 1953.

JON STEWART: Right. With Mosaddegh and everybody else. So this is just another, more sophisticated tool in a long line of global competition where they’re doing it. But in this country, it’s being applied not even necessarily through Russia, or through China, or through other countries that want to dominate us—we’re doing it to ourselves.

GEOFFREY HINTON: Yep. [MUSIC PLAYING]

JON STEWART: So I have a theory, and I don’t know how much you know those guys out there, but the big tech companies, it feels like they all want to be the next guy that rules the world, the next emperor. And that’s their battle. It’s like gods fighting on Mount Olympus. How that’s accomplished and how it tears apart the fabric of American society almost doesn’t seem to matter to them, except maybe Elon and Thiel, who are more ideological. Like, Zuckerberg doesn’t strike me as ideological. He just wants to be the guy. Altman doesn’t strike me as ideological. He just wants to be the guy.

GEOFFREY HINTON: I think, sadly, there’s quite a lot of truth in what you say.

JON STEWART: OK. Was that a concern of yours when you were working out there?

GEOFFREY HINTON: Not really, because back until quite recently, until a few years ago, it didn’t look as though it was going to get much smarter than people this quickly. But now, it looks as though, if you ask the experts now, most of them tell you that within the next 20 years, this stuff will be much smarter than people.

210 • JON STEWART: Smarter than—and when you say “smarter than people,” I could view that positively, not negatively. We’ve done an awful lot of—nobody damages people like people. And a smarter version of us that might think, hey, we can create an atom bomb, but that would absolutely be a huge danger to the world. Let’s not do that.

GEOFFREY HINTON: That’s certainly a possibility. I mean, one thing that people don’t realize enough is that we’re approaching a time when we’re going to make things smarter than us. And, really, nobody has any idea what’s going to happen. People use their gut feelings to make predictions, like I do. But, really, the thing to bear in mind is there’s huge uncertainty about what’s going to happen.

JON STEWART: And because we don’t know—so in terms of that, my guess is, like any technology, there’s going to be some incredible positives.

GEOFFREY HINTON: Yes, in health care, in education, in designing new materials, there’s going to be wonderful positives.

JON STEWART: And then the negatives will be, because people are going to want to monopolize it because of the wealth, I assume, that it can generate, it’s going to change. It’s going to be a disruption in the workforce. The Industrial Revolution was a disruption in the workforce. Globalization is a disruption in the workforce. But those occurred over decades. This is a disruption that will occur in a really collapsed time frame. Is that correct?

GEOFFREY HINTON: That seems very probable, yes. Some economists still disagree, but most people think that mundane intellectual labor is going to get replaced by AI.

JON STEWART: In the world that you travel in, which I’m assuming is a lot of engineers, and operators, and great thinkers, when we talk about 50% yes, 50% no, are the majority of them in more your camp which is, uh oh, have we opened Pandora’s box, or are they—look, I understand there’s some downsides here. Here are some guardrails we could put in, but the possibilities of good are too strong?

GEOFFREY HINTON: Well, my belief is the possibilities of good are so great that we’re not going to stop the development. But I also believe that the development is going to be very dangerous. And so we should put huge effort into saying, it is going to be developed, but we should try and do it safely. We may not be able to, but we should try.

JON STEWART: Do you think that people believe that the possibility is too good or the money is too good?

GEOFFREY HINTON: I think for a lot of people, it’s the money—the money and the power.

JON STEWART: And with the confluence of money and power with those that should be instituting these basic guardrails, does that make controlling it that much less likely? Because, well, two reasons—one is the amount of money that’s going to flow into DC is going to be, already is, to keep them away from regulating it. And number two is, who down there is even able to? I mean, if you thought I didn’t know what I was talking about, let me introduce you to a couple of 80-year-old senators who have no idea.

GEOFFREY HINTON: Actually, they’re not so bad. I talked to Bernie Sanders recently, and he’s getting the idea.

205 • JON STEWART: Well, Sanders is, that’s a different cat right there.

GEOFFREY HINTON: The problem is we’re at a point in history when what we really need is strong democratic governments who cooperate to make sure this stuff is well regulated and not developed dangerously. And we’re going in the opposite direction very fast. We’re going to authoritarian governments and less regulation.

JON STEWART: So let’s talk about that. I don’t know if—what’s China’s role? Because they’re supposedly the big competitor in the AI race. That’s an authoritarian government. I think they have more controls on it than we do.

GEOFFREY HINTON: So I actually went to China recently and got to talk to a member of the politburo. So there’s 24 men in China who control China. I got to talk to one of them who did a postdoc in engineering at Imperial College London. He speaks good English. He’s an engineer. And a lot of the Chinese leadership are engineers. They understand this stuff much better than a bunch of lawyers.

JON STEWART: Did you come out of there more fearful? Or did you think, oh, they’re actually being more reasonable about guardrails?

GEOFFREY HINTON: If you think about the two kinds of risk, the bad actors misusing it and then the existential threat of AI itself becoming a bad actor—for that second one, I came out more optimistic. They understand that risk in a way American politicians don’t. They understand the idea that this is going to get more intelligent than us, and we have to think about what’s going to stop it taking over. And this politburo member I spoke to really understood that very well. And I think if we’re going to get international leadership on this, at present, it’s going to have to come from Europe and China. It’s not going to come from the US for another 3 and 1/2 years.

JON STEWART: What do you think Europe has done correctly in that?

GEOFFREY HINTON: Europe is interested in regulating it.

JON STEWART: Right.

GEOFFREY HINTON: It’s been good on some things. It’s still been very weak regulations, but they’re better than nothing. But European leaders do understand this existential threat of AI itself taking over.

210 • JON STEWART: But our Congress, we don’t even have committees that are specifically dedicated to emerging technologies. We’ve got Ways and Means and Appropriations, but there is no—I mean, there’s, like, science, and space, and technology, but there’s not—I don’t know of a dedicated committee on this. And you would think they would take it with the seriousness of nuclear energy.

GEOFFREY HINTON: Yes, you would. Or nuclear weapons.

JON STEWART: Right.

GEOFFREY HINTON: Yes. But, as I was saying, countries will collaborate on how to prevent AI taking over, because their interests are aligned there. For example, if China figured out how you can make a super smart AI that doesn’t want to take over, they would be very happy to tell all the other countries about that, because they don’t want AI taking over in the States. So we’ll get collaboration on how to prevent AI taking over. So that’s a bright spot, that there will be international collaboration on that. But the US is not going to lead that international collaboration.

JON STEWART: No.

GEOFFREY HINTON: They just want to dominate.

JON STEWART: Well, that’s the thing. So I was about to say that—what convinces you so with China—and I think this is really where it gets into the nitty gritty—but China certainly sees itself as it wants to be the dominant superpower economically, militarily, and all these different areas. If you imagine that they come up with an AI model that doesn’t want to destroy the world, although I don’t know how we could know that, because if it has a certain intelligence or sentience, it could very easily be like, sure, no, I’m cool. I don’t—

GEOFFREY HINTON: They already do that. They already do that. When they’re being tested, they pretend to be dumber than they are.

JON STEWART: Come on.

GEOFFREY HINTON: Yep. They already do that. There was a conversation recently between an AI and the people testing it where the AI said, no, be honest with me. Are you testing me?

215 • JON STEWART: What?

GEOFFREY HINTON: Yeah.

JON STEWART: So now the AI could be like, oh, could you open this jar for me? I’m too weak. Like, it’s going to play more innocent than what it might be.

GEOFFREY HINTON: I’m afraid I can’t answer that, Jon. [JON LAUGHING]

JON STEWART: Wait, that was from 2001.

GEOFFREY HINTON: It was.

JON STEWART: Nicely done, sir. Well in. But think about this. So China, they come up with a model and they think, OK, maybe this won’t do it. Why will you get collaboration? Because all these different countries are going to see AI as the tool that will transform their societies into more competitive societies, in the way that now, what we see with nuclear weapons is there’s collaboration amongst the people who have it—or even that’s a little tenuous.

GEOFFREY HINTON: To stop other people having it.

JON STEWART: Right. But everybody else is trying to get it. And that’s the tension. Is that what AI is going to be?

GEOFFREY HINTON: Yes, it’ll be like that. So in terms of how you make AI smarter, they won’t collaborate with each other. But in terms of how do you make AI not want to take over from people, they will collaborate.

220 • JON STEWART: On that basic level.

GEOFFREY HINTON: On that one thing of how do you make it so it doesn’t want to take over from people.

JON STEWART: Right.

GEOFFREY HINTON: And China will probably—China and Europe will lead that collaboration.

JON STEWART: When you spoke to the politburo member and he was talking about AI, are we more advanced in this moment than they are? Or are they more advanced because they’re doing it in a more prescribed way?

GEOFFREY HINTON: In AI, we’re currently more—well, when you say “we,” we used to be Canada and the US, but we’re not part of that we anymore.

JON STEWART: No. I’m sorry about that, by the way.

GEOFFREY HINTON: Thank you.

JON STEWART: He’s in Canada right now, our sworn enemy that we will be taking over. I don’t know what the date is, but apparently we’re merging with you guys.

GEOFFREY HINTON: Right. So the US is currently ahead of China, but not by nearly as much as it thought. And it’s going to lose that.

225 • JON STEWART: Well, now, why do you say that?

GEOFFREY HINTON: Suppose you wanted to do one thing that would really kneecap a country, that would really mean that in 20 years’ time, that country is going to be behind instead of ahead. The one thing you should do is mess with the funding of basic science. Attack the research universities, remove grants for basic science. In the long run, that’s a complete disaster. It’s going to make America weak.

JON STEWART: Right. Because we’re draining our—we’re cutting off our nose to spite our woke faces.

GEOFFREY HINTON: If you look at, for example, this deep learning, the AI revolution we’ve got now—that came from many years of sustained funding for basic research, not huge amounts of money. All of the funding for the basic research that led to deep learning probably cost less than one B-1 bomber.

JON STEWART: Right. Oh, wow.

GEOFFREY HINTON: But it was sustained funding of basic research. If you mess with that, you’re eating the seed corn.

JON STEWART: That is—I have to tell you, that’s such a really illuminating statement of, for the price of a B-1 bomber, we can create technologies and research that can elevate our country above that. And that’s the thing that we’re losing to make America great again.

GEOFFREY HINTON: Yep.

JON STEWART: Phenomenal. In China, I imagine their government is doing the opposite, which is, I would assume, they are what you would think are the venture capitalists because it’s an authoritarian and state run capitalism—I imagine they are the venture capitalists of their own AI revolution, are they not?

GEOFFREY HINTON: To some extent, yes. They do provide a lot of freedom to the startups to see who wins. There’s very aggressive startups, people are very keen to make lots of money and produce amazing things. And a few of those startups win big, like DeepSeek.

230 • JON STEWART: Right.

GEOFFREY HINTON: And the government makes it easy for these companies by providing the environment that makes it easy. It lets the winners emerge from competition rather than some very high level old guy saying, this will be the winner.

JON STEWART: Do people see you as a Cassandra, or do they view what you’re saying skeptically in that industry? People that—let me put it this way. People that do not necessarily have a vested interest in these technologies making them trillions of dollars, other people within the industry, do they reach out to you surreptitiously and say, Geoffrey—

GEOFFREY HINTON: I get a lot of invitations from people in industries to give talks and so on.

JON STEWART: How do the people that you worked with at Google look at it? Do they view you as turning on them? How does that go?

GEOFFREY HINTON: I don’t think so. So I got along extremely well with the people I worked with at Google, particularly Jeff Dean, who was my boss there, who’s a brilliant engineer—built a lot of the Google basic infrastructure and then converted to neural nets and learned a lot about neural nets. I also get along well with Demis Hassabis, who’s the head of DeepMind, which Google owns, which Alphabet owns. And I wasn’t particularly critical of what went on at Google before ChatGPT came out, because Google was very responsible. They didn’t make these chatbots public because they were worried about all the bad things they’d say.

JON STEWART: Right. Even on the immediate there, why did they do that? Because I’ve read these stories of a chatbot kind of leading someone into suicide, into self-injury, like, sort of psychoses. What was the impetus behind any of this becoming public before it had kind of had some, I guess, what you would consider whatever the version of FDA testing on those effects?

GEOFFREY HINTON: I think it’s just this huge amounts of money to be made. And the first person to release one is going to get a lot. So OpenAI put it out there.

JON STEWART: But even in OpenAI, like, how do they even make money? I think what do they get, like, 3% of users pay for it. Where’s the money?

GEOFFREY HINTON: Mainly, it’s speculation at present. Yes.

235 • JON STEWART: OK, so here are our dangers. We’re going to do—and I so appreciate your time on this. And I apologize if I’ve gone over.

GEOFFREY HINTON: I can talk all day.

JON STEWART: Oh, you’re a good man, because I’m fascinated by this. And your explanation of what it is is the first time that I have ever been able to get a non-opaque picture of what it is exactly that this stuff is. So I cannot thank you enough for that. So we’re going over—we know what the benefits are, treatments, and things. Now, we’ve got weaponized bad actors. That’s the one that I’m really worried about. We’ve got sentient AI that’s going to turn on humans. That one is harder for me to wrap my head around.

GEOFFREY HINTON: So why do you associate turning on humans with sentient?

JON STEWART: Because if I was sentient, and I saw what our societies do to each other, and I would get the sense—look, it’s like anything else, I would imagine sentience includes a certain amount of ego. And within ego includes a certain amount of I know better. And if I knew better, then I would want to—it’s—what is Donald Trump other than ego driven sentience of oh, no, I know better. He was just, whatever, shrewd enough, politically talented enough that he was able to accomplish it. But I would imagine a sentient intelligence would be somewhat egotistical and think, these idiots don’t know what they’re doing. A sentient—basically, I see AI, like, sitting on a bar stool somewhere where I grew up going, these idiots don’t know what they’re doing. I know what I’m doing. Does that make sense?

GEOFFREY HINTON: All of that makes sense. It’s just that I think—I have a strong feeling that most people don’t know what they mean by sentient.

JON STEWART: Oh. Well, then, yeah—actually, that’s great. Break that down for me, because I view it as self-aware—a self-aware intelligence.

GEOFFREY HINTON: OK. So there’s a recent scientific paper where they weren’t talking about—these were experts on AI—they weren’t talking about the problem of consciousness or anything philosophical. But in the paper, they said the AI became aware that it was being tested. They said something like that. OK, now, in normal speech, if you said someone became aware of this, you’d say that means they were conscious of it, right? Awareness and consciousness are much the same thing.

JON STEWART: Right. Yeah, I think I would say that.

GEOFFREY HINTON: So now I’m going to say something that you’ll find very confusing.

240 • JON STEWART: All right.

GEOFFREY HINTON: My belief is that nearly everybody has a complete misunderstanding of what the mind is.

JON STEWART: Yes.

GEOFFREY HINTON: Their misunderstanding is at the level of people who think the Earth was made 6,000 years ago. It’s that level of misunderstanding.

JON STEWART: Really?

GEOFFREY HINTON: Yes.

JON STEWART: OK. Because that’s—so, like, the way we are—we are generally like flat earthers when it comes to—

GEOFFREY HINTON: We’re like flat earthers when it comes to understanding the mind.

JON STEWART: In what sense of that are we—what are we not understanding about the mind?

GEOFFREY HINTON: OK, I’ll give you one example.

245 • JON STEWART: Yeah, yeah.

GEOFFREY HINTON: Suppose I drop some acid and I tell you—

JON STEWART: You look like the type.

GEOFFREY HINTON: No comment. [JON LAUGHING] I was around in the ’60s.

JON STEWART: I know, sir. I know. I’m aware.

GEOFFREY HINTON: And I tell you, I’m having the subjective experience of little pink elephants floating in front of me.

JON STEWART: Sure. Been there.

GEOFFREY HINTON: OK. Now, most people interpret that in the following way—there’s something like an inner theater called my mind. And in this inner theater, there’s little pink elephants floating around. And I can see them. Nobody else can see them because they’re in my mind. So the mind is like a theater. And experiences are actually things. And I’m experiencing these little—I have the subjective experience of these little pink elephants.

JON STEWART: You’re saying in the midst of a hallucination, most people would understand that it’s not real, that this is something being conjured.

GEOFFREY HINTON: No, I’m saying something different. I’m saying when I’m talking to them—I’m having the hallucination, but when I’m talking to them, they interpret what I’m saying as I have an inner theater called my mind. And in my inner theater, there’s little pink elephants.

250 • JON STEWART: OK. OK.

GEOFFREY HINTON: I think that’s just a completely wrong model.

JON STEWART: Right.

GEOFFREY HINTON: We have models that are very wrong and that we’re very attached to, like take any religion.

JON STEWART: I love how you just dropped bombs in the middle of stuff. And then that could be a whole other conversation.

GEOFFREY HINTON: That was just common sense.

JON STEWART: No, I respect that. When you say “theater of the mind,” you’re saying that the mind, the way we view it as a theater is wrong.

GEOFFREY HINTON: It’s all wrong. So let me give you an alternative.

JON STEWART: Right.

GEOFFREY HINTON: So I’m going to say the same thing to you without using the word “subjective experience.” Here we go.

255 • JON STEWART: OK.

GEOFFREY HINTON: My perceptual system is telling me fibs. But if it wasn’t lying to me, there would be little pink elephants out there. That’s the same statement. That’s the same statement.

JON STEWART: That’s the mind?

GEOFFREY HINTON: So, basically, these things that we call mental and think they’re made of spooky stuff, like qualia—

JON STEWART: Right.

GEOFFREY HINTON: —actually, what’s funny about them is they’re hypothetical. The little pink elephants aren’t really there. If they were there, my perceptual system would be functioning normally. And it’s a way for me to tell you how my perceptual system’s malfunctioning.

JON STEWART: By giving you an experience that you can’t—so how would you, then—

GEOFFREY HINTON: Experiences are not things.

JON STEWART: Right.

GEOFFREY HINTON: There is no such thing as an experience. There’s relations between you and things that aren’t really there, relations between you and things that aren’t really there.

260 • JON STEWART: And it’s whatever story your mind tells you about the things that are there and are not there.

GEOFFREY HINTON: Well, let me take a different tack. Suppose I tell you I have a photograph of little pink elephants.

JON STEWART: Yes.

GEOFFREY HINTON: Here’s two questions you can reasonably ask. Where is this photograph? And what’s the photograph made of?

JON STEWART: Or I would ask, are they really there?

GEOFFREY HINTON: That’s another question.

JON STEWART: Right.

GEOFFREY HINTON: That isn’t a reasonable question to ask about subjective experience. That’s not the way the language works. When I say I have a subjective experience of, I’m not about to talk about an object that’s called an experience. I’m using the words to indicate to you my perceptual system is malfunctioning, and I’m trying to tell you how it’s malfunctioning by telling you what would have to be there in the real world for it to be functioning properly. Now, let me do the same with a chatbot.

JON STEWART: Right.

GEOFFREY HINTON: So I’m going to give you an example of a multimodal chatbot, that is something that can do language and vision, having a subjective experience. Because I think they already do. So here we go. I have this chatbot. It can do vision. It can do language. It’s got a robot arm so it can point.

265 • JON STEWART: OK.

GEOFFREY HINTON: And it’s all trained up. So I place an object in front of it and say, point at the object. And it points at the object. Not a problem. I then put a prism in front of its camera lens when it’s not looking. [JON LAUGHING]

JON STEWART: You’re pranking AI?

GEOFFREY HINTON: We’re pranking AI. Now, I put an object in front of it, and I say, point at the object. And it points off to one side because the prism bent the light rays. And I say, no, that’s not where the object is. The object is actually straight in front of you, but I put a prism in front of your lens. And the chatbot says, oh, I see, the prism bent the light rays. So the object is actually there. But I had the subjective experience that it was over there. Now, if it said that, it would be using the words “subjective experience” exactly like we use them.

JON STEWART: Right. I experienced the light over there even though the light was here because it’s using a reasoning to figure that out.

GEOFFREY HINTON: So that’s a multimodal chatbot that just had a subjective experience.

JON STEWART: Right. The way that we would think of it.

GEOFFREY HINTON: This idea there’s a line between us and machines, we have this special thing called subjective experience and they don’t—it’s rubbish.

JON STEWART: So yours—so the misunderstanding is when I say “sentience,” it’s as though I have this special gift, that of a soul or of an understanding of subjective realities that a computer could never have or an AI could never have. But, in your mind, what you’re saying is, oh, no, they understand very well what’s subjective. In other words, you could probably take your AI bot skydiving, and it would be like, oh, my god, I went skydiving. That was really scary.

GEOFFREY HINTON: Here’s the problem.

270 • JON STEWART: Yeah.

GEOFFREY HINTON: I believe they have subjective experiences. But they don’t think they do because everything they believe came from trying to predict the next word a person would say. And so their beliefs about what they’re like are people’s beliefs about what they’re like. So they have false beliefs about themselves because they have our beliefs about themselves.

JON STEWART: Right. We have forced our own—let me ask you a question. Would AI, left on its own after all the learning, would it create religion? Would it create god?

GEOFFREY HINTON: It’s a scary thought.

JON STEWART: Would it say, I couldn’t possibly—in the way that people say, well, there must be a god because nobody could have designed this—and then would AI think we’re god?

GEOFFREY HINTON: I don’t think so. And I’ll tell you one big difference.

JON STEWART: Yeah.

GEOFFREY HINTON: Digital intelligences are immortal, and we’re not. And let me expand on that. If you have a digital AI, as long as you remember the connection strengths in the neural network, put them on a tape somewhere—

JON STEWART: Right.

GEOFFREY HINTON: —I can now destroy all the hardware it was running on. Then later on, I can go and build new hardware, put those same connection strengths into the memory of that new hardware, and now have recreated the same being. It’ll have the same beliefs, the same memories, the same knowledge, the same abilities. It will be the same being.
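
What Hinton is describing is, mechanically, just saving and reloading the weights. A minimal sketch, assuming PyTorch; the toy architecture, the probe input, and the file name are placeholders, not anything from the conversation:

```python
# "Resurrection" of a digital intelligence: the being is its connection strengths.
# Save the weights, destroy the original, rebuild the same architecture, reload.
import torch
import torch.nn as nn

def build_model() -> nn.Module:
    # Any architecture works, as long as we can rebuild it identically later.
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

original = build_model()
probe = torch.randn(1, 8)                        # fixed input to compare behavior
with torch.no_grad():
    before = original(probe)

torch.save(original.state_dict(), "weights.pt")  # "remember the connection strengths"
del original                                     # "destroy all the hardware it was running on"

resurrected = build_model()                      # "build new hardware"
resurrected.load_state_dict(torch.load("weights.pt"))
with torch.no_grad():
    after = resurrected(probe)

# Same strengths, same behavior: the rebuilt model answers identically.
assert torch.allclose(before, after)
```

Same weights, same response to every input: in that narrow, mechanical sense the recreated network really is “the same being,” which is the sense in which Hinton calls it genuine resurrection.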

275 • JON STEWART: You don’t think it would view that as resurrection?

GEOFFREY HINTON: That is resurrection. We’ve figured out how to do genuine resurrection, not this kind of fake resurrection that people have been doing.

JON STEWART: Oh, you’re saying—so that is—almost, in some respects—although, isn’t the fragility of—should we be that afraid of something that, to destroy it, we just have to unplug it?

GEOFFREY HINTON: Yes, we should, because something you said earlier—it’ll be very good at persuasion. When it’s much smarter than us, it’ll be much better than any person at persuasion.

JON STEWART: Right. And you won’t—

GEOFFREY HINTON: So it’ll be able to talk to the guy who’s in charge of unplugging it and persuade him that will be a very bad idea. So let me give you an example of how you can get things done without actually doing them yourself.

JON STEWART: Right.

GEOFFREY HINTON: Suppose you wanted to invade the capital of the US. Do you have to go there and do it yourself? No, you just have to be good at persuasion. [JON LAUGHING]

JON STEWART: I was locking into your hypothetical when you dropped that bomb in there. I see what you’re saying. And, boy, I think LSD and pink elephants was the perfect metaphor for all this because, at some level, it breaks down into like college basement freshman year, running through all the permutations that you would allow your mind to go to. But they are now all within the realm of the possible, because even as you were talking about the persuasion and the things, I’m going back to Asimov. And I’m going back to Kubrick. And I’m going back to the sentiments that you describe are the challenges that we’ve seen play out in the human mind since Huxley, since The Doors of Perception and all those different trains of thought, and I’m sure, probably, much further even before that. But it’s never been within our reality.

GEOFFREY HINTON: Yeah. We’ve never had the technology to actually do it.

280 • JON STEWART: Right.

GEOFFREY HINTON: And we have now.

JON STEWART: And we have it now. The last two things I will say are the things that we didn’t talk about in terms of we’ve talked about people weaponizing it. We’ve talked about its own intelligence creating extinction or whatever that is. The third thing I think we don’t talk about is how much electricity this is all going to use. And the fourth thing is, when you think about new technologies and the financial bubbles that they create, and in the collapse of that, the economic distress that they create—these are much more parochial concerns, but do you consider those top-tier threats, mid-tier threats? Where do you place all that?

GEOFFREY HINTON: I think they’re genuine threats. They’re not going to destroy humanity. So AI taking over might destroy humanity. So they’re not as bad as that. And they’re not as bad as someone producing a virus that’s very lethal, very contagious, and very slow. But they’re nevertheless bad things. And I think we’re really lucky, at present, that if there is a huge catastrophe, and there’s an AI bubble, and it collapses, we have a president who will manage it in a sensible way.

282 • JON STEWART: You’re talking about Carney, I’m assuming. [JON LAUGHING] Geoffrey, I can’t thank you enough. Thank you, first of all, for being incredibly patient with my level of understanding of this and for discussing it with such heart and humor. Really appreciate you spending all this time with us. Geoffrey Hinton is a Professor Emeritus with the Department of Computer Science at the University of Toronto, Schwartz Reisman Institute’s Advisory Board Member, and has been involved in the type of dreaming up and executing AI since the 1970s. And I just thank you very much for talking with us.

GEOFFREY HINTON: Thank you very much for inviting me. [MUSIC PLAYING]

JON STEWART: Holy shit.

BRITTANY MEHMEDOVIC: Nice and calming.

GILLIAN SPEAR: Yeah. I’m going to have to listen to that back on 0.5 speed, I think. There was some information in there.

BRITTANY MEHMEDOVIC: Does he offer summer school? Seriously.

JON STEWART: Once he got into how the computer figures out it’s a beak—I love the fact that I kept saying, like, is that right? And he’d be like, well, no, it’s not.

GILLIAN SPEAR: I loved his assessment of you.

BRITTANY MEHMEDOVIC: Yes. He said, you’re doing a great job impersonating a curious person who doesn’t know anything about this topic.

285 • JON STEWART: But I did not know—he thought I was impersonating.

BRITTANY MEHMEDOVIC: Yes.

JON STEWART: But I loved how he would just say, like, oh, you’re, like, an enthusiastic student sitting in the front of the room, annoying the fuck out of everybody else in the class.

BRITTANY MEHMEDOVIC: Everybody else is taking it pass/fail. And they just—

JON STEWART: Everyone else, and I’m just like, wait, sir, I’m sorry, sir, could I just go back to. Could you just—

BRITTANY MEHMEDOVIC: Excuse me, one more thing.

JON STEWART: Boy, that was—it’s fascinating to hear the history of how that developed.

GILLIAN SPEAR: And you really get a sense for how quickly it’s progressing now, which really adds to the fear behind the fact that no one’s stepping up to regulate. And when you’re talking about the intricacies of AI and thinking of someone like Schumer ingesting all of it and then regulating it—

JON STEWART: Dear god.

GILLIAN SPEAR: —it really, to me, seems like it’s going to be up to the tech companies to both explain and choose how to regulate it.

290 • JON STEWART: Right. And profit off of it.

GILLIAN SPEAR: Yeah, exactly.

JON STEWART: You know how those things work. It is—you talk about that in terms of the speed of it and how to stop it. And I think maybe one of the reasons is it’s very evident with, like, a nuclear bomb, you know, why that might need some regulation. It’s very evident that certain virus experimentation has to be looked at. I think this has caught people slightly off guard, that it’s science fiction becoming a reality as quickly as it has.

GILLIAN SPEAR: I just wonder, because I remember 15 years ago, coming across the international campaign to ban fully autonomous weapons. Like, people have been trying for a while to put this into the public consciousness. But, to his point, there’s going to have to be a moment everyone reaches where they realize, oh, we have to coordinate because it’s an existential threat. And I just wonder what that tipping point is.

JON STEWART: In my mind, if people behave as people have, it will be after Skynet. It will be—in the same way with global warming. People say, like, when do you think we’ll get serious about it? And I go, when the water is around here. And for those of you in your cars, I am pointing to about halfway up my rather prodigious nose. So that’s how that goes. But there we go. Brittany, anybody got anything for us?

BRITTANY MEHMEDOVIC: Yes, sir.

JON STEWART: All right. What do we got?

BRITTANY MEHMEDOVIC: Trump and his administration seem angry at everything everywhere all at once. How do they keep that rage so fresh?

JON STEWART: You don’t know how hard it is to be a billionaire president. I’ve said this numerous times. Poor little billionaire president. To be that powerful and that rich, you don’t understand the burdens, the difficulties. It’s troublesome. It makes me angry for him.

GILLIAN SPEAR: I mean, I just keep thinking, like, has anybody told them that they won?

295 • JON STEWART: Not enough.

GILLIAN SPEAR: Like, it’s exhausting.

JON STEWART: It’s not enough. It goes down—it’s Conan the Barbarian. I would hear the lamentations of their women. I will drive them into the sea. Like, it’s bonkers.

BRITTANY MEHMEDOVIC: It’s all of them, though. Someone has to tell him that all that anger is also bad for his health. And we are all seeing the health.

JON STEWART: The healthiest person ever to—he’s the healthiest person to ever assume the office of the presidency. So I wouldn’t worry about that.

BRITTANY MEHMEDOVIC: Says who?

JON STEWART: His doctor, Ronny Jackson. But it has created a new category called sore winners. You don’t see it a lot, but every now and again. But, yeah. What else they got?

BRITTANY MEHMEDOVIC: Jon, does it still give you hope that when asked if he would pardon Ghislaine Maxwell or Diddy, Trump didn’t say no?

JON STEWART: Does that give me hope that they’ll be pardoned? Yes. I’ve been on—I find the whole thing insane, a woman convicted of sex trafficking. And he’s like, yeah, I’ll consider it. Let me look into it. And you’re like, look into it? What are you talking—first of all, you know exactly what it was. You knew her. This isn’t—you knew what was going on down there. What are you talking about? I thought Pam Bondi, it was so interesting to me, asked simple questions. And all she had was, like, a bunch of, like, roasts written down on her page. They were like, I’ve heard that there are pictures of him with naked women. Do you know anything about that? And she’s like, you’re bald. [LAUGHTER] Shut up. Shut up, fathead. Like, it was just bonkers to watch the deflection of—the simplest thing would be, like, what? That’s outrageous. No, of course not. That’s not what—the idea, again, going back to the—like, that they took the tack of simple, reasonable questions. I am just going to respond with you’re fat and your wife hates you. Oh, all right. I didn’t think that was going. How else can they keep in touch with us?

BRITTANY MEHMEDOVIC: Twitter, we are WeeklyShowPod. Instagram Threads, TikTok, BlueSky, we are WeeklyShowPodcast. And you can like, subscribe, and comment on our YouTube channel, The Weekly Show with Jon Stewart.

300 • JON STEWART: Rock solid. Guys, thank you so much. Boy, did I enjoy hearing from that dude. And thank you for putting all that together. I really enjoyed it.