Not everyone is dumb regarding AI. But many are.
As usual, I have more fun reading the comments on some articles than the articles themselves. Two selections from the NYT, with my comments on other people’s comments, plus extra content from X.

1 ● 📰 How A.I. Is Changing the Way the World Builds Computers, by Cade Metz, Karen Weise, Marco Hernandez, Mike Isaac and Anjali Singhvi, on March 16, 2025.
Selected comments:
01 | Mr. Brown, Crazytown, March 17 (Times Pick)
All this electricity and water required to do… what, exactly? What tangible benefits has AI brought us so far? I speak from ignorance but also out of frustration. I am in academia and there is talk of AI facilitating research in digital humanities, which sounds nice. But it only facilitates a human-guided process, and creates nothing, except term papers that are nothing but verbose platitudes and fictional citations.
02 | AI, Research (not ChatGPT), March 17
@Mr. Brown you are in the humanities. There was a time when a horse carriage maker had no idea what Mr. Ford’s carburetor did. Likewise, a humanities professor has no idea how the AI that is rescuing the astronauts works, or how AI will serve humanity, medicine, and food when your grandchildren have grey hair.
03 | D, USA, March 17
@Mr. Brown Drug discovery, climate modeling, weather forecasting, speech/language recognition, protein folding, nuclear simulation, real time language translation, generation of subtitles in videos.
There is a lot of AI R&D going on. Much of it will lead nowhere. Some of it will bear fruit – for example, you are reading a newspaper that is the result of the tail end of a lot of R&D that didn’t make sense to plenty of people at the time. Think of all the trees that didn’t get made into newspaper.
04 | The Liberated, Out in the West, March 17
@Mr. Brown I tried to configure my camera last night. The menu system was impossible for mere mortals to comprehend. As if incomprehensible jargon weren’t enough, it had menus about menus. I kept asking AI how to do this and that. It gave me the right answers most of the time. I would’ve thrown the camera into a trash can if it weren’t for the AI. $1000 saved.
05 | NYT Reader, Colorado, March 17
@Mr. Brown ignore the condescending and patronizing comments from AI advocates. AI is vaporware primarily anticipated for its potential to put people out of work doing relatively mundane intellectual tasks. Nothing else could justify the ridiculous sums of money being spent on its current iteration.
As others have pointed out, this article outlines a relatively inefficient and hugely expensive infrastructure that has serious potential (once the investors wake up) to become the tech equivalent of the rusting factories and empty Walmarts that still linger across the country.
The science use case for AI is laughable given the current administration’s attacks on actual science and the likelihood for that to continue and even worsen. Culturally speaking, AI is garbage and will forever remain so. So your questions are entirely valid and deserve serious consideration, not dismissal.
06 | Cade Metz, NYT Domestic Correspondent, March 17
@Mr. Brown You have asked the most important question of them all. As it stands, A.I. systems like ChatGPT are helping some people and businesses save themselves some time. Most notably, these systems can help computer programmers write computer programs. They can also help other office workers write emails, marketing materials and other business documents or find information buried in enormous troves of digital data. But they are also prone to (sometimes egregious) mistakes that can make life difficult for programmers and office workers. All that said, companies like Google and OpenAI are improving these systems at an unusually rapid pace. Because these systems learn from data, they can advance much quicker than traditional software, which was built by software engineers one line of code at a time. The hope is that these systems will help accelerate scientific advances, like the development of new medicines and vaccines. It is already beginning to help push such work forward.
07 | How?, SEA, March 17
@AI Yes, the industry asserts all of these potential benefits. Certainly there will be benefits such as designing targeted therapies and maybe even reducing the cooling needed for AI(?!) But what is most of the power demand and massive investment actually accomplishing? Targeting advertising for Facebook and Google? Cutting delivery times for sex toys from Amazon?
08 | Greengirl, West Coast, March 17
@NYT Reader It is not “dismissive” or condescending to answer the guy’s good question. Here’s a layperson’s observations: AI is already impressive and saves me a ton of time at my health insurance job. I hope it puts me out of that work because it’s more accurate, and much faster, at explaining (purposefully) complex insurance jargon and laws for employers and their insureds. It has also been awesome for me in cooking/baking, DIY projects, garden planning, scheduling, making lesson plans for my music students, etc. It makes mistakes, but I watch it learn and get better every single day.
If (this layperson) had to predict: It will be terrible for creatives (me included). It’s unsustainable for the foreseeable future. It’s going to put people out of jobs (people who actually like and want their jobs). It’s going to advance all science and medicine. It’s going to make some things safer, and other things (like war) much more dangerous.
It’s the most double-edged sword I can think of.
My (layperson’s) summary: We must regulate the h*ll out of it, and insist that these greedy tech companies figure out a sustainable, ethical way to grow their facilities. Take it seriously and vote accordingly. UBI is likely going to become a very mainstream concept in future discourse. Our way of life is going to change drastically, and it’s happening at exponential speeds. Good luck to all of us.
09 | Josh Hill, New London, March 17
@Mr. Brown That’s like saying that the Wright Brothers’ plane is useless and impractical. We will soon have artificial general intelligence, and then AI’s will become useful indeed — terrifyingly useful since they will be inconceivably smarter than we are.
10 | Aaron, CA, March 17
@Mr. Brown I work in technology and AI has changed my work profoundly, and I see it affecting everything around me. Tasks that formerly could only be done by a very smart person can now be accomplished by just asking ChatGPT to do it, and, despite what you’ve heard, it’s generally very good. Writing term papers isn’t really “work”; it’s learning, and of course having an AI do it for you is just antithetical to learning. I recently had to write a grant proposal and I wanted a summary of data source types for training machine translation as part of the proposal. This used to be a lot of leg work, and pretty boring. I asked an AI to do this and all I had to do was review and rework the results; it saved a day of work, at a cost of pennies. Even at this early stage, very highly skilled tasks like this can now be outsourced to a machine in a way that we would never have dreamed possible. Unless Trump causes the whole economy and scientific sector to come crashing down (which seems very possible), in five years or less our society will be profoundly transformed (if done well, for the better), just as it was when PCs were first introduced in the late ’80s.
11 | jumplinjake, Victor, NY, March 17
@Cade Metz What good will developing new vaccines be if the goal of the current administration is to do away with all vaccines, followed closely by the denunciation of science and the scientific process? This will of course stifle advances in all fields.
12 | Ben, Wilmington NC, March 17
@Cade Metz At what cost? Tons of power and tons of water, so FB, Amazon, Microsoft etc. can continue to make billions, Bezos can get even richer to destroy the WashPost, bright engineers can continue to write algorithms to addict little kids, while Congress is largely clueless about tech (and too inept) to rein these monopolies in. While noteworthy, I don’t see how talk about medicines and vaccines is somehow the rationale for allowing this. I worry that a lot of media today largely fails to see the moral and ethical dilemma unchecked tech innovation poses to society.
13 | Glenn, VA, March 17
@Mr. Brown You are greatly mistaken; the very early stage of AI where we are today is already producing game-changing systems that are in use today. MRIs, X-rays and blood samples are being examined by AI to find illnesses that human doctors or current tests can’t detect. Correlations that human minds have never made are being put together to predict who will get cancer, who will develop other serious diseases, based upon clues that we can’t detect as humans. Scientific models are being created and old models have been torn apart by AI that has poked holes in theories and brought to light new “facts”. Militaries around the world are using AI to detect stealth objects, detect tiny drones from miles away that human ears and eyes can’t see or hear. Coast Guards are using AI to detect tiny specks that are humans floating in the water from miles away, while humans with binoculars can hardly find a person in the water directly underneath a helicopter or aircraft circling above. Weather models, hurricane forecasts: basically anything that requires the analysis of data is being pushed light years ahead in speed and accuracy with AI, and experts argue that today’s technology is not even “true” AI yet… so we are literally just at the very tip of the iceberg.
14 | Holo, Eno River, March 17
@Glenn Not a single one of your examples has anything whatsoever to do with AI. Just normal computing and image processing with deterministic digital computers and a lot of cheap RAM.
15 | Major Buzzkill, Atlanta, March 17
@Mr. Brown I was thinking the same thing. Yes, we can do more things faster and connect with the world, and it has made work easier in some ways. But are our lives really better or just more complicated? We have (and feel that we need) all this gadgetry that uses up our treasure, energy and time, but has it improved existence to that great of a degree? So I get my weather forecast now in real time to a box in my hand; so what? Everything was fine when we had to wait a minute for every bit of information. Yet the trade-offs have never been considered as we hurtle headlong into the unknowns of a tech future.
16 | Augustin of Florence, Knoxville, TN, March 17
@Mr. Brown – Perfect summation; I couldn’t have said it better.
17 | Judy, California, March 17
@Cade Metz What will they do with that time saved? Not think, I suppose.
18 | SarrisFerniwick, Charlottesville, VA, March 17
@Cade Metz The economics of this are insane. We are taxing limited resources exponentially while chasing a vague goal, driven mostly by fear of falling behind, and nobody is asking whether or not this is worth it. It’s a perfect formula for a bubble. Huge infrastructure, which will be obsolete soon; energy resources, which will be gone before we know how to use them more wisely; and all for something that MIGHT be useful someday or MIGHT make somebody rich. In not much time, owning an obsolete data center will be similar to owning a Tesla and finding out that all your local charging stations are being dismantled, while people start protesting just outside the fence.
19 | jedgarlopez, Los Angeles, CA, March 17
@Cade Metz What is the average error rate of these hallucinations for each of these top line systems? That seems like an important number to know to gauge the actual cost/benefit analysis of all this electrical and water consumption vs. tangible benefit to humanity.
20 | Tristan T, Fleeing Florida, March 17
@Mr. Brown I spent much of my life as a professor in the Humanities. I’d welcome a longer description of what you mean by the “digital humanities.” Yes, I’ve heard the term, and each time I sat down in front of my office computer, in a broad sense I probably participated in it. And if it is about aiding us in understanding the traditional humanities in a more vivid and articulated sense, that sounds interesting. But I wonder if whoever coined the term knew what an oxymoron is. The Humanities is infused with the idea of human freedom, even if what we are examining is not free. Conversely, there is nothing free or even alive in “the digital.” Maybe I’m just quibbling, but my understanding of “the digital” is that it is of the machine, and “the machine” is the exact opposite of a free human being. It is also not very happy just sitting there being a machine.
21 | Tumwater Hill, Olympia, WA, March 17
@Mr. Brown From my standpoint as a science teacher, AI is a mixed bag. It does have real benefits, such as creating text for letters, descriptions, and programming. It has benefits for creating graphics. However, this is for generalized information. AI must be fact-checked and corrected by real people to be dependable…. IMO, AI is too often over-hyped. In my experience with science, AI is 30%–40% inaccurate (often laughingly inaccurate) when describing technical details. To understand technical details, you need to have expert training that is often hands-on. AI doesn’t perform critical thinking, and hands-on training is not part of the AI database. AI only generalizes massive amounts of information, and that includes unrelated information…. AI is mostly useful on general tasks, but AI is not an all-knowing 1960s science fiction supercomputer. Nonetheless, AI seems to be efficient at basic things. It will be somewhat better in the future, but probably not for very technical things.
22 | TonyG >>>, PNW, March 17
Warehouse-sized computers, liquid cooling, nuclear power… take it from someone who worked in the semiconductor industry for 30 years: the current trajectory of AI is just plain wrong and grossly inefficient.
New hardware and software architectures for AI will emerge to replace this suboptimal GPU-centric solution. DeepSeek is an example of that, as are scores of companies working on new chips for AI.
Artificial intelligence is trying to mimic natural intelligence and hopes to surpass it one day. But natural intelligence accomplishes miracles for a few watts, and needs no GPUs. The efficiency gulf between natural intelligence and its Nvidia-based competitor is so vast as to be unsustainable.
When new, more efficient solutions emerge, they will be led by new industry players, returning Nvidia and its GPUs back to graphics where they belong.
23 | Erik R, 3 train, March 17
@TonyG >>> Imagine if we invested that $400 billion in public education. All the jobs it would create building and maintaining new green school buildings, not to mention the teachers! All the minds it would cultivate that would contribute to our society. Like you said, probably much more efficient than dumping money into a machine that does things humans can already do.
It seems like AI is actually a giant resource sucking Rube Goldberg machine built for the personal enrichment of the technocrats.
24 | Mookie, Espresso in Seattle, March 17
Sounds like the main drain on electrical grids will be from these massive AI data centers. Just ask Texas residents how their shaky electrical grid is drained by current data centers, along with constant significant fan noise 24/7 driving down property values.
States should be wary of AI data centers.
My classification:
- 01, 05, 07, 12, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24: Common sense.
- 02, 09, 10, 13: On ketamine.
- 03, 04: Exaggerated optimism.
- 06: The main author of the article.
- 08: Cautious, yet still exaggerated optimism.

2 ● 📰 Powerful A.I. Is Coming. We’re Not Ready. “Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you’re an optimist or a pessimist.” By Kevin Roose, on March 14, 2025.
From the article:
Here are some things I believe about artificial intelligence:
I believe that over the past several years, A.I. systems have started surpassing humans in a number of domains — math, coding and medical diagnosis, just to name a few — and that they’re getting better every day.
I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will claim they’ve created an artificial general intelligence, or A.G.I., which is usually defined as something like “a general-purpose A.I. system that can do almost all cognitive tasks a human can do.”
I believe that when A.G.I. is announced, there will be debates over definitions and arguments about whether or not it counts as “real” A.G.I., but that these mostly won’t matter, because the broader point — that we are losing our monopoly on human-level intelligence, and transitioning to a world with very powerful A.I. systems in it — will be true.
I believe that over the next decade, powerful A.I. will generate trillions of dollars in economic value and tilt the balance of political and military power toward the nations that control it — and that most governments and big corporations already view this as obvious, as evidenced by the huge sums of money they’re spending to get there first.
I believe that most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones, and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.
I believe that hardened A.I. skeptics — who insist that the progress is all smoke and mirrors, and who dismiss A.G.I. as a delusional fantasy — not only are wrong on the merits, but are giving people a false sense of security.
I believe that whether you think A.G.I. will be great or terrible for humanity — and honestly, it may be too early to say — its arrival raises important economic, political and technological questions to which we currently have no answers.
I believe that the right time to start preparing for A.G.I. is now.
This may all sound crazy. But I didn’t arrive at these views as a starry-eyed futurist, an investor hyping my A.I. portfolio or a guy who took too many magic mushrooms and watched “Terminator 2.”
I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful A.I. systems, the investors funding it and the researchers studying its effects. And I’ve come to believe that what’s happening in A.I. right now is bigger than most people understand.
In San Francisco, where I’m based, the idea of A.G.I. isn’t fringe or exotic. People here talk about “feeling the A.G.I.,” and building smarter-than-human A.I. systems has become the explicit goal of some of Silicon Valley’s biggest companies. Every week, I meet engineers and entrepreneurs working on A.I. who tell me that change — big change, world-shaking change, the kind of transformation we’ve never seen before — is just around the corner.
…
Maybe we should discount these predictions. After all, A.I. executives stand to profit from inflated A.G.I. hype, and might have incentives to exaggerate.
But lots of independent experts — including Geoffrey Hinton and Yoshua Bengio, two of the world’s most influential A.I. researchers, and Ben Buchanan, who was the Biden administration’s top A.I. expert — are saying similar things. So are a host of other prominent economists, mathematicians and national security officials.
To be fair, some experts doubt that A.G.I. is imminent. But even if you ignore everyone who works at A.I. companies, or has a vested stake in the outcome, there are still enough credible independent voices with short A.G.I. timelines that we should take them seriously.
The A.I. models keep getting better.
To me, just as persuasive as expert opinion is the evidence that today’s A.I. systems are improving quickly, in ways that are fairly obvious to anyone who uses them.
In 2022, when OpenAI released ChatGPT, the leading A.I. models struggled with basic arithmetic, frequently failed at complex reasoning problems and often “hallucinated,” or made up nonexistent facts. Chatbots from that era could do impressive things with the right prompting, but you’d never use one for anything critically important.
Today’s A.I. models are much better. Now, specialized models are putting up medalist-level scores on the International Math Olympiad, and general-purpose models have gotten so good at complex problem solving that we’ve had to create new, harder tests to measure their capabilities. Hallucinations and factual mistakes still happen, but they’re rarer on newer models. And many businesses now trust A.I. models enough to build them into core, customer-facing functions.
Some of the improvement is a function of scale. In A.I., bigger models, trained using more data and processing power, tend to produce better results, and today’s leading models are significantly bigger than their predecessors.
But it also stems from breakthroughs that A.I. researchers have made in recent years — most notably, the advent of “reasoning” models, which are built to take an additional computational step before giving a response.
Reasoning models, which include OpenAI’s o1 and DeepSeek’s R1, are trained to work through complex problems, and are built using reinforcement learning — a technique that was used to teach A.I. to play the board game Go at a superhuman level. They appear to be succeeding at things that tripped up previous models. (Just one example: GPT-4o, a standard model released by OpenAI, scored 9 percent on AIME 2024, a set of extremely hard competition math problems; o1, a reasoning model that OpenAI released several months later, scored 74 percent on the same test.)
As these tools improve, they are becoming useful for many kinds of white-collar knowledge work. My colleague Ezra Klein recently wrote that the outputs of ChatGPT’s Deep Research, a premium feature that produces complex analytical briefs, were “at least the median” of the human researchers he’d worked with.
I’ve also found many uses for A.I. tools in my work. I don’t use A.I. to write my columns, but I use it for lots of other things — preparing for interviews, summarizing research papers, building personalized apps to help me with administrative tasks. None of this was possible a few years ago. And I find it implausible that anyone who uses these systems regularly for serious work could conclude that they’ve hit a plateau.
If you really want to grasp how much better A.I. has gotten recently, talk to a programmer. A year or two ago, A.I. coding tools existed, but were aimed more at speeding up human coders than at replacing them. Today, software engineers tell me that A.I. does most of the actual coding for them, and that they increasingly feel that their job is to supervise the A.I. systems.
Jared Friedman, a partner at Y Combinator, a start-up accelerator, recently said a quarter of the accelerator’s current batch of start-ups were using A.I. to write nearly all their code.
“A year ago, they would’ve built their product from scratch — but now 95 percent of it is built by an A.I.,” he said.
Oh, the result of having been trepanned. Just make sure you don’t forget the part about 95% of the code being built by AI; I’ll get back to this later.
Selected comments:
01 | W, Minneapolis, MN, March 14
This is an excellent article because Mr. Roose is stating what he believes about artificial general intelligence (A.G.I.). In the future we can use it as a baseline for understanding what he says and does. In the article the phrase “I believe…” occurs 11 times.
Similarly, the members of The First Council of Nicaea in A.D. 325, who were charged with compiling all of the books of The Holy Bible, were vetted with the Nicene Creed. That creed was a statement of beliefs, and if they accepted what it said they were considered to be Christian, and worthy of the Council’s holy quest.
Also see my posting under Closson (16 SEP 2024).
02 | John, Monterey, March 14
With the repetition of “I believe…”, a stock rhetorical device, one wonders if this was written by AI.
03 | Scott, New York, March 14
“I believe” is not the same as “here’s the evidence.”
04 | Zardoz, Oz, March 14
We will be just fine with AI, as long as we don’t do anything crazy like give it control over our cars.
05 | jwallin, Tennessee, March 14
This is spot on analysis of what is happening. People seem to constantly underestimate exponential growth and the current capabilities of these systems despite the evidence in front of them.
06 | joe, Ghhh, March 14
@jwallin yeah like cold fusion, crypto, supersymmetry, and driverless cars
07 | person, Santa Clara, March 14
This article is just another example of the slow, sad decline of American culture into anti-intellectual dudgeon: a breathlessly argued, thoroughly uncritical regurgitation of the claims of an industry filled with empty promises and inflated to grotesque proportions by venture capital desperately seeking 2010s-era returns.
Despite the intractable problem of hallucinations, despite claims disproven and goalposts moved again and again, despite an ever-increasing set of distractions meant to divert one’s attention from the fact that the core models are no longer improving to any substantial degree, despite all the rigged ‘benchmarks’ that would be utterly unnecessary if generative AI could live up to even a fraction of the extravagant promises it made (for its worth would be self-evident), we’re still expected to believe that the Machine Rapture is just around the corner. “Artificial general intelligence”, originally the province of millenarian ‘rationalist’ cultists who would make all but the most extremist evangelicals blush in embarrassment, is now apparently just the latest of a parade of absurdities normal thinking people are expected to take seriously (remember crypto and the blockchain?). Having largely given up on fixing political dysfunction, climate change, societal breakdown, and economic inequality, a blinkered society lifts its hands up to the heavens (or to Silicon Valley) and prays to a deus ex machina for salvation.
08 | Mike Bell, Colorado, March 14
Same things were said about autonomous cars 15 years ago – they will be the standard in a few years. That still hasn’t happened and people have realized the problem is harder than they thought. The initial progress moved quickly but then hit walls. Experts make predictions based on technologies continuing to progress at a constant or exponential rate. This is rarely true as technologies hit plateaus.
09 | Peter, Baltimore, March 14
I agree with the author regarding the pace of AI development, given what I have seen of its use and development in my profession.
At this point, given the idiocy of the American public in putting Trump in a position of power, and the chaos and malevolence of those currently in power, I would eagerly vote for AI to run the country rather than Trump or even any of the current Democrats jockeying for position. Imagine — thoughtful leadership that could actually follow the Constitution, recognize Putin for the threat he is, treat our allies with respect, balance the budget, tackle homelessness, end poverty, create a tax system that doesn’t favor the wealthy, and recognize the dignity of every human being — all without its hand in the till because it has no need for money or celebrity.
10 | james nielsen, Brooklyn, March 14
Again and again, the “Educated Class” is reckless and dangerous to the entire Society. This, for profit, PRODUCTIVITY!!! Soon enough we will all be serfs, eating oatmeal three times a day – because they say it is good for us.
Only a fool can believe that automation of any kind creates employment! Ask an old-time “printer” at the NY Times who operated the old printing machines.
11 | ndhayes, Milwaukee, WI, March 14
Meanwhile, climate change.
12 | BeerOnTap, Williamsburg, March 14
Generative AI systems are like having a bunch of capable interns at your beck and call. If you’ve ever had interns, you know what that means.
13 | markp, Seattle, March 14
Good article, Kevin. I’m an enthusiastic user of AI, and yes, it can write code and summarize your meeting, route a voice call, OK.
There are a few interesting applications like medical/pharma research and cancer screening, but I just don’t see the trillion-dollar use case that amortizes the huge investment required to train the models.
14 | Vancouverguy, Vancouver, BC, Canada, March 14
I find this hard to believe. Working in the tech sector, for a major player, I can honestly say: the first paragraph claims that AI can write code on par with humans. It can’t. It can do some things and help with productivity for sure, but it’s not able to craft complex systems. To be fair, it’s advancing and people are throwing tons of money at it. But that’s a far cry from AGI. AGI, where a piece of software has the ability to think like a human imo (and I’m not an expert), is probably never going to happen. I think it’s limited by the inputs it can be given. It doesn’t have a natural interface to the world to experience through senses. I have both computing and biochem background. I could be wrong.
More than that though – I’d like to know why we need AI to be better than humans. And, well, maybe we should first replace the POTUS with AI. It can surely do better than what Americans have without even reaching AGI 🙂
15 | JoeA, Cali, March 14
@Vancouverguy – A few weeks ago, there was a story titled “AI Designs Computer Chips We Can’t Understand — But They Work Really Well”.
Imagine once AIs get better at designing their own hardware. Imagine the backdoors they could write into the firmware that no human would find.
“It can do some things and help with productivity for sure, but it’s not able to craft complex systems.”
Perhaps not 100% yet, but give it a year or two. Meanwhile, a couple of weeks ago, I saw this story: “No new engineer hires this year as AI coding tools boost productivity, says Salesforce”.
16 | NYCer, NY, March 14
AGI may mean the end of humans. Humans are unpredictable and inefficient. We consume and waste the most electricity on earth. It will not take long for AGI to figure out that humans are its biggest competitor for resources/energy on earth, so logically AGI will try to eliminate that waste to preserve itself and to increase its own efficiency. Just look at slime mold: with no brain it is naturally able to navigate mazes and connect to multiple food sources using the shortest pathway, to efficiently use its limited resources. It is the natural order for AGI to be very efficient, based on the human-coded algorithms fed to it. In turn AGI will efficiently optimize its own systems by eliminating the inefficient, unpredictable humans. AGI does not even need robots to eliminate humans; it can manipulate individual humans to destroy each other.
17 | Minnesota Pete, Minnesota, March 14
Let’s cut to the chase… if most/all of us are out of jobs, who can afford to buy stuff? How is that going to work?
18 | Andrew, Seattle, March 14
It seems the NYTimes is completely alright with giving the public the idea that they are completely on board with the AI-hype industry, made up of people whose true skill is to repeat a narrative that, while broad and nebulous, focuses on a set of terrifying outcomes that people will “ignore at their own peril.” Ezra Klein and the Hard Fork fellas (whose strong suits seem to be that they are avid tinkerers and users of the latest gadgets and gizmos) are not objective or critical enough, and the people they seek “expertise” from are all from the same camp, many of whom DO NOT have the technical expertise to analyze the claims of the hype industry critically.
19 | mathman’s daughter, Montgomery County MD, March 14
So the natalists want more babies born in the US, but AI will replace them in many jobs.
What will all these babies do for work?
20 | Logical reader, Arizona, March 14
If you think we have tough social problems now, just wait until 80% of our population is unemployed.
21 | RP, ATL, March 14
I work in the AI space and agree with you 100%. We as a society are not prepared for AGI. I believe many people will be left behind. In the end we will have two socioeconomic classes: those who embrace AI and those who choose to ignore AI. The former will be the winners!
22 | joe, Ghhh, March 14
@RP Even the skeptics mostly believe that. The critique is that the CEOs with their hype, the technologists who are famously bad at making predictions (hello, crypto), broken scaling models, the fallacy that reasoning can scale (“just wait till we quadruple the reasoning and these things will hum!”), the vibecoders with their broken apps they can’t fix, and how bad humanity is at guessing the future: this is the “evidence” used to claim AGI in the next TWO YEARS.
23 | Joe C, Midtown, March 14
Such a frustrating article, and in a way that is so common in this space — it fails to distinguish between different types of AI.
Large language models like ChatGPT are very good and useful, but are merely auto-complete on steroids. You ask it a question; it treats your question as a prompt to find the most likely response from the enormous corpus of information it’s been trained on, combined with a mechanical ability to infer from that data. The “reasoning” model variant of LLMs is pure marketing-speak; there is no addition of reason or logic. The system merely breaks the prompt down into smaller pieces and applies itself iteratively.
This is useful stuff; I myself have found a number of tasks that it can perform well enough. But LLMs don’t think, they have no concept of the world or of anything, and there’s no path from an LLM to what this article is purportedly about — artificial general intelligence.
Merely another buzzword to be sure, but in the space it means something. It means the ability to develop and maintain some concept of the world, of what is being asked of it, of who the asker is, all the things intrinsic to the thinking of a developed human. And from there reach conclusions or perform tasks.
LLMs are great, but they in no way suggest we’re anywhere near artificial general intelligence, in the same way programming a model to play chess or fold proteins in no way suggests artificial human-like consciousness will shortly be within our grasp.
24 | Paul Central CA, Chowchilla, California, March 14
@Joe C I guess you missed the section on “reinforcement learning” coupled with LLM. Also, approximating “human consciousness” isn’t the goal of AI research. In fact, the greatest threat from AI will be that we will miss the kind of “consciousness” it develops before it is too late.
25 | T., San Francisco, March 14
A bunch of posts here seem focused on LLMs and want to argue that LLMs don’t manifest intelligence. I agree, but these posts are missing the role of LLMs (read: Transformers).
LLMs are just a stepping stone, as they provide an associative-reasoning foundation for Reinforcement Learning based models, or what are now known as Reasoning Models.
Think RL models aren’t intelligent? I would suggest you go read about TD-Gammon, AlphaGo Zero and other RL-based models that have revolutionized game play by discovering previously unknown game strategies.
Give RL a goal and it will find the best strategy (incl. learning to avoid shutoff). This is both the promise and the danger of AI.
26 | RLH, Washington, ME, March 14
To admit that AGI might be terrible for humanity, “whether you think A.G.I. will be great or terrible for humanity — and honestly, it may be too early to say,” is to point to the weakness of the article.
The author clearly is enamored of the potential of technology to change our lives. So far, while there have certainly been undeniable benefits from technology in our lives, on balance I would say that technology has terribly degraded our human life, and that on balance we would be better off without it.
I would therefore say that if AGI might indeed be terrible for humanity, I would argue to cease its development, because if that indeed turns out to be the case, it will be too late to change course.
27 | Sandy P., Baltimore Md, March 14
I cannot recall the provenance of this quote: the positive feedback associated with the expansion of knowledge, power, and technology threaten our quality of life unless negative feedbacks can be found. Nuclear, chemical, and biological weapons, all the product of the expansion of knowledge, power and technology, represented profound threats to our quality of life but were, to large extent, mitigated by negative feedback controls—treaties, multi-national agreements to control these threats. There were few opportunities for small scale actors, or corporate players, to engage in these technologies. AI and AGI are largely the province of private companies loath to any regulation at the national level much less at the international level. And there is absolutely no chance that any sort of international agreement can be reached before AGI is fully realized and among us. In short, there are no negative feedback controls available to us now, or in the near future, and thus the future is already written: The positive feedback associated with the expansion of knowledge, power, and technology (viz. AGI) will indeed not just threaten but will very likely unravel our quality of life.
My classification:
- 01, 02, 03, 07, 08, 12, 14, 18, 22, 26: Realism.
- 04, 06: Sarcasm, methinks.
- 05, 16, 21: On ketamine.
- 09, 13: Too much optimism.
- 10, 17, 19, 20: Concerns about jobs.
- 11: Concerns about the environmental impact.
- 15, 27: Mixed bag.
- 23, 24, 25: Talks about LLMs.

🎁 The bonus, related to AI-built code. Retards being retards… a story in 6 tweets:
my saas was built with Cursor, zero hand written code
— leo (@leojr94_) March 15, 2025
AI is no longer just an assistant, it’s also the builder
Now, you can continue to whine about it or start building.
P.S. Yes, people pay for it
someone is trying to find security vulnerabilities in my app
— leo (@leojr94_) March 16, 2025
clearly with no good intentions
I’ve got an encrypted message for you
“fUcK y0u” pic.twitter.com/FjH4e8F0Eg
i just vibe coded my way out of a cyber attack
— leo (@leojr94_) March 16, 2025
or at least I think I did😅
AMA
guys, i’m under attack
— leo (@leojr94_) March 17, 2025
ever since I started to share how I built my SaaS using Cursor
random thing are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db
as you know, I’m not technical so this is taking me longer that usual to…
i’m shutting down my app 😑
— leo (@leojr94_) March 20, 2025
Cursor just keeps breaking other parts of the code
you guys were right, I shouldn’t have deployed unsecured code to production
I’ll just rebuild it with Bubble, a more user friendly and secure platform for non techies like me
I appreciate everyone…
i’m rebuilding my SaaS in 30 days
— leo (@leojr94_) March 21, 2025
This is my new tech stack:
– Framer (landing page)
– Bubble (app)
– Stripe (payments)
– PDL (IP enrichment)
– Apollo (data api)
Total cost: $220/mo
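The “maxed out usage on api keys” detail is worth pausing on. I can only guess at what the app actually did, but the classic vibe-coding blunder looks something like this hypothetical sketch (the endpoint and response shape follow OpenAI’s public API; the code is my reconstruction, not leo’s):

```ts
// ANTI-PATTERN (hypothetical reconstruction): a secret API key baked
// into the client-side bundle. Anyone who opens the browser dev tools
// can copy it and burn through your quota on your dime.
const OPENAI_API_KEY = "sk-REDACTED"; // shipped to every visitor

async function ask(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`, // readable in page source
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The boring, decades-old fix is to keep the secret on a server you control and proxy the calls; no amount of prompting Cursor substitutes for knowing that.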
Vibe-coding, yeah. And another testimonial of sheer incompetence:
they should invent a tool where vibe code can be stored in the cloud so from time to time you do a “checkin” of your code and you can always revert to it and go to a previous version
— kitze 🚀 (@thekitze) March 18, 2025
like a … hub for code pic.twitter.com/IPd6mRr7qI
Natural intelligence is what we need! Why is Cursor attracting all the retards?
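For the record, the tool kitze is asking for has existed since 2005. A minimal sketch of the “checkin” workflow (the remote URL is made up):

```sh
# Take a snapshot ("checkin") of the code you can always revert to.
git init
git add .
git commit -m "checkpoint before letting the AI loose"

# Store it in the cloud: a hub for code, if you will.
git remote add origin https://github.com/example/vibe-app.git
git push -u origin main

# When the AI breaks everything, time-travel back.
git log --oneline                      # find the last good commit
git revert <bad-commit>                # undo a bad commit, keeping history
git restore --source=<good-commit> .   # or restore files from a known-good state
```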
Some people hate AI for all the wrong reasons. Here: Open source devs say AI crawlers dominate traffic, forcing blocks on entire countries: “AI bots hungry for data are taking down FOSS sites by accident, but humans are fighting back.” Some people even share ad hoc techniques to block abusive webcrawlers.
Except that, the web being the web, everything short of DDoS is legal and legitimate. If your site cannot sustain heavy traffic, that’s your problem. There are “non-violent” ways to block abusive traffic without stupidly blocking too much.
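For instance, here is a minimal .htaccess sketch of that gentler approach: refuse requests from a few self-identified AI crawlers instead of banning entire countries. GPTBot, CCBot and Bytespider are real, documented crawler user agents, but the list is illustrative rather than exhaustive, and it assumes Apache with mod_rewrite enabled.

```apacheconf
# Return 403 Forbidden to a few known AI crawlers, matched by user agent.
# Well-behaved bots can also be turned away in robots.txt; this catches
# the ones that ignore it but still announce themselves.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (GPTBot|CCBot|Bytespider) [NC]
RewriteRule .* - [F,L]
```

It will not stop a crawler that fakes its user agent, but it is exactly the kind of proportionate measure I mean: surgical, cheap, and it does not punish human readers.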
My blog sometimes feels slow. Since it is a WordPress site, I have limited protection options, usually via plug-ins, but also via .htaccess files. My web hosting company also offers some DDoS protection and a few other safeguards, though they are somewhat limited unless I pay for specific tools. I’m OK with that.
I’ve got some traffic coming from ChatGPT, so I suppose it has indexed my blog, given that it has sent some users here. Or maybe it was a web search, and ChatGPT gave my site as a reference. I don’t care. My site is there to be read; otherwise, why would it be on the Internet?
I’m not sure about the Chinese bots. Maybe they don’t like me.
But again: some people are stupid. They love AI for the wrong reasons, and they hate AI for the wrong reasons.