The next level of AI crap: lab robots
You wouldn’t believe why such a thing happened: Top A.I. Researchers Leave OpenAI, Google and Meta for New Start-Up. Here’s the thing:
Dr. Agarwal is among more than 20 researchers who have left their work at Meta, OpenAI, Google DeepMind and other big A.I. projects in recent weeks to join a new Silicon Valley start-up, Periodic Labs. Many of them have given up tens of millions of dollars — if not hundreds of millions — to make the move.
As the A.I. labs chase amorphous goals like superintelligence and a similar concept called artificial general intelligence, Periodic is focused on building A.I. technology that can accelerate new scientific discoveries in areas like physics and chemistry.
“The main objective of A.I. is not to automate white-collar work,” said Liam Fedus, one of the start-up’s founders. “The main objective is to accelerate science.”
Mr. Fedus was among the small team of OpenAI researchers who created the online chatbot ChatGPT in 2022. He left OpenAI in March to found Periodic Labs with Ekin Dogus Cubuk, who previously worked at Google DeepMind, the tech giant’s primary A.I. lab.
…
“We believe advanced A.I. can move scientific discovery forward faster, and that OpenAI is uniquely positioned to help lead the way,” an OpenAI spokesman, Laurance Fauconnet, said in a statement.
But Mr. Fedus said such companies were not on a path to true scientific discovery. “Silicon Valley is intellectually lazy” when describing the future of large language models, he said. He and Dr. Cubuk are harking back to a time when the tech industry’s leading research operations, including Bell Labs and IBM Research, saw the physical sciences as a vital part of their mission.
…
Mr. Fedus and Dr. Cubuk believe that no matter how many textbooks and academic papers these systems analyze, they cannot master the art of scientific discovery. To reach that, they say, A.I. technologies must also learn from physical experiments in the real world.
A chatbot “can’t just reason for days and come up with an incredible discovery,” Dr. Cubuk said. “Humans can’t do that, either. They run many trial experiments before they find something incredible — if they even do.”
Periodic Labs, which has secured over $300 million in seed funding from the venture capital firm a16z and others, has started its work at a research lab in San Francisco. But it plans to build its own lab in Menlo Park, Calif., where robots — physical robots — will run scientific experiments on an enormous scale.
The company’s researchers will organize and guide these experiments. As they do, A.I. systems will analyze both the experimentation and the results. The hope is that these systems will learn to drive similar experiments on their own.
…
At Periodic Labs, A.I. systems will learn from scientific literature, physical experimentation and repeated efforts to modify and improve these experiments.
For instance, one of the company’s robots might run thousands of experiments in which it combines various powders and other materials in an effort to create a new kind of superconductor, which could be used to build all sorts of new electrical equipment.
Guided by the company’s staff, the robot might choose several promising powders based on existing scientific literature, mix them in a laboratory flask, heat them in a furnace, test the material and repeat the whole process with different powders.
After analyzing enough of this scientific trial and error — pinpointing the patterns that lead to success — an A.I. system could, in theory, learn to automate and accelerate similar experiments.
“It will not make the discovery on the first try, but it will iterate,” Dr. Cubuk said, meaning it will repeat the process over and over again. “After lots of iteration, we hope to get there faster.”
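For what it’s worth, the loop the article describes is garden-variety closed-loop optimization. Here is a minimal toy sketch of that “propose, mix, heat, test, repeat” cycle — everything in it is hypothetical: the powder names, the scoring, the proposal step. The real robot, furnace and measurement rig are precisely the hard parts:

```python
import random

# Hypothetical candidate ingredients; a real lab would draw these from the literature.
POWDERS = ["YBa2Cu3O7 precursor", "MgB2", "LaH10 precursor", "FeSe", "BaTiO3"]

def run_experiment(mix):
    """Stand-in for the mix/heat/test cycle: returns a fake 'critical temperature'.
    In reality this step is a robot, a furnace and a measurement rig."""
    rng = random.Random(hash(tuple(sorted(mix))))
    return rng.uniform(0, 150)  # pretend Tc in kelvin

def propose(history, k=3):
    """Naive proposal step: random two-powder mixes. A serious system would use
    a model trained on the accumulated history to pick promising candidates."""
    return [tuple(random.sample(POWDERS, 2)) for _ in range(k)]

history = []
best = (None, -1.0)
for iteration in range(20):            # "it will iterate"
    for mix in propose(history):
        tc = run_experiment(mix)       # mix, heat, test
        history.append((mix, tc))
        if tc > best[1]:
            best = (mix, tc)
print(f"Best mix after {len(history)} trials: {best[0]} (Tc ≈ {best[1]:.1f} K)")
```

Even this toy makes the skeptics’ point below: iterating is the trivial part; knowing what to propose, and trusting what the rig actually measured, is not.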

I could not believe my eyes!
Among the comments, besides the enthusiastic ones, there were expressions of skepticism like these:
● jeff, singapore · Oct. 1
The field of AI has been making big promises for more than 10 years, none of which have been realized, and yet they still keep making big promises. The media loves these ideas because it gives them something to write about. One wonders why investors have an appetite for these investments, perhaps because they generally succeed at finding someone else to sell their investment to (i.e., the greater fool theory). Furthermore, the latest idea has been pursued with AI and robots many times before, but this time the idea is being released at the peak of the AI bubble. It is hard to be optimistic when the hype is so big. The last 10 years is documented in my book Unicorns, Hype and Bubbles.
● Blud, Detroit · Oct. 1
Ah yes, it will combine various powders and see what happens.
After reading “the scientific literature” which we all know describes exact details of experiments and findings in structured reproducible ways.
More AI snake oil.
● Tim, Over here · Sept. 30
I agree, LLMs do uniformly badly with things like physics problems because LLMs use language to mimic thought: i.e., language is a representation of thought, but thought itself is a separate thing. A question they will have to answer – which AI researchers have grappled with for decades – is “what is the correct representation of cognition in programming terms?” LLMs do not offer a path forward there, and I see no evidence that these LLM experts have new ideas on this critical element.
Meanwhile, as a physicist, I see the challenge of having these robots design and perform experiments, while we are still in a place where robots struggle to serve popcorn at the Tesla Diner. How well will these robots do at noticing some small mistake made in setting up an apparatus, or at recognizing when a particular trial had a different outcome due to something the robot is not equipped to sense? Who will actually build these setups, and on what timescales? The article makes this sound trivial, but it’s maybe harder than the cognition part. (Not to mention… they need a lot of talented experimental physicists, chemists, etc. to guide them, because AI experts have no ability to judge how well their robots are performing at this.)
I would be a lot more encouraged about this whole enterprise if these methods had already successfully mastered a very simple experiment using very well-defined equipment that is orders of magnitude simpler than groundbreaking scientific research: driving a car.
● PBK Harvard, MA · Sept. 30
Hmmm . . . Not sure that I understand why the experimentation itself needs to be performed by the AI system. It would seem more important that the AI system design the experiment and be apprised of the results of the experiments it designs. And, of course, in order for the AI system to design the first experiments, it would need to be primed with foundational knowledge from which it could make inferences that would lead it to intelligently formulating the design of additional experiments intended to expand that foundational knowledge in a fruitful way. But as of now, I don’t think “they” have come up with a way to create an inanimate object capable of making such inferences.
Computers are just not capable of understanding conceptual themes (even if they can brainlessly regurgitate what they’ve been fed) and using such conceptual knowledge to craft new concepts. I remain skeptical. However, while it’s right that man’s reach should exceed his grasp, perhaps the same can be said about inanimate, brainless computers too.
● Barry Long, Australia · Sept. 30
“The main objective is to accelerate science.”
With $300M seed funding from private equity, I suggest that the main objective is to make a profit. And to do that they will need to lock away any advances with patents and copyright. I have no logical objection to that except that it scares me that so much power could reside in the hands of a few.
● Tracy, Ventura, Ca · Sept. 30
Robots and AI are going to perform experiments that they generate on their own? This is going to be some amazing comedy.
● Ben, Toronto · Sept. 30
Sounds crackpot to me.
I’ve spent a long life examining (or rather, cross-examining) published experiments. Judging the meaningfulness of an experiment and choosing what is the next direction to try is hard to do. Not dissing AI (or A1 Sauce either), but quite a fantasy to suppose keen judgment is near at hand.
My favourite example is a truly dreadful study called “AREDS” about vision loss. Yet it (and the more dreadful sequel “AREDS2”) is the basis of recommendations to all patients by ophthalmologists around the world.
It is particularly disconcerting to read a way, way off-the-wall bit of nonsense about mixing some powders, putting them into an oven for some range of times and discovering some new world of superconductivity.
These are promoters.
● Rachel G, CA · Sept. 30
It’s funny to me how every time someone opens up a new tech startup, they are immediately portrayed (or portray themselves) as morally driven or superior, as opposed to the company they worked for 5 minutes ago, where they probably learned a lot of what they know while making tons of money in the process. Maybe these founders will save the world and have no aspirations for profit or status. Time will tell.
● Melomoon, CA · Sept. 30
“Guided by the company’s staff, the robot might choose several promising powders based on existing scientific literature, mix them in a laboratory flask, heat them in a furnace, test the material and repeat the whole process with different powders.”
This approach is destined to fail because it greatly oversimplifies the process of making and purifying these materials. You might need dozens of steps, and the steps and their sequencing might be substantially different for each material. Good luck building a robot or series of robots to do that for more than the simplest of materials.
● Pablo Figueroa, Chile · Sept. 30
The AI described in this article may work better for brute-force-driven science than hypothesis-driven science.
Hypothesis-driven science (the scientific method) may need a real human-like intelligence, probably far away from existing generative AI.
● A. Thought, USA · Sept. 30
Kind of bizarre to see all these clever tech minds and financial resources poured into the complicated process of teaching AI robots to do physical experiments at the same time researchers, who are actively using their creative minds to do physical experiments advancing scientific discoveries, are being fired. Whole labs and fields of research destroyed, people with decades of training and fresh Ph.D.s headed for hard-won jobs. All terminated by the U.S.
Oh wait!! One method allows pooled resources to drive discovery for mutual benefit – the other gives private companies and investors control of the process and ownership of all the profits.
Ahhh, I see where the advantage is now.
● Screed, New York NY · Sept. 30
Sounds amazing! They got the name all wrong though. They’re supposed to call it Skynet.
● Steve, Oak Park · Sept. 30
Speaking as an experimentalist with experience in a few fields, clearly these guys don’t understand what we do.
Most designed, physical experiments that anyone ever hears about (as opposed to screens, exploratory/descriptive work, fieldwork, simulations or other means of discovery) were designed to show other people that a model/theory/insight can be validated by passing a rigorous test. Often, this is just a formality since the answer is already known to the experimenter, just not convincingly.
Yes, indeed, you are expected to know what the answer should be before you observe it. In short, to design an experiment, you first have to identify a concept that might be true or not. This is done much as an LLM does it: based on what you already had in mind and some thinking about how it might fit together. Then, you try to figure out how to test the idea. This usually means finding a practical method that can decisively prove you wrong. LLMs are good at that too. Not sure why LLMs need to be able to test their ideas. To convince other LLMs?
So, yeah, these AI guys might have been better off sticking with their old jobs 😉
But I’m a Luddite, so I’m bound to be skeptical of highly valuable tulips and magic snail oil.
🤖
In other apocalyptic news, OpenAI ropes in Korean giants Samsung and SK Hynix to feed its AI megaproject: “Duo pledge memory for Stargate to the tune of 900k DRAM wafer starts a month.”
Oh, I expect there’s no chance that regular DRAM for regular people will double or triple in price, eh?
Tulip Futures
We all know AI in its current form is tulip mania, part deux, right? How many AI tulip futures do we need? And how many can dance on the head of a pin?
Never mind. Cry havoc and let slip the exploitive profiteers.
Re: Tulip Futures
I think tulip producers weren’t unhappy with the Tulip mania. Go insane and order a $1B personal yacht? Samsung will build it too.
But, of course, we, the people, need to be “green.”
Oh, wait. After a 53% increase in 2024, the price of PC DRAM has risen for six consecutive months, beginning with a 22.22% jump in April; August recorded a record surge of 46.15%.
No worries. I’ll go back to MS-DOS and Windows 3.11. Instead of 16 GB, even 16 MB would be plenty.
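For the record: compounding just the two monthly jumps actually quoted above (the figures for the other four rising months weren’t given, so this is a lower bound) already multiplies the price by nearly 1.8×:

```python
# Only April (+22.22%) and August (+46.15%) were quoted; the other four
# rising months are unknown, so this understates the full six-month runup.
april, august = 1.2222, 1.4615
print(f"combined factor from those two months alone: x{april * august:.2f}")  # ≈ x1.79
```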