Curiouser and curiouser…
Why do I feel the need to comment on Dedoimedo’s latest post? Well, because I disagree with a fundamental premise that led to a questionable conclusion. It has to do with hardware, despite his opinion to the contrary.
The curious case of sudden Blue Screens of Death
Yes, that’s the name of Dedoimedo’s post. Read it, because otherwise you won’t understand the spoilers below.
But I need to quote abundantly, even if only from the ending:
… Then, I went online, and I did a little search: “AMD Ryzen BSOD idle”. My oh my. I was flooded with posts and threads discussing the exact same issue as mine. Exact same. People with systems that worked superbly for years suddenly had these frequent crashes. And all of those coincide with July updates. These posts also highlighted a potential workaround: changing the CPU voltage a tiny bit.
And then, it dawned on me. The July update brought in all sorts of side channel attack mitigations, and that probably meant messing up power plans. And if a power plan is ever so slightly off, this could potentially trigger a system crash. Indeed, the quickest and cheapest method to verify this would be to roll back to a system state before the July patching. Luckily, I had the image ready.
…
Having done a lot of data collection in the past three or four months, here’s everything that differs between my Windows 10 before and after the July patches:
- After, I had more application and game crashes, including DWM and Nvidia drivers. Before, none.
- After, my Nvidia card was running a whole 10 degrees C hotter on idle. Yes. It would idle around 46-47, whereas beforehand, it idled around 36-37. And since I’ve restored the old image, we’re back to normal temperatures.
- The system is faster and more responsive with the pre-July state.
- Most importantly, no BSOD with the pre-July state.
…
I don’t see malice. I see negligence. And if anything, the BSOD only made me more determined never to use Windows for serious stuff, now or in the future. Until recently, I had the “excuse” of Windows being stable and robust, and my Linux being all naughty. But now that both these systems are meh, the would-be “conflict” is gone. Linux is now as viable. But more importantly, I’m very happy that I made the smart choice of buying myself a Macbook Pro. These software worries are now almost entertainment.
Now I can present my objections in my typical, convoluted way.
OK, so here’s the thing
Once upon a time, in the 1990s, there were CPUs that needed neither heat sinks nor fans. Then they started needing heat sinks.
But, you see, electronics engineers, as well as their bosses, had this strange idea carried over from the 1970s and 1980s: an IC was not a single transistor; it was smarter. And since it was smarter, it should behave accordingly, in that it should not be able to commit thermal suicide.
That idea came from analog electronics. Indeed, if an audio amplifier’s final stage consisted of push-pull transistors, and those transistors had an inadequate heat sink or improperly applied thermal paste, they could melt and die. Not so with ICs, which had thermal protection.
In the late 70s, you had cheap class B audio amplifier ICs used in radios and cassette players, say, TBA820 (up to 2W) or TBA790 (up to 3.4W, depending on the variant). And they had thermal protection.
Let me quote from a data sheet:
The TBA820 includes built-in protection circuits to guard against short circuits, thermal overload, and overvoltage conditions. These circuits will automatically shut down the amplifier if any of these conditions are detected.
These were not digital ICs! But they were not designed by retards.
So in the early 90s, when I was exclusively using Intel and Cyrix CPUs (I never ever owned an AMD processor!), it went without saying that these CPUs could not die for lack of proper thermal dissipation! When a certain temperature was reached, they would simply halt.
I guess the initial AMD CPUs behaved similarly, until one day when I heard that newer, faster generations would not. Some K7-era Athlon and Duron models were notoriously prone to “melting” (not literally!) if the thermal paste wasn’t properly applied! This is when I lost my trust in AMD, if I ever trusted them.
Fast-forward to today. Modern CPUs are much, much smarter. They have variable frequencies, and not by moving a jumper or changing a setting in the BIOS. They are supposed to be smart enough to switch to a lower frequency to avoid melting or instability. No more “you’re overclocking at your own risk,” but “the CPU will overclock within its power dissipation limits, then it will adjust itself!”
For Intel, the so-called Intel Turbo Boost Technology allows processors to dynamically increase their frequency and power consumption for short periods, up to the short-term power limit (PL2), before settling back to the lower, sustained long-term power limit (PL1). And let’s ignore the fact that some recent 13th- and 14th-gen Core i7 and i9 models have experienced instability and crashes linked to excessive power boosting and voltage requests authorized by some motherboards’ settings (gaming systems, duh).
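Incidentally, you can peek at these two limits on Linux through the kernel’s powercap interface (a minimal sketch, assuming an Intel CPU with the intel_rapl driver loaded; the rapl zone numbering varies per machine, and the wattages below are just examples):

# Package power limits exposed by the intel_rapl powercap driver (microwatts)
cat /sys/class/powercap/intel-rapl:0/constraint_0_name            # long_term  -> PL1
cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw  # e.g. 28000000 = 28 W
cat /sys/class/powercap/intel-rapl:0/constraint_1_name            # short_term -> PL2
cat /sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw  # e.g. 64000000 = 64 W

The point being: these limits live in hardware and firmware; the OS merely reads or requests them.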
In theory, today’s Ryzen CPUs are supposed to have very robust built-in thermal protections, just like Intel.
And yet…
According to Dedoimedo’s conclusions, Windows 10 was able to adopt a power plan that:
- made an AMD Ryzen behave erratically, much like an overclocked CPU of the old days;
- made an Nvidia card run 10 degrees hotter than normal even in the idle state;
- made the system, paradoxically, slower, less responsive, and prone to crashing.
I would not put the blame on Windows! I blame faulty hardware design by one or more of the following: AMD, Nvidia, or the mobo’s manufacturer.
I don’t know, and I don’t give a hoot, about the means by which Windows tells the CPU and the GPU to “perform better” (so much better that they crash): is it purely through software, or does the mobo actually boost some voltages?
The fact is that, in 2026, a sane design should guarantee:
- that the CPU dynamically reduces its frequency, regardless of what the fucking power plan tells it to do, whenever its built-in thermal protection kicks in;
- that the stupid Nvidia shit does the same, especially when it’s sitting idle, regardless of the power plan.
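On Linux, you can actually watch this self-protection at work (a minimal sketch; thermal zone numbering varies per machine, so zone 0 may not be the CPU package on yours):

# Sample the current core frequency and a thermal zone once per second
while true; do
    awk '/cpu MHz/ {print $4 " MHz"; exit}' /proc/cpuinfo
    cat /sys/class/thermal/thermal_zone0/temp   # millidegrees Celsius
    sleep 1
done

Under sustained load, the frequency should drop as the temperature approaches the CPU’s limit, no matter which power plan or governor is active. If it doesn’t, that’s a hardware or firmware problem, not an OS one.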
If instability occurs because the CPU and/or the GPU are on steroids, running like deranged hamsters on a wheel, why blame Microsoft instead of the manufacturers of such hardware?
In the 1990s, no OS could have crashed a CPU (let alone a video card), because the CPU had thermal protections that worked. (The GPUs weren’t really GPUs in the modern meaning of the term, and they never ran hot.)
OK, neither Win10 nor Win11 can be trusted. I can agree with this. Nothing compares to WinXP SP3 and Win7 SP1. But in this particular case, I’d rather take different decisions:
- Never use an AMD CPU again.
- Never use Nvidia if, like me, you’re not a gamer*, nor are you planning to run LLMs locally.
- Never use a PC with a mobo from the manufacturer in question (which remains unnamed).
Under Windows or Linux, the user or the OS can set the power plan to “Performance” or “High Performance” or whatever it’s called. It’s then the CPU’s task to prevent instability (or melting); how could an OS know what’s inside the CPU? Even if temperature readings are indeed available, it’s the CPU’s microcode’s task to enforce hard limits that software shouldn’t be able to overrule.
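For reference, under Windows the entire “power plan” machinery is exposed by powercfg (a minimal sketch; note the confusing built-in aliases, where SCHEME_MIN means minimum power saving, i.e., High performance):

rem List the installed power plans and show the active one
powercfg /list
powercfg /getactivescheme
rem Switch to "High performance" (alias SCHEME_MIN, i.e., minimum power saving)
powercfg /setactive SCHEME_MIN
rem Switch back to "Balanced"
powercfg /setactive SCHEME_BALANCED

And that’s all an OS should be able to do: state a preference. The hard limits belong in the silicon and the firmware.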
The same applies to the graphics card. If setting “Prefer Maximum Performance” in the Nvidia Control Panel under Windows can prevent the GPU from downclocking properly (apparently, it does!), then this is irresponsible design from Nvidia’s engineers or irresponsible decisions from Nvidia’s management!
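Checking whether an Nvidia card actually downclocks at idle takes a single command (a minimal sketch; nvidia-smi ships with the driver on both Windows and Linux):

nvidia-smi --query-gpu=name,pstate,clocks.sm,temperature.gpu,power.draw --format=csv

A healthy idle card should report a low-power state such as P8, low clocks, and temperatures in the 30s; a card stuck in P0 at idle is precisely the misbehavior described above.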
I’m sticking with Intel and its integrated GPUs. I will not condone stupid hardware design.
The bottom line:
The ultimate responsibility for preventing both meltdowns and thermal-related instability crashes should rest with the hardware and the corresponding OEM-provided firmware.
Think of it this way:
If someone tells you to throw yourself in front of the subway, and you do it, whose fault is it?
(Please don’t answer, “The government’s.”)
*NOTE: To me, “gaming” refers either to the “good old MS-DOS games” or to non-GPU-intensive games (chess, GO, puzzles, etc.). “Modern” games are simply sick and in poor taste.
Late update: a note on Linux
On the other hand, I did ditch MX-21_x64 XFCE “AHS” RC3 back in 2021 because, on the system I installed it on, its default CPU policy was “frequency should be between 3.60 GHz and 3.60 GHz” on a 2 GHz CPU for which 3.6 GHz was the “turbo boost” frequency that could only be sustained for short periods of time! I also blamed the Linux kernel (and its documentation) for having replaced the classic governors (performance, powersave, userspace, ondemand, conservative, schedutil) with only two (performance, powersave) on more recent CPUs (those using the intel_pstate driver), and for having given different meanings to these two words (the new performance and powersave don’t behave like the old ones!).
As I added here, the new powersave acts more like the old ondemand, and the new performance doesn’t keep the CPU at its top speed anymore, because the top speed is not the top nominal speed (say, 2-2.5 GHz), also improperly called “base frequency,” but the top burst speed (say, 3.6-5.2 GHz), also called “max turbo frequency.” The minimum idle frequency, often not specified in Intel’s data sheets, is usually 800 MHz. Semantically, this minimum frequency should have been called “base frequency.” (Retards.)
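For reference, here’s where all of this lives on a recent Intel box (a minimal sketch, assuming the intel_pstate driver; the base_frequency file is specific to it, and changing the governor requires root):

# Which frequency-scaling driver and which governors are available
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver               # intel_pstate
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors  # performance powersave
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# The three frequencies discussed above, in kHz
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq             # e.g. 800000
cat /sys/devices/system/cpu/cpu0/cpufreq/base_frequency               # the "base frequency"
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq             # the "max turbo frequency"
# The MX bug above would show up here: scaling_min_freq pinned to scaling_max_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
# Set every core back to the sane default
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo powersave > "$g"
done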
The reason I cursed that MX version was not related to stability, but to the sheer absurdity of the bug: trying to keep a CPU at its maximum turbo frequency! The fan was running at high speed, the 1-liter PC was pumping out a lot of heat, and the CPU would eventually have throttled itself anyway (hopefully!).
I also had issues with Debian 12, as reported here:
The real problem is that on the respective laptop, I ended up with the CPU governor set to “performance” instead of “powersave”; as a result, the fan was making the noise of a Tupolev even though the CPU load was very low, yet the frequency was around 3.3 GHz instead of, say, 800 MHz.
The Linux kernel is not immune to stupidity, and bugs in distros only make things worse.
Had they asked a Luddite, they’d never have designed “turbo frequencies,” and all CPUs would run, at most, at the frequency that can be sustained 100% of the time.
But they wanted “performance, performance, performance!” even more than that sweaty idiot wanted “developers, developers, developers!”
And don’t get me started on Nvidia.

The desktop versions of Windows (Win 10, Win 11) are very unstable and bloated. The most robust Windows edition is the Server line. That’s why I use Windows Server 2022 on all my desktops and notebooks.
Combined with native boot from VHD, this makes it easy to return to a consistent state if an update causes damage. The parent VHD holds a known-good system state. By creating a child VHD for daily use, you can quickly revert to that consistent state when a problem occurs. Simple and effective.
Windows Server or Windows IoT LTSC.
But I didn’t even know that there is such a thing as VHDX with Native Boot!
What the fuck is this? If it’s not immutable, then why boot from a file instead of using a partition? And what’s a child VHD? A clone? And how do you manage to break the OS so frequently that you need to revert to the parent/master copy?
Native boot from VHD… why make it simple when it can be complicated? OK, recovery and shit.
I hate virtualization of any kind.
I have to admit that this is the usual reaction when I talk about this topic (“What the heck is this? What are the advantages?”). But if you think about it for a moment, you’ll see that it’s the best of both worlds: you get the advantages of real hardware (CPU, GPU, USB devices) together with the advantages of virtual machines (dynamic disk allocation, restore points). See the output of the “dir” command in the folder where the VHDs for this notebook are stored; the notebook has two Windows installations, Server 2022 and Win10 LTSC IoT (best viewed in a monospaced font; the dates are in DD/MM/YYYY format):
Directory of D:\VHDs

05/01/2026  23:00    <DIR>          .
07/03/2025  17:32    26.465.288.192 win10-ltsc21-iot-x64-base.vhd
04/01/2026  23:14     4.036.657.152 win10-ltsc21-iot-x64-curr.vhd
16/11/2025  16:43    21.991.731.200 win2k22-base.vhd
06/03/2026  21:43    20.888.645.632 win2k22-curr.vhd
             4 File(s)  73.382.322.176 bytes

The suffix “-base.vhd” indicates a parent VHD. The suffix “-curr.vhd” (current) indicates a child (differencing) VHD. These VHDs contain only the system partition (C: drive). Note that these two Windows installations occupy less than 73 GB of space on the notebook’s physical disk (dynamic VHDs grow as blocks are written).
This scheme isn’t just useful for troubleshooting; it’s also useful for testing. Want to test installing a new program or making a registry change? Create a child of “curr” (say, “test”) and boot the system from that new child. After testing, boot back from “curr” and discard “test”. Much faster than creating an image of the C: drive and then restoring it.
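The mechanics are plain diskpart and bcdedit (a minimal sketch using the file names above; win2k22-test.vhd is the hypothetical “test” child, and {guid} stands for the identifier that bcdedit /copy prints):

rem Create a differencing child of the current VHD (inside diskpart)
diskpart
create vdisk file="D:\VHDs\win2k22-test.vhd" parent="D:\VHDs\win2k22-curr.vhd"
exit

rem Clone the current boot entry and point it at the test child
bcdedit /copy {current} /d "Server 2022 (test)"
bcdedit /set {guid} device vhd=[D:]\VHDs\win2k22-test.vhd
bcdedit /set {guid} osdevice vhd=[D:]\VHDs\win2k22-test.vhd

rem Done testing? Boot back from "curr", then drop the entry and the file
bcdedit /delete {guid}
del D:\VHDs\win2k22-test.vhd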
Regarding performance, dual-booting Server 2022 and Win10 LTSC on the same hardware shows that Server is slightly faster than LTSC.
But how the fuck are you booting them, managing them, installing them? An OS should be installed FROM a booting image TO one or more partitions!
How do I:
- Create partitions.
- Create VHD files.
- Install Windows into such files.
- Configure the boot process to boot from one or another of those VHD files.
- Create children, switch the booting from one to another, etc.
- Share the space between the partition that holds these VHD files and the running system that booted from one such VHD.
And so on. The modern boot process in both Windows and Linux is ALREADY a complete pile of shit compared to the classic MBR. With your setup, you made my head explode!
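For the record, the basic recipe is shorter than the head explosion suggests (a minimal sketch, assuming an existing NTFS data partition D: for the VHD files and a Windows install.wim mounted at E:; details vary per setup):

rem 1) Create, attach, and format an empty dynamic VHD (inside diskpart)
diskpart
create vdisk file="D:\VHDs\win2k22-base.vhd" maximum=60000 type=expandable
select vdisk file="D:\VHDs\win2k22-base.vhd"
attach vdisk
create partition primary
format fs=ntfs quick
assign letter=V
exit

rem 2) Apply the Windows image into the attached VHD
dism /Apply-Image /ImageFile:E:\sources\install.wim /Index:1 /ApplyDir:V:\

rem 3) Add a boot entry; bcdboot detects that V: is a VHD and sets device=vhd
bcdboot V:\Windows

rem 4) Reboot and pick the new entry; create children as shown earlier

And the space-sharing in the last question comes for free: the running system sees D: as an ordinary data partition, and the very VHD it booted from is just a file sitting on it.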
Oh, the good old days! Sancta fucking simplicitas…
There was that saying: “I’m too old for this shit” (especially when the whole thing looks like it deserves all the hatred a Luddite can muster!).