Curiouser and curiouser…
Why do I feel the need to comment on Dedoimedo’s latest post? Well, because I disagree with a fundamental premise that led to a questionable conclusion. It has to do with hardware, despite his opinion to the contrary.
The curious case of sudden Blue Screens of Death
Yes, that’s the name of Dedoimedo’s post. Read it, because otherwise you won’t understand the spoilers below.
But I need to quote abundantly, even if only from the ending:
… Then, I went online, and I did a little search: “AMD Ryzen BSOD idle”. My oh my. I was flooded with posts and threads discussing the exact same issue as mine. Exact same. People with systems that worked superbly for years suddenly had these frequent crashes. And all of those coincide with July updates. These posts also highlighted a potential workaround: changing the CPU voltage a tiny bit.
And then, it dawned on me. The July update brought in all sorts of side channel attack mitigations, and that probably meant messing up power plans. And if a power plan is ever so slightly off, this could potentially trigger a system crash. Indeed, the quickest and cheapest method to verify this would be to roll back to a system state before the July patching. Luckily, I had the image ready.
…
Having done a lot of data collection in the past three or four months, here’s everything that differs between my Windows 10 before and after the July patches:
- After, I had more application and game crashes, including DWM and Nvidia drivers. Before, none.
- After, my Nvidia card was running a whole 10 degrees C hotter on idle. Yes. It would idle around 46-47, whereas beforehand, it idled around 36-37. And since I’ve restored the old image, we’re back to normal temperatures.
- The system is faster and more responsive with the pre-July state.
- Most importantly, no BSOD with the pre-July state.
…
I don’t see malice. I see negligence. And if anything, the BSOD only made me more determined never to use Windows for serious stuff, now or in the future. Until recently, I had the “excuse” of Windows being stable and robust, and my Linux being all naughty. But now that both these systems are meh, the would-be “conflict” is gone. Linux is now as viable. But more importantly, I’m very happy that I made the smart choice of buying myself a Macbook Pro. These software worries are now almost entertainment.
Now I can present my objections in my typical, convoluted way.
OK, so here’s the thing
Once upon a time, in the 1990s, there were CPUs that needed neither heat sinks nor fans. Then they started needing heat sinks.
But, you see, electronics engineers, as well as their bosses, had this strange idea inherited from the 1970s and 1980s: an IC was not a single transistor; it was smarter. And since it was smarter, it should behave accordingly, in that it should not be able to commit thermal suicide.
That strange idea was inherited from analog electronics. Indeed, if you had an audio amplifier’s final stage in the form of push-pull transistors, and these transistors had an inadequate heat sink or the thermal paste wasn’t properly applied, they could melt and die. Not so with ICs, which had thermal protection.
In the late 70s, you had cheap class B audio amplifier ICs used in radios and cassette players, say, TBA820 (up to 2W) or TBA790 (up to 3.4W, depending on the variant). And they had thermal protection.
Let me quote from a data sheet:
The TBA820 includes built-in protection circuits to guard against short circuits, thermal overload, and overvoltage conditions. These circuits will automatically shut down the amplifier if any of these conditions are detected.
These were not digital ICs! But they were not designed by retards.
So in the early 90s, when I was exclusively using Intel and Cyrix CPUs (I never ever owned an AMD processor!), it went without saying that these CPUs could not die for lack of proper thermal dissipation! When a certain temperature was reached, they would simply halt.
I guess the initial AMD CPUs behaved similarly, until one day when I heard that newer, faster generations would not. Early K6/K7 and some Athlon models were notoriously prone to “melting” (not literally!) if the thermal paste wasn’t properly applied! This is when I lost my trust in AMD, if I ever had trusted them.
Fast-forward to today. Modern CPUs are much, much smarter. They have variable frequencies, and not by moving a jumper or changing a setting in the BIOS. They are supposed to be smart enough to switch to a lower frequency to avoid melting or instability. No more “you’re overclocking at your own risk,” but “the CPU will overclock within its power dissipation limits, then it will adjust itself!”
For Intel, the so-called Intel Turbo Boost Technology allows processors to dynamically increase their frequency and power consumption for short periods (the short-term power limit PL2) before settling back to a lower, sustained power level (the long-term power limit PL1). And let’s ignore the fact that some recent 13th and 14th gen. i7 and i9 models have experienced instability and crashes linked to excessive power boosting and voltage requests authorized by some motherboards’ settings (gaming systems, duh).
In theory, today’s Ryzen CPUs are supposed to have very robust built-in thermal protections, just like Intel.
And yet…
According to Dedoimedo’s conclusions, Windows 10 was able to adopt a power plan that:
- made an AMD Ryzen behave erratically, much like an overclocked CPU of the old days;
- made an Nvidia card run 10 degrees hotter than normal even in the idle state;
- made the system, paradoxically, slower, less responsive, and prone to crashing.
I would not put the blame on Windows! I blame faulty hardware design by one or more of the following: AMD, Nvidia, or the mobo’s manufacturer.
I don’t know, and I don’t give a hoot, about the means by which Windows tells the CPU and the GPU to “perform better” (so much better that they crash): is it purely through software, or does the mobo actually boost some voltages?
The fact is that, in 2026, a normal design should imply:
- that the CPU dynamically reduces its frequency regardless of what the fucking power plan tells it to do, as long as the built-in thermal protection self-activates;
- that the stupid Nvidia shit does the same, especially as it’s in an idle state, regardless of the power plan.
If instability occurs because of the CPU and/or the GPU being on steroids and running as deranged hamsters on a wheel, why blame Microsoft instead of blaming the manufacturers of such hardware?
In the 1990s, no OS could have crashed a CPU (and even less a video card), because the CPU had thermal protections that worked. (The GPUs weren’t really GPUs in the modern meaning of the term, and they never ran hot.)
OK, neither Win10 nor Win11 can be trusted. I can agree with this. Nothing compares to WinXP SP3 and Win7 SP1. But in this particular case, I’d rather take different decisions:
- Never use an AMD CPU again.
- Never use Nvidia if, like me, you’re not a gamer*, nor are you planning to run LLMs locally.
- Never use a PC with a mobo coming from the respective unspecified manufacturer.
Under Windows or Linux, the user or the OS can set the power plan to “Performance” or “High Performance” or whatever it’s called. It’s then the CPU’s task to prevent instability (or melting); how could an OS know what’s inside the CPU? Even if temperature readings are indeed available, it’s the CPU’s microcode’s task to enforce hard limits that software shouldn’t be able to overrule.
The same applies to the graphics card. If setting “Prefer Maximum Performance” in the Nvidia Control Panel under Windows can prevent the GPU from downclocking properly (apparently, it does!), then this is irresponsible design from Nvidia’s engineers or irresponsible decisions from Nvidia’s management!
I’m sticking with Intel and its integrated GPUs. I will not condone stupid hardware design.
The bottom line:
The ultimate responsibility for both not melting and thermal-related instability crashes should rest with the hardware and the corresponding OEM-provided firmware.
Think of it this way:
If someone tells you to throw yourself in front of the subway, and you do it, whose fault is it?
(Please don’t answer, “The government’s.”)
*NOTE: To me, “gaming” refers either to the “good old MS-DOS games” or to non-GPU-intensive games (chess, GO, puzzles, etc.). “Modern” games are simply sick and in poor taste.
Late update: a note on Linux
On the other hand, I did ditch MX-21_x64 XFCE “AHS” RC3 back in 2021 because, on the system I installed it on, its default CPU policy was “frequency should be between 3.60 GHz and 3.60 GHz” on a 2 GHz CPU for which 3.6 GHz was the “turbo boost” frequency that can only be sustained for short periods of time! I also blamed the Linux kernel (and its documentation) for having replaced the classic governors (performance, powersave, userspace, ondemand, conservative, schedutil) with only two (performance, powersave) for more recent CPUs, and for having given different meanings to these two words (the new performance and powersave don’t behave like the old ones!).
As I added here, the new powersave acts more like the old ondemand, and the new performance doesn’t keep the CPU at its top speed anymore, because the top speed is not the top nominal speed (say, 2-2.5 GHz), also improperly called “base frequency,” but the top burst speed (say, 3.6-5.2 GHz), also called “max turbo frequency.” The minimum idle frequency, often not specified in Intel’s data sheets, is usually 800 MHz. Semantically, this minimum frequency should have been called “base frequency.” (Retards.)
The reason I cursed that MX version was not related to stability but to the fact that it was an absurd bug to try to keep a CPU at the maximum turbo frequency! The fan was running at high speed, the 1-liter PC was evacuating a lot of heat, and it would have eventually lowered the CPU frequency (hopefully!).
I also had issues with Debian 12, as reported here:
The real problem is that on the respective laptop, I ended with the CPU governor set to “performance” instead of “powersave”; as a result, the fan was making the noise of a Tupolev, even if the CPU load was very low, but the frequency was around 3.3 GHz instead of, say, 800 MHz.
The Linux kernel is not immune to stupidity, and bugs in distros only make things worse.
If they were to ask a Luddite, they’d never have designed “turbo frequencies,” and all CPUs would have run at most at the frequency that can be sustained 100% of the time.
But they wanted “performance, performance, performance!” even more than that sweaty idiot wanted “developers, developers, developers!”
And don’t start me on Nvidia.

The desktop versions of Windows (Win 10, Win 11) are very unstable and bloated. The most robust Windows edition is the Server line. That’s why I use Windows Server 2022 on all my desktops and notebooks.
Combined with native boot from VHD, it’s easy to return to a consistent state if an update causes damage. The parent VHD has a known‑good system state. By creating a child VHD for daily use, you can quickly revert to the consistent state when a problem occurs. Simple and effective.
Windows Server or Windows IoT LTSC.
But I didn’t even know that there is such a thing as VHDX with Native Boot!
What the fuck is this? If it’s not immutable, then why boot from a file instead of using a partition? And what’s a child VHD? A clone? And how do you manage to break the OS so frequently that you need to revert to the parent/master copy?
Native boot from VHD… why simple when it can be complicated? OK, recovery and shit.
I hate virtualization of any kind.
I have to admit that this is the usual reaction when I talk about this topic (“What the heck is this? What are the advantages?”). But if you think about it for a moment, you’ll see that it’s like the best of both worlds: you get the advantages of real hardware (CPU, GPU, USB devices) together with the advantages of virtual machines (dynamic disk allocation, restore points). See the output of the “dir” command in the folder where the VHDs for this notebook are stored, which has two Windows installations: Server 2022 and Win10 LTSC IoT (best viewed in a monospaced-font editor; the dates are in DD/MM/YYYY format):
Directory of D:\VHDs

05/01/2026  23:00    <DIR>          .
07/03/2025  17:32    26.465.288.192 win10-ltsc21-iot-x64-base.vhd
04/01/2026  23:14     4.036.657.152 win10-ltsc21-iot-x64-curr.vhd
16/11/2025  16:43    21.991.731.200 win2k22-base.vhd
06/03/2026  21:43    20.888.645.632 win2k22-curr.vhd
               4 File(s)  73.382.322.176 bytes

The suffix “-base.vhd” indicates a parent VHD. The suffix “-curr.vhd” (current) indicates a child (differencing) VHD. These VHDs contain only the system partition (the C: drive). Note that these two Windows installations occupy less than 73 GB of space on the notebook’s physical disk (dynamic VHDs grow as blocks are written).
This scheme isn’t just useful for troubleshooting. It’s also useful for testing. Want to test installing a new program or make a registry change? Create a child of “curr” (say, “test”) and boot the system from that new child. After testing, boot back from “curr” and discard “test”. Much faster than creating an image of the C: drive and then restoring it.
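For the curious, here is a minimal sketch of what such a “test” child looks like in commands, using nothing but diskpart and bcdedit (the file names are illustrative, and {GUID} stands for whatever identifier bcdedit prints when it copies the entry):

rem Create a differencing (child) VHD on top of the current one (admin prompt)
diskpart
  create vdisk file="D:\VHDs\win2k22-test.vhd" parent="D:\VHDs\win2k22-curr.vhd"
  exit

rem Clone the existing boot entry and point the clone at the new child
bcdedit /copy {current} /d "Server 2022 (test)"
bcdedit /set {GUID} device vhd=[D:]\VHDs\win2k22-test.vhd
bcdedit /set {GUID} osdevice vhd=[D:]\VHDs\win2k22-test.vhd

Booting that entry writes every change into win2k22-test.vhd; deleting the file and the boot entry afterwards leaves “curr” exactly as it was.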
Regarding performance, dual-booting Server 2022 and Win10 LTSC on the same hardware shows that Server is slightly faster than LTSC.
But how the fuck are you booting them, managing them, installing them? An OS should be installed FROM a booting image TO one or more partitions!
How do I:
– Create partitions.
– Create VHD files.
– Install Windows into such files.
– Configure the boot process to boot from one or another of those VHD files.
– Create children, switch the booting from one to another, etc.
– Share the space between the partition that holds these VHD files and the running system that booted from one such VHD.
And so on. The modern boot process in both Windows and Linux is ALREADY a complete pile of shit compared to the classic MBR. With your setup, you made my head explode!
Oh, the good old times! Sancta fucking simplicitas…
There was that saying: “I’m too old for this shit” (especially when the whole thing looks like it deserves all the hatred a Luddite can muster!).
An important observation: when you natively boot from a VHD, the only “virtual” thing is the C: drive. Everything else on the computer is identical to a traditional boot: the CPU, RAM, GPU, and other hardware devices. The only difference is that the C: drive is mapped to a file (VHD) on the computer’s physical disk. In the case of a traditional boot, the C: drive is mapped to a partition on the physical disk.
VHD manipulation is done entirely with native Windows tools (diskpart, Disk Management, dism, imagex). But there’s a free tool called BootICE that’s a graphical front-end for all these actions (creating VHDs, creating child VHDs, merging child into parent, creating/editing/deleting boot entries, etc.), making everything much easier.
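For instance, merging a child back into its parent is roughly a two-liner in diskpart (the file name below is made up for illustration):

rem Fold the child's accumulated changes into its immediate parent
diskpart
  select vdisk file="D:\VHDs\win2k22-curr.vhd"
  merge vdisk depth=1
  exit

To throw the child’s changes away instead, you simply delete the child file and point the boot entry back at the parent (or at a freshly created child).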
To simplify Windows installations on my computers, I created VHDs with sysprepped versions of Windows. One VHD for Win10 LTSC IoT, another VHD for Server 2019, another for Server 2022, and so on. When I want to install a specific Windows version on a computer, I simply copy the respective VHD of the desired version to a folder on the computer and create a boot entry for that VHD.
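And in case anyone wonders how such a “master” VHD gets built in the first place, here is a rough sketch of the workflow (sizes, drive letters and the install.wim index are illustrative; the final sysprep step runs inside that Windows after booting and configuring it once):

rem 1) Create, attach and format an empty dynamically expanding VHD (60 GB ceiling; admin prompt)
diskpart
  create vdisk file="D:\VHDs\win10-ltsc-base.vhd" maximum=61440 type=expandable
  select vdisk file="D:\VHDs\win10-ltsc-base.vhd"
  attach vdisk
  create partition primary
  format fs=ntfs quick label="Win10-LTSC"
  assign letter=V
  exit

rem 2) Apply the Windows image from the installation media (here mounted as E:) into the VHD
dism /Apply-Image /ImageFile:E:\sources\install.wim /Index:1 /ApplyDir:V:\

rem 3) Create a boot entry for it, boot it, configure it, then seal it for reuse
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown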
Let’s imagine the following scenario. You bought a new notebook; it came pre-installed with Windows 11 Home Edition, with the entire disk partitioned into a single NTFS partition, the C: drive, occupying the entire physical disk. You want to test Win10 LTSC with native boot from VHD. What would the steps be?
1) Copy the sysprepped VHD to a folder, say “C:\VHDs\win10-ltsc-sysprepped.vhd”;
2) In diskpart (or Disk Management), attach this VHD. Windows will assign a drive letter to the existing partition on the VHD, let’s say it’s “F:”;
3) In an administrative prompt, run the command “bcdboot F:\Windows”, which creates a new boot entry for the Windows system root indicated in the path passed as a parameter (in this case, it’s the Windows folder on the Win10 LTSC VHD);
4) Upon restarting the computer, a boot menu will appear to choose which Windows to load (the factory Windows 11 or the Win10 LTSC from the VHD).
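Spelled out as commands, steps 2) and 3) amount to something like this (drive letters depend on what Windows assigns when the VHD is attached):

diskpart
  select vdisk file="C:\VHDs\win10-ltsc-sysprepped.vhd"
  attach vdisk
  exit

rem Assuming the VHD's partition came up as F:
bcdboot F:\Windows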
It’s as simple as that. Of course, after completing the OOBE installation of Win10, you can create a child of the VHD and change the boot entry (using BootICE) to point to the child VHD, preserving the current state of the parent VHD.
Also note that when booting from the VHD, the physical disk partition (mapped as drive C: in Windows 11) will be mapped as drive D: in Win10 LTSC (drive C: is always mapped to the VHD). This is the same behavior as traditional dual-booting, where each Windows installation is set to a different partition.
If you’re interested in doing a test, I can send you a copy of a sysprepped VHD. Just ask.
1. I can’t even find an OFFICIAL SITE for BootICE. Only fuck-ups. Pathetic.
2. Writing to a filesystem inside a file should be slower than writing to a filesystem that sits directly on a partition.
3. Sysprepped versions of Windows. One more task to do, one more increase in complexity. Where do you get that VHD from, or how do you prep it?
4. “Copy the sysprepped VHD to a folder.” No can do, because I don’t purchase systems that already have Windows on them. I DON’T HAVE WINDOWS ON A NEW LAPTOP OR MINI-PC.
5. “In an administrative prompt.” Again, no fucking Windows, no prompt.
6. In the end, this is not “native VHD booting,” but “native VHD booting if you also have another Windows system on the machine,” which is a bit of cheating.
I’d rather undergo another colonoscopy without sedation than go for such a convoluted task. Thanks, but no, thanks. I appreciate your time and effort to explain, but this doesn’t fit my KISS philosophy. I also try to keep my blood pressure low because I need it so I can drink my coffee.
Ok, understood. But to prevent anyone from coming here and getting incorrect or imprecise information, some clarifications:
1. BootICE appears to be abandoned, although it still works fine. It can be downloaded from sites like MajorGeeks. But as I said earlier, it’s not mandatory – everything can be done with native Windows tools.
2. For flash memory-based disks (NVMe, SSD), the overhead of writing to a file is practically negligible.
3. Sysprep is a native Windows tool used to “seal” and “generalize” a Windows installation and prepare it to be used on other hardware. I myself installed the VHDs from scratch, running sysprep at the end of the configuration.
4, 5. You need some operating system to format and partition the computer’s disk, and then copy the VHD. Any Windows installation ISO or WinPE (Preinstallation Environment) is sufficient.
6. It’s possible to have only Windows with native VHD boot on the computer, without having any other Windows (traditional or otherwise). That’s how I set up my machines – none of them have traditional (partition-based) Windows installations. The example I mentioned in my previous comment (buying a computer with Windows pre-installed) was just to illustrate an easy way to configure native VHD boot.
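To make point 6 concrete: from a WinPE prompt on a blank disk, the whole setup is roughly the following (GPT/UEFI assumed; partition sizes, letters and file names are illustrative):

rem Partition the physical disk: a small EFI system partition plus one data partition
diskpart
  select disk 0
  clean
  convert gpt
  create partition efi size=260
  format fs=fat32 quick
  assign letter=S
  create partition primary
  format fs=ntfs quick label="Data"
  assign letter=D
  exit

rem Copy the sysprepped VHD to D:\VHDs, attach it with diskpart as shown earlier
rem (say its Windows partition mounts as F:), then create the boot files:
bcdboot F:\Windows /s S: /f UEFI

From then on, the machine boots straight into the VHD, with no traditional (partition-based) Windows anywhere on the disk.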
It’s about the raw performance of a filesystem versus one more abstraction layer. You know, like all those filesystem performance benchmarks that Phoronix runs.
In a normal Windows installation, the filesystem abstraction is the only layer between the OS and the abstraction provided by the SSD controller.
Using a VHD/VHDX virtual disk inserts another software layer. And my feeling is that the impact on performance might be heavier on an SSD than on an HDD. My reasoning: it’s not about the speed of reading/writing the file on a slow HDD, but about the delay introduced by the software that translates to/from VHD/VHDX on an otherwise very fast NVMe SSD.
Am I wrong? Kimi says I am right.
I never used this shit. The “prepare it to be used on other hardware” part is confusing. Does it have to do with the resetting of OOBE? Does it help with detecting a different hardware?
The only method that I tried was to use Hasleo WinToUSB to create a WinToGo copy of an installation on an external SSD, then to boot that SSD, update the drivers as needed, then repeat the transfer via Hasleo. No need to learn any Sysprep.
Kimi helped me by providing info such as: “Sysprep removes machine-specific identity (computer name, SID, activation state)” and “It strips out hardware-specific drivers and SID (security identifier), preventing driver conflicts when the image boots on new chipsets, GPUs, storage controllers, etc.”
OK, my quick fix isn’t appropriate for VMs. It was never meant to be.
Partitioning and formatting can be done with anything, even some Live Linux ISOs, although ntfsprogs might need to be downloaded and installed. But copying the VHD file from another external device cannot be done from within a standard Windows installation ISO! You need a WinPE ISO that normal people don’t usually have (I only have a Hiren’s BootCD PE (HBCD_PE_x64.iso) and something called WinPE11_10_8_Sergei_Strelec_x86_x64_2025.12.14_English.iso). Or, again, a Live Linux ISO.
Why such complications? I do not want to keep “master copies” of an “optimally installed OS”!
There are restore points in Windows, you know. People use them. I DISABLE THEM COMPLETELY. I don’t like to have dozens of GB of shit on my limited storage. But even if I had a 5TB SSD, I’d still disable the restore points in Windows.
Even in the “good old times,” when people said they needed to reinstall Windows (XP/2k/7) once a year or more often, I didn’t need to do that. And I installed and uninstalled tens of thousands of programs over time; I cleaned the system and the Registry with tools or by hand; I partially screwed up Windows many times, and just as many times I fixed it, with one notable exception when I needed to reinstall Win7.
I don’t need worthless stuff and pointless complications. If it’s not simple enough, it sucks. (Yes, everything sucks these days, except for some BSD OSes that unfortunately lack the proper drivers and even software compatibility with some proprietary software I need, so I can’t use them.)
EDIT: OTOH, I could try Windows Server 2022 Version 21H2 (20348.169). It’s based on Win10 and supported through:
– mainstream: October 13, 2026.
– extended/LTSC: October 14, 2031.
I see that MSFT also offers a VHD image! 20348.169.amd64fre.fe_release_svc_refresh.210806-2348_server_serverdatacentereval_en-us.vhd.
I might try to… try it some day.
I really like the system described by VRBF. I’ve never used it myself, but it looks excellent: you create the parent VHD once, and then whenever you reinstall Windows (or install it on a new PC), you just copy that VHD and boot from a child of it, where only the differences will be saved incrementally (and therefore quickly). If something breaks, you go back to the parent and create another child.
I think your analysis with Kimi should be taken with a grain of salt. The extra translation layer runs mostly in RAM. And no matter how low the SSD latency is, it is still much higher than RAM latency. “On an NVMe SSD with ~100µs latency, adding even 500µs of software overhead is a 5x relative penalty.” Have we reached the point where persistent storage is faster than virtual memory?!? Besides, the OS uses caching anyway; it doesn’t write data to the SSD immediately.
The only question that should really concern us is how many additional physical I/O operations appear when using a VHD. And probably not many. The cache and the other translations that run in RAM (even the SSD controller itself has an internal translation table, since it doesn’t work with real physical sectors but emulates them) are imperceptible, adding only microseconds or nanoseconds. In any case, worth testing.
“whenever you reinstall Windows (or install it on a new PC)” – I do not (re)install Windows. Like, ever. Maybe once every 5 years or 7 years. (Meanwhile, I reinstall Linux 20 times, because one distro or another becomes shitty every once in a while and pisses me off.)
“where only the differences will be saved incrementally” – I do not like incremental differences, incremental backups, which actually mean differential backups.
“If something breaks, you go back to the parent and create another child.” If things break that often, then someone is too stupid or incompetent. And no, I wasn’t thinking about Microsoft.
“The extra translation layer runs mostly in RAM. And no matter how low the SSD latency is, it is still much higher than RAM latency.” How about the CPU cycles? Also, not everything is cached in RAM. Real-life scenarios are a mixed bag. But OK, people also love ZSWAP and ZRAM. I do not.
I asked Kimi:
It answered, but I needed to protest:
In brief: “The guest OS isn’t exactly lying—it’s operating in a vacuum by design. It has no mechanism to query the host’s physical free space.” IT JUST DOESN’T KNOW!
Every single fucking hypervisor IS DESIGNED IDIOTICALLY, BY RETARDS! “If you need honest capacity, pre-allocate.”
So let’s put it this way: I have a master VHD (of known size) and a child VHD (of dynamic size) on an NTFS partition (of known size). But the child, which is where I run my fucking “optimized” Windows, doesn’t know the actual free space of the external NTFS partition, nor the actual total size and the actual free size of its own filesystem! File Explorer reports whatever the VHD was set to report, maybe 1,000 TB, and I literally cannot know how much free space I have! The alternative is to have everything of fixed sizes, but then, why the fuck don’t I use a NORMAL INSTALL? There fucking are RESTORE POINTS IN WINDOWS!
Only a complete retard could happily live with such a shitty system. Fucking shitty retards.
I don’t expect anyone to understand why I HATE EVERY SINGLE VIRTUALIZATION TECHNOLOGY AND EVERY SINGLE HYPERVISOR ALMOST AS MUCH AS I HATE PUTIN AND TRUMP AND CRYPTOCURRENCIES AND THE CAPS THAT STICK TO PLASTIC BOTTLES IN THE EU!
Okay, I see you’re strongly opinionated about this, which I can’t be. Because, just like you, I’ve never tried this system either. 🙂 But I can see its theoretical merits. All I’m saying is that, if Microsoft has implemented and optimized it well, it looks like a very attractive option to me.
I personally like the virtualization idea. I’ve used it in the past, back in the XP era, playing around with virtual machines in VirtualBox to test any software or weird configurations I wanted, or going on sketchy websites and running keygens inside Sandboxie just to grab some serial keys without risking my real system. The thought that a problematic driver, a buggy Windows update, or a compromised program cannot touch my real system and that I can wipe all of its effects with a single click is pretty cool. But to each their own.
One tiny note, though: your Kimi partner assumes you are asking about a Virtual Machine and speaks about Guest OS, Hypervisor, Host and so on. Which is not the case here. As per the documentation: “Native boot enables virtual hard disks (VHDXs) to run on a computer without a virtual machine or hypervisor.” Only the drive is virtualized. Also, as far as I understand, there is no free space “in the CURRENTLY ALLOCATED VHD/VHDX”, because the dynamically expanding VHDX occupies the minimum amount of space needed, growing as necessary. Therefore, it has zero internal free space. So it probably reports as free space virtual-disk-max (the max you set when creating it, e.g., 100 GB) minus virtual-disk-used (its current size). If it’s “lying,” it’s because you’ve instructed it that it can only extend up to, say, 100 GB.
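And if one wants the real numbers rather than whatever the booted system reports, diskpart will show both figures for a VHD, something along these lines (the exact labels may differ slightly between Windows versions, and the file name is again just an example):

diskpart
  select vdisk file="D:\VHDs\win2k22-curr.vhd"
  detail vdisk
rem "Virtual size" is the ceiling the disk may grow to;
rem "Physical size" is what the .vhd file currently occupies on the host partition.
  exit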
Fair points. But what a twisted design to have Windows use a VHD/VHDX file instead of a partition. Not only that, but a dynamically growing one, which certainly slows things down. Who the fuck manages this shit without a hypervisor? A driver? (Curiouser and curiouser…)
Sandboxie. Oh, I’ve used it. Half a dozen times, maybe. Later, I just knew which keygens to trust (the best ones are those detected by Kaspersky as not-a-virus:keygen).
Condoms are good to prevent STDs and unwanted pregnancies. But I hate them, especially in porn movies 😉
EDIT: Sandboxie is not virtualization! It’s containment-grade userspace isolation, closer to Flatpak than to VMs.
I asked Kimi about native booting from a VHDX. Its answer includes, among other things:
It also says that “The technical excuse falls apart”: in native boot, unlike in a Hyper-V VM, it’s the same kernel, the same I/O stack, and the internal Windows APIs could be used directly. However…
In other words:
I rest my case. It’s stupid. The simple fact that this technology exists, that it can be used, doesn’t mean it’s such a great idea.
I’m pretty sure that, in the long run, Windows will become an immutable OS. Application software will be installed like Flatpaks or like iOS and Android apps, in relative isolation. I can’t wait for the future!
Ok, just a few more clarifications based on daily technology use:
a) IMHO, you’re very focused on the performance issue and the overhead of VHD file access. But remember, it’s only the system partition (C: drive) that is virtualized; all other partitions and disks are real. I really can’t perceive any performance difference between a system booting from a real partition and native VHD boot. It exists, of course, but it’s imperceptible.
b) you’re also more focused on the issue of restoration in case of problems. Yes, problems aren’t that frequent (even though Microsoft threatens your system’s health every month with a possibly destructive update), but much more frequent is performing some kind of test. A child VHD is very good for testing. Sure, it’s possible to undo an installation, erase everything, but how can you have full assurance that no trace of the test remained on the system?
c) “System Restore Points” are only available in client versions of Windows, not in Server versions (in fact, this can be considered another advantage of Server versions…).
d) the page file (pagefile.sys) is by definition not saved on the VHD; it’s saved on a real partition. This helps prevent the dynamic VHD size from growing too much, since the page file can be large.
e) both the Windows boot manager and the kernel (since Windows 7) natively support virtual disks that point to VHDs.
Some advantages are only perceived with continuous use. Never again needing to create (and restore) images of my computers – that saves a lot of time. And that’s just one of the advantages. But to perceive them, you need to try it.
I’m mostly focused on unnecessary complications. Are you familiar with The Incredible Machine, or are you too young for that? You could also watch this Tom and Jerry video (starting from 5:00 if you’re in a hurry).
Believe me, I’m a big fan of the K.I.S.S. principle. What you consider (in theory) “unnecessary complications,” for me (in practice) is a simpler way to do a Windows installation. I may be dumb, but I’m not a masochist… 😀
I’d rather believe you’re masochistic… or maybe you invented a new religion, and you stick to it. This is not a normal scenario for the normal desktop use of Windows!