Not because I was younger. Not because of the simplicity of both DOS and 16-bit Windows. Not because I loved Borland’s IDEs so much, with their ncurses-like Turbo Vision UI, in which I also wrote my first Windows programs. Not because of the games. OK, because of all of these, but not only that.

Here’s a quickie.

I asked ChatGPT why Recuva is so dumb. In the process, I realized why I was happier in the first half of the 1990s. Here’s the short exchange:

Take the following scenario: I delete a folder containing 500 GB worth of files from an external drive. Right away, I fire up a recovery tool such as Recuva and, sure enough, as no file has been overwritten, all can be perfectly recovered. However, Recuva asks me to provide a different drive or partition to save those files, so I need 500 GB of space somewhere else. Why doesn’t it recover them IN PLACE, exactly where they are, as long as not a single byte of them has been overwritten? Are those software developers complete idiots?

The design choice of requiring a different drive or partition for file recovery is not due to the software developers being “idiots,” but rather stems from practical and technical considerations aimed at ensuring data integrity and preventing further data loss. Here are the key reasons why recovery tools like Recuva ask for a different location to save recovered files:

1. Preventing Overwriting of Data:

When files are deleted, the space they occupied is marked as available for new data. If a recovery tool were to write recovered files back to the same drive, it risks overwriting the very data it is trying to recover. Even if the recovery tool is careful, there’s always a risk that the operating system might use some of that space for its own purposes (e.g., temporary files), leading to data corruption or loss.

2. Ensuring Data Integrity:

Recovering files to a different drive ensures that the original data remains untouched. This is important because the recovery process itself might need to read the same sectors multiple times to ensure accuracy. Writing to the same drive might interfere with this process and lead to incomplete or corrupt recovery.

3. File System Consistency:

File recovery tools often work at a low level, directly interacting with the file system structures. Writing recovered files back to the same file system can complicate the process, as the tool would have to manage both reading and writing within the same space, potentially leading to inconsistencies or conflicts.

4. Performance Considerations:

Recovering files to a different drive can often be faster and more efficient. Writing to the same drive would require constant switching between reading the old data and writing the new data, which can slow down the process significantly.

5. Safety and User Error:

Requiring a separate drive reduces the risk of accidental user errors that could further complicate recovery. For instance, a user might accidentally choose to overwrite important data or the tool might encounter issues if there isn’t enough contiguous free space to restore the files.

Practical Example:

Imagine you deleted a folder with 500 GB of data from an external drive. The file system marks those sectors as free, but the data itself is still there. If Recuva tried to recover the data in place, it would need to ensure that no new data gets written to those sectors during the recovery process. Any new writes, whether by Recuva, the OS, or another program, could permanently destroy the recoverable data.

By using a separate drive, Recuva guarantees that the recovery process does not interfere with the original data, maximizing the chances of a successful recovery.


The requirement to use a different drive for saving recovered files is a well-considered design choice based on the need to maximize data recovery success and integrity. It ensures that the recovery process does not inadvertently cause more data loss or corruption. This approach, while sometimes inconvenient, ultimately serves to protect the user’s data and provide the best possible recovery outcome.

But: 1. Nothing has been overwritten. 2. Why would the OS use temp files on that external drive instead of %TEMP%? 3. Back in the day, the Symantec tools for MS-DOS and Windows 3.1 did the recovery in place. Why can’t this be done today? There’s plenty of BS in your argumentation.

Your points are valid and indicate a nuanced understanding of file recovery. Let’s address each of your concerns in detail to provide a clearer picture:

1. Nothing Has Been Overwritten:

It’s true that if no data has been overwritten, the deleted files’ data still exists on the disk. However, the space is marked as available for new data. Recovery tools err on the side of caution to ensure data integrity by not writing to these potentially recoverable sectors.

2. Operating System Temporary Files:

You are correct that typically the OS uses the designated %TEMP% directory on the main system drive. However, applications, including recovery tools themselves, may write temporary data to the same drive during the recovery process. Even a small risk of overwriting the data is avoided by using a different drive.

3. Historical Tools and In-Place Recovery:

Older tools, like Symantec’s Norton Utilities for DOS and Windows 3.1, indeed performed in-place recovery. There are several reasons why modern tools do not typically follow this approach:

  • Modern File Systems and OS Behavior: Modern file systems (NTFS, exFAT, etc.) and operating systems are more complex than those in the MS-DOS and Windows 3.1 era. They are constantly performing background tasks that can affect disk data. Ensuring no data gets overwritten during these operations is more challenging now.
  • Increased Risk of Data Corruption: With larger and more complex file systems, the risk of partial recovery or corruption increases. Writing to a different location mitigates this risk, ensuring the original data remains untouched.
  • User Expectations and Data Safety: Modern users often have more valuable and sensitive data. Recovery tools prioritize the safety and integrity of this data over convenience. By requiring a separate drive, they reduce the risk of accidental data loss.
  • Performance and Recovery Efficiency: Modern drives and recovery tools handle larger volumes of data. Reading from one part of the drive and writing to another can be slower and less efficient. Using a different drive streamlines this process.

Practical Considerations and Improvements:

Modern recovery tools could theoretically offer an in-place recovery option, with strict warnings and safeguards to ensure the user understands the risks. However, the default safe approach is to avoid this due to the reasons mentioned.


The design choice to use a different drive for saving recovered files is fundamentally about balancing safety, performance, and data integrity. While in-place recovery is technically possible, it carries risks that modern tools aim to mitigate. Offering an in-place recovery option with clear warnings could be a potential improvement for users who understand and accept the risks involved.
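To be fair, the one argument in all of the above that does hold some water, the overwrite race from point 1, is easy to model. Here’s a toy Python sketch (my own illustration, nothing like a real file system’s on-disk layout) of why blocks that are merely marked free remain fair game for any later write:

```python
# Toy model of the overwrite race: "deleting" only marks blocks as
# free; the bytes stay where they are until someone writes over them.
disk = {0: b"boot", 1: b"photo-part-1", 2: b"photo-part-2", 3: b"other"}
free_blocks = set()

def delete(blocks):
    # Deletion here is just bookkeeping -- no data is touched.
    free_blocks.update(blocks)

def background_write(data):
    # Any later write (OS, browser cache, the recovery tool itself)
    # may legitimately land on a freed block.
    block = min(free_blocks)
    free_blocks.discard(block)
    disk[block] = data

delete([1, 2])
print(disk[1])               # still b'photo-part-1' -- fully recoverable
background_write(b"thumbnail-cache")
print(disk[1])               # now b'thumbnail-cache' -- gone for good
```

Of course, this is exactly the race that does not exist in my scenario: nothing writes to that external drive between the deletion and the recovery.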

File systems. Kernels. The lack of direct low-level access in modern OSes. Stupid complexity. Either it’s all of this, which means life back then was a lot easier and more straightforward, hence more beautiful. Or Recuva’s developers are really stupid. But given that ChatGPT got these arguments from somewhere, somebody real must have made them before, more than once; it doesn’t sound like a typical LLM hallucination. Either way, the practical effect is nonsensical. Let me recover the bloody things where they are! I just changed a marker saying they’re dead; now let me change that marker back! (For each file, of course.)
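The “marker” is not a figure of speech. On FAT, the file system of the Norton UNERASE era, deleting a file just overwrites the first byte of its 8.3 directory entry with 0xE5; the data clusters stay put. Undeletion is largely restoring that byte (which the tool has to ask you for or guess, since the original character is destroyed) and rebuilding the cluster chain. A toy sketch of the idea, mine, not any tool’s actual code:

```python
# FAT-style deletion marker, reduced to its essence: one byte of the
# directory entry is overwritten with 0xE5; undeleting flips it back.
DELETED_MARKER = 0xE5

entry = bytearray(b"README  TXT")   # an 8.3 name: 8 name bytes + 3 ext bytes

def fat_delete(e: bytearray) -> int:
    original = e[0]
    e[0] = DELETED_MARKER           # this is all "deleting" does to the entry
    return original                 # real undelete tools must guess this byte

def fat_undelete(e: bytearray, first_char: int) -> None:
    e[0] = first_char               # marker flipped back; the file is "alive"
                                    # (real tools must also rebuild the
                                    # cluster chain in the FAT)

saved = fat_delete(entry)
print(hex(entry[0]))                # 0xe5 -- the entry now reads as deleted
fat_undelete(entry, saved)
print(entry.decode())               # README  TXT -- back from the dead
```

NTFS is more involved (the MFT record and bitmap both need fixing), but the principle the old Symantec tools relied on is exactly this: flip the markers back, touch nothing else.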

I have no idea whether extundelete can recover files in place when they’re on a different partition from the system’s (and from /tmp and /home), but I wouldn’t hold my breath. I’ve never tried R-Studio or R-Linux either. As for FreeBSD, it seems the changes made to the FFS2 filesystem in FreeBSD 13 are no longer compatible with ffs2recov, so recovery from UFS2 might be impossible. Progress is beautiful, they say.

In case you’re wondering: yes, there was Internet access for Windows 3.1. You only needed a little helper to supply the missing TCP/IP stack, and this is the solution I was using back then: