SDelete with wear-leveling SSD


Intuit

https://docs.microsoft.com/en-us/sysinternals/downloads/sdelete
Published: November 25, 2020
Prior to switching to SSDs, I regularly used SDelete to overwrite individual files.

Confirm or deny the following.

In theory, SDelete can't wipe an individual file on wear-leveling flash media, such as USB flash drives and SSDs, unless the "-c" (clean free space) parameter is used.
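For reference, a full free-space clean with SDelete looks something like this (the drive letter is illustrative):

sdelete -c D:    # overwrite all unallocated space on volume D: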


Short of cleaning all free space, my thought is that in order to do a secure erase with an SSD, one first has to make sure that the O/S has TRIM enabled.
https://www.windowscentral.com/how-ensure-trim-enabled-windows-10-speed-ssd-performance
fsutil behavior query disabledeletenotify
fsutil behavior set disabledeletenotify 0
Then once that is enabled, manually initiate a TRIM cycle.
https://winaero.com/trim-ssd-windows-10/
Optimize-Volume -DriveLetter C -ReTrim -Verbose
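Putting those together, a minimal check-then-enable sketch might look like this (assumes an elevated PowerShell prompt; in fsutil's output, 0 means delete notifications, i.e. TRIM, are enabled):

$status = fsutil behavior query disabledeletenotify
if ($status -match 'DisableDeleteNotify = 1') {   # 1 = TRIM disabled
    fsutil behavior set disabledeletenotify 0     # 0 = enable TRIM
}
Optimize-Volume -DriveLetter C -ReTrim -Verbose   # hand any outstanding TRIMs to the drive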

Thanks for commenting and look'n. 🙂
 
The OS has no ability to command an SSD to blank a sector securely. You cannot overwrite anything in place on a flash drive. The drive will take your attempted overwrite, write the new data to a fresh block of flash, and leave the old data sitting in the original block. That now-stale block of flash will eventually be recycled, on a schedule the OS has no control over. Which is not to say it's strictly a risk to user data: getting at the set-aside data is not easy, and probably out of reach for most technical users, but properly motivated entities (government three-letter agencies, say) could do it.
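To picture that remapping, here's a toy sketch (purely illustrative; no real flash translation layer is anywhere near this simple) of why an attempted overwrite lands in a fresh page while the old data lingers:

$ftl      = @{}    # logical LBA -> physical page mapping
$flash    = @{}    # physical page -> stored data
$nextFree = 0

function Write-Lba([int]$lba, [string]$data) {
    # program the new data into a fresh, pre-erased page
    $script:flash[$script:nextFree] = $data
    # remap the LBA; any previously mapped page becomes stale but keeps
    # its old contents until garbage collection erases it later
    $script:ftl[$lba] = $script:nextFree
    $script:nextFree++
}

Write-Lba 7 'SECRET'
Write-Lba 7 'XXXXXX'   # the attempted in-place "overwrite"
$flash.Values          # 'SECRET' is still present in the stale page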
 
Okay, thanks for the confirmation, @PHolder.

Needless to say, constantly having to overwrite *all* freed space just to target a single file is impractical.

Maybe one day they (meaning those that develop standards for storage media) will fix this.

Curious that they seem to have avoided any mention of this scenario for years, especially as flash media has become ubiquitous.
 
Maybe one day they (meaning those that develop standards for storage media) will fix this
Well, maybe, but you can't break the laws of physics. Flash is a block- and page-oriented device; the only way to erase is to hit an entire erase block with higher voltages that zap all of its cells simultaneously. I suspect doing it another way would be inefficient in either lifespan or access speed.
 
You don't want to fill all the space on a flash device too often because of the wear it creates. And it's a good idea to leave 10% unprovisioned or empty so the drive can do its housekeeping. If that unprovisioned space is unpartitioned space, you'll never put any files there that you need to erase. If it's unused space that is still partitioned, I don't know whether the drive can do housekeeping with it. Anyone who knows, please chip in.

But, there are some things you can do to mitigate the problem you described.

SDelete is very slow at filling large spaces. One way to solve the problem is to use up much of the space with junk files and only delete them when you need more space. Long ago, I used a random function on the Linux command line (I don't remember which) to create files full of random gibberish in certain sizes. I generated 1, 2, 4, 8, 16, 32, 64, 128, 256, and 512 MB, and 1, 2, 4, and 8 GB files; the total of that is 16 GB, if you're wondering. I have those sitting in a junk folder on my HDD / SSD that is not backed up.
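A rough PowerShell equivalent of that generation step might look like the following (the folder name and sizes are illustrative assumptions, not the original script):

$junkDir = 'D:\Junk'
New-Item -ItemType Directory -Force -Path $junkDir | Out-Null
$rng = [System.Security.Cryptography.RandomNumberGenerator]::Create()
$buf = New-Object byte[] (1MB)
foreach ($mb in 1,2,4,8,16,32,64,128,256,512,1024,2048,4096,8192) {
    $stream = [System.IO.File]::OpenWrite((Join-Path $junkDir "junk_${mb}MB.bin"))
    for ($i = 0; $i -lt $mb; $i++) {
        $rng.GetBytes($buf)                 # fresh randomness defeats compression
        $stream.Write($buf, 0, $buf.Length)
    }
    $stream.Close()
}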

So, if I want to fill up, say, 300 GB of space, I just go into that folder and copy the files over and over from the file manager into the same folder, and let the OS rename the copies. That's still slow, but not as slow as SDelete. Then, when I'm down to 1 MB or so of empty space, I run SDelete to fill the rest. Then I go in and delete some of the junk files, but not the originals. Then I empty the trash.

This is tedious, but it works. If you anticipate the need to fill empty space, you can leave many of the copies of the junk files sitting around, say all but 50 GB on a 1 TB drive. When you have the need, fill up the 50 GB to flush out anything you don't want hanging around, then free up the 50 GB again. SDelete will automatically free up anything it creates; you have to free up whatever you copy. An advantage of this procedure is that, after the first round of copying, you're only adding 50 GB of writes to the drive each time, provided you leave most of the copies in place. When you need more space, delete some of the copies and empty the trash.

That turned out to be harder to describe than I thought. Hope it helps.

May your bits be stable and your interfaces be fast. :cool: Ron
 
https://www.kanguru.com/ has an interesting product line. The tag line on the website says:

"The Best in Encrypted USB Flash Drives, External Hard Drives, and Remote Management: Secure Your Information with Kanguru"

Some of the USB sticks have a write protect switch, which can also be handy.

May your bits be stable and your interfaces be fast. :cool: Ron
 
@rfrazier - Yeah, that's why I said it was impractical: not only the time it takes, but also the wear it creates. "Contig -n FileName FileLength" can be used to *instantly* create a new file that is empty only logically, not physically (it takes advantage of NTFS's sparse-file feature). I think that would be a good approach for, say, this scenario: create the spacehog file, write and view the sensitive files, delete those files, go behind them with a secure-erase utility, then remove the spacehog file. Leaving it around as a long-term spacehog would have a negative impact on wear-leveling, though.
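Spelled out as commands, that workflow might look like the following (the paths, size, and pass count are illustrative, and Contig and SDelete are assumed to be on the PATH):

contig -n C:\SpaceHog.tmp 200000000000   # reserve ~200 GB logically, with no physical writes
# ... create, view, and work with the sensitive files ...
sdelete -p 1 C:\Work\Sensitive.docx      # overwrite the file(s) being retired
del C:\SpaceHog.tmp                      # release the reserved space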


Well maybe, but you can't break the laws of physics. Flash is a block and page oriented device, the only way to erase a page is with higher voltages that zap all cells in the page simultaneously. I suspect to do it another way would either be inefficient on the lifespan or the speed of access.
Similar situations have been dealt with going back decades. For one of many examples: the drive operates on 512-byte sectors, but the file system reads and writes only in 4096-byte blocks. Even though only 1024 bytes need to be written, the system ends up reading and writing the full 4096 bytes. This inefficiency is often referred to as "overhead".

In order for the OS to TRIM a drive, the drive needs to be programmed with firmware that recognizes the command.

Solving an individual-sector secure erase may merely involve adding another command, similar to what they did with TRIM: the O/S instructs the drive that it wishes to secure-erase a set of sectors, and in response the drive bypasses the normal wear-leveling for those sector writes.

Re @DiskTuna's comments: I've seen more expensive 2.5" computer SSDs marketed as "encrypted" at work, but I haven't come up with a scenario where that would be valuable when you can plug the drive into any device and get what's on it (it isn't locked to the BIOS/firmware/computer, there's no user-entered boot-time password, and no O/S-level partition encryption is in use).
 
Solving an individual-sector secure erase may merely involve adding another command
This command already exists, but it STILL can't get around the laws of physics. If the erase block is 8K (or more) and you want to erase 512 bytes, you need to find a new home for the other data that survives, and then blast the now-emptied block. In effect this requires two erase cycles (one for the new home, and one for the securely erased old data). The first erase can be handled in the background by the SSD if TRIM is in use and the SSD has plenty of room or spares. Either way, this extra process WILL impact the speed of your device at one point or another.

 
This command already exists, but it STILL can't get around the laws of physics. If the erase block is 8K (or more) and you want to erase 512 bytes, you need to find a new home for the other data that survives, and then blast the now-emptied block. In effect this requires two erase cycles (one for the new home, and one for the securely erased old data). The first erase can be handled in the background by the SSD if TRIM is in use and the SSD has plenty of room or spares. Either way, this extra process WILL impact the speed of your device at one point or another.

https://flashdba.com/2014/06/20/understanding-flash-blocks-pages-and-program-erases/
Similar situations have been dealt with going back decades. For one of many examples: the drive operates on 512-byte sectors, but the file system reads and writes only in 4096-byte blocks. Even though only 1024 bytes need to be written, the system ends up reading and writing the full 4096 bytes. This inefficiency is often referred to as "overhead".

Depending on when and how garbage collection is implemented, there are potential performance consequences to running TRIM as well, which is why it is selectively scheduled.

Understand: NO ONE here has suggested doing secure erases with each and every operation, or even most operations. We already know there is "overhead" involved; overhead is nothing new for storage media. From side-channel to cache-timing attacks, we've sacrificed a fair percentage of efficiency for additional security. In the grand scheme of things, selectively secure-erasing a few dozen Word, Excel, PowerPoint, and PDF docs occupying 100 MB of space isn't going to have a noticeable performance impact, temporary or permanent.
 
I may be mistaken; I thought I had read there were some security options in the ATA spec (beyond the device-sanitization ones), but I can't find that message after a quick search, and I don't have a copy of the ATA spec (they charge money for it).

I did find this interesting article though: https://www.usenix.org/legacy/events/fast11/tech/full_papers/Wei.pdf
Portions of that are right up the alley of this discussion. Good find. Personally, I don't elect to use the 100% "NSA can't get my data" option for secure deletes on an HDD. My only aim is to defeat a casual user's ability to recover data using a simple undelete utility; in other words, the once-over works for me. Each user can make their own decisions.

<<..... Enabling single-file sanitization requires changes to the flash translation layer that manages the mapping between logical and physical addresses. We have developed three mechanisms to support single-file sanitization and implemented them in a simulated SSD. The mechanisms rely on a detailed understanding of flash memory's behavior beyond what datasheets typically supply. The techniques can either sacrifice a small amount of performance for continuous sanitization or they can preserve common case performance and support sanitization on demand. We conclude that the complexity of SSDs relative to hard drives requires that they provide built-in sanitization commands. Our tests show that since manufacturers do not always implement these commands correctly, the commands should be verifiable as well. Current and proposed ATA and SCSI standards provide no mechanism for verification and the current trend toward encrypting SSDs makes verification even ........>>
 
the drive operates on 512-byte sectors, but the file system reads and writes only in 4096-byte blocks. Even though only 1024 bytes need to be written, the system ends up reading and writing the full 4096 bytes. This inefficiency is often referred to as "overhead".
This is not how flash works. On an HDD you can overwrite a block at any point with ease: you just need to do some head positioning and turn on the write head at the right time, and blammo, old data gone, new data written.

Flash is more like an 8-track tape, in that there is no rewind. You can't update a block of flash (there is no overwrite). You need to fully erase the whole block, and then you can write the whole block at once. Erasing a block is power-hungry and slow. In modern drives it is done in the background, so that empty blocks are sitting waiting for write operations. TRIMing a block puts it into the pool of blocks to be erased for reuse in the future. Erasing a block also reduces its lifespan. Depending on the type of flash (SLC, MLC, etc.), the number of erase cycles can be low enough that you don't want to be wasteful, or your drive will run out of reusable blocks before its rated lifetime.
 
Yeah, but SSDs, or better said flash, are different. On a conventional hard drive you read 4096 bytes and write them back to the exact same physical location. On an SSD, by definition, you cannot write back to the same location*. So the operation requires an erased page, and the page that was read from becomes stale and has to be erased at a later time.

*Edit: You could, but it means you need to erase the entire block first.
I pretty much said that in my opening post; that's called wear-leveling. Yes, I already knew it does an erase before write... which is irrelevant to any point that I've communicated.
You can run TRIM as often as you want without negative consequences. TRIM merely informs the drive about LBA sectors that are of no further interest. It does not mean the drive will erase them over and over.
As mentioned before, the SSD must erase before it writes. When garbage collection runs, the sectors queued via TRIM are then erased, and that erasing has a performance impact. Depending on when and how garbage collection is implemented, there are potential performance consequences to running TRIM as well, which is why it is selectively scheduled.

But for flash memory with finite erase/program cycles, overhead or write amplification does have consequences.
Understand: NO ONE here has suggested doing secure erases with each and every operation, or even most operations. We already know there is "overhead" involved; overhead is nothing new for storage media. From side-channel to cache-timing attacks, we've sacrificed a fair percentage of efficiency for additional security. In the grand scheme of things, selectively secure-erasing a few dozen Word, Excel, PowerPoint, and PDF docs occupying 100 MB of space isn't going to have a noticeable performance impact, temporary or permanent.

I'd like to bring to your attention the bit I got from the PDF:

"Multiple copies This graph shows The FTL duplicating files up to 16 times."

In essence it means that securely erasing an individual file has little purpose if the file can potentially be recovered from stale pages (in a data-recovery lab).
As with many recent security-exploit mitigations, the designers of any new secure-erase protocol or command would obviously have to take all forms of caching into account.
 
This is not how flash works. On an HDD you can overwrite a block at any point with ease: you just need to do some head positioning and turn on the write head at the right time, and blammo, old data gone, new data written.
That is obvious. The point you're avoiding is that the performance impact is a non-issue for the given scenario. If you're going to the extreme of trying to secure-erase everything, then sure. But who here stated or implied that?
Flash is more like an 8-track tape, in that there is no rewind. You can't update a block of flash (there is no overwrite). You need to fully erase the whole block, and then you can write the whole block at once. Erasing a block is power-hungry and slow. In modern drives it is done in the background, so that empty blocks are sitting waiting for write operations. TRIMing a block puts it into the pool of blocks to be erased for reuse in the future. Erasing a block also reduces its lifespan. Depending on the type of flash (SLC, MLC, etc.), the number of erase cycles can be low enough that you don't want to be wasteful, or your drive will run out of reusable blocks before its rated lifetime.
So there's overhead to secure erase. Again, we knew that already. The research article you linked to stated the following: "... The techniques can either sacrifice a small amount of performance for continuous sanitization or they can preserve common case performance and support sanitization on demand. ..."
 
With modern OS TRIM this is already practically the case. My buddy Krzys demonstrates:

https://www.youtube.com/watch?v=hzClnwGeJUM

I'd like to add that even if those drives have not yet actually erased anything, reading the LBA addresses associated with the deleted files will almost immediately result in the drive returning zeros, without it even actually reading the flash.
Depending on the scenario, garbage collection can be multiple power cycles down the road. But you're saying that for TRIM-aware operating systems, the drive effectively "hides" the data until garbage collection actually erases it. Since you mentioned it: users of Steve Gibson's ReadSpeed utility were observing bus saturation from just that, the drive returning data without actually performing the reads. Pretty efficient.
 
The drive removes such sectors from user-addressable space after being informed about them via the TRIM command. Indeed, it does not even have to read the sector; it just returns zeros when a read is attempted.
So the conclusion is that there's no concern about a casual user scanning an SSD to recover deleted files, provided that the O/S is TRIM aware; SDelete isn't necessary.

It would be interesting to test this by turning off TRIM, restarting, deleting a file, then running a file-recovery scan; then reactivating TRIM, restarting, deleting a file, and running a file-recovery scan again.
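In command form, the outline of that experiment might look like this (run from an elevated prompt; the recovery scan is whatever third-party utility you prefer):

fsutil behavior set disabledeletenotify 1   # disable TRIM
# restart, delete a test file, run a file-recovery scan
fsutil behavior set disabledeletenotify 0   # re-enable TRIM
# restart, delete another test file, run the scan again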

Without going that far, and knowing that the drive and O/S are TRIM capable, I'm starting a scan to see what deleted files, if any, are recoverable...
 
I was able to verify that, with TRIM enabled, the drive returns zeroes.
I mounted an inactive partition from the above SSD.
I created a human-readable text file containing the content, "THIS FILE IS HUMAN READABLE."
I used the program to view the file.
I deleted the file.
I then ran a recovery scan.
I opened the deleted file.
Content was null.
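For anyone who wants to repeat this, the file-creation side of the test might look like the following (the drive letter is illustrative; the recovery scan itself was a separate third-party utility):

Set-Content -Path 'E:\RecoverMe.txt' -Value 'THIS FILE IS HUMAN READABLE.'
Get-Content 'E:\RecoverMe.txt'        # confirm the file reads back as written
Remove-Item 'E:\RecoverMe.txt'        # delete; the OS TRIMs the freed LBAs
# run a recovery scan and open the "recovered" file:
# on a TRIM-enabled SSD its content comes back null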

I'll now turn off TRIM and attempt the same experiment. 🙂
 
Out of curiosity, I repeated the above experiment.
But instead of using the simple command prompt "del" command, I used SDelete.
SDelete hides the name of the file: instead of the deleted "RecoverMe.Txt" file showing up with its original name, it showed up as...
[attachment: screenshot of the recovery scan listing the deleted file under an obfuscated placeholder name]


So if you don't mind the extra writes and the implications thereof, there is that benefit.
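For reference, the SDelete run in that repeat experiment would have looked something like this (the path and pass count are illustrative):

sdelete -p 1 E:\RecoverMe.txt   # overwrite the file, then rename and delete it to obscure the original name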