SDelete with wear-leveling SSD


Intuit

Active member
Dec 27, 2020
https://docs.microsoft.com/en-us/sysinternals/downloads/sdelete
Published: November 25, 2020
Prior to switching to an SSD, I used SDelete to overwrite individual files.

Confirm or deny the following.

In theory, SDelete can't wipe an individual file on wear-leveling flash media, such as USB flash drives and SSDs, unless the "-c" (clean free space) parameter is used.


Absent cleaning all free space, my thought is that in order to do a secure erase with an SSD, one first has to make sure that the OS has TRIM enabled.
https://www.windowscentral.com/how-ensure-trim-enabled-windows-10-speed-ssd-performance
fsutil behavior query disabledeletenotify
fsutil behavior set disabledeletenotify 0
Then once that is enabled, manually initiate a TRIM cycle.
https://winaero.com/trim-ssd-windows-10/
Optimize-Volume -DriveLetter C -ReTrim -Verbose

Thanks for commenting and look'n. 🙂
 

PHolder

Well-known member
Sep 16, 2020
Ontario, Canada
The OS has no ability to command an SSD to blank a sector securely. You cannot overwrite anything in place on a flash drive. The drive will accept your attempted overwrite, write the new data to a fresh block of flash, and leave the old data where it was. That stale block of flash will eventually be recycled one day, on a schedule the OS has no control over. Which is not to say it's strictly a risk to user data: getting at the set-aside data is not easy, and probably out of reach for most technical users, but if properly motivated, some entities (government three-letter agencies, say) could do it.
 

Intuit

Active member
Dec 27, 2020
Okay thanks for the confirmation @PHolder .

Needless to say, constantly having to overwrite *all* freed space just to target a single file is impractical.

Maybe one day they (meaning those that develop standards for storage media) will fix this.

Curious that they seem to have avoided any mention of this scenario for years, especially as flash media has become ubiquitous.
 

PHolder

Well-known member
Sep 16, 2020
Ontario, Canada
Maybe one day they (meaning those that develop standards for storage media) will fix this
Well, maybe, but you can't break the laws of physics. Flash is a block- and page-oriented device; the only way to erase a page is with higher voltages that zap all cells in the page simultaneously. I suspect doing it another way would be inefficient in either lifespan or access speed.
 

rfrazier

Well-known member
Sep 30, 2020
You don't want to fill all the space on a flash device too often because of the wear it creates. And it's a good idea to leave 10% unprovisioned or empty so the drive can do its housekeeping. If that unprovisioned space is unpartitioned, you'll never put any files there that you need to erase. If it's unused but still partitioned space, I don't know whether the drive can do housekeeping with it. Anyone who knows can chip in.

But, there are some things you can do to mitigate the problem you described.

SDelete is very slow at filling large spaces. One way to solve the problem is to use up much of the space with junk files and only delete them when you need more space. Long ago, I used a random function on the Linux command line (I don't remember which) to create files of random gibberish in certain sizes. I generated 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 MB and 1, 2, 4, 8 GB files. The total of that is 16 GB, if you're wondering. I have those sitting in a junk folder on my HDD / SSD that is not backed up.

So, if I want to fill up, for example, 300 GB of space, I just go into that folder and copy the files over and over graphically from the file manager into the same folder and let the OS rename the copies. That's still slow, but not as slow as SDelete. Then, when I'm down to 1 MB or so of empty space, I run SDelete to fill the rest. Then I go in and delete some of the junk files, but not the originals. Then I empty the trash.

This is tedious but it works. If you anticipate the need to fill empty space, you can leave many of the copies of the junk files sitting around, say all but 50 GB on a 1 TB drive. When you have the need, fill up the 50 GB to delete anything you don't want hanging around. Then free up the 50 GB. SDelete will automatically free up anything it creates. You have to free up whatever you copy. An advantage of this procedure is that you're only adding 50 GB of writes to the drive each time after you copy the junk files the first time if you leave most of the copies. When you need more space, delete some of the copies and empty the trash.

That turned out to be harder to describe than I thought. Hope it helps.
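For anyone who would rather script Ron's junk-file trick than do it by hand, here's a rough sketch. The folder name and size list are made up for illustration; Linux users could just as well use `dd if=/dev/urandom`:

```python
import os

def make_junk_files(folder, sizes_mb=(1, 2, 4, 8)):
    """Create one file of random gibberish per requested size (in MB)."""
    os.makedirs(folder, exist_ok=True)
    chunk = 1024 * 1024  # write 1 MB at a time to keep memory use low
    for size in sizes_mb:
        path = os.path.join(folder, f"junk_{size}MB.bin")
        with open(path, "wb") as f:
            for _ in range(size):
                f.write(os.urandom(chunk))  # random data, so it can't be compressed away

make_junk_files("junk", sizes_mb=(1, 2))
```

Copying these around, as described above, then fills space much faster than SDelete's overwrite passes.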

May your bits be stable and your interfaces be fast. :cool: Ron
 

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
The OS has no ability to command an SSD to blank a sector securely. You cannot overwrite anything in place on a flash drive. The drive will accept your attempted overwrite, write the new data to a fresh block of flash, and leave the old data where it was. That stale block of flash will eventually be recycled one day, on a schedule the OS has no control over. Which is not to say it's strictly a risk to user data: getting at the set-aside data is not easy, and probably out of reach for most technical users, but if properly motivated, some entities (government three-letter agencies, say) could do it.

Depending on the specific device, you may not even need to be a three-letter agency.

The OP also mentions USB flash drives, so I'll address those too. Most of these do not yet encrypt, although they do scramble data. For many devices the XOR key is known, and the contents of those discarded blocks are recoverable as long as they weren't erased yet. A tool like Rusolut VNR can be used to extract unknown XOR keys.

SSDs are more complex, though. For devices supported by PC3000 SSD or Portable, both commercially available tools that many data recovery labs use on a daily basis, data is potentially recoverable. That being said, many SSDs, or better said their controllers, aren't supported at this point.

The difference between USB flash drives or memory cards and SSDs is the method: data from USB flash drives is often recovered using the chip-off method, where individual NAND chips are dumped and then recombined virtually, while chip-off recovery is impossible with SSDs, mainly due to encryption.

Recovery of data from flash-based drives is hard, sometimes impossible, and sometimes doable with the proper equipment and software.
 

rfrazier

Well-known member
Sep 30, 2020
https://www.kanguru.com/ has an interesting product line. The tag line on the website says:

"The Best in Encrypted USB Flash Drives, External Hard Drives, and Remote Management: Secure Your Information with Kanguru"

Some of the USB sticks have a write protect switch, which can also be handy.

May your bits be stable and your interfaces be fast. :cool: Ron
 

Intuit

Active member
Dec 27, 2020
@rfrazier - Yeah, that's why I said it was impractical: not only the time it takes, but also the wear it creates. "Contig -n FileName FileLength" can be used to *instantly* create a new file that is empty only logically, not physically (it takes advantage of NTFS's sparse file feature). I think that would be a good approach for, say, the scenario of creating that spacehog file, writing and viewing sensitive files, deleting those files, going behind them with a secure-erase utility, then removing the spacehog file. Leaving it as a long-term spacehog would have a negative impact on wear-leveling though.


Well, maybe, but you can't break the laws of physics. Flash is a block- and page-oriented device; the only way to erase a page is with higher voltages that zap all cells in the page simultaneously. I suspect doing it another way would be inefficient in either lifespan or access speed.
Similar situations have been dealt with, going back decades. For one of many examples, the drive operates on 512-byte sectors but the file system reads and writes only in 4096-byte blocks. Even though only 1024 bytes need to be written, the system ends up reading and writing the full 4096 bytes. This inefficiency is often referred to as "overhead".
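As a toy illustration of that read-modify-write overhead (sizes taken from the example above; nothing here is drive-specific):

```python
import math

def rmw_bytes_moved(payload_bytes, fs_block=4096):
    """Total bytes read + written when the file system can only
    operate on whole blocks around a smaller payload."""
    blocks = math.ceil(payload_bytes / fs_block)
    return blocks * fs_block * 2  # each touched block is read, then rewritten

moved = rmw_bytes_moved(1024)  # the 1024-byte update from the example
print(moved, moved / 1024)     # 8192 bytes moved, 8x the payload
```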

In order for the OS to TRIM a drive, the drive needs to be programmed with firmware that recognizes the command.

Solving individual-sector secure erase may merely involve adding another command, similar to what they did with TRIM: the OS instructs the drive that it wishes to secure-erase a set of sectors, and in response the drive bypasses the normal wear-leveling for those sector writes.

Re @DiskTuna's comments: I've seen more expensive 2.5" computer SSDs marketed as "encrypted" at work, but I haven't come across a scenario where that would be valuable when you could plug one into any device and get what's on it (not locked to BIOS/firmware/computer, no user-entered boot-time password, OS-level partition encryption is utilized instead).
 

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
Re @DiskTuna's comments: I've seen more expensive 2.5" computer SSDs marketed as "encrypted" at work, but I haven't come across a scenario where that would be valuable when you could plug one into any device and get what's on it (not locked to BIOS/firmware/computer, no user-entered boot-time password, OS-level partition encryption is utilized instead).

I suppose encryption is useful in the event of a secure erase: all the drive needs to do is reset its translation tables and wipe the encryption key. That can be done in seconds and would indeed be very secure. It's not useful per se, I guess, in the way that for example BitLocker is, if the drive transparently decrypts on the fly, much as a scrambling drive transparently XORs and 'un-XORs' data. It would however prevent chip-off recovery, which is possible on most USB flash drives and memory cards: you can dump the NAND chips, but the data would be an utterly useless binary blob.
 

PHolder

Well-known member
Sep 16, 2020
Ontario, Canada
Solving an individual sector secure-erase may merely involve adding another command
This command already exists, but it STILL can't get around the laws of physics. If the page is 8K (or more) and you want to erase 512 bytes, that means you need to find a new home for the other data that is surviving, and then blast the now emptied page. This would in effect require two erase cycles (one for the new home and one for the securely erased old data.) The first erase can be managed in the background by the SSD if TRIM is being used and if the SSD has lots of room or spares. Either way, this extra process WILL impact the speed of your device at one point or another.
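A toy model of that relocation, under the assumed 8 KB page / 512-byte sector sizes (real controllers erase whole blocks of many pages, so this understates the work):

```python
ERASED = 0xFF  # NAND cells read back as all-ones after an erase

def secure_erase_sector(page, sector_idx, sector_size=512):
    """Destroy one sector: copy the survivors to a freshly erased page,
    then erase the old page outright. Returns (new_page, erase_cycles)."""
    new_page = bytearray([ERASED] * len(page))  # erase cycle 1: the new home
    start, end = sector_idx * sector_size, (sector_idx + 1) * sector_size
    new_page[:start] = page[:start]             # relocate surviving data
    new_page[end:] = page[end:]
    page[:] = bytes([ERASED] * len(page))       # erase cycle 2: blast the old page
    return new_page, 2

page = bytearray(range(256)) * 32               # one 8 KB page of sample data
new_page, cycles = secure_erase_sector(page, 0)
print(cycles)                                   # 2 erase cycles to kill 512 bytes
```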

 

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
This command already exists,

Interesting, this is new to me; do you have a reference for that so I can educate myself?

but it STILL can't get around the laws of physics. If the page is 8K (or more) and you want to erase 512 bytes, that means you need to find a new home for the other data that is surviving, and then blast the now emptied page.

It's even worse because that's only possible per block AFAIK.

This would in effect require two erase cycles (one for the new home and one for the securely erased old data.) The first erase can be managed in the background by the SSD if TRIM is being used and if the SSD has lots of room or spares.

If such a per-block secure erase were implemented, it would have to be executed immediately, or else it's not secure, or at least leaves a window of opportunity for an attacker to recover the data. In essence, everything the SSD does is handled in the background without us knowing what it does and when. TRIM is not a process; it's an ATA command that merely 'informs' the drive about sectors no longer needed. It's up to the drive to use that to help with garbage collection and such. The fact that a drive returns zeros when you read a sector does not mean the data has actually been zeroed.

Either way, this extra process WILL impact the speed of your device at one point or another.

Yes, agreed; in effect it would need to relocate an entire block's worth of data just to zap a single sector, if I am not mistaken.


Nice! I was made aware of this video which I also found very good:
 

PHolder

Well-known member
Sep 16, 2020
Ontario, Canada
I may be mistaken, I thought I had read there were some security options in the ATA spec (beyond the device sanitization ones) but I can't find that message after a quick search, and I don't have a copy of the ATA spec (they charge money for it.)

I did find this interesting article though: https://www.usenix.org/legacy/events/fast11/tech/full_papers/Wei.pdf

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
I may be mistaken, I thought I had read there were some security options in the ATA spec (beyond the device sanitization ones) but I can't find that message after a quick search, and I don't have a copy of the ATA spec (they charge money for it.)

I did find this interesting article though: https://www.usenix.org/legacy/events/fast11/tech/full_papers/Wei.pdf
From the PDF:

"Multiple copies This graph shows The FTL duplicating files up to 16 times."

Although this is expected to happen, I'm quite amazed by this number! The SSD is constantly moving data to new pages for various reasons, leaving stale data behind. At some point I assume it will fall victim to the garbage collector, but still. From the graph we see 16 was the maximum, but 8 to 12 copies are not exceptional.

Cool info.
 

Intuit

Active member
Dec 27, 2020
This command already exists, but it STILL can't get around the laws of physics. If the page is 8K (or more) and you want to erase 512 bytes, that means you need to find a new home for the other data that is surviving, and then blast the now emptied page. This would in effect require two erase cycles (one for the new home and one for the securely erased old data.) The first erase can be managed in the background by the SSD if TRIM is being used and if the SSD has lots of room or spares. Either way, this extra process WILL impact the speed of your device at one point or another.

https://flashdba.com/2014/06/20/understanding-flash-blocks-pages-and-program-erases/
Similar situations have been dealt with, going back decades. For one of many examples, the drive operates on 512-byte sectors but the file system reads and writes only in 4096-byte blocks. Even though only 1024 bytes need to be written, the system ends up reading and writing the full 4096 bytes. This inefficiency is often referred to as "overhead".

Depending on when and how garbage collection is implemented, there are potential performance consequences to running TRIM as well, which is why it is selectively scheduled.

Understand, NO ONE anywhere here has implied doing secure erases with each and every or even most operations. We already know there is "overhead" involved. Overhead is nothing new for storage media. From side-channel, to cache timing attacks, we've sacrificed a fair percentage of efficiency for additional security. In the grand scheme of things, wanting to selectively secure-erase a few dozen Word, Excel, PowerPoint and PDF docs occupying 100MB of space isn't going to have a noticeable performance impact; temporary or permanent.
 

Intuit

Active member
Dec 27, 2020
I may be mistaken, I thought I had read there were some security options in the ATA spec (beyond the device sanitization ones) but I can't find that message after a quick search, and I don't have a copy of the ATA spec (they charge money for it.)

I did find this interesting article though: https://www.usenix.org/legacy/events/fast11/tech/full_papers/Wei.pdf
Portions of that are right up the alley of this discussion. Good find. Personally, I don't elect to use the 100% "NSA can't get my data" option for secure delete on HDD. My only aim is to impact the casual user's ability to recover data using a simple undelete utility. In other words, the once-over works for me. Each user can make their own decisions.

<<..... Enabling single-file sanitization requires changes to the flash translation layer that manages the mapping between logical and physical addresses. We have developed three mechanisms to support single-file sanitization and implemented them in a simulated SSD. The mechanisms rely on a detailed understanding of flash memory's behavior beyond what datasheets typically supply. The techniques can either sacrifice a small amount of performance for continuous sanitization or they can preserve common case performance and support sanitization on demand. We conclude that the complexity of SSDs relative to hard drives requires that they provide built-in sanitization commands. Our tests show that since manufacturers do not always implement these commands correctly, the commands should be verifiable as well. Current and proposed ATA and SCSI standards provide no mechanism for verification and the current trend toward encrypting SSDs makes verification even ........>>
 

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
Similar situations have been dealt with, going back decades. For one of many examples, the drive operates on 512-byte sectors but the file system reads and writes only in 4096-byte blocks. Even though only 1024 bytes need to be written, the system ends up reading and writing the full 4096 bytes. This inefficiency is often referred to as "overhead".

Yeah, but SSD, or better said flash, is different. On a conventional hard drive you read 4096 bytes and write them back to the exact same physical location. On an SSD, by definition, you cannot write back to the same location*. So the operation requires an erased page, and the page that was read from becomes stale and has to be erased at a later time.

*Edit: You could, but it means you need to erase the entire block first.

Depending on when and how garbage collection is implemented, there are potential performance consequences to running TRIM as well, which is why it is selectively scheduled.

You can run TRIM as often as you want without negative consequences. TRIM merely informs the drive about LBA sectors that are of no interest. It does not mean the drive will erase them over and over.

Understand, NO ONE anywhere here has implied doing secure erases with each and every or even most operations. We already know there is "overhead" involved. Overhead is nothing new for storage media.

But for flash memory with finite erase/program cycles, that overhead, or write amplification, does have consequences.

From side-channel, to cache timing attacks, we've sacrificed a fair percentage of efficiency for additional security. In the grand scheme of things, wanting to selectively secure-erase a few dozen Word, Excel, PowerPoint and PDF docs occupying 100MB of space isn't going to have a noticeable performance impact; temporary or permanent.

I'd like to bring to your attention the bit I got from the PDF:

"Multiple copies This graph shows The FTL duplicating files up to 16 times."

In essence it means that secure-erasing an individual file has little purpose if it can potentially be recovered from stale pages (in a data recovery lab).
 

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
Personally, I don't elect to use the 100% "NSA can't get my data" option for secure delete on HDD. My only aim is to impact the casual user's ability to recover data using a simple undelete utility.

With modern OS TRIM support this is already practically the case. My buddy Krzys demonstrates:


I'd like to add that even if those drives had not yet actually erased anything, reading the LBA addresses associated with the deleted files will almost immediately result in the drive returning zeros, without it even actually reading the flash.
 

PHolder

Well-known member
Sep 16, 2020
Ontario, Canada
the drive operates on 512-byte sectors but the file system reads and writes only in 4096-byte blocks. Even though only 1024 bytes need to be written, the system ends up reading and writing the full 4096 bytes. This inefficiency is often referred to as "overhead".
This is not how flash works. On an HDD you can overwrite a block at any point with ease. You just need to do some head positioning, turn the write head on at the right time, and blammo: old data gone, new data written.

Flash is more like an 8-track tape, in that there is no rewind. You can't update a block of flash (there is no overwrite). You need to fully erase the whole block, and then you can write the whole block at once. The erase of a block is power-hungry and slow. In modern drives it is done in the background so that empty blocks are sitting waiting for write operations. TRIMming a block puts it into the pool of blocks to be erased for reuse in the future. Erasing a block also reduces its lifespan. Depending on the type of flash (SLC, MLC, etc.), the number of erase cycles can be low enough that you don't want to be wasteful, or your drive will run out of reusable blocks before its rated lifetime.
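A toy flash translation layer makes the point concrete; note the "overwritten" data is still sitting intact in a stale page afterwards. This is a sketch of the behavior described above, not any real controller's logic:

```python
class ToyFTL:
    """Out-of-place updates only: each write of a logical sector goes to a
    fresh physical page; the superseded page is marked stale, not erased."""
    def __init__(self):
        self.flash = []     # physical pages (append-only until garbage collection)
        self.map = {}       # logical sector number -> physical page index
        self.stale = set()  # pages holding old, superseded, but intact data

    def write(self, lba, data):
        if lba in self.map:
            self.stale.add(self.map[lba])  # the "overwritten" data survives here
        self.flash.append(bytes(data))
        self.map[lba] = len(self.flash) - 1

    def read(self, lba):
        return self.flash[self.map[lba]]

ftl = ToyFTL()
ftl.write(7, b"secret v1")
ftl.write(7, b"secret v2")                # an "overwrite" from the OS's point of view
print(ftl.read(7))                        # b'secret v2'
print([ftl.flash[i] for i in ftl.stale])  # [b'secret v1'] -- still physically present
```

This is why an SDelete-style overwrite pass on a single file gives no guarantee on flash: the old pages stay readable until the drive gets around to erasing them.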
 

Intuit

Active member
Dec 27, 2020
Yeah, but SSD, or better said flash, is different. On a conventional hard drive you read 4096 bytes and write them back to the exact same physical location. On an SSD, by definition, you cannot write back to the same location*. So the operation requires an erased page, and the page that was read from becomes stale and has to be erased at a later time.

*Edit: You could, but it means you need to erase the entire block first.
I pretty much said that in my opening post; that's called wear-leveling. Yes, I already knew it does an erase before write, which is irrelevant to any point I've communicated.
You can run TRIM as often as you want without negative consequences. TRIM merely informs the drive about LBA sectors that are of no interest. It does not mean the drive will erase them over and over.
As mentioned prior, the SSD must erase before write. When garbage collection runs, all sectors queued by TRIM are then erased. That erasing has a performance impact. Depending on when and how garbage collection is implemented, there are potential performance consequences to running TRIM as well, which is why it is selectively scheduled.

But for flash memory with finite erase/program cycles overhead or write amplification does have consequences.
Understand, NO ONE anywhere here has implied doing secure erases with each and every or even most operations. We already know there is "overhead" involved. Overhead is nothing new for storage media. From side-channel, to cache timing attacks, we've sacrificed a fair percentage of efficiency for additional security. In the grand scheme of things, wanting to selectively secure-erase a few dozen Word, Excel, PowerPoint and PDF docs occupying 100MB of space isn't going to have a noticeable performance impact; temporary or permanent.

I'd like to bring to your attention the bit I got from the PDF:

"Multiple copies This graph shows The FTL duplicating files up to 16 times."

In essence it means that secure-erasing an individual file has little purpose if it can potentially be recovered from stale pages (in a data recovery lab).
As with many recent security-exploit mitigations, designers of a new secure-erase protocol or command would obviously have to take all forms of caching into account.
 

Intuit

Active member
Dec 27, 2020
37
8
This is not how flash works. On an HDD you can overwrite a block at any point with ease. You just need to do some head positioning, turn the write head on at the right time, and blammo: old data gone, new data written.
That is obvious. The point you're avoiding is that the performance impact is a non-issue for the given scenario. If you're going to the extreme of trying to secure-erase everything, then sure. But who here stated or implied that?
Flash is more like an 8-track tape, in that there is no rewind. You can't update a block of flash (there is no overwrite). You need to fully erase the whole block, and then you can write the whole block at once. The erase of a block is power-hungry and slow. In modern drives it is done in the background so that empty blocks are sitting waiting for write operations. TRIMming a block puts it into the pool of blocks to be erased for reuse in the future. Erasing a block also reduces its lifespan. Depending on the type of flash (SLC, MLC, etc.), the number of erase cycles can be low enough that you don't want to be wasteful, or your drive will run out of reusable blocks before its rated lifetime.
So there's overhead to secure erase. Again, we knew that already. The research article you linked to stated: "... The techniques can either sacrifice a small amount of performance for continuous sanitization or they can preserve common case performance and support sanitization on demand. ..."