
Spare track percentage on a hard disk?




A group of us are refurbishing donated disused laptops for disadvantaged school kids. To reassure donors, I need to quantify the risk of personal data surviving DBAN in spared-out tracks. I'm pretty sure it's going to be very small, but I could do with some figures to justify that. Can anyone tell me what proportion of a typical hard drive's capacity is hidden and reserved as spare tracks? I thought MHDD might tell me if I rummaged through its more obscure options, but apparently not. I'm sure Steve would know if he stumbles across this.




Thank you DiskTuna - that may well tell me the number of tracks that have been spared on a particular disk (though I understand the interpretation of SMART data is manufacturer-specific and not published, hence not necessarily reliable). But that's not actually what I asked.

I might tell a donor of a device that the risk is "extremely low" (or whatever) and ask them to trust me. That's almost certainly true, but without figures I can't justify it. What is the worst case, where nearly all the available spare tracks have been used? What percentage of the disk's capacity does that represent, as bad tracks which may contain recoverable data not wiped by DBAN?




Thank you again, but forgive me - you still haven't understood what I'm getting at. I'm not looking at a specific hard disk but at the general case, in order to write a reasoned policy and procedures document. Note that DBAN is not acceptable for UK Government use because it only cleans LBA-addressable sectors. You have to use Blancco or some other accredited utility, which also wipes spare sectors and ex-LBA sectors that have gone bad and been spared out (grown defects). That's also why Steve has Beyond Recall slated as a future project: his knowledge of the ATA command set would allow him to do a much more thorough job than DBAN.

Donor: I've got this laptop I'd give you but I'm worried about my personal data.

Me: Don't worry, we'll wipe it with DBAN. But I have to tell you there is a very small risk of residual data remaining.

Donor: <Sharp intake of breath> Hmmm... Can you quantify it? I'm a bit paranoid and need to know just what the risk is before I agree to let you touch it.

Me: Well, if a sector mis-reads, even if it's only a transitory problem due to static, the disk may mark the sector (containing your data) as bad and reallocate it. Forensic programs might still be able to read it.

Donor: Err, well, we get quite a lot of static. My disk is fairly full, but I reckon only a proportion n of my data might be sensitive. So if the spare sectors are a proportion m of the advertised size of the disk, then in the worst case, where it's used nearly all of its spare sectors, the chance is no greater than n * m that some of my sensitive data might survive. I reckon a one-in-a-million chance might be within my risk appetite. So what exactly is the value of "m"?

Me: Not actually quite sure about that. I might have to ask @Steve.
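The donor's n * m bound can be sketched numerically. This is a hedged illustration only - both input figures below are placeholders, not measured values:

```python
# Worst-case survival-risk bound from the donor's n * m argument.
# The 10% and 0.05% figures are illustrative placeholders.

def worst_case_risk(sensitive_fraction: float, spare_fraction: float) -> float:
    """Upper bound on the chance that some sensitive data survives in
    spared-out sectors, assuming every spare sector has been used and
    sensitive data is spread uniformly across the disk."""
    return sensitive_fraction * spare_fraction

# e.g. 10% of the data is sensitive, spares are 0.05% of capacity
risk = worst_case_risk(0.10, 0.0005)
print(f"worst-case risk bound: {risk:.6f}")  # 0.000050, i.e. about 1 in 20,000
```

The point of the bound is that it only needs "m" to be pinned down; "n" is something each donor can estimate for themselves.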
Then I suppose I don't understand your problem, or you don't understand my answer. I know it's not what you asked, but I tried to explain why knowing the number of spare sectors is useless knowledge for the problem at hand. The only useful information is whether sectors were actually reallocated or not.
SPARE SECTOR POOL: excess sectors reserved to replace bad sectors. These are OUTSIDE the user-addressable LBA space and can therefore not be accessed by DBAN. This is NOT a problem, as these sectors are EMPTY.
- unless of course SpinRite has determined that the spared sector is OK after all and un-spared it!
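The "were sectors actually reallocated" question can be checked with SMART attribute 5 (Reallocated_Sector_Ct), e.g. via smartmontools' `smartctl -A /dev/sda`. A minimal parsing sketch - the sample output below is illustrative, not from a real drive:

```python
# Hedged sketch: extract SMART attribute 5 (Reallocated_Sector_Ct) from
# `smartctl -A` output. On a real system you would capture the stdout of
# `smartctl -A /dev/sda` (smartmontools) instead of using SAMPLE.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   095   095   000    Old_age   Always       -       23541
"""

def reallocated_count(smart_output: str) -> int:
    """Return the raw value of SMART attribute 5, i.e. how many sectors
    the drive reports it has actually spared out."""
    for line in smart_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "5":
            return int(fields[-1])  # RAW_VALUE is the last column
    raise ValueError("attribute 5 not found")

print(reallocated_count(SAMPLE))  # 0 -> nothing has been spared out on this drive
```

As noted upthread, the raw-value encoding is manufacturer-specific, so a zero here is reassuring but a nonzero value needs vendor documentation to interpret precisely.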




The question is: a reallocated sector will hold what, 4K, 8K, 16K of data? What part of a sensitive file of the most common document types (ZIP, JPEG, any office document) will you actually be able to use from just a 4/8/16K section, even of the smallest document? The smallest OpenOffice document I have is 20K, and it is basically a blank table with a touch of text. And as most document types these days are stored as some form of compressed archive, you will not get much out of them: only the header and the first part would survive in the recovered sector, and it would have to be the part with the header to be usable in any form.

You are not going to recover much from a tiny snippet, especially as you will have no context as to file type or where it sat in the original file. It might be enough to prove that a certain file was there, but the overall chance of it being useful is vanishingly small. If you are worried about this, it is trivial to use full-disk encryption, or on Linux to use /home encryption, which only encrypts the data you store, not the whole OS - which has a good speed advantage, even on modern hardware.

Yes, it's good to erase all data, but in most cases the scraps are almost unusable. I would worry more about scraps of data left in slack space: most OS versions use a buffer to store data before it is written to disk, and the buffer is often not fully cleared after completing a write, so every partial-cluster write faithfully copies out the stale, uncleared data that was there before. The final bits of log files, which Windows and every other OS are so determined to keep, always contain a chunk of whatever was written previously, and these scraps can hang around for a while in the tails of the log files.




Having heard every SN episode since #1, I'm sure Steve has said that if a track is automatically spared out by a drive as a result of a transitory error, SpinRite level 4 can test the original track and, if it looks good after all, can (or can tell the drive to) "unspare" the track. In that case you would have a non-LBA sector potentially containing user data.

Level 4 saves the content of a track before hammering it. Since every LBA sector potentially contains user data, it can only save it to a non-LBA track - it has nowhere else. If you pull the plug while it's doing so, your data won't be lost, because the drive has remapped that sector to the spare sector where SpinRite (or the drive, at SpinRite's prompting) put it. Blancco and other government-accredited tools can similarly access and wipe non-LBA sectors, presumably in the same way as SpinRite accesses them (and BeyondRecall will), whereas DBAN runs under a Linux kernel and accesses a disk through the Linux drivers, which only give you access to LBA sectors.

As a retired government-accredited security consultant, it was my job to be paranoid so I could tell my clients when they didn't need to be. Or, more often, when they needed to be a good deal more paranoid than they usually were! In those circles, we were looking for provably secure solutions. You had to assume that any bit of magnetic coating that could possibly store data potentially would. That's why Blancco is accredited but DBAN isn't.

Obviously (and you don't need to tell me), that's a million miles from the situation with our charity, but habits die hard. For my own satisfaction and integrity I still want to know how many spare tracks there typically are which might, however remotely, contain user data. (And I still want to know whether there's intelligent life amongst the stars. What practical use would the answer be to me? Absolutely zero.) You'd never have thought Heartbleed or Spectre/Meltdown could possibly leak private keys, but they can. Everything that can possibly happen, will, given long enough.




This thread has become nugatory. You seem to have no interest in understanding my point of view, only in berating my intelligence.

One tiny point before I un-watch it: the same national intelligence agency that says it trusts Blancco but not DBAN also says that ATA Secure Erase is not reliably implemented on all drives and hence no better (possibly worse?) than DBAN.

And I've never mentioned bad clusters, and I'm perfectly well aware that they are at a totally different level of data abstraction.




Someone on another forum provided a meaningful answer: on an 8TB Seagate drive, a proprietary utility reported spare sectors numbering roughly 0.045% of the LBA size. Whilst a sample size of 1 has its limitations, that's a good starting point.
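For scale, the single reported data point above works out as follows (a back-of-envelope calculation, resting entirely on that one unverified 0.045% figure):

```python
# Back-of-envelope: absolute spare capacity implied by the one reported
# data point (0.045% of LBA size on an 8 TB drive). Not a general figure.
capacity_tb = 8
spare_fraction = 0.00045

spare_bytes = capacity_tb * 1e12 * spare_fraction
print(f"~{spare_bytes / 1e9:.1f} GB of spare capacity")  # ~3.6 GB
```

So even this tiny percentage is a few gigabytes in absolute terms on a large modern drive, which is exactly why quantifying "m" matters for the policy document.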

But people have kept telling me there's no way of accessing non-LBA sectors. That's at variance with two facts I've picked up from Steve talking in depth about SpinRite some years ago:
  1. SpinRite is coded ultra-defensively, so that even if you pull the plug on Level 4 at any time, Steve guarantees you'll not lose any data. So where does he save a sector he's working on, if not in a non-LBA sector? (I'm not talking about pulling the plug on the drive - only on SpinRite.)
  2. Steve has long had an ambition to implement BeyondRecall because he claims he can sanitise areas that tools like DBAN can't. DBAN sanitises all LBA sectors, so what else?
So what deep magic does he use, or are all those nay-sayers wrong? Looking at ATA/ATAPI-7 V1 I don't see it.




so that even if you pull the plug on Level 4 at any time, Steve guarantees you'll not lose any data.
This is not possible to guarantee. Modern drives have caches, and they won't listen to Steve or anyone else telling them how to disposition their data. If you allowed the caches to be fully disabled, then the drive would be so slow you'd never want to use SpinRite on it.




A simple way to implement the storage is to use unallocated clusters - either the spare sectors used to align a partition table, or just file-system free space. Those are available and unused, so they are a convenient place to store scratch data: you write a block there, run the suite of tests, write the data back from a memory copy afterwards, then write the next block to the on-disk scratch area. That way you have a block on disk holding the current block being worked on, its data, and likely a magic signature identifying it to SpinRite.

The only time this breaks down is on a drive with an unknown file system; there, the default is likely not to write any scratch data and simply to hope there is no power loss during the operation. Though of course Steve likely also hooks an NMI for power-fail detection, to do a last-ditch write of the buffer to the disk plus a flush-cache command on detecting power loss, so that there are at least 50 ms of power available for the drive to complete the writes, and then enough time for the drive to return a completed status before it is commanded to shut down, instead of the OS simply abandoning the drive to an uncommanded power-down.
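The scratch-block-plus-magic-signature scheme described above can be sketched in a few lines. This is a hypothetical layout for illustration only, not SpinRite's actual on-disk format; the disk is simulated with an in-memory buffer:

```python
# Sketch of a scratch-block recovery scheme: the block under test is
# copied to a scratch area with a magic signature and its original
# offset, so a recovery pass can find it after a power loss.
# Hypothetical layout, not SpinRite's real format.
import io

MAGIC = b"SCRATCHv1"

def stash(disk: io.BytesIO, scratch_off: int, src_off: int, size: int) -> None:
    """Copy `size` bytes from src_off into the scratch area, tagged with
    the magic signature and the original offset."""
    disk.seek(src_off)
    data = disk.read(size)
    disk.seek(scratch_off)
    disk.write(MAGIC + src_off.to_bytes(8, "little") + data)

def recover(disk: io.BytesIO, scratch_off: int, size: int) -> None:
    """After an interrupted run: if the signature is present, write the
    stashed data back to its original location."""
    disk.seek(scratch_off)
    header = disk.read(len(MAGIC) + 8)
    if header[:len(MAGIC)] != MAGIC:
        return                                   # nothing was in flight
    src_off = int.from_bytes(header[len(MAGIC):], "little")
    data = disk.read(size)
    disk.seek(src_off)
    disk.write(data)

# usage: stash block 0, simulate a crash mid-test, then recover it
disk = io.BytesIO(b"A" * 512 + b"\x00" * 1024)
stash(disk, scratch_off=512, src_off=0, size=512)
disk.seek(0); disk.write(b"\xff" * 512)          # "hammering" corrupts the block
recover(disk, scratch_off=512, size=512)
disk.seek(0)
print(disk.read(512) == b"A" * 512)              # True: original data restored
```

A real implementation would of course also need the flush-cache step after each write, for the reasons raised earlier in the thread about drive caches.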