Interpreting DynaStat Behavior


New member
Nov 13, 2023
I've been running a Level 2 scan on a 500 GB Toshiba MQ01ABF050 for 51+ hours and am only at 3.793% completion. So far it has declared 616 sectors not recoverable and 2 recovered, and has logged 2 command timeouts and 1 comm/cable error (that one occurred in the first several minutes). This drive was in a laptop, and I suspect it saw normal to moderate abuse. But Windows ultimately stopped booting from it, so it went into storage waiting for the day SpinRite could process GPT disks.

Of those 600+ sectors there have so far been 15 contiguous regions for which DynaStat was invoked. A few of these were a single sector. One region spans 365+ sectors and counting. I can give a more detailed list or screenshots if that helps sharpen the picture.

Anyway, I have a few questions about how DynaStat behaves and how that might relate to the condition of the drive.

1. For nearly all of the sectors for which I've seen DynaStat invoked, the line on the graph stays entirely straight, sitting one notch below the question-mark horizon. Data samples and reads attempted increment in lockstep until Recovery times out, while unique samples stays locked at 1. I've also never seen numbers fill in for the first, last, and span of uncertain bits. DynaStat continues to display "Defective Region Search in Progress" until it times out. Note: when it does hit sectors that read just fine, the DynaStat display looks like what you'd expect.
What might this mean?

2. Sometimes, thanks to Steve's use of the speaker, I can hear the samples being read at different rates while in recovery mode. It's usually 2-4 samples per second on this drive, but sometimes it's as slow as 1 sample every 4 seconds. One sector will poll at 3 per second, and the very next sector polls once every 3 seconds.
Does SpinRite intentionally modulate its sample rate, or is it always issuing commands at full speed? In other words, how much can this tell me about the drive?
If you really don't care about the data on the drive, you can lower the effort that DynaStat will put into it. You can lower it all the way to 0, and it will then make no recovery attempt whatsoever, just writing to the LBA to get the drive to attempt its own recovery/relocation. It's a command-line-only option, though. Ask for help on the command line to get some info.
DynaStat cannot possibly work for any drive produced in the last 30 years. That's because the data that SpinRite thinks it is writing to the drive looks nothing like the scrambled data that is actually written to the platters.
I don't know who you are or why you think that, but that's not the least bit true. There is a massive body of direct evidence to the contrary and a nearly endless stream of users who can attest to the fact that DynaStat can be and often is highly effective in performing data recovery.
@fzabkar You seem to have an axe to grind. Please state for the record who you are and what your credentials are, or explain why you're being so particularly disrespectful. If you continue on the path you're on, you're going to find your participation here limited.
I am an electrical engineer who has worked with hard drives at the component level since the early 1980s. I saw SpinRite way back in 1990 when it was a useful tool, so I know what it can and can't do. The only person who has been "so particularly disrespectful" is Steve Gibson himself. This "ignorant Internet troll" is merely pointing out the errors and flaws in SpinRite, and I have backed up my statements with documentation from the actual manufacturers. In fact, just about every claim that is made about SpinRite today can easily be refuted.
And you are here, why?
Hey @fzabkar...

Before I saw this most recent post of yours (I had seen your previous read-channel mentions), I was planning to explain that the apparent reason for this confusion was that you were assuming that SpinRite was still concerning itself in some way (in any way) with the flux reversal patterns on the magnetic media. And it was, indeed, doing that with version 3.1 back in 1993. It "knew" the designs of all of the various ENDECs of that era, by manufacturer, and used that awareness to design a system of user data patterns mapping to flux reversal patterns, which allowed it to deliberately place flux reversals at every location on a disc. I referred to this as SpinRite's “Flux Synthesis” technology, since that's exactly what it was, and your use of that term in your most recent note, above, confirmed that this was what you've been thinking about.
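For readers who weren't working with drives in that era, the ENDEC idea can be made concrete with MFM, one of the simplest encoders of the period (this is a toy sketch for illustration only, not anything from SpinRite): each user data bit becomes a clock-bit/data-bit pair, and every 1 in the encoded stream corresponds to a flux reversal, so a deliberately chosen user data pattern deterministically places reversals on the platter.

```python
def mfm_encode(bits):
    """MFM-encode a sequence of data bits.

    Each data bit is preceded by a clock bit; the clock bit is 1 only
    when both the previous and current data bits are 0.  Every 1 in
    the output corresponds to a flux reversal on the platter.
    """
    out = []
    prev = 0
    for b in bits:
        out.append(1 if (prev == 0 and b == 0) else 0)  # clock bit
        out.append(b)                                   # data bit
        prev = b
    return out

# The same user data always yields the same reversal pattern, which
# is what allowed a tool of that era to choose data patterns placing
# reversals at known physical positions.
print(mfm_encode([1, 0, 1, 1, 0, 0]))
# → [0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0]
```

Modern drives' scrambling and run-length-limited codes break exactly this determinism, which is the point being agreed on above.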

For exactly the reasons you've been elaborating, with which I agree 100%, SpinRite abandoned doing any of that decades ago. I'd have to go back through the post-v3.1 version history to see when I dropped all of that completely, but it was long ago.

As we both know, and as you have quite clearly demonstrated, there is now virtually no way to know what a drive is placing on its magnetic platters, so any attempt to "control" that is a fool's errand. You probably also know that drives back then often listed their manufacturer-located defects on a map printed on the outside of the drive. But OEMs of the era rarely bothered to enter those locations into the drive when it was being low-level formatted before delivery. So locating those spots was important, and SpinRite would move file-system data found in a defective location elsewhere. Also, when SpinRite was adjusting a drive's low-level format for optimum performance, the physical defects, known and unknown, would land in different logical sectors. So, again, SpinRite needed to locate those defective sectors and re-knit portions of the file system, since the logical-to-physical mapping had changed. Once IDE drives emerged and were (a) able to read entire tracks in a single revolution and (b) contained embedded servo data and could no longer be low-level reformatted, SpinRite dropped all of that re-interleaving technology as well.
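As a toy illustration of why re-interleaving forced that file-system re-knitting (the 17-sector track and the interleave factors below are classic MFM-era values chosen purely for illustration; this is not SpinRite's actual algorithm): changing the interleave factor moves every logical sector to a different physical slot, so a fixed physical defect lands under a different logical sector.

```python
def interleave_map(sectors, factor):
    """Return the logical sector occupying each physical slot of a track.

    With interleave `factor`, consecutive logical sectors are placed
    `factor` physical slots apart.  Requires gcd(factor, sectors) == 1
    so that every slot is used exactly once.
    """
    track = [None] * sectors
    for logical in range(sectors):
        track[(logical * factor) % sectors] = logical
    return track

# A classic 17-sector MFM track, at interleave 3 versus interleave 1.
# A physical defect at slot 5 corrupts logical sector 13 at 3:1,
# but logical sector 5 at 1:1 -- same platter flaw, different victim.
print(interleave_map(17, 3))
print(interleave_map(17, 1))
```

Any file-system structure addressed by logical sector therefore had to be found and relocated whenever the interleave was changed, which is the re-knitting described above.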

Another big change in drive technology since then, and through the intervening years, is the development of far more powerful and sophisticated error correction technology. Now we have interleaved ECC that's able to correct longer and multiple bursts of errors in a single block (with some limits). And we have longer (typically 4K) physical sectors, since ECC efficiency increases significantly as the block size being corrected is increased. As a result of this, where a defect was once a near death sentence for a sector of a drive's data, defects are now assumed and are taken in stride. Drives look at the length of the ECC syndrome and decide whether "the problem" has grown bad enough to merit sparing out and relocating the sector.
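A toy sketch of why interleaved ECC tolerates longer bursts (illustrative only; real drives use Reed-Solomon or LDPC codes over much larger blocks than shown here): spreading a contiguous burst round-robin across N codewords leaves each individual codeword with only a fraction of the errors to correct.

```python
def deinterleave(symbols, n):
    """Round-robin de-interleave a stream into n ECC codewords:
    codeword i receives symbols i, i+n, i+2n, ..."""
    return [symbols[i::n] for i in range(n)]

# A 6-symbol burst ('X') inside a 16-symbol stream, 4-way interleaved:
stream = list("abcdXXXXXXefghij")
codewords = deinterleave(stream, 4)

# No single codeword sees more than 2 of the 6 burst errors, so a
# code able to correct only 2 symbols per block still recovers all
# of the data, even though the raw burst is 6 symbols long.
for cw in codewords:
    print("".join(cw), "-", cw.count("X"), "errors")
```

This is why the paragraph above notes that ECC efficiency grows with block size: longer interleaved blocks dilute a given burst across more independent correction budgets.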

So, yeah, pretty much everything has changed in the 30 years since v3.1 of SpinRite... and with successive major versions, SpinRite has been changing with the times. I wrote here recently that SpinRite has been suffering from neglect (big time) and that the next few years of my life will be spent catching up. That's happened before, many times, and it'll happen again. Some of the work we did in the early days of v6.1 revealed surprising details that promise to be quite interesting to explore for v7.0.