I've just (within the last week or so) started running Spinrite 6.1 pre-release 5 on several of my 4 TB to 8 TB spinning-rust drives.
Presently, for example, I have less than 3 hours remaining on a Level 4 scan of an 8TB WDC WD80EZZX. All my drives have been clean, both according to their SMART data and Spinrite.
The only "glitch" I notice in this 6.1 pre-release is that Spinrite is overly 10 to 20 % optimistic in how long it will take to finish a scan, and only slowly, and rather predictably adjusts.
The present scan started out telling me to expect completion at a time that is now 4 hours in the past, and it has gradually pushed that out to the 3 hours that still remain. In other words, the initial estimated scan time was about 42 hours (I forget the exact figure), and it now looks set to finish in a total of about 49 hours.
It's understandable to me that the very first estimate could easily be off by 10 or 20%, but I would expect the estimate to become accurate more quickly as Spinrite gathers more data on the drive in question (assuming nothing difficult happens that requires extra work on Spinrite's part).
My (wild) speculation is that either:
1) the estimates are not taking into account the lower transfer rate (bytes per second) on the inner tracks (the short sketch after this list illustrates that effect), or
2) the early estimates of how many blocks per second can be scanned, for that drive at that level, are not refined as the scan proceeds.
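To get a feel for how much the first effect alone could matter, here's a toy model in Python. It is purely illustrative and is not anything from Spinrite's code: it just assumes a drive whose effective Level 4 scan rate falls linearly from the outer zones to the inner ones, and both rate numbers are invented, chosen only so the resulting times land near the 42-hour and 49-hour figures above.

# Toy model (not Spinrite's actual estimator): a drive whose effective
# Level 4 scan rate falls roughly linearly from the outer zones to the
# inner ones, as happens with zoned recording. Both rate numbers are
# invented, picked only so the times land near the figures quoted above.
OUTER_RATE = 55.0        # assumed effective scan rate at the outer edge, MB/s
INNER_RATE = 40.0        # assumed effective scan rate at the inner edge, MB/s
CAPACITY_MB = 8_000_000  # roughly 8 TB, expressed in MB

def rate_at(fraction_done):
    """Assumed scan rate once `fraction_done` of the capacity has been covered."""
    return OUTER_RATE + (INNER_RATE - OUTER_RATE) * fraction_done

def true_total_hours(steps=10_000):
    """Numerically integrate time = capacity / rate across the whole surface."""
    chunk = CAPACITY_MB / steps
    seconds = sum(chunk / rate_at((i + 0.5) / steps) for i in range(steps))
    return seconds / 3600

# A naive estimate made after scanning only the first couple of percent
# extrapolates the (fast) outer-zone rate across the entire capacity.
naive_hours = CAPACITY_MB / rate_at(0.01) / 3600
print(f"naive early estimate: {naive_hours:4.1f} h")          # about 40.5 h
print(f"actual total time:    {true_total_hours():4.1f} h")   # about 47.2 h

With these made-up numbers the early extrapolation comes out roughly 15 to 20% optimistic, the same ballpark as what I'm seeing.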
I've long used a simple filter to blend such changing estimates smoothly. For example, every so often (whenever is convenient for the code in question), update the "New Estimate" to be 15/16 of the "Old Estimate" plus 1/16 of the latest observation. I've been using this method since I first read of it, long ago, in Richard Hamming's excellent book "Digital Filters" (1977). With a little math, the 15/16 fraction can be adjusted to give each sample's contribution to the running average a more suitable half-life, or one can just wing it.
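For what it's worth, here is a minimal sketch of that kind of filter in Python, just to illustrate the idea; it is not Spinrite code, and the sample measurements are made up.

import math

def ewma_update(old_estimate, observation, weight=1/16):
    """One smoothing step: new = (1 - weight) * old + weight * latest sample."""
    return (1.0 - weight) * old_estimate + weight * observation

def weight_for_half_life(half_life_updates):
    """Pick the blending weight so a sample's contribution is halved after
    `half_life_updates` further updates, i.e. solve (1 - w)**n = 1/2 for w."""
    return 1.0 - 0.5 ** (1.0 / half_life_updates)

# The 15/16 : 1/16 split gives each sample a half-life of
# ln(2) / ln(16/15) updates, a bit under 11.
print(math.log(2) / math.log(16 / 15))   # ~10.74
print(weight_for_half_life(10.74))       # ~0.0625, i.e. about 1/16

# Smoothing a noisy blocks-per-second measurement (made-up samples):
rate_estimate = 120.0
for measured in (118.0, 131.0, 97.0, 125.0):
    rate_estimate = ewma_update(rate_estimate, measured)
    print(round(rate_estimate, 2))

The weight_for_half_life helper is the "little math" referred to above: solving (1 - w)^n = 1/2 shows that the 15/16 : 1/16 split halves each sample's influence on the running average after roughly 10.7 further updates.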
This is certainly not a show-stopper for 6.1 ... so it's totally fine by me if this observation is tabled until Spinrite 7.0.