Scanning large hard drives

Active member
Dec 6, 2020
Along with others on this forum, I am waiting for the new SR release. I have a pile of hard drives that are 6TB+ that I'd like to scan. I also have new hard drives that are arriving DOA, and I'm having to RMA those drives. I'm reading on other forums that DOAs are very, very common, unfortunately.

What can be done to scan/test these drives until the newest version of SR is available?


Edit - I am about 6 episodes behind on SN, which is where I get SR updates. I did see that Steve hasn't updated the blog on SR recently, but maybe he has given some updates on the podcast (I need to catch up).
I have a pile of hard drives that are 6TB+ that I'd like to scan. I also have new hard drives that are arriving DOA, and I'm having to RMA those drives. I'm reading on other forums that DOAs are very, very common, unfortunately.

What can be done to scan/test these drives until the newest version of SR is available?
A possible solution - but NOT elegant!

I presume that the data is not of concern here? If so, the drives could be partitioned into 2 TB (or smaller) partitions. Then SpinRite could scan them one partition at a time via an ATA/IDE controller. It would be rather slow. Then re-partition as desired when done.

As I said, not elegant.
A possible solution - but NOT elegant!

Yes, but depending on what is being scanned, repaired, and/or recovered, elegance might not matter, and it'll either be WORTH IT or, in the other case, NOT WORTH IT (until 6.1 and newer versions are ready)...
At a minimum, just use the drive in a case via USB and use SMART to run the full surface scan on it. It will take a few hours, but at least it will let you see whether the drive can read all of its sectors. Better than nothing.
Many drives will tell you how many reallocated, pending, and uncorrectable sectors are present right off the bat, without running a new scan. Many 2.5" Toshiba HDDs were (conveniently) bad about not reporting pending and uncorrectable counts, though; I think it's because they knew they had reliability issues with their products. Instead, I had to rely on the error log's time of occurrence versus the current power-on hours. Few SSDs seem to expose those pending and uncorrectable stats.

You might see whether SR can access the drive via a VM, though obviously some of its functions could be neutered, depending on the setup. I've tried several other tools, and the problem I always encounter is that they corrupt the data they've "recovered". SR seemed to be pretty good about not doing that, and when it couldn't recover something, you pretty well knew that the only recourse involved invasive methods (or maybe the data was just mis-recorded to start with).
Here are a few random-access thoughts. Forgive my naivete, but I didn't know SR couldn't scan drives over 2 TB; I don't own any bigger than that. These ideas are for Windows 7. Hopefully they also work on Windows 8, 10, and 11.

I always run a burn-in test on new drives, even SSDs. If SpinRite can scan the drive, I do a full Level 4 on it. Even if that uses up two drive writes, I think it's worth it to verify that every sector can be read and written and that all the parts are happy.

As someone else above stated, assuming you don't care about data and assuming you have Windows:

Create a bootable Linux memory stick and make sure it is formatted as NTFS or has an NTFS partition. In the past, I've used Mint and Ubuntu. There's a Linux command which can create a file of random gibberish; I don't remember it, as I did this years ago. Create gibberish files of 256 MiB, 512 MiB, 1024 MiB, 2048 MiB, 4096 MiB, and 8192 MiB with appropriately descriptive file names and dump them to the memory stick's NTFS partition.

(For reference, a MiB is 2^20 = 1,048,576 bytes, slightly more than a megabyte.)

There may be ways to do this in Windows as well.
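On Linux, the usual way to make a gibberish file is to read from /dev/urandom with dd, e.g. roughly `dd if=/dev/urandom of=gibberish_256MiB.bin bs=1M count=256`. A short Python script does the same thing on both Linux and Windows; this is just a sketch, and the file name here is made up:

```python
import os

CHUNK = 1024 * 1024  # 1 MiB of random bytes per write

def make_gibberish(path, size_mib):
    """Fill `path` with `size_mib` MiB of random data, 1 MiB at a time."""
    with open(path, "wb") as f:
        for _ in range(size_mib):
            f.write(os.urandom(CHUNK))

# For the real procedure you'd loop over [256, 512, 1024, 2048, 4096, 8192];
# a 1 MiB file keeps this demo quick.
make_gibberish("gibberish_1MiB.bin", 1)
```

Writing in 1 MiB chunks keeps memory use flat no matter how big the file gets.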

Shut down Linux, remove the memory stick, and boot Windows.

Make sure you know which drive you're about to erase.

From Windows Explorer, right-click on the drive YOU WANT TO ERASE and click Format. The NTFS file system should be fine. MAKE SURE it's the right drive. UNCHECK the box that says Quick Format. This will do a full format, which includes a read-only sector analysis. I THINK it also fills the drive with zeros.

For the following, I like to use multiple tiled Windows Explorer windows.

Make 4 test folders on the drive you're testing. Insert the memory stick and copy the ~16 GiB of junk files to one of the folders on the DUT (drive under test). You can then SAFELY remove the memory stick.

Then copy all those files on the DUT from folder 1 to folder 2. You can do this in Windows Explorer by selecting all the files in a folder and then holding the CTRL key while dragging them to the second folder. Then copy from folder 2 to folder 3, and from folder 3 to folder 4. Each time you do this, you're stressing the drive's read and write circuits and filling more sectors with random data, not just zeros.

Now the procedure changes a bit. Select and copy all the files from folder 1 to folder 2 (again). When Windows complains about duplicate file names, select the option to COPY BUT KEEP BOTH FILES. Also select the check box that says DO THIS FOR ALL CONFLICTS. This will copy all the files again and give the duplicates new names. Now copy from folder 2 to folder 3 in the same way. Copy from folder 3 to folder 4 the same way.

Once this is done, you will have gone from using ~48 GiB to using ~96 GiB. The used space will double each time you go through this cycle. All this copying will take a LONG time, even on a SATA bus; don't even consider USB unless you have USB 3. Once the drive starts having to do massive sector erasing and garbage collection, the process may slow down. Write caching should be on to make the process as efficient as possible.

Some of you coders may be able to automate this stuff.

At some point, you will get to a state where you cannot double the space you're using. Right-click on the DUT in Windows Explorer, click Properties, and determine how much space you have left. On your next round of copies, select just enough junk files to fill that space. After that, you will have a small amount of space left, so on the next round select just enough junk files to fill it. Keep this up until you've filled the drive down to, say, less than 1 MiB of free space.
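If you'd rather not hand-pick files for those last rounds, the tail end can be scripted too. A sketch (the filler.bin name is mine): keep appending random data to one file until free space drops to roughly the amount you want left.

```python
import os
import shutil

def fill_remaining(dut_root, leave_bytes=1024 * 1024, chunk=1024 * 1024):
    """Grow one junk file until free space on dut_root drops to ~leave_bytes."""
    path = os.path.join(dut_root, "filler.bin")
    with open(path, "wb") as f:
        while shutil.disk_usage(dut_root).free > leave_bytes + chunk:
            f.write(os.urandom(chunk))
            f.flush()  # let the filesystem's free-space figure catch up
    return path

# demo: instead of really filling the disk, leave all but ~4 MiB of the
# current free space alone, so only a few chunks get written
demo_leave = shutil.disk_usage(".").free - 4 * 1024 * 1024
fill_remaining(".", leave_bytes=demo_leave)
```

On the real DUT you'd call it with the drive root and the default leave_bytes, which stops at about 1 MiB free.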

Some of you geeks might have said: just use SDELETE to fill the free space. You could, but it turns out SDELETE is really slow at filling free space. You could use it to fill the last 1 MiB of space, etc. For those that don't know, SDELETE is a separate Sysinternals utility downloadable from Microsoft.

At this point, you have a big drive full of files of all sizes. Right-click on the DUT in Windows Explorer, click Properties, click Tools, and click Check Now. This will run CHKDSK. Uncheck the box that says automatically fix errors. CHKDSK should show no problems.

At this point, you've gone a really long way to show that the drive is mechanically, electronically, and firmware sound. There are some steps you could do to go further if you wish, most of which I haven't done, other than the first one.

* If the drive is an SSD, load the manufacturer's own analysis software (e.g., Samsung Magician). Run a long drive test and check for the latest firmware. I didn't think about it at the beginning of the post, but remember that SSDs should have at least 10% of their space unallocated; Samsung calls this overprovisioning. You can't overprovision the drive if it is totally full and not already overprovisioned, so you may have to delete some files first. If you do that, empty the Recycle Bin or they won't actually go away.

* You could write a batch / PowerShell / other language script to calculate the SHA-256 (etc.) hash of each file in the test folders, or maybe just the small ones, and compare them. Since every file is a copy of one of the original gibberish files, all files of the same size should have the same hash.
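A minimal Python version of that check (folder names here are just examples): it streams each file through SHA-256 and verifies that every size group collapses to a single hash.

```python
import hashlib
import os
from collections import defaultdict

def sha256_of(path, chunk=1024 * 1024):
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def check_folder(folder):
    """Group files by size; every group should contain exactly one hash."""
    hashes_by_size = defaultdict(set)
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            hashes_by_size[os.path.getsize(path)].add(sha256_of(path))
    return {size: len(h) == 1 for size, h in hashes_by_size.items()}

# demo: two identical 4 KiB files -> one size group, one hash
os.makedirs("hashdemo", exist_ok=True)
for name in ("a.bin", "b.bin"):
    with open(os.path.join("hashdemo", name), "wb") as f:
        f.write(b"\x42" * 4096)
print(check_folder("hashdemo"))  # → {4096: True}
```

A False anywhere in the result means two same-size copies no longer match, i.e. something got corrupted along the way.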

* You could have the virus scanner scan all the smaller files, which, of necessity, will have to read them. Scanning really huge files will take a really long time or possibly crash the scanner.

* You could boot the Linux system and copy every single file on the drive to /dev/null. Make sure there are no read errors.
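The same end-to-end read pass can be done from any OS; a Python sketch (the folder path would be whatever mount point or drive letter the DUT has):

```python
import os

def read_back(folder, chunk=1024 * 1024):
    """Read every file under `folder` end-to-end, collecting any I/O
    errors. Equivalent in spirit to copying each file to /dev/null."""
    errors = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    while f.read(chunk):
                        pass
            except OSError as exc:
                errors.append((path, exc))
    return errors

# demo: a freshly written file should read back with no errors
os.makedirs("readdemo", exist_ok=True)
with open(os.path.join("readdemo", "x.bin"), "wb") as f:
    f.write(os.urandom(4096))
print(len(read_back("readdemo")))  # → 0
```

An empty error list means every sector holding those files was readable.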

All this will do even more to stress the drive, along with potentially overheating it. But if the drive is good and the PC's cooling system is good, it shouldn't be a problem. Actually, I've been known to stress test my PCs by running the CPU at 100% with Prime95 AND running AV scans, etc. Neither the CPU temperature nor the SSD/HDD temperature should go out of bounds.

* You can use CrystalDiskInfo to show stats, temperature, and SMART data for the drive. But some drive makers use non-standard SMART attributes. The manufacturer's analysis software may also show SMART data.

* You can use CrystalDiskMark or similar to run benchmarks on the drive.

If it survives all that, it's probably good to go!

Well, those are my thoughts; this got longer than I anticipated. As mentioned earlier in the thread, only you can decide if it's "worth" it.

May your bits be stable and your interfaces be fast. :cool: Ron