ReadSpeed results on 2TB Raid spinning drives

Active member
Oct 8, 2020
Had to enable CSM, boot to legacy only, save and reboot, then run it.
ASUS ROG Maximus XI Formula

It successfully tested the 2 2TB Raid spinning drives.
The UEFI boot drive (1TB M.2) was ignored by ReadSpeed. I thought this was odd initially, though legacy mode is probably the cause. A USB-attached SD card was also ignored, which was expected.

Driv Size  Drive Identity     Location:    0      25%     50%     75%     100
---- ----- ---------------------------- ------- ------- ------- ------- -------
 81  2.0TB ST2000DM008-2FR102            220.3   214.1   189.8   155.9    99.0
 81  2.0TB ST2000DM008-2FR102            220.3   213.2   189.0   154.7   100.9

                  Benchmarked: Tuesday, 2021-01-05 at 13:37

It was weird updating the BIOS to use UEFI mode again, as the UEFI boot drive was not available to select as the primary boot device in the BIOS. It booted from it anyway, as the only UEFI boot device in the system.
Is your UEFI boot drive SATA or NVMe on an M.2 form factor? If it is SATA, it should be seen, if it is NVMe, that won't be seen by this version of ReadSpeed.
I did the same. My NVMe drive benchmarks in Linux Disks at 1500MB/s, compared with 140MB/s for the spinners. The spinners average something similar in ReadSpeed; I can't wait to try it on the NVMe.
Sorry for reviving this old thread, but I couldn't find any discussion about ReadSpeed and SMR drives. The ST2000DM008 is in fact an SMR model, and is probably a bad choice for a RAID.
DEFINITELY 100% agree about SMR and RAID. SMR is now being referred to as "for archiving" since shingling (overlapping adjacent tracks, as we know) doesn't do well in highly active writing environments.

ReadSpeed was born out of the early work on SpinRite's native drivers. We were initially confused by some of its results, which appeared to be “impossible” -- especially because, whereas we were expecting to see the typical declining performance as we moved further “back” in the drive, the ends of some drives were appearing to be much faster (as you wrote, near link speed). As we now know, this was because those nether regions had never been written to, so they were still “trimmed,” with no logical addressing mapped to the physical media.

SpinRite attempts to detect SMR drive technology (as it does SSD, for a similar reason) and caution its user when it sees one running any SpinRite level that performs gratuitous writing to the drive. But some drives (like that Seagate, for example) do not declare their SMR'ness in their Identify data. This discussion gave me an idea for heuristic (behavioral) detection, so I dropped myself a note in SpinRite's GitLab to remind me of the idea. (y)
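One way such heuristic detection might work (this is a speculative sketch, not SpinRite's actual algorithm, and every name here is illustrative): sustained sequential writes to an SMR drive run fast while its CMR-style media cache absorbs them, then slow sharply once the cache fills and shingled rewrites begin. Looking for that sustained step change in write latency would flag SMR behavior:

```python
# Hypothetical behavioral SMR detection: look for a sustained step increase
# in write latency, which suggests an SMR media cache filling up.

from statistics import median

def looks_like_smr(write_latencies_ms, window=8, slowdown_factor=4.0):
    """Return True if latencies show a sustained jump past
    slowdown_factor x the initial baseline (not a single outlier)."""
    if len(write_latencies_ms) < 2 * window:
        return False
    baseline = median(write_latencies_ms[:window])
    # Slide a window over the rest of the run; using the median means a
    # lone slow write won't trigger, but a sustained slowdown will.
    for i in range(window, len(write_latencies_ms) - window + 1):
        chunk = write_latencies_ms[i:i + window]
        if median(chunk) > slowdown_factor * baseline:
            return True
    return False

# Fast while the media cache absorbs writes, then a sustained slowdown:
smr_like = [2.0] * 16 + [12.0] * 16
cmr_like = [2.0] * 30 + [9.0] * 2   # a lone outlier shouldn't trigger
print(looks_like_smr(smr_like))   # True
print(looks_like_smr(cmr_like))   # False
```

A real implementation would also have to account for thermal throttling and host-side contention, both of which can mimic this latency step.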
Heuristic (behavioural) detection sounds useful. Could it also detect "single/multiple/total" head failure on spinners?

New Feature: Heuristic detection of Head Failure?

It makes so much sense that SMR drives would be a bit "hybridized" like that as a means of buffering (even a lot of) write data which would then later be transferred over to the slower-to-write but denser storage. Very slick. And you're right that if recently written data were still in the Media Cache, it would be read back from there rather than from the SMR region.
LBA-48 drives are supposed to be able to transfer 65,536 logical sectors at once. But during our early testing we found that some drives, lord only knows why, have trouble (presumably firmware trouble) as the requested count gets up near 65,536. They start stumbling. My goal was to transfer the largest possible blocks in order to minimize the per-transfer command overhead. (We had already verified that command queuing didn't buy any measurable performance, presumably because the transfers were linear and I was able to immediately initiate another transfer upon the completion of one, so the drive's own read-ahead would span that brief pause.) So rather than fight unpredictable non-spec drive behavior, I just cut ReadSpeed's (and SpinRite's) maximum transfer request size in half, to 32,768 logical sectors (16MB). We never had any more trouble with that, and saw no measurable decrease in performance.
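The capping described above amounts to simple chunking of each region into requests of at most 32,768 sectors. A minimal sketch (the function name and shape are mine, not ReadSpeed's):

```python
# Split a region into read requests of at most 32,768 logical sectors
# (16 MB at 512 bytes/sector), staying safely below the LBA-48 limit of
# 65,536 sectors per command that trips up some drive firmware.

SECTOR_BYTES = 512
MAX_SECTORS_PER_READ = 32_768          # half the LBA-48 per-command maximum

def plan_reads(start_lba, total_sectors):
    """Yield (lba, count) pairs covering the region in capped requests."""
    lba, remaining = start_lba, total_sectors
    while remaining > 0:
        count = min(remaining, MAX_SECTORS_PER_READ)
        yield (lba, count)
        lba += count
        remaining -= count

# A 100,000-sector region becomes four capped requests:
print(list(plan_reads(0, 100_000)))
# [(0, 32768), (32768, 32768), (65536, 32768), (98304, 1696)]
```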

I haven't looked at ReadSpeed in years, but as I recall it performs a constant-time benchmark: it reads as many 16MB blocks as it can fit into the prescribed time, then once the overall time is up it divides the number of sectors it was able to transfer by the precise length of time that took.

So I don't know how that translates into Seagate's, Samsung's, or other drives' zone recording, but it presumably does "smooth them out" by cruising through a great many of them, and thus represents real-world read performance for that general region of the drive.
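The constant-time bookkeeping described above can be sketched as follows; this is an assumed reconstruction from the description, not ReadSpeed's actual code, and the "transfer" here is simulated:

```python
# Constant-time benchmark sketch: issue fixed-size reads until a wall-clock
# budget elapses, then divide bytes moved by the exact elapsed time.

import time

BLOCK_BYTES = 32_768 * 512             # one 16 MB request

def benchmark_region(read_block, budget_seconds=0.05):
    """Call read_block() repeatedly until the budget elapses; return MB/s.
    read_block is a stand-in for one 16 MB transfer from the drive."""
    start = time.perf_counter()
    bytes_moved = 0
    while True:
        read_block()
        bytes_moved += BLOCK_BYTES
        elapsed = time.perf_counter() - start
        if elapsed >= budget_seconds:
            break
    # Dividing by the exact elapsed time (not the nominal budget) keeps the
    # result honest even when the final block overshoots the budget.
    return bytes_moved / elapsed / 1e6   # MB/s

# With a fake 1 ms "transfer" the loop still terminates and reports a rate:
rate = benchmark_region(lambda: time.sleep(0.001))
print(f"{rate:.0f} MB/s")
```

Because many 16MB blocks fall inside one measurement window, any per-zone variation within that region is averaged away, which is exactly the "smoothing" effect described.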
Would it be possible for ReadSpeed to measure the real RPM of a HDD?

WD's IntelliPower HDDs report 0 RPM via Identify Device, and many of their "5400 RPM class" models actually spin at 7200 RPM, even though they report 5400. I think tools such as Victoria can do this, and I can visually determine the RPM from a HD Tune read benchmark graph.
How do you think Victoria does this?

Would it be possible for ReadSpeed to measure the real RPM of a HDD?
It certainly could. As we know, SpinRite originally determined inter-sector angles and part of that was the use of a software phase-locked loop to mimic the rotation of the drive, then look at the timing of sector read completions against the timing created by the PLL. That would be some stuff to put into v7... Showing zones and sector counts, etc.
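One simple behavioral approach (a hypothetical sketch, not SpinRite's PLL method): re-reading the same LBA forces the head to wait one full revolution per read, so the spacing between completion timestamps gives the rotational period directly. Here the completion times are simulated; on real hardware they would come from timing actual uncached reads:

```python
# Estimate real RPM from timestamps of consecutive reads of one sector:
# each completion is one revolution apart, so RPM = 60 / median period.

from statistics import median

def rpm_from_completion_times(completion_times_s):
    """Estimate RPM from completion timestamps of repeated same-LBA reads."""
    deltas = [b - a for a, b in zip(completion_times_s, completion_times_s[1:])]
    period = median(deltas)             # median rejects the odd missed rev
    return 60.0 / period

# A 7200 RPM drive revolves every ~8.33 ms; simulated completion times:
times = [i * (60.0 / 7200) for i in range(10)]
print(round(rpm_from_completion_times(times)))   # 7200
```

This would expose drives whose Identify data reports 0 RPM or a "5400 RPM class" figure that doesn't match the physical spindle speed.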