Speed & RAID


blaq · Member · Sep 29, 2020
Apologies if these questions have been asked before:

1) I have a 2x 2TB drive RAID1. The BIOS sees them as AHCI 0 and AHCI 1, plus a BIOS drive of 2.0TB.
Is it correct to say that AHCI 0 and 1 are the individual drives and the BIOS drive is the RAID? Am I OK to run SR on AHCI 0 and 1 individually? I know Steve has long said not to run SR on a RAID array directly.

2) SR doesn't seem to bench my NVMe Samsung 950 EVO 512GB properly. In Windows it works fine with over 2 GB/s reads, but in SR 6.1 it only reads at ~50 MB/s. Is that normal?
 

Attachments

  • 20241208_163701.jpg (120.7 KB)
SpinRite 6.1 only directly supports IDE and SATA drives; everything else is accessed through the BIOS. When SpinRite can only reach a device via the BIOS, the speed will be less than optimal. The difference still seems a little large to me, but you would probably need to run some diagnostics and send your logs before anything could be determined with certainty.

The issue with running SpinRite on portions of a RAID array is that any changes it makes in the face of a problem would be made without the RAID controller/firmware being aware of them, and that can have consequences for the integrity of the array. Ideally the RAID software/firmware would have built-in tools/audits to exercise the drives.
 
Good thoughts. I remember Steve once saying emphatically on SN that SR never changes any data on the drive, but maybe that's not the case with 6.1. And I guess, by definition, if the data is corrupted then recovering it does "change" the data.
Anyway, running L3 on the BIOS-"emulated" NVMe drive... hopefully it'll do something...
I noticed that SR drops back to a 127-sector transfer block size instead of the full 32K with the NVMe drive (which may be the cause of the slow speed).
 
SR never changes any data on the drive
SpinRite does its best to be non-destructive, but when things are already in a bad state, any attempt to help can also make them worse.

If you have data that is "lost" and you ask SpinRite to recover it (Level 2 and above), then it will do what it can to recover the data and then rewrite it. On Level 1 it only performs reads, which should not cause data to change, but remember that physical hardware doesn't follow software rules and a drive could fail while being exercised.
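To make the distinction concrete, here is a minimal sketch, not SpinRite's actual algorithm, contrasting a read-only pass (Level 1) with a recover-and-rewrite pass (Level 2+). The sector-access callables are hypothetical stand-ins for real drive I/O; the point is that only the recovery pass ever issues writes:

```python
def level1_scan(read_sector, sector_count):
    """Read-only pass: report unreadable sectors, never write."""
    bad = []
    for lba in range(sector_count):
        if read_sector(lba) is None:   # None models a failed read
            bad.append(lba)
    return bad

def level2_scan(read_sector, write_sector, recover, sector_count):
    """Recovery pass: on a failed read, attempt recovery and then
    rewrite whatever data could be salvaged -- this write is what
    changes the drive behind a RAID controller's back."""
    rewritten = []
    for lba in range(sector_count):
        data = read_sector(lba)
        if data is None:
            data = recover(lba)        # best-effort reconstruction
            write_sector(lba, data)
            rewritten.append(lba)
    return rewritten
```

Note that even the "read-only" pass can indirectly trigger writes inside the drive itself (sector remapping), which is the concern raised later in this thread.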
 
2) SR doesn't seem to bench my NVMe Samsung 950 EVO 512GB properly. In Windows it works fine with over 2 GB/s reads, but in SR 6.1 it only reads at ~50 MB/s. Is that normal?
I noticed that SR drops back to a 127-sector transfer block size instead of the full 32K with the NVMe drive (which may be the cause of the slow speed).
To clarify what has been said: SpinRite 6.1 does not have native drivers for NVMe drives, so access is via the BIOS at s-l-o-w BIOS I/O speed. It is therefore not possible to benchmark an NVMe drive's true performance when accessing it via the BIOS.
All you will ever see is the slow BIOS I/O speed.

BIOS access also limits the transfer block size to 127 sectors. Scanning via the BIOS will be slow and scan times long, but it will get the job done eventually.
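A quick back-of-envelope calculation shows the gap between the two transfer sizes and what the observed speeds imply for scan time. The ~50 MB/s and ~2 GB/s figures come from the thread; everything else is simple arithmetic with 512-byte sectors:

```python
SECTOR = 512  # bytes per sector

def transfer_bytes(sectors):
    """Bytes moved per I/O request at a given sector count."""
    return sectors * SECTOR

bios_xfer = transfer_bytes(127)          # BIOS limit: 127 sectors ~= 63.5 KB
native_xfer = transfer_bytes(32 * 1024)  # native 32K sectors = 16 MB per transfer

def scan_hours(capacity_gb, mb_per_s):
    """Hours to read a drive end to end at a sustained rate."""
    return capacity_gb * 1024 / mb_per_s / 3600

bios_time = scan_hours(512, 50)      # 512 GB at ~50 MB/s: roughly 2.9 hours
native_time = scan_hours(512, 2000)  # same drive at ~2 GB/s: under 5 minutes
```

So a BIOS-speed pass over the 512GB drive in question is measured in hours rather than minutes, but it does complete.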

SpinRite 7 Pro (still a ways off) will have native drivers for NVMe drives, allowing the full 32K-sector transfer block size and taking advantage of whatever speed the controller/drive combination is capable of.
 
Good thoughts. I remember Steve once saying emphatically on SN that SR never changes any data on the drive, but maybe that's not the case with 6.1. And I guess, by definition, if the data is corrupted then recovering it does "change" the data.
Whilst SR may not change any DATA on the drive, it might change the LOCATION of the data on the drive, and that could upset a RAID controller. There may also be cases where the data on a single drive of a RAID array is not recoverable, and whilst SR will force the block to be marked as bad, the RAID controller might have been able to save it by reference to the second copy or the ECC data.
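That last point can be illustrated with a toy mirror scrub. This is an assumption-laden simplification, not any real controller's logic, but it shows why the controller is in the best position to handle a bad sector: it can rebuild the data from the other member, a chance that is lost if a tool marks the block bad out-of-band first.

```python
def scrub_mirror(drive_a, drive_b):
    """Compare two mirror members sector by sector (None models an
    unreadable sector) and repair each failure from the good copy."""
    repaired = []
    for lba in range(len(drive_a)):
        a, b = drive_a[lba], drive_b[lba]
        if a is None and b is not None:
            drive_a[lba] = b              # rebuild from the mirror copy
            repaired.append(("a", lba))
        elif b is None and a is not None:
            drive_b[lba] = a
            repaired.append(("b", lba))
    return repaired
```

A tool operating on one member alone never sees the second copy, so its only options are to reconstruct what it can from the failing sector or give up, even when a pristine duplicate sits on the other drive.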
 
Glad I came here! I'm running a 10-year-old Synology NAS in RAID1 and, once my cloud backup finishes running, was planning to take the disks out to give them a run through Level 3, as I find it hard to believe they won't be in need of some TLC by this point.

In light of what's been said above, perhaps a better solution would be:
1. Run a level 1 scan on both disks to see which is in better shape.
2. Run a level 3 on the healthier disk.
3. Wipe the 'unscanned' disk.
4. Add the unscanned disk back into the RAID1 array and allow it to be rebuilt from the 'healthy' disk.

Does that sound sensible?

Of course, if neither disk flags up any errors after an L1 scan then there's no need to do any of this, but I very much doubt that will be the case.
 
Thanks for the feedback.
And thanks for the link, Colby.

Unfortunately, Data Scrubbing isn't available to me:

"Data scrubbing is only supported on Btrfs volumes or storage pools of the following RAID types: SHR (consisting of three or more drives), RAID 5, RAID 6, or RAID F1."

I am running a RAID1 mirror on 2x 2TB drives formatted as ext4. This was set up in 2015, before Synology supported Btrfs, and (at the time, at least) there was no convenient way to convert to the new FS.

So, in that light, is the aforementioned plan still a bad idea? In the worst case scenario, I still have the cloud backup to fall back to.
 
Okay, well that sucks to hear, but no matter what action you take, you are putting your data at risk if the hardware managing the RAID isn't involved in the updating of any data composing that RAID. Logically, running a Level 1 scan on a drive shouldn't be an issue, as it's a read-only scan. The issue, as ever, is that even a read-only scan can force the drive to conclude that some reparative action is necessary, including, but not limited to, remapping the sector. These are exactly the events the hardware RAID controller is meant to be involved in to keep your RAID healthy. I might suggest you do a backup, rebuild the underlying filesystem, and then restore. That would certainly have the effect of reading all your data and rewriting all of it, too.
 
Thanks, PH. I was naive and thought that RAID1 didn't really need any management intervention from the RAID controller, and simply allowed the storage device's onboard controller to decide mappings etc. (especially in a cheap RAID device like a 2-bay Synology NAS).
Advice/warnings gratefully received.