@DanR, @DiskTuna,
You guys bring up some excellent points that make me think, which is a good thing. They also leave me with more questions. Some of this makes my eyes cross, so I'll try to come up with something coherent. One thing I didn't say in my original post is that my initial burn-in procedure for a new drive involves filling it with random data. There are Linux commands that can create random files. Years ago, I created 1 MB, 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, 64 MB, 128 MB, 256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB, 16 GB, and 32 GB files of gibberish, so I can use these to fill up a new drive.
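For what it's worth, here's a minimal sketch of that file-generation step in Python; the file names are just placeholders I made up, and on Linux something like dd reading from /dev/urandom would do the same job:

```python
# Sketch: generate "gibberish" files of doubling sizes (1 MB up to 32 GB)
# for drive burn-in. File names and chunk size here are illustrative.
import os

MB = 1024 * 1024

def make_random_file(path: str, size_bytes: int, chunk: int = 4 * MB) -> None:
    """Write size_bytes of random data to path, one chunk at a time."""
    with open(path, "wb") as f:
        remaining = size_bytes
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))
            remaining -= n

if __name__ == "__main__":
    size = MB
    while size <= 32 * 1024 * MB:  # 1 MB, 2 MB, ... up to 32 GB
        make_random_file(f"gibberish_{size // MB}MB.bin", size)
        size *= 2
```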
I do not perceive a Level 4 pass accomplishing any benefit that a safer less stressful Level 3 pass can not do?
I've never actually used Level 3 for anything. I looked at the SR v5 docs from GRC (a blast from the past) to see what Level 3 does: a single read, then a single write. I guess I just figured it would be useful to fill every byte with 0's and read it back, then do the same with 1's, because some failure patterns in memory arrays are more likely to manifest with one polarity than the other, but not both. Other reasons to prefer Level 4 are below; a rough sketch of the both-polarity idea follows.
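Just to make the polarity idea concrete, here's a minimal sketch; this is NOT how SpinRite does it, and the scratch-file path and region size are assumptions for illustration. Note that on real hardware a read issued right after a write may be satisfied from the OS or drive cache rather than the flash cells, which is part of why simple read-back verification is weaker than it looks:

```python
# Sketch: both-polarity (0x00 then 0xFF) pattern test over a scratch file.
# Illustrates the idea only; it is NOT SpinRite's method, and reads may be
# served from cache rather than the actual flash cells.
import os

PATH = "scratch.bin"      # hypothetical scratch file, not a real device
SIZE = 16 * 1024 * 1024   # arbitrary 16 MB test region

def write_and_verify(pattern: int) -> None:
    block = bytes([pattern]) * (1024 * 1024)
    with open(PATH, "wb") as f:
        for _ in range(SIZE // len(block)):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # push the data toward the drive
    with open(PATH, "rb") as f:
        while chunk := f.read(len(block)):
            if any(b != pattern for b in chunk):
                raise IOError(f"mismatch with pattern {pattern:#04x}")

for p in (0x00, 0xFF):    # all-zeros pass, then all-ones pass
    write_and_verify(p)
```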
But with SSD you can't tell if it's just 2 cycles. ...
Good points all. Let's say the drive is overprovisioned by 10%, as an example. If I'm writing the entire disk, there's no way the SSD can allocate a fresh page or sector for every write; it's going to have to write at least roughly 90% of the cells in the drive. And for every one of those LBAs, we would at least know that the controller has looked at the PBAs linked to them, done its magical deep analysis, and concluded all was in good shape.

I cannot assume that the original randomly-written data was necessarily written correctly, because normal writes do not do a read-after-write verify. Likewise, a Level 3, which is a read then a write, still doesn't assure that the cells were written correctly. It seems to me that the ONLY way I can get a controlled write to every LBA AND a read-and-verify of that data is to do a Level 4: after the data is read, inverted, written, and read back, only then do you actually get a write/verify cycle. A Level 3, with its read/write cycle, can never verify. Even on a Level 4, when the data is finally inverted back and written to its original state, that last write is not verified.
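Here's how I picture that per-sector sequence; again, this is only a conceptual sketch of the invert/write/verify idea, not SpinRite's actual code, and the 512-byte sector size and image path are assumptions (a real tool would also need to bypass the OS and drive caches so the verify read actually hits the media):

```python
# Conceptual sketch of a Level-4-style pass: read, invert, write,
# read back and verify, then restore the original data.
# NOT SpinRite's implementation; sector size and path are assumed.
import os

SECTOR = 512
PATH = "device.img"  # stand-in for a raw device

def level4_sector(f, offset: int) -> None:
    f.seek(offset)
    original = f.read(SECTOR)                     # 1) read original data
    inverted = bytes(b ^ 0xFF for b in original)  # 2) invert every bit
    f.seek(offset)
    f.write(inverted)                             # 3) controlled write
    f.flush()
    os.fsync(f.fileno())
    f.seek(offset)
    if f.read(SECTOR) != inverted:                # 4) the actual verify
        raise IOError(f"verify failed at offset {offset}")
    f.seek(offset)
    f.write(original)                             # 5) restore original;
    f.flush()                                     #    note this final write
    os.fsync(f.fileno())                          #    is NOT verified

with open(PATH, "r+b") as f:
    size = os.fstat(f.fileno()).st_size
    for off in range(0, size - size % SECTOR, SECTOR):
        level4_sector(f, off)
```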
Chkdsk cares about a consistent file system mainly.
Let's fast-forward to the future, when the drive has been in service for a while. Electrons leak out of SSD storage cells over time, i.e., bit rot. Or maybe there are power flickers, computer freezes, etc. So say the file allocation table is corrupt. Theoretically, Chkdsk would catch this. It's possible that SpinRite would read the associated sector and find it just fine, so there's nothing it can do. Conversely, it's also possible that SpinRite will find a damaged sector and try to repair it with its statistical analysis, but that may not work either, and the file allocation table may still be damaged. I know Chkdsk can orphan files. So I'm not sure what the best sequence is if there's damage, but I still think Chkdsk should be run before SpinRite. At least that way, if Chkdsk complains, I can find that out without activating repairs while I decide what to do (running Chkdsk without the /F switch only reports problems; it doesn't repair anything). When SpinRite hits errors, it's immediately going to start repairs, which may write the sector back with not-quite-correct data. Almost all the time, my Chkdsk passes and then SpinRite passes, so it's doing therapeutic maintenance and hopefully preventing problems from occurring.
If I go too far off the rails, you guys can set me straight.
May your bits be stable and your interfaces be fast.
Ron