Slow SSD up front - improved by SpinRite!

thompsondn · New member · Dec 31, 2020
My first run against a secondary SSD drive:

Code:
Driv Size  Drive Identity     Location:    0      25%     50%     75%     100
---- ----- ---------------------------- ------- ------- ------- ------- -------
 81  512GB PLEXTOR PX-512S2C              26.0   543.7   543.7   543.7   139.6

That first section sure is alarming! I ran a Level 2 SpinRite scan against it (it took about two hours), and here are the follow-up results.

Code:
Driv Size  Drive Identity     Location:    0      25%     50%     75%     100
---- ----- ---------------------------- ------- ------- ------- ------- -------
 81  512GB PLEXTOR PX-512S2C             387.9   543.7   543.7   543.7   133.1

Looks like the front of the drive still has some issues, but it's much improved.

Anyone have any thoughts on what's going on with the end of the drive? I debated running a Level 4 against it to see how that would do.


I'm disappointed this doesn't run against NVMe drives, though. I've been looking forward to it for so long, and all my main drives are NVMe now.
 
@thompsondn obviously Level 2 has fixed a number of things there.

The suggested follow-up is a Level 3; it does one complete write to the drive, which will use some write cycles. This has been seen to improve performance when a Level 2 can't do anything further.

A Level 4 does two complete writes (if I'm not mistaken) and may be excessive.

There's been some discussion on NVMe here: https://forums.grc.com/threads/nvme-timeline.354/

Steve is working every day without a break (or so it seems), so it's coming as fast as it can!
 
Oh, I get how Steve works; clearly he's got a good system =) As much as this has been discussed on the podcast, I never caught even a hint that NVMe was not going to be supported, so I was surprised.

I'll give the Level 3 a go when I'm out of the house (I expect that'll take more than two hours) and report back on the results for any future users who see a similar issue. This used to be the primary drive in a laptop that went kaput, so I just threw it in as a backup drive. I don't use it much, so I'm not overly concerned about some extra writes.
 
Drive endurance is a pet subject of mine. I googled and found this about your drive:


It looks like it's rated for 150 TBW (TeraBytes Written). So, writing the whole drive, 0.5 TB, would use up 1/300 of the drive's life. Of course, a premature crash uses up lots more. :cool: Good luck with the Level 3. As a metaphor: would I remove 1/300 of my car tire's tread to save the tire? I guess I would. Ron
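
P.S. To show the arithmetic behind that 1/300, here's a quick Python sketch (my numbers, with the drive rounded to 0.5 TB as above):

Code:
# Back-of-the-envelope endurance math for one full-drive write.
TBW_RATING_TB = 150.0   # rated TeraBytes Written (from the spec I found)
CAPACITY_TB = 0.5       # ~512 GB drive, rounded to 0.5 TB as above

fraction_used = CAPACITY_TB / TBW_RATING_TB
print(f"One full pass writes 1/{1 / fraction_used:.0f} "
      f"({fraction_used:.2%}) of the rated endurance")
# -> One full pass writes 1/300 (0.33%) of the rated endurance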
 
I ran the Level 3 but decided to just do it on the last quarter (I told SpinRite to start at 75%), presuming that SpinRite's 75% and the read-speed tool's 75% are the same place (or close enough).

Unfortunately, running the Level 3 scan had little to no effect.

Code:
Driv Size  Drive Identity     Location:    0      25%     50%     75%     100
---- ----- ---------------------------- ------- ------- ------- ------- -------
 81  512GB PLEXTOR PX-512S2C             390.5   543.6   543.6   543.6   134.3
 
@thompsondn I'm in the process of writing a reply to @DarkwinX, but I just noticed your post come up. I just wanted to say that, because of the drive's wear leveling, I don't think you can assume that the 75% region maps to the same cells each time. I THINK that means, for example, that sector 10,000 (from the point of view of the SATA interface) may have gone to one set of cells when the original data was stored and to a different set when SpinRite rewrites it. Consequently, I think you'll have to run the Level 3 on the whole drive to really see what will happen. If I'm off base on that, @Steve or someone else can jump in. Ron
 
@DarkwinX This is a long answer to a short question. Hopefully, this won't be too much info. :cool: I was considering how to reply and I appreciate the question. I don't pretend to be an expert. It's just a topic I've read a bit about over time. And, it's been a long while since I've done the reading. The details get complicated. So, I'll share a few tidbits as best I can from memory and refer you to some references I looked up. You can look up more references if you wish. I just googled "how is SSD endurance calculated". I'll be glad to answer any questions I can or look up more data if I can. If I get too off the rails, @Steve or someone can jump in and correct me.

Here's my understanding of the basics. SSD memory cells store information in the form of a voltage stored in a capacitor (an electrical storage tank). The storage tank takes the form of really, REALLY, *REALLY* tiny storage cells in the memory chips (i.e., trillions of storage cells in the chips of a 1 TB drive). So, if you want to put a binary 1 in a cell, you put a certain voltage in it. If you want to put a binary 0 in the cell, you put a different voltage in it (or maybe no voltage, or maybe the other way around, not sure).

Reading the voltage in a cell has minimal effect on the cell. BUT, the storage cell physically has an insulator on top of the cell structure. Storing data, by putting voltage into the cell, punches electrons through the insulator. I have no idea why it's structured this way, but it probably relates to getting large density and low cost. Bottom line is that storing data damages the storage cell ever so slightly. After a certain number of writes, the cell gets to where the electrons leak out, and you lose your data. You could call it the SSD version of bit rot.

By the way, it's been a while since I read this, but I think consumer drives are only rated to retain data with power off for a year after they've reached end of life. This is one reason it's good to SpinRite Level 2 your drives once or twice per year to jar the controller into checking the cell state. It's also why I wouldn't necessarily trust SSDs for long-term archival storage. (That's just my opinion.) Magnetized rust, on an HDD or tape, can retain data for years under some circumstances. There's also a thing called M-Disc, which is like synthetic stone. The (claimed) lifetime of that is several hundred years.

So, the drive makers assign a TBW rating which is how many writes from the host PC they think it can take before becoming unreliable. There seems to be debate as to whether TBW means Total Bytes Written or TeraBytes Written. I assume the latter if there is no TB after the number. Note that, while this is related to the warranty, they may not warrant the drive for the full TBW rating. You can generally get utilities from drive manufacturers to monitor the drive's health. Once the TBW rating is reached, if I remember correctly, drives may do different things. They may become read only, do nothing, or become unusable. The latter option tends to tick off customers. But, it is a good idea to monitor your drive's health with such a utility if possible, and of course, make regular backups. COUGH COUGH - preaching to myself.
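
If you want to turn a TBW rating into a rough lifespan figure, the arithmetic is just as simple. Here's a Python sketch with a made-up 20 GB/day write load (your real daily writes could be very different):

Code:
# Rough lifespan estimate: how long until the TBW rating is used up,
# assuming a steady daily write load (the 20 GB/day is just a guess).
TBW_RATING_TB = 150.0
HOST_WRITES_GB_PER_DAY = 20.0

years = (TBW_RATING_TB * 1000.0) / (HOST_WRITES_GB_PER_DAY * 365.0)
print(f"At {HOST_WRITES_GB_PER_DAY:.0f} GB/day, 150 TBW lasts about {years:.0f} years")
# -> At 20 GB/day, 150 TBW lasts about 21 years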

Gravitating more toward what you asked, the drives do something called wear leveling. So, if you repeatedly make small writes to the drive, compared to its capacity, it will not write to the same cells each time, even if you're specifying the same sectors each time from the SATA interface. So, if you write 1 TB of data to a 1 TB drive in lots of little chunks, even if you're not trying to fill the drive, that will eventually touch all the cells.
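
Here's a toy Python sketch of that idea. It's not how any real controller firmware works, just an illustration of why rewriting the same logical sector keeps landing on different cells:

Code:
# Toy wear-leveling model: each write to a logical sector lands on the
# least-worn physical block, so rewriting "the same sector" moves around.
erase_counts = [0] * 8   # wear counter for each physical block (tiny pretend drive)
mapping = {}             # logical sector -> physical block

def write_sector(logical_sector):
    # naive leveling: pick the block with the fewest erases so far
    target = min(range(len(erase_counts)), key=lambda b: erase_counts[b])
    erase_counts[target] += 1
    mapping[logical_sector] = target
    return target

for attempt in range(1, 6):
    block = write_sector(10_000)
    print(f"rewrite #{attempt} of sector 10,000 -> physical block {block}")
print("erase counts:", erase_counts)
# Each rewrite of the *same* logical sector lands on a different physical block.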

BUT, it gets more complicated, and the calculation I gave was overly simplistic. There is a phenomenon called write amplification. Long story short, the drive ends up writing more data to the cells than the user or OS actually sends to the SATA interface.

Here's a simplistic explanation of why (I'm making up this example, including the numbers). Say you have a block of 64 bits of memory cells, i.e., an 8 x 8 array. (I'm not saying that blocks are 64 bits in real life.) Let's say you write alternating rows of 1's and 0's, or whatever, to fill the array. Then, let's say you want to rewrite 2 of the 8 rows, or 1/4 of them. Or, maybe you just want to rewrite 1 bit. The controller has to erase the entire block, causing all the bits to endure a write cycle. So, even if you wrote 16 bits, or 1 bit, the second time, all 64 bits in the block endure a write cycle. The write amplification factor describes how much more data is written to the cells than what is written by the host PC. I heard a guy in a Kingston video say that, for consumers, this might be 4X. So, as you're writing 25 GB, for example, over time, the drive might actually write 100 GB to the cells. The drive jumps through hoops to minimize this. It might, for example, write your new data to a totally different block that's already erased, and erase the first one later when you actually have 64 bits to send to it.
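
Sticking with that made-up 64-bit block, here's the bookkeeping as a little Python sketch (toy numbers; real erase blocks are megabytes, and the 4X figure is the one from the Kingston video, not a measurement of this drive):

Code:
# Toy write-amplification bookkeeping for the made-up 64-bit erase block above.
BLOCK_BITS = 64

host_bits = 1            # we only wanted to change one bit...
cell_bits = BLOCK_BITS   # ...but the whole block gets erased and rewritten

print(f"WAF for this one update: {cell_bits / host_bits:.0f}x")   # -> 64x

# Averaged over a real consumer workload, something like 4x was the figure
# from the Kingston video: 25 GB from the host -> ~100 GB to the cells.
ASSUMED_WAF = 4
print(f"25 GB of host writes at {ASSUMED_WAF}x WAF -> {25 * ASSUMED_WAF} GB hits the flash")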

However, if you're writing the WHOLE drive, with SpinRite for example, I don't really THINK you'll have a large write amplification factor.

There's also a thing called overprovisioning. This means that extra space is left available in unused memory cells. This can be built in. Or you can sometimes set it with a utility. Or, you can just not partition part of the drive. I try to leave at least 10% of the drive unused. The controller will use this for housekeeping, such as block swapping and scheduling block erasure. It will also use it for reallocating bad sectors, etc.
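
For what it's worth, overprovisioning is usually expressed as extra space relative to the usable capacity. Here's that arithmetic as a quick Python sketch with my own example numbers:

Code:
# Overprovisioning as a percentage of usable capacity (illustrative numbers).
advertised_gb = 512
usable_gb = advertised_gb * 0.90   # e.g. leave ~10% of a 512 GB drive unpartitioned

op_percent = (advertised_gb - usable_gb) / usable_gb * 100
print(f"overprovisioning: roughly {op_percent:.0f}% on top of what's partitioned")
# -> overprovisioning: roughly 11% on top of what's partitioned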

There are things a PC does that can substantially shorten SSD life. In my opinion, these should not necessarily be ignored. Some people just say forget it, modern drives are good enough to ignore it. Maybe, maybe not. To each his own on this. But, @Steve and @leolaporte talked a while back about browsers writing cache data excessively often. Also, an OS writes log files, temp files, and cache files, etc., all the time and repeatedly. On my laptop, with one SSD, I have no choice as to where to put things. On my desktop, I've redirected those things to a spinning drive. This reduces wear on the SSD but puts more wear on the spinners. I'm sure there are pros and cons either way. I don't remember where my page file and hibernation file are on that PC.

I tried really hard to find some good videos. I hate Google's and YouTube's search engines. (Amazon's too, but I digress.) You would think that if I typed in a highly specific phrase like "ssd write amplification", I would get lots of cool results. Well, I failed. But, I did find these OK results.

206 Flash Write Amplification

What is SSD Overprovisioning?

Here's the article link I found. It just happens to be one of the first ones I ran across. I'm sure there are others.


Wow. That turned out to be more complicated than I thought when I started typing. Hope it helps.

Ron
 
@rfrazier that was brilliant! Although I already had some rudimentary understanding of how SSDs operate (from reading through posts from Steve, Milton, and others), it was great to have that version. I'll be bookmarking this page for re-reading and reference!
 
@DarkwinX I'm so glad that was helpful to you. Thanks for the compliment. I reread the post and I don't think I made any horrible errors. I've edited some minor grammatical things. I took another look at the first video I posted. Then, I went to the guy's video channel and looked around. His about page says: "Cesar Duran, Principal Systems Engineer | Server & Cloud Infrastructure Geek | Azure/AWS Padawan | PowerShell Junkie | Star Wars Nerd"

His about page gives some links to his LinkedIn and other things, but I didn't look at those. The title of that original video, with the number 206, implied more videos. It turns out that I found 11 videos by him from about the same time about SSD operation. I'm going to post all of those in their own thread and will cross-link it here when it's ready, so everyone can look at them if they wish.

May your bits be stable and your interfaces be fast. :cool: Ron