File Fragmentation: Purposeful, or Not?

Ceyarrecks

Member
Sep 29, 2020
First off, I am unsure where this question ought to be posted, Hardware or Software, as it seems the underlying problem could reside in both.
Now, to address "Fragmentation" and "Purposeful" appearing in the same sentence, I would refer one to the attached picture of a recent Vopt defragmentation session:

ok! I /was/ going to attach here a whoppingly super-huge 1.5MB .jpg picture; but something vaguely and ambiguously went wrong, with only the error: "Oops! We ran into some problems." Gee, I wonder WHAT those problems could be,--mmmmmmm--nope, mind reading did not help. Since cookies and JS are both allowed to function on this site, the issue is probably with the site itself, as there was no admission of what the "problem" was; had there been, maybe I could have investigated further to find out whether I am at fault, or reported what was found to those who could have done something about it.


Anyway.
As noted in the above image, a relatively small 9-"square" file was initially sprayed over the entire drive, taking up MULTIPLE sectors!

Something (or someone) made the choice to "get every file onto the drive as fast as possible! quick! panic!" as opposed to taking a whole one second of time, reviewing the drive for the last contiguously copied file, and then placing the new file in the next following or shared available sector.

Of course, the whole industry, and all who work in the IT field, know full well the benefit of an unfragmented, contiguous drive.
Rather akin to the benefit, in time, of having your entire Sunday outfit in ONE location, as opposed to one sock in the sock drawer, the other sock somewhere in the living room, shirt on the floor in the basement, pants somewhere in the garage, tie,.. tie,.. now where is the color-matched tie?! Oh, forget it, I will just wear the purple polka-dotted tie since it is right here on the lamp shade,...

Regarding the SSD fragmentation problem noted in the recent SN! #807 episode (so similar to HDD fragmentation):

The question, then, is: WHY the policy of "panic and write ANYWHERE"? Why is there not a policy of "review the drive, find the last contiguously written file, then place the new file immediately after it"?

Certainly even those who want their file RIGHT! NOW! can wait that one second for the obviously grand benefit of maintained performance, right?
or
am I expecting too much again?
 

rfrazier

Well-known member
Sep 30, 2020
Here's what I think. It may even be factual. :) Somebody can jump in if I munge it up.

Fragmentation should have little effect on an SSD, regardless of what a defrag utility says. In fact, my understanding is that you should almost never defrag an SSD. The block or sector numbers don't have any constant association with the individual flash cells, and an SSD can read any block or sector as fast as any other. And jumping between file fragments has minimal overhead. The SSD does have to erase an entire block (a group of pages) at a time before it can rewrite, so that may affect how data is laid out when it's written.
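
To make Ron's point concrete, here is a minimal flash-translation-layer (FTL) sketch; the names and structure are my own invention for illustration, not any real controller's firmware. Reads go through one mapping lookup no matter where the data physically sits, and rewrites always land on a fresh page, so logical "fragmentation" says nothing about physical layout:

```python
# Minimal flash-translation-layer sketch (hypothetical, illustrative only):
# the drive maps logical block addresses (LBAs) to whatever physical page
# is convenient, so logically "adjacent" sectors need not be physically adjacent.
ftl = {}              # LBA -> physical page number
next_free_page = 0

def write_lba(lba, data, flash):
    global next_free_page
    # NAND can't overwrite in place: writes always go to a fresh page,
    # and the old page is simply remapped away (to be erased later).
    flash[next_free_page] = data
    ftl[lba] = next_free_page
    next_free_page += 1

def read_lba(lba, flash):
    return flash[ftl[lba]]   # one table lookup: same cost for any LBA

flash = {}
write_lba(100, b"first", flash)
write_lba(5, b"second", flash)     # logically far away, physically adjacent
write_lba(100, b"updated", flash)  # rewrite: lands on a new physical page
print(read_lba(100, flash))        # b'updated'
```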

But, with a HDD, it's different. Reading and writing are much faster at the front of the drive due to the physical geometry, so, as far as I know, the OS will preferentially try to put files there first. It especially will try to put the Windows files at the front.

So, say you use up the first 20% of your HDD installing the OS. Then you put a bunch of videos on and use the next 30%. Then you add games and use another 10%. Then you add email and documents and use another 20%. So, you're up to 80% usage. Then, say you delete some games and videos and put in others of different sizes. Or say you download a bunch of new game data and some of the games expand. Let's say you get lots of email every day and your email database expands, until you compact it and it shrinks. As you can imagine, there may be times when you delete data and make a hole, but then you add more data. The system may plug the hole, but the file gets fragmented, and part of it is put in another place. Now imagine these processes going on every day, dozens or hundreds of times. That's essentially how the drive gets fragmented.
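
A toy version of that fill-delete-refill cycle (my own sketch of a generic first-fit allocator, not the actual allocation policy of NTFS or any real file system): once no single hole is large enough, a new file inevitably splits:

```python
# Toy first-fit allocator: a disk of 100 clusters, files placed in the
# first free hole(s) found. Deleting files leaves holes; later files
# get split across holes -- i.e., fragmented.
def allocate(disk, file_id, size):
    frags = 0
    for i in range(len(disk)):
        if size == 0:
            break
        if disk[i] is None:
            disk[i] = file_id        # fill this free cluster
            size -= 1
            # Count a new fragment each time we start filling a fresh hole.
            if i == 0 or disk[i - 1] != file_id:
                frags += 1
    return frags

disk = [None] * 100
allocate(disk, "os", 20)       # contiguous
allocate(disk, "videos", 30)   # contiguous
allocate(disk, "games", 10)    # contiguous
# Delete the videos: leaves a 30-cluster hole between "os" and "games".
disk = [None if c == "videos" else c for c in disk]
# A 40-cluster file can't fit in any single remaining hole, so it splits:
print(allocate(disk, "bigfile", 40), "fragments")  # -> 2 fragments
```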

Bottom line, I think, is that the system will preferentially place data toward the front of the drive, and if that ends up fragmenting files, then at least some of the data will be near the front. Also, even if it only ever wrote to free contiguous space, there would eventually be holes of unused space which you would then have to use or sacrifice.

So, it's an almost unavoidable consequence of running the machine. Others feel free to jump in and clarify.

Disk Defragmentation Explained - Defrag Hard Drive - Speed Up PC

Disk Defragmentation & Drive Optimization as Fast As Possible

Fragmentation and Defragmentation

May your bits be stable and your interfaces be fast. :cool: Ron
 

PHolder

Well-known member
Sep 16, 2020
Ontario, Canada
In most OSes, if you only ever wrote files and never deleted any, there would be virtually no fragmentation. I say virtually none because some OSes decide to place the file system control data in the center of the disk, on the assumption that this is more efficient than placing it at the beginning or end. Also, the file system control data grows as the number of files grows, so it ends up being written in multiple chunks. Additionally, even if the user never deleted a single file, the file system control data will need to change as more files are added, and the OS might delete some of it after writing it to a new location, because "write and then link old to new" is safer for your data than "overwrite in place."

Based on what I just wrote, and knowing that the OS uses many temporary files and most users also make changes, it becomes clear that fragmentation is an inevitability. Most users make many deletions without realizing it while using the PC. "Save often" creates a new file, which leads to the old file being renamed to a backup and the older file being deleted (see the sketch below). It's better if the OS has a plan to deal with HDD [de]fragmentation than to assume it just never happens. Luckily, on SSDs there isn't much penalty for fragmentation, so the OS doesn't have to plan for any optimization there.
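
Here is a minimal sketch of that "write new, then swap" save pattern (a common application-level approach; safe_save and its details are my own, not any particular editor's code):

```python
# Sketch of the "write new, then link old to new" save pattern:
# the original file is never at risk, because it is only replaced
# after the new contents are safely on disk.
import os, tempfile

def safe_save(path, data: bytes):
    # Write the new contents to a temporary file in the same directory...
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes actually hit the disk
        # ...then atomically replace the old file. If we crash anywhere
        # before this line, the original file is still intact.
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise

safe_save("notes.txt", b"hello")
```

Note that every save allocates a brand-new file and unlinks the old one - exactly the kind of churn that, repeated thousands of times, fragments a spinning drive.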
 

Tig77

SpinRite Customer
Dec 29, 2020
Ontario, Canada
I would recommend that you give PerfectDisk a try - https://www.raxco.com/home/products/perfectdisk-pro

It has a technology that runs in the background that prevents most fragmentation.

From their website:

Fragmentation Prevention with OptiWrite®


PerfectDisk's OptiWrite technology prevents most fragmentation on your drive before it occurs, which prevents your system from slowing down. OptiWrite detects when Windows is going to fragment files and intelligently redirects I/O to stop the fragmentation from occurring. System performance is maintained and the need to utilize CPU and Disk I/O resources to defragment files is greatly reduced. Because less defragmentation is needed, there is a direct benefit in the form of energy saved in both CPU usage and in the reading and writing to a disk. It saves users both in energy costs and in the time required to defragment a file system.

I tried it myself for the 30-day free trial, and it does run seamlessly in the background, takes almost no memory, and doesn't slow down your system at all. It keeps track of how many fragments it prevents, too. Worth checking out.

I would think that this kind of technology would be built into modern OSes, but I don't think it is.

Shawn
 

pmikep

Well-known member
Dec 26, 2020
While the result of defragmentation looks pretty in a GUI, I have never noticed a difference in (user-perceived) performance.

When I owned an airplane, I could tell when the cylinders were getting old and not developing the same power that they used to. And, consistent with this, when I overhauled the cylinders, I could tell that the engine was making more power again.

But I've never noticed my computer slowing down because my hard drive was terribly fragmented. And, more noteworthy, I never noticed a change in performance after defragging.

I think I saw somewhere (perhaps in a PerfectDisk promo) the benefits of defragging - but that was for server stuff, I/O'ing a billion small files all day.
 

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
Regarding the SSD fragmentation problem noted in the recent SN! #807 episode (so similar to HDD fragmentation):

The question, then, is: WHY the policy of "panic and write ANYWHERE"? Why is there not a policy of "review the drive, find the last contiguously written file, then place the new file immediately after it"?
Defragmentation software addresses fragmentation at the file system level and has no influence whatsoever on where an SSD physically stores data. With regard to SN 807, I doubt this supposed fragmentation is THE issue; if I recall correctly, significant improvement was observed after only reading, too.

As noted in the above image, a relatively small 9-"square" file was initially sprayed over the entire drive, taking up MULTIPLE sectors!

Clusters.

The question, then, is: WHY the policy of "panic and write ANYWHERE"? Why is there not a policy of "review the drive, find the last contiguously written file, then place the new file immediately after it"?

I do not understand. If you fill an empty drive, you'd see that files are largely in the order you wrote them. Then, on a drive that has been used a while, there's no guarantee that the space after the last contiguous file is free. BTW, fragmentation is just one consideration when writing files, if we assume conventional hard drives.
 

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
While the result of defragmentation looks pretty in a GUI, I have never noticed a difference in (user-perceived) performance.

When I owned an airplane, I could tell when the cylinders were getting old and not developing the same power that they used to. And, consistent with this, when I overhauled the cylinders, I could tell that the engine was making more power again.

But I've never noticed my computer slowing down because my hard drive was terribly fragmented. And, more noteworthy, I never noticed a change in performance after defragging.

I think I saw somewhere (perhaps in a PerfectDisk promo) the benefits of defragging - but that was for server stuff, I/O'ing a billion small files all day.

True. That being said, I wrote my own defragger (https://www.disktuna.com/da-disktuna/ - I do not recommend using it anymore, although I occasionally do on conventional drives) many years ago (XP era), and I did see significant improvement after I moved frequently used files and certain metadata toward the fastest area of the drive, after first making room for them. Improvements were mainly in boot time and the time to, for example, list a large directory in the file explorer (IOW, the system behaves snappier). Much like an older car that has had its maintenance done.
 

PHolder

Well-known member
Sep 16, 2020
Ontario, Canada
I doubt this supposed fragmentation is THE issue
I haven't listened to the episode, but there is thinking that running a defragmentation pass on an SSD invokes TRIM on sectors, and that the TRIM pass can be helpful for SSD performance (especially for fuller SSDs).
 

rfrazier

Well-known member
Sep 30, 2020
if I recall correctly, significant improvement was observed after only reading, too.
This comment prompted me to look back at the first page of the threads in the ReadSpeed Results forum. I've been following some of the threads and have contributed to some, but I certainly don't remember all of them. I also didn't have time to read every thread again. But my very small and unscientific survey showed that five threads from that first page mentioned reading and writing for balky SSD remediation (i.e., SpinRite Level 3 or 4). Three threads mentioned read-only remediation. Some threads mentioned both. But it appears there is evidence that both reading and writing can have beneficial effects.

I'm not an SSD expert and don't claim to be. The idea that writing helps doesn't really surprise me, especially when all the trillions of little capacitors (what a scary thought) drain anyway and the drive's firmware resets the "threshold" voltages to reconsider what is a 1 or 0. It makes sense that writing would reset everything and top off the capacitor with some nice juicy electrons and nice fresh thresholds. Why reading has an effect (not whether it does affect [hope I got those two words right]) is somewhat more of a mystery. But, as @Steve always says about SpinRite in general, maybe it jars the controller into realizing the error levels are too high when attempting to read and it does SOMETHING. Another thing I find scary (which I found out a few years ago) is that drives do not verify after a write. If an app wants to know whether a sector was written properly, it has to read it back and, hopefully, the drive will correct any errors.

May your bits be stable and your interfaces be fast. :cool: Ron
 

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
I haven't listened to the episode, but there is thinking that running a defragmentation pass on an SSD invokes TRIM on sectors, and that the TRIM pass can be helpful for SSD performance (especially for fuller SSDs).

Yes. But then the effect is from TRIM. Using the defrag API, the OS can determine all free clusters > convert them to LBA sector addresses > send the drive a TRIM command.
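
A sketch of the middle step of that pipeline (pure Python, no actual device I/O; FSCTL_GET_VOLUME_BITMAP is the real defrag-API call that returns such an allocation bitmap, but everything else here is illustrative):

```python
# Take a cluster allocation bitmap (as the Windows defrag API's
# FSCTL_GET_VOLUME_BITMAP would return), find the free runs, and
# convert them to the LBA extents a TRIM command would be given.
SECTORS_PER_CLUSTER = 8   # e.g. 4 KiB clusters on 512-byte sectors

def free_clusters_to_trim_extents(bitmap, start_lba=0):
    """bitmap[i] is truthy if cluster i is in use, falsy if free."""
    extents, run_start = [], None
    for i, in_use in enumerate(bitmap):
        if not in_use and run_start is None:
            run_start = i                 # a free run begins
        elif in_use and run_start is not None:
            extents.append((start_lba + run_start * SECTORS_PER_CLUSTER,
                            (i - run_start) * SECTORS_PER_CLUSTER))
            run_start = None              # the free run ends
    if run_start is not None:             # trailing free run
        extents.append((start_lba + run_start * SECTORS_PER_CLUSTER,
                        (len(bitmap) - run_start) * SECTORS_PER_CLUSTER))
    return extents                        # [(first_lba, sector_count), ...]

# used, used, free, free, free, used, free, free
print(free_clusters_to_trim_extents([1, 1, 0, 0, 0, 1, 0, 0]))
# -> [(16, 24), (48, 16)]
```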
 

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
This comment prompted me to look back at the first page of the threads in the ReadSpeed Results forum. I've been following some of the threads and have contributed to some, but I certainly don't remember all of them. I also didn't have time to read every thread again. But my very small and unscientific survey showed that five threads from that first page mentioned reading and writing for balky SSD remediation (i.e., SpinRite Level 3 or 4). Three threads mentioned read-only remediation. Some threads mentioned both. But it appears there is evidence that both reading and writing can have beneficial effects.

Yes, I am not claiming 'refreshing' has no effect, but reading has an observed effect which cannot be explained by SSD-level defragmentation, so my guess is that pages which require x amount of effort to read are reallocated.

I'm not an SSD expert and don't claim to be. The idea that writing helps doesn't really surprise me, especially when all the trillions of little capacitors (what a scary thought) drain anyway and the drive's firmware resets the "threshold" voltages to reconsider what is a 1 or 0.

I wouldn't call myself an expert either. It is of course even worse, as each cell on modern NAND stores several bits. AIUI, the threshold isn't reset. The drive can try several thresholds, and the one with the least ECC errors, or at least few enough that the data can be ECC-corrected, results in a successful read. Once we have a successful read, we can write the data to a different page.
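
A toy model of that read-retry loop (the voltages and error counts below are invented purely for the demo; real firmware gets its error counts from the ECC engine):

```python
# Toy NAND read-retry: try several reference voltages and keep the
# first read whose bit-error count the ECC can actually correct.
ECC_CORRECTABLE_BITS = 8

def errors_at_threshold(threshold_mv):
    # Stand-in for reading a page at a given reference voltage and
    # counting ECC-flagged bit errors. These numbers are made up.
    return {400: 23, 380: 11, 360: 6, 340: 14}[threshold_mv]

def read_with_retry(thresholds=(400, 380, 360, 340)):
    for t in thresholds:
        errs = errors_at_threshold(t)
        if errs <= ECC_CORRECTABLE_BITS:
            # Success: data is now readable and can be rewritten
            # to a fresh page before it degrades further.
            return f"read OK at {t} mV ({errs} correctable bit errors)"
    return "uncorrectable: page lost"

print(read_with_retry())   # -> read OK at 360 mV (6 correctable bit errors)
```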

It makes sense that writing would reset everything and top off the capacitor with some nice juicy electrons and nice fresh thresholds.

It's written to another page, AFAIK. You cannot program non-erased cells.

Why reading has an effect (not whether it does affect [hope I got those two words right]) is somewhat more of a mystery.

Well, if the firmware detects that data cannot be read, or that x bits are corrupt, it can apply RR (read-retry) thresholds and try to get a better read. Once the data can be read, it's then written to a fresh page.

But, as @Steve always says about SpinRite in general, maybe it jars the controller into realizing the error levels are too high when attempting to read and it does SOMETHING.

Yes, indeed, that's my theory too: read it and write it to a fresh page while it still can.

Another thing I find scary (which I found out a few years ago) is that drives do not verify after a write. If an app wants to know whether a sector was written properly, it has to read it back and, hopefully, the drive will correct any errors.

lol, yes, the whole thing is rather scary. This goes for conventional hard drives, too.

May your bits be stable and your interfaces be fast. :cool: Ron

Same!
 

pmikep

Well-known member
Dec 26, 2020
Regarding "defragging" SSDs - the author of JKDefrag has a script for running it on flash drives. ("Flash memory," he called them. Old NAND thumb drives.) I have no idea why that would make an improvement.
 

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
Regarding "defragging" SSDs - the author of JKDefrag has a script for running it on flash drives. ("Flash memory," he called them. Old NAND thumb drives.) I have no idea why that would make an improvement.

Jeroen Kessels; he used to run my webserver. Too bad he dropped development of his defragger, it was so nice! I guess the script is a do-as-little-as-possible one. Like almost any other defragger, JKDefrag used the Windows defrag API, which gives you the tools to move clusters, etc. So where defraggers differ is in the strategy.

Personally, I would not defrag NAND flash drives, although I do allow Windows defrag to run. There are certain file-system-related fragmentation issues that it can handle, and it will TRIM the drive.
 

rfrazier

Well-known member
Sep 30, 2020
We discussed this in another thread a while back, the title of which I don't remember. But AFAIK, the defragger in Windows 7 is not smart enough to know that it should "optimize" (i.e., run TRIM) rather than defrag an SSD. Therefore, the scheduled defrag should be manually turned off if it's not automatically turned off. Some SSD makers provide monitoring utilities, e.g., Samsung Magician, which will allow you to trigger an optimize cycle. I believe Windows 8 and 10 are better about this. I don't know about Linux and Mac.

May your bits be stable and your interfaces be fast. :cool: Ron
 

rfrazier

Well-known member
Sep 30, 2020
Yes ... BUT ... I decided to do a little test. The C: drive on this Windows 7 laptop is an SSD. I started Disk Defragmenter. The text in the program says "Only disks that can be defragmented are shown." It showed the C: drive. I told it to analyze the drive. It reported 22% fragmentation. I started defragmentation, and it obediently let me; I immediately stopped it. There is no option to optimize anywhere in sight.
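
For anyone who would rather script that check than click through the GUI, something like this should work (a hedged sketch: defrag's /A switch is the built-in tool's analysis-only mode, and it needs an elevated prompt):

```python
# Ask Windows' built-in defrag tool to analyze (not defragment) a volume,
# mirroring the manual test above. Run from an elevated prompt.
import subprocess

result = subprocess.run(["defrag", "C:", "/A"],
                        capture_output=True, text=True)
print(result.stdout)   # look for the reported fragmentation percentage
```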

SO, I recommend that all Windows 7 users go into Disk Defragmenter and turn off scheduled defragmentation for any SSDs you have. If you want to optimize or TRIM your drive, you may have to figure out a different way to do that. If your defrag utility is newer or smarter than mine, you may have other options.

May your bits be stable and your interfaces be fast. :cool: Ron
 

DanR

Dan
Sep 17, 2020
Ron, there is a notable difference between a manual defrag and a scheduled defrag, at least on my Win 7 Pro box.

Disk Defragmenter shows C: (SSD), E: (spinner), and G: (USB spinner).

C: is shown as never run; E: 0% fragmented; G: 1% fragmented; with E: and G: last run 2/24/2021 (the schedule is once a week).

The scheduler only shows the E: and G: spinner drives as schedulable.

So in my case:
- I could manually defrag the C: SSD (I started and immediately stopped a manual defrag of the C: SSD to verify). But I would not! :oops:
- I cannot schedule a defrag of the SSD; I can only schedule the spinners for defrag.

The Scheduler keeps the spinners defragged, but it does not touch the SSD. It is doing just what it should!
 

rfrazier

Well-known member
Sep 30, 2020
@DanR You make an interesting point. I went to my desktop, which has an SSD and some spinners. I looked at the defrag schedule, which was on. I clicked on "select disks," and there was a "select all" check box, which was on. But the SSD was not listed, as you mentioned. So, yes, it looks like it's catching the spinners only. Nice call. Still, that discussion that @Barry Wallis mentioned says that Windows doesn't always detect SSDs properly. So it wouldn't be a bad idea to check your defrag schedule and make sure that SSDs are excluded. I still don't think the Windows 7 defragger will optimize/TRIM the SSD.

May your bits be stable and your interfaces be fast. :cool: Ron
 
