[Pending] Inconsistent results between sub-block options (/3, /4, etc.)

Dagannoth

Active member
Dec 26, 2020
I stumbled across this while messing around with a drive I got recently (the Kingston below). I noticed that its /4 results were noticeably higher than its regular results, so I tested a few more drives and found that they behave the opposite way: most drives get slower with the higher sub-block options. I ran each test multiple times and calculated the average. I've attached all of the logs below.

Crucial BX100
Sub-block level    0%       25%      50%      75%      100%
0                  516.6    542.1    542.2    542.1    542.0
1                  516.6    542.1    542.2    542.1    542.0
2                  516.2    541.8    541.9    541.8    541.8
3                  514.3    541.5    541.7    541.5    541.5
4                  510.2    540.6    541.0    540.6    540.8

Kingston SA400M8120G
Sub-block level    0%       25%      50%      75%      100%
0                  294.3    281.2    280.8    281.3    287.4
1                  294.3    281.2    280.8    281.3    287.5
2                  293.2    281.7    280.9    282.1    286.8
3                  291.7    282.7    281.3    283.3    285.4
4                  306.4    300.2    297.6    301.2    300.0

KingDian S100
Sub-block level    0%       25%      50%      75%      100%
0                  193.3    194.5    190.5    189.5    189.3
1                  193.3    194.5    190.5    189.5    189.3
2                  193.1    194.4    190.4    189.4    189.2
3                  192.8    194.2    190.2    189.2    188.9
4                  192.0    193.5    189.6    188.6    188.2

Intel SSDSA2M080G2HP
Sub-block level    0%       25%      50%      75%      100%
0                  250.5    261.2    264.5    262.3    265.2
1                  250.5    261.4    264.4    262.3    265.2
2                  249.4    260.5    261.7    261.0    264.1
3                  247.3    257.8    258.8    253.7    262.2
4                  242.4    248.9    254.9    244.0    258.0
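
For what it's worth, each cell above is just the arithmetic mean of the repeated runs. Here's a minimal sketch of that averaging in Python; the two "runs" in it are made-up numbers standing in for values pulled out of the logs, not the real log format:

```python
# Minimal sketch: average repeated benchmark readings per test point.
# The nested lists are hypothetical stand-ins for values copied out of
# the attached logs; this is not the real log format.

def average_runs(runs):
    """Column-wise arithmetic mean across repeated runs.

    runs: list of runs, each a list of readings
    (e.g. one value per drive position: 0%, 25%, 50%, 75%, 100%).
    """
    n = len(runs)
    return [round(sum(col) / n, 1) for col in zip(*runs)]

# Example with made-up readings for two repeated runs:
runs = [
    [516.5, 542.0, 542.3, 542.2, 542.1],
    [516.7, 542.2, 542.1, 542.0, 541.9],
]
print(average_runs(runs))  # -> [516.6, 542.1, 542.2, 542.1, 542.0]
```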
 

Attachments

  • Sub-block Testing.zip
156.7 KB

DiskTuna

Well-known member
Jan 3, 2021
Netherlands
SSDs are quite complex, and the data for a large read may actually be spread over several NAND chips that can be accessed in parallel, in a RAID-like manner. That's just one possible explanation I can quickly think of. So a big chunk can be read from several chips in parallel > more data per second > a better benchmark result.
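
To make that concrete, here's a toy back-of-the-envelope model in Python. It's purely a simplification for illustration, not how any real controller works, and every number in it (stripe size, channel count, per-channel speed) is made up: the idea is just that a transfer can only keep as many channels busy as it has stripe-sized pieces to hand out.

```python
# Toy model of channel parallelism (a deliberate simplification, not a
# real controller): a read striped across NAND channels can only keep as
# many channels busy as it has stripe-sized pieces to hand out.

def effective_throughput(transfer_kib, stripe_kib=128, channels=8,
                         per_channel_mbs=70.0):
    """Rough effective MB/s for one transfer; all parameters are made up."""
    pieces = max(1, transfer_kib // stripe_kib)   # stripes in this transfer
    busy_channels = min(channels, pieces)         # channels actually used
    return busy_channels * per_channel_mbs

for size in (4, 64, 512, 1024):                   # transfer size in KiB
    print(f"{size:>5} KiB -> ~{effective_throughput(size):.0f} MB/s")

# Small transfers keep only one channel busy; large ones can fan out
# across all of them, which is one way a bigger chunk size can look
# faster in a benchmark.
```

With those invented numbers, a 4 KiB or 64 KiB read saturates a single channel while a 1 MiB read spreads across all eight, so the bigger chunk benchmarks faster.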

Also, as the 'chunk' size increases, error handling may come into play. The more data you read at once, the bigger the chance the drive has to deal with an error, for example by employing ECC correction. Say 1 MB of data includes 3 cells that need ECC correction: with many small chunk reads, most reads encounter no error and only a few do, which pulls the average up; read the same data in 4 big chunks, with 3 of them having to wait for ECC correction, and the benchmark average comes out differently. I guess that depending on the properties of the NAND chips, how well the firmware is optimized, buffer size, etc., you may find different tipping points.
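
Here's a crude Python sketch of that averaging effect. All the numbers are invented (16 pieces, 3 of them needing a slow ECC retry) and the model is deliberately simplistic; it only shows how a benchmark that averages per-read speeds can report a lower figure for large chunks even though the total data moved and the total time spent are identical.

```python
# Crude worked example (all numbers invented) of how a few slow,
# ECC-corrected spots drag on large-chunk reads more than on small ones
# when the benchmark averages per-read speeds.

PIECES = 16              # equal-sized pieces making up the tested region
T_OK, T_ECC = 1.0, 3.0   # time units per clean / ECC-corrected piece

# Time to read each individual piece; 3 slow pieces scattered around.
piece_t = [T_ECC if i in (2, 7, 12) else T_OK for i in range(PIECES)]

def mean_read_speed(chunk):
    """Mean per-read speed (pieces per time unit) for a given chunk size."""
    reads = [piece_t[i:i + chunk] for i in range(0, PIECES, chunk)]
    speeds = [len(r) / sum(r) for r in reads]  # pieces moved / time taken
    return sum(speeds) / len(speeds)

print("chunk = 1 piece :", round(mean_read_speed(1), 3))  # most reads fast
print("chunk = 4 pieces:", round(mean_read_speed(4), 3))  # most reads delayed
```

With one piece per read, only 3 of 16 reads are slowed, so the average speed stays high; with 4-piece chunks, 3 of the 4 reads contain a slow piece and the average drops, even though the total time is the same either way.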

Just thinking out loud, not pretending to give definitive answers.