Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal fragmentation. The reason we do not get consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs (Logical Block Addresses) have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
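As a rough illustration of how we process the resulting data, the per-second IOPS log can be summarized with a few basic statistics. This is a minimal sketch, not our actual tooling; the log format, the helper name, and the synthetic data (long stretches near 1,500 IOPS punctuated by ~70K IOPS bursts, mimicking the behavior discussed below) are all assumptions for illustration.

```python
# Hedged sketch: summarize a per-second IOPS log from a consistency run.
# Assumes one IOPS reading per second; steady-state is taken to begin at
# t=1400s, matching the zoomed graphs in this review.

from statistics import mean, stdev

def consistency_summary(iops_per_second, steady_state_start=1400):
    """Return basic consistency stats for the steady-state portion of a run."""
    steady = iops_per_second[steady_state_start:]
    return {
        "min_iops": min(steady),
        "max_iops": max(steady),
        "avg_iops": mean(steady),
        "stdev_iops": stdev(steady),
    }

# Synthetic example log: fast at first, then oscillating in steady-state.
log = [70000] * 1400 + [1500, 1500, 70000] * 200
stats = consistency_summary(log)
print(stats["min_iops"], stats["max_iops"])
```

A large standard deviation relative to the average is exactly the kind of high variance that shows up as a scattered cloud rather than a tight line in the graphs below.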

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
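The arithmetic behind adding over-provisioning by limiting the LBA range is straightforward: spare area is conventionally expressed relative to the capacity exposed to the user. A quick sketch (the specific capacities are illustrative, not the exact LBA cutoffs we use):

```python
# Hedged sketch: effective over-provisioning when the usable LBA range
# is restricted. OP is spare capacity as a fraction of user-visible capacity.

def effective_op(total_capacity_gb, usable_capacity_gb):
    """Spare area as a fraction of the user-visible capacity."""
    spare = total_capacity_gb - usable_capacity_gb
    return spare / usable_capacity_gb

# Restricting a 240GB drive's LBA range to 192GB yields 25% OP on top of
# whatever inherent spare area the drive already has.
print(f"{effective_op(240, 192):.0%}")
```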

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison between drives, whereas the third uses a linear scale to better visualize the differences. Click the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph 1 (full run, log scale) – dropdown selections: Corsair Neutron XT 240GB | 25% Over-Provisioning]

Performance consistency has never been Phison's biggest strength, and that continues to be the case with the S10 controller. Consistency is actually worse than with the older S8 controller (i.e. the Corsair Force LS) because the variance in performance is so high. I'm pretty sure the issue lies in Phison's garbage collection architecture: it doesn't seem to give enough priority to internal garbage collection, which results in a scenario where the drive has to stop for very short periods of time (milliseconds) to clean up some blocks for the IOs in the queue. That is why the performance frequently drops to ~1,500 IOPS, yet the drive may be pushing 70K IOPS a second later. Even adding more over-provisioning doesn't produce a steady line, although the share of high-IOPS bursts is now higher.

For average client workloads, this shouldn't be an issue because drives never operate in steady-state and IOs tend to come in bursts, but for users who tax the storage system more there are far better options on the market. I'm a bit surprised that despite having more processing power than its predecessors, the S10 can't provide better IO consistency. With three of its four cores dedicated to flash management, there should be plenty of horsepower to manage the NAND even in steady-state scenarios, although ultimately no amount of hardware can fix inefficient software/firmware.

[Graph 2 (steady-state, log scale) – dropdown selections: Corsair Neutron XT 240GB | 25% Over-Provisioning]


[Graph 3 (steady-state, linear scale) – dropdown selections: Corsair Neutron XT 240GB | 25% Over-Provisioning]

TRIM Validation

To test TRIM, I filled the drive with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

And TRIM works as expected.

Comments

  • SanX - Monday, November 17, 2014 - link

    One more average drive. Speeds need to double or price drop to double for that stuff to be interesting again.
  • hojnikb - Monday, November 17, 2014 - link

    Yup. If this ends up being priced closer to the 850 Pro, it won't make any sense whatsoever.
  • hojnikb - Monday, November 17, 2014 - link

    Any reason why they are using 64Gbit flash on the 480GB as well?
    At 512GB of raw flash, it should be enough to saturate the controller with 128Gbit dies (that's 32 dies).
  • SleepyFE - Monday, November 17, 2014 - link

    How did you count 32 dies? 4x128=512, that's 4 dies. With 8 dies (8x64) you fill all eight channels. Better parallelism. That's how I understand it.
  • Mikemk - Monday, November 17, 2014 - link

    4*128Gbit = 4*16GB = 64GB
    32*128Gbit = 32*16GB=512GB
  • hojnikb - Tuesday, November 18, 2014 - link

    It's in gigabits, not gigabytes. A single die is 128Gbit (so 16GB), so you need 32 of them to get 512GB.
  • SleepyFE - Tuesday, November 18, 2014 - link

    Sorry about that. So used to gigabytes. Aren't the dies stacked to make 64GB packages and then a single bus leads to that bundle?
  • makerofthegames - Monday, November 17, 2014 - link

    tl;dr it's not a bad drive, but it's not good in any particular niche. If it's not cheaper than the dozens of similarly good-enough drives out there, it's a dead product.
  • beginner99 - Monday, November 17, 2014 - link

    Exactly. And given the Crucial MX100's pricing and performance, which should suit almost any consumer and enthusiast, it's hard to come up with any reason to buy this unless it is cheaper (highly doubt that). And if you really need ultimate performance you will go Sandisk or 950 pro (or intel pcie).
  • Mikemk - Monday, November 17, 2014 - link

    850 pro?
