Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal fragmentation. The reason SSDs do not deliver consistent IO latency is that all controllers inevitably have to perform some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs (Logical Block Addresses) have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
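Deriving the per-second "instantaneous" IOPS figure is just a matter of differencing cumulative IO completion counts sampled once a second. A minimal sketch (the helper name and sample counts are illustrative, not from our test rig):

```python
def instantaneous_iops(cumulative_completions: list[int]) -> list[int]:
    """Per-second IOPS from cumulative IO completion counts sampled once per second."""
    return [b - a for a, b in zip(cumulative_completions, cumulative_completions[1:])]

# Four samples, one second apart: two fast seconds, then a stall.
print(instantaneous_iops([0, 70_000, 141_500, 143_000]))  # [70000, 71500, 1500]
```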

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
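Limiting the LBA range raises the spare area the controller can draw on. A quick back-of-the-envelope helper (the capacities below are illustrative, not the exact figures from our setup):

```python
def effective_op_percent(raw_nand_gib: float, exposed_lba_gib: float) -> float:
    """Spare area as a percentage of the user-visible (exposed) capacity."""
    return (raw_nand_gib - exposed_lba_gib) / exposed_lba_gib * 100.0

# A 240GB-class drive typically carries 256GiB of raw NAND; restricting the
# test to a 192GB LBA range leaves the remainder as extra spare area.
print(round(effective_op_percent(256, 192), 1))  # 33.3
```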

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph 1 — data source dropdown: Corsair Neutron XT 240GB / 25% Over-Provisioning]

Performance consistency has never been Phison's biggest strength, and that continues to be the case with the S10 controller. Consistency is actually worse than with the older S8 controller (i.e. the Corsair Force LS) because the variance in performance is so high. I'm fairly sure the issue lies in Phison's garbage collection architecture: it doesn't seem to give enough priority to internal garbage collection, which results in a scenario where the drive has to stop for very short periods of time (milliseconds) to clean up some blocks for the IOs in the queue. That is why performance frequently drops to ~1,500 IOPS, while the drive may be pushing 70K IOPS a second later. Even adding more over-provisioning doesn't produce a steady line, although the share of high-IOPS bursts is now higher.

For average client workloads this shouldn't be an issue, because client drives rarely operate in steady-state and IOs tend to come in bursts, but for users who tax the storage system more heavily there are far better options on the market. I'm a bit surprised that despite having more processing power than its predecessors, the S10 can't provide better IO consistency. With three of its four cores dedicated to flash management, there should be plenty of horsepower to manage the NAND even in steady-state scenarios, although ultimately no amount of hardware can fix inefficient software/firmware.
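One simple way to put a number on the variance discussed above is the coefficient of variation of the per-second IOPS trace. A sketch with made-up sample values (the helper and both traces are illustrative, not measured data):

```python
from statistics import mean, stdev

def iops_cv(samples: list[int]) -> float:
    """Coefficient of variation of an IOPS trace: lower means steadier performance."""
    return stdev(samples) / mean(samples)

steady = [25_000, 24_800, 25_200, 25_100]   # a flat, well-behaved steady-state line
bursty = [1_500, 70_000, 1_500, 68_000]     # GC stalls alternating with bursts

print(iops_cv(steady) < 0.01)   # True
print(iops_cv(bursty) > 1.0)    # True
```

A drive that alternates between ~1,500 and ~70K IOPS can post a respectable average while still feeling inconsistent, which is exactly what a dispersion metric like this exposes.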

[Graph 2 — data source dropdown: Corsair Neutron XT 240GB / 25% Over-Provisioning]

[Graph 3 — data source dropdown: Corsair Neutron XT 240GB / 25% Over-Provisioning]

TRIM Validation

To test TRIM, I filled the drive with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

And TRIM works as expected.
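The pass/fail criterion here boils down to whether the post-TRIM sequential read average recovers to within a few percent of the fresh-drive figure. A hypothetical helper (the threshold and speeds are illustrative, not HD Tach output):

```python
def trim_restored(fresh_mbps: float, post_trim_mbps: float, tolerance: float = 0.10) -> bool:
    """True if post-TRIM sequential speed recovered to within `tolerance` of fresh speed."""
    return post_trim_mbps >= fresh_mbps * (1.0 - tolerance)

print(trim_restored(fresh_mbps=520.0, post_trim_mbps=505.0))  # True
```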

Comments

  • hojnikb - Monday, November 17, 2014 - link

    Because they are using Samsung's controllers. And they already have PCIe controllers.
  • close - Monday, November 17, 2014 - link

    Because Apple only has to worry about their own product, and their PCIe SSDs come attached to a device capable of using it. So you don't buy a PCIe SSD, you buy an Apple device that comes with a PCIe SSD inside. Other integrators/OEMs don't care to do it as it increases costs so it's suitable only for high end. For now. Apple is doing it because it would seem that their customers can afford to pay the premium regardless of other aspects.
  • Kristian Vättö - Monday, November 17, 2014 - link

    There have been a handful of PCs with the XP941, but you are right that Apple is mostly the only one.

    The PC OEMs tend to cut costs wherever possible because their margins are already razor thin. The XP941 is more expensive than SATA drives because it's the only PCIe x4 drive on the market, and in addition the PC OEMs can use the same SATA drives across various models, whereas the XP941 would only fit in high-end models due to the cost.

    For Apple this isn't an issue because the quantities they buy the XP941 in are so large and Apple also has a significant share of the high-end market, which is where the PC OEMs struggle. Plus Apple is one of the only companies that fully understand that it's the user experience that counts.
  • alaricljs - Monday, November 17, 2014 - link

    > Plus Apple is one of the only companies that fully understand that it's the user experience that counts.

    Have to point out here that Apple is one of the only companies where the hardware is just another piece of the user experience puzzle that they have control over. Whereas for PC manufacturers it's almost the only part of the user experience they have control over.
  • Mikemk - Monday, November 17, 2014 - link

    Apple needs to realize that again.
  • warrenk81 - Tuesday, November 18, 2014 - link

    thanks! i've been wondering about this since the MacBook Airs started with the PCIe SSDs in 2013.
  • Shiitaki - Wednesday, November 19, 2014 - link

    Apple produces the entire machine, so they don't have to worry about what the rest of the industry is or isn't doing, and they can put the necessary driver support in their motherboard firmware to boot from a PCIe drive. Apple also produces the operating system, so they can use a custom driver and not wait for 'official support'.
  • Flunk - Monday, November 17, 2014 - link

    Oh god I hope not, SATA Express' cable standards are a huge mess I hope we never need to deal with. Why we need yet another standard where M.2 makes massively more sense I can't imagine.
  • SleepyFE - Monday, November 17, 2014 - link

    Because of the cable. When you have a tower case you can fit 5+ drives in it and connect via cable. M.2 is just for laptops, as it has to be fixed at the end with a screw and therefore has to lie on something. To put it on an ATX motherboard would take up too much space, or it would dangerously dangle from the board. You could use an M.2 to PCIe adapter, but why waste the PCIe slot? And what's the point of M.2 if you're just gonna plug it into PCIe anyway? For big cases you need cables. They might be a mess, but you can use SATA-X (-X = Express) for the boot drive and put all your old drives into SATA ports, so you don't waste them.
  • MrSpadge - Monday, November 17, 2014 - link

    It's a bit surprising that it takes so long - not because it would be easy, but because we've known for a long time this would be coming. The manufacturers should have known it long before us. And it's not like there has been any other significant movement regarding SSD controllers in the past 2 years.

    On the other hand - I don't mind if they take their time and deliver polished products with firmware which is not in beta state any more!
