Three years ago, a new variant of flash memory called QLC hit the SSD market, storing four bits of data in each memory cell. QLC NAND offers 33% better bit density than mainstream TLC NAND's three bits per cell. QLC initially arrived as a low-end alternative that trades better density and pricing for worse performance and endurance. So far, the use of QLC NAND has meant that any drive carrying it belongs in the entry-level market segment, competing against cheaper TLC NAND SSDs that cut corners on other components. But as more SSD vendors adopt QLC NAND across a wider range of products, some are starting to challenge the assumption that QLC is only for low-quality bargain products.
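The bits-per-cell trade-off is easy to quantify: capacity grows linearly with bits per cell, but the number of voltage states a cell must reliably distinguish grows exponentially. A minimal Python sketch of that arithmetic (illustrative only, not from the review):

```python
# Capacity scales linearly with bits per cell, but the number of
# distinct charge levels the cell must hold scales as 2^bits.

def density_gain(bits_new: int, bits_old: int) -> float:
    """Relative capacity increase from storing more bits per cell."""
    return bits_new / bits_old - 1

def voltage_states(bits: int) -> int:
    """Distinct voltage states needed to encode `bits` bits per cell."""
    return 2 ** bits

tlc_states = voltage_states(3)   # 8 states
qlc_states = voltage_states(4)   # 16 states
gain = density_gain(4, 3)        # ~0.33, i.e. 33% more capacity

print(f"TLC: {tlc_states} states, QLC: {qlc_states} states")
print(f"QLC density gain over TLC: {gain:.0%}")
```

So QLC doubles the number of voltage states per cell while gaining only a third more capacity, which is exactly why it is harder to read, write, and retain reliably.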

Prime Time QLC SSDs

Sabrent and Corsair are two very familiar brands that market SSDs based on reference designs from SSD controller vendor Phison. Both brands have followed Phison's lead in using QLC NAND for M.2 NVMe SSDs. The latest and greatest QLC solution from Phison uses its E16 SSD controller, which was the first consumer SSD controller to support PCIe Gen4. The Sabrent Rocket Q4 and Corsair MP600 CORE we are reviewing today are part of the first generation of PCIe 4.0 SSDs to use QLC NAND: an almost paradoxical combination of high-end PCIe 4.0 connectivity with low-end QLC NAND. The question to answer is whether QLC NAND is moving upmarket into mainstream or high-end products, or whether PCIe 4.0 support is simply trickling down to low-end, higher-capacity drives. That is what we set out to determine with this review.

Corsair MP600 CORE

Many aspects of the Corsair MP600 CORE's spec sheet look pretty high-end. The drive comes with basically the same heatsink as Corsair's high-end TLC drives, albeit in a slightly different color.

Peak performance ratings are close to 5GB/s for reads and 4GB/s for writes, making PCIe 4.0 a necessity to hit those numbers.
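Those ratings only fit through a Gen4 link. As a rough sanity check, the per-direction bandwidth of a PCIe link can be estimated from transfer rate, encoding efficiency, and lane count; the sketch below is illustrative and ignores packet/protocol overhead, which makes real-world ceilings slightly lower still:

```python
# Raw per-direction PCIe link bandwidth: transfer rate (GT/s) times
# 128b/130b encoding efficiency (Gen3 and Gen4), times lanes, over 8
# bits per byte. Protocol overhead is ignored in this estimate.

def pcie_gbps(rate_gt: float, lanes: int) -> float:
    """Approximate link bandwidth in GB/s for a PCIe 3.0/4.0 link."""
    return rate_gt * (128 / 130) * lanes / 8

gen3_x4 = pcie_gbps(8.0, 4)   # PCIe 3.0 x4: ~3.94 GB/s
gen4_x4 = pcie_gbps(16.0, 4)  # PCIe 4.0 x4: ~7.88 GB/s

print(f"PCIe 3.0 x4: {gen3_x4:.2f} GB/s")
print(f"PCIe 4.0 x4: {gen4_x4:.2f} GB/s")
```

A ~4.95GB/s sequential read rating exceeds the ~3.94GB/s PCIe 3.0 x4 ceiling, so these drives can only hit their headline numbers on a Gen4 host.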

Corsair MP600 CORE Specifications

Capacity                | 1 TB              | 2 TB              | 4 TB
Form Factor             | M.2 2280, PCIe 4.0 x4, with heatsink
Controller              | Phison E16
NAND Flash              | Micron 1Tbit 96L 3D QLC
DRAM                    | DDR4
Sequential Read (MB/s)  | 4700              | 4950
Sequential Write (MB/s) | 1950              | 3700              | 3950
Random Read IOPS (4kB)  | 200k              | 380k              | 630k
Random Write IOPS (4kB) | 480k              | 580k
Warranty                | 5 years
Write Endurance         | 225 TB (0.1 DWPD) | 450 TB (0.1 DWPD) | 900 TB (0.1 DWPD)
MSRP                    | $154.99 (15¢/GB)  | $309.99 (15¢/GB)  | $644.99 (16¢/GB)

The five-year warranty and pricing around $0.15/GB are also indicative that the MP600 CORE isn't exactly entry-level. But on the other hand, the write endurance rating of just over 0.1 drive writes per day is much lower than the usual 0.3 DWPD expected from mainstream consumer SSDs. There are also some unimpressive performance metrics, especially for the smallest 1TB capacity.
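The endurance ratings in the table follow directly from the TBW numbers and the five-year warranty: drive writes per day is simply total rated writes divided by capacity and warranty length. A quick illustrative calculation (not from the vendors' spec sheets):

```python
# DWPD = rated endurance (TB written) / (capacity in TB * warranty days).
# The MP600 CORE's 225/450/900 TB ratings over a 5-year warranty all
# work out to roughly 0.12 drive writes per day.

def dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    """Drive writes per day implied by a TBW rating."""
    return tbw / (capacity_tb * warranty_years * 365)

for capacity, tbw in [(1, 225), (2, 450), (4, 900)]:
    print(f"{capacity} TB model: {dwpd(tbw, capacity):.2f} DWPD")
```

All three capacities land at about 0.12 DWPD, consistent with the "just over 0.1 DWPD" figure and well below the ~0.3 DWPD typical of mainstream TLC drives.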

Sabrent Rocket Q4

Sabrent's published specs for their Rocket Q4 are quite a bit less detailed, but follow the same general pattern:

Sabrent Rocket Q4 Specifications

Capacity                | 1 TB | 2 TB | 4 TB
Form Factor             | M.2 2280, PCIe 4.0 x4 (optional heatsink)
Controller              | Phison E16
NAND Flash              | Micron 1Tbit 96L 3D QLC
DRAM                    | DDR4
Sequential Read (MB/s)  | 4700 | 4800 | 4900
Sequential Write (MB/s) | 1800 | 3600 | 3500
Warranty                | 1 year (5 years with registration)

The Phison E16 controller was originally a very successful bid to be first on the market with PCIe 4.0 support: Phison ended up having a monopoly on PCIe 4.0 SSDs for over a year, and even some SSD brands that don't routinely use Phison controllers brought out new flagship models based on this controller. But now the successor E18 controller is shipping, along with competing top-of-the-line PCIe 4.0 drives from both Samsung and Western Digital using their own custom controllers. That leaves the E16 as an outdated part, with performance that is no longer sufficient for a flagship model and power consumption that is quite high. The E16 is manufactured on 28nm, whereas the other current PCIe 4.0 SSD controllers are built on more advanced nodes such as 12nm. That makes the E16 a natural fit for the cheaper end of the PCIe 4.0 market, especially when paired with high-density QLC.

All that being said, the E16 controller is still a step up from Phison's very successful E12 PCIe 3.0 SSD controller. These PCIe 4.0 controllers are backwards compatible with PCIe 3.0, so even in a system that only supports PCIe 3.0, a drive based on the Phison E16 (like the ones we are testing today) is a bit faster than its E12-based counterparts.

It doesn't currently make sense to pair QLC NAND with the expensive flagship E18 controller, but the E16 has found a second life as Phison's more affordable and mature Gen4 controller when paired with QLC. One consequence of replacing E12+QLC drives with E16+QLC designs is that the E16 is physically larger than the compact E12S package, and that size difference means the 8TB models cannot yet move up to the E16 controller due to lack of space on an M.2 2280 PCB. That's a shame, because the best showings for QLC NVMe SSDs have been at the largest capacities, where huge SLC caches and the highest possible degree of parallelism allow drives to mostly overcome the worst downsides of QLC NAND.

The Competition

In this review, we're comparing the 4TB Sabrent Rocket Q4 and 2TB Corsair MP600 CORE against a variety of other SSDs, from low-end QLC SATA SSDs to high-end gen4 drives with TLC NAND. Particularly interesting points of comparison include:

Intel SSD 670p       | PCIe 3.0 x4 | SM2265        | QLC
Sabrent Rocket Q     | PCIe 3.0 x4 | Phison E12S   | QLC
Corsair MP400        | PCIe 3.0 x4 | Phison E12S   | QLC
Samsung SSD 980      | PCIe 3.0 x4 | Samsung Pablo | TLC (DRAM-less)
WD Blue SN550        | PCIe 3.0 x4 | WD custom     | TLC (DRAM-less)
Seagate FireCuda 520 | PCIe 4.0 x4 | Phison E16    | TLC
60 Comments

  • ZolaIII - Friday, April 9, 2021 - link

Actually 5.6 years, but compared to the same MP600 with TLC it's 8x that much, or 44.8 years, for just a little more money. But seriously, buying a 1 TB MP600, which will be enough regarding capacity and which will last 22.4 years by the same reasoning (vs 2.8 for the CORE), makes a hell of a difference. Reply
  • WaltC - Saturday, April 10, 2021 - link

    In far less than 22 years your entire system will have been replaced...;) IE, for the use-life of the drive you will never wear it out. The importance some people place on "endurance" is really weird. I have a 960 EVO NVMe with endurance estimates of 75TB: the drive is three years old this month and served as my boot drive for two of those three years, and I've used 19.6TB of write as of today. Rounding off, I have 55TB of write endurance remaining. That makes for an average of 6.5 TBs written per year--but the drive is no longer my boot/Win10-build install drive, so an average of 5TBs per year as strictly a data drive is probably overestimating, but just for fun, let's call it 5 TBs write per year. That means I have *at least* 11 years of write endurance remaining for this drive--which would mean the drive would have lasted at least 14 years in daily use before wearing out. Anyone think that 11 years from now I'll still be using that drive on a daily basis? I don't...;) The fact is that people worry needlessly about write endurance unless they are using these drives in some kind of mega heavy-use commercial setting. Write endurance estimates of 20-30 years are absurd and when choosing a drive for your personal system such estimates should be ignored as they have no meaning--they will be obsolete long before they wear out. So, buy the drive performance at the price you want to pay and don't worry about write endurance as even 75TB is plenty for personal systems. Reply
  • GeoffreyA - Sunday, April 11, 2021 - link

    It would be interesting to put today's drives to an endurance experiment and see if their actual and advertised ratings square. Reply
  • ZolaIII - Sunday, April 11, 2021 - link

I have 2 TB of writes per month, using my PC for productivity, gaming and transcoding, and that's still not too much. If I used it professionally for video, that number would be much higher (high-bandwidth mastering codecs). Hell, transcoding a single Blu-ray movie quickly (with the GPU, for the sake of making it HLG10+) will eat up to 150GB of writes, and that's not a rocket-science task to perform. By the way, it's not like the PCIe interface is going anywhere; you can mount an old NVMe drive in a new machine. Reply
  • Oxford Guy - Sunday, April 11, 2021 - link

    One can't choose performance with QLC. It's inherently slower.

    It's also inherently reduced in longevity.

    Remember, it has twice as many voltage states (causing a much bigger issue with drift) for just a 30% density increase.

    That's diminishing returns.
    Reply
  • haukionkannel - Friday, April 9, 2021 - link

    Well, soon QLC may be seen only in high-end top models, when midrange and low end go to PLC or whatever...
    For SSD manufacturers it makes a lot of sense because they save money that way. Profit!
    Reply
  • nandnandnand - Saturday, April 10, 2021 - link

    5/6/8 bits per cell might be ok if NAND manufacturers found some magic sauce to increase endurance. There was research to that effect going on a decade ago: https://ieeexplore.ieee.org/abstract/document/6479...

    TLC is not going away just yet, and they can just increase drive capacities to make it unlikely an average user will hit the limits.
    Reply
  • Samus - Sunday, April 11, 2021 - link

    When you consider how well perfected TLC is now that it has gone fully 3D, and the SLC cache + overprovisioning eliminate most of the performance/endurance issues, it makes you wonder if MLC will ever come back. It's almost completely disappeared, even in enterprise. Reply
  • Oxford Guy - Sunday, April 11, 2021 - link

    3D manufacturing killed MLC. It made TLC viable.

    There is no such magic bullet for QLC.
    Reply
  • FunBunny2 - Sunday, April 11, 2021 - link

    "There is no such magic bullet for QLC."

    well... the same bullet, ver. 2, might work. that would require two steps:
    - moving 'back' to an even larger node, assuming that there's sufficient machinery at such node available at scale
    - getting two or three times the layers as TLC currently uses

    I've no idea whether either is feasible, but willing to bet both gonads that both, at least, are required.
    Reply
