The Intel SSD 600p (512GB) Review
by Billy Tallis on November 22, 2016 10:30 AM EST

Intel's SSD 600p was the first PCIe SSD using TLC NAND to hit the consumer market. It is Intel's first consumer SSD with 3D NAND, and it is by far the most affordable NVMe SSD: current pricing is on par with mid-range SATA SSDs. While most other consumer PCIe SSDs have been enthusiast-oriented products aiming to deliver the highest performance possible, the Intel 600p merely attempts to break the speed limits of SATA without breaking the bank.
The Intel SSD 600p has almost nothing in common with Intel's previous consumer NVMe SSD, the Intel SSD 750. Where the SSD 750 uses Intel's in-house enterprise SSD controller with consumer-oriented firmware, the 600p uses a third-party controller. The SSD 600p is an M.2 PCIe SSD with peak power consumption only slightly higher than the SSD 750's idle draw. By comparison, the Intel SSD 750 is a high-power, high-performance drive that comes in PCIe expansion card and 2.5" U.2 form factors, both with sizable heatsinks.
| Intel SSD 600p Specifications Comparison | 128GB | 256GB | 512GB | 1TB |
| --- | --- | --- | --- | --- |
| Form Factor | single-sided M.2 2280 | | | |
| Controller | Intel-customized Silicon Motion SM2260 | | | |
| Interface | PCIe 3.0 x4 | | | |
| NAND | Intel 384Gb 32-layer 3D TLC | | | |
| SLC Cache Size | 4 GB | 8.5 GB | 17.5 GB | 32 GB |
| Sequential Read | 770 MB/s | 1570 MB/s | 1775 MB/s | 1800 MB/s |
| Sequential Write (SLC Cache) | 450 MB/s | 540 MB/s | 560 MB/s | 560 MB/s |
| 4KB Random Read (QD32) | 35k IOPS | 71k IOPS | 128.5k IOPS | 155k IOPS |
| 4KB Random Write (QD32) | 91.5k IOPS | 112k IOPS | 128k IOPS | 128k IOPS |
| Endurance | 72 TBW | 144 TBW | 288 TBW | 576 TBW |
| Warranty | 5 years | | | |
The Intel SSD 600p is our first chance to test Silicon Motion's SM2260 controller, their first PCIe SSD controller. Silicon Motion's SATA SSD controllers have built a great reputation for being affordable and power efficient while providing good mainstream performance. One key to the power efficiency of Silicon Motion's SATA controllers is their use of an optimized single-core ARC processor (licensed from Synopsys), but in order to meet the SM2260's performance target, Silicon Motion has finally switched to a dual-core ARM processor. The controller chip used on the SSD 600p has some customizations specifically for Intel and bears both Intel and SMI logos.
The 3D TLC NAND used on the Intel SSD 600p is the first-generation 3D NAND co-developed with Micron. We've already evaluated Micron's Crucial MX300 with the same 3D TLC and found it to be a great mainstream SATA SSD. The MX300 was unable to match the performance of Samsung's 3D TLC NAND as found in the 850 EVO, but the MX300 is substantially cheaper and remarkably power efficient, both in comparison to Samsung's SSDs and to other SSDs that pair the MX300's controller with planar NAND.
Intel uses the same 3D NAND flash die for its MLC and TLC parts. The MLC configuration, which has not yet found its way to the consumer SSD market, has a capacity of 256Gb (32GB) per die, which gives the TLC configuration a capacity of 384Gb (48GB). Micron took advantage of this odd size to offer the MX300 in non-standard capacities, but for the SSD 600p Intel is offering normal power-of-two capacities with large fixed-size SLC write caches in the spare area. The ample spare area also allows for a write endurance rating of about 0.3 drive writes per day for the duration of the five-year warranty.
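As a quick sanity check on that endurance rating, dividing the rated terabytes written by five years of daily full-drive writes works out to roughly 0.3 drive writes per day at every capacity. Here is a minimal sketch of the arithmetic, assuming 365-day years and decimal units (the TBW figures are taken from the table above):

```python
# Drive writes per day (DWPD) implied by the rated endurance (TBW)
# over the five-year warranty. Assumes 365-day years and decimal GB/TB.
RATED_TBW = {128: 72, 256: 144, 512: 288, 1024: 576}  # capacity in GB -> rated TBW

WARRANTY_DAYS = 5 * 365

for capacity_gb, tbw in RATED_TBW.items():
    total_writes_gb = tbw * 1000  # TBW expressed in GB
    dwpd = total_writes_gb / (WARRANTY_DAYS * capacity_gb)
    print(f"{capacity_gb:>5} GB: {dwpd:.2f} drive writes per day")
```

Each capacity works out to about 0.31 drive writes per day, consistent with Intel's rating.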
Intel 3D TLC NAND, four 48GB dies for a total of 192GB per package
The Intel SSD 600p shares its hardware with two other Intel products: the SSD Pro 6000p for business client computing and the SSD E 6000p for the embedded and IoT market. The Pro 6000p is the only one of the three to support encryption and Intel's vPro security features. The SSD 600p relies on the operating system's built-in NVMe driver and on Intel's consumer SSD Toolbox software, which was updated in October to add support for the 600p.
For this review, the primary comparisons will not be against high-end NVMe drives but against mainstream SATA SSDs, as these are ultimately the closest we can get to 'mid-to-low range' NVMe competition. The Crucial MX300 has given us a taste of what the Intel/Micron 3D TLC can do, and it is currently one of the best value SSDs on the market. The Samsung 850 EVO is very close to the Intel SSD 600p in price and sets the bar for the performance the SSD 600p needs to provide in order to be a good value.
Because the Intel SSD 600p is targeting a more mainstream audience and a more modest level of performance than most other M.2 PCIe SSDs, I have additionally tested its performance in the M.2 slot built into the testbed's ASUS Z97 Pro motherboard. In this configuration the SSD 600p is limited to a PCIe 2.0 x2 link, as compared to the PCIe 3.0 x4 link it gets during the ordinary testing process, where an adapter is used in the primary PCIe x16 slot. This extra set of results does not include power measurements, but it may be more useful to desktop users who are considering adding a cheap NVMe SSD to an older but compatible existing system.
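For a sense of how much that narrower link matters, here is a rough sketch of the theoretical per-direction bandwidth of the two links, accounting only for the 8b/10b and 128b/130b line encodings (real-world throughput is lower once packet and protocol overhead are included):

```python
# Rough per-direction bandwidth of the two PCIe links used in testing.
# Only the line-encoding overhead is accounted for; real throughput is lower.
def link_bandwidth_mb_s(gt_per_s: float, encoding_efficiency: float, lanes: int) -> float:
    """Usable bandwidth in MB/s for one direction of a PCIe link."""
    bits_per_s = gt_per_s * 1e9 * encoding_efficiency * lanes
    return bits_per_s / 8 / 1e6

pcie2_x2 = link_bandwidth_mb_s(5.0, 8 / 10, 2)     # motherboard M.2 slot
pcie3_x4 = link_bandwidth_mb_s(8.0, 128 / 130, 4)  # adapter in the primary x16 slot

print(f"PCIe 2.0 x2: ~{pcie2_x2:.0f} MB/s")  # ~1000 MB/s
print(f"PCIe 3.0 x4: ~{pcie3_x4:.0f} MB/s")  # ~3938 MB/s
```

Even the roughly 1 GB/s of the PCIe 2.0 x2 link comfortably exceeds the 600p's rated sequential write speed, but it will cap the rated sequential read speed of every capacity except the 128GB model.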
| AnandTech 2015 SSD Test System | |
| --- | --- |
| CPU | Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled) |
| Motherboard | ASUS Z97 Pro (BIOS 2701) |
| Chipset | Intel Z97 |
| Memory | Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T) |
| Graphics | Intel HD Graphics 4600 |
| Desktop Resolution | 1920 x 1200 |
| OS | Windows 8.1 x64 |
- Thanks to Intel for the Core i7-4770K CPU
- Thanks to ASUS for the Z97 Deluxe motherboard
- Thanks to Corsair for the Vengeance 16GB DDR3-1866 DRAM kit, RM750 power supply, Carbide 200R case, and Hydro H60 CPU cooler
63 Comments
Samus - Wednesday, November 23, 2016 - link
Multicast helped but when you are saturating the backbone of the switch with 60Gbps of traffic it only slightly improves transfer. With light traffic we were getting 170-190MB/sec transfer rate but with a full image battery it was 120MB/sec. Granted with Unicast it never cracked 110MB/sec under any condition.

ddriver - Wednesday, November 23, 2016 - link
Multicast would be UDP, so it would have less overhead, which is why you are seeing better bandwidth utilization. Point is with multicast you could push the same bandwidth to all clients simultaneously, whereas without multicast you'd be limited by the medium and switching capacity on top of the TCP/IP overhead.

Assuming dual 10gbit gives you the full theoretical 2500 MB/s, if you have 100 MB/s to each client, that means you will be able to serve no more than 25 clients. Whereas with multicast you'd be able to push those 170-190 MB/s to any number of clients, tens, hundreds, thousands or even millions, and by simply daisy chaining gigabit routers you make sure you don't run out of switching capacity. Of course, assuming you want to send the same identical data to all of them.
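To make the arithmetic in the comment above concrete, here is a minimal sketch; the 2500 MB/s aggregate and 100 MB/s per-client figures are the commenter's assumptions, and the ~180 MB/s multicast rate is the number reported earlier in the thread:

```python
# Back-of-the-envelope comparison of unicast vs. multicast imaging,
# using the figures quoted in the comment thread (all are assumptions).
aggregate_bandwidth = 2500  # MB/s, dual 10GbE at its theoretical best
per_client_rate = 100       # MB/s delivered to each client over unicast

max_unicast_clients = aggregate_bandwidth // per_client_rate
print(f"Unicast: at most {max_unicast_clients} clients at {per_client_rate} MB/s each")

# With multicast the switches replicate a single stream, so the sender's
# bandwidth cost does not grow with the number of clients.
multicast_rate = 180        # MB/s, as reported above for multicast imaging
print(f"Multicast: ~{multicast_rate} MB/s to any number of clients")
```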
BrokenCrayons - Wednesday, November 23, 2016 - link
"Also, he doesn't really have "his specific application", he just spat a bunch of nonsense he believed would be cool :D"Technical sites are great places to speculate about what-ifs of computer technology with like minded people. It's absolutely okay to disagree with someone's opinion, but I don't think you're doing so in a way that projects your thoughts as calm, rational, or constructive. It seems as though idle speculation on a very insignificant matter is treated as a threat worthy of attack in your mind. I'm not sure why that's the case, but I don't think it's necessary. I try to tell my children to keep things in perspective and not to make a mountain out of a problem if its not necessary. It's something that helps them get along in their lives now that they're more independent of their system of parental checks and balances. Maybe stopping for a few moments to consider whether or not the thing that's upsetting you and making you feel mad inside is a good idea. It could put some of these reader comments into a different, more lucid perspective.
ddriver - Tuesday, November 22, 2016 - link
Oh and obviously, he meant "image" as in pictures, not image as in OS images LOL, that was made quite obvious by the "media" part.

tinman44 - Monday, November 28, 2016 - link
The 960 EVO is only a little bit more expensive than the 600p for consistent, high performance. Any hardware implementation where more than a few people are using the same drive should justify getting something worthwhile, like a 960 Pro or a real enterprise SSD, but the 960 EVO comes very close to the performance of those high-end parts for a lot less money.

ddriver: compare perf consistency of the 600p and the 960 EVO, you don't want the 600p.
vFunct - Wednesday, November 23, 2016 - link
> There is already a product that's unbeatable for media storage - an 8tb ultrastar he8. As ssd for media storage - that makes no sense, and a 100 of those only makes a 100 times less sense :D

You've never served an image gallery, have you?
You know it takes 5-10 ms to serve a single random long-tail image from an HDD. And a single image gallery on a page might need to serve dozens (or hundreds) of them, taking up to 1 second of drive time.

Do you want to tie up an entire hard drive for one second, when you have hundreds of people accessing your galleries per second?
Hard drives are terrible for image serving on the web, because of their access times.
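A quick sketch of the drive-time arithmetic behind that claim, using the per-image latency from the comment above and assuming a 100-image gallery:

```python
# Time one HDD spends servicing a single gallery page of random reads,
# using the latency range quoted above and an assumed 100-image gallery.
seek_ms_low, seek_ms_high = 5, 10  # ms per random long-tail image
images_per_gallery = 100           # "dozens (or hundreds)" of images

low_s = images_per_gallery * seek_ms_low / 1000
high_s = images_per_gallery * seek_ms_high / 1000
print(f"{images_per_gallery} random reads: {low_s:.1f}-{high_s:.1f} s of drive time")
```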
ddriver - Wednesday, November 23, 2016 - link
You probably don't know, but it won't really matter, because you will be bottlenecked by network bandwidth. hdd access times would be completely masked off. Also, there is caching, which is how the internet ran just fine before ssds became mainstream.

You will not be losing any service time waiting for the hdd, you will be only limited by your internet bandwidth. Which means that regardless of the number of images, the client will receive the entire data set only 5-10 msec slower compared to an ssd. And regardless of how many clients you may have connected, you will always be limited by your bandwidth.
Any sane server implementation won't read an entire gallery, which may be hundreds of megabytes, in a burst before it services another client. So no single client will ever block the hdd for a second. Practically every contemporary hdd has NCQ, which means the device will deliver other requests while your network is busy delivering data. Servers buffer data, so say you have two clients requesting 2 different galleries at the same time, the server will read the first image for the first client and begin sending it, and then read the first image for the second client and begin sending it. The hdd will actually be idling quite a lot waiting, because your connection bandwidth will be vastly exceeded by the drive's performance. And regardless of how many clients you may have, that will not put any more strain on the hdd, as your network bandwidth will remain the same bottleneck. If people end up waiting too long, it won't be the hdd but the network connection.
But thanks for once again proving you don't have a clue, not that it wasn't obvious from your very first post ;)
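As an editorial aside, here is a toy sketch of the interleaving described above: each simulated disk read is short next to the time spent pushing the image over a slower network link, so requests from different clients naturally interleave and the drive spends much of its time waiting. This is not how any particular web server is implemented, and all timings are made-up placeholders:

```python
import asyncio

async def serve_gallery(client: str, images: int) -> None:
    """Simulate serving one client's gallery, one image at a time."""
    for i in range(images):
        await asyncio.sleep(0.008)  # ~8 ms simulated random HDD read (assumption)
        print(f"{client}: read image {i}")
        await asyncio.sleep(0.050)  # simulated time to send the image over the network
        print(f"{client}: sent image {i}")

async def main() -> None:
    # Two clients requesting different galleries at the same time.
    await asyncio.gather(serve_gallery("client-1", 3), serve_gallery("client-2", 3))

asyncio.run(main())
```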
vFunct - Friday, November 25, 2016 - link
> You probably don't know, but it won't really matter, because you will be bottlenecked by network bandwidth. hdd access times would be completely masked off. Also, there is caching, which is how the internet ran just fine before ssds became mainstream.

ddriver, just stop. You literally have no idea what you're talking about.
Image galleries aren't hundreds of megabytes. Who the hell would actually send out that much data at once? No image gallery sends out full high-res images at once. Instead, they might be 50 mid-size thumbnails of 20kb each that you scroll through on your mobile device, and send high-res images later when you zoom in. This is like literally every single e-commerce shopping site in the world.
Maybe you could take an internship at a startup to gain some experience in the field? But right now, I recommend you never, ever speak in public ever again, because you don't know anything at all about web serving.
close - Wednesday, November 23, 2016 - link
@ddriver, I really didn't expect you to laugh at other people's ideas for new hardware given your "thoroughly documented" 5.25" hard drive brain-fart.

ddriver - Wednesday, November 23, 2016 - link
Nobody cares what clueless troll wannabes like you expect, you are entirely irrelevant.