
  • vFunct - Tuesday, November 22, 2016 - link

    These would be great for server applications, if I could find PCIe add-in cards that have 4x M.2 slots.

    I'd love to be able to stick 10 or 100 or so of these in a server, as an image/media store.
  • ddriver - Tuesday, November 22, 2016 - link

    You should call intel to let them know they are marketing it in the wrong segment LOL
  • ddriver - Tuesday, November 22, 2016 - link

    To clarify, this product is evidently the runt of the nvme litter. For regular users, it is barely faster than sata devices. And once it runs out of cache, it actually gets slower than a sata device. Based on its performance and price, I won't be surprised if its reliability is just as subpar. Putting such a device in a server is like putting a drunken hobo in a Lamborghini.
  • BrokenCrayons - Tuesday, November 22, 2016 - link

    Assuming a media storage server scenario, you'd be looking at write-once, read-many workloads where the cache issues aren't going to pose a significant problem to performance. Using an array of them in some form of RAID would also mitigate much of that write performance penalty. Of course that applies to SATA devices as well, but there's a density advantage realized in NVMe.
  • vFunct - Tuesday, November 22, 2016 - link

    bingo.

    Now, how can I pack a bunch of these in a chassis?
  • BrokenCrayons - Tuesday, November 22, 2016 - link

    I'd think the best answer to that would be a custom motherboard with the appropriate slots on it to achieve high storage densities in a slim (maybe something like a 1/2 1U rackmount) chassis. As for PCIe slot expansion cards, there are a few out there that would let you install 4x M.2 SSDs on a PCIe slot, but they'd add to the cost of building such a storage array. In the end, I think we're probably a year or three away from using NVMe SSDs in large storage arrays outside of highly customized and expensive solutions for companies that have the clout to leverage something that exotic.
  • ddriver - Tuesday, November 22, 2016 - link

    So are you going to make that custom motherboard for him, or will he be making it for himself? While you are at it, you may also want to make a cpu with 400 pcie lanes so that you can connect those 100 lousy budget 600ps.

    Because I bet the industry isn't itching to make products for clueless and moneyless dummies. There is already a product that's unbeatable for media storage - an 8tb ultrastar he8. An ssd for media storage makes no sense, and 100 of those only makes 100 times less sense :D
  • BrokenCrayons - Tuesday, November 22, 2016 - link

    "So are you going to make that..."

    Sure, okay.
  • Samus - Tuesday, November 22, 2016 - link

    ddriver, you are ignoring his specific application when judging his solution to be wrong. For imaging, sequential throughput is all that matters. I used to work part time in PC refurbishing for education, and we built a bench to image 64 PCs at a time over 1GbE, with a dual 10GbE fiber backbone to a server using an OCZ RevoDrive PCIe SSD, which was at the time the best option on the market. Even this drive was crippled by a single 10GbE connection, let alone dual 10GbE connections, which is why we eventually installed TWO of them in RAID 1.

    This hackjob configuration allowed imaging 60+ PCs simultaneously over GbE in about 7 minutes, booting via PXE and running a diskpart script and ImageX to uncompress a sysprep'd image.

    The RevoDrives were not reliable. One would fail like clockwork almost annually, and eventually in 2015, after I had left, I heard they fell back to a pair of Plextor M.2 2280s in a PCIe x4 adapter for better reliability. It was, and still is, very expensive to do this compared to what the 600p is offering.

    Any high-throughput sequential reading application would greatly benefit from the performance and price the 600p is offering, not to mention Intel has class-leading reliability in the SSD sector, with a 0.3%/year failure rate according to their own internal 2014 data... there is no reason to think Intel, of all companies, won't keep reliability as a high priority. After all, they are still the only company to master the SandForce 2200, a controller that had incredibly high failure rates across every other vendor and effectively led to OCZ's bankruptcy.
  • ddriver - Tuesday, November 22, 2016 - link

    So how does all this connect to, and I quote, "stick 10 or 100 or so of these in a server, as an image/media store"?

    Also, he doesn't really have "his specific application", he just spat a bunch of nonsense he believed would be cool :D

    Lastly, next time try multicasting - that way you can simultaneously send data to 64 hosts at 1 gbps without the need for dual 10gbit or an uber expensive switch, achieving full parallelism and an effective 64 gbps. In that case a regular sata ssd or even an hdd would have sufficed, as even mechanical drives have no problem saturating the 1 gbps lines going to the targets. You could have done the same work, or even better, at like 1/10 of the cost. You could even do 1000 systems at a time, or as many as you want - just daisy chain more switches; terabit or petabit effective cumulative bandwidth is just as easily achievable.
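    For illustration, a rough sketch of the sending side of that kind of multicast push (the group address, port and image file name here are made up, and real imaging tools such as WDS or udpcast layer sequencing and retransmission on top of the raw datagrams):

        import socket

        GROUP, PORT = "239.1.2.3", 5007     # hypothetical multicast group and port
        CHUNK = 1400                        # keep each datagram under a typical Ethernet MTU

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the local segment

        with open("image.wim", "rb") as img:         # hypothetical deployment image
            while chunk := img.read(CHUNK):
                sock.sendto(chunk, (GROUP, PORT))    # sent once, received by every subscribed client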
  • Samus - Wednesday, November 23, 2016 - link

    Multicast helped, but when you are saturating the backbone of the switch with 60Gbps of traffic it only slightly improves transfers. With light traffic we were getting 170-190MB/sec transfer rates, but with a full image battery it was 120MB/sec. Granted, with unicast it never cracked 110MB/sec under any condition.
  • ddriver - Wednesday, November 23, 2016 - link

    Multicast would be UDP, so it would have less overhead, which is why you are seeing better bandwidth utilization. Point is with multicast you could push the same bandwidth to all clients simultaneously, whereas without multicast you'd be limited by the medium and switching capacity on top of the TCP/IP overhead.

    Assuming dual 10gbit gives you the full theoretical 2500 MB/s, if you have 100 MB/s to each client, that means you will be able to serve no more than 25 clients. Whereas with multicast you'd be able to push those 170-190 MB/s to any number of clients - tens, hundreds, thousands or even millions - and by daisy chaining simple gigabit switches you make sure you don't run out of switching capacity. Of course, that assumes you want to send the same identical data to all of them.
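    The arithmetic behind that, as a quick sketch (the figures are the ones assumed above, not measurements):

        uplink_MBps = 2 * 10_000 / 8      # dual 10GbE at its theoretical best: 2500 MB/s
        per_client  = 100                 # MB/s pushed to each client over gigabit
        print(uplink_MBps / per_client)   # unicast tops out around 25 simultaneous clients
        # With multicast, a single 170-190 MB/s stream reaches every subscribed
        # client at once, so the uplink no longer caps the client count.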
  • BrokenCrayons - Wednesday, November 23, 2016 - link

    "Also, he doesn't really have "his specific application", he just spat a bunch of nonsense he believed would be cool :D"

    Technical sites are great places to speculate about the what-ifs of computer technology with like-minded people. It's absolutely okay to disagree with someone's opinion, but I don't think you're doing so in a way that projects your thoughts as calm, rational, or constructive. It seems as though idle speculation on a very insignificant matter is treated as a threat worthy of attack in your mind. I'm not sure why that's the case, but I don't think it's necessary. I try to tell my children to keep things in perspective and not to make a mountain out of a problem if it's not necessary. It's something that helps them get along in their lives now that they're more independent of their system of parental checks and balances. Maybe stop for a few moments to consider whether the thing that's upsetting you and making you feel mad inside is really worth it. It could put some of these reader comments into a different, more lucid perspective.
  • ddriver - Tuesday, November 22, 2016 - link

    Oh and obviously, he meant "image" as in pictures, not image as in os images LOL, that was made quite obvious by the "media" part.
  • tinman44 - Monday, November 28, 2016 - link

    The 960 EVO is only a little bit more expensive than the 600p and delivers consistent, high performance. Any hardware implementation where more than a few people are using the same drive should justify getting something worthwhile, like a 960 Pro or a real enterprise SSD, but the 960 EVO comes very close to the performance of those high-end parts for a lot less money.

    ddriver: compare perf consistency of the 600p and the 960 EVO, you don't want the 600p.
  • vFunct - Wednesday, November 23, 2016 - link

    > There is already a product that's unbeatable for media storage - an 8tb ultrastar he8. As ssd for media storage - that makes no sense, and a 100 of those only makes a 100 times less sense :D

    You've never served an image gallery, have you?

    You know it takes 5-10 ms to serve a single random long-tail image from an HDD. And a single image gallery on a page might need to serve dozens (or hundreds) of them, taking up to 1 second of drive time.

    Do you want to tie up an entire hard drive for one second, when you have hundreds of people accessing your galleries per second?

    Hard drives are terrible for image serving on the web, because of their access times.
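    As a back-of-the-envelope sketch of that math (the numbers are the assumptions above, not benchmark results):

        seek_ms_per_image  = 10                      # random long-tail read from an HDD
        images_per_gallery = 100                     # a large gallery page
        drive_ms = seek_ms_per_image * images_per_gallery
        print(drive_ms / 1000, "s of drive time")    # 1.0 s tied up for a single visitor
        print(1000 / drive_ms, "galleries/s")        # ~1 gallery per second per spindle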
  • ddriver - Wednesday, November 23, 2016 - link

    You probably don't know, but it won't really matter, because you will be bottlenecked by network bandwidth. hdd access times would be completely masked off. Also, there is caching, which is how the internet ran just fine before ssds became mainstream.

    You will not be losing any service time waiting for the hdd, you will be only limited by your internet bandwidth. Which means that regardless of the number of images, the client will receive the entire data set only 5-10 msec slower compared to an ssd. And regardless of how many clients you may have connected, you will always be limited by your bandwidth.

    Any sane server implementation won't read an entire gallery, which may be hundreds of megabytes, in one burst before it services another client. So no single client will ever block the hdd for a second. Practically every contemporary hdd has ncq, which means the device will deliver other requests while your network is busy delivering data. Servers buffer data, so say you have two clients requesting 2 different galleries at the same time: the server will read the first image for the first client and begin sending it, and then read the first image for the second client and begin sending it. The hdd will actually be idling quite a lot while waiting, because your connection bandwidth will be vastly exceeded by the drive's performance. And regardless of how many clients you may have, that will not put any more strain on the hdd, as your network bandwidth will remain the same bottleneck. If people end up waiting too long, it won't be the hdd but the network connection.
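    As a rough sketch of that interleaving (a toy example, not production code - the paths and the gallery lookup are hypothetical, and a real server would push the blocking file reads onto a thread pool):

        import asyncio

        CHUNK = 64 * 1024

        def gallery_paths(name):
            # Hypothetical lookup from a gallery name to image files on disk.
            return [f"/srv/galleries/{name}/{i}.jpg" for i in range(50)]

        async def handle_client(reader, writer):
            gallery = (await reader.readline()).decode().strip()   # e.g. "cats"
            for path in gallery_paths(gallery):
                with open(path, "rb") as f:                        # quick disk read
                    while chunk := f.read(CHUNK):
                        writer.write(chunk)
                        await writer.drain()   # yields while the NIC drains, so other
                                               # clients' reads and sends proceed here
            writer.close()
            await writer.wait_closed()

        async def main():
            server = await asyncio.start_server(handle_client, "0.0.0.0", 8080)
            async with server:
                await server.serve_forever()

        asyncio.run(main())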

    But thanks for once again proving you don't have a clue, not that it wasn't obvious from your very first post ;)
  • vFunct - Friday, November 25, 2016 - link

    > You probably don't know, but it won't really matter, because you will be bottlenecked by network bandwidth. hdd access times would be completely masked off. Also, there is caching, which is how the internet ran just fine before ssds became mainstream.

    ddriver, just stop. You literally have no idea what you're talking about.

    Image galleries aren't hundreds of megabytes. Who the hell would actually send out that much data at once? No image gallery sends out full high-res images at once. Instead, it might send 50 mid-size thumbnails of 20kb each that you scroll through on your mobile device, and serve the high-res images later when you zoom in. This is like literally every single e-commerce shopping site in the world.

    Maybe you could take an internship at a startup to gain some experience in the field? But right now, I recommend you never, ever speak in public ever again, because you don't know anything at all about web serving.
  • close - Wednesday, November 23, 2016 - link

    @ddriver, I really didn't expect you to laugh at other people's ideas for new hardware given your "thoroughly documented" 5.25" hard drive brain-fart.
  • ddriver - Wednesday, November 23, 2016 - link

    Nobody cares what clueless troll wannabes like you expect, you are entirely irrelevant.
  • close - Thursday, November 24, 2016 - link

    ddriver, you're the guy who insisted he designed a 5.25" hard drive that's better than anything on the market, despite being laughed at and proven wrong beyond any shadow of a doubt, yet you still insist on beginning and ending almost all of your comments with "you don't have a clue" or "you probably don't know". Projecting much?

    You're not an engineer and you're obviously not even remotely good at tech. You have no idea (and it actually does matter) how this works. You just make up scenarios in your head of how you *think* it works, and then you throw a tantrum when you're contradicted by people who don't have to imagine this stuff - they know it.

    In your scenario you have 2 clients using 2 galleries at the same time (reasonable enough, 2 users/server, just like any respectable content server). Your server reads image 1, sends it, then reads image 2 and sends it, because when working with a gallery this is exactly how it works (it definitely won't be 200 users requesting thousands of thumbnails for each gallery and then having to send those to each client). Then the network bandwidth will be an issue because your content server is limited to 100Mbps, maybe 1Gbps, since you only designed it for 2 concurrent users. A server delivering media content - so a server whose ONLY job is to DELIVER MEDIA CONTENT - will have that kind of bandwidth that's "vastly exceeded by the drive's performance", the kind that can't cope with several hard drives furiously seeking hundreds or thousands of files. And of course it doesn't matter if you have 2 users or 2000, it's all the same to a hard drive, it simply sucks it up and takes it like a man. That's why they're called HARD...

    Most content delivery servers use a hefty solid state cache in front of the hard drives and hope that the content is in the cache. The only reasons spinning drives are still in the picture are capacity and cost per GB. Except ddriver's 5.25" drive that beats anything in every metric imaginable.

    Oh and BTW, before the internet became mainstream there was slightly less data to move around. While drive performance has increased 10-fold since then, the data being moved has increased 100 times or more.
    But heck, we can stick to your scenario of 2 users accessing 2 pictures on a content server with a 10/100 half-duplex link.

    Now quick, whip out those good ol' lines: "you're a troll wannabe", "you have no clue". That will teach everybody that you're not a wannabe and not to piss all over you. ;)
  • vFunct - Wednesday, November 23, 2016 - link

    > I'd think the best answer to that would be a custom motherboard with the appropriate slots on it to achieve high storage densities in a slim (maybe something like a 1/2 1U rackmount) chassis.

    I agree that the best option would be for motherboard makers to create server motherboards with a ton of vertical M.2 slots, like DIMM slots, and space for airflow. We also need to be able to hot-swap these out by sliding out the chassis, uncovering the case, and swapping out a defective one as needed.

    A problem with U.2 connectors is that they have thick cabling all over the place. Having a ton of M.2 slots on the motherboard avoids all that.
  • saratoga4 - Tuesday, November 22, 2016 - link

    If only they made it with a SATA interface!
  • DanNeely - Tuesday, November 22, 2016 - link

    As a SATA device it'd be meh. Peak performance would be bottlenecked at the same point as every other SATA SSD, and it loses out to the 850 evo, nevermind the 850 pro in consistency.
  • Samus - Tuesday, November 22, 2016 - link

    There are lots of good, reliable SATA M.2 drives on the market. The thing that makes the 600p special is that it is priced at near parity with them, while most PCIe SSDs carry a 20-30% premium.

    Really good M.2 2280 options are the MX300 or 850 EVO. SanDisk has some great M.2 2260 drives.
  • ddriver - Tuesday, November 22, 2016 - link

    Even in the case of such a "server" you are better off with sata ssds: get a decent hba or raid card or two, connect 8-16 sata ssds and you have it. Price is better, performance in raid would be very good, and when a drive needs replacing, you can do it in 30 seconds without even powering off the machine.

    The only actual sense this product makes is in budget ultra portable laptops or x86 tablets, because it takes up less space. Performance wise there will not be any difference in user experience between this and a sata drive, but it will enable a thinner chassis.

    There is no "density advantage" for nvme, there is only a FORM FACTOR advantage, and that only in scenarios where it's the system's primary and sole storage device. What enables density is the nand density, and the same dense chips can be used just as well in a sata or sas drive. Furthermore, I don't recall seeing a mobo that has more than 2 m.2 slots. A pcie card with 4 m.2 slots will not be exactly compact either. I've seen such cards - they are as big as an upper mid-range video card. One takes about as much space as 4 standard 2.5" drives, but unlike 4x 2.5" drives you can't put it into an htpc form factor.
  • ddriver - Tuesday, November 22, 2016 - link

    Also, the 1tb 600p is nowhere to be found, and even so, m.2 peaks at 2tb with the 960 pro, which is wildly expensive. Whereas with 2.5" there is already a 4tb option and 8tb is entirely possible - the only thing that's missing is demand. Samsung demoed a 16tb 2.5" ssd over a year ago. I'd say the "density advantage" is very much on the side of 2.5" ssds.
  • BrokenCrayons - Tuesday, November 22, 2016 - link

    Probably not.
  • XabanakFanatik - Tuesday, November 22, 2016 - link

    If Samsung stopped refusing to make two-sided M.2 drives and actually put the space to use there could easily be a 4TB 960 Pro.... and it would cost $2800.
  • JamesAnthony - Tuesday, November 22, 2016 - link

    Those cards are widely available (I have some): a PCIe 3.0 x16 interface and 4 M.2 slots, with each slot getting PCIe 3.0 x4 bandwidth, plus a cooling fan for them.

    However, WHY would you want to do that when you could just go get an Intel P3520 2TB drive, or for higher speed a P3700 2TB drive? Those are standard PCIe-format cards for either low-profile or standard-profile slots.

    The only advantage an M.2 drive has is being small, but if you are going to put it in a standard PCIe slot, then why not just go with a purpose built PCIe NVMe SSD drive & not have to worry about thermal throttling on the M.2 cards?
  • ddriver - Tuesday, November 22, 2016 - link

    A fool can dream James, a fool can dream...

    He also wants to live in a really big house made of cards and bathe in dry water, so his hair doesn't get wet :D
  • Kevin G - Wednesday, November 23, 2016 - link

    Conceptually a PCIe bridge/NVMe RAID controller could implement additional PCIe lanes on the drive side for RAID5/6 purposes. For example, 16 lanes to the bridge and six 4 lane slots on the other end. There is still the niche in the server space where reliability is king and having removable and redundant media is important. Granted, this niche is likely served better by U.2 for hot swap bays than M.2 but they'd use the same conceptual bridge/RAID chip proposed here.
  • vFunct - Wednesday, November 23, 2016 - link

    > However WHY would you want to do that when you could just go get an Intel P3520 2TB drive or for higher speed a P3700 2TB drive.

    Those are geared towards database applications (and great for it, as I use them), not media stores.

    Media stores are far more cost sensitive.
  • jjj - Tuesday, November 22, 2016 - link

    And this is why SSD makers should be forced to list QD1 perf numbers, it's getting ridiculous.
  • powerarmour - Tuesday, November 22, 2016 - link

    I hate TLC.
  • Notmyusualid - Tuesday, November 22, 2016 - link

    I'll second that.
  • ddriver - Tuesday, November 22, 2016 - link

    Then you will love QLC
  • BrokenCrayons - Wednesday, November 23, 2016 - link

    I'm not a huge fan either, but I was also reluctant to buy into MLC over much more durable SLC despite the cost and capacity implications. At this point, I'd like to see some of these newer, much more durable solid state memory technologies that are lurking in labs find their way into the wider world. Until then, TLC is cheap and "good enough" for relatively disposable consumer electronics, though I do keep a backup of my family photos and the books I've written...well, several backups since I'd hate to lose those things.
  • bug77 - Tuesday, November 22, 2016 - link

    The only thing that comes to mind is: why, intel, why?
  • milli - Tuesday, November 22, 2016 - link

    Did you test the MX300 with the original firmware or the new firmware?
  • Billy Tallis - Tuesday, November 22, 2016 - link

    The old firmware. The testbed has been too busy with PCIe SSDs lately for me to have a chance to put the November MX300 update through its paces.
  • seanmac2 - Wednesday, November 23, 2016 - link

    I would never intentionally buy this product but it bothers me anyway because laptops advertise things like "512 GB PCIe SSD" and I'm left wondering if I'll get this or something sweet like a Samsung 950/951/960.
  • ddriver - Wednesday, November 23, 2016 - link

    You get what you pay for. The 600p will likely go into budget products, which won't be CPU powerhouses that could be limited by ssd performance. Most applications, even prosumer grade software, show like a 1-2% improvement from going sata to nvme, and this particular product, although technically nvme, is more in the sata ballpark.
  • Flying Aardvark - Friday, November 25, 2016 - link

    That's why Intel products cost more than others. You do get what you pay for. Intel SSDs have the industry's best reliability, which matters most when a drive fails prematurely. If you're using M.2, it's unlikely you'll see any real-world difference between the 600P and anything else.
    The true step up is the heavy duty Intel 750 stuff with heatsink and zero throttling concerns under heavy, sustained load.
  • Meteor2 - Wednesday, November 23, 2016 - link

    A suggestion: you could link to the previous reviews of devices the first time you mention them, e.g. the 850 EVO. It would save hunting around for them and encourage more page views as people read those reviews before coming back.
  • Meteor2 - Wednesday, November 23, 2016 - link

    So where's this drive falling down compared to the other NVMe drives? Is it the TLC NAND, the construction of the dies, the controller, or something else?
  • DominionSeraph - Wednesday, November 23, 2016 - link

    "1750MB/s sequential read", and not a single test showing if it could actually reach 1750MB/s sequential read in any real life tasks.
    Great job there.
  • beginner99 - Wednesday, November 23, 2016 - link

    WTF is this? It's another useless TLC crap drive. Intel, you're ruining your reputation and brand with crap like this. I don't see why I should buy this over an MX300 or a similar crappy TLC entry-level ssd that is even cheaper.
  • Flying Aardvark - Friday, November 25, 2016 - link

    Everything is going to be 3D TLC soon except the truly next-level stuff like the Intel 750. 3D TLC is not planar TLC.
  • creed3020 - Wednesday, November 23, 2016 - link

    Billy, when are these results going to be included in Bench? I was hoping to compare to my Crucial MX100 but cannot find these Intel drives under SSD2015.
  • Billy Tallis - Thursday, November 24, 2016 - link

    They're in Bench now.
  • Flying Aardvark - Friday, November 25, 2016 - link

    I have the 1TB 600P and love it. I bought it knowing full well it wasn't a benchmark king. Don't care - low-QD performance has hardly improved for quite some time. But at the price, a 5-year warranty with a 0.3% failure rate per year made it a no-brainer over the 960 EVO for me.
    I can't get it to slow down or stutter in my case, and if you can, you should probably step all the way up to the Intel 750, heatsink intact and all.
  • crazyowl - Sunday, November 27, 2016 - link

    I'm not sure how to formulate this correctly so as not to hurt the reputation of the product, but there's been a report of a 600p burning a motherboard's traces when installed via a DeLonghi adapter card. AnandTech, could you comment on that issue? I came across it in a review for the 600p at a respectable online shop.
  • crazyowl - Sunday, November 27, 2016 - link

    Sorry, it was DeLock, not DeLonghi. The latter seems to be a houseware brand.
  • Billy Tallis - Sunday, November 27, 2016 - link

    Our testing showed the 600p to be relatively power hungry by the standards of M.2 PCIe SSDs, but it most certainly wasn't drawing enough current to be a danger to any equipment that is capable of safely powering other M.2 PCIe SSDs. Whatever you read about was likely the result of either a manufacturing defect in the board that was supplying power to the M.2 drive, or the result of improper installation leading to a short circuit. I've killed a motherboard through the latter means, but it was only due to the modifications I've made to facilitate measuring PCIe card power consumption.
  • Xajel - Monday, November 28, 2016 - link

    I would love to have an NVMe SSD. Sadly my system is old (ASUS P8Z77), so it's not able to boot from NVMe... nothing is wrong with the chipset - it can do it - but ASUS never released a BIOS update. There are some unofficial BIOS mods which can enable this, but there's no guarantee they will work, and they might brick the board.
  • el-loc0 - Monday, November 28, 2016 - link

    @AnandTech: what equipment do you use to measure power consumption?
  • Billy Tallis - Tuesday, November 29, 2016 - link

    For PCIe SSDs, I use a riser card from Adex Electronics with current sense resistors on the 3.3V and 12V supply lines. For SATA SSDs, I use a multimeter spliced into the power cable to measure current directly.
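    For anyone curious how readings from such a riser turn into the power figures in the review, here's a rough sketch of the math (the 10 milliohm shunt value and the voltage drops are made-up examples, not the actual test setup):

        def rail_power(rail_v, shunt_drop_v, shunt_ohms=0.010):
            current_a = shunt_drop_v / shunt_ohms    # Ohm's law: I = V_drop / R_shunt
            return rail_v * current_a                # P = V * I for that supply rail

        # Sum the 3.3V and 12V rails feeding the PCIe slot.
        total_w = rail_power(3.3, 0.0075) + rail_power(12.0, 0.0002)
        print(round(total_w, 2), "W")                # roughly 2.7 W for these example readings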
  • el-loc0 - Tuesday, November 29, 2016 - link

    Thanks for the quick reply, Billy. Do PCIe SSDs draw power from both lines, 3.3 V and 12 V? Do you use a current clamp, or how do you measure on the riser card? Which multimeter do you use?
  • SanX - Thursday, December 1, 2016 - link

    Did Intel pay for this BS review?
  • ramvalleru - Tuesday, December 6, 2016 - link

    What advantages does the Intel 600p have over the Samsung 850 EVO with its PCIe x4 interface? Less of a bottleneck with multi-application writes and reads?
  • KAlmquist - Friday, December 9, 2016 - link

    If you mean compared to the 960 EVO, the 600p is less expensive. Also, with the 600p you are getting the Intel brand name and quality control, backed up with a 5 year warranty vs. a 3 year warranty on the 960 EVO.
  • RetsamCP - Saturday, December 24, 2016 - link

    I may just be a little confused, but how did the 960 Pro 2TB bench an average service time latency of 160.9 ms in the Destroyer bench but score 0 for percentage of service times >100 ms?

    There had to be service times over 100 ms for the average to be over 100 ms, but how was the average affected so much when service times >100 ms made up <0.01% of the total benchmark?

    What am I missing?
