39 Comments

  • sovanbu - Monday, July 7, 2014 - link

    Glad I still have 20 days to return the 840 Pro I bought. This will be much faster and more reliable.
  • Solid State Brain - Monday, July 7, 2014 - link

    Interesting test. A bit disappointing that the NAND on this drive is officially "only" rated for 6000 P/E cycles. Not that it's ever going to be a problem for most users, likely not even for most of those dealing with video editing (which often involves a large amount of writes, but usually of a sequential nature, meaning low write amplification), especially given Samsung's client-environment policy on its 10-year warranty for this drive.

    I'm assuming that to obtain an 11x worst-case write amplification you hammered the drive with sustained writes raw to the drive (bypassing the OS), am I correct? During normal or even intense usage with typical pro/consumer workloads I think it would be very difficult for users to observe such a high value, with TRIM and garbage collection algorithms sorting out data on the NAND. I also guess that people who consciously write a huge amount of data will want to increase the overprovisioning area for performance and NAND endurance reasons.

    Speaking of which, it would be interesting to find out how much of an effect increasing OP has in decreasing worst case write amplification.
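    A minimal Python sketch of how those variables interact, for anyone wanting to ballpark it: the 6000 P/E rating and the 11x worst-case write amplification are the figures discussed here, while the 256GB capacity and 50GB/day of host writes are purely illustrative assumptions.

        # Rough endurance estimate from P/E rating and write amplification.
        CAPACITY_GB = 256    # assumed drive capacity (illustrative)
        PE_CYCLES = 6000     # official rating for this NAND
        HOST_GB_DAY = 50     # assumed average host writes per day (illustrative)

        def endurance_tb(capacity_gb, pe_cycles, write_amp):
            """Total host writes (TB) before the rated P/E budget is exhausted."""
            return capacity_gb * pe_cycles / write_amp / 1000

        for wa in (1.5, 3.0, 11.0):  # typical client, heavier, and worst-case WA
            tb = endurance_tb(CAPACITY_GB, PE_CYCLES, wa)
            years = tb * 1000 / HOST_GB_DAY / 365
            print(f"WA {wa}: ~{tb:.0f} TB of host writes (~{years:.0f} years at {HOST_GB_DAY} GB/day)")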
  • Kristian Vättö - Monday, July 7, 2014 - link

    Yes, I tested with a raw drive. Like I said, it's a worst-case scenario that may apply to some write-intensive enterprise workloads -- client workloads, even heavy ones, don't usually exceed a queue depth of 5 anyway.

    I'll probably run some more in-depth write amplification tests, including playing with the over-provisioning, for the enterprise review of the 850 Pro, assuming I manage to find the time for all that :)
  • mapesdhs - Monday, July 7, 2014 - link


    What do you mean by 'much' faster? And as for more reliable, it's usually riskier to adopt a totally new SSD. Besides, the 830/840/EVO/840 Pro series are all reliable models, so how have you quantified what 'more' reliable will mean when it's a new SSD vs. existing models that have already had their bugs ironed out, with the consequent firmware updates issued?

    In reality, I highly doubt you'd ever notice the difference between a 256GB 840 Pro and a 256GB 850 Pro.

    Ian.
  • wintermute000 - Monday, July 7, 2014 - link

    Agreed for the normal desktop/laptop user, but there are people with more demanding requirements...
  • mapesdhs - Tuesday, July 8, 2014 - link


    Demanding in what way? For something so demanding that an 840 Pro isn't enough, one ought to be using an enterprise model instead.

    Ian.
  • extide - Wednesday, July 9, 2014 - link

    You know, the thing about this SSD is that it uses the same controller as the 840 EVO, which would mean it uses very similar firmware to the 840 EVO. Of course, you may see some changes in the FW related to the fact that the V-NAND has greater abilities, but most of the firmware should be pretty much the same. So: same controller, similar firmware, and new flash. It's really not too bad, and I would definitely trust it more than a drive from, say, a brand new series with a new controller, and maybe even a radically different interface or protocol, or even a brand new/unknown company!
  • Mark_gb - Wednesday, July 9, 2014 - link

    It will be so much faster that you won't even notice the difference... LOL

    It will probably last a lot longer than your 840 Pro, but even that is a crapshoot. Your 840 Pro could last another 10 years, or the 850 could die the second day you use it.
  • jjj - Monday, July 7, 2014 - link

    "I am guessing the smaller die size is better for yields (larger chips have higher probability of manufacturing errors), which makes the second generation more cost efficient despite the slightly lower density. "

    You have more layers in the second gen, and that impacts both yield and cost, so you can't make that kind of assumption.
    Maybe you should have also compared density on a per-layer basis; the difference between the two generations seems quite huge then.
  • Kristian Vättö - Monday, July 7, 2014 - link

    You are right that having more layers impacts yield and production cost, but it does not refute the fact that the die size affects yield as well. There must be something in 2nd gen V-NAND that makes it more cost efficient than the first gen, because otherwise it makes absolutely no sense. It's not denser, because the array efficiency is significantly lower thanks to the lower capacity, so the only thing I can think of is higher yield thanks to the smaller die size.

    What do you mean by "density on a per layer basis"? The cell size should have remained the same because it is still a 40nm process with no change in equipment, meaning that the second Gen simply adds more layers.
  • jjj - Monday, July 7, 2014 - link

    You shouldn't assume that it makes sense from a cost perspective. Samsung is huge and sometimes it might focus less on cost; maybe it's a commercial beta for now and they are not quite where they need to be.
    As for per layer: you have 128Gb in 24 layers and 86Gb in 32 layers, so that's 5.3333Gb vs. 2.6875Gb per layer, and if you factor in die size the difference seems strangely high.
  • jjj - Monday, July 7, 2014 - link

    Double posting here, but I just remembered that I have always wondered about temps for 3D NAND. Have you looked at heat and how it compares with 2D?
  • Kristian Vättö - Monday, July 7, 2014 - link

    Sure, it's possible that Samsung is just "playing around" with a 32-layer design, but after all, Samsung is a company with a target to generate profit for its shareholders. There must be a reason why Samsung didn't just add eight layers to the first gen design and make it a ~170Gbit chip with about the same die size.

    Here are some numbers I crunched:

    Die Size: 133mm^2 (1st gen) - 95.4mm^2 (2nd gen)
    Observed Array Efficiency: 85% - 64%
    Die Area Dedicated to Arrays: 113.1mm^2 - 61.1mm^2
    Array Density: 1.13Gbit/mm^2 - 1.41Gbit/mm^2
    Array Density per Layer: 0.047Gbit/mm^2 - 0.044Gbit/mm^2

    In other words, when you take all the factors into account, it all adds up. Sure, there is still a ~10% difference, but nearly everything we have is an estimate, so a 10% error sounds fair.
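    A quick script reproducing those figures, in case anyone wants to play with the inputs; the die sizes and observed array efficiencies are the estimates quoted above, and everything else is derived from them:

        # Per-layer array density comparison; inputs are the estimates above.
        gens = {
            "1st gen (24 layers, 128Gbit)": {"die_mm2": 133.0, "eff": 0.85, "gbit": 128, "layers": 24},
            "2nd gen (32 layers, 86Gbit)": {"die_mm2": 95.4, "eff": 0.64, "gbit": 86, "layers": 32},
        }

        for name, g in gens.items():
            array_mm2 = g["die_mm2"] * g["eff"]   # die area dedicated to arrays
            density = g["gbit"] / array_mm2       # Gbit per mm^2 of array
            per_layer = density / g["layers"]     # Gbit per mm^2 per layer
            print(f"{name}: array {array_mm2:.1f} mm^2, "
                  f"{density:.2f} Gbit/mm^2, {per_layer:.3f} Gbit/mm^2 per layer")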
  • jjj - Monday, July 7, 2014 - link

    Obviously smaller is most likely for yield reasons, but that's not enough to draw the conclusion that cost per bit got better. I'm also not very sure how performance, power, and heat changed from the first gen to the second; that's also a factor to consider when deciding to go this way.
    It is very likely that the first gen wasn't cost effective compared to 2D, and the other 3 big players are not jumping in just yet because of cost, which would suggest that for Samsung cost wasn't the main factor when they decided to launch 3D. As far as we can tell 3D is pretty low volume for now, so the cost might matter very little for someone like Samsung. An objective could be to advance the technology while gaining share in enterprise SSDs and enhancing the brand's image.
    Second gen cost per bit could be better already, or they could hope to get there soon, but we just don't have the data to draw any conclusion.
  • jjj - Monday, July 7, 2014 - link

    Looked at the numbers; I guess judging by the area dedicated to the arrays there is only a small difference.
    Would be really nice to have some yield and cost projections based on die size, layers, and process to try to figure out what the next steps will be. Second gen seems rather small given the array efficiency, so maybe they'll aim for something bigger in gen 3. I am already dreaming of a 140mm2, 48-layer, 512Gb die on 28nm at some point soon(ish).
  • repoman27 - Monday, July 7, 2014 - link

    Some back of the envelope calculations show that the 2nd gen V-NAND would provide 6% fewer total bits per wafer, echoing your bit density calculations, but would come in the form of 40% more dies. If you expect even moderate issues with defects, that could make the 2nd gen much more profitable to produce, or simply a hedge as they ramp the number of layers with a relatively novel process.

    Samsung is pretty good at die stacking, but the other aspect may have to do with the smaller dies being much easier to stack in a standard 12 or 14 x 18mm package.
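    A hedged sketch of that envelope math, using the die sizes quoted in this thread, a 300mm wafer, the common dies-per-wafer approximation, and no defect/yield modelling at all (so the exact percentages depend on the edge-loss assumptions):

        import math

        # Gross dies and bits per 300mm wafer, ignoring defects entirely.
        # Uses the common approximation: dies = pi*(d/2)^2/A - pi*d/sqrt(2*A).
        WAFER_D_MM = 300

        def dies_per_wafer(die_area_mm2, d=WAFER_D_MM):
            return math.pi * (d / 2) ** 2 / die_area_mm2 - math.pi * d / math.sqrt(2 * die_area_mm2)

        gen1 = dies_per_wafer(133.0)  # 24-layer, 128Gbit die
        gen2 = dies_per_wafer(95.4)   # 32-layer, 86Gbit die

        print(f"2nd gen dies per wafer: {gen2 / gen1 - 1:+.0%}")                 # roughly +40%
        print(f"2nd gen bits per wafer: {(gen2 * 86) / (gen1 * 128) - 1:+.0%}")  # roughly -5%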
  • Per Hansson - Monday, July 7, 2014 - link

    I'm having a hard time understanding the correlation of these numbers.
    A friend has an Intel X25-M G2 80GB drive that, according to the SMART data, has logged 73TB of host writes.
    That SSD has become slow; when testing with AS SSD it logs 1000ms read access times, for example.
    Yet the "media wearout indicator" is still at 97, implying the SSD has only used up 3% of its life.
    However, the drive is warranted by Intel for a maximum of 7.5TB of writes, so how do these numbers relate?
  • extide - Tuesday, July 8, 2014 - link

    Speed and age have nothing to do with each other
  • extide - Tuesday, July 8, 2014 - link

    Well, not nothing, but little enough that it is essentially nothing. The speed difference you are seeing is due to fragmentation at various levels. A secure erase would restore the drive to like-new speed.
  • Per Hansson - Tuesday, July 8, 2014 - link

    Thank you so much extide for your insight!
    I created a Ghost backup of the drive and did a secure erase (it took a while to find a computer where hdderase.exe would work, and which would release the security freeze lock).

    But once I did, and restored the Ghost image, the drive is almost like new: write performance more than doubled and read access time went from ~1000ms to ~180ms.

    I also found out another interesting thing: by upgrading the Intel SSD Toolbox from v3.1.6 to v3.2.1, the "E1 Host Writes" value went from 73TB down to 7.57TB (none of the other SMART values changed).
    The Windows Experience Index score also went from 5.9 to 7.7.

    So far this SSD has 4.5 years of power-on hours logged, so it's quite well used, but only 96 power cycles; it would be a lot more without the UPS :)
  • isa - Monday, July 7, 2014 - link

    A thoughtful article, Kristian. As an ex-ASIC designer, I agree with your assessment that the SMART value is likely being manipulated by Samsung - no competent IC fab would have the bimodal yield variability that would otherwise be needed to account for the 2x SMART value versus the 10x marketing claim. If the result were in fact based on yield variability, then you'd see a range of SMART decrement rates across a range of samples. But if all the drives report a consistent 2x effect, then it's being manipulated.
  • emvonline - Monday, July 7, 2014 - link

    Great article and comments, Kristian! Just what I was looking for. Thanks!
    The lateral spacing is far larger than expected with 40nm lithography, and hence the cell density is much lower than expected for theoretical V-NAND.

    Most companies think 3D NAND makes sense only at 256Gbit to be cost effective. Lower densities get expensive due to array efficiency. IMO, Samsung is introducing a new part at high cost and high price so they can fix efficiencies later. Smart move.
  • FunBunny2 - Monday, July 7, 2014 - link

    With regard to the 2x and 10x issue: is Samsung claiming both numbers, in different places, specifically for the NAND itself? Or is "endurance" an SSD-level number, which would encompass controller logic, spare area, and such, thus 10x while the raw NAND figure is 2x?
  • mkozakewich - Tuesday, July 8, 2014 - link

    In the world of enterprise and the government, you don't generally want things failing unexpectedly. Most tolerance standards incorporate that truth, so you get ratings that aren't even close to failure at their maximums.

    I assume that's what's going on here. It seems weird that AnandTech is acting incredulous about it, because they specifically called it out when testing other (Intel, I think?) drives. Basically, the wear-levelling factor usually means nothing. Like an expiry date on milk, you can be statistically assured that nearly all the products will be fine within that boundary. That the number is so low on these SSDs makes me think there's a large amount of variance in their samples, and it has to be that low to catch a high enough percentage to fit their failure tolerance.

    That and I *think* they got the 10x number from a single drive Samsung was boasting about.
  • Sivar - Wednesday, July 9, 2014 - link

    Units missing. Please add.
    1.10 what per day? 0.43 what?
  • Kristian Vättö - Thursday, July 10, 2014 - link

    Drive writes per day; it's mentioned in the left-hand column.
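    For anyone else puzzling over the table: DWPD is just total host writes divided by capacity and by the number of days in the period. A minimal sketch with purely illustrative inputs, not the review's exact table entries:

        # DWPD = total host writes / (drive capacity * number of days)
        def dwpd(total_writes_tb, capacity_gb, days):
            return total_writes_tb * 1000 / capacity_gb / days

        # Illustrative figures only:
        print(f"{dwpd(150, 256, 10 * 365):.2f} drive writes per day")  # ~0.16
        print(f"{dwpd(150, 128, 10 * 365):.2f} drive writes per day")  # ~0.32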
  • LordConrad - Wednesday, July 9, 2014 - link

    "... (GB in Windows is really GiB, i.e. 1024^3 bytes)..."

    Windows is correct to use GB. When a prefix (Giga) is used in front of a binary measurement (Byte), the resulting measurement remains binary. Hard drive manufacturers' use of the prefixes is wrong (and done for the sole purpose of inflating their numbers), Windows is right, and GiB is a redundant measurement.
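    Whatever one calls the units, the numeric gap being argued about is easy to see; a quick illustration:

        # Decimal (manufacturer) vs. binary (Windows) interpretation of the same bytes.
        for size_gb in (256, 512, 1000):
            bytes_total = size_gb * 1000**3    # as marketed, decimal gigabytes
            binary_gb = bytes_total / 1024**3  # as Windows reports it
            print(f"{size_gb} GB (decimal) shows up as {binary_gb:.1f} 'GB' (binary)")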
  • Zan Lynx - Wednesday, July 16, 2014 - link

    You are right in your own imagination.

    SI units are SI units, period, end, stop.
  • sonicmerlin - Thursday, July 10, 2014 - link

    Does this mean Samsung could deploy V-NAND on a 1X nm process and effect a 32x increase in density? Or even 2X nm process and 16x density increase? That would put them ahead of HDDs in terms of cost per bit, wouldn't it? Is there any reason they're not going down this path in at least the consumer market, where SSD endurance isn't a top priority?
  • garadante - Friday, July 11, 2014 - link

    Probably because they have no reason to absolutely crash the price/GB standard for SSDs. They'd have absolutely no competitors at that density in the immediate future so nobody could compete. And considering their current model allows them to just barely eke out densities ahead of competitors in order to give them the most competitive product (if barely) it allows them to continue to make profit on many generations of product rather than completely changing the entire industry in one sweep. Just like what Intel does with CPUs. Why give the consumer all the cards in your hand when you can tease them with a bare fraction of your true potential to get sales from them year after year over the next several decades?
  • sonicmerlin - Friday, July 11, 2014 - link

    But the cost per bit would be 16x lower for Samsung as well, and they could sell at the same prices as they are now while raking in preposterous margins. There must be another reason they're not printing on at least a 2X node.
  • pukemon1976 - Tuesday, July 15, 2014 - link

    Thus the stagnation in innovation in a lot of industries. Smartphones are a great example of this. Look no further than Samsung and Apple. Heck, Intel should throw AMD a bone or two so there is more competition in the x86 space.
  • MrSpadge - Saturday, July 26, 2014 - link

    The 40 nm process is very mature, but can't be used for anything power or performance critical anymore. Using these fabs for 3D NAND is making good use of them. I suppose they'll transition 3D NAND to smaller geometries once competitive pressure arises or once their 32 / 28 nm fabs are not completely utilized by other chips any more.
  • jordanl17 - Thursday, August 28, 2014 - link

    I just put 16 of the 512GB Samsung 850 EVOs in an EqualLogic PS6000 SAN. RAID 6 w/ hot spare. Pretty sick. I'm going to get some cold spares for when these babies start to fail, but I'm hoping they last a long time!
  • mdw9604 - Saturday, October 17, 2015 - link

    How did that turn out?
  • mrigi - Friday, September 19, 2014 - link

    The worst scenario is not random writes across the drive and not 100GiB per day, but writing a zillion times a day to a single sector, like the Chrome browser does with its cache/history/bookmarks. That's the real-world scenario. That is what kills drives, regardless of firmware that is cool on paper only.
    Have you tested that? Is this what the "write amplification" test does?
  • frostdude2025 - Sunday, November 9, 2014 - link

    What do the two situations of writing 20GB or 100GB per day at 1.5 or 3.0 write amplification even look like? I've always wondered this.
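    A rough sketch of what those combinations work out to, assuming a 256GB drive and the 6000 P/E rating discussed above (both are assumptions for the arithmetic, not the article's exact scenarios):

        CAPACITY_GB, PE_CYCLES = 256, 6000   # assumed capacity, rating from this thread

        for host_gb_day in (20, 100):
            for wa in (1.5, 3.0):
                nand_gb_day = host_gb_day * wa                       # what actually hits the NAND
                years = CAPACITY_GB * PE_CYCLES / nand_gb_day / 365  # until the P/E budget runs out
                print(f"{host_gb_day} GB/day at WA {wa}: {nand_gb_day:.0f} GB/day to NAND, ~{years:.0f} years of rated endurance")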
  • IAEInferno - Sunday, September 18, 2016 - link

    Hi, I'm confused. It says the limit is 150TB of writes or a 10-year warranty for the Samsung 850 Pro. How many years exactly would the 512GB 850 Pro last me if I'm just gaming, with my OS, some games, and programs on the SSD, while I keep my movies, music, pictures, multiplayer games, etc. on a separate HDD?
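    Nobody can promise an exact lifespan, but as a rough sanity check against the 150TB figure you mention (the daily write amounts below are assumptions; a light gaming/OS workload is usually well under 40GB/day):

        TBW_LIMIT_TB = 150   # the write limit mentioned in the question above

        for gb_per_day in (10, 20, 40):  # assumed daily host writes
            years = TBW_LIMIT_TB * 1000 / gb_per_day / 365
            print(f"At {gb_per_day} GB/day: ~{years:.0f} years to reach {TBW_LIMIT_TB} TB written")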
  • chrcoluk - Saturday, August 14, 2021 - link

    Bait and switch again?

    Reviewer sample: 1% for every 60 cycles.

    My 850 Pro: 63 cycles and 98%. That works out to roughly 30 cycles per 1%, same as their 2D MLC drives, lol.

    With that said, the drive is a few years old now. It has been used in a PS4 Pro, which constantly records footage, and for 2-3 years in my main PC. So to still be at 98% after that length of time is very nice. Although it's about 80% of the way to the warranty's rated TB, which was set very low by Samsung.
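    For what it's worth, the rating implied by a wear counter is simple arithmetic; a small sketch using only the figures quoted in this thread:

        # Rated cycles implied by the SMART wear indicator.
        def implied_rated_cycles(cycles_used, percent_worn):
            return cycles_used / (percent_worn / 100)

        print(implied_rated_cycles(60, 1))  # reviewer's sample: 1% per 60 cycles -> ~6000 rated
        print(implied_rated_cycles(63, 2))  # this drive: 63 cycles, 98% left     -> ~3150 rated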
