Back at Flash Memory Summit I had the opportunity to meet with all the key people at SanDisk. There is a lot going on at SanDisk at the moment with the Fusion-io acquisition, TLC NAND, and other things, so I figured I would write a piece that outlines SanDisk's current situation and what they're planning for the future.

I'll start with the client side. For SanDisk the big topic at this year's Flash Memory Summit was TLC NAND, and we were given a sneak peek of the SanDisk Ultra II back at the show, which was then released a few weeks later. Since we have already reviewed the Ultra II, I'm not going to talk about the drive itself and its technical merits, but there are a few things that Kevin Conley, senior vice president and general manager of SanDisk's client business, brought up about TLC and the client market in general.

I'm sure most of our long-time readers remember how SSD prices plummeted between 2010 and 2012. The reason for that wasn't a breakthrough in NAND technology, but merely the fact that all manufacturers increased their manufacturing capacity with the expectation of exponential NAND demand growth. As you can see in the graph above, the industry bit growth was over 60% year-over-year between 2010 and 2012, which led to oversupply in the market and deflated prices.

The reason why all NAND manufacturers invested so heavily in capacity increases was the popularity of smartphones and tablets; it was expected that the average storage capacity would increase over time. Basically, the NAND manufacturers assumed that decreases in NAND prices due to smaller lithographies would translate to higher capacity smartphones and tablets, but in fact the mobile companies chose to save on onboard storage and invest in other components instead (camera, SoC, etc.).

It's only been recently that smartphone and tablet manufacturers have started to increase the internal NAND and offer higher capacity models (e.g. the 128GB iPhone 6/6+), but even today the majority of devices ship with 16GB, which is the same capacity that the low-end iPhone 3GS had when it was introduced in 2009. Of course, the reduced sales of higher capacity smartphones/tablets have a lot to do with pricing, as 32GB devices often cost $100 more than the 16GB model.

Since the NAND manufacturers are now adding fab space at a slower pace, they are looking for alternate ways to increase bit growth and scale costs down – and that's where TLC comes in. Because TLC packs in 50% more bits than MLC (three bits per cell instead of two), increasing the share of TLC production is an efficient way to boost bit growth without additional fab investments.
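As a rough back-of-the-envelope illustration of why this works, the relationship between bits per cell, bit output, and cost per gigabyte can be sketched as follows. The wafer cost and cell count below are made-up placeholder figures, not SanDisk's actual numbers; only the 2-vs-3 bits-per-cell ratio comes from the article.

```python
# Illustrative sketch: how bits-per-cell affects bit output and cost/GB.
# CELLS_PER_WAFER and COST_PER_WAFER are hypothetical placeholder values.

CELLS_PER_WAFER = 1e12   # assumed number of NAND cells per wafer
COST_PER_WAFER = 5000.0  # assumed processing cost per wafer, USD

def bits_per_wafer(bits_per_cell):
    """Total bits produced from one wafer."""
    return CELLS_PER_WAFER * bits_per_cell

def cost_per_gb(bits_per_cell):
    """Cost per gigabyte of NAND from one wafer."""
    gigabytes = bits_per_wafer(bits_per_cell) / (8 * 1024**3)
    return COST_PER_WAFER / gigabytes

# TLC (3 bits/cell) yields 50% more bits than MLC (2 bits/cell)
# from the same wafer and the same fab investment...
print(bits_per_wafer(3) / bits_per_wafer(2))  # 1.5

# ...so the cost per gigabyte drops by a third.
print(cost_per_gb(3) / cost_per_gb(2))
```

The point of the sketch is that the fab cost is fixed per wafer, so any extra bits squeezed out of the same silicon translate directly into lower cost per gigabyte.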

Currently about 45-50% of SanDisk's NAND production is TLC, and by next year TLC will overtake MLC in terms of production volume. Note that SanDisk will have 3D NAND ready in 2016, so the graph doesn't imply that SanDisk will move to TLC-only production in 2017 – it is just the 2D NAND production moving to TLC, since 2D NAND will mostly be used in applications like USB flash drives and other low cost devices, while 3D NAND will be used in SSDs.

TLC will also be one of the driving forces behind increases in average capacity. The main obstacle to SSD adoption is obviously the cost per gigabyte, and the lower production costs of TLC will help to bring prices down. I think it's too early to say what kind of impact TLC will have on prices because currently there are only two drives available (SanDisk's Ultra II and Samsung's 840 EVO), but once more OEMs are ready with their TLC SSDs later this year and early next year, I believe we will see more aggressive pricing.

One of SanDisk's presentations at the show had a very interesting slide about the company's internal SSD deployment program. The question that is often debated when it comes to SSD endurance is the number of gigabytes that a user writes per day. There aren't really any studies with large sample sizes, but SanDisk's own study provides an interesting insight into typical office workloads.

What the data shows is that a typical office user only writes about 7GB per day on average, and only a few percent of users write over 20GB per day, so very few users actually need more endurance than what TLC SSDs can offer (~20GB/day). Of course, everyone's usage is different and I doubt SanDisk's data takes, for example, media professionals properly into account, but it is still interesting and valuable data nonetheless.
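To put those writes-per-day figures in perspective, here is a rough lifespan estimate from an endurance (terabytes-written) rating. The 72 TBW figure below is a hypothetical example rating, not a SanDisk spec; only the 7GB/day and 20GB/day numbers come from the study above.

```python
# Rough SSD lifespan estimate from a TBW (terabytes written) rating.
# The 72 TBW rating used below is an assumed example, not a real spec.

def years_of_life(tbw, gb_per_day):
    """Years until the rated write budget is exhausted at a steady rate."""
    total_gb = tbw * 1000          # TB -> GB (decimal units)
    days = total_gb / gb_per_day
    return days / 365

# Typical office user from SanDisk's study (~7GB/day):
print(round(years_of_life(72, 7), 1))   # ~28.2 years

# Heavy user at the ~20GB/day mark:
print(round(years_of_life(72, 20), 1))  # ~9.9 years
```

Even under the heavy-user assumption, the rated write budget outlasts the useful life of a typical consumer drive, which is the article's point about TLC endurance being sufficient for most users.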

Another thing I discussed with SanDisk was the obstacles to a higher SSD adoption rate. While there is growth, the attach rate in the consumer space is still fairly modest and will remain so for the next few years at least. Price is obviously one of the most important factors, as hard drives are still an order of magnitude cheaper when measured in price per gigabyte, but I'm not sure if absolute price and capacity are the only hurdles anymore. 256GB is more than sufficient for the majority of users – especially now that we live in the era of Netflix and Spotify – and at ~$100 it's fairly affordable, so I think we have reached a point where price is no longer the barrier preventing users from upgrading to SSDs.

This is actually the part where we ask for the help of you, our readers. What is it that we or manufacturers like SanDisk could do to boost SSD penetration in the market? Would live demonstrations at malls and other public places help? Or upgrade programs where you could take your PC to a store and they would do the upgrade there for you? Let us know your ideas in the comment section below and I'll make sure to bring them up with SanDisk and other SSD manufacturers. Remember that we are talking about the masses here, so think about your parents for instance – what would it take for them or other people who are not very comfortable around computers to upgrade their PCs with an SSD?

The one huge problem is of course the PC OEMs and convincing them to adopt SSDs for mainstream laptops. The race to the bottom practically killed the profits in the PC industry, which is why most mainstream (~$400-600) laptops have such a bad user experience (low-res TN panels, cheap plastic chassis, etc.). With already razor thin margins, the OEMs are very hesitant to increase the BOM and risk cutting their already-near-zero margins with SSDs. I know SanDisk and other SSD OEMs have tried to pitch SSDs to the PC OEMs as much as possible, but anything that adds cost gets a highly negative response from the PC OEMs.

The Enterprise & Final Words

  • PeterMorgan573 - Friday, December 5, 2014 - link

    For people who use their computers only intermittently (which is perhaps the target of this question), the difference between quick startup and slow is that they turn off their computers instead of leaving them on all day. Microsoft is responsible for decades of many millions of computers being left on all day, perhaps 100 GWh/year per million computers (30 Watts for 8 hours, say, that could be saved per day), because of slow startup, which is potentially ameliorated by SSDs. People who use their computers all day aren't affected by this, of course.
  • bigboxes - Friday, December 5, 2014 - link

    I leave my PC on 24/7. It's doing work all of the time. If I'm out I can remotely access it and all of my documents. I leave my file server on 24/7. Do I really want to boot it up only when I know I (or anyone else on my network) want access to the files? That's what convenience is for. Now, my wife turns her PC off at night, but she's not a power user.
  • sheh - Friday, December 5, 2014 - link

    Hibernation is pretty quick, and sleep is quicker even if not as power efficient.
  • Jalek99 - Friday, December 5, 2014 - link

    Hibernation on my Windows 8.1 machine only ends in blue screens within minutes. It was that way in beta and clean installs made no difference. If I leave it on all the time, it never seems to crash.
  • Hrel - Monday, December 8, 2014 - link

    This is an excellent point. Before I had an SSD in my desktop I would leave it on all day while I went to work because I didn't want to have to wait for it when I got home. Sleep mode always causes problems so I avoid that and strictly use Shutdown.

    Now I don't care about shutting it down, hell, I even do Windows updates a lot more often because it's not so painful to restart the computer.

    I believe the threshold for this is 20 seconds: the machine needs to be usable, as in booted, responsive with at least one program open, within 20 seconds of me hitting "restart".
  • paradeigmas - Friday, December 5, 2014 - link

    Actually I think the real problem is that consumers are not educated on the impact of having an SSD. Today's consumers are trained by advertising to look for "1080p", "Intel i5+", and "6+GB" of RAM. They are certainly not aware of the fact that having an SSD will increase their relative performance by a significant margin. What SanDisk needs to do is to get a "Relative Performance Rating" into consumers' minds, and have the bulk of the score weighted heavily by an SSD (which is true). Then, set up demo units in malls and Best Buys that include an SSD laptop with mediocre specs and a top-spec laptop with a traditional hard drive, and do a side-by-side demonstration of exactly what SSDs do when it comes to boosting speed. SanDisk should aim to shift the paradigm of what is important for consumers to have when purchasing a computer.
  • nirolf - Friday, December 5, 2014 - link

    This. Many people I know that are computer literate don't really know what's up with these SSDs and what they are good for. They don't realize how much time they would save in their everyday tasks if they used an SSD.
  • Jalek99 - Friday, December 5, 2014 - link

    I recall the early days when people warned that they were only good for 50,000 writes or something, then they would be unreliable. Clearly something changed with so many people using them as boot drives now, but like the initial worries about plasma televisions, those things linger even if they're obsolete concerns.
  • Cerb - Sunday, December 7, 2014 - link

    Now, we're down to 1,000! But in those early days, there was no wear-leveling. Then, there was wear-leveling that was very bad at handling small writes. Even at 50,000 cycles, writes going to the same page over and over again could kill those drives much quicker than a modern TLC drive rated for 2% of that, because modern drives spread the writing across pages and blocks.
  • Cinnabuns - Friday, December 5, 2014 - link

    Yep. Consumers are absolutely not trained to look for SSDs. The only thing they look for in storage is the number next to overall capacity, and store advertising happily obliges in providing only this info.

    SSD manufacturers should get together to push for a rating system that demonstrates how much faster it is for most people in everyday scenarios.
