The Enterprise

In mid-July SanDisk announced its acquisition of Fusion-io, and the deal closed a couple of weeks prior to Flash Memory Summit. I posted my initial thoughts when the news hit the public, but it's worth doing a deeper analysis now that I have given it more thought and discussed it with John Scaramuzzo, senior vice president and general manager of SanDisk's enterprise business.

SanDisk has managed to establish itself as one of the key players in the enterprise SSD space over the past few years. The acquisitions of Pliant in 2011 and SMART Storage Systems in 2013 provided SanDisk with strong expertise and product lineups for SATA and SAS SSDs, but left the company without a solid long-term plan for PCIe. I've heard that Pliant's initial roadmap included plans for PCIe-based solutions as well, but it seems those plans never materialized.

Up until the Fusion-io acquisition, the Lightning PCIe SSA was the only PCIe solution in SanDisk's enterprise portfolio, and even that drive is internally a SAS-based design with a PCIe-to-SAS bridge onboard. In other words, SanDisk had practically zero native PCIe solutions for the enterprise, while its biggest competitors, such as Intel and Samsung, had been shipping PCIe drives for a long while already.

Fusion-io's 3.2TB Atomic Series SSD

Fusion-io's strategy and product portfolio, on the other hand, were the complete opposite. The company focused on PCIe storage from the beginning, dating all the way back to 2007 when it released its first ioDrive, which used a PCIe x4 interface and was capable of speeds up to 800MB/s. Not only was Fusion-io early to the market, but it was also able to garner a few massive and very important clients, the most notable being Facebook and Apple. I don't think it's an overstatement to call Fusion-io the pioneer of PCIe storage, because it was the first company to turn PCIe SSDs, and storage in general, into a large, successful business.

But stories eventually come to an end. Fusion-io's competitive advantages were its PCIe technology and several high-profile customers, but those advantages eroded once the NAND manufacturers stepped into PCIe territory. It's nearly impossible for a company that has to source its NAND from a third party to compete against one that manufactures NAND in-house, since the latter will always have a cost advantage. While Fusion-io didn't lose its customers to competitors overnight, it's clear that Intel and Samsung in particular snagged a share of Fusion-io's business over the past couple of years.

In a nutshell, the acquisition brings SanDisk long-needed expertise in PCIe storage along with Fusion-io's broad PCIe product portfolio. The acquisition is now a bit over 100 days in, and the Fusion-io employees have been integrated into SanDisk's existing teams. Initially Fusion-io's engineering team was separate and worked under Lance Smith, the former President and COO of Fusion-io, but Mr. Smith decided to leave SanDisk and pursue other options. Last week the data virtualization startup Primary Data announced that Mr. Smith has joined the company as its new CEO, which explains his quick departure from SanDisk.

All the engineering talent has now been unified and the team is led by Mr. Scaramuzzo. With everyone under the same roof, the roadmaps are now being integrated to bring the expertise together. It will be a while before we see the fruits of the acquisition, but in the meantime the latest Fusion-io products will transition to SanDisk NAND for better cost efficiency.

But what about NVMe? That has been the hot topic in the industry this year, and I bet many of you are wondering what SanDisk's and Fusion-io's play in that field is. The short version of their strategy is that Fusion-io already has a technology called the Virtual Storage Layer (VSL), which is essentially a driver/software stack similar to NVMe. The truth is that NVMe isn't really anything new from a technology perspective; what makes it alluring for many manufacturers is that NVMe drivers are universal and already supported by the latest operating systems. Technologies like VSL are rather expensive to develop and require expertise because there is no framework available (i.e. everything has to be developed from scratch), but on the other hand an in-house driver like VSL allows for more customization and optimization.

However, that doesn't mean that SanDisk has no interest in NVMe whatsoever. The company sees that as entry-level and mid-range enterprise SSDs move from SATA and SAS to PCIe, NVMe will be one of the key factors because of its easy and quick deployment. For that market segment the NVMe spec and its limitations are fine; it's only in the high-end segment where the benefits of VSL are more prominent. It's actually likely that many manufacturers will turn to custom NVMe drivers anyway for higher and more optimized performance, and in fact that is already happening, with Intel providing its own NVMe driver for the P3600/P3700.

Lastly, let's quickly discuss ULLtraDIMM. I wrote a quick piece on ULLtraDIMM right after Flash Memory Summit, and SanDisk has already scored Huawei as the third ULLtraDIMM partner (in addition to IBM and Supermicro). The first-generation product that is currently available is internally based on a pair of SATA 6Gbps controllers, but SanDisk said that a native DDR-to-NAND controller is possible in the future if the market adopts the new form factor well. As usual, the industry is fairly slow to adopt new form factors, so it's hard to say whether NAND DIMMs will really take off, but it's a very interesting and potentially useful technology.

Final Words

All in all, SanDisk is definitely one of the most interesting NAND companies going forward. USB drives, eMMC solutions, SSDs and even the storage arrays from the Fusion-io acquisition are all built on NAND, which puts SanDisk in a unique position as the only NAND manufacturer that focuses solely on NAND products. The company can't turn to alternative revenue sources the way Intel and Samsung can, but on the other hand that's also SanDisk's strength, as all the know-how and experience in the company is related to NAND in one way or another.

Ultimately, next year will be crucial for SanDisk because it will determine whether the company can realize all the underlying potential of the Fusion-io acquisition and become a serious competitor to Intel and Samsung in the enterprise space. The pieces are definitely there, so it's just a matter of execution now.

132 Comments

  • PeterMorgan573 - Friday, December 5, 2014

    For people who use their computers only intermittently (which is perhaps the target of this question), the difference between quick and slow startup is that they turn off their computers instead of leaving them on all day. Slow startup has made Microsoft responsible for decades of many millions of computers being left on all day, perhaps 100 GWh/year per million computers (say, 30 Watts for 8 hours that could be saved per day), and SSDs potentially ameliorate that. People who use their computers all day aren't affected by this, of course.
  • bigboxes - Friday, December 5, 2014

    I leave my PC on 24/7. It's doing work all of the time. If I'm out I can remotely access it and all of my documents. I leave my file server on 24/7. Do I really want to boot it up only when I know I (or anyone else on my network) want access to the files? That's what convenience is for. Now, my wife turns her PC off at night, but she's not a power user.
  • sheh - Friday, December 5, 2014

    Hibernation is pretty quick, and sleep is quicker still, even if not as power efficient.
  • Jalek99 - Friday, December 5, 2014

    Hibernation on my Windows 8.1 machine only ends in blue screens within minutes. It was that way in the beta, and clean installs made no difference. If I leave it on all the time, it never seems to crash.
  • Hrel - Monday, December 8, 2014

    This is an excellent point. Before I had an SSD in my desktop I would leave it on all day while I went to work because I didn't want to have to wait for it when I got home. Sleep mode always causes problems, so I avoid it and strictly use Shutdown.

    Now I don't care about shutting it down; hell, I even do Windows updates a lot more often because it's not so painful to restart the computer.

    I believe 20 seconds is the threshold for this: the machine needs to be usable, as in booted and responsive with at least one program open, within 20 seconds of me hitting "restart".
  • paradeigmas - Friday, December 5, 2014

    Actually, I think the real problem is that consumers are not educated on the impact of having an SSD. Today's consumers are trained by advertising to look for "1080p", "Intel i5+", and "6+GB" of RAM. They are certainly not aware that having an SSD will increase their relative performance by a significant margin. What SanDisk needs to do is get a "Relative Performance Rating" into consumers' minds, with the bulk of the score weighted heavily by the SSD (which is fair). Then, set up demo units in malls and Best Buys that pair an SSD laptop with mediocre specs against a top-spec laptop with a traditional hard drive, and do a side-by-side demonstration of exactly what an SSD does when it comes to boosting speed. SanDisk should aim to shift the paradigm of what is important for consumers to have when purchasing a computer.
  • nirolf - Friday, December 5, 2014

    This. Many people I know who are computer literate don't really know what's up with these SSDs and what they're good for. They don't realize how much time they would save in their everyday tasks if they used an SSD.
  • Jalek99 - Friday, December 5, 2014

    I recall the early days when people warned that they were only good for 50,000 writes or something, and then they would become unreliable. Clearly something changed, with so many people using them as boot drives now, but like the initial worries about plasma televisions, those things linger even when they're obsolete concerns.
  • Cerb - Sunday, December 7, 2014

    Now we're down to 1,000! But in those early days there was no wear-leveling. Then there was wear-leveling that was very bad at handling small writes. Even at 50,000 cycles, writes going to the same page over and over again could kill those drives much quicker than a modern TLC drive rated for 2% of that, because modern drives spread the writing across pages and blocks.
  • Cinnabuns - Friday, December 5, 2014

    Yep. Consumers are absolutely not trained to look for SSDs. The only thing they look for in storage is the number next to overall capacity, and store advertising happily obliges by providing only this info.

    SSD manufacturers should get together to push for a rating system that demonstrates how much faster an SSD is for most people in everyday scenarios.
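As a side note, PeterMorgan573's energy figure above holds up as rough arithmetic. A minimal sketch, using only the commenter's own assumptions (30 W of avoidable idle draw, 8 hours a day, a fleet of one million machines):

```python
# Sanity-checking the comment's estimate of ~100 GWh/year per million computers.
# All inputs are the commenter's assumptions, not measured values.

IDLE_WATTS = 30          # avoidable idle power draw per machine (W)
HOURS_PER_DAY = 8        # hours/day the machine could be off instead of idle
MACHINES = 1_000_000     # fleet size

wh_per_day = IDLE_WATTS * HOURS_PER_DAY * MACHINES   # watt-hours wasted per day
gwh_per_year = wh_per_day * 365 / 1e9                # convert to GWh/year

print(gwh_per_year)  # → 87.6, i.e. roughly the ~100 GWh/year cited
```

So the claimed order of magnitude is right; the exact figure depends mainly on the assumed idle wattage.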
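Cerb's wear-leveling point is worth quantifying. A back-of-the-envelope sketch with purely illustrative numbers (the write rate and block count below are hypothetical, not specs of any real drive): without wear-leveling, one hot block absorbs every write to a frequently updated page; with wear-leveling, all blocks share the wear, so even a much lower per-cell cycle rating lasts far longer.

```python
# Illustrating why a modern TLC drive rated for ~1,000 P/E cycles can outlive
# an old 50,000-cycle drive that lacked wear-leveling. All numbers are
# illustrative assumptions for the sake of the comparison.

HOT_PAGE_WRITES_PER_DAY = 10_000   # hypothetical hot spot, e.g. a frequently
                                   # updated filesystem structure

def days_until_worn(pe_cycles, blocks_sharing_wear):
    """Days until the hottest location exhausts its program/erase cycles.
    With no wear-leveling only one block absorbs the writes; with
    wear-leveling every block shares them."""
    total_cycles = pe_cycles * blocks_sharing_wear
    return total_cycles / HOT_PAGE_WRITES_PER_DAY

# Old drive: 50,000 cycles, no wear-leveling, so one block takes every hit.
old_drive = days_until_worn(pe_cycles=50_000, blocks_sharing_wear=1)

# Modern TLC drive: ~1,000 cycles (2% of 50,000), wear spread across an
# assumed 100,000 blocks.
modern_drive = days_until_worn(pe_cycles=1_000, blocks_sharing_wear=100_000)

print(old_drive)     # → 5.0 days before the hot block wears out
print(modern_drive)  # → 10000.0 days, i.e. roughly 27 years
```

The specific numbers don't matter; the point is that spreading writes across the whole drive multiplies effective endurance by the block count, which dwarfs the drop in per-cell cycle ratings.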
