The first version of the Non-Volatile Memory Express (NVMe) standard was ratified almost five years ago, but its development didn't stop there. While SSD controller manufacturers have been hard at work implementing NVMe in more and more products, the protocol itself has acquired new features. Most of them are optional and most are intended for enterprise scenarios like virtualization and multi-path I/O, but one feature introduced in the NVMe 1.2 revision has been picked up by a controller that will likely see use in the consumer space.

The Host Memory Buffer (HMB) feature in NVMe 1.2 allows a drive to request exclusive access to a portion of the host system's RAM for the drive's private use. This kind of capability has been around forever in the GPU space under names like HyperMemory and TurboCache, where it served a similar purpose: to reduce or eliminate the dedicated RAM that needs to be included on peripheral devices.
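
For a sense of what that request looks like at the protocol level, here is a sketch in C of the relevant NVMe 1.2 structures: the host driver, if it supports HMB, describes the memory it is lending to the drive with a list of descriptors and a Set Features command. The field names follow the specification, but the surrounding code is a simplified illustration rather than a real driver.

```c
#include <stdint.h>
#include <stdio.h>

/* Host Memory Buffer descriptor (NVMe 1.2, 16 bytes). The host driver builds
 * a list of these to describe the physical memory regions it is lending to
 * the drive; the regions need not be contiguous with each other. */
struct nvme_hmb_descriptor {
    uint64_t badd;   /* BADD: buffer address (page aligned) */
    uint32_t bsize;  /* BSIZE: buffer size, in units of the controller's memory page size */
    uint32_t rsvd;
};

/* Simplified view of the Set Features command that hands the buffer over
 * (Feature Identifier 0Dh, Host Memory Buffer), command dwords 11-15. */
struct nvme_hmb_set_features {
    uint32_t dw11;        /* bit 0: EHM (enable host memory), bit 1: MR (memory return) */
    uint32_t dw12_hsize;  /* HSIZE: total buffer size, in memory page size units */
    uint32_t dw13_hmdlla; /* HMDLLA: descriptor list address, lower 32 bits */
    uint32_t dw14_hmdlua; /* HMDLUA: descriptor list address, upper 32 bits */
    uint32_t dw15_hmdlec; /* HMDLEC: number of descriptors in the list */
};

int main(void)
{
    /* Each descriptor occupies 16 bytes on the wire. */
    printf("descriptor size: %zu bytes\n", sizeof(struct nvme_hmb_descriptor));
    return 0;
}
```

The drive advertises how much memory it would like (HMPRE) and the minimum it can make use of (HMMIN) in its Identify Controller data, and the host is free to grant less than the preferred amount or nothing at all, in which case the drive simply runs in its DRAM-less mode.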

Modern high-performance SSD controllers use a significant amount of RAM, typically in a ratio of 1GB of RAM for every 1TB of flash. Controllers are usually conservative about using that RAM as a cache for user data (to limit the damage of a sudden power loss); instead, it mostly stores the organizational metadata the controller needs to keep track of where data is stored on the flash chips. The goal is that when the drive receives a read or write request, it can determine which flash memory location needs to be accessed with a much quicker lookup in the controller's DRAM, and the drive doesn't need to update the metadata copy stored on the flash after every single write operation completes. For fast, consistent performance, the data structures are chosen to minimize the amount of computation and the number of RAM lookups required, at the expense of requiring more RAM.
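
That ratio falls out of the arithmetic of the mapping table: at roughly one four-byte entry per 4KB logical page, a 1TB drive needs about a gigabyte of table. Below is a minimal sketch in C of such a flat logical-to-physical lookup table; it illustrates the principle only, since actual controller firmware is proprietary and uses more elaborate structures.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE  4096ULL                          /* logical page size in bytes */
#define DRIVE_SIZE (1024ULL * 1024 * 1024 * 1024)   /* 1TB of flash */
#define NUM_PAGES  (DRIVE_SIZE / PAGE_SIZE)         /* ~268 million logical pages */

/* Flat logical-to-physical mapping table: one 4-byte entry per 4KB page.
 * ~268M entries * 4 bytes is about 1GB of RAM for a 1TB drive, which is
 * where the commonly quoted 1GB-per-1TB ratio comes from. */
static uint32_t *l2p_table;

/* Find which physical flash page holds a given logical byte offset:
 * a single RAM read, instead of walking metadata stored in the flash itself. */
static uint32_t lookup_physical_page(uint64_t logical_offset)
{
    return l2p_table[logical_offset / PAGE_SIZE];
}

int main(void)
{
    l2p_table = calloc(NUM_PAGES, sizeof(uint32_t));  /* ~1GB of mapping data */
    if (!l2p_table)
        return 1;

    printf("mapping table for a 1TB drive: %llu MB\n",
           (unsigned long long)(NUM_PAGES * sizeof(uint32_t) / (1024 * 1024)));

    /* Firmware would maintain this table as writes happen; here we just fake
     * one entry and look it up. */
    l2p_table[42] = 123456;   /* logical page 42 -> physical page 123456 */
    printf("offset %llu maps to physical page %u\n",
           42ULL * PAGE_SIZE, lookup_physical_page(42 * PAGE_SIZE));

    free(l2p_table);
    return 0;
}
```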

At the low end of the SSD market, recent controller configurations have instead cut costs by not including any external DRAM. This configuration saves die size and pin count on the controller, reduces PCB complexity for the drive, and removes the DRAM chip from the bill of materials, which can add up to a competitive advantage in product segments where performance is a secondary concern and every cent counts. Silicon Motion's DRAM-less SM2246XT controller has stolen some market share from their own already cheap SM2246EN, and in the TLC space almost everybody is moving toward DRAM-less options.

The downside is that without ample RAM, it is much harder for SSDs to offer high performance. With clever firmware, DRAM-less SSDs can cope surprisingly well using just the controller's on-chip buffers, but they are still at a disadvantage. That's where the Host Memory Buffer feature comes in. With only two NAND channels, the 88NV1140 probably can't saturate its PCIe 3.0 x1 link even under the best circumstances, so there will be bandwidth to spare for other transfers with the host system. PCIe transactions and host DRAM accesses are measured in tens or hundreds of nanoseconds, compared to tens of microseconds for reading from flash, so a Host Memory Buffer can clearly be fast enough to be useful for a low-end drive.
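
To put rough numbers on that comparison, the following back-of-the-envelope sketch uses ballpark latency figures; they are assumptions for illustration, not measurements of the 88NV1140.

```c
#include <stdio.h>

int main(void)
{
    /* Ballpark latencies in nanoseconds; assumed values for illustration only. */
    const double local_dram_ns = 100.0;    /* controller reading its own DRAM */
    const double hmb_pcie_ns   = 600.0;    /* round trip to host DRAM over PCIe */
    const double nand_read_ns  = 50000.0;  /* reading a page from NAND flash (~50us) */

    printf("HMB access vs. local DRAM: roughly %.0fx slower\n",
           hmb_pcie_ns / local_dram_ns);
    printf("HMB access vs. NAND read:  roughly %.0fx faster\n",
           nand_read_ns / hmb_pcie_ns);
    return 0;
}
```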

The trick, then, is to figure out how to get the most out of a Host Memory Buffer while remaining prepared to operate in DRAM-less mode if the host's NVMe driver doesn't support HMB or if the host decides it can't spare the RAM. SSD suppliers are universally tight-lipped about the algorithms in their firmware, and Marvell controllers are usually paired with custom or third-party licensed firmware anyway, so we can only speculate about how an HMB will be used with this new 88NV1140 controller. Furthermore, the requirement of driver support on the host side means this feature will likely be used in embedded platforms long before it finds its way into retail SSDs, and this particular Marvell controller may never show up in a standalone drive. But in a few years' time it might be standard for low-end SSDs to borrow a bit of your system's RAM, and that becomes less of a concern as successive platforms put more DRAM per module into standard systems.

Source: Marvell

Comments

  • rocky12345 - Tuesday, January 12, 2016

    Bad idea on so many levels. If I spend the money on an SSD that costs double the price of a normal hard drive, I expect it to have its own complete hardware and not use the RAM in my system as a cache or whatever. If I were to get a 2TB SSD and it used 2GB of my memory, that is memory my OS could have been using for its own needs. On a lower-end system with less memory this would pretty much kill system performance everywhere else, but hey, I got that new fancy SSD in there; big whoop if the system is struggling everywhere else now. I would say the minimum spec for these types of SSDs should be at least 8GB of system RAM, better 10GB, with the extra 2GB over the 8GB partitioned off on a separate memory channel just for the cheap ass SSDs. It would mean Intel and AMD having to add an extra memory channel that could be filled if a crap SSD like these is used in a system, which most OEMs will do to fill the check mark on the spec sheets. By adding the extra memory channel and making it only usable by these SSDs when installed, you do not lose system memory or bandwidth that the crap SSDs would normally use/steal. Most new basic systems nowadays come equipped with 6GB or 8GB, which is good for most everyday tasks but not enough for heavy use, like for people that never close Facebook pages and have 15 to 20 tabs open and music playing from YouTube videos; that 6 or 8GB is pretty much all used up once Windows takes its share as well. Oh yeah, we all remember how soft modems worked out: most times not so great. I see SSDs going this route, and if so their future is bleak for sure. Just my input on this.
  • Frihed - Friday, January 15, 2016

    In a world where 16GB is about to become standard, I don't see it as a problem. We don't need that much memory anyway.
  • JoeDuarte - Monday, February 22, 2016

    This reminds me of the work Baidu did in dumbing down their flash drives to improve data center performance. I think they had the OS take over from the flash controller and removed the flash DRAM. Their context was much different from a PC user's, but it's an interesting piece of work: http://www.zdnet.com/article/baidu-chooses-dumb-ss...
