Today SK Hynix is announcing the sampling of its next-generation DDR5 memory. The headline is the commercialization of a new 24 gigabit die, offering 50% more capacity than the leading 16 gigabit dies currently used on high-capacity DDR5. Along with a reported 25% reduction in power consumption, courtesy of SK Hynix's latest 1a nm process node and EUV technology, what fascinates me most is that, for the first time in the PC space (to my knowledge), we're going to get memory modules whose capacities are no longer powers of two.

For PC-based DDR memory, going all the way back to DDR1 and before, memory modules have been configured as a power of two in terms of storage. Whether that's 16 MiB, 256 MiB, 2 GiB, or 32 GiB, I'm fairly certain that every memory module I've ever handled has been a power of two. The new announcement from SK Hynix showcases that the new 24 gigabit dies will allow the company to build DDR5 modules in capacities of 48 GiB and 96 GiB.
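As a quick sanity check on those module capacities, the arithmetic works out as follows. Note the die-per-rank layout here is an illustrative assumption (a 64-bit non-ECC module built from x8 dies), not something SK Hynix has specified:

```python
# Back-of-the-envelope check of the 48 GiB / 96 GiB module capacities.
# Assumption (not from the announcement): a 64-bit module populated
# with x8 dies, i.e. 8 dies per rank.
DIE_GBIT = 24                 # new SK Hynix die density, in gigabits
die_gib = DIE_GBIT / 8        # 24 Gb = 3 GiB per die
dies_per_rank = 64 // 8       # 64-bit bus / x8 dies = 8 dies per rank

for ranks in (2, 4):
    capacity = die_gib * dies_per_rank * ranks
    print(f"{ranks}-rank module: {capacity:.0f} GiB")
# 2-rank module: 48 GiB
# 4-rank module: 96 GiB
```

Swap in the old 16 gigabit die and the same layout gives the familiar 32 GiB and 64 GiB figures, which is exactly the 50% uplift the new die provides.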

To be clear, the official DDR5 specification actually allows for capacities that are not direct powers of two. If we look to other types of memory, such as in smartphones, powers of two have been thrown out the window for a while. However PCs and servers, at least the traditional ones, have followed the power-of-two mantra. One of the changes in memory design now driving regular modules to non-power-of-two capacities is that it is getting harder and harder to scale DRAM capacity. Working through the technological complexity needed for a 2x density improvement takes too long each generation, so memory vendors will start taking intermediate steps to get product to market.

In traditional fashion though, these chips and modules will be earmarked for server use first, in ECC and RDIMM designs. That's the market that will absorb the early-adopter cost of the hardware, and SK Hynix even says that the modules are expected to power high-performance servers, particularly in machine learning as well as other HPC situations. One of the quotes in the SK Hynix press release came from Intel's Data Center Group, so if there is any synergy related to support and deployment, that's probably the place to start. A server CPU with 8x 64-bit channels and 2 DIMMs per channel gives 16 modules, and 16 x 48 GiB enables 768 GiB of capacity.
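The per-socket capacity math above generalizes easily. A minimal sketch, where the channel count and DIMMs-per-channel are taken from the hypothetical platform in the paragraph rather than any specific CPU spec:

```python
# Per-socket memory capacity for a server platform.
# The 8-channel / 2-DPC figures are the article's illustrative example,
# not tied to a specific shipping CPU.
def socket_capacity_gib(channels: int, dimms_per_channel: int,
                        dimm_gib: int) -> int:
    """Total capacity = slots populated x capacity per DIMM."""
    return channels * dimms_per_channel * dimm_gib

print(socket_capacity_gib(8, 2, 48))  # 768  (48 GiB RDIMMs)
print(socket_capacity_gib(8, 2, 96))  # 1536 (96 GiB RDIMMs)
```

With the larger 96 GiB modules, the same 16-slot platform lands at 1.5 TiB per socket.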

As to when this technology will come to the consumer market, we're going to have to be mindful of cost and assume that these chips will initially be used on high-cost hardware. So perhaps 48 GiB UDIMMs will be the first to market, although there's a small possibility that 24 GiB UDIMMs might make an appearance. Suddenly that 128 GiB limit on a modern gaming desktop will grow to 192 GiB.
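And 192 GiB really is a break from tradition: the classic bit trick confirms it isn't a power of two, while today's 128 GiB is.

```python
# Power-of-two check via the classic bit trick: for n > 0,
# n is a power of two exactly when n & (n - 1) == 0.
def is_power_of_two(n: int) -> bool:
    return n > 0 and (n & (n - 1)) == 0

print(is_power_of_two(128))  # True:  4 x 32 GiB, today's desktop limit
print(is_power_of_two(192))  # False: 4 x 48 GiB with the new dies
```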

Source: SK Hynix Newsroom

22 Comments

  • OFelix - Wednesday, December 15, 2021 - link

    "powers of two have been thrown out the window for a while"

    I'm sorry? What? Do you pass your work through Google Translate 3 times before publishing it?
  • Dolda2000 - Wednesday, December 15, 2021 - link

    Why? I see nothing wrong with that sentence.
  • inighthawki - Wednesday, December 15, 2021 - link

    Agreed with Dolda2000 - what is wrong with that sentence? Do you not know basic English idioms?
  • dwillmore - Wednesday, December 15, 2021 - link

    This is interesting. While it's true that cell phones and other mobile devices have been using non-power-of-two memory devices for a while (e.g. sizes chosen to stay under the 4 GiB limit of some 32-bit SoCs), such sizes have never commonly been used on PCs. I'm very concerned about bugs in firmware and in chipsets supporting these sizes well there.

    Unlike embedded systems, where this type of memory has been used, PCs aren't fixed hardware with hardcoded memory sizes (and speed, timings, etc.). Given how many bugs we find in higher-level PC software when past assumptions are challenged, I truly fear the many likely latent bugs we're going to find when non-power-of-two memory sizes come to the PC.

    I'd sort of feel safer if it came to the server side first, because it's likely to be actually tested there.
  • Mr Perfect - Wednesday, December 15, 2021 - link

    I suspect that as long as the hardware and OS can handle it, the software will be fine. VMs can be assigned non-power-of-two RAM amounts today. It's something I've made use of myself. When you've got an untold number of VMs all competing for a host's resources, they each get what RAM they need, not what makes their RAM pool a power of two.
  • dwillmore - Wednesday, December 15, 2021 - link

    I don't worry about the OS once it gets running. With three-channel processors like the older Intel ones (Westmere, etc.), I have no concern with the OS dealing with arbitrary sizes. But the firmware that has to program all the address decoders for the chip select lines, etc.? That I worry about.
  • evanh - Wednesday, December 15, 2021 - link

    Most definitely is not a power of two.

    I'd guess the principle is holes in the physical address space are allowed. And it's up to the MMU to patch the address space back together.
  • Wrs - Wednesday, December 15, 2021 - link

    There's already a hole in the address space whenever you're not using the maximum memory capacity of your controller (usually on the CPU).
  • edzieba - Wednesday, December 15, 2021 - link

    Mixing gibibytes and gigabits in the same sentence? A bold move!
  • nandnandnand - Thursday, December 16, 2021 - link

    AnandTech readers aren't ready for gibibits yet.
