Samsung has been on a roll lately with memory and storage announcements, and that roll is continuing today with a new DRAM announcement out of the juggernaut. This afternoon the company is announcing that they have completed fabrication, functional testing, and validation of a prototype 8Gbit LPDDR5 chip. The company is targeting data rates of up to 6.4Gbps per pin with the new memory, and while Samsung isn't ready to start mass production quite yet, the company's press release notes that they're already eyeing it.

This is actually the first LPDDR5 announcement to cross AnandTech's proverbial desk, so if you haven't heard of the standard before, there's a good reason for that: LPDDR5 is so cutting-edge that the standard itself has yet to be completed, as the JEDEC standards group has not yet finalized the specifications for either DDR5 or LPDDR5. JEDEC only announced work on DDR5 last year, with the specification due at some point this year. As a result, information on the memory technology has been limited; while the major aspects of the technology would have been hammered out early on, the committee and its members tend to hold back details until the specification is at or near completion.

In any case, it appears that Samsung is first out of the gate on LPDDR5, becoming the first manufacturer to announce validation of a prototype. And as part of the process, they have revealed, at a high level, some important specifications and features of the new memory standard.

In terms of performance, Samsung is targeting up to 6.4Gbps per pin with the new memory, which for a typical chip with a 32-bit bus works out to 25.6GB/sec of memory bandwidth. This is a 50% increase in bandwidth over the current LPDDR4(X) standard, which tops out at 4.266Gbps per pin under the same conditions. So for a high-end phone, where 64-bit memory buses are common, we'd be looking at over 50GB/sec of memory bandwidth, and over 100GB/sec for a PC with a standard 128-bit bus.
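As a sanity check, the bandwidth figures above follow directly from the per-pin data rate and the bus width. A quick back-of-the-envelope calculation (purely illustrative):

```python
def bandwidth_gbps(per_pin_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/sec for a given per-pin data
    rate (in Gbps) and memory bus width (in bits)."""
    return per_pin_gbps * bus_width_bits / 8  # 8 bits per byte

print(bandwidth_gbps(6.4, 32))   # 25.6 GB/sec - single 32-bit bus
print(bandwidth_gbps(6.4, 64))   # 51.2 GB/sec - high-end phone
print(bandwidth_gbps(6.4, 128))  # 102.4 GB/sec - standard PC bus
```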

Underpinning these changes are a number of optimizations for both increased bandwidth and reduced power consumption. On the bandwidth side, the single biggest change is that the number of memory banks per channel has been doubled, from 8 banks on LPDDR4(X) to 16 banks on LPDDR5. And while Samsung's press release doesn't explicitly note it (and I'm still waiting to get confirmation thereof), doubling the number of banks has traditionally gone hand-in-hand with doubling the prefetch size, which would give LPDDR5 a 32n prefetch; increasing the prefetch size has long been the favored method for improving DRAM performance. Or to simplify matters a bit: the core clockspeed of the DRAM itself wouldn't be changing, but rather LPDDR5 would increase the amount of parallelism so that data is read and written over more banks simultaneously.
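To put the prefetch relationship into toy numbers: if the DRAM core array runs at a fixed internal clock, the per-pin interface rate scales with the prefetch depth, since that many bits are fetched per core cycle and streamed out serially. A simple illustrative model (the 200MHz core clock is an assumption for illustration, not a published LPDDR5 figure, and the updates below revise the actual organization):

```python
def interface_rate_gbps(core_clock_mhz, prefetch_n):
    """Toy model: per-pin interface data rate when prefetch_n bits
    are fetched from the core array per pin per core cycle and
    serialized onto the bus at the higher interface rate."""
    return core_clock_mhz * prefetch_n / 1000  # Mbps -> Gbps

# With the same hypothetical 200MHz core, 16n prefetch gives a
# 3.2Gbps pin rate, while doubling to 32n doubles that to 6.4Gbps.
print(interface_rate_gbps(200, 16))  # 3.2
print(interface_rate_gbps(200, 32))  # 6.4
```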

Update: While they weren't able to get us a response at press time, Samsung got back to us early this morning with a few more specifications for their memory. Rather unexpectedly, the company is claiming that their LPDDR5 has a single 32-bit (x32) memory channel, with 16 banks in that channel. Furthermore the memory still has a prefetch of 16n, the same as LPDDR4. Besides undoing one of the core changes of LPDDR4 - the reduction to 16-bit memory channels - it's not clear how this organization allows Samsung to achieve the data rates they've published. But for now, this is the official answer we have from the company.

Update 07/19: Samsung has sent over a second round of information on their LPDDR5, confirming the underlying mechanisms that allow for the greater data rate. As it turns out, the earlier claim of a single 32-bit device is incorrect; it's a single 16-bit device. The biggest change here is that LPDDR5 implements DDR4-style bank grouping, which, not to be confused with prefetching over multiple banks, is another means of increasing the internal data rate. The specific mechanisms of bank grouping can be saved for another day, but it's essentially a means of bundling together two unrelated memory accesses at the same time. This allows more data to be transferred in a given unit of time, so long as the two requests come from different bank groups.
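The pairing behavior described above can be sketched with a toy scheduler. This is an illustration of the concept only, not the actual JEDEC scheduling rules: two back-to-back accesses share a transfer slot only when they hit different bank groups, while same-group accesses serialize.

```python
from collections import deque

def schedule(requests):
    """Toy bank-grouping model. 'requests' is a list of bank-group
    IDs in arrival order. Adjacent requests to different groups can
    be bundled into one transfer slot; same-group requests cannot.
    Returns the number of slots consumed."""
    queue = deque(requests)
    slots = 0
    while queue:
        first = queue.popleft()
        # Bundle the next request if it targets a different group.
        if queue and queue[0] != first:
            queue.popleft()
        slots += 1
    return slots

# Alternating groups pair up perfectly; same-group traffic serializes.
print(schedule([0, 1, 0, 1]))  # 2 slots
print(schedule([0, 0, 0, 0]))  # 4 slots
```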

Samsung LPDDR Generations

                      LPDDR3      LPDDR4(X)         LPDDR5
  Max Density         32 Gbit     64 Gbit?          ?
  Max Data Rate       2.13Gbps    4.26Gbps          6.4Gbps
  Channels            1           2                 1
  Width               x32         x32 (2x x16)      x16
  Banks (Per Channel) 8           8                 16
  Bank Grouping       No          No                Yes
  Prefetch            8n          16n               16n
  Voltage             1.2v        1.1v              Variable (Max 1.1v)
  Vddq                1.2v        1.1v / 0.6v (X)   0.6v?

Meanwhile, for the memory bus itself, although it's not in Samsung's own press release, we know from other sources that LPDDR5 implements differential clocking, similar to GDDR5 graphics memory. Differential clocking is key for a memory bus to hit the high frequencies required to carry data in and out of the DRAM as fast as LPDDR5 can generate and consume it. Unlike the DRAM cells themselves, the memory bus can't easily be made more parallel due to architecture and engineering limitations, so going faster is the only way forward short of more radical changes. With that said, differential clocking is known to be responsible for GDDR5's relatively high power consumption, so I'm curious what Samsung and JEDEC have done to tamp this down.

And speaking of power consumption, let's talk about the optimizations there. While LPDDR4X already jumped the gun here a bit by reducing the Vddq I/O voltage from 1.1v to 0.6v, LPDDR5 implements some of the other changes that have previously been proposed for future LPDDR standards. Voltages have once again been reduced, although Samsung isn't listing what the new voltages are; presumably other voltages like Vtt have been lowered this time around. Meanwhile, Samsung's press release also notes that the standard introduces a feature to avoid overwriting cells that already contain a 0, thereby avoiding wasting power setting a cell to 0 again.

Though from Samsung's perspective, the marquee power-saving feature of LPDDR5 is the new deep sleep mode. LPDDR5's DSM is a deeper sleep mode than what's currently used in LPDDR4(X), allowing the DRAM to idle at even lower levels of power consumption. The tradeoff, I expect, is that it takes longer to enter and exit this sleep mode, based on how deep sleep modes make similar tradeoffs in other devices. Overall, Samsung is claiming that this new sleep mode consumes half as much power as LPDDR4X's idle mode. Ultimately the company estimates that, thanks to the combination of these features, LPDDR5's power consumption will be up to 30% lower than LPDDR4X's, though it's not clear whether this refers to total package power consumption or power consumed per bit of data moved.
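Taken at face value, the claimed savings compose in a straightforward way. A rough, purely illustrative estimate of overall DRAM power relative to LPDDR4X (the 20% active duty cycle is my assumption, not a Samsung figure, and I'm treating the ~30% figure as an active-power saving):

```python
def relative_power(active_frac, active_saving=0.30, sleep_saving=0.50):
    """Rough estimate of LPDDR5 power relative to LPDDR4X (=1.0),
    assuming the claimed ~30% saving while active and the deep sleep
    mode at half of LPDDR4X's idle power. 'active_frac' is the
    assumed fraction of time the DRAM spends active."""
    sleep_frac = 1 - active_frac
    return active_frac * (1 - active_saving) + sleep_frac * (1 - sleep_saving)

# A device whose DRAM is active 20% of the time:
# 0.2 * 0.70 + 0.8 * 0.50 = 0.54, i.e. roughly 46% lower DRAM power.
print(round(relative_power(0.20), 2))  # 0.54
```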

Samsung for their part will be offering two speed/voltage grades of LPDDR5. Their fastest 6.4Gbps SKU will operate at 1.1v, while they will also offer a 5.5Gbps SKU that runs at a lower-still 1.05v for devices that need even lower power consumption. And as mentioned earlier, Samsung has not yet begun mass production (presumably this can't happen until the specification is formally ratified), but the company is eyeing getting mass production going soon.

Finally, like LPDDR4 before it, expect Samsung and other manufacturers to initially chase the mobile market with this new generation of memory. Samsung is officially stating that the memory is for use in "upcoming 5G and Artificial Intelligence (AI)-powered mobile applications,” and buzzwords aside, this is the mobile market in a nutshell. However since LPDDR5 will be a core JEDEC standard, as our own Dr. Ian Cutress was already asking me when Samsung sent over the announcement, it seems like it’s just a matter of time until LPDDR5 support comes to x86 processors as well. Though if LPDDR4 is any indication, “a matter of time” can still be quite a while.

Source: Samsung


  • ToTTenTranz - Tuesday, July 17, 2018 - link

    AMD's Picasso had better bring support for LPDDR4 *and* LPDDR5.
    As cool as Raven Ridge is, they really need to make up for the abysmal bottleneck in power savings and raw memory bandwidth they have from not supporting the ULP memory standards.

    A mobile-oriented SoC just can't be hindered by the terribly slow pacing of desktop+server memory evolution.
  • Targon - Friday, July 20, 2018 - link

    This is where Gen-Z would come into play. With Gen-Z, you just tie the CPU into the system bus and everything can talk to each other, allowing for much higher speed connections to memory or other devices. It remains to be seen if/when Gen-Z will actually make it into modern systems, but with AMD having been an early backer of the idea, I would hope for the new CPUs in 2020 to make use of it.
  • watzupken - Wednesday, July 18, 2018 - link

    I noticed that nowadays, manufacturers of these high speed memory don't seem to disclose the latency. I sense in order to obtain these high memory bandwidth, a lot is being sacrificed in terms of latency.
