Today we posted a news article about SK hynix’s new DDR5 memory modules – 64 GB registered modules running at DDR5-4800, aimed at the preview systems that the big hyperscalers start evaluating 12-18 months before anyone else gets access to them. It is interesting to note that SK hynix did not publish any sub-timing information about these modules, and looking through the announcements made by the major memory manufacturers, one common theme has been a lack of detail about sub-timings. Today we can present that information across the full range of DDR5 specifications.

When discussing memory, there are a few metrics to consider:

  • Type, e.g. DDR4, DDR5
  • Capacity
  • Power consumption / voltage
  • Bandwidth
  • Latency
  • Price
  • Persistence

When building a platform, a number of these factors come into play – a system running oil and gas simulations might require terabytes of memory regardless of power, or for smaller installations price might be the major concern. For specialist applications, persistent memory might be a focus, or a combination of bandwidth and latency will be key to driving performance.

In order for all of the companies that build memory and systems to work together, a set of standards is developed by a consortium of all interested parties – this consortium is called JEDEC. JEDEC creates the standards that ensure interoperability across all compliant systems.

Users who are familiar with JEDEC specifications will note that consumer-grade memory is often rated faster than what JEDEC lists – processors that can support faster memory can be paired with memory qualified beyond the JEDEC standards. This is why we see memory kits all the way up to DDR4-5000 in the market today that only work with a few select systems.

Read AnandTech’s Corsair DDR4-5000 Vengeance LPX Review
Super-Binned, Super Exclusive

For DDR4, JEDEC supports standards ranging from DDR4-1600 up to DDR4-3200. From the data rate, a peak transfer rate can be calculated (12.8 GB/s per channel for DDR4-1600, 25.6 GB/s per channel for DDR4-3200); latency, however, requires additional information. The typical sub-timings quoted with memory are:

  • CAS (Column Address Strobe) Latency: clock cycles between sending a column address and the start of the data response
  • tRCD (Row to Column Delay): clock cycles to access a column once a new row is opened
  • tRP (Row Precharge Time): clock cycles to close the currently open row when a different row is needed
  • tRAS (Row Active Time): minimum clock cycles between a row activation and its precharge

These are typically reported as CAS-tRCD-tRP, with tRAS sometimes added on. This means that in JEDEC’s DDR4 specification, the DDR4-3200 standard allows for a 24-24-24 set of sub-timings. For latency calculations, we need both the data rate (3200 MT/s) and the CAS Latency (24 clocks) to convert the CAS into nanoseconds, the real-world latency (in this case, 15 nanoseconds).
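The arithmetic above can be sketched in a few lines. This is a minimal illustration using the figures quoted in this article (8 bytes per transfer per 64-bit channel, and a memory clock at half the data rate, so one clock lasts 2000 / data-rate nanoseconds); the function names are ours, not anything from the JEDEC documents:

```python
def peak_bandwidth_gbs(data_rate_mts: int) -> float:
    """Peak transfer rate per 64-bit channel in GB/s: 8 bytes per transfer."""
    return data_rate_mts * 8 / 1000

def cas_latency_ns(data_rate_mts: int, cas_clocks: int) -> float:
    """Single-access CAS latency in ns: each clock lasts 2000/data_rate ns."""
    return 2000 * cas_clocks / data_rate_mts

print(peak_bandwidth_gbs(3200))   # 25.6 GB/s per channel for DDR4-3200
print(cas_latency_ns(3200, 24))   # 15.0 ns for DDR4-3200 at CL24
```

The same two formulas generate every bandwidth and latency figure in the tables below.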

The combination of data rate and CAS Latency has been used to compare single access latency numbers for memory over the years. Moving from the early iterations of DRAM, both data access rates and single access latencies have improved. However recently, due to physical limitations, while data rate has been increasing, access latency has been roughly consistent.

Memory and Bandwidth, up to DDR4
Std    Data Rate (MT/s)   Peak BW (GB/s)   CL    Latency (ns)
SDR    100                0.80             3     24.00
SDR    133                1.07             3     22.50
DDR    200                1.60             2     20.00
DDR    333                2.67             2.5   15.00
DDR    400                3.20             3     15.00
DDR2   400                3.20             5     25.00
DDR2   667                5.33             5     15.00
DDR2   800                6.40             6     15.00
DDR3   800                6.40             6     15.00
DDR3   1066               8.53             8     15.00
DDR3   1333               10.67            9     13.50
DDR3   1600               12.80            11    13.75
DDR3   1866               14.93            13    13.93
DDR3   2133               17.07            14    13.13
DDR4   1600               12.80            11    13.75
DDR4   1866               14.93            13    13.93
DDR4   2133               17.07            15    14.06
DDR4   2400               19.20            17    14.17
DDR4   2666               21.33            19    14.25
DDR4   2933               23.46            21    14.32
DDR4   3200               25.60            22    13.75
*Not all of these are JEDEC standards

Pivoting to DDR5, JEDEC has enabled standards ranging from DDR5-3200 to DDR5-6400, with placeholders up to DDR5-8000 whose specifics are still a work in progress. At the end of DDR3, and through DDR4, JEDEC introduced additional sub-timing specifications for each data rate: an ‘A’ fast standard, a ‘B’ common standard, and a ‘C’ looser standard – the looser standard is typically more applicable to higher-capacity modules. This means each data rate can cover a wide range of performance based on the quality of the silicon used.

Starting with the lowest data rate, the DDR5-3200A standard supports 22-22-22 sub-timings. At a theoretical peak of 25.6 GB/s bandwidth per channel, this equates to a single access latency of 13.75 nanoseconds.

If we look at SK Hynix’s announcement of DDR5-4800, this could be DDR5-4800B which supports 40-40-40 sub-timings, for a theoretical peak bandwidth of 38.4 GB/s per channel and a single access latency of 16.67 nanoseconds.
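The three bins at any one data rate share the same peak bandwidth and differ only in latency. A quick sketch reproducing the DDR5-4800 figures (the helper function is ours, using the same clock arithmetic as earlier in this article):

```python
def cas_latency_ns(data_rate_mts: int, cas_clocks: int) -> float:
    """Single-access CAS latency in ns at a given data rate."""
    return 2000 * cas_clocks / data_rate_mts

# The A/B/C CAS values for DDR5-4800, as listed in the JEDEC table below.
for bin_name, cl in [("A", 34), ("B", 40), ("C", 42)]:
    print(f"DDR5-4800{bin_name}: CL{cl} -> {cas_latency_ns(4800, cl):.2f} ns")
# DDR5-4800A: CL34 -> 14.17 ns
# DDR5-4800B: CL40 -> 16.67 ns
# DDR5-4800C: CL42 -> 17.50 ns
```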

Here is the full list, from DDR5-3200 to DDR5-6400, including some of the extra standards not yet finalized.

DDR5 JEDEC Specifications
Data Rate (MT/s)   Bin   CL-tRCD-tRP   Peak BW (GB/s)   Latency (ns)
3200               A     22-22-22      25.60            13.75
3200               B     26-26-26      25.60            16.25
3200               C     28-28-28      25.60            17.50
3600               A     26-26-26      28.80            14.44
3600               B     30-30-30      28.80            16.67
3600               C     32-32-32      28.80            17.78
4000               A     28-28-28      32.00            14.00
4000               B     32-32-32      32.00            16.00
4000               C     36-36-36      32.00            18.00
4400               A     32-32-32      35.20            14.55
4400               B     36-36-36      35.20            16.36
4400               C     40-40-40      35.20            18.18
4800               A     34-34-34      38.40            14.17
4800               B     40-40-40      38.40            16.67
4800               C     42-42-42      38.40            17.50
5200               A     38-38-38      41.60            14.62
5200               B     42-42-42      41.60            16.15
5200               C     46-46-46      41.60            17.69
5600               A     40-40-40      44.80            14.29
5600               B     46-46-46      44.80            16.43
5600               C     50-50-50      44.80            17.86
6000               A     42-42-42      48.00            14.00
6000               B     50-50-50      48.00            16.67
6000               C     54-54-54      48.00            18.00
6400               A     46-46-46      51.20            14.38
6400               B     52-52-52      51.20            16.25
6400               C     56-56-56      51.20            17.50
Future bins (timings not yet finalized)
6800               -     -             54.40            -
7200               -     -             57.60            -
7600               -     -             60.80            -
8000               -     -             64.00            -
8400               -     -             67.20            -

You may remember our report in May 2018, where Cadence and Micron showed off some DDR5-4400 memory in a test platform. We were able to determine from the photographs provided that this system was running at a CAS Latency of 42 clocks. Since then, the JEDEC standard has come down in that speed bracket to support 32-40 clocks, indicating the evolution of the platform.

The table above is a bit cumbersome, so here's the same data showing only the fastest 'A' specification for each data rate. These are the timings most likely to apply to installations with one module per channel.

JEDEC DDR5-A Specifications
Data Rate (MT/s)   CL-tRCD-tRP   Peak BW (GB/s)   Latency (ns)
3200               22-22-22      25.60            13.75
3600               26-26-26      28.80            14.44
4000               28-28-28      32.00            14.00
4400               32-32-32      35.20            14.55
4800               34-34-34      38.40            14.17
5200               38-38-38      41.60            14.62
5600               40-40-40      44.80            14.29
6000               42-42-42      48.00            14.00
6400               46-46-46      51.20            14.38

In terms of single access latency, we are ultimately not going to be any faster than we were by the end of the DDR3 era. DDR3-1866 at CL13 was already at 13.93 nanoseconds. This means that despite the increasing CAS latency values in clocks (going to CL46 at DDR5-6400), the actual single access latency is still roughly the same in real world time units.
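That comparison is easy to check with the same clock arithmetic used throughout this article (the helper function is ours):

```python
def cas_latency_ns(data_rate_mts: int, cas_clocks: int) -> float:
    """Single-access CAS latency in ns at a given data rate."""
    return 2000 * cas_clocks / data_rate_mts

# DDR3-1866 at CL13 vs DDR5-6400A at CL46: the clock count more than
# triples, yet the wall-clock latency barely moves.
print(f"{cas_latency_ns(1866, 13):.2f} ns")  # 13.93 ns
print(f"{cas_latency_ns(6400, 46):.2f} ns")  # 14.38 ns
```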

It is interesting to note that the DDR5 specification has provision in the hardware registers for CAS Latencies from CL22 up to CL66. This might be interpreted to mean that even with a sufficiently well-binned DDR5 memory module, or with overclocking, CL22 is the lowest the hardware allows. We know that DDR5 moves voltage regulation for the memory onto the module itself, so that will be an additional area for memory manufacturers to differentiate themselves, especially when targeting the enthusiast market.


For users looking for insight into how DRAM actually works, I would direct you to our 2010 article entitled 'Everything You Always Wanted To Know About Memory (But Were Afraid To Ask)'. It's a great technical article that I still refer back to, and I still scratch my head over!



Source: JEDEC DDR5 Specification

Comments

  • Spunjji - Wednesday, October 7, 2020 - link

    You're welcome to goose the voltage yourself in an attempt to try for lower latencies, but you'll find that you rapidly hit diminishing returns.
  • halcyon - Tuesday, October 6, 2020 - link

    Interesting. I wonder how long we'll have to wait for lower latencies this time around?

    Currently one can buy 3200Mhz DDR4 @CL14 (theoretical latency of 8.75ns). This is available as 16GB modules (so up to 64GB on most four slot MBs).

    To achieve the same on DDR5, we should get 6400Mhz @CL28.
  • Revv233 - Tuesday, October 6, 2020 - link

    Anyone got some BH5 left sitting around?

    Still lowest latency memory even today. It took DDR3 to beat it.
  • eastcoast_pete - Tuesday, October 6, 2020 - link

    Not so much a comment on DDR5, but more on memory speeds in general: I am (still) amazed that non-volatile NAND in fast PCIe4 SSDs basically reaches bandwidth numbers of working memory from about 10-15 years ago. IMHO, that's real progress.
    Now, if AMD and Intel could give us four memory channels for their APUs, we'd be cooking! Even on DDR4 3200, four channels should feed those iGPUs of Renoirs and Tiger Lake a lot better. Hey, I can always ask and dream..
  • abufrejoval - Tuesday, October 6, 2020 - link

    Bandwidth is easy: Just put enough platters of spinning rust in parallel and you can hit 50GB/s.
    But should you try to run your game on an NVMe instead of DRAM, it won't be a lot of fun because latencies pile up.

    Reminds me of fixed head drives or even magnetic drums being used as fast swap devices in the 1960's for DRAM that counted in KWords. But those were running batch jobs where managing locality was hopefully easier.
  • Spunjji - Wednesday, October 7, 2020 - link

    I'm not sure the economics of it works out. Between the larger die area and increase in motherboard complexity, you end up pumping more cost into something that will be beaten senseless by an inexpensive add-in board.

    The only market in which it makes sense is ones where that board isn't an option - e.g. highly integrated systems - and I don't know that there's a lot of demand for significantly higher GPU performance there (yet).
  • Tomatotech - Wednesday, October 7, 2020 - link

    iPhones, iPads, high-end androids, chromebooks, laptops (Macbooks, Surface Pro, Apple's new ARMbooks), game consoles, VR headsets? There's a lot of demand for highly integrated systems with high RAM / storage / CPU performance.
  • brucethemoose - Tuesday, October 6, 2020 - link

    JEDEC went through a lot of trouble to push that much bandwidth over long PCB traces.

    We've enjoyed discrete memory/logic for a long time, but the traces just have to get shorter. Integrated packages, with memory stacked next to or under the CPU, are the way forward.

    Some customers will always need enormous pools, but they can use specialized products. Perhaps they'll start using silicon photonics for memory buses at the very top end.
  • peevee - Tuesday, October 6, 2020 - link

13ns latency... light IN VACUUM can travel only 39cm in this time, in wires the electrical field is slower. Basically, since DDR3 the latency is limited by physical design of DIMMs separately on the motherboard, with complex routing. And the only improvement can come also from physical design, smaller SO-DIMM slots need to be on the CPU package (say, on the sides) for the shortest possible paths (would also improve energy efficiency due to lower noise in shorter routes).

    That would also alleviate the need in large LLC.
  • Corporate Goon - Tuesday, October 6, 2020 - link

    You're off by a factor of ten there. Light travels about a foot per nanosecond in a vacuum, so it travels about 390cm in 13 ns.
