
  • Bobby3244 - Tuesday, July 14, 2020 - link

    One thing I read about DDR5 a while back was that it was going to have built-in ECC (as in every DIMM will include ECC). Is that correct? I kept hearing conflicting information on this.
  • FreckledTrout - Tuesday, July 14, 2020 - link

    It's in the article: "as well as on-die ECC." So yes.
  • OFelix - Tuesday, July 14, 2020 - link

    Well, I'd like you to be correct, but if that were the case I would have expected considerably more attention to be paid to it in the article.
  • FreckledTrout - Tuesday, July 14, 2020 - link

    Well, if it's on-die for DDR5, then since it's not a separate chip like on DDR4, it isn't optional. Everything points towards DDR5 having ECC built in. The reality is that with the densities of RAM that DDR5 is going to reach, it will be very necessary.
  • Bobby3244 - Tuesday, July 14, 2020 - link

    I wasn't sure if that meant that the new chips have an optional component built into the DRAM, or if it was a requirement. I've been really looking forward to the day that ECC is required in DRAM, so if it's now included as a requirement, that is great news.
  • tygrus - Sunday, July 19, 2020 - link

    On-die ECC would give some protection for data on-die (e.g. bit-flipping from cosmic rays) BUT NO protection for data in transit between the DIMM & CPU.
    DIMM-wide ECC gives some protection for data on-die AND between the DIMM & CPU.
    You can add ECC to address & command bus transactions and internal buffers, but I don't know the DDR5 spec.

    Doubling the ECC per 64 bits of data is costly for DIMM-wide ECC.
    Ryan Smith quotes Hynix below regarding on-die ECC used for DDR5.
  • kwinz - Thursday, January 21, 2021 - link

    I had two questions: 1. Is ECC support mandatory for DDR5?
    And 2. Does on-die ECC protect the data in flight between the DIMM and CPU? Are you *sure* it doesn't?
  • Ryan Smith - Tuesday, July 14, 2020 - link

    So on-die ECC is a bit of a mixed blessing. To answer the big question in the gallery, on-die ECC is not a replacement for DIMM-wide ECC.

    On-die ECC is to improve the reliability of individual chips. Between the number of bits per chip getting quite high, and newer nodes getting successively harder to develop, the odds of a single-bit error are getting uncomfortably high. So on-die ECC is meant to counter that, by transparently dealing with single-bit errors.

    It's similar in concept to error correction on SSDs (NAND): the error rate is high enough that a modern TLC SSD would be unusable without it. If your chips had to be perfect, these ultra-fine processes would never yield well enough to be usable.

    Consequently, DIMM-wide ECC will still be a thing, which is why the JEDEC diagram shows an LRDIMM with 20 memory packages. That's 10 chips (2 ranks) per channel, with 5 chips per rank. The 5th chip is to provide ECC. Since the channel is narrower, you now need an extra memory chip for every 4 chips rather than every 8 as with DDR4.
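
    For anyone who wants to check the chip-count arithmetic, here's a rough back-of-the-envelope sketch in Python. It assumes x8 DRAM chips and the 64+8 / 32+8 channel layouts described above; the numbers are illustrative only, not taken from the JEDEC spec itself.

    # Rough DIMM-wide ECC overhead comparison, assuming x8 DRAM chips and the
    # channel layouts from the discussion above (DDR4: 64 data + 8 ECC bits per
    # channel; DDR5: 32 data + 8 ECC bits per sub-channel). Illustrative only.

    CHIP_WIDTH = 8  # bits contributed by each x8 DRAM chip

    def ecc_dimm(data_bits, ecc_bits, channels_per_dimm, ranks=2):
        data_chips = data_bits // CHIP_WIDTH
        ecc_chips = ecc_bits // CHIP_WIDTH
        packages = (data_chips + ecc_chips) * channels_per_dimm * ranks
        return packages, ecc_bits / data_bits

    ddr4 = ecc_dimm(data_bits=64, ecc_bits=8, channels_per_dimm=1)
    ddr5 = ecc_dimm(data_bits=32, ecc_bits=8, channels_per_dimm=2)

    print(f"DDR4 ECC DIMM (2 ranks): {ddr4[0]} packages, {ddr4[1]:.1%} extra bits")  # 18 packages, 12.5%
    print(f"DDR5 ECC DIMM (2 ranks): {ddr5[0]} packages, {ddr5[1]:.1%} extra bits")  # 20 packages, 25.0%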
  • Ryan Smith - Tuesday, July 14, 2020 - link

    And to quote SK Hynix:

    "On-die error correction code (ECC)3 and error check and scrub (ECS), which were first to be adopted in DDR5, also allow for more reliable technology node scaling by correcting single bit errors internally. Therefore, it is expected to contribute to further cost reduction in the future. ECS records the DRAM defects and provides the error counts to the host, thereby increasing transparency and enhancing the reliability, availability, and serviceability (RAS) function of the server system."

    https://news.skhynix.com/why-ddr5-is-the-industrys...
  • azfacea - Tuesday, July 14, 2020 - link

    Any news on GDDR7? If GDDR6 is based on DDR4, then where (when) is GDDR7?
  • Matthias B V - Thursday, April 29, 2021 - link

    I would expect GDDR7 by late 2022.

    GDDR6 won't cut it, as it probably maxes out at around 24Gbps. GDDR6 won't live as long as GDDR5, similar to PCIe 4.0/5.0 vs. PCIe 3.0.

    You could also see it with GDDR5X: shortly after, everyone moved to GDDR6, and it was just a stopgap solution.

    You can already find leaks of the first GDDR7 drafts from late 2020...
  • brucethemoose - Tuesday, July 14, 2020 - link

    That's unfortunate. I was hoping DIMM-wide ECC would filter down to consumer stuff someday, but 25% more chips is a huge premium.
  • ravyne - Thursday, July 16, 2020 - link

    Ryan, it seems like on-die ECC and/or the higher proportion of DIMM-wide ECC bits would mitigate attacks like Row-Hammer. Are JEDEC or Manufacturers saying anything about that?
  • Ryan Smith - Thursday, July 16, 2020 - link

    I haven't heard anything. But absence of evidence is not evidence of absence.
  • Brane2 - Friday, July 17, 2020 - link

    Hasn't that been taken care of within the memory controller?
    I remember seeing some bits in an AMD doc for activating RH detection and doubling the refresh rate of affected rows...
  • Santoval - Tuesday, July 14, 2020 - link

    Not every DIMM, not even every chip; it will rather be on every *die*. Apparently each die will have 8 bits of parity (an ECC method) per every 128 bits of usable data. Or at least that's how SK Hynix are going to implement ECC. I was reading the article below, which was linked above by Ryan, and it doesn't say that only RDIMMs or LRDIMMs will have ECC. SK Hynix are talking about DDR5 memory as a whole, thus also unbuffered DIMMs:
    https://www.anandtech.com/show/15699/sk-hynix-ddr5...
  • PeachNCream - Tuesday, July 14, 2020 - link

    So, no more cheap laptops with CPUs that support dual channel but only one memory slot, leaving you stuck with both the iGPU and processor cores contending for more limited bandwidth than necessary?
  • ravyne - Tuesday, July 14, 2020 - link

    Presuming that those CPUs stick with two memory controllers, yes. This two-channels-per-DIMM arrangement should also fix the current problems with mismatched DIMM sizes -- today with DDR4, if you have one 8GB and one 4GB module installed, one of those channels is stretched half as thin as the other. This (roughly speaking) leaves you with 8GB operating at dual-channel speeds and 4GB operating at single-channel speed -- so a smart operating system will try to manage where data is actually located in order to balance operations. Usually this means that the "extra" memory is used for caching files and such if you have an abundance of free memory, but it's a needless complication and can have a real performance impact if you're actually using all your memory. This is becoming more and more common on laptops, which will solder some amount of memory on the board but have a single SODIMM slot for expansion (my work laptop is 16GB soldered + 16GB SODIMM, but it will take up to 32GB SODIMMs). The problem gets worse and worse the larger the disparity is -- if you had a 4GB module paired with a 16GB module, then 60% of your memory capacity is effectively single-channel and no amount of OS smarts will save you.
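
    A quick sketch of that split, following the reasoning above (the "flex mode" behaviour is approximated here; real memory controllers differ in the details):

    # Approximate how an asymmetric dual-channel ("flex mode") setup splits
    # capacity: the matched portion is interleaved across both channels, the
    # remainder of the larger module runs single-channel. Illustrative only.

    def flex_split(dimm_a_gb, dimm_b_gb):
        interleaved = 2 * min(dimm_a_gb, dimm_b_gb)  # dual-channel portion
        single = abs(dimm_a_gb - dimm_b_gb)          # leftover, single-channel
        return interleaved, single

    for a, b in [(8, 4), (16, 4), (16, 16)]:
        dual, single = flex_split(a, b)
        frac = single / (a + b)
        print(f"{a}GB + {b}GB: {dual}GB dual-channel, {single}GB single-channel "
              f"({frac:.0%} of capacity)")
    # 8GB + 4GB:   8GB dual-channel,  4GB single-channel (33% of capacity)
    # 16GB + 4GB:  8GB dual-channel, 12GB single-channel (60% of capacity)
    # 16GB + 16GB: 32GB dual-channel, 0GB single-channel (0% of capacity)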
  • Santoval - Tuesday, July 14, 2020 - link

    "Presuming that those CPUs stick with two memory controllers, yes. This two-channels-per-dimm should also fix the current problems with mismatched DIMM sizes.."
    I think you have confused the in-DIMM channels with the channels of the memory controllers. LPDDR4X is also split into two channels per DIMM, with each channel being 16 bits wide. You can find cheaper laptops that work in 2x16-bit mode or more expensive laptops that are set up in 4x16-bit mode. In the former case the DIMM(s) is (are) run by *one* memory controller, not two. In the latter case they are run by both memory controllers.

    If the in-DIMM channels were each handled by one memory controller, then dual-channel LPDDR4X (dual *external* memory controller channels, four *internal* 16-bit channels) would require *four* memory controllers from the CPU, and of course no such laptop exists!
  • Valantar - Wednesday, July 15, 2020 - link

    Actually LPDDR4x allows for either 16- or 32-bit channels. Laptop platforms (Ice Lake, Renoir, etc.) all use 32-bit channels, with 16-bit channels typically being used by smartphone SoCs. Each memory controller in these laptop platforms is capable of driving either a single 64-bit DDR4 channel or two 32-bit LPDDR4x channels. From AMD's Renoir launch article here at AT: https://images.anandtech.com/doci/15624/2%20AMD%20...
  • dotjaz - Thursday, July 16, 2020 - link

    That's not completely correct. LPDDR4/4x allows 2x16-bit or 32-bit channels. A single 16-bit channel is not allowed.
  • dotjaz - Thursday, July 16, 2020 - link

    "This two-channels-per-dimm should also fix the current problems with mismatched DIMM size"
    "if you had a 4gb module paired with a 16gb module, then 60% of your memory capacity is effectively single-channel"

    Nope, you are completely wrong. Asymmetrical DDR5 has exactly the same drawback as DDR4. The difference is now 32x4+32x2 vs 64x2+64x1.

    "Presuming that those CPUs stick with two memory controllers, yes"
    That's even worse than the "problem" you described. In the "problem", at least part of the capacity (2x the smaller DIMM) gets the FULL 128 bits. What you are describing is 100% only 64-bit. Why on earth would you "fix" a problem by making performance suffer?
  • Ryan Smith - Tuesday, July 14, 2020 - link

    Unfortunately nothing about this precludes laptops coming half-filled with memory. Vendors can still put 1 SO-DIMM in a laptop, leaving it with only 64 bits of its 128-bit memory bus filled.
  • PeachNCream - Tuesday, July 14, 2020 - link

    I must be misunderstanding the new two-channels-per-DIMM thing. Does that not apply to SODIMMs? I get that soldered-down RAM might allow something to fall outside the DDR5 specs, but your article implies that there is 128 bits worth of data moving across the memory bus from a single stick of RAM.
  • Ryan Smith - Tuesday, July 14, 2020 - link

    The channels are now half-sized. It's 2 32-bit channels per DIMM, instead of 1 64-bit channel per DIMM. So you will still need two DIMMs to fill a 128-bit bus.
  • Santoval - Tuesday, July 14, 2020 - link

    The article has a table mentioning that LPDDR5 is only single-channel and just 16 bits wide. Which is weird, since LPDDR4(X), its predecessor, was also split into two channels per DIMM (2x16-bit). Assuming the LPDDR5 bit is accurate* and that is what you mean by "SODIMMs", then forget the two channels per DIMM. By the way, the two channels per DIMM are *internal*; they do not require separate memory controllers. This is how a laptop with 4x16-bit LPDDR4X is run by a SoC with two memory controllers. That's dual-channel externally (i.e. from the SoC) but quad-channel internally.

    *I looked up LPDDR5 quickly on Wikipedia and it doesn't mention whether it reverted to a single channel. However, single-channel LPDDR5 at 6.4 Gbps (the spec's top limit) would have an identical speed to dual-channel LPDDR4(X) at 3.2 Gbps, which has already been well surpassed. So I guess the table is incorrect and LPDDR5 also has two 16-bit channels per DIMM.
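
    The arithmetic behind that comparison, as a rough sketch (peak theoretical numbers only -- per-pin data rate times channel width, summed over channels):

    # Peak bandwidth = per-pin data rate (Gbps) x channel width (bits) x channels / 8.
    # Illustrative comparison of the two configurations discussed above.

    def peak_gb_s(gbps_per_pin, channel_bits, channels):
        return gbps_per_pin * channel_bits * channels / 8

    print(peak_gb_s(6.4, 16, 1))  # LPDDR5, one 16-bit channel at 6.4 Gbps -> 12.8 GB/s
    print(peak_gb_s(3.2, 16, 2))  # LPDDR4X, two 16-bit channels at 3.2 Gbps -> 12.8 GB/s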
  • Ryan Smith - Tuesday, July 14, 2020 - link

    Bear in mind that LPDDR has no concept of DIMMs. It's strictly a solder-down memory interface.

    Anyhow, the channel size difference between 4 and 5 is mostly semantics. Officially, according to a back-and-forth discussion we had with Samsung, the smallest unit of organization in LPDDR5 is a single 16-bit channel. This is as opposed to LPDDR4, where the smallest unit was two 16-bit channels. As a result, they classify LPDDR5 as 1x16 instead of 2x16.

    Chips will still come with multiple channels per chip. And in fact I'm not aware of anything smaller than a 32-bit (2x16) LPDDR5 chip.
  • back2future - Wednesday, July 15, 2020 - link

    Is the internal DDR5 memory refresh method independent of the memory controller, while one single 16-bit data channel always has access to one half of the memory cells, or is refresh influenced by parameters from the SoC memory controller? Thx
  • dotjaz - Thursday, July 16, 2020 - link

    " your article implies that there is 128 bits worth of data moving across the memory bus from a single stick of RAM."

    Where? Are we even reading the same article? The article EXPLICITLY said "two independent 32-bit data channels per DIMM"; it's not implying anything, it flat-out told you 2x32-bit per DIMM.
  • Santoval - Tuesday, July 14, 2020 - link

    Not quite. Cheap laptops of the future will have LPDDR5. It just premiered in some flagship smartphones, and I believe it is commonly set up as dual-channel. However, unless the table comparing the various memory types in the article is inaccurate*, it can also work in dual-channel mode (unlike its predecessor LPDDR4X). If that's the case, that's what cheap laptops from 2021 onward will have.
    *I'm about to retire for the day, so I can't check it right now...
  • Dragonstongue - Tuesday, July 14, 2020 - link

    Typo, but I will use it to my advantage:

    "with adoption starting at the sever level before trickling down to client PCs and other devices later on. "

    ----- You are likely quite correct about the "sever(e)" level, as with DDR2/3/4 and all the memory standards prior, it took a while to reach mass-market stable production, at least in regards to pricing.

    Who knows, maybe this time will be different, with vendors well on their way to having a full assortment of speed bins, prices, kits and all that fun stuff.

    Maybe the makers have smartened up a wee tad, that is, made the die used for the memory as small as possible to increase yield, hopefully meeting or beating the expected % loss and all that.

    Long story short, keep the price as low as can be reasonably managed; if it is "expected" to be all that and a cup of cakes, it will quite likely be flying off the shelves. It only seems like yesterday that DDR4 launched, whereas DDR3 was here for a long enough while (it started off wicked @#$ expensive for quite low speeds compared to now). DDR4, beyond shelves not being kept stocked, saw the makers once again resort to monopolistic pricing (curbing the amount produced to keep prices as high as possible, until they get nailed with some hefty fines - which is a joke: it hurts me and you, while the massive corps get a slap on the wrist). And considering there are what, like 4 maybe 5 memory makers these days, there are only 3 major players overall (Samsung, Micron, Hynix (Elpida branding now as well? forgets)).

    Anyways.

    It def feels less "snappy" with DDR4 over fast DDR3 (latency or something?), but when it gets going it is wicked quick overall. I imagine DDR5 will be similar: a small hit to overall latency (for the system to "gear up"), but when it does, that much faster, with lower power use as well (guess that depends on raw amps vs just "volts").

    Anywho.

    Enough word wall from me for another post... my bad.
  • Duncan Macdonald - Tuesday, July 14, 2020 - link

    Probably going to be an easier transition for AMD than Intel. As the memory access is via the I/O die in AMD CPUs, this can be modified without impacting the compute dies. Intel with its monolithic setup has to redo the whole die.
  • FreckledTrout - Tuesday, July 14, 2020 - link

    Maybe. Intel is headed towards a chiplet approach so timing wise it may work out for Intel.
  • Deicidium369 - Wednesday, July 15, 2020 - link

    Alder Lake will be DDR5. Nothing about AMD makes them more likely or less likely to get DDR5.
  • Fulljack - Wednesday, July 15, 2020 - link

    Zen 3, aka Vermeer, or the desktop Ryzen 4000 CPUs, has been confirmed to be the last AM4 CPU, which is still on DDR4. Zen 4 is speculated to use AM5 and DDR5, as using DDR4 doesn't make any sense if they change the chipset.
  • Deicidium369 - Wednesday, July 15, 2020 - link

    Yes, moving to a new socket would indicate that DDR5 is likely. Everything will be DDR5 in a short time.

    My reply was to Duncan Macdonald's assertion that DDR5 would be more easily accomplished with Ryzen due to its MCM nature, with an IO die.

    Alder Lake (desktop) and Sapphire Rapids (server) will be the 1st Intel CPUs to get DDR5 - and Zen 4 would be in the same time frame, and would be DDR5 as well.
  • Kjella - Tuesday, July 14, 2020 - link

    This looks very close to two memory sticks in one; I wonder why they didn't just use twice as many SO-DIMM slots. But I guess it really doesn't matter; the chips are tested before they are assembled, so it wouldn't affect yield. Just that you could replace one bad side instead of a whole stick.
  • Brane2 - Tuesday, July 14, 2020 - link

    Cut the crap and mark this for what it is - paid marketing crap.

    We all know that the story with DDR5 is just the umpteenth rerun of the DDR/2/3/4 story.

    We know that this is just an excuse for a price hike for the next generation.
    Which will last for more than a year. During that time, many of the new DDR5 sticks will be SLOWER than existing DDR4 sticks.

    Then we'll see the consequences of penny-pinching on new, smaller DRAM cells in the form of RowHammer3 or some such shit - only to get YET ANOTHER RAM etc. etc.
  • psyclist80 - Tuesday, July 14, 2020 - link

    If you read the article... they said they are going for a more aggressive speed ramp for DDR5 vs DDR4. JEDEC isn't selling anything, just setting the standards for all to use. DDR5-4800 to start seems pretty good, especially with the added throughput coming per clock. Now you may put your tinfoil hat back on.
  • Brane2 - Wednesday, July 15, 2020 - link

    They say that EVERY time.
    And they now have to emphasize that because everyone and their dog knows about the trick.
  • extide - Wednesday, July 15, 2020 - link

    No, this is different. They are starting out the DDR5 spec way above where the DDR4 spec ended. DDR4 officially ended at 3200, and they are starting at 4800, so no, it won't be slower like every other generational bump, because in the past they would have started DDR5 at 3200. Sure, you can get DDR4 faster than 3200 -- but that's not the point -- the point is they are starting at a much higher point than ever before. They have never done this before. And just like you can get unofficial DDR4 faster than 3200, you will be able to get unofficial DDR5 above 4800 as well.

    Of course it will be more expensive to start... duh, it's a new technology and it will not be very common at first, so yeah, that will be just like every other time.
  • IntelUser2000 - Wednesday, July 15, 2020 - link

    We'll see how fast it really is.

    DDR2 needed DDR2-533 to be faster than DDR-400. DDR3 needed DDR3-1066. DDR4 needed DDR4-2133.

    At equal speeds, the next-generation RAM is slower than its predecessor. Actually, it's quite a bit slower, as DDR4-1866 was equal to/slower than DDR3-1600.
  • LurkingSince97 - Wednesday, July 15, 2020 - link

    There are a large number of things that make this a LOT different from the DDR2->3 or DDR3->4 transitions. Those transitions had a lot more to do with power and density necessities than performance, especially at launch. The frequency scaling was there, but overall performance was a lower priority.

    DDR5 at 3200 will have a lot more effective bandwidth than DDR4 at the same 3200MT/sec data rate would.

    This is over a year old and covers some: https://www.micron.com/about/blog/2019/june/ddr5-t...

    This is over 7 months old and has more: https://www.micron.com/about/blog/2019/november/dd...

    The extra memory 'sub' channel, plus the large increase in memory banks / bank groups, and the same-bank refresh feature all add up to significantly better bandwidth utilization and significant memory latency improvements under high concurrent load.

    In any event, even if DDR4-1866 was slower than DDR3-1600, what makes you think DDR5-4800 will be slower than DDR4-3200? Even if you handicap DDR5 like you would for the prior releases, ignoring the non-frequency performance features entirely, you might come up with something like DDR5 needing to hit 3600 in order to match DDR4 at 3200. But we're starting off at 4800....
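
    For reference, a rough peak-bandwidth sketch per DIMM (data rate times total data width; this deliberately ignores the efficiency gains from the extra sub-channel, bank groups and same-bank refresh mentioned above, so it understates DDR5's real-world advantage):

    # Peak per-DIMM bandwidth: MT/s x data-bus width / 8 bits-per-byte.
    # DDR4 DIMM: one 64-bit channel; DDR5 DIMM: two 32-bit sub-channels.

    def peak_gb_per_s(mt_per_s, data_bits):
        return mt_per_s * data_bits / 8 / 1000  # MB/s -> GB/s

    print(f"DDR4-3200: {peak_gb_per_s(3200, 64):.1f} GB/s per DIMM")      # 25.6 GB/s
    print(f"DDR5-4800: {peak_gb_per_s(4800, 2 * 32):.1f} GB/s per DIMM")  # 38.4 GB/s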
  • IntelUser2000 - Thursday, July 16, 2020 - link

    That's very possible.

    3 generations have shown otherwise so you must understand why I'm skeptical. Only shipping products tell the real story.
  • Valantar - Wednesday, July 15, 2020 - link

    Given that DDR4-4000 is still ridiculously expensive and DDR5 starts at 4800, I doubt there will be a situation even close to resembling the DDR4 launch.
  • Deicidium369 - Wednesday, July 15, 2020 - link

    Yeah! The 6502 in my 8-bit Ataris was plenty - everything after that was just marketing and some big company something something paid something something flat earth.
  • Valantar - Wednesday, July 15, 2020 - link

    Hey, dude, chill, don't give flat earthers a bad rep by associating them with this guy.

    ;)
  • Spunjji - Wednesday, July 15, 2020 - link

    "penny-pinching on new, smaller DRAM cells"
    I'm sorry, what? New lithographies cost an awful lot to develop and the results are necessary for density and performance improvements. What part of that is "penny-pinching"? :|

    Your whole post reads like somebody who has noted the tendency for price-fixing in the DRAM market and has decided to apply the concept to literally everything related to RAM technology.
  • Kamen Rider Blade - Tuesday, July 14, 2020 - link

    Why didn't they move the notch to be right in between the 2x 7-bit Command pins?
  • DanNeely - Wednesday, July 15, 2020 - link

    Probably too close to the center. You want the key far enough to one side that people won't try to insert the RAM backwards.
  • Spunjji - Wednesday, July 15, 2020 - link

    Honestly, I already have trouble with the positioning on DDR4 as it is. D:
  • Kamen Rider Blade - Wednesday, July 15, 2020 - link

    You need new glasses, or maybe to wear a head lamp, when you install DDR4 DIMMs?
  • Kamen Rider Blade - Wednesday, July 15, 2020 - link

    As long as it's not dead center, it should be fine, right? It just needs the offset to let you know which orientation to insert it in.
  • back2future - Wednesday, July 15, 2020 - link

    We will probably stay quite a while with DDR5, 64-bit memory interfaces, and PCIe 4, because physics limitations call for higher efforts in speeding things up while keeping materials/known standards/protocols (PAM-4, PAM-x) compatible?
    DDR6 will need memory to be an even more specialized part of a mainboard, with AI memory-organization optimization based on user profiles and maybe another µC of its own?
  • extide - Wednesday, July 15, 2020 - link

    I bet that eventually memory turns into a high-speed serial interface, possibly multiple ones in parallel. Something like 8 high-speed serial links (one to each IC) per stick (or 9 with ECC). It gets very difficult to scale up the speed of parallel interfaces, and just about everything else has moved this way.
  • Deicidium369 - Wednesday, July 15, 2020 - link

    CXL makes pools of memory (volatile and non-volatile) accessible as standalone systems - same with FPGAs, GPUs, AI accelerators, etc. CXL debuts with Sapphire Rapids on PCIe5 - but the jump to PCIe6, on servers, is likely in relatively short order - possibly over silicon photonics rather than a copper/electrical connection.
  • Deicidium369 - Wednesday, July 15, 2020 - link

    PCIe5 is scheduled for Sapphire Rapids, along with DDR5. We will get DDR5 on desktops, but don't expect PCIe5 on desktops anytime soon.

    So yeah, DDR5/PCIe4 is the long term configuration moving forward.
  • back2future - Thursday, July 16, 2020 - link

    By that time, bigger changes on mainboards might have been made, like fiber-cabled optical routing for networking and maybe wider specialization between client-side data conversion (mobile SoCs, desktops), workstations and host-side data compression (quality-related filtering (customer triggered), physically by improved algorithms and AI support, .....)
    Serial or parallel might not seem that easy to separate, if lanes for transmitting serialized data (protocols) by electrical signals are added and electrical signals carry multi-level information in one bit position (later PAM-x modulation and high-speed analog-to-digital conversion)?
    Don't know if optical data transfer will do some kind of PAM modulation, but variation of wavelength is done already.
    One challenge with optical transmitter laser diodes, compared to electrical signal transmission on shrinking node levels, might again be higher power consumption (for the highest data transmission speeds on consumer mass markets) and difficulties with compatibility with slower standards (but technically that would be just a matter of additional Gbit Ethernet connectors)?
  • back2future - Thursday, July 16, 2020 - link

    Additionally, I learned that CPUs will be moving towards PCIe 5 (Sapphire Rapids ~2021, Zen 4 ~2022) faster than consumer needs might on desktop mass markets (not gamers, not workstations, not servers, not data centers), e.g. PCIe 4 x4 (4 lanes) gives ~7.5GB/s of storage data bandwidth and ~15GB/s on PCIe 5. Furthermore, the transition from HDD to SSD is also still going on.
  • ksec - Wednesday, July 15, 2020 - link

    In terms of historical prices and context, DRAM is still very expensive. Hopefully we will see $2/GB with DDR5.
  • MrVibrato - Wednesday, July 15, 2020 - link

    Yeah, sure it will hit $2/GB or $3/GB. When DDR6 takes over the market and vendors are trying to get rid of their DDR5 stock. You just have to wait a little...
  • haukionkannel - Thursday, July 16, 2020 - link

    Expect it to be at least four times more expensive than DDR4 at the same capacity in the beginning. In two to three years it will only be twice as expensive, after that it will even out, and after that it will be really hard to find cheap DDR4...
  • Valantar - Wednesday, July 15, 2020 - link

    Is it just me or is this arriving a bit later than expected? I wonder what consequences this will have for AMD's post-AM4 platform, given that it should launch (late) next year and anything but DDR5 on that platform would be a _serious_ letdown. I guess they could launch 1st-gen motherboards with DDR4 support and move to DDR5 later if the controller supports both, but that would still be a serious letdown. So, will AMD launch the platform early and accept low memory availability and high prices, will they hold off until things normalize (and not launch "AM5" until 2022), or will consumer availability and adoption be quicker than what is suggested here? 12-18 months is July 2021-January 2022 after all.
  • DanNeely - Wednesday, July 15, 2020 - link

    Start of the second paragraph: "Originally planned for release in 2018, today’s release of the DDR5 specification puts things a bit behind JEDEC’s original schedule, "
  • Valantar - Wednesday, July 15, 2020 - link

    I didn't mean that - that delay is widely known after all - but in regards to the roadmaps of significant DDR5 adopters (AMD, Intel) and the known end date for AM4 support for new launches past 2020.
  • LurkingSince97 - Wednesday, July 15, 2020 - link

    AMD can be flexible, and if necessary:

    Put Zen 4 on AM4 alongside a new DDR5 socket, if the DDR5 situation is problematic (for example, too expensive to move enough Zen 4 volume). Same on the Epyc side. It's just an I/O die swap to support it on two platforms.

    If Zen 4 has awful delays, but the DDR5 platforms are ready, they can launch Zen 3-based SKUs on a new I/O die on the new DDR5 platform. I believe they originally thought Zen 3 might end up on DDR5, but gave up on that when it became obvious DDR5 wasn't ready.
  • Spunjji - Wednesday, July 15, 2020 - link

    Zen 4 is slated for 2022. I'd bet on them launching it on AM5 in mid-2022 and counting on the performance of their new architecture on a new 5nm process to justify the overall cost of the platform. AM4 should tide them over just fine until then.
  • Valantar - Wednesday, July 15, 2020 - link

    Mid-2022? So you're betting on an 18+ month wait from Zen 3 to Zen 4? I know their cadence has been >1 year since the launch of Zen, but 18 months would be _slow_. Given that Zen 3 for the desktop is confirmed to launch this year, it would leave 2021 with a few midrange and lower-end Zen 3 chips as AMD's only desktop CPU launches for the full year, and then another six months or so beyond that. That sounds unlikely to me, though it might of course happen.
  • scineram - Thursday, July 16, 2020 - link

    I think it will be 2022Q1 for Zen4. Q2 if something goes wrong.
  • haukionkannel - Thursday, July 16, 2020 - link

    They can make a Zen 3+ as a middle-tier upgrade if they need to. Most likely datacenters go to DDR5 first and consumers a few years later. I expect DDR5 to be super expensive for the first few years...
  • scineram - Thursday, July 16, 2020 - link

    Zen4 is 2022. DDR5. PCIe5.
  • MrVibrato - Wednesday, July 15, 2020 - link

    Sorry for going off on a tangent somewhat, but I am really puzzled by, and a bit in awe of, the PCB the DIMM module is resting on in the first picture. It's quite likely a composite image, but man, those are some rather ginormous traces and vias (in relation to the size of the DIMM module). (-;
  • rrinker - Wednesday, July 15, 2020 - link

    Most likely that is not a PCB, but an enlarged PHOTO of a PCB - possibly even the one for the memory module shown sitting on it.
  • Valantar - Wednesday, July 15, 2020 - link

    "Back in my day, we didn't count no nanometers in our CPUs, they were etched by hand! 1mm litography was the bee's knees back then, each circuit etched by a worker with a magnifying glass. And our motherboards were as big as a house!"
  • Deicidium369 - Saturday, July 18, 2020 - link

    Calm down, have a butterscotch. I remember when remote control was the youngest member of the family having to get up and change the rotary channel - now some older people need to get the youngest member of the family to show them how to use the remote control.
  • AbRASiON - Wednesday, July 15, 2020 - link

    I don't understand, the article clearly references "Non-ECC" - have we finally solved the ECC thing or not? I'd like it as standard please.
  • Ryan Smith - Wednesday, July 15, 2020 - link

    I use non-ECC figures as a reference point, since that's what most people are familiar with. DIMM-wide ECC still requires additional chips and a wider data bus (40 bits per channel).
  • extide - Wednesday, July 15, 2020 - link

    It's not standard. It's optional just like it always has been.
  • Oxford Guy - Wednesday, July 15, 2020 - link

    "Typically a new standard picks up from where the last one started off, such as with the DDR3 to DDR4 transition, where DDR3 officially stopped at 1.6Gbps and DDR4 started from there. However for DDR5 JEDEC is aiming much higher, with the group expecting to launch at 4.8Gbps, some 50% faster than the official 3.2Gbps max speed of DDR4. And in the years afterwards, the current version of the specification allows for data rates up to 6.4Gbps, doubling the official peak of DDR4."

    It's pleasant to see JEDEC learning from its mistakes.
  • J0S3R - Thursday, July 16, 2020 - link

    So if the DIMMs control their own voltage, do motherboards have any way to influence that, or will over (or under) volting memory become a thing of the past? Likewise, I see in the diagram there are temperature sensors on the DIMM (is that a mandatory part of the spec?) -- is that strictly for use within the DIMM for failure protection, or will that information be made available to the rest of the system?
  • MrVibrato - Friday, July 17, 2020 - link

    Good question. The DDR5 memory specification is only concerned with the DRAM memory chips/packages, not with the DIMM modules.

    JEDEC's committee JC-45, responsible for specifying DRAM modules, hasn't publicized anything concrete about DDR5 DIMMs on their website yet. (There is a mechanical spec for the DDR5 DIMM socket already available, but that obviously is also not about the modules themselves.) I guess we have to wait and see...

    (Several unauthoritative places in the interwebs mumble about on-DIMM voltage regulators being a thing for "server" or "high-end" DIMMs, whatever that really means. Up to you if you want to take such chatter at face value...)
  • BreadFish64 - Saturday, July 18, 2020 - link

    > there aren’t any massive, fundamental changes to the memory bus such as QDR
    I don't think QDR is technically possible. From my understanding, DDR works by transferring on the rising and falling edges of a clock cycle. In order for QDR to make sense you would need a clock that somehow has 4 edges.
  • MrVibrato - Saturday, July 18, 2020 - link

    The SDRAM chip only needs one external clock signal. But it has to "split" the clock into two clock signals, with the 2nd clock signal being shifted/delayed by 90 degrees/a quarter of the clock period. This will provide four edges. All this is handled by the SDRAM chip itself, employing a delay-locked loop (DLL).
  • MrVibrato - Saturday, July 18, 2020 - link

    Correction: While the chips contain a DLL, the DLL's purpose is not to generate a 2nd delayed clock signal. Rather, the SDRAM chip has a 2nd clock signal (called "data strobe") which is the phase-shifted clock signal which provides the second rising and falling edges.
  • PandaBear - Thursday, August 20, 2020 - link

    It seems like, the way things are going with registered DIMMs and on-DIMM voltage regulation, this will push laptops to solder at least one set of base memory onboard and, if you are lucky, leave you one slot for upgrades.
  • bobrooney - Tuesday, September 1, 2020 - link

    Thanks, but I'll wait for the 2nd-gen DDR5 AMD motherboards. They screwed me over once with first-gen DDR4 mobos. Never again. The support was horrid. AGESA after AGESA firmware release, nothing stabilized the ability to use standard 3200MHz RAM until the 2nd-gen Ryzen boards came out, and then I was left out in the cold. Support stopped, and that's that. Never again.
  • moseo - Monday, January 11, 2021 - link

    DDR5 is the new 8k!!

    Mo Seo
    http://www.buysellram.com
  • kepstin - Tuesday, February 16, 2021 - link

    Interesting - going from 72 (64 + 8) bits per channel to 40 (32 + 8) bits per channel on the ECC modules means that the ratio of parity bits to data bits has doubled; does this mean we're going to be seeing support for correcting, rather than just detecting, multi-bit errors?
  • Simba* - Monday, March 8, 2021 - link

    I'm also interested in the type of coverage being offered by ECC DIMMs as well. Specifically (judging by the SPD), it looks like DIMMs will actually offer two levels of ECC: EC4 bits and EC8. So the data bus can actually be 36 bits (32 + 4 bits ECC) or 40 bits (32 + 8 bits ECC). I've been looking online but I can't find any documentation that speaks to the level of coverage that you get with each.
