Samsung has issued an update to its GDDR6 announcement from earlier this month. The company’s GDDR6 lineup will include chips in 8 Gb and 16 Gb capacities, as well as speed bins not mentioned in the original announcement.

In addition to 16 Gb GDDR6 chips with an 18 Gbps I/O speed, Samsung will offer GDDR6 with 12, 14, and 16 Gbps data transfer rates, targeting applications with different performance requirements. Meanwhile, the two chip capacities (8 Gb and 16 Gb) will let Samsung address applications with different requirements for the amount of onboard memory.

Assuming both capacities will be made in all the speed bins, this gives the following:

GPU Memory Math: Samsung GDDR6

                               8 Gb (1 GB) chips     16 Gb (2 GB) chips
Bandwidth Per Pin (Gb/s)       12   14   16   18     12   14   16   18
B/W Per Chip (GB/s)            48   56   64   72     48   56   64   72
Max Capacity, 256-bit bus      8 GB                  16 GB
Max Capacity, 384-bit bus      12 GB                 24 GB
Total B/W, 256-bit bus (GB/s)  384  448  512  576    384  448  512  576
Total B/W, 384-bit bus (GB/s)  576  672  768  864    576  672  768  864
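
The arithmetic behind the table is simple: each GDDR6 chip has a 32-bit interface, so per-chip bandwidth is the per-pin rate times 32 bits divided by 8, and total bandwidth scales linearly with bus width. A minimal Python sketch of that math (the 32-bit-per-chip figure is the standard GDDR6 configuration; the speed bins and bus widths are from Samsung’s announcement):

    # Bandwidth math for Samsung's announced GDDR6 speed bins.
    # Each GDDR6 chip exposes a 32-bit interface (standard configuration).
    PIN_SPEEDS_GBPS = (12, 14, 16, 18)
    CHIP_IO_BITS = 32

    def bandwidth_gbs(pin_gbps, width_bits):
        """Aggregate bandwidth in GB/s for a per-pin rate and interface width."""
        return pin_gbps * width_bits / 8

    for bus_bits in (256, 384):
        chips = bus_bits // CHIP_IO_BITS  # 8 or 12 chips per card
        for pin in PIN_SPEEDS_GBPS:
            print(f"{bus_bits}-bit @ {pin} Gbps/pin: "
                  f"{bandwidth_gbs(pin, CHIP_IO_BITS):.0f} GB/s per chip, "
                  f"{bandwidth_gbs(pin, bus_bits):.0f} GB/s total ({chips} chips)")

Capacity follows the same chip count: eight 16 Gb (2 GB) chips on a 256-bit bus yield 16 GB, while twelve on a 384-bit bus yield 24 GB.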

Samsung’s 16 Gb GDDR6 chips could be used for various high-end products that benefit from large amounts of memory, including graphics cards and compute accelerators. By contrast, the company’s 8 Gb GDDR6 ICs will be handy for mainstream graphics cards that do not carry large amounts of memory.

Samsung did not announce pricing of its GDDR6 products, but it is logical to expect 16 Gb chips rated for 18 Gbps to cost considerably more than 8 Gb ICs in the lower speed bins. The broad portfolio should therefore let Samsung address multiple market segments and capitalize on the new type of memory.

Source: Samsung

Comments

  • Dragonstongue - Thursday, January 25, 2018

    10 years ago there were many limitations to making 512-bit+ bus widths, cost being one of them (I'm sure they could have made a 4096-bit bus, but (a) no one would be willing to foot the bill, and (b) programs and such simply could NOT take advantage of it).

    The product in question, say system memory or a graphics card, is BUILT to use the bus width as well as possible. Too narrow a bus is useless, and an extremely wide one is just as useless, as could be seen years ago when ultra-low-end graphics cards had 4+ GB of memory and could NEVER use it effectively, hampered by a 64- or 128-bit bus.

    It is truly a tricky balancing act to give a design the proper amount of memory, let alone a wide enough bus to accommodate it, without making the cost astronomical for nothing gained (if you have to dedicate, say, 60% of the die just to the memory bus, that leaves very little "meat" for the engine, so to speak).

    Like a 20-lane highway for 1 vehicle, or a 1-lane highway for 1000 vehicles ^.^

    I can only assume that 10+ years ago, when dies were far larger, it was "easier" to wire things up (more room on the die for the extra wiring needed), whereas on "modern", substantially smaller process nodes it is much, much more challenging to fit the wiring needed for very wide buses while still making sure there is room for all the transistors, wiring, voltages etc. to "make it happen".

    Notice we see fewer and fewer large connectors these days, if at all (such as VGA, i.e. far too much wiring needed for nothing worthwhile).

    Smaller and smaller is a good thing in many ways, but also very detrimental in others; the crosstalk alone from having wires at high operating speeds/power/temperatures in closer and closer proximity to one another makes it anything but "easy", I'm sure.

    10+ years ago the limit was the technology to make things fast while keeping temperatures/power in check; now the tech seems to be there to make it possible, but there is always a limit as well, such as parts being "too close together to run at crazy speeds and keep signaling in check".

    I'm sure once they use pure optical interconnects, the limits of power/temperature/signaling will be nothing like the issues they currently face. Light is either on or off (signal or no signal), broken or unbroken; in theory the circuit is either 100% functional or 100% "dead", unlike the problems currently faced, as per my understanding above.

    Point is, IMO, these things are anything but pin-the-tail-on-the-donkey easy: insanely expensive design/man-hours to make them happen, and if they do not make them well, or misjudge, they basically become a multi-million-dollar paperweight O.O
  • Pork@III - Friday, January 26, 2018

    Yes, yes, the future graphics card will be no card at all, only an SoC, with all data streams between components over Wi-Fi... This way, manufacturers will save the most on materials and will not have any complications in the construction, because there simply will not be a construction. Once they have come to the point of saving on the costs of building and producing their products, I think we as clients should also want to save, by not buying the cheap, third-party garbage at any cost.
  • CiccioB - Friday, January 26, 2018

    10 years ago, traces did not have to carry 16 Gb/s signals.
    You can improve signal quality inside the silicon by shrinking and so on, but on copper traces the quality stays the same, and pushing very fast signals over very wide channels is and remains a universal problem.
    I'm not saying it is impossible, I'm just saying it would be really expensive to achieve; and given the bandwidth these new memories provide, it is quite pointless to build such a bus when its cost would probably be similar to adopting HBM.
  • Pork@III - Friday, January 26, 2018

    10 years ago, GPUs were many times slower than today's GPUs.
  • willis936 - Tuesday, January 30, 2018

    10 years ago, saying “4000-pin CPU in 2018” would have gotten you laughed out of any room. Yet here we are. Also, 2008 is late enough that anyone could see that things like low-overhead encoding (such as 128b/130b) and link training/emphasis would become reasonable on hundreds of channels in consumer electronics, considering the decrease in transistor size.
  • yhselp - Friday, January 26, 2018

    That's just the thing: in light of these new GDDR6 chips, HBM looks a bit redundant. While HBM2 is theoretically capable of up to 1 TB/s, the fastest implementation so far has been 716 GB/s, which can be surpassed even by a step-down GDDR6 configuration. Not to mention that even GDDR5X has been providing HBM-competitive bandwidth.

    There was a brief point in time when the maximum achievable bandwidth using conventional memory was about 336 GB/s and GPUs were quickly getting faster, so HBM's promise of a threefold improvement seemed like the future. But that future never came: even today there are no 1 TB/s implementations, and conventional memory (GDDR5X, GDDR6) has quickly gained ground.
  • yhselp - Friday, January 26, 2018

    Edit: I'd completely forgotten about Tesla V100's 1.75 Gbps, 4096-bit HBM2 implementation, which is good for 896 GB/s. Not that it changes the point I was trying to make much, but still.
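
    For reference, the math behind those figures, assuming the 716 GB/s part uses a full 4096-bit (four-stack) HBM2 interface and the GDDR6 comparison uses a conventional 384-bit bus (a rough sketch, not figures from the article):

        # Sanity-checking the bandwidth figures in the comments above:
        # bandwidth (GB/s) = interface width (bits) * per-pin rate (Gb/s) / 8
        print(4096 * 1.4 / 8)    # ~716.8 GB/s: ~1.4 Gbps/pin HBM2
        print(4096 * 1.75 / 8)   # 896 GB/s: Tesla V100's HBM2
        print(384 * 16 / 8)      # 768 GB/s: "step-down" 16 Gbps GDDR6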
  • btb - Friday, January 26, 2018

    Looking forward to next-generation consoles using this :). Xbox One X currently uses 12 GB of GDDR5 memory with 326 GB/s of bandwidth, so it looks like there is room to bump that up to 576-864 GB/s depending on the GDDR6 speed bin (see the sketch below).
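
    A quick sketch of that math, assuming a next-generation console keeps the One X's 384-bit bus (6.8 Gbps is the One X's GDDR5 per-pin rate):

        # Bandwidth = bus width (bits) * per-pin rate (Gb/s) / 8
        for label, gbps in (("GDDR5 today", 6.8),
                            ("slowest GDDR6 bin", 12.0),
                            ("fastest GDDR6 bin", 18.0)):
            print(f"{label}: {384 * gbps / 8:.0f} GB/s")  # ~326, 576, 864 GB/s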
  • alwayssts - Saturday, January 27, 2018

    Yeah.

    I believe there are essentially three options, given that I think they will have a GPU capable of high-end 4K30 and enough CPU horsepower (read: an actual Ryzen/Core) to run 60 fps (if devs choose lower-res rendering, checkerboarding, dynamic scaling, etc.)... heck, even native 1080p120 to the PSVR could be a thing. To put it another way, essentially 2x an Xbox One X or 2.88x a PS4 Pro... or similar-ish to Vega/GTX 1080 in the GPU department, plus a 'real' CPU.

    1. 256-bit 18 Gbps GDDR6
    2. 1.2 GHz HBM2
    3. 1 GHz HBM2, or 16 Gbps GDDR6 + DDR4

    The first two are perhaps the better options (given that they give the CPU access to a ton of bandwidth, which devs can allocate accordingly) and the more likely ones, but they are also expensive until the rest of the non-Samsung market catches up with production on 10nm-class nodes. That said, Hynix (and Micron, wrt GDDR6) may very well do so by the time the next set of systems becomes feasible, which itself likely won't make sense until 7nm (at whichever foundry) becomes feasible from both a cost and a production standpoint. They very well may indeed line up, but we'll have to see!
