Tonga’s Microarchitecture - What We’re Calling GCN 1.2

As we alluded to in our introduction, Tonga brings with it the next revision of AMD’s GCN architecture. This is the second such revision to the architecture, the previous revision (GCN 1.1) having been rolled out in March of 2013 with the launch of the Bonaire-based Radeon HD 7790. In the case of Bonaire AMD chose to keep the details of GCN 1.1 close to the vest, only finally going in-depth for the launch of the high-end Hawaii GPU later in the year. The launch of GCN 1.2, on the other hand, sees AMD meeting enthusiasts half-way: we aren’t getting Hawaii-level detail on the architectural changes, but we are getting an itemized list of the new features (or at least the features AMD is willing to talk about) along with a short description of what each feature does. Consequently Tonga may be a lateral product from a performance standpoint, but it is going to be very important to AMD’s future.

But before we begin, we do want to quickly remind everyone that the GCN 1.2 name, like GCN 1.1 before it, is unofficial. AMD does not publicly name these microarchitectures outside of development, preferring instead to treat the entire Radeon 200 series as relatively homogeneous and calling out feature differences where it makes sense. In lieu of an official name, and based on the iterative nature of these enhancements, we’re going to use GCN 1.2 to summarize the feature set.


AMD's 2012 APU Feature Roadmap. AKA: A Brief Guide To GCN

To kick things off we’ll pull out this old chestnut one last time: AMD’s HSA feature roadmap from their 2012 financial analysts’ day. Given HSA’s tight dependence on GPUs, this roadmap has offered a useful high-level overview of some of the features each successive generation of AMD GPU architectures will bring with it, and with the launch of the GCN 1.2 architecture we have finally reached what we believe is the last step in AMD’s roadmap: System Integration.

It’s no surprise then that one of the first things we find on AMD’s list of features for the GCN 1.2 instruction set is “improved compute task scheduling”. One of AMD’s major goals for their post-Kaveri APUs has been to improve HSA performance through various forms of overhead reduction, including faster context switching (something GPUs have always been poor at) and even GPU pre-emption. All of this would fit under the umbrella of “improved compute task scheduling” in AMD’s roadmap, though to be clear, AMD only meeting us half-way on the architecture side means that they aren’t getting this detailed this soon.

Meanwhile GCN 1.2’s other instruction set improvements are quite interesting. The listing of 16-bit FP and integer operations is actually quite descriptive, and includes a very important keyword: low power. Briefly, PC GPUs have been centered around 32-bit mathematical operations for a number of years now, ever since advancing desktop technology and transistor density eliminated the need for 16-bit/24-bit partial precision operations. All things considered, 32-bit operations are preferred from a quality standpoint, as they are accurate enough for many compute tasks and virtually all graphics tasks, which is why PC GPUs were limited to (or at least optimized for) partial precision operations for only a relatively short period of time.

However 16-bit operations are still alive and well on the SoC (mobile) side. SoC GPUs are in many ways a 5-10 year old echo of PC GPUs in features and performance, while in other ways they’re outright unique. SoC GPUs are extremely sensitive to power consumption in a way that PC GPUs have never been, so while they can use 32-bit operations, they will in some circumstances favor 16-bit operations for power efficiency purposes. Despite the accuracy limitations of the lower precision, if a developer knows they don’t need the greater accuracy, then falling back to 16-bit means saving power, and depending on the architecture it can also improve performance if multiple 16-bit operations can be scheduled alongside each other.
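To illustrate the packing concept, here is a minimal sketch using NVIDIA’s CUDA FP16 intrinsics purely as a stand-in; AMD has not published the specifics of GCN 1.2’s 16-bit instructions, so the function names and packing format here should not be read as AMD’s implementation. The idea is simply that two 16-bit values share one 32-bit register and are processed by a single instruction.

```cuda
#include <cuda_fp16.h>

// Illustrative stand-in only, not AMD's mechanism: CUDA's packed-FP16 type
// holds two half-precision values in one 32-bit register, and __hmul2
// multiplies both pairs with a single instruction. On hardware with native
// FP16 support this doubles per-register throughput and spends less power
// than promoting the math to full FP32.
__global__ void scale_half2(const __half2* in, __half2* out, __half2 factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __hmul2(in[i], factor);  // two FP16 multiplies per instruction
}
```

The same kernel written against 32-bit floats would need twice the register space for the data and twice as many multiply instructions, which is precisely the overhead a power-constrained SoC GPU wants to avoid.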


Imagination's PowerVR Series 6XT: An Example of An SoC GPU With FP16 Hardware

To that end, the fact that AMD is taking the time to focus on 16-bit operations within the GCN instruction set is interesting, but not unexpected. If AMD were to develop SoC-class processors and wanted to use their own GPUs, then natively supporting 16-bit operations would be a logical addition to the instruction set for such a product. The power savings would be helpful for getting GCN into even smaller form factors, and with so many other GPUs supporting special 16-bit execution modes it would help make GCN competitive with those products.

Finally, data parallel instructions are the feature we know the least about. SIMDs can already be described as data parallel – it’s one instruction operating on multiple data elements in parallel – but obviously AMD intends to go past that. Our best guess would be that AMD has both a means and a need to have two SIMD lanes operate on the same piece of data, though why they would want to do this and what the benefits may be are not clear at this time.
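Lacking details from AMD, we can at least illustrate the baseline the company would be moving past. The sketch below, written in CUDA for convenience, shows ordinary data parallelism (one instruction stream, each lane working on its own element) plus a cross-lane read in which many lanes consume the same piece of data; the warp shuffle used here is strictly our analogy, not AMD’s mechanism.

```cuda
// Our own illustrative sketch, not AMD's disclosed design. Baseline SIMD
// data parallelism: the same instruction executes across all lanes, each
// on its own array element. The warp shuffle then lets every lane read one
// lane's value, i.e. many lanes operating on the same piece of data.
__global__ void lanes_share_data(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float v = (i < n) ? in[i] : 0.0f;

    // All 32 lanes of the warp read lane 0's value in one step, with no
    // round trip through shared or global memory.
    float lane0 = __shfl_sync(0xffffffffu, v, 0);

    if (i < n)
        out[i] = v + lane0;  // same instruction, different data per lane
}
```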

Comments

  • felaki - Wednesday, September 10, 2014 - link

    The article says that the Sapphire card has "1x DL-DVI-I, 1x DL-DVI-D, 1x HDMI, and 1x DisplayPort". Can you be more precise as to which versions of the spec are supported? Is it HDMI 1.4 or HDMI 2.0? I believe since this refers to MST, it's only HDMI 1.4 and a DisplayPort connection is required in MST mode for 4K@60Hz output?

    Reading the recent GPU articles, I'm very puzzled why HDMI 2.0 adoption is still lacking in GPUs and displays, even though the spec has been out for about a year now. Is the PC industry reluctant to adopt HDMI 2.0 for some (political(?), business(?)) reason? I have heard only bad things about using DisplayPort 1.2 MST to carry a 4K@60Hz signal, and I'm thinking it's a buggy hack for a transitional tech period.

    If the AMD newest next-gen graphics card only supports HDMI 1.4, that is mind-boggling. Please tell me I'm confused and this is a HDMI 2.0-capable release?
  • Ryan Smith - Wednesday, September 10, 2014 - link

    DisplayPort 1.2 and HDMI 1.4. Tonga does not add new I/O options.
  • felaki - Wednesday, September 10, 2014 - link

    Thanks for clarifying this!
  • Penti - Wednesday, September 10, 2014 - link

    You can do 4K SST on both Nvidia and AMD cards as long as they are DisplayPort 1.2 capable; it depends on your screen. There is no 600MHz HDMI on any graphics processor, and neither is there much support from monitors or TVs, as most don't do 600MHz.
  • felaki - Wednesday, September 10, 2014 - link

    Thanks! I was not actually aware that SST existed. I see here http://community.amd.com/community/amd-blogs/amd-g... that AMD is referring to SST as the thing to fix up the 4K issue, although people in the comments on that link report that the setup is not working properly.

    How do people generally see SST? Should one defer buying a new system now until proper HDMI 2.0 support comes along, or is SST+DisplayPort 1.2 already a glitch-free user experience for 4K@60Hz?
  • Kjella - Wednesday, September 10, 2014 - link

    Got 3840x2160x60Hz using SST/DP and it's been fine, except UHD gaming is trying to kill my graphics card.
  • mczak - Wednesday, September 10, 2014 - link

    DP SST 4K/60Hz should be every bit as glitch-free as proper HDMI 2.0 (be careful with the latter though, since some 4K TVs claiming to accept 60Hz 4K resolutions over HDMI will only do so with YCbCr 4:2:0). DP SST has the advantage that even "old" gear on the graphics card side can do it (such as Radeons from the HD 6xxx series); from the hardware side, if a card could do DP MST 4K/60Hz it should most likely be able to do the same with SST too, as the reason the MST hack was needed in the first place lies entirely on the display side.
    But if you're planning to attach your 4K TV to your graphics card, a DP port might not be of much use, since very few TVs have one.
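    For reference, a rough sketch of the bandwidth math behind this, assuming the standard CEA 594MHz pixel clock for 4K60 and 24-bit RGB (a back-of-the-envelope illustration only; exact figures vary with blanking):

```cuda
#include <cstdio>

// Rough host-side sanity check: why 3840x2160 @ 60Hz fits within
// DisplayPort 1.2 SST but not HDMI 1.4. Assumes the standard CEA-861
// 594 MHz pixel clock and 24-bit RGB; real timings vary with blanking.
int main()
{
    const double needed_gbps = 594.0 * 24.0 / 1000.0;  // ~14.26 Gbps of pixel data

    const double hdmi14_gbps = 340.0 * 24.0 / 1000.0;  // 340 MHz TMDS ceiling -> ~8.16 Gbps
    const double dp12_gbps   = 4 * 5.4 * 0.8;          // 4 lanes x HBR2 after 8b/10b -> 17.28 Gbps

    std::printf("needed : %5.2f Gbps\n", needed_gbps);
    std::printf("HDMI1.4: %5.2f Gbps (%s)\n", hdmi14_gbps,
                hdmi14_gbps >= needed_gbps ? "fits" : "too slow");
    std::printf("DP 1.2 : %5.2f Gbps (%s)\n", dp12_gbps,
                dp12_gbps >= needed_gbps ? "fits" : "too slow");
    return 0;
}
```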
  • Solid State Brain - Wednesday, September 10, 2014 - link

    I won't get another AMD video card until idle multi-monitor power consumption gets fixed. According to other websites, power consumption in that case increases substantially, whereas NVidia video cards draw almost the same as with a single display. In the case of the Sapphire 285 Dual-X it increases by almost 30W just from having a second display connected!!

    I think Anandtech should start measuring idle power consumption with more than one display connected to the video card / in multi-monitor configurations. It's important information for the many users who not only game but also have productivity needs.
  • Solid State Brain - Wednesday, September 10, 2014 - link

    And of course, a comment editing function would be useful too.
  • shing3232 - Wednesday, September 10, 2014 - link

    well, AMD video cards have to run at a higher frequency with multiple monitors than with a single one
