290 Tri-X OC Thermal Management

Before jumping into our benchmarks, and given the significant focus we're placing on cooling and noise for the 290 Tri-X OC (in light of the reference 290's weaknesses), we wanted to spend a moment discussing the card's thermal management algorithms.

With the 290 series AMD introduced their next generation PowerTune technology, which allows for thermal management based on temperatures, power consumption, and now fan speeds. For the reference 290X in particular this was especially important, as AMD used this functionality to keep fan noise in check despite the heavy thermal load Hawaii placed on the cooler. At the time we had assumed that everyone would use this technology even if they used different coolers, but as it turns out this isn't the case.

For the 290 Tri-X OC, Sapphire has reverted to traditional power and temperature based throttling, opting not to use the functionality of next generation PowerTune. This means that the 290 Tri-X OC does not offer the ability to throttle based on fan speeds, nor does it offer the ability to adjust the temperature it throttles at, instead throttling at Hawaii's TjMax. This implementation caught us off guard at first, since we had expected everyone to use next generation PowerTune; however, as it turns out, this is something that board partners get to decide for themselves on their customized cards.
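
To make the distinction concrete, here is a minimal sketch in Python of the two approaches as we understand them; the constants, function names, thresholds, and step sizes are hypothetical placeholders rather than AMD's or Sapphire's actual firmware behavior.

    # Illustrative sketch only: constants, names, and step sizes are hypothetical
    # stand-ins, not AMD's or Sapphire's actual firmware logic.

    TJ_MAX_C = 95         # approximate thermal throttle point for Hawaii
    POWER_LIMIT_W = 250   # illustrative board power limit

    def traditional_throttle(temp_c, power_w, clock_mhz, step_mhz=13):
        """Sapphire-style management: clocks only come down at the power limit or TjMax."""
        if temp_c >= TJ_MAX_C or power_w > POWER_LIMIT_W:
            return clock_mhz - step_mhz
        return clock_mhz

    def next_gen_powertune(temp_c, power_w, fan_pct, clock_mhz,
                           temp_target_c=94, max_fan_pct=47, step_mhz=13):
        """Reference 290-style management: also honors a user-adjustable temperature
        target and a maximum fan speed, throttling once the fan ceiling is reached."""
        if power_w > POWER_LIMIT_W or (temp_c >= temp_target_c and fan_pct >= max_fan_pct):
            return clock_mhz - step_mhz
        return clock_mhz

The practical upshot is that the reference card trades clockspeed to stay under its fan speed ceiling once it reaches its temperature target, whereas the Tri-X OC simply ramps its fans as needed and only throttles if it ever hits its power limit or TjMax.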

Sapphire, for their part, has told us that based on the ample cooling performance of the Tri-X cooler, they've opted to use a traditional thermal management implementation in order to better sustain performance. Though we can't readily test Sapphire's claims about sustainability, we certainly can't argue with their assessment of the cooler's performance. We'll see the full breakdown in our benchmark section, but the card is having absolutely no problem balancing noise and temperatures right now without next generation PowerTune.

Realistically we wouldn't be surprised if this was also chosen because the Tri-X cooler predates the 290 series – and hence it wasn't necessarily designed to work well with next generation PowerTune – but that's just speculation on our part. To that end it would have been interesting to see a full next generation PowerTune implementation on this card; however, it's really just an intellectual curiosity. Out of the box the 290 Tri-X OC works just fine with a traditional thermal management implementation.

The Test

CPU: Intel Core i7-4960X @ 4.2GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards: AMD Radeon R9 290X
    AMD Radeon R9 290
    XFX Radeon R9 280X Double Dissipation
    Asus Radeon R9 280X DirectCU II TOP
    Sapphire Radeon R9 280X Toxic
    AMD Radeon HD 7970 GHz Edition
    AMD Radeon HD 7970
    NVIDIA GeForce GTX 770
    NVIDIA GeForce GTX 780
    NVIDIA GeForce GTX 780 Ti
Video Drivers: NVIDIA Release 331.93
    AMD Catalyst 13.11 Beta v8
    AMD Catalyst 13.11 Beta v9.5
OS: Windows 8.1 Pro

 

119 Comments

  • ShieTar - Tuesday, December 24, 2013 - link

    "Curiously, the [idle] power consumption of the 290 Tri-X OC is notably lower than the reference 290."

    Well, it runs about 10°C cooler, and silicon does have a negative temperature coefficient of electrical resistance. That 10°C should lead to a resistance increase of a few %, and thus to a lower current of a few %. Here's a nice article about the same phenomenon, observed going from a stock GTX 480 to a Zotac AMP! 480:

    http://www.techpowerup.com/reviews/Zotac/GeForce_G...

    The author over there was also initially very surprised. Apparently kids these days just don't pay attention in physics class anymore ...
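
For reference, the back-of-the-envelope arithmetic behind a "few %" estimate like ShieTar's works out as below; the temperature coefficient is a purely illustrative assumption rather than a measured value for Hawaii, and as the replies that follow point out, leakage is the more commonly cited mechanism.

    # Back-of-the-envelope only; the coefficient is an illustrative assumption,
    # not a measured value for Hawaii.
    alpha = -0.003      # assumed temperature coefficient of resistance, per degC (negative: R rises as T falls)
    delta_t = -10.0     # the Tri-X OC idles roughly 10 degC cooler than the reference 290

    r_ratio = 1 + alpha * delta_t   # relative resistance: ~1.03, i.e. +3%
    i_ratio = 1 / r_ratio           # at a fixed voltage, I = V/R, so current drops ~3%
    p_ratio = 1 / r_ratio           # and P = V^2/R, so power drops by the same ~3%

    print(f"resistance {100*(r_ratio-1):+.1f}%, current {100*(i_ratio-1):+.1f}%, power {100*(p_ratio-1):+.1f}%")
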
  • EarthwormJim - Tuesday, December 24, 2013 - link

    It's mainly the leakage current that decreases as temperature decreases, which can lead to the reduction in power consumption.
  • Ryan Smith - Tuesday, December 24, 2013 - link

    I had considered leakage, but that doesn't explain such a (relatively) massive difference. Hawaii is not a leaky chip, meanwhile if we take the difference at the wall to be entirely due to the GPU (after accounting for PSU efficiency), it's hard to buy that 10C of leakage alone is increasing idle power consumption by one-third.
  • The Von Matrices - Wednesday, December 25, 2013 - link

    In your 290 review you said that the release drivers had a power leak. Could this have been fixed and account for the difference?
  • Samus - Wednesday, December 25, 2013 - link

    Quality VRMs and circuitry optimizations will have an impact on power consumption, too. Lots of factors here...
  • madwolfa - Wednesday, December 25, 2013 - link

    This card is based on the reference design.
  • RazberyBandit - Friday, December 27, 2013 - link

    And based does not mean an exact copy -- it means similar. Some components (caps, chokes, resistors, etc.) could be upgraded and still fill the bill for the base design. Some components could even be downgraded, yet the card would still fit the definition of "based on AMD reference design."
  • Khenglish - Wednesday, December 25, 2013 - link

    Yes power draw does decrease with temperature, but not because resistance drops. Resistance dropping has zero effect on power draw. Why? Because processors are all about pushing current to charge and discharge wire and gate capacitance. Lower resistance just means that happens faster.

    The real reason power draw drops is due to lower leakage. Leakage current is completely unnecessary and is just wasted power.

    Also an added tidbit. The reason performance increases while temperature decreases is mainly due to the wire resistance dropping, not an improvement in the transistor itself. Lower temperature decreases the number of carriers in a semiconductor but improves carrier mobility. There is a small net benefit to how much current the transistor can pass due to temperature's effect on silicon, but the main improvement is from the resistance of the copper interconnects dropping as temperature drops.
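
To put Khenglish's split between switching power and leakage in rough quantitative terms, here is a minimal sketch; every coefficient is arbitrary and chosen purely for illustration, not to model Hawaii.

    # Illustrative only: the numbers below are arbitrary, picked to show the shape
    # of the two contributions rather than to model any real GPU.

    def dynamic_power(c_eff, v, f, activity=1.0):
        """Switching power from charging/discharging capacitance: P = a * C * V^2 * f.
        Interconnect resistance does not appear here; it only changes how quickly the
        capacitance charges, not how much energy each cycle moves."""
        return activity * c_eff * v ** 2 * f

    def leakage_power(p_leak_ref, temp_c, temp_ref_c=80.0, doubling_c=25.0):
        """Leakage rises roughly exponentially with temperature; modeled here as
        doubling every `doubling_c` degrees purely for illustration."""
        return p_leak_ref * 2 ** ((temp_c - temp_ref_c) / doubling_c)

    # A 10 degC drop leaves dynamic power untouched but trims this toy leakage term by ~24%:
    print(leakage_power(10.0, 70.0) / leakage_power(10.0, 80.0))   # ~0.76
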
  • Totally - Wednesday, December 25, 2013 - link

    Resistance increases with temperature -> Power draw increases P=(I^2)*R.
  • ShieTar - Thursday, December 26, 2013 - link

    The current isn't what's stabilized, the voltage is: P=U^2/R.

    " Because processors are all about pushing current to charge and discharge wire and gate capacitance. Lower resistance just means that happens faster."

    Basically correct, nevertheless capacitor charging happens asymptotically, and any IC optimised for speed will not wait for a "full" charge. The design baseline is probably to get the lowest charging required for operation at the highest qualified temperature. Since decreasing temperature will increase charging speed, as you pointed out, you will get to a higher charging ratio, and thus use more power.

    On top of that, the GPU is not exclusively transistors. There is power electronics, there are interconnects, there are caches, and who knows what else (not me). Now when the transistors pull a little more charge due to the higher temperature, and the interconnects which deliver the current have a higher resistance, then you get additional transmission losses. And that's on top of higher leakage rates.

    Of course the equation gets even more fun if you start considering the time constants of the interconnects themselves, which have gotten quite relevant since we got to 32nm structures, hence the high-K materials. Though I have honestly no clue how this contribution is linked to temperature.

    But hey, here's hoping that Ryan will go and investigate the power drop with his equipment and provide us with a full explanation. As I personally don't own a GPU which gets hot in idle (can't force the fan below 30% by software and won't stop it by hand) I cannot test idle power behavior on my own, but I can and did repeat the FurMark test described in the link above, and also see a power saving of about 0.5W per °C with my GTX 660. And that's based on internal power monitoring, so the mainboard/PCIe slot and the PSU should add a bit more to that:

    https://www.dropbox.com/s/javq0dg75u40357/Screensh...
