In a busy week that’s going to end up being just a bit busier, AMD has pushed out another Catalyst driver update specifically targeted at the new Radeon R9 290 series, Catalyst 13.11 Beta 9.2. This release is especially notable because it resolves some outstanding issues that the hardware press has covered in depth this week opposite the 290 launch, and because it makes low level operational changes that will have a direct impact on the power, noise, and performance characteristics of the 290 series.

First off, let’s talk about what AMD has done with their drivers. Citing an issue with variability in the fan speeds of 290 series cards, AMD has changed how their drivers manage fan speed on these cards, essentially overriding the BIOS defaults with new values. This is similar to how AMD deployed their specification change for the 290 ahead of its launch – adjusting the default maximum fan speed from 40% to 47% – but this time AMD has also used the driver update to change how they define and control fan speeds.

Rather than controlling fan speed based on percentages – which really measure fan speed as the duty cycle of the fan’s motor, a relative metric – AMD has switched to controlling fan speeds on an absolute basis, using the measured RPM of the fan. This goes back to AMD’s concern over variance: there is going to be some variance – and apparently too much variance – from fan motor to fan motor in how fast it can go, and consequently in just what a specific duty cycle represents on a relative basis. By switching to measuring fan speed on an absolute basis there will definitely be less variation, as AMD is now controlling fan speeds by the very same metric they use to define that variation (RPM).
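To make the distinction concrete, below is a minimal conceptual sketch of the two control schemes. This is purely illustrative Python, not AMD’s firmware; the fan speeds, the proportional loop, and its gain are all invented for the example.

```python
# Conceptual sketch: open-loop duty-cycle control vs. closed-loop RPM control.
# All numbers here are hypothetical; this is not AMD's actual fan logic.

def fan_rpm(duty_pct, unit_max_rpm):
    """Open loop: a fixed duty cycle produces whatever RPM this particular
    motor happens to spin at, so RPM varies from unit to unit."""
    return duty_pct / 100.0 * unit_max_rpm

def rpm_step(target_rpm, measured_rpm, duty_pct, gain=0.01):
    """Closed loop: nudge the duty cycle until the tachometer reading
    matches the target, so every unit converges on the same RPM."""
    duty_pct += gain * (target_rpm - measured_rpm)
    return min(100.0, max(0.0, duty_pct))

# Two hypothetical fans whose motors top out at different speeds:
for unit_max in (5000, 5400):
    print(f"47% duty -> {fan_rpm(47, unit_max):.0f} RPM")  # varies per unit
    duty = 47.0
    for _ in range(200):  # iterate the loop until it settles
        duty = rpm_step(2650, fan_rpm(duty, unit_max), duty)
    print(f"2650 RPM target -> duty settles near {duty:.1f}%")  # same RPM
```

Under duty-cycle control the two hypothetical fans land at 2350 and 2538 RPM from the same 47% setting; under RPM control both settle at exactly 2650 RPM, with the duty cycle differing per unit instead.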

For the release of this driver, here is what AMD specifically had to say:

We’ve identified that there’s variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink.

The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.

Given the significant interest there has been this week in articles published over at Tom’s Hardware and their experience with additional retail 290 series cards, it's likely that this update is related to the issues Tom’s was seeing. If so, the implication is that fans have been running too slow, which would have resulted in lower performing cards. It's also possible that fans could be running too fast in some configurations, which would result in louder, higher performing cards. The third scenario this update corrects is one that AMD told us about: where the fans run too slow during light-to-medium workloads, which in turn allows the GPU to heat up more than it should and forces the fan to run at higher speeds down the line. In this third scenario the overall acoustic profile of the card would actually be quieter post-update. Admittedly this isn't something we test for or something we've seen internally, but it's a situation that AMD says also improves with this update.

Along with reducing variation, the net result of this driver as far as our samples are concerned is that fan speeds are going up. AMD’s new maximum fan speeds for the 290X (quiet mode) and 290 will be 2200 RPM and 2650 RPM respectively. A quick meta-analysis doesn’t turn up any site reporting review samples with RPMs that high or higher, in which case the situation should be similar to ours. Our cards topped out at 2100 RPM for the 290X and 2500 RPM for the 290, so these new values represent a 100 RPM and 150 RPM increase in default fan speeds respectively. Or on a percentage basis, we’ve gone from 40% to 42% for the 290X, and from 47% to 49% for the 290.

AMD Radeon R9 290 Series Maximum Fan Speeds
Card              | Catalyst 13.11 Beta 8 | Catalyst 13.11 Beta 9.2
290X (Quiet Mode) | ~2100 RPM (40%)       | ~2200 RPM (42%)
290               | ~2500 RPM (47%)       | ~2650 RPM (49%)
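As an aside, the RPM and percentage columns above are mutually consistent if one assumes fan RPM scales roughly linearly with duty cycle; that linearity is our inference for illustration, not a relationship AMD has published. A quick sanity check:

```python
# Sanity check: assuming RPM scales roughly linearly with duty cycle,
# each (RPM, %) pair from the table implies a similar full-speed RPM.
pairs = {
    "290X, Beta 8":   (2100, 40),
    "290X, Beta 9.2": (2200, 42),
    "290, Beta 8":    (2500, 47),
    "290, Beta 9.2":  (2650, 49),
}
for name, (rpm, pct) in pairs.items():
    print(f"{name}: implied 100% fan speed ~{rpm / (pct / 100):.0f} RPM")
# All four pairs work out to roughly 5200-5400 RPM, so the table's two
# columns are consistent with a simple linear duty-to-RPM mapping.
```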

Since we don’t have any other 290 cards at this time, and our second 290X already behaved virtually identically to our first, we’re not in a position to discuss the matter of variance in further detail. Presumably variance was a big enough issue that it required AMD’s quick attention, but we don’t have any further cards from which to get a first-hand impression of just how large that variance was. Whatever the variance was though, this change should virtually eliminate it.

What we can briefly look at, however, is how this change affected our performance results. The net effect is that AMD has increased fan speeds for the 290 series, and as a result noise levels are going to go up slightly; due to the close relationship between noise, cooling, and heat dissipation, power consumption will go up slightly too. We’d expect performance to go up as well (again, similar to the 290’s spec change), but in reality the amount of variance caused by PowerTune has all but drowned out any possible performance difference on our 290X. Meanwhile our 290 wasn’t cooling limited in the first place, so this change hasn’t affected its gaming performance.

With respect to PowerTune on the 290X, we’ve found that PowerTune adjusts clockspeeds rather significantly in response to even the smallest input changes, which makes it difficult to isolate any performance changes resulting from the fan speed adjustment. The reason why this is happening is unclear, but we suspect it has to do with the 290 series cards not having much flexibility to adjust their voltages, forcing them to instead make wide clockspeed adjustments to achieve the necessary reduction in power consumption and heat generation.
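The standard first-order model for dynamic power illustrates why thin voltage headroom forces wide clockspeed swings: dynamic power scales roughly as P ∝ C·V²·f, so when voltage can’t come down alongside frequency, the entire power cut has to come from the clocks. The sketch below works through that arithmetic with assumed numbers; it’s a textbook approximation, not AMD’s actual PowerTune model.

```python
# First-order dynamic power model: P ~ C * V^2 * f (illustrative only).
# If voltage scales with frequency, a small clock cut yields a big power
# cut; if voltage is pinned, the whole cut must come from the clocks.

target = 0.80  # suppose PowerTune needs a 20% power reduction

# Voltage scales with frequency (V ratio == f ratio): solve f^3 = 0.80.
f_with_v_scaling = target ** (1 / 3)   # ~0.93, only a ~7% clock drop

# Voltage fixed (V ratio == 1): solve f = 0.80 directly.
f_fixed_voltage = target               # the full 20% comes from clocks

print(f"with voltage scaling: clocks drop to {f_with_v_scaling:.1%}")
print(f"with fixed voltage:   clocks drop to {f_fixed_voltage:.1%}")
```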

To put this concept to the test, here are some quick scatter plots of the 290 and 290X running FurMark, plotting clockspeed against voltage (VDDC) as measured by GPU-Z. These voltages are going to be subject to external factors such as vDroop, but it’s the best we have right now since we can’t see VIDs.
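For anyone who wants to reproduce this kind of chart, GPU-Z’s sensor logging writes a comma-separated text file that can be plotted directly. Below is one way to do it in Python; the file name and column headers are assumptions based on typical GPU-Z logs, so adjust them to match your own file.

```python
# Sketch: scatter-plot GPU core clock against VDDC from a GPU-Z sensor log.
# Column header names vary by GPU-Z version and card; adjust as needed.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("gpuz_sensor_log.txt", skipinitialspace=True)
log.columns = log.columns.str.strip()   # GPU-Z pads its headers with spaces

clock_col = "GPU Core Clock [MHz]"      # assumed header name
vddc_col = "VDDC [V]"                   # assumed header name

plt.scatter(log[vddc_col], log[clock_col], s=8, alpha=0.5)
plt.xlabel("VDDC (V)")
plt.ylabel("GPU core clock (MHz)")
plt.title("Clockspeed vs. voltage under FurMark")
plt.show()
```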

In brief, there’s only roughly a 100mV difference in voltage between the 290X’s base clockspeed and boost clockspeed, and even less of a difference on the 290. If this data is reasonably accurate, it would explain why the 290 series sees such heavy clockspeed throttling at times, and why our gaming performance hasn’t changed. So with that in mind, let’s look at the numbers.

Radeon 290 Series Driver Changes: Noise

First and foremost, noise under load has predictably gone up. For the 290X where FurMark and Crysis 3 top out at the same point, this new noise level is 55.6dB, 2.3dB higher than the old maximum of 53.3dB. For the 290 on the other hand, noise levels don’t change under Crysis 3 since it wasn’t cooling/fan limited in the first place, remaining at 57.2dB. However the worst case scenario, as represented by FurMark, sees noise levels increase a further 1.6dB to 60.1dB.
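For a sense of scale, decibel deltas translate to sound power via the standard relation 10^(ΔL/10); this is generic acoustics arithmetic, not anything specific to these cards.

```python
# Convert the measured dB increases into relative sound power:
# power ratio = 10 ** (delta_dB / 10).
for label, delta in (("290X load, +2.3 dB", 2.3), ("290 FurMark, +1.6 dB", 1.6)):
    print(f"{label}: ~{10 ** (delta / 10):.2f}x the radiated sound power")
# +2.3 dB is ~1.70x and +1.6 dB ~1.45x the sound power, though perceived
# loudness grows more slowly than raw power does.
```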

Radeon 290 Series Driver Changes: Power

As for power consumption, since we’re clearly cooling limited in most scenarios on the 290 series, any increase in cooling performance brings an increase in power consumption. For the 290X under FurMark this is another 10W at the wall, while under Crysis 3 (where performance is nearly identical) it’s a barely measurable 3W difference. For the 290 the difference is 11W under FurMark, and nothing at all under Crysis 3, since that card wasn’t cooling limited in the first place.

Radeon 290 Series Driver Changes: Gaming Performance

Finally for performance, we can see that the fan speed adjustments had no measurable impact on performance under Crysis 3. The 290 was never cooling limited in the first place, and due to the voltage behavior discussed above, PowerTune has all but wiped out any potential performance improvement for the 290X, leaving it changed by a fraction of a frame per second. Unfortunately this means the noise increase is very real, but there isn’t a measurable performance increase to go with it.

With all of that said, this won’t be impacting our reviews of the 290 or 290X (or GTX 780 Ti), as there isn’t a performance change to account for, and the noise change, though unfortunate, is limited under gaming workloads to the 290X (though this does mean the 290X loses some further ground to the 290).

About that 290 Conclusion

Since we’re already on the matter of our recommendations, I wanted to spend a bit of time following up on our 290 review, as that review and its conclusion generated a lot more feedback than we had been expecting. In this week’s article I flat out avoided recommending the 290 because of its acoustic profile. When faced with the tradeoff of noise vs. performance, AMD clearly chose the latter and ended up with a card that delivers a ridiculous amount of performance for $399 but exceeds our ideas of comfortable noise levels in doing so.

I personally value acoustics very highly and stand by my original position that the reference R9 290 is too loud. When I game I use open back headphones so I can listen for phone calls or the door for shipments, and as a result acoustics do matter to me. In the review I assumed everyone else valued acoustics at least similarly to me, but based on your reaction it looks like I was mistaken. While a good number of AnandTech readers agreed the R9 290 was too loud, an equally important section of the audience felt that the performance delivered was more than enough to offset the loud cooling solution. We want our conclusions to not only be reflective of our own data, but also be useful to all segments of our audience. In the case of the 290 review, I believe we accomplished the former but let some of you down with the latter.

Part of my motivation here is to make sure that we send the right message to AMD: we don’t want louder cards. From what I understand, that message has been received loud and clear. It’s very important to me that we don’t tell AMD or NVIDIA that it’s OK to engage in a loudness war in the pursuit of performance; we have seen a lot of progress in acoustics and cooler quality since the mid-to-late 2000s, and we’d hate to see that progress undone. A good solution delivers both performance and a great user experience, and I believe it’s important that we argue for both (which is why we include performance, power, and noise level data in our reviews).

The Radeon R9 290 does offer tremendous value, and if you’re a gamer who can isolate yourself from the card’s acoustics (or otherwise doesn’t care), it’s easily the best buy at $399. If acoustics are important to you, then you’re in a tougher position today, as there really isn’t an alternative if you want R9 290 performance at the same price. The best recommendation I have is to either pony up more cash for a quieter card, accept the noise as-is, or wait and see what some of the customized partner 290 cards look like once they arrive. I suspect we’ll have an answer to that problem in the not too distant future as well.

Note that this isn't going to be the last time performance vs. acoustics is a tradeoff. AMD pointed out to us that the 290/290X update is the first time a card's fan speed has been determined by targeting RPMs rather than by PWM duty-cycle manipulation. In the past it didn't really matter, since performance didn't scale all that much with fan speed. Given the current realities of semiconductor design and manufacturing, however, the 290/290X situation, where fan speed significantly impacts performance, is going to continue going forward. We've already made the case to AMD for better reference cooling designs, and it sounds like everyone is on the same page there.

Given the amount of interest this has generated, I'm curious to get your feedback on the performance vs. acoustics debate. Feel free to share your comments below on how important acoustics are to you (vs. performance) and at what point a GPU becomes too loud. For us the reference point was NVIDIA's GeForce GTX 480, but I'm interested to know which GPUs in your past have been too loud.

Comments

  • jerrylzy - Sunday, November 10, 2013 - link

    I have to say NVIDIA has a far better reference cooling system. The R9 290 is indeed too loud, and the voltage is ridiculously high, which, in my opinion, is responsible for the high power consumption and noise level. I personally own a 7970GE Vapour-X card, the stock voltage of which is 1.256V. The card is loud (but not insanely loud). After modifying the BIOS, I found that I could lower the voltage since I didn't need very high frequencies, so I changed the voltage from 1.256V to 1.143V. The whole world became quiet without losing performance.
  • chrnochime - Wednesday, November 13, 2013 - link

    That, and to this day GPUs are still exposed chips, not protected by a heat spreader like CPUs are. Reminds me of the horrors of installing heatsinks on Durons/Thunderbirds when they were still exposed chips. I am not willing to risk chipping/destroying the GPU when I need to pull the OEM heatsink off and put the aftermarket one on.
  • Will Robinson - Sunday, November 10, 2013 - link

    And my 4850 never did that, so there goes your theory on that one.
  • Mr Perfect - Monday, November 11, 2013 - link

    Noise is as important to me as it seems to be to Ryan. The last time I used a video card with a reference cooler on it was... my Voodoo 5. Every card since then has either promptly had a quieter aftermarket cooler added to it, or came with one pre-fitted.

    It will be interesting to see what the customized 290s look like.
  • Zumb - Tuesday, November 12, 2013 - link

    I don't think that the cooler is bad because of a money issue.
    The great thing about the stock cooler is that it pushes the hot air outside the box, at the back of the case. Aftermarket coolers are way better, but they don't do that, so they are extremely dependent on the case's cooling performance.
    Most people don't see that, because the people testing cards are people who care and already have good case cooling, so aftermarket coolers always come out ahead. But if the reference cooler were more dependent on case cooling, I am pretty sure AMD would get a lot of burned cards back under warranty, installed in poorly cooled cases. And standard low-budget cases are worse and worse every year, since everybody likes those tiny shiny micro-ATX cases.
  • chrnochime - Wednesday, November 13, 2013 - link

    I know what you are saying. I still have the reference XFX 4870 that I got for a good price new back in 2008 sitting in my old PC, and I loathed playing games back then because I'd need to turn the fan speed up to keep it from overheating. That's the main reason why I went with a 770 over another loud AMD card this time around. I'd rather spend 30-40 dollars extra for a card that isn't going to drive me nuts every time the computer is running.
  • abhishek_takin - Saturday, November 9, 2013 - link

    It's time AMD learned a few things. Everyone wants an all-round card. AMD is giving performance for a great price by putting cooling, noise, and power on the line. The GTX 480 was one of the hottest cards and the HD 6990 one of the noisiest cards ever produced. NVIDIA learned from that and made its cards less hot and noisy in the following generations. But AMD is still pushing its issues onto board partners and not releasing good reference models. NVIDIA card prices are high because the cards are well equipped and built properly. If they charge more for that then it's understandable, as they score in all departments.

    I own an AMD HD 7970 card but don't like what is happening with the R9 series cards.
  • 1Angelreloaded - Saturday, November 9, 2013 - link

    No, NVidia does price gouge along with its board partners. Simply put, the chips in most cases are exactly the same, with disabled units or lower frequency clocks on the RAM; essentially you pay for bin-sorted parts. And all around, NVidia and ATI use sub-par materials like nickel-plated aluminum heatsinks instead of copper throughout, which in the case of any card over $500 is flat out a disgrace. Instead of pushing the industry forward they stifle it over generations that offer marginal increases, less than 10%, in order to reap higher profit margins; 680 to 770 is a great example of this.
  • The Von Matrices - Sunday, November 10, 2013 - link

    You are wrong on two counts. First, it is by no means cheap to design a new chip; think tens of millions of dollars. To design a new architecture, you need even more than that. Second, you forget that silicon process advances have slowed significantly. There's only so much you can do when you can't move to a smaller node, which is what has been responsible for the yearly performance improvements in the past.

    Sure, NVidia could design a new chip to fit between GK110 and GK104, but think of the economics. They would be selling the chip for $300-$400, a very small portion of the overall graphics market. It would have a 20-25% performance improvement at most over existing products, significantly discouraging upgrades. And it would be obsolete in less than a year when the new 20nm process came out, which would significantly limit the ability to recoup cost. Taking a smaller margin on GK110 (aka GTX 780), performing extreme binning on GK104 (aka GTX 770), or just completely omitting the $400 target is a much safer proposition than designing a new chip to fit that market.

    The only reason AMD designed Hawaii is because 20nm was delayed and the company couldn't push Tahiti to any higher performance level. If 20nm was coming in January, we would never see the Hawaii GPU because AMD could never sell enough chips to make it profitable before it was obsolete.
  • Yojimbo - Sunday, November 10, 2013 - link

    Noun 1. price gouging - pricing above the market price when no alternative retailer is available

    How is NVidia price gouging? You're setting the market price in your mind, but you don't have the power to make your fantasy a reality, and therefore NVidia price gouges? People go into an enterprise to turn a profit, and throughout their history NVidia has usually been profitable. But their profits have not been excessive, nor have they been based primarily on market relationships (e.g., Microsoft using Windows to foist Office). They have been profitable because they have been well-managed, and have produced successful, competitive products. People who complain "Oh, XY graphics card is too expensive. The companies are price gouging" sound like children who want, want, want but don't understand, or care to understand, how things work, but simply try to use a bully pulpit to get their way. Some people want the best graphics card that is out there, and are willing to pay a premium for it. The demand is there. When NVidia, or any other company, is in the position to tap into that demand, they will do so, because it's a high-margin segment.

    As far as your assertion that the chips are the same, and NVidia and AMD are (my paraphrase, you used "price gouge") "gaming" the market, one must understand the structure of the industry. First of all, realize there is demand for graphics processors at various wide-ranging price points. A GPU company ideally would like to tap into all parts of said market. Next, realize that a large part of chip designers' costs are tied up in the design and validation of an architecture, which creates an economy of scale. Designing separate architectures, one each for a $100 card, a $150 card, a $200 card, etc., is terribly inefficient, and such cards would be prohibitively expensive and uncompetitive. At the same time, if a company designed one architecture and released one product at, say, $200, it would probably have a very good product at that price point, assuming everyone was willing to pay $200. But any competitor able to provide value at $150 or less, or $250 or more, will capture most of the market, and our company would again lose out on the economy of scale, because it spent all that money designing a chip for a very narrow market segment; the price of its $200 chip would have to be increased just to try to break even on the R&D expenses, because its competitor could spread those costs over a wider volume output.

    So what could be a solution? A solution is to sort of "zip" or "tar" the segments together in the design process, and then decouple them again when delivering individual products, assigning design costs unequally across the segments. For instance, Intel might see two markets, one which uses hyperthreading, and one which does not. There is a certain cost associated with developing hyperthreading, and they are only going to design one architecture. So they design that architecture with hyperthreading, then enable it for the one segment and don't enable it for the other. I think that in this case one could think of the cost structure for the hyperthreading-enabled chip as including the hyperthreading design cost as part of its total cost, while the cost structure of the non-enabled chip could leave the hyperthreading design cost out.

    Another way to hit a range of market segments is by chip binning. There is a distribution of the quality of chips produced by a particular manufacturing process. Chips that perform at X quality or above are placed in the highest bin, and account for Y% of the total production. They are scarce because to get Z chips with at least X quality, one must make at least Z/(Y%) total chips. Intuitively, these X-quality chips therefore cost more to make.

    As far as what you say about "sub-par alloys," your statement that it is "flat out a disgrace" to have aluminum heatsinks is based on what knowledge of the cost and benefit of various materials for heatsinks? I think your accusation that NVidia and AMD are not truly competing, but trying to hold back innovation to reap higher profits is unfounded and ridiculous. The reason generation-over-generation performance increases have lessened is because the GPU industry is maturing, and innovation gets increasingly difficult. In addition, semiconductor processes are not currently scaling heat-wise or cost-wise like they did years ago.
