Power Consumption

The nature of reporting processor power consumption has become, in part, a dystopian nightmare. Historically, the peak power consumption of a processor, as purchased, was given by its Thermal Design Power (TDP, or PL1). For many markets, such as embedded processors, that TDP value still signifies the peak power consumption. For the processors we test at AnandTech, whether desktop, notebook, or enterprise, this is not always the case.

Modern high-performance processors implement a feature called Turbo. This allows a processor, usually for a limited time, to go beyond its rated frequency. Exactly how far the processor goes depends on a few factors, such as the Turbo Power Limit (PL2), whether the peak frequency is hard-coded, the thermals, and the power delivery. Turbo can sometimes be very aggressive, allowing power values up to 2.5x the rated TDP.

AMD and Intel have different definitions for TDP, but broadly speaking the two are applied the same way. The difference comes down to turbo modes, turbo limits, turbo budgets, and how the processors manage that power balance. These topics are 10,000-12,000 word articles in their own right, and we’ve got a few articles worth reading on the topic.

In simple terms, processor manufacturers only ever guarantee two values, which are tied together: when all cores are running at base frequency, the processor should be running at or below its TDP rating. All turbo modes and power states above that are not covered by warranty. Intel kind of screwed this up with the Tiger Lake launch in September 2020 by refusing to define a TDP rating for its new processors, instead going for a range. Obfuscation like this is frustrating for press and end-users alike.
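The PL1/PL2 turbo budget balance described above can be sketched in a few lines of code. This is a simplified illustration only, not any vendor's actual algorithm: the 65 W / 162 W limits and the 28-second time constant are hypothetical example values, and the exponentially weighted moving average is just one common way the running power budget is modeled.

```python
# Hypothetical sketch of PL1/PL2/tau turbo power limiting.
# All numeric values are illustrative, not the spec for any real CPU.

def simulate_turbo(demand_w, pl1=65.0, pl2=162.0, tau=28.0, dt=1.0):
    """Clamp requested package power each step: never above PL2, and
    fall back to PL1 once the running average (EWMA with time constant
    tau) of delivered power reaches PL1."""
    avg = 0.0
    alpha = dt / tau
    delivered = []
    for want in demand_w:
        allowed = pl2 if avg < pl1 else pl1
        power = min(want, allowed)
        avg += alpha * (power - avg)   # exponentially weighted moving average
        delivered.append(power)
    return delivered

# A sustained 200 W demand: the chip bursts at PL2, then settles at PL1.
trace = simulate_turbo([200.0] * 120)
print(trace[0])    # starts at PL2 (162.0 W)
print(trace[-1])   # settles at PL1 (65.0 W)
```

Under a sustained heavy load the sketch starts at PL2 and, once the moving average reaches PL1, drops back to PL1 and stays there, which is the burst-versus-sustained behavior described above.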

However, for our tests in this review, we measure the power consumption of the processor in a variety of different scenarios. These include a full AVX2 workflow, real-world image-model construction, and others as appropriate. These tests are run as comparative workloads. We also note the peak power recorded in any of our tests.

First up is our image-model construction workload, using our Agisoft Photoscan benchmark. This test has a number of different areas that involve single-threaded, multi-threaded, or memory-limited algorithms.

Each of our three processors approaches a different steady-state power level in the different areas of the benchmark:

  • The 8-core runs around 65 W in the first stage, and closer to 48 W in the second stage.
  • The 6-core runs around 51 W in the first stage, and closer to 38 W in the second stage.
  • The 4-core runs around 36 W in the first stage, and closer to 30 W in the second stage.

The 14-15 W difference between each of the processors in the first stage goes some way to suggesting that we're consuming ~7 W per core in this part of the test, which is strictly multi-threaded. However, when the benchmark moves into more variably threaded loading, all three CPUs sit well below their TDP levels.
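As a quick sanity check on that back-of-the-envelope figure, a minimal sketch using the first-stage numbers from the list above:

```python
# Steady-state package power in the first (fully multi-threaded) stage,
# taken from the figures quoted above: core count -> approximate watts.
stage1_w = {8: 65, 6: 51, 4: 36}

# Each step down in the lineup removes two cores; the power delta per
# core gives a rough per-core cost for this part of the workload.
for hi, lo in [(8, 6), (6, 4)]:
    delta = stage1_w[hi] - stage1_w[lo]
    per_core = delta / (hi - lo)
    print(f"{lo} -> {hi} cores: +{delta} W, ~{per_core:.1f} W per core")
```

Both deltas land at roughly 7 W per core, consistent with the estimate in the text.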

The second test is y-Cruncher, which is our AVX2 workload. This also has some memory requirements, which can lead to periodic cycling on systems with lower memory bandwidth per core.

The y-Cruncher test is a little different, as here we're mostly concerned with peaks. All three CPUs have a TDP rating of 65 W; however, the 8-core breaches 80 W, the 6-core sits around 72 W, and the only processor below that TDP value is the quad-core Ryzen 3.

For absolute peak power across all of our tests:

[Chart: Peak Power]

For absolute instantaneous peak power, each of the Ryzen 4000 APUs does what was expected, with the Ryzen 7 hitting the socket limit for 65 W processors.


104 Comments


  • MDD1963 - Friday, December 18, 2020 - link

    So, in a nutshell, this is still just a better CPU, but still crippled with just (barely) over GT1030-level integrated graphics...
  • Assimilator87 - Friday, December 18, 2020 - link

    To everyone complaining about the benchmark resolutions/settings: Just double the results of the 1080p benchmarks and that's the ballpark 720p performance. I'm sure 1080p max was used in order to make sure there was a complete GPU bottleneck. That's the only way to compare the GPUs in relation to each other. Once you have that scale, you can extrapolate to other resolutions.

    Ian, what happened to the Subor Z+ review? That would be such an incredibly interesting comparison point.
  • McFly323 - Friday, December 18, 2020 - link

    The best APU in the world is the PS5's AMD APU, but AMD will never release that for PC buyers because it would murder the PC components market.
  • Oxford Guy - Friday, December 18, 2020 - link

    Since these are OEM-only I wouldn't expect to see them married to high-performance RAM.

    Many are looking at this lineup from the point of view of the build-a-gaming-PC-myself enthusiast sector, but one can also look at it from the point of view of "How much does slow OEM RAM hobble these APUs?" Since OEMs often tout the performance of products that don't perform as well as they could or should (a thing helped along by companies that sell stealth watered-down versions of their products, sometimes with the same name attached), it's useful to have information out there about how they will perform with baseline RAM.

    However, given that 3200 has been cheap for a long time (I got 16 GB for $90 in 2016 as I recall) it would be good to always have the tests show both the slow RAM and something affordable like 3200 that offers quite a bit more performance.

    One problem that a company like AMD faces if making CPUs like this is the possibility of them being used with slow RAM. The way around that is to engineer the CPUs to fail to run with slow RAM.
  • Oxford Guy - Friday, December 18, 2020 - link

    "The way around that is to engineer the CPUs to fail to run with slow RAM."

    So, not doing that means the company is satisfied with the parts being hobbled by slow RAM, not just the OEM.
  • vol.2 - Saturday, December 19, 2020 - link

    If they make iGPU performance "deliver," it will eat into the sales of dGPUs.
  • Valantar - Sunday, December 20, 2020 - link

    It's great to see these reviewed! I bought a 4650G off a German ebay store a couple of months back, and I couldn't be happier with it for my HTPC. Sips power (I've never seen it exceed 110W at the wall), and performs admirably. With my Crucial Sport LT 3200C16 running happily at 3800C16 (1:1:1) (with near zero effort thanks to 1usmus' dram calc) and the iGPU at 2100 it delivers 60-75fps in Rocket League at 1080p Quality preset, which is perfectly enjoyable. I understand AT's choice of running JEDEC max spec DRAM, but for these chips in particular I think DRAM OC testing would be a good idea.
  • artifex - Monday, December 21, 2020 - link

    I feel let down by AMD that they won't officially put their better APUs out in the retail chain, when most AM4 boards out there have video connectors and associated hardware ready to support them. It's like a promise that can't be fulfilled.
  • tkSteveFOX - Monday, December 21, 2020 - link

    The Vega architecture and the lack of high-speed DDR4 RAM make AMD APUs just not worth it when you can get a 2600X, pair it with an RX 5500, GTX 1650, or even an older 1050 Ti, and get 30-60% more gaming performance.
    With RDNA integrated, AMD could have blown away any Intel iGPU and lower-end Nvidia solutions.
    These 4th-gen AMD desktop APUs are simply not worth it.
  • Brane2 - Wednesday, December 23, 2020 - link

    Isn't this a bit late now that the 5xxx series is about to come out?
