

Original Link: https://www.anandtech.com/show/13660/amd-athlon-200ge-vs-intel-pentium-gold-g5400-review



In the course of our reviews, when we get a chance to get hands on with random processors, we run our test suite and add the data to our database. Sometimes that doesn’t materialize directly into a review, but at least we have the data. Two very similar CPUs have come across my desk recently: AMD’s dual core Athlon 200GE, and Intel’s Pentium G5400. Both chips round to the $60 mark, have some form of integrated graphics, and are aimed at budget systems.

This is going to be fun

One of the perennial issues with modern technology review cycles is that there’s a lot of focus on the high-end parts. These are the ones that the manufacturers sample: they have the highest margins, but they are also the halo products. If they sit atop the standings, the hope is that the influence trickles down into the rest of the product range, typically the high-volume parts. There is also the added benefit that more people want to hear about the best of the best. It’s one reason why there are so many Ferrari and Aston Martin ‘WOW’ pieces in written and video media.

Normally this would make sampling very difficult. If we were reviewing cars, anyway. The two chips in today’s analysis, the Intel Pentium Gold G5400 and the AMD Athlon 200GE, cost around $60 apiece, which I forked out for personally as I was never expecting to be sampled. (AMD asked if I wanted a 200GE sample two days after my retail unit arrived, go figure. I sent that on to Gavin for his 7-year-old’s new gaming system.)

AMD vs Intel at ~$60
                     AMD Athlon 200GE           Intel Pentium Gold G5400
Cores / Threads      2 / 4                      2 / 4
Microarchitecture    Zen                        Coffee Lake
Motherboards         X470, X370, B450,          Z390, Z370, Q370,
                     B350, A320, A300           H370, B360, H310
CPU Frequency        3.2 GHz                    3.7 GHz
L2 Cache             512 KB/core                256 KB/core
L3 Cache             2 MB/core                  2 MB/core
Integrated Graphics  Vega 3 (192 SPs)           UHD 610 (12 EUs, 96 ALUs)
DDR4 Support         DDR4-2933                  DDR4-2666
GPU Frequency        Up to 1000 MHz             350-1050 MHz
TDP                  35 W                       54 W (2-core die) /
                                                58 W (4-core die)*
Price                $55 (SRP)                  $64 (1k/u)
* Intel harvests both 2+2 and 4+2 dies to make G5400 parts. It's impossible to know which one you have without removing the lid and measuring the die area.

When we stack the two processors up side by side, it gets interesting. Both are dual-core, quad-thread parts. The Intel processor has the frequency advantage, running at 3.7 GHz compared to AMD’s 3.2 GHz, but AMD has beefier Vega 3 integrated graphics compared to the UHD 610 (GT1) graphics of the Intel chip. One sore point might be the TDP, where the AMD chip has a rating of 35 W and the Intel chip is rated at up to 58 W; however, as we’ll see in the review, neither of them comes close to those values.

Tackling the budget end of the market is fun. I’ve been a long-time advocate for budget builders to build a system piece-by-piece, getting one high-end part at a time rather than smearing a budget across several average parts at once. Under this philosophy, these processors could very well be the start of one of those builds, only costing an average of $60 MSRP. Note that under this philosophy, you might end up with that big graphics card before a processor that can power it. We’re covering those benchmarks as well.

Before you click further, place your bets on who you think will win: the Intel Pentium Gold G5400, or the AMD Athlon 200GE?

Latest News: While neither processor is officially overclockable, since we tested for this article it was recently reported that MSI motherboards with certain BIOS versions will allow users to overclock the 200GE to ~3.9 GHz. I've asked Gavin to contribute, and he managed a nice 3.9 GHz over the 3.2 GHz base clock. Head over to page 21 for the details.

Pages In This Review

  1. Analysis and Competition
  2. Test Bed and Setup
  3. 2018 and 2019 Benchmark Suite
  4. CPU Performance: System Tests
  5. CPU Performance: Rendering Tests
  6. CPU Performance: Office Tests
  7. CPU Performance: Encoding Tests
  8. CPU Performance: Legacy Tests
  9. Gaming: Integrated Graphics
  10. Gaming: World of Tanks enCore
  11. Gaming: Final Fantasy XV
  12. Gaming: Shadow of War
  13. Gaming: Civilization 6
  14. Gaming: Ashes Classic
  15. Gaming: Strange Brigade
  16. Gaming: Grand Theft Auto V
  17. Gaming: Far Cry 5
  18. Gaming: Shadow of the Tomb Raider
  19. Gaming: F1 2018
  20. Power Consumption
  21. Overclocking
  22. Conclusions and Final Words


Test Bed and Setup

As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency, typically at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, that faster memory is available at a similar price, or that JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (XMP or otherwise), as they require interaction with the BIOS; most users will fall back on JEDEC-supported speeds. This includes home users as well as industry, who might want to shave off a cent or two from the cost, or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.

Test Setup

AMD APU (Athlon 200GE, R3 2200G, Ryzen 3 1200, Ryzen 3 1300X, A6-9500, A12-9800):
  Motherboard: ROG Crosshair VI Hero (MSI B350I Pro for IGP), BIOS P1.70
  Cooling: AMD Wraith RGB
  Memory: G.Skill SniperX 2x8GB DDR4-2933

Intel 8th Gen (i7-8086K, i7-8700K, i5-8600K):
  Motherboard: ASRock Z370 Gaming i7, BIOS P1.70
  Cooling: TRUE Copper
  Memory: Crucial Ballistix 4x8GB DDR4-2666

Intel 7th Gen (i7-7700K, i5-7600K):
  Motherboard: GIGABYTE X170 ECC Extreme, BIOS F21e
  Cooling: Silverstone* AR10-115XS
  Memory: G.Skill RipjawsV 2x16GB DDR4-2400

Intel 6th Gen (i7-6700K, i5-6600K):
  Motherboard: GIGABYTE X170 ECC Extreme, BIOS F21e
  Cooling: Silverstone* AR10-115XS
  Memory: G.Skill RipjawsV 2x16GB DDR4-2133

Intel HEDT (i9-7900X, i7-7820X, i7-7800X):
  Motherboard: ASRock X299 OC Formula, BIOS P1.40
  Cooling: TRUE Copper
  Memory: Crucial Ballistix 4x8GB DDR4-2666

AMD 2000 (R7 2700X, R5 2600X, R5 2500X):
  Motherboard: ASRock X370 Gaming K4, BIOS P4.80
  Cooling: Wraith Max*
  Memory: G.Skill SniperX 2x8GB DDR4-2933

GPU: Sapphire RX 460 2GB (CPU tests); MSI GTX 1080 Gaming 8G (gaming tests)
PSU: Corsair AX860i / Corsair AX1200i
SSD: Crucial MX200 1TB
OS: Windows 10 x64 RS3 1709, Spectre and Meltdown patched

*VRM supplemented with SST-FHP141-VF 173 CFM fans

Many thanks to...

We must thank the following companies for kindly providing hardware for our multiple test beds. Some of this hardware is not in this test bed specifically, but is used in other testing.

Hardware Providers
  • Sapphire RX 460 Nitro
  • MSI GTX 1080 Gaming X OC
  • Crucial MX200 + MX500 SSDs
  • Corsair AX860i + AX1200i PSUs
  • G.Skill RipjawsV, SniperX, FlareX
  • Crucial Ballistix DDR4
  • Silverstone Coolers
  • Silverstone Fans


Our New Testing Suite for 2018 and 2019

Spectre and Meltdown Hardened

In order to keep up to date with our testing, we have to update our software every so often to stay relevant. In our updates we typically implement the latest operating system, the latest patches, the latest software revisions, the newest graphics drivers, as well as add new tests or remove old ones. As regular readers will know, our CPU testing revolves around an automated test suite, and depending on how the newest software works, the suite either needs to change, be updated, have tests removed, or be rewritten completely. Last time we did a full re-write, it took the best part of a month, including regression testing (testing older processors).

One of the key elements of our testing update for 2018 (and 2019) is the fact that our scripts and systems are designed to be hardened against Spectre and Meltdown. This means making sure that all of our BIOSes are updated with the latest microcode, and that our operating systems carry all the relevant updates. In this case we are using Windows 10 x64 Enterprise 1709 with the April security updates, which enforce the ‘Smeltdown’ (our combined name) mitigations. Users might ask why we are not running Windows 10 x64 RS4, the latest major update – this is due to some new features which are giving uneven results. Rather than spend a few weeks learning to disable them, we’re going ahead with RS3, which has been widely used.

Our previous benchmark suite was split into several segments depending on how the test is usually perceived. Our new test suite follows similar lines, and we run the tests based on:

  • Power
  • Memory
  • Office
  • System
  • Render
  • Encoding
  • Web
  • Legacy
  • Integrated Gaming
  • CPU Gaming

Depending on the focus of the review, the order of these benchmarks might change, or some may be left out of the main review. All of our data will reside in our benchmark database, Bench, which has a new ‘CPU 2019’ section for all of our new tests.

Within each section, we will have the following tests:

Power

Our power tests consist of running a substantial workload for every thread in the system, and then probing the power registers on the chip to find out details such as core power, package power, DRAM power, IO power, and per-core power. This all depends on how much information is given by the manufacturer of the chip: sometimes a lot, sometimes not at all.

We are currently running POV-Ray as our main test for Power, as it seems to hit deep into the system and is very consistent. In order to limit the number of cores for power, we use an affinity mask driven from the command line.
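As a minimal sketch of that affinity approach: the POV-Ray binary name and render flags below are placeholders rather than the suite's actual invocation, but Windows' built-in `start /affinity` does accept a core bitmask in hexadecimal.

```python
def affinity_mask(n_threads: int) -> int:
    """Bitmask with one bit set per logical core to enable (cores 0..n-1)."""
    return (1 << n_threads) - 1

def povray_affinity_cmd(n_threads: int) -> str:
    # Hypothetical invocation: 'start /affinity' takes the mask in hex;
    # the POV-Ray executable name and scene file are placeholders.
    return f"start /affinity {affinity_mask(n_threads):X} povray.exe /RENDER benchmark.pov"

# e.g. affinity_mask(4) == 0xF, pinning the workload to cores 0-3
```

The same mask arithmetic scales to any core count, which is what makes the per-core power sweep easy to script.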

Memory

These tests involve disabling all turbo modes in the system, forcing it to run at base frequency, and then implementing both a memory latency checker (Intel’s Memory Latency Checker works equally well on both platforms) and AIDA64 to probe cache bandwidth.

Office

  • Chromium Compile: Windows VC++ Compile of Chrome 56 (same as 2017)
  • PCMark10: Primary data will be the overview results – subtest results will be in Bench
  • 3DMark Physics: We test every physics sub-test for Bench, and report the major ones (new)
  • GeekBench4: By request (new)
  • SYSmark 2018: Recently released by BAPCo, currently automating it into our suite (new, when feasible)

System

  • Application Load: Time to load GIMP 2.10.4 (new)
  • FCAT: Time to process a 90 second ROTR 1440p recording (same as 2017)
  • 3D Particle Movement: Particle distribution test (same as 2017) – we also have AVX2 and AVX512 versions of this, which may be added later
  • Dolphin 5.0: Console emulation test (same as 2017)
  • DigiCortex: Sea Slug Brain simulation (same as 2017)
  • y-Cruncher v0.7.6: Pi calculation with optimized instruction sets for new CPUs (new)
  • Agisoft Photoscan 1.3.3: 2D image to 3D modelling tool (updated)

Render

  • Corona 1.3: Performance renderer for 3dsMax, Cinema4D (same as 2017)
  • Blender 2.79b: Render of bmw27 on CPU (updated to 2.79b)
  • LuxMark v3.1 C++ and OpenCL: Test of different rendering code paths (same as 2017)
  • POV-Ray 3.7.1: Built-in benchmark (updated)
  • CineBench R15: Older Cinema4D test, will likely remain in Bench (same as 2017)

Encoding

  • 7-zip 1805: Built-in benchmark (updated to v1805)
  • WinRAR 5.60b3: Compression test of directory with video and web files (updated to 5.60b3)
  • AES Encryption: In-memory AES performance. Slightly older test. (same as 2017)
  • Handbrake 1.1.0: Logitech C920 1080p60 input file, transcoded into three formats for streaming/storage:
    • 720p60, x264, 6000 kbps CBR, Fast, High Profile
    • 1080p60, x264, 3500 kbps CBR, Faster, Main Profile
    • 1080p60, HEVC, 3500 kbps VBR, Fast, 2-Pass Main Profile
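For illustration, the three targets above can be expressed as HandBrakeCLI invocations. The flags used here (`-e`, `-b`, `-r`, `-l`, `--encoder-preset`, `--encoder-profile`, `--two-pass`) are real HandBrakeCLI options, but the exact command lines our suite runs are not published, so treat this as a hedged sketch; note also that HandBrakeCLI's `-b` sets an average bitrate rather than strict CBR.

```python
# Illustrative approximation of the three transcode targets; the real
# suite's invocation may differ. HandBrake's HEVC encoder is 'x265'.
TARGETS = [
    dict(height=720,  fps=60, encoder="x264", kbps=6000, preset="fast",   profile="high", two_pass=False),
    dict(height=1080, fps=60, encoder="x264", kbps=3500, preset="faster", profile="main", two_pass=False),
    dict(height=1080, fps=60, encoder="x265", kbps=3500, preset="fast",   profile="main", two_pass=True),
]

def handbrake_cmd(src: str, dst: str, t: dict) -> str:
    parts = [
        "HandBrakeCLI", "-i", src, "-o", dst,
        "-e", t["encoder"],            # video encoder
        "-b", str(t["kbps"]),          # average bitrate, kbps
        "-r", str(t["fps"]),           # output framerate
        "-l", str(t["height"]),        # output height
        "--encoder-preset", t["preset"],
        "--encoder-profile", t["profile"],
    ]
    if t["two_pass"]:
        parts.append("--two-pass")
    return " ".join(parts)
```

Running all three against the same C920 source file gives the per-format times we chart.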

Web

  • WebXPRT3: The latest WebXPRT test (updated)
  • WebXPRT15: Similar to 3, but slightly older. (same as 2017)
  • Speedometer2: Javascript Framework test (new)
  • Google Octane 2.0: Deprecated but popular web test (same as 2017)
  • Mozilla Kraken 1.1: Deprecated but popular web test (same as 2017)

Legacy (same as 2017)

  • 3DPM v1: Older version of 3DPM, very naïve code
  • x264 HD 3.0: Older transcode benchmark
  • Cinebench R11.5 and R10: Representative of different coding methodologies

Linux (when feasible)

When in full swing, we wish to return to running LinuxBench 1.0. It was in our 2016 suite, but was dropped in 2017 as it added an extra layer of complication to our automation. By popular request, we are going to run it again.

Integrated and CPU Gaming

We have recently automated around a dozen games at four different performance levels. A good number of games will have frame time data, however due to automation complications, some will not. The idea is that we get a good overview of a number of different genres and engines for testing. So far we have the following games automated:

AnandTech CPU Gaming 2019 Game List
Game                        Genre                Release   API           IGP             Low              Medium           High
World of Tanks enCore       Driving / Action     Feb 2018  DX11          768p Minimum    1080p Medium     1080p Ultra      4K Ultra
Final Fantasy XV            JRPG                 Mar 2018  DX11          720p Standard   1080p Standard   4K Standard      8K Standard
Shadow of War               Action / RPG         Sep 2017  DX11          720p Ultra      1080p Ultra      4K High          8K High
F1 2018                     Racing               Aug 2018  DX11          720p Low        1080p Med        4K High          4K Ultra
Civilization VI             RTS                  Oct 2016  DX12          1080p Ultra     4K Ultra         8K Ultra         16K Low
Car Mechanic Simulator '18  Simulation / Racing  Jul 2017  DX11          720p Low        1080p Medium     1440p High       4K Ultra
Ashes: Classic              RTS                  Mar 2016  DX12          720p Standard   1080p Standard   1440p Standard   4K Standard
Strange Brigade*            FPS                  Aug 2018  DX12 / Vulkan 720p Low        1080p Medium     1440p High       4K Ultra
Shadow of the Tomb Raider   Action               Sep 2018  DX12          720p Low        1080p Medium     1440p High       4K Highest
Grand Theft Auto V          Open World           Apr 2015  DX11          720p Low        1080p High       1440p Very High  4K Ultra
Far Cry 5                   FPS                  Mar 2018  DX11          720p Low        1080p Normal     1440p High       4K Ultra
*Strange Brigade is run in DX12 and Vulkan modes

For our CPU Gaming tests, we will be running on an NVIDIA GTX 1080. For the CPU benchmarks, we use an RX460 as we now have several units for concurrent testing.

In previous years we tested multiple GPUs on a small number of games – this time around, due to a Twitter poll I did which turned out exactly 50:50, we are doing it the other way around: more games, fewer GPUs.

Scale Up vs Scale Out: Benefits of Automation

One comment we get every now and again is that automation isn’t the best way of testing – there’s a higher barrier to entry, and it limits the tests that can be done. From our perspective, despite taking a little while to program properly (and get it right), automation means we can do several things:

  1. Guarantee consistent breaks between tests for cooldown to occur, rather than variable cooldown times based on ‘if I’m looking at the screen’
  2. It allows us to simultaneously test several systems at once. I currently run five systems in my office (limited by the number of 4K monitors, and space) which means we can process more hardware at the same time
  3. We can leave tests to run overnight, very useful for a deadline
  4. With a good enough script, tests can be added very easily

Our benchmark suite collates all the results and, as the tests are running, writes the data out to a central storage platform, which I can probe mid-run to update numbers as they come through. This also acts as a sanity check in case any of the data looks abnormal.

We do have one major limitation, and that rests on the side of our gaming tests. We are running multiple tests through one Steam account, some of which (like GTA) are online only. As Steam only lets one system play on an account at once, our gaming script probes Steam’s own APIs to determine if we are ‘online’ or not, and to run offline tests until the account is free to be logged in on that system. Depending on the number of games we test that absolutely require online mode, it can be a bit of a bottleneck.
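The gating decision itself is simple. As a sketch of that logic: the field convention below follows the public Steam Web API's GetPlayerSummaries response, which includes a `gameid` key only while an account is actively in a game; the checks our actual script performs are not published, so the function names and shape here are ours.

```python
def account_busy(player: dict) -> bool:
    """True if the shared Steam account is in a game on some machine.

    `player` is shaped like one entry from a GetPlayerSummaries
    (Steam Web API) response, where 'gameid' is present only while
    the account is actively in a game.
    """
    return "gameid" in player

def can_run_online_test(player: dict) -> bool:
    # Online-only titles (e.g. GTA V) run only when no other rig holds the login.
    return not account_busy(player)
```

Each test rig polls this check and defers its online-only titles until the account frees up, running offline tests in the meantime.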

Benchmark Suite Updates

As always, we do take requests. It helps us understand the workloads that everyone is running and plan accordingly.

A side note on software packages: we have had requests for tests on software such as ANSYS, or other professional grade software. The downside of testing this software is licensing and scale. Most of these companies do not particularly care about us running tests, and state it’s not part of their goals. Others, like Agisoft, are more than willing to help. If you are involved in these software packages, the best way to see us benchmark them is to reach out. We have special versions of software for some of our tests, and if we can get something that works, and relevant to the audience, then we shouldn’t have too much difficulty adding it to the suite.



CPU Performance: System Tests

Our System Test section focuses significantly on real-world testing, user experience, with a slight nod to throughput. In this section we cover application loading time, image processing, simple scientific physics, emulation, neural simulation, optimized compute, and 3D model development, with a combination of readily available and custom software. For some of these tests, the bigger suites such as PCMark do cover them (we publish those values in our office section), although multiple perspectives is always beneficial. In all our tests we will explain in-depth what is being tested, and how we are testing.

All of our benchmark results can also be found in our benchmark engine, Bench.

Application Load: GIMP 2.10.4

One of the most important aspects of user experience and workflow is how fast a system responds. A good test of this is to see how long it takes for an application to load. Most applications these days, when on an SSD, load fairly instantly, however some office tools require asset pre-loading before being available. Most operating systems employ caching as well, so when certain software is loaded repeatedly (web browsers, office tools), it can be initialized much more quickly.

In our last suite, we tested how long it took to load a large PDF in Adobe Acrobat. Unfortunately this test was a nightmare to program, and didn’t transfer over to Win10 RS3 easily. In the meantime we discovered an application that can automate this kind of test, and we put it up against GIMP, a popular free and open-source photo editing tool, and the major alternative to Adobe Photoshop. We set it to load a large 50MB design template, and perform the load 10 times with 10 seconds in between each. Because caching makes the first 3-5 results slower than the rest, and the time to cache can be inconsistent, we take the average of the last five results to show CPU processing on cached loading.
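The averaging step is straightforward; as a sketch of how the cached-load figure falls out of the ten recorded runs (the function name is ours, not AppTimer's):

```python
def cached_load_time(load_times: list[float], keep_last: int = 5) -> float:
    """Average of the final `keep_last` of the recorded loads, discarding
    the early runs where the OS file cache is still warming up."""
    assert len(load_times) >= keep_last
    return sum(load_times[-keep_last:]) / keep_last
```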

AppTimer: GIMP 2.10.4

As a single threaded test, application loading is a key part of the user experience. Unfortunately the AMD 200GE is 21% slower in this case.

FCAT: Image Processing

The FCAT software was developed to help detect microstuttering, dropped frames, and runt frames in graphics benchmarks when two accelerators were paired together to render a scene. Because of game engines and graphics drivers, not all GPU combinations performed ideally, which led to this software applying color bars to each rendered frame and recording the raw output dynamically with a video capture device.

The FCAT software takes that recorded video, which in our case is 90 seconds of a 1440p run of Rise of the Tomb Raider, and processes that color data into frame time data so the system can plot an ‘observed’ frame rate, and correlate that to the power consumption of the accelerators. This test, by virtue of how quickly it was put together, is single threaded. We run the process and report the time to completion.
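Conceptually, the conversion from extracted frame times to an ‘observed’ frame rate is just the reciprocal of the mean frame time; a sketch (FCAT’s real processing involves more cleanup, such as runt-frame detection, and the function name here is ours):

```python
def observed_fps(frame_times_ms: list[float]) -> float:
    """Frame rate implied by a list of per-frame render times in milliseconds."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
```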

FCAT Processing ROTR 1440p GTX980Ti Data

In a similar light, the single threaded nature of this test works against the AMD processor, which is 18.6% slower than the Intel competition.

3D Particle Movement v2.1: Brownian Motion

Our 3DPM test is a custom-built benchmark designed to simulate six different particle movement algorithms of points in a 3D space. The algorithms were developed as part of my PhD, and while they ultimately perform best on a GPU, they provide a good idea of how instruction streams are interpreted by different microarchitectures.

A key part of the algorithms is the random number generation – we use relatively fast generation which ends up implementing dependency chains in the code. The upgrade over the naïve first version of this code solved for false sharing in the caches, a major bottleneck. We are also looking at AVX2 and AVX512 versions of this benchmark for future reviews.

For this test, we run a stock particle set over the six algorithms for 20 seconds apiece, with 10 second pauses, and report the total rate of particle movement, in millions of operations (movements) per second. We have a non-AVX version and an AVX version, with the latter implementing AVX512 and AVX2 where possible.

3DPM v2.1 can be downloaded from our server: 3DPMv2.1.rar (13.0 MB)

3D Particle Movement v2.1

For pure unoptimized throughput, both processors are similar in the 3DPM test.

3D Particle Movement v2.1 (with AVX)

But if we crank on the tuned AVX code, the AMD 200GE scores a big win. On the Pentium, the different code path had almost zero effect, with less than a 10% increase in performance, but the 200GE went up by a good 60% by comparison.

Dolphin 5.0: Console Emulation

One of the popular requested tests in our suite is to do with console emulation. Being able to pick up a game from an older system and run it as expected depends on the overhead of the emulator: it takes a significantly more powerful x86 system to be able to accurately emulate an older non-x86 console, especially if code for that console was made to abuse certain physical bugs in the hardware.

For our test, we use the popular Dolphin emulation software, and run a compute project through it to determine how close to a standard console system our processors can emulate. In this test, a Nintendo Wii would take around 1050 seconds.

The latest version of Dolphin can be downloaded from https://dolphin-emu.org/

Dolphin 5.0 Render Test

Our emulation test has always been a strong performer for Intel CPUs.

DigiCortex 1.20: Sea Slug Brain Simulation

This benchmark was originally designed for simulation and visualization of neuron and synapse activity, as is commonly found in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark which runs a 32k neuron / 1.8B synapse simulation, equivalent to a Sea Slug.

We report the results as the ability to simulate the data as a fraction of real-time, so anything above a ‘one’ is suitable for real-time work. Out of the two modes, a ‘non-firing’ mode which is DRAM heavy and a ‘firing’ mode which has CPU work, we choose the latter. Despite this, the benchmark is still affected by DRAM speed a fair amount.

DigiCortex can be downloaded from http://www.digicortex.net/
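The reported score is simply the ratio of simulated time to wall-clock time; a sketch (the function name is ours):

```python
def realtime_factor(simulated_seconds: float, wall_seconds: float) -> float:
    """DigiCortex-style score: >= 1.0 means the simulation keeps up with real time."""
    return simulated_seconds / wall_seconds
```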

DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

On this more memory limited test, the official supported frequency of the 200GE comes into play, and it scores 37% more than the Intel chip.

y-Cruncher v0.7.6: Microarchitecture Optimized Compute

I’ve known about y-Cruncher for a while, as a tool to help compute various mathematical constants, but it wasn’t until I began talking with its developer, Alex Yee, a researcher from NWU and now software optimization developer, that I realized that he has optimized the software like crazy to get the best performance. Naturally, any simulation that can take 20+ days can benefit from a 1% performance increase! Alex started y-cruncher as a high-school project, but it is now at a state where Alex is keeping it up to date to take advantage of the latest instruction sets before they are even made available in hardware.

For our test we run y-cruncher v0.7.6 through all the different optimized variants of the binary, including the AVX-512 optimized builds, calculating 250m digits of Pi in both single threaded and multi-threaded modes.

Users can download y-cruncher from Alex’s website: http://www.numberworld.org/y-cruncher/

y-Cruncher 0.7.6 Single Thread, 250m Digitsy-Cruncher 0.7.6 Multi-Thread, 250m Digits

For our second AVX optimized test, AMD again scores a win. It would appear that the Pentium chips from Intel do not seem to be implementing the performance uplifts we see with the Core models.

Agisoft Photoscan 1.3.3: 2D Image to 3D Model Conversion

One of the ISVs that we have worked with for a number of years is Agisoft, who develop software called PhotoScan that transforms a number of 2D images into a 3D model. This is an important tool in model development and archiving, and relies on a number of single threaded and multi-threaded algorithms to go from one side of the computation to the other.

In our test, we take v1.3.3 of the software with a good sized data set of 84 x 18 megapixel photos and push it through a reasonably fast variant of the algorithms, but is still more stringent than our 2017 test. We report the total time to complete the process.

Agisoft’s Photoscan website can be found here: http://www.agisoft.com/

Agisoft Photoscan 1.3.3, Complex Test

Photoscan is more of a mixed workload, with multithreaded and singlethreaded stages, giving a good sense of overall performance. The Intel processor slips in a win here, a few percent faster than the AMD.



CPU Performance: Rendering Tests

Rendering is often a key target for processor workloads, lending itself to a professional environment. It comes in different formats as well, from 3D rendering through rasterization, such as games, or by ray tracing, and invokes the ability of the software to manage meshes, textures, collisions, aliasing, physics (in animations), and discarding unnecessary work. Most renderers offer CPU code paths, while a few use GPUs and select environments use FPGAs or dedicated ASICs. For big studios however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a generated scene as a standard under its 1.3 software version. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer who gave us a command line version of the benchmark that does a direct output of results. Rather than reporting time, we report the average number of rays per second across six runs, as the performance scaling of a result per unit time is typically visually easier to understand.

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

Corona is an all-threaded test, and Intel’s frequency advantage comes into play here with a 15% win.

Blender 2.79b: 3D Creation Suite

A high profile rendering tool, Blender is open-source allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had narrowed our Blender test for our new suite, however their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - a standard ‘bmw27’ scene in CPU only mode, and measure the time to complete the render.

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

Blender also uses a mixed instruction workload, though not to the extent that our 3DPM and y-cruncher workloads do. But as with those tests, we get an AMD win of 3.7%.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.


Taken from the Linux Version of LuxMark

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result in what is essentially an average ‘kilorays per second’.

LuxMark v3.1 C++LuxMark v3.1 OpenCL

Interestingly we see AMD take the win for the C++ code by 5%, but Intel gets ahead with the OpenCL code (running purely on the CPU) by 10%.

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, which was in a state of relative hibernation until AMD released its Zen processors, to which suddenly both Intel and AMD were submitting code to the main branch of the open source project. For our test, we use the built-in benchmark for all-cores, called from the command line.

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark

As another throughput test, here AMD is behind by 20%.




CPU Performance: Office Tests

The Office test suite is designed around more industry-standard tests covering office workflows, system meetings, and some synthetics, but we also bundle compiler performance into this section. For users who have to evaluate hardware in general, these are usually the benchmarks that most consider.

All of our benchmark results can also be found in our benchmark engine, Bench.

PCMark 10: Industry Standard System Profiler

Futuremark, now known as UL, has developed benchmarks that have become industry standards for around two decades. The latest complete system test suite is PCMark 10, upgrading over PCMark 8 with updated tests and more OpenCL invested into use cases such as video streaming.

PCMark splits its scores into about 14 different areas, including application startup, web, spreadsheets, photo editing, rendering, video conferencing, and physics. We post all of these numbers in our benchmark database, Bench, however the key metric for the review is the overall score.

PCMark10 Extended Score

As an all-around test, one hopes that PCMark should be able to shine a light into this analysis. The end result is that PCMark says Intel should be ahead by around 8%.

Chromium Compile: Windows VC++ Compile of Chrome 56

A large number of AnandTech readers are software engineers who look at how the hardware they use performs. While compiling a Linux kernel is ‘standard’ for reviewers who often compile, our test is a little more varied – we are using the Windows instructions to compile Chrome, specifically a Chrome 56 build from March 2017, as that was when we built the test. Google quite handily gives instructions on how to compile under Windows, along with a 400k-file download for the repo.

In our test, using Google’s instructions, we use the MSVC compiler and ninja developer tools to manage the compile. As you may expect, the benchmark is variably threaded, with a mix of DRAM requirements that benefit from faster caches. Data procured in our test is the time taken for the compile, which we convert into compiles per day.
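The conversion from wall time to the reported rate is simple; a sketch:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def compiles_per_day(compile_seconds: float) -> float:
    """Turn a single compile's wall time into the daily throughput figure we report."""
    return SECONDS_PER_DAY / compile_seconds
```

Reporting a rate rather than a raw time makes "bigger is better" hold across the whole chart.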

Compile Chromium (Rate)

I like this Chromium test, given that it probes a lot of areas within a system. The Intel G5400 wins here again, scoring 6.9 compiles per day, compared to 5.9 for the AMD 200GE.

3DMark Physics: In-Game Physics Compute

Alongside PCMark is 3DMark, Futuremark’s (UL’s) gaming test suite. Each gaming test consists of one or two GPU-heavy scenes, along with a physics test that is indicative of when the test was written and the platform it is aimed at. The main overriding tests, in order of complexity, are Ice Storm, Cloud Gate, Sky Diver, Fire Strike, and Time Spy.

Some of the subtests offer variants, such as Ice Storm Unlimited, which is aimed at mobile platforms with an off-screen rendering, or Fire Strike Ultra which is aimed at high-end 4K systems with lots of the added features turned on. Time Spy also currently has an AVX-512 mode (which we may be using in the future).

For our tests, we report in Bench the results from every physics test, but for the sake of the review we keep it to the most demanding of each scene: Cloud Gate, Sky Diver, Fire Strike Ultra, and Time Spy.

3DMark Physics - Cloud Gate
3DMark Physics - Sky Diver
3DMark Physics - Fire Strike Ultra
3DMark Physics - Time Spy

In all the tests except Time Spy, Intel takes an 11-12% lead over AMD, while in Time Spy that increases to 20%.

GeekBench4: Synthetics

A common tool for cross-platform testing between mobile, PC, and Mac, GeekBench 4 is an ultimate exercise in synthetic testing across a range of algorithms looking for peak throughput. Tests include encryption, compression, fast Fourier transform, memory operations, n-body physics, matrix operations, histogram manipulation, and HTML parsing.

I’m including this test due to popular demand, although the results do come across as overly synthetic. Many users put a lot of weight behind the test because it is compiled across different platforms (albeit with different compilers).

We record the main subtest scores (Crypto, Integer, Floating Point, Memory) in our benchmark database, but for the review we post the overall single and multi-threaded results.

Geekbench 4 - ST Overall
Geekbench 4 - MT Overall



CPU Performance: Encoding Tests

With the rise of streaming, vlogs, and video content as a whole, encoding and transcoding tests are becoming ever more important. Not only do more home users and gamers need to convert video files into something more manageable for streaming or archival purposes, but the servers that manage the output also juggle data and log files with compression and decompression. Our encoding tasks focus on these important scenarios, with input from the community on the best implementation of real-world testing.

All of our benchmark results can also be found in our benchmark engine, Bench.

Handbrake 1.1.0: Streaming and Archival Video Transcoding

A popular open source tool, Handbrake is the anything-to-anything video conversion software that a number of people use as a reference point. The danger always lies in version numbers and optimization; for example, the latest versions of the software can take advantage of AVX-512 and OpenCL to accelerate certain types of transcoding and algorithms. The version we use here is a pure CPU play, with common transcoding variations.

We have split Handbrake up into several tests, using a Logitech C920 1080p60 native webcam recording (essentially a streamer recording) and converting it into two types of streaming formats and one for archival. The output settings used are:

  • 720p60 at 6000 kbps constant bit rate, fast setting, high profile
  • 1080p60 at 3500 kbps constant bit rate, faster setting, main profile
  • 1080p60 HEVC at 3500 kbps variable bit rate, fast setting, main profile

Handbrake 1.1.0 - 720p60 x264 6000 kbps Fast
Handbrake 1.1.0 - 1080p60 x264 3500 kbps Faster
Handbrake 1.1.0 - 1080p60 HEVC 3500 kbps Fast

Handbrake manages to use the Pentium’s resources and higher frequency better, scoring about a 15% win in every circumstance.

7-zip v1805: Popular Open-Source Encoding Engine

Out of our compression/decompression tool tests, 7-zip is the most requested and comes with a built-in benchmark. For our test suite, we’ve pulled the latest version of the software and we run the benchmark from the command line, reporting the compression, decompression, and a combined score.

It is notable in this benchmark that the latest multi-die processors have very bi-modal performance between compression and decompression, performing well in one and badly in the other. There are also discussions around how the Windows scheduler is placing each thread. As we get more results, it will be interesting to see how this plays out.

Please note, if you plan to share out the Compression graph, please include the Decompression one. Otherwise you’re only presenting half a picture.

7-Zip 1805 Compression
7-Zip 1805 Decompression
7-Zip 1805 Combined

7-zip is an interesting test, given that Intel usually wins Compression but AMD wins Decompression. The same occurs here, however Intel wins the first test by a lot and AMD wins the second test by a small margin. Overall win to Intel here.

WinRAR 5.60b3: Archiving Tool

My compression tool of choice is often WinRAR, having been one of the first tools a number of my generation used over two decades ago. The interface has not changed much, although the integration with Windows right click commands is always a plus. It has no in-built test, so we run a compression over a set directory containing over thirty 60-second video files and 2000 small web-based files at a normal compression rate.

WinRAR is variably threaded but also susceptible to caching, so in our test we run it 10 times and take the average of the last five runs, leaving the test purely to raw CPU compute performance.
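The warm-run averaging can be sketched as follows; `run_once` stands in for a single timed WinRAR pass in a hypothetical harness, and the canned timings are illustrative:

```python
from statistics import mean

def warm_average(run_once, runs: int = 10, keep_last: int = 5) -> float:
    """Run a benchmark repeatedly and average only the final runs,
    discarding early iterations while disk and caches warm up."""
    times = [run_once() for _ in range(runs)]
    return mean(times[-keep_last:])

# Canned timings: the first runs are slower while the cache warms up.
timings = iter([40.1, 36.5, 35.2, 34.9, 34.8, 34.7, 34.8, 34.7, 34.6, 34.7])
print(round(warm_average(lambda: next(timings)), 1))  # → 34.7
```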

WinRAR 5.60b3

As a mixed workload that involves memory, this result would seem hard to predict given the two CPUs being tested. It ended up a clear win for Intel, however: the extra core frequency of the G5400 mattered more than the faster main memory support of the 200GE.

AES Encryption: File Security

A number of platforms, particularly mobile devices, now offer encryption by default with file systems in order to protect the contents. Windows-based devices have these options as well, often provided by BitLocker or third-party software. In our AES encryption test, we use the discontinued TrueCrypt for its built-in benchmark, which tests several encryption algorithms directly in memory.

The data we take for this test is the combined AES encrypt/decrypt performance, measured in gigabytes per second. The software does use AES instructions on processors that offer hardware acceleration, though not AVX-512.

AES Encoding



CPU Performance: Legacy Tests

We have also included our legacy benchmarks in this section, representing a stack of older code for popular benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

3DPM v1: Naïve Code Variant of 3DPM v2.1

The first legacy test in the suite is the first version of our 3DPM benchmark. This is the ultimate naïve version of the code, as if it were written by a scientist with no knowledge of how computer hardware, compilers, or optimization work (which, in fact, it was at the start). This represents a large body of scientific simulation out in the wild, where getting the answer is more important than it being fast (getting a result in 4 days is acceptable if it’s correct, rather than sending someone away for a year to learn to code and getting the result in 5 minutes).

In this version, the only real optimization was in the compiler flags (-O2, -fp:fast), compiling it in release mode, and enabling OpenMP in the main compute loops. The loops were not configured for function size, and one of the key slowdowns is false sharing in the cache. It also has long dependency chains based on the random number generation, which leads to relatively poor performance on specific compute microarchitectures.

3DPM v1 can be downloaded with our 3DPM v2 code here: 3DPMv2.1.rar (13.0 MB)

3DPM v1 Single Threaded
3DPM v1 Multi-Threaded

x264 HD 3.0: Older Transcode Test

This transcoding test is super old, and was used by Anand back in the day of Pentium 4 and Athlon II processors. Here a standardized 720p video is transcoded with a two-pass conversion, with the benchmark showing the frames-per-second of each pass. This benchmark is single-threaded, and between some micro-architectures we seem to actually hit an instructions-per-clock wall.

x264 HD 3.0 Pass 1
x264 HD 3.0 Pass 2



Gaming: Integrated Graphics

Despite being the ultimate joke at any bring-your-own-computer event, gaming on integrated graphics can ultimately be as rewarding as the latest mega-rig that costs the same as a car. The desire for strong integrated graphics in various shapes and sizes has waxed and waned over the years, with Intel relying on its latest ‘Gen’ graphics architecture while AMD happily puts its Vega architecture into the market to swallow up all the low-end graphics card sales. With Intel poised to make an attack on graphics in the next few years, it will be interesting to see how the graphics market develops, especially integrated graphics.


An AMD APU Base Layout

The two processors on test today have very different attitudes towards integrated graphics. The AMD Athlon 200GE uses the latest Vega architecture, designed for high performance, even if AMD only uses 192 streaming processors in this design. Intel on the other hand is using its older Gen 9 graphics architecture, built for mobile processors, and is using a baseline GT1 configuration when most Intel desktop processors have GT2.

AMD vs Intel at ~$60
                     AMD Athlon 200GE          Intel Pentium Gold G5400
Cores / Threads      2 / 4                     2 / 4
Microarchitecture    Zen                       Coffee Lake
Motherboards         X470, X370, B450,         Z390, Z370, Q370,
                     B350, A320, A300          H370, B360, H310
CPU Frequency        3.2 GHz                   3.7 GHz
L2 Cache             512 KB/core               256 KB/core
L3 Cache             2 MB/core                 2 MB/core
Integrated Graphics  Vega 3 (192 SPs)          UHD 610 (12 EUs, 96 ALUs)
DDR4 Support         DDR4-2933                 DDR4-2666
GPU Frequency        Up to 1000 MHz            350-1050 MHz
TDP                  35 W                      54 W (2-core die) / 58 W (4-core die)*
Price                $55 (SRP)                 $64 (1k/u)
* Intel harvests both 2+2 and 4+2 dies to make G5400 parts. It's impossible to know which one you have without removing the lid and measuring the die area.

Intel does have a small ray of hope here: caches are important when it comes to integrated graphics, so while the 200GE has a bigger L2 cache (512 KB vs 256 KB per core) and faster main memory support (DDR4-2933 vs DDR4-2666), the AMD L3 cache is a victim cache whereas the Intel L3 cache is a fully inclusive cache that can pre-fetch data. It’s a slim chance, but Intel should take what it can.

For our integrated graphics testing, we take our ‘IGP’ category settings for each game and loop the benchmark round for five minutes apiece, taking as much data as we can from our automated setup.

IGP: World of Tanks, Average FPS

IGP: Final Fantasy XV, Average FPS

IGP: Shadow of War, Average FPS

IGP: Civilization 6, Average FPS

IGP: Car Mechanic Simulator 2018, Average FPS

IGP: Ashes Classic, Average FPS

IGP: Grand Theft Auto V, Average FPS

IGP: Far Cry 5, Average FPS

IGP: F1 2018, Average FPS

That was a whitewash. AMD’s smallest win was 48%, in both Ashes and F1 2018, while its best wins were in Far Cry 5 at 122.2% and Civilization 6 at 112.1%.



Gaming: World of Tanks enCore

Albeit different from most other commonly played MMO (massively multiplayer online) games, World of Tanks is set in the mid-20th century and allows players to take control of a range of military-based armored vehicles. World of Tanks (WoT) is developed and published by Wargaming, who are based in Belarus, with the game’s soundtrack primarily composed by Belarusian composer Sergey Khmelevsky. The game offers multiple entry points, including a free-to-play element, as well as allowing players to pay a fee to open up more features. One of the most interesting things about this tank-based MMO is that it achieved eSports status when it debuted at the World Cyber Games back in 2012.

World of Tanks enCore is a demo application for a new and unreleased graphics engine penned by the Wargaming development team. Over time the new core engine will be implemented into the full game, upgrading the game’s visuals with key elements such as improved water, flora, shadows, and lighting, as well as other objects such as buildings. The World of Tanks enCore demo app not only offers insight into the impending game engine changes, but allows users to check system performance to see if the new engine runs optimally on their system.

AnandTech CPU Gaming 2019 Game List
Game                   Genre             Release Date  API   IGP           Low           Med          High
World of Tanks enCore  Driving / Action  Feb 2018      DX11  768p Minimum  1080p Medium  1080p Ultra  4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

World of Tanks IGP Low Medium High
Average FPS
95th Percentile

At all resolutions, we see the G5400 have a sizeable lead (20%+) over the 200GE, in both average and minimums.



Gaming: Final Fantasy XV

Upon arriving to PC earlier this year, Final Fantasy XV: Windows Edition was given a graphical overhaul as it was ported over from console, a fruit of Square Enix's successful partnership with NVIDIA, with hardly any hint of the troubles during Final Fantasy XV's original production and development.

In preparation for the launch, Square Enix opted to release a standalone benchmark that they have since updated. Using the Final Fantasy XV standalone benchmark gives us a lengthy standardized sequence to record, although it should be noted that its heavy use of NVIDIA technology means that the Maximum setting has problems - it renders items off screen. To get around this, we use the standard preset which does not have these issues.

Square Enix has patched the benchmark with custom graphics settings and bugfixes to be much more accurate in profiling in-game performance and graphical options. For our testing, we run the standard benchmark with a FRAPs overlay, taking a 6 minute recording of the test.

AnandTech CPU Gaming 2019 Game List
Game              Genre  Release Date  API   IGP            Low             Med          High
Final Fantasy XV  JRPG   Mar 2018      DX11  720p Standard  1080p Standard  4K Standard  8K Standard

All of our benchmark results can also be found in our benchmark engine, Bench.

Final Fantasy XV IGP Low Medium High
Average FPS
95th Percentile

At 720p there's a sizeable difference between the two CPUs; however, this gets smaller as the settings are increased.



Gaming: Shadow of War

Next up is Middle-earth: Shadow of War, the sequel to Shadow of Mordor. Developed by Monolith, whose last hit was arguably F.E.A.R., Shadow of Mordor returned the studio to the spotlight with an innovative NPC rival generation and interaction system called the Nemesis System, a storyline based on J.R.R. Tolkien's legendarium, and a highly modified engine that originally powered F.E.A.R. in 2005.

Using the new LithTech Firebird engine, Shadow of War improves on the detail and complexity, and with free add-on high-resolution texture packs, offers itself as a good example of getting the most graphics out of an engine that may not be bleeding edge. Shadow of War also supports HDR (HDR10).

AnandTech CPU Gaming 2019 Game List
Game           Genre         Release Date  API   IGP         Low          Med      High
Shadow of War  Action / RPG  Sep 2017      DX11  720p Ultra  1080p Ultra  4K High  8K High

All of our benchmark results can also be found in our benchmark engine, Bench.

Shadow of War IGP Low Medium High
Average FPS

At our IGP and Low settings, the G5400 has an obvious lead, although this is minimised in the Medium and High settings.



Gaming: Civilization 6 (DX12)

Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games is a cult classic, and many an excuse for an all-nighter trying to get Gandhi to declare war on you due to an integer overflow. Truth be told, I never actually played the first version, but every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, is a game that is easy to pick up but hard to master.

Benchmarking Civilization has always been somewhat of an oxymoron: for a turn-based strategy game, the frame rate is not necessarily the important thing here, and even in the right mood, something as low as 5 frames per second can be enough. With Civilization 6, however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on graphics and CPUs as we crank up the details, especially in DirectX 12.

Perhaps a more poignant benchmark would be the late game, when in older versions of Civilization it could take 20 minutes to cycle through the AI players before the human regained control. The new version of Civilization has an integrated ‘AI Benchmark’, although it is not part of our benchmark portfolio yet, due to technical reasons which we are trying to solve. Instead, we run the graphics test, which provides an example of a mid-game setup at our settings.

AnandTech CPU Gaming 2019 Game List
Game             Genre  Release Date  API   IGP          Low       Med       High
Civilization VI  TBS    Oct 2016      DX12  1080p Ultra  4K Ultra  8K Ultra  16K Low

All of our benchmark results can also be found in our benchmark engine, Bench.

Civilization VI IGP Low Medium High
Average FPS
95th Percentile

At 1080p and 4K, the G5400 has a reasonable 10%+ win over the 200GE, which reduces as we move to 8K and 16K testing.



Gaming: Ashes Classic (DX12)

Seen as the holy child of DirectX 12, Ashes of the Singularity (AoTS, or just Ashes) has been the first title to actively explore as many DirectX 12 features as it possibly can. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness during both wide-open shots and concentrated battles. With DirectX 12 at the helm, the ability to issue more draw calls per second allows the engine to work with substantial unit depth and effects that other RTS titles had to rely on combined draw calls to achieve, making some combined unit structures ultimately very rigid.

Stardock clearly understands the importance of an in-game benchmark, ensuring that such a tool was available and capable from day one, especially because being able to characterize how all the additional DX12 features affected the title was important for the developer. The in-game benchmark performs a four-minute fixed-seed battle environment with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run Ashes Classic: an older version of the game, before the Escalation update. The reason is that it is easier to automate, without a splash screen, but still has strong visual fidelity to test.

AnandTech CPU Gaming 2019 Game List
Game            Genre  Release Date  API   IGP            Low             Med             High
Ashes: Classic  RTS    Mar 2016      DX12  720p Standard  1080p Standard  1440p Standard  4K Standard

Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, Textures, and separate options for the terrain. There are several presets, from Very Low to Extreme; we run our benchmarks at the above settings and take the frame-time output for our average and percentile numbers.
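For reference, this is roughly how a frame-time log becomes the two numbers we chart; the nearest-rank percentile method here is our assumption, not necessarily the exact math Ashes uses internally:

```python
import math

def fps_metrics(frame_times_ms):
    """Turn a per-frame render-time log (milliseconds) into average FPS
    and a 95th-percentile FPS figure dominated by the slowest frames."""
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    # Nearest-rank 95th-percentile frame time; converting it back to a
    # frame rate gives a 'worst 5 percent' number.
    ordered = sorted(frame_times_ms)
    p95_time = ordered[math.ceil(0.95 * len(ordered)) - 1]
    return avg_fps, 1000.0 / p95_time

# 90 smooth frames at a 60 FPS pace plus 10 hitches at a 30 FPS pace:
avg, p95 = fps_metrics([16.7] * 90 + [33.3] * 10)
print(round(avg, 1), round(p95, 1))  # → 54.5 30.0
```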

All of our benchmark results can also be found in our benchmark engine, Bench.

Ashes IGP Low Medium High
Average FPS
95th Percentile

At every setting the results are similar, showing that Ashes has a consistent CPU bottleneck for these small processors.



Gaming: Strange Brigade (DX12, Vulkan)

Strange Brigade is based in 1903’s Egypt and follows a story very similar to that of the Mummy film franchise. This particular third-person shooter is developed by Rebellion Developments, more widely known for games such as the Sniper Elite and Alien vs Predator series. The game follows the hunt for Seteki the Witch Queen, who has arisen once again, and the only ‘troop’ who can ultimately stop her. Gameplay is cooperative-centric, with a wide variety of levels and many puzzles which need solving by the British colonial Secret Service agents sent to put an end to her reign of barbarism and brutality.

The game supports both the DirectX 12 and Vulkan APIs and houses its own built-in benchmark which offers various options up for customization including textures, anti-aliasing, reflections, draw distance and even allows users to enable or disable motion blur, ambient occlusion and tessellation among others. AMD has boasted previously that Strange Brigade is part of its Vulkan API implementation offering scalability for AMD multi-graphics card configurations.

AnandTech CPU Gaming 2019 Game List
Game              Genre  Release Date  API           IGP       Low           Med         High
Strange Brigade*  FPS    Aug 2018      DX12, Vulkan  720p Low  1080p Medium  1440p High  4K Ultra
*Strange Brigade is run in DX12 and Vulkan modes

All of our benchmark results can also be found in our benchmark engine, Bench.

Strange Brigade DX12 IGP Low Medium High
Average FPS
95th Percentile

Strange Brigade Vulkan IGP Low Medium High
Average FPS
95th Percentile

Some of the biggest differences between the two processors are in the IGP and Low settings for Strange Brigade. At the High settings the parts are essentially equal, however.



Gaming: Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet followed by an inner city drive-by through several intersections followed by ramming a tanker that explodes, causing other cars to explode as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.

AnandTech CPU Gaming 2019 Game List
Game                Genre       Release Date  API   IGP       Low         Med              High
Grand Theft Auto V  Open World  Apr 2015      DX11  720p Low  1080p High  1440p Very High  4K Ultra

There are no presets for the graphics options on GTA, allowing the user to adjust options such as population density and distance scaling on sliders, but others such as texture/shadow/shader/water quality from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution and extended draw distance options. There is a handy option at the top which shows how much video memory the options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there’s no obvious indication if you have a low end GPU with lots of GPU memory, like an R7 240 4GB).

All of our benchmark results can also be found in our benchmark engine, Bench.

GTA V IGP Low Medium High
Average FPS
95th Percentile

This popular test gives a clear win for the G5400, except at 4K where the parts are very similar.



Gaming: Far Cry 5

The latest title in Ubisoft's Far Cry series lands us right into the unwelcoming arms of an armed militant cult in Montana, one of the many middles-of-nowhere in the United States. With a charismatic and enigmatic adversary, gorgeous landscapes of the northwestern American flavor, and lots of violence, it is classic Far Cry fare. Graphically intensive in an open-world environment, the game mixes in action and exploration.

Far Cry 5 does support Vega-centric features with Rapid Packed Math and Shader Intrinsics. Far Cry 5 also supports HDR (HDR10, scRGB, and FreeSync 2). We use the in-game benchmark for our data, and report the average/minimum frame rates.

AnandTech CPU Gaming 2019 Game List
Game       Genre  Release Date  API   IGP       Low           High
Far Cry 5  FPS    Mar 2018      DX11  720p Low  1080p Normal  4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

Far Cry 5 IGP Low High
Average FPS
95th Percentile

As with the other tests, the G5400 wins out here.



Gaming: Shadow of the Tomb Raider (DX12)

The latest instalment of the Tomb Raider franchise does less rising and lurks more in the shadows with Shadow of the Tomb Raider. As expected, this action-adventure follows Lara Croft, the main protagonist of the franchise, as she muscles through the Mesoamerican and South American regions looking to stop a Mayan apocalypse she herself unleashed. Shadow of the Tomb Raider is the direct sequel to Rise of the Tomb Raider; developed by Eidos Montreal and Crystal Dynamics and published by Square Enix, it hit shelves across multiple platforms in September 2018. This title effectively closes the Lara Croft Origins story and received critical acclaim upon its release.

The integrated Shadow of the Tomb Raider benchmark is similar to that of the previous game Rise of the Tomb Raider, which we have used in our previous benchmarking suite. The newer Shadow of the Tomb Raider uses DirectX 11 and 12, with this particular title being touted as having one of the best implementations of DirectX 12 of any game released so far.

AnandTech CPU Gaming 2019 Game List
Game                       Genre   Release Date  API   IGP       Low           Med         High
Shadow of the Tomb Raider  Action  Sep 2018      DX12  720p Low  1080p Medium  1440p High  4K Highest

All of our benchmark results can also be found in our benchmark engine, Bench.

Shadow of the Tomb Raider IGP Low Medium High
Average FPS
95th Percentile

The comparison between the two CPUs is consistent through to our medium settings: the G5400 has a comfortable win.



Gaming: F1 2018

Aside from keeping up-to-date on the Formula One world, F1 2017 added HDR support, which F1 2018 has maintained; otherwise, we should see any newer versions of Codemasters' EGO engine find its way into F1. Graphically demanding in its own right, F1 2018 keeps a useful racing-type graphics workload in our benchmarks.

We use the in-game benchmark, set to run on the Montreal track in the wet, driving as Lewis Hamilton from last place on the grid. Data is taken over a one-lap race.

AnandTech CPU Gaming 2019 Game List
Game     Genre   Release Date  API   IGP       Low        Med      High
F1 2018  Racing  Aug 2018      DX11  720p Low  1080p Med  4K High  4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

F1 2018 IGP Low Medium High
Average FPS
95th Percentile

Even through to 4K, the G5400 is the processor to have for F1 2018 between the two.



Power Consumption: TDP Doesn't Matter

Regular readers may have come across a recent article I wrote about the state of power consumption and the magic 'TDP' numbers that Intel writes on the side of its processors. In that piece, I wrote that the single number is often both misleading and irrelevant, especially for the new Core i9 parts sitting at the top of Intel's offerings. These parts, labeled 95W, can easily go beyond 160W, and motherboard manufacturers don't adhere to Intel's official specifications on turbo time. Users without appropriate cooling could hit thermally-limited performance states very quickly.

Well, I'm here to tell you that the TDP numbers for the G5400 and 200GE are similarly misleading and irrelevant, but in the opposite direction.

On the official specification lists, the Athlon 200GE is rated at 35W - all of AMD's GE processors are rated at this value. The Pentium G5400 situation is a bit more complex, as it offers two values: 54W or 58W, depending on if the processor has come from a dual-core design (54W) or a cut down quad-core design (58W). There's no real way to tell which one you have without taking the heatspreader off and seeing how big the silicon is.

For our power tests, we probe the internal power registers during a heavy load (in this case, POV-Ray), and see what numbers spit out. Both Intel and AMD have been fairly good in recent memory in keeping these registers open, showing package, core, and other power values. TDP relates to the full CPU package, so here's what we see with a full load on both chips:
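For readers wanting to replicate this kind of measurement, the same counters are reachable without vendor tools. On Linux, for instance, the package energy counter is exposed through the powercap sysfs interface; a minimal sampler follows (the sysfs path is the standard Intel RAPL one, and the helper names are ours):

```python
import time

def package_power_watts(read_energy_uj, interval_s: float = 1.0) -> float:
    """Average package power over an interval, given a callable that
    reads the CPU's cumulative energy counter in microjoules."""
    start = read_energy_uj()
    time.sleep(interval_s)
    return (read_energy_uj() - start) / (interval_s * 1e6)

def read_rapl_uj(path: str = "/sys/class/powercap/intel-rapl:0/energy_uj"):
    # Package-domain energy counter on Linux; requires a supported CPU.
    with open(path) as f:
        return int(f.read())

# Sanity check with a fake counter: 18 J over one second reads as 18 W.
fake = iter([0, 18_000_000])
print(package_power_watts(lambda: next(fake)))  # → 18.0
```

Under load, `package_power_watts(read_rapl_uj)` gives the same package figure our Windows tooling reports.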

Power (Package), Full Load

That was fairly anticlimactic. Both CPUs have power consumption numbers well below the rated number on the box - AMD at about half, and Intel below half. So when I said those numbers were misleading and irrelevant, this is what I mean.

Truth be told, we can look at this analytically. AMD's big chips, with eight cores and simultaneous multithreading, carry a box number of 105W and a tested result of 117W. That's at high frequency (4.3 GHz) on all cores, so if we cut that down to two cores at the same frequency, we get 29W, which is already under the 200GE's TDP. Scale back the frequency, and the voltage with it, remember that the relationship is non-linear, and it's quite clear where the 18W peak power of the 200GE comes from. The Intel chip is similar.
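That back-of-the-envelope reasoning follows the classic dynamic-power approximation, with power scaling with core count, frequency, and voltage squared. A sketch (the voltage values are illustrative assumptions, not measurements):

```python
def scaled_power(base_watts, base_cores, base_ghz, base_volts,
                 cores, ghz, volts):
    """Estimate CPU power via the P ~ n_cores * f * V^2 approximation."""
    ratio = (cores / base_cores) * (ghz / base_ghz) * (volts / base_volts) ** 2
    return base_watts * ratio

# 117 W measured for 8 cores at 4.3 GHz; two cores at the same
# frequency and voltage land at the 29 W quoted above...
print(round(scaled_power(117, 8, 4.3, 1.35, 2, 4.3, 1.35), 1))  # → 29.2

# ...and dropping to 3.2 GHz at an assumed lower voltage approaches
# the ~18 W peak power we measured for the 200GE.
print(round(scaled_power(117, 8, 4.3, 1.35, 2, 3.2, 1.2), 1))  # → 17.2
```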

So why even rate it that high?

Several reasons. Firstly, vendors will argue that TDP is a measure of cooling capacity, not power (technically true), and so getting a 35W or 54W cooler is overkill for these chips, helping keep them cool and viable for longer (as they might already be rejected silicon). Riding close to the actual power consumption might give motherboard vendors more reasons to cheap out on power delivery on the cheapest products too. Then there's the argument that some chips, the ones that barely make the grade, might actually hit that power value at load, so they have to cover all scenarios. There's also perhaps a bit of market expectation: if you say it's an 18W processor, people might not take it seriously.

It all makes very little sense, but there we are. This is why we test.



Overclocking the Athlon 200GE

In recent weeks, motherboard manufacturers have been releasing BIOS firmware that enables overclocking on the Athlon 200GE. It appears that this has come through an oversight in one of the base AMD firmware revisions that motherboard vendors are now incorporating into their firmware bundles. This is obviously not what AMD expected; the Athlon is the solitary consumer desktop chip on AMD's AM4 platform that is not overclockable. Since MSI first started going public with new firmware revisions, others have followed suit, including ASRock and GIGABYTE. There is no word on whether this change will be permanent: AMD might patch it in future revisions it sends to the motherboard vendors, or those vendors may continue to patch around it. As it stands, however, a good number of motherboards can now offer this functionality.

The question does arise whether there is even a point to overclocking these chips. They are very cheap, they usually go into cheap motherboards that might not even allow overclocking, and they are usually paired with cheaper coolers. The extra money spent on an overclocking-enabled motherboard, or even $20 on a cooler, might as well be put into upgrading the CPU to a Ryzen 3 2200G, which has four cores and better integrated graphics, comes with a better stock cooler, stomps all over Intel's Pentium line, and is overclockable without special firmware. The standard response to 'why overclock' is 'because we can', which, if you've lived in that part of the industry, is more than enough justification.

Given that our resident motherboard editor, Gavin, has been on a crusade through 2018 looking at the scaling performance of the AMD APUs, I asked if he could do a few overclocking tests for us.

Overclocking the 200GE

Enabling our MSI motherboard with the latest overclocking BIOS was no different from any other BIOS flash, and with it, the multiplier options opened up for the chip. Even though AMD's chips can move in quarter-multiplier steps, we could only push this processor in full multiplier jumps of 100 MHz, but with a little bit of extra voltage, using our usual overclocking methodology, we managed 3.9 GHz without any trouble.

We are using a good cooler here, but in truth the thermals were not much of a problem. The practical limit was the voltage/frequency response of the chip, and our 3.9 GHz matches what others have reported. The base clock is locked, so there is little room for fine adjustments on that front.

At each stage of the overclock, we ran our Blender test. The gains scaled almost linearly with frequency, resulting in a 20% increase in throughput from the stock frequency to the best overclock.
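As a rough sanity check on that scaling, the frequency uplift itself can be worked out from the stock and overclocked speeds (a sketch; the frequencies are from the text, while perfect scaling is an idealized assumption, not a measurement):

```python
# Near-linear scaling: if Blender throughput tracked core clock perfectly,
# the performance uplift would match the frequency uplift.
stock_ghz, oc_ghz = 3.2, 3.9

freq_uplift = oc_ghz / stock_ghz - 1   # ~0.219, i.e. ~22% more clock
print(f"Frequency uplift: {freq_uplift:.1%}")

# A ~20% measured throughput gain sits just under this ideal, which is
# what 'almost linear' scaling looks like in practice.
```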

Thoughts

A performance increase of around 20 percent across the range of benchmarks would put the 200GE either on par with Intel in most tests, or even further ahead in the tests it already wins. This changes our conclusion somewhat, as explained on the next page.

If you want to see a full suite test at the overclocked speed, leave a comment below and we'll set something up in January. 



Conclusion: Split Strategy

Battling CPUs at $60 is going to be a tough call. Do you surround the chip with the best hardware money can buy, to compare the absolute limits under ideal conditions, or do you keep the rest of the system reasonable for the price bracket it is intended for? I'm a big advocate of building a system piece by piece with the best you can afford at the time, rather than buying several below-average components at once, so my suggested setup falls into neither the all-out nor the budget option. In our comparison of the G5400 and the 200GE, however, the results are fairly clear-cut.

When deciding between these two processors, there's a hierarchy of questions you need to ask.

  1. Are you going to need the integrated graphics for gaming or compute?
  2. Do you already have good overclocking tools?

If the answer is yes to either of those, then the processor to get is the AMD Athlon 200GE. But for anyone pairing the chip with a discrete graphics card, or building a fresh system around one, the answer is the Intel Pentium G5400.

Let me explain.

In all of our CPU and office benchmarks, except those that are heavy on floating-point math (calculations with fractional rather than whole numbers), the Intel processor is the clear winner. There's no mistaking where it sits in our tests: it often beats the AMD chip by 8 to 20 percent.

PCMark10 Extended Score

In gaming with a discrete graphics card, if you've invested in something like a GTX 1080, the Intel Pentium will push more frames and higher minimum frame rates in practically every test at every resolution.

GTX 1080: Grand Theft Auto V, Average FPS

If I were building a work and play system for anyone in my family, out of the two I'd take the Pentium G5400.

There are two situations in which I'd take the Athlon, however. If the system is a true budget gaming build, aiming for good 720p action without a discrete card, then the Athlon is the obvious choice: it knocks six shades out of the Pentium in integrated graphics performance.

IGP: Grand Theft Auto V, Average FPS

The other exception is if I already have a good motherboard and cooler to hand, and that motherboard allows overclocking. I wouldn't go out of my way to buy these parts for a specific build, but if I had them spare and still had to choose between the two, I'd take the Athlon here as well, then push it to a good frequency.

But the baseline choice remains the Intel Pentium G5400 in this shoot-out. 

If you want to compare either processor with any of the other processors we've tested on AnandTech, don't forget to check out our benchmark database comparison pages!
