AMD 3990X Against $20k Enterprise CPUs

For those looking at a server replacement CPU, AMD's big talking point is that getting to 64 cores on Intel hardware is relatively hard: the most direct route is a dual-socket system with two of Intel's 28-core CPUs at a hefty $10k apiece. AMD's argument is that users can consolidate down to a single socket, while also getting better memory support, PCIe 4.0, and no cross-socket memory domain issues.

AMD 3990X Enterprise Competition

AnandTech            AMD 3990X    AMD 7702P    Intel 2x 8280
SEP                  $3,990       $4,450       $20,018
Cores / Threads      64 / 128     64 / 128     56 / 112
Base Frequency       2900 MHz     2000 MHz     2700 MHz
Turbo Frequency      4300 MHz     3350 MHz     4000 MHz
PCIe                 4.0 x64      4.0 x128     3.0 x96
DDR4 Frequency       4 x 3200     8 x 3200     12 x 2933
Max DDR4 Capacity    512 GB       2 TB         3 TB
TDP                  280 W        200 W        410 W

Unfortunately I was unable to get hold of our Rome CPUs from Johan in time for this review; however, I do have data from several dual Intel Xeon setups tested a few months ago, including the $20k system.

Corona 1.3 Benchmark

This time with Corona, the competition is hot on the heels of AMD's 64-core CPU, but even $20k of hardware can't match it.

3D Particle Movement v2.1

The non-AVX version of 3DPM puts the Zen 2 hardware out front, with everything else waiting in the wings.

3D Particle Movement v2.1 (with AVX)

When we add in the hand-tuned AVX-512 code, the situation flips: Intel's 56 cores score almost 2.5x higher than AMD's 64, despite the core deficit.
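The flip comes down to vector width: each AVX-512 instruction operates on eight doubles at once, while the non-AVX build processes elements essentially one at a time. A back-of-the-envelope lane count (a simplification that assumes one vector operation issued per core per cycle and ignores frequency, port, and throttling effects, which is why the observed gap is ~2.5x rather than the theoretical ceiling):

```python
# Rough "lanes of double-precision work per cycle" comparison.
# Assumed model: 8 doubles per AVX-512 op on Intel, scalar (1) on the
# non-AVX AMD path. Real 3DPM results also depend on clocks under AVX load.
intel_lanes = 56 * 8   # 56 cores x 8 doubles per AVX-512 instruction
amd_lanes = 64 * 1     # 64 cores on the scalar path

print(intel_lanes, amd_lanes)            # 448 64
print(intel_lanes / amd_lanes)           # 7.0x theoretical; ~2.5x observed
```

The shortfall from 7x to 2.5x is itself informative: AVX-512 forces clock offsets, and the "non-AVX" binary still benefits from compiler autovectorization.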

Blender 2.79b bmw27_cpu Benchmark

Blender doesn't seem to like the additional access latency from the 2P systems.

AES Encoding

For AES encoding, as the benchmark runs out of main memory, it appears that none of Intel's CPUs can match AMD here.

7-Zip 1805 Combined

For the 7-zip combined test, there's little difference between AMD's 32-core and 64-core, but there are sizable jumps above Intel hardware.
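That plateau between AMD's 32-core and 64-core parts is what Amdahl's law predicts once the serial fraction of a workload starts to dominate. A quick illustration (the 10% serial fraction is an assumed figure for illustration, not measured from 7-Zip):

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup for a workload with a fixed serial portion."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

s = 0.10  # assumed: 10% of the work cannot be parallelized
print(round(amdahl_speedup(s, 32), 2))  # ~7.8x over one core
print(round(amdahl_speedup(s, 64), 2))  # ~8.77x over one core
```

Doubling the cores from 32 to 64 buys only ~12% more throughput in this model, which matches the small gap we see in the combined score.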

POV-Ray 3.7.1 Benchmark

LuxMark v3.1 C++

AppTimer: GIMP 2.10.4

Verdict

In our tests here (more in our benchmark database), AMD's 3990X takes the crown over Intel's dual-socket offerings. The only thing holding me back from awarding it outright is the same hesitation as on the previous page: it doesn't do enough to differentiate itself from AMD's own 32-core CPU. Where AMD does win is in the 'money is less of an issue' scenario, where a single-socket 64-core CPU can help consolidate systems, save power, and save money. Intel's CPUs have a TDP of 205 W each (more if you enable the turbo, which we did here), for a total of 410 W, while AMD maxed out at 280 W in our tests. Technically Intel's 2P setup has access to more PCIe lanes, but AMD's lanes are PCIe 4.0 rather than 3.0, and with the right switch can feed many more devices than Intel's (if you're saving $16k, a switch is peanuts).
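The consolidation argument reduces to simple arithmetic. A back-of-the-envelope sketch using the SEP and TDP figures from the table above (the ~0.985 GB/s and ~1.969 GB/s per-lane figures are the nominal PCIe 3.0 and 4.0 x1 rates, before protocol overhead):

```python
# Single Threadripper 3990X vs dual Xeon Platinum 8280
intel_tdp = 2 * 205          # watts, before turbo headroom
amd_tdp = 280                # watts, measured maximum in our tests
intel_price = 20018          # SEP for two 8280s
amd_price = 3990

print("power saved:", intel_tdp - amd_tdp, "W")    # 130 W
print("price delta: $", intel_price - amd_price)   # $16,028

# Aggregate PCIe bandwidth: fewer lanes, but each 4.0 lane moves ~2x the data
intel_pcie = 96 * 0.985      # GB/s across 96 PCIe 3.0 lanes
amd_pcie = 64 * 1.969        # GB/s across 64 PCIe 4.0 lanes
print(round(intel_pcie), "vs", round(amd_pcie), "GB/s")  # ~95 vs ~126 GB/s
```

So even before a switch enters the picture, the single-socket part has more aggregate I/O bandwidth, not less.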

We acknowledge that our tests here aren't in any way a comprehensive test of server level workloads, but for the user base that AMD is aiming for, we'd take the 64 core (or even the 32 core) in most circumstances over two Intel 28 core CPUs, and spend the extra money on memory, storage, or a couple of big fat GPUs.

279 Comments

  • HStewart - Friday, February 7, 2020 - link

    One note on render farms: in the past I created my own render farms, and it was better to use multiple machines than more cores, because the dependency on disk IO speed can be distributed. Yes, it is a more expensive option, but disk IO is seriously more time consuming than processor time.

    Note: a content creation workstation is a different case - and more cores would be nice.
  • MattZN - Friday, February 7, 2020 - link

    SSDs and NVMe drives have pretty much removed the write bottleneck for tasks such as rendering or video production, and memory has removed the read bottleneck. Very few of these sorts of workloads are dependent on I/O any more. Rendering, video conversion, bulk compiles... on modern systems there is very little I/O involved relative to the CPU load.

    Areas which are still sensitive to I/O bandwidth would include interactive video editing, media distribution farms, and very large databases. Almost nothing else.

    -Matt
  • HStewart - Saturday, February 8, 2020 - link

    I think we need to see a benchmark specifically on render frames: a single 64-core computer versus dual 32-core machines in a network versus quad-core machines in a network. All machines would have the same CPU design, same storage, and possibly same memory. Memory is the questionable part because of the core load.

    I have a feeling that with a correctly designed render farm the single 64-core will likely lose the battle, but of course the render job must be a large one to merit this test.

    For video editing and workstation designs a single CPU should be fine.
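A toy model of the experiment proposed here is easy to sketch (all timings are assumed figures for illustration; a real test would measure them):

```python
def farm_time(frames, nodes, cores_per_node, per_core_frame_secs=6400,
              transfer_secs_per_frame=2):
    """Wall time for a render farm: frames are split evenly across nodes,
    each node renders its share across its local cores, and networked
    nodes pay a fixed per-frame cost to ship assets and results."""
    frames_per_node = -(-frames // nodes)   # ceiling division
    render = frames_per_node * per_core_frame_secs / cores_per_node
    network = frames_per_node * transfer_secs_per_frame
    return render + network

single = farm_time(240, nodes=1, cores_per_node=64, transfer_secs_per_frame=0)
duo = farm_time(240, nodes=2, cores_per_node=32)
print(single, duo)  # 24000.0 24240.0
```

Under perfect scaling the compute time is identical, so in this model the networked farm can only add overhead; it wins only when a single box can't hold enough cores, memory, or I/O locally.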
  • HStewart - Saturday, February 8, 2020 - link

    One more thing: these render tests need to be using real render software - not POV-Ray, Corona and Blender.

    I personally use Lightwave 3D from NewTek, but 3ds Max, Maya and Cinema 4D are good choices - also custom RenderMan software.
  • Reflex - Saturday, February 8, 2020 - link

    It wouldn't change the results.
  • HStewart - Sunday, February 9, 2020 - link

    Yes it would - these are real 3D render projects. For example, one of the reasons I got into Lightwave is the Star Trek movies; it was also used in older series like Babylon 5 and SeaQuest DSV. Think about Pixar movies instead of scenes in games and such.
  • Reflex - Sunday, February 9, 2020 - link

    It would not change the relative rankings of the CPUs against each other by appreciable amounts, which is what people read a comparative review for.
  • Reflex - Saturday, February 8, 2020 - link

    Network latency and transfer rates are significantly worse than PCIe. Below you challenged my points by discussing I/O and storage, but here you go the other direction, suggesting a networked cluster could somehow be faster. That is not only unlikely, it would be ahistorical, as clusters built for performance have always been a workaround for limited local resources.

    I used to mess around with Beowulf clusters back in the day, it was never, ever faster than simply having a better local node.
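The gap Reflex describes is easy to quantify with nominal link speeds (real-world figures vary with protocol overhead; the NVMe and per-channel DDR4 rates below are rough assumed values):

```python
scene_gb = 1.0           # hypothetical 1 GB scene file to move to a worker
gbit_to_gbyte = 1 / 8    # gigabits/s -> gigabytes/s

transfer_secs = {
    "10GbE network": scene_gb / (10 * gbit_to_gbyte),  # 0.80 s
    "NVMe SSD (~3 GB/s)": scene_gb / 3.0,              # ~0.33 s
    "DDR4 (~20 GB/s/channel)": scene_gb / 20.0,        # 0.05 s
}
for path, secs in transfer_secs.items():
    print(f"{path}: {secs:.2f} s")
```

Even a fast network is an order of magnitude slower than local memory, before counting latency, which is why a bigger local node beats a cluster of small ones until the local node runs out of capacity.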
  • Reflex - Friday, February 7, 2020 - link

    You may wish to read the article, which answers your 'honest generic CPU question' nicely. Short version is: It depends on your workload. If you just browse the web with a dozen tabs and play games, no this isn't worth the money. If you do large amounts of video processing and get paid for it, this is probably worth every penny. Basically your mileage may vary.
  • HStewart - Saturday, February 8, 2020 - link

    Video processing and rendering likely depend on disk IO - also, as far as I know, video output is single threaded unless the video card allows multiple connections at the same time.

    I just think adding more cores is trying to get away from actually tackling the problem. The design of the computer needs to change.
