Section by Dr. Ian Cutress (Original article)

Windows Optimizations

One long-standing pain point for non-Intel processors running Windows has been the operating system's optimizations and scheduler arrangements. We’ve seen in the past how Windows has not been kind to non-Intel microarchitecture layouts, such as AMD’s previous module design in Bulldozer, Qualcomm’s hybrid CPU strategy with Windows on Snapdragon, and more recently the multi-die arrangements on Threadripper that introduced different memory latency domains into consumer computing.

Obviously AMD has a close relationship with Microsoft when it comes to identifying a non-regular core topology in a processor, and the two companies work towards ensuring that thread and memory assignments, absent program-driven direction, make the most out of the system. With the Windows 10 May 2019 Update, some additional features have been put in place to get the most out of the upcoming Zen 2 microarchitecture and Ryzen 3000 silicon layouts.

The optimizations come on two fronts, both of which are reasonably easy to explain.

Thread Grouping

The first is thread allocation. When a processor has different ‘groups’ of CPU cores, there are different ways in which threads are allocated, all of which have pros and cons. The two extremes for thread allocation come down to thread grouping and thread expansion.

Thread grouping allocates new threads, as they are spawned, onto cores directly next to cores that already have threads. This keeps threads close together for thread-to-thread communication; however, it can create regions of high power density, especially when the processor has many cores but only a couple are active.

Thread expansion places new threads on cores as far away from each other as possible. In AMD’s case, this would mean a second thread spawning on a different chiplet, or a different core complex (CCX), as far away as possible. This allows the CPU to maintain high performance by avoiding regions of high power density, typically providing the best turbo performance across multiple threads.

The danger of thread expansion comes when a program spawns two threads that end up on different sides of the CPU. In Threadripper, this could even mean the second thread landed on a part of the CPU with longer memory latency, causing an imbalance in the potential performance between the two threads, even though both cores would have been at the higher turbo frequency.

Because modern software, and video games in particular, now spawns multiple threads that need to talk to each other rather than relying on a single thread, AMD is moving from a hybrid thread expansion technique to a thread grouping technique. This means that one CCX will fill up with threads before another CCX is even touched. AMD believes that, despite the potential for high power density within one chiplet while the other sits inactive, the trade-off is still worth it for overall performance.
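
To make the distinction concrete, the sketch below shows how an application could impose thread grouping on itself by pinning worker threads to adjacent logical processors with Windows affinity masks. This only illustrates the policy the updated scheduler now applies automatically; the assumption that low-numbered logical processors share a CCX is hypothetical, and a real program would query the topology with GetLogicalProcessorInformationEx rather than assume it.

```cpp
// Minimal sketch: manually "grouping" worker threads onto adjacent logical
// processors via Windows affinity masks. Illustration only; the mapping of
// logical processor index to CCX/chiplet is an assumption here and should be
// queried with GetLogicalProcessorInformationEx in real code.
#include <windows.h>
#include <thread>
#include <vector>
#include <cstdio>

void worker(int id) {
    // Pin this thread to logical processor 'id', assuming (hypothetically)
    // that low-numbered logical processors sit on the same CCX.
    DWORD_PTR mask = 1ull << id;
    SetThreadAffinityMask(GetCurrentThread(), mask);
    // ... work that benefits from short thread-to-thread latency ...
    std::printf("worker %d pinned to logical CPU %d\n", id, id);
}

int main() {
    const int numThreads = 4;          // e.g. a game spawning a few busy threads
    std::vector<std::thread> pool;
    for (int i = 0; i < numThreads; ++i)
        pool.emplace_back(worker, i);  // fill adjacent cores first ("grouping")
    for (auto& t : pool)
        t.join();
}
```

The thread-expansion alternative would simply be a different choice of mask per thread (for example, spreading threads across cores known to be on different chiplets), trading inter-thread latency for better turbo headroom.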

For Matisse, this should afford a nice improvement in limited-thread scenarios, and on the face of it, gaming. It will be interesting to see how much of an effect this has on the upcoming EPYC Rome CPUs or future Threadripper designs. The single benchmark AMD provided in its explanation was Rocket League at 1080p Low, which reported a +15% frame rate gain.

Clock Ramping

For any of our users familiar with our Skylake microarchitecture deep dive, you may remember that Intel introduced a new feature called Speed Shift that enabled the processor to adjust between different P-states more freely, as well as to ramp from idle to load very quickly: from 100 ms down to 40 ms in the first version in Skylake, then down to 15 ms with Kaby Lake. It did this by handing P-state control back from the OS to the processor, which reacts based on instruction throughput and requests. With Zen 2, AMD is now enabling the same feature.

AMD already has finer granularity in its frequency adjustments than Intel, allowing for 25 MHz steps rather than 100 MHz steps; however, enabling a faster ramp-to-load frequency jump is going to help AMD when it comes to very burst-driven workloads, such as WebXPRT (Intel’s favorite for this sort of demonstration). According to AMD, the way this has been implemented with Zen 2 will require BIOS updates as well as moving to the Windows 10 May 2019 Update, but it will reduce frequency ramping from ~30 milliseconds on Zen to ~1-2 milliseconds on Zen 2. It should be noted that this is much faster than the numbers Intel tends to provide.
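
As a rough way to observe this behaviour on any system, the sketch below times a series of short, fixed-size work bursts immediately after the core has been allowed to idle: on a processor that ramps slowly, the first bursts take measurably longer than the steady-state ones. The burst size and iteration counts are arbitrary assumptions, and this is not AMD's or Intel's measurement methodology, just an illustration of what "ramp-to-load" means in practice.

```cpp
// Rough sketch of watching frequency ramp-up from idle: sleep to let the core
// drop to a low P-state, then time short bursts of fixed integer work and
// watch the per-burst time shrink as the clock ramps. Burst sizes are
// arbitrary and chosen only for illustration.
#include <chrono>
#include <thread>
#include <cstdio>

volatile unsigned long long sink = 0;   // keeps the work loop from being optimized away

static void burst(unsigned long long iters) {
    unsigned long long x = 0;
    for (unsigned long long i = 0; i < iters; ++i)
        x += i * 2654435761ull;         // cheap integer work
    sink = x;
}

int main() {
    using clock = std::chrono::steady_clock;
    std::this_thread::sleep_for(std::chrono::seconds(1));   // let the core idle down

    for (int i = 0; i < 50; ++i) {                           // a series of short bursts
        auto t0 = clock::now();
        burst(2'000'000);
        auto t1 = clock::now();
        double us = std::chrono::duration<double, std::micro>(t1 - t0).count();
        std::printf("burst %2d: %8.1f us\n", i, us);          // time drops as clocks ramp
    }
}
```
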

The technical name for AMD’s implementation is CPPC2, or Collaborative Power Performance Control 2, and AMD’s metrics state that it can improve both burst workloads and application loading. AMD cites a +6% performance gain in application launch times using PCMark10’s app launch sub-test.

Hardened Security for Zen 2

Another aspect to Zen 2 is AMD’s approach to the heightened security requirements of modern processors. As has been reported, a good number of the recent side-channel exploits do not affect AMD processors, primarily because of how AMD manages its TLBs, which have always required additional security checks before most of this became an issue. Nonetheless, for the issues to which AMD is vulnerable, it has implemented a full hardware-based security platform.

The change here is for Speculative Store Bypass, known as Spectre v4: AMD now has additional hardware that works in conjunction with the OS, or with virtual machine managers such as hypervisors, to control it. AMD doesn’t expect any performance change from these updates. Newer issues such as Foreshadow and Zombieload do not affect AMD processors.
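
For readers who want to check how their OS reports this mitigation, the short sketch below reads the Linux sysfs vulnerabilities entry for Speculative Store Bypass. The exact mitigation string is kernel-dependent; the comment shows one typical form only as an assumption.

```cpp
// Quick sketch: report the kernel's view of Speculative Store Bypass
// (Spectre v4) mitigation on Linux by reading the sysfs vulnerabilities
// interface. The reported string is kernel-dependent; on a system with SSBD
// support it is typically something like
// "Mitigation: Speculative Store Bypass disabled via prctl".
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream f("/sys/devices/system/cpu/vulnerabilities/spec_store_bypass");
    if (!f) {
        std::cerr << "sysfs entry not available on this kernel\n";
        return 1;
    }
    std::string status;
    std::getline(f, status);
    std::cout << "spec_store_bypass: " << status << "\n";
}
```
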


Comments


  • Maxiking - Tuesday, July 23, 2019 - link

    I said it a few times... I don't tend to buy AMD products, so no, I am not gonna sue anybody.

    And as pointed out in the video, in his German one, he works for a retailer selling prebuilt PCs. People keep returning PCs with AMD CPUs because they do not boost to the promised frequency. You see, there are things like laws: if you write 4.6 GHz on the box, it must reach it.

    You are so knowledgeable, sharp-minded and analytical when it comes to the meaning of words and what people want to say; you should sue Intel on your own, it should be easy.
  • Atom2 - Monday, July 29, 2019 - link

    The ICC compiler is 3x faster than LLVM and AVX-512 is 2x faster than AVX2, and both were left out of the comparison? Was the comparison designed purely for LLVM compiler users? Used by whom?
  • Rudde - Saturday, August 10, 2019 - link

    ICC is proprietary AFAIK and AnandTech prefers open compilers. AVX-512 should show up in 3DPM, and it shows utter demolition by the only processor that supports it (the 7920X).
  • MasterE - Wednesday, August 7, 2019 - link

    I considered going with the Ryzen 9 3900X chip and an X570 motherboard for a new rendering system, but since these chips aren't available for less than $820+ anywhere, I guess I'll be back to either Threadripper or the Intel 9000+ series. There is simply no way I'm paying that kind of price for a chip with a Manufacturer's Suggested Retail Price of $499.
  • gglaw - Friday, August 23, 2019 - link

    @Andrei - I was just digging through reviews again before biting the bullet on a 3900X, and one of the big questions that is not agreed upon in the tech community is gaming performance for PBO vs an all-core overclock, yet you only run 2 benches on the overclocked settings. How can a review be complete with only 2 benches run, neither related to gaming? In a PURELY single-threaded scenario, PBO gives a tiny 2.X percent increase in single-threaded Cinebench. This indicates to me that it is not sustaining the max 4.6 GHz on a single core, or it would have scaled better, so it may not really be comparing 4.6 vs 4.3 even for single-threaded performance. Almost all recent game engines can utilize at least 4 threads, so I feel the exact same test run through the gaming suite would have shown a consistent winner with the 4.3 GHz all-core OC vs PBO. In heavily threaded scenarios the gap would keep growing larger; and specifically in today's GAMES, especially if you consider very few of us have zero background activity, my guess is the all-core OC would win hands-down, but we could have better evidence of this if you could run a complete benchmarking suite. (Unless I'm blind and missed it, in which case my apologies.)

    I've been messing around with a 3700X, and even with a 14 cm Noctua cooling it, it does not sustain the max allowed boost on even a single core with PBO, which is another thing I wish you had touched on more. During your testing, do you monitor the boost speeds and what percentage of the time it can stay at max boost over XX minutes?
  • Maxiking - Monday, August 26, 2019 - link

    Veni, vidi, vici

    Yeah, I was right.

    I would like to thank my family for all the support I have received whilst fighting AMD fanboys.

    It was difficult; sometimes I seriously thought about giving up, but the truth cannot be stopped!
    The AMD fraud has been confirmed.

    https://www.reddit.com/r/pcgaming/comments/cusn2t/...
  • Ninjawithagun - Thursday, October 10, 2019 - link

    Now all you have to do is have all these benchmarks run again after applying the 1.0.0.3 ABBA BIOS update ;-)
  • quadibloc - Tuesday, November 12, 2019 - link

    I am confused by the diagram of the current used by individual cores as the number of threads is increased. Since SMT doesn't double the performance of a core, on the 3900X, for example, shouldn't the number of cores in use increase to all 12 for the first 12 threads, one core for each thread, with all cores then remaining in use as the number of threads continues to increase to 24?

    Or is it just that this chart represents power consumption under a particular setting that minimizes the number of cores in use, and other settings that maximize performance are also possible?
  • SjLeonardo - Saturday, December 14, 2019 - link

    Core and uncore get supplied by different VRMs, right?
  • Parkab0y - Sunday, October 4, 2020 - link

    I really want to see something like this for Zen 3 / the 5000 series.
