Machine Learning: MLPerf and AI Benchmark 4

MLPerf is a relatively new benchmark in this space that runs representative ML workloads on-device, taking advantage of both common frameworks such as Android's NNAPI and the respective vendors' own chip libraries. In our testing of retail phones to date, Qualcomm has led in almost all of the tests, but given that the company is promoting a 4x increase in AI performance, it will be interesting to see whether that gain shows up across all of MLPerf's testing scenarios.

It should be noted that Apple’s CoreML is currently not supported, hence the lack of Apple numbers here.
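For readers curious what the NNAPI path actually looks like, the sketch below is a minimal illustration (not MLPerf's own harness) of how an Android app can hand a TensorFlow Lite model to the NNAPI delegate, which in turn lets the OS route the graph to the vendor's NPU/DSP driver. The model shape and classifier size are illustrative assumptions.

```kotlin
import java.nio.MappedByteBuffer
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate

// Run one image-classification inference through the NNAPI delegate. "model" is a
// memory-mapped .tflite file (e.g. a MobileNet-style classifier) and "input" is a
// 1 x 224 x 224 x 3 float image tensor -- both hypothetical placeholders here.
fun classifyWithNnapi(model: MappedByteBuffer,
                      input: Array<Array<Array<FloatArray>>>): FloatArray {
    val delegate = NnApiDelegate()
    val interpreter = Interpreter(model, Interpreter.Options().addDelegate(delegate))
    val output = Array(1) { FloatArray(1001) }   // e.g. 1001 ImageNet classes
    interpreter.run(input, output)
    interpreter.close()                          // release the interpreter and accelerator
    delegate.close()
    return output[0]
}
```

The actual benchmark layers its own load generation and accuracy checks on top of paths like this, and, as noted above, can also target the vendor-specific libraries directly.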

MLPerf 1.0.1 - Image Classification
MLPerf 1.0.1 - Object Detection
MLPerf 1.0.1 - Image Segmentation
MLPerf 1.0.1 - Image Classification (Offline)

Across the board in these first four tests Qualcomm takes a sizable lead, going well beyond what the S888 can do. We're seeing results up to 2.2x higher, for an average gain of around +75%. That's not quite the 4x Qualcomm promoted in its materials, but it still opens a sizable gap over the other high-end silicon we've tested to date.

MLPerf 1.0.1 - Language Processing

The only test Qualcomm doesn't lead is language processing, where Google's Tensor SoC scores almost 2x what the S8g1 does. This test is based on a mobileBERT model, and whether for software or architecture reasons, it maps far better onto Google's chip than any other. As smartphones increase their ML capabilities, we might see some vendors optimize for specific workloads over others, as Google has, or offer different accelerator blocks for different models. The ML space also moves quickly, so optimizing for one type of model might not be a great long-term strategy. We will see.
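As a rough illustration of what this workload looks like (again, not MLPerf's own harness, and the model filename is a placeholder), a MobileBERT-style question-answering model can be driven through TensorFlow Lite's Task Library along these lines:

```kotlin
import android.content.Context
import org.tensorflow.lite.task.text.qa.BertQuestionAnswerer

// Ask a MobileBERT-style QA model one question about a passage of text.
// "mobilebert_qa.tflite" is a hypothetical asset name; MLPerf ships its own
// models and load generator, so treat this purely as a sketch of the workload.
fun answerQuestion(appContext: Context, passage: String, question: String): String? {
    val answerer = BertQuestionAnswerer.createFromFile(appContext, "mobilebert_qa.tflite")
    val answers = answerer.answer(passage, question)   // ranked candidate answer spans
    answerer.close()
    return answers.firstOrNull()?.text                 // best-scoring span, if any
}
```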

AI Benchmark 4 - NNAPI (CPU+GPU+NPU)

In AI Benchmark 4, running in pure NNAPI mode, the Qualcomm S8g1 takes a comfortable lead. Andrei noted in previous reviews that power draw during this test can be quite high, up to 14 W, which is where some chips might pull ahead with an efficiency advantage. Unfortunately we didn't record power alongside this run, but it would be good to monitor it in the future.
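One rough way such monitoring could be approximated on-device is to sample Android's battery fuel gauge via BatteryManager, as in the sketch below. The units and sign of CURRENT_NOW vary by device, and this only captures battery-rail power, so it's a proxy rather than a proper power meter.

```kotlin
import android.content.Context
import android.content.Intent
import android.content.IntentFilter
import android.os.BatteryManager

// Very rough battery-rail power estimate (in watts), intended to be sampled in a
// loop while a benchmark runs. CURRENT_NOW is typically in microamps and negative
// while discharging, but both unit and sign are device-dependent.
fun estimateBatteryPowerWatts(context: Context): Double {
    val bm = context.getSystemService(Context.BATTERY_SERVICE) as BatteryManager
    val currentMicroA = bm.getIntProperty(BatteryManager.BATTERY_PROPERTY_CURRENT_NOW)
    val voltageMilliV = context.registerReceiver(null, IntentFilter(Intent.ACTION_BATTERY_CHANGED))
        ?.getIntExtra(BatteryManager.EXTRA_VOLTAGE, -1) ?: -1
    if (voltageMilliV <= 0) return Double.NaN
    return kotlin.math.abs(currentMicroA.toDouble()) / 1_000_000.0 * (voltageMilliV / 1000.0)
}
```

Sampling something like this once a second during the run would at least flag the kind of 14 W peaks noted above, even if it isn't a substitute for external power measurement.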

Comments

  • Alistair - Tuesday, December 14, 2021 - link

    Their GPU is great, their CPU is 3 years behind now, and this improvement over last year is almost nil. Sigh. Sad.
  • Raqia - Tuesday, December 14, 2021 - link

    I'm looking forward to the '23 Nuvia designed cores for laptop compute, so let's see what they can do.

    However, I think it's perfectly fine if they went with a smaller ARM solution for future phone SoCs: in the S8G1 and the 888 they consciously chose, for two generations, to limit L3 cache sizes in their CPU complex (and hence single-threaded performance) below the biggest possible in order to dedicate die area and power consumption to other, higher-impact purposes. To me, an ideal Apple phone would have 6 of their small cores so they can dedicate more die area to their GPU, NPU and ISPs.

    John Carmack himself thought it best to throttle the CPU to half of maximum clock speed even in the XR2 (which is an 865 derivative using a faster-clocked, bigger-cache A77 as its biggest core):

    https://twitter.com/ID_AA_Carmack/status/130662113...
    https://twitter.com/ID_AA_Carmack/status/131878675...

    but he praised the XR2 in no uncertain terms, calling it "a lot of processing"

    https://youtu.be/sXmY26pOE-Y?t=1972

    It is indeed the DSPs and the GPUs doing the heavy lifting in the VR use case; I don't see it being much different for phones where wireless data rates are by far the biggest bottleneck.

    The CPU benches you see headlining many web SoC reviews matter only to the benchmark-obsessed, and to pretty much no one else.
  • Alistair - Tuesday, December 14, 2021 - link

    Your throttling argument doesn't make sense when the iPhone is also more efficient. You can run an iPhone at Snapdragon speeds and use way less power.
  • Raqia - Tuesday, December 14, 2021 - link

    If you look at the efficiency curves for the A77 and the A13's big CPU, they're pretty danged close:

    https://miro.medium.com/max/1155/1*U7qA0vDhixGAYes...

    The bigger point is that, for phones, pinning the A13's CPU the way most benchmarks do is well past the point of diminishing returns, since it's simply not the bottleneck in realistic workloads. You can make a very fast CPU for bench-marketing purposes and get semi-technical people excited about your SoC, but you don't need to go very far along the curve to both hit its "knee" and have excellent performance.

    Apple made the core for laptops and desktops (for which it's well suited) but included it in its iPhone for marketing purposes rather than to address actual performance needs. Some cite the fact that more apps are coded in Javascript and websites are more Javascript intensive these days, but by far the bigger culprit in responsiveness is data connectivity and they were happy to use Intel's inferior modems behind the scenes while trotting out big but irrelevant Geekbench scores. Furthermore, part of their battery-gate issue stems from the huge possible current draw of their CPUs, which while efficient still use high peak power and current.

    Qualcomm has certainly been worse in efficiency and performance across multiple SoC processing blocks for the past two generations due to switching to Samsung as its premium SoC fab, and I certainly have no kind words for them in making that decision. However, given what they had to work with in terms of die area and power draw, they did make the correct decision in de-emphasizing the CPU block for relatively more grunt in the other blocks.
  • ChrisGX - Thursday, December 16, 2021 - link

    Yes, that's right, but Samsung's inadequate process nodes are primarily responsible for Snapdragon parts (and all premium mobile SoCs based on licensed ARM IP) falling further behind. (Note: ARM SoCs are still seeing notable improvements in the execution rate of floating point workloads even as integer performance wallows.) For that reason, it will be very interesting to see how the TSMC fabbed MediaTek Dimensity 9000 acquits itself.

    The more telling part of this story, I think, is the failure of ARM and ARM licensees to manage this transition to high performance mobile SoCs while maintaining energy efficiency leadership. In the mobile phone world, today, Apple not only wears the performance crown but the energy efficiency crown as well.
  • Wilco1 - Saturday, December 18, 2021 - link

    There are claims that Dimensity 9000 has ~49% better perf/W than SD8gen1: https://www.breakinglatest.news/business/tsmcs-4nm...

    That means the efficiency gap was indeed due to process as suspected. There is definitely an advantage in using the most advanced process 1 year before everyone else.
  • Raqia - Saturday, December 18, 2021 - link

    Really good to see them pick up their game: bigger L3 cache and faster-clocked middle cores seem to be part of the reason efficiency and multicore performance are up, aside from process.

    Some rumors indicate the dual-sourced version of the S8G1 (SM8475) may be more efficient than the Samsung-node-fabbed version, but not by as much as expected. It seems like Qualcomm picks different sub-blocks to optimize with each generation: this gen it was most certainly the GPU. Looks like the CPU block can be expected to languish until they bring up the NUVIA-designed cores, likely in '24. As their initial focus was servers, NUVIA may not have had a suitable small core in the pipeline for '23, which matters much more for mobile than for laptop-scale devices.
  • Wilco1 - Sunday, December 19, 2021 - link

    Yes it looks like Mediatek have done a great job. The larger caches should help power efficiency as well indeed. It will be interesting to see how the larger L3 and system cache compare with the Snapdragon and Exynos in AnandTech's benchmarks.
  • Kamen Rider Blade - Tuesday, December 14, 2021 - link

    I wonder how much more performance Android would gain by going with C++ instead of Java.

    https://benchmarksgame-team.pages.debian.net/bench...

    There's a LOT of performance to be gained by going with C/C++/Rust.

    The fact that Android went with Java for its primary programming language while Apple went with a C/C++ derivative could be what explains the large gap.
  • jospoortvliet - Wednesday, December 15, 2021 - link

    Might make a difference in day-to-day use, but not in these benchmarks, as they already use native code.
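To make the "native code" point concrete, here is a minimal sketch (hypothetical names, not any particular benchmark's real API) of the usual Android split, where the hot loop lives in an NDK-built C/C++ shared library and Kotlin only crosses JNI to call it:

```kotlin
// "nativebench" and runKernel() are hypothetical; the C/C++ side would be built
// with the Android NDK and packaged as libnativebench.so inside the app.
object NativeBench {
    init {
        // Loads libnativebench.so, which contains the compiled benchmark kernel.
        System.loadLibrary("nativebench")
    }

    // Declared here, implemented natively over JNI; returns elapsed nanoseconds.
    external fun runKernel(iterations: Int): Long
}
```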
