Rosetta2: x86-64 Translation Performance

The new Apple Silicon Macs are based on a new ISA, which means the hardware isn’t capable of running the existing x86-based software that has been developed over the past 15 years. At least, not without help.

Apple’s Rosetta2 is a new ahead-of-time binary translation system which is able to translate existing x86-64 software to AArch64, and then run that code on the new Apple Silicon CPUs.

So, what do you have to do to run Rosetta2 and x86 apps? The answer is pretty much nothing. As long as a given application has an x86-64 code-path with at most SSE4.2 instructions, Rosetta2 and the new macOS Big Sur will take care of everything in the background, without you noticing any difference from a native application beyond its performance.
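
Incidentally, macOS does expose a documented sysctl key, “sysctl.proc_translated”, that lets a process ask whether it is currently being run through Rosetta2. A minimal Swift sketch (the helper name here is just illustrative):

    import Foundation

    // Query the "sysctl.proc_translated" key: it reports 1 when the current
    // process is being translated by Rosetta 2, 0 when it runs natively, and
    // the key simply doesn't exist on systems without Rosetta.
    func isTranslatedByRosetta2() -> Bool {
        var translated: Int32 = 0
        var size = MemoryLayout<Int32>.size
        let result = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
        guard result == 0 else { return false } // key missing: not translated
        return translated == 1
    }

    print(isTranslatedByRosetta2() ? "Running under Rosetta 2" : "Running natively")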

Actually, Apple’s transparent handling of things is maybe a little too transparent, as currently there’s no way to even tell whether an application on the App Store actually supports the new Apple Silicon or not. Hopefully this is something that we’ll see improved in future updates, serving also as an incentive for developers to port their applications to native code. Of course, it’s now possible for developers to target both x86-64 and AArch64 via “universal binaries”, which are essentially just the two architecture-specific binaries glued together into a single file.
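
To illustrate what that “glued together” packaging means for code, both slices of a universal binary are compiled from the same source, and a developer can branch per architecture at build time. A minimal Swift sketch (variable names are our own):

    // Conditional compilation selects the per-architecture code path;
    // each slice of a universal ("fat") binary gets its own branch.
    #if arch(arm64)
    let slice = "arm64 - runs natively on Apple Silicon"
    #elseif arch(x86_64)
    let slice = "x86_64 - runs natively on Intel, or through Rosetta2 on Apple Silicon"
    #else
    let slice = "another architecture"
    #endif

    print("This slice was compiled for: \(slice)")

Building for both architectures (for example with Xcode’s standard architectures setting) produces the two slices, which tools such as lipo combine into the single fat file that ships to users.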

We didn’t have time to investigate what software runs well and what doesn’t – I’m sure other publications out there will do a much better job across a wider variety of workloads – but I did want to post some more concrete numbers on how performance scales across different types of workloads, by running SPEC both natively and in x86-64 binary form through Rosetta2:

SPECint2006 - Rosetta2 vs Native Score %

In SPECint2006, there’s a wide range of performance scaling depending on the workload: some do quite well, while others not so much.

The workloads that do best under Rosetta2 primarily look to be those which have a larger memory footprint and interact more with memory, with performance scaling even above 90% of the native AArch64 binaries.

The workloads that do the worst are execution- and compute-heavy workloads, with the absolute worst scaling in the L1-resident 456.hmmer test, followed by 464.h264ref.

SPECfp2006(C/C++) - Rosetta2 vs Native Score %

In the fp2006 workloads, things are doing relatively well except for 470.lbm which has a tight instruction loop.

SPECint2017(C/C++) - Rosetta2 vs Native Score %

In the int2017 tests, what stands out is the horrible performance of 502.gcc_r, which achieves only 49.87% of the native workload’s performance – probably due to high code complexity and just overall uncommon code patterns.

SPECfp2017(C/C++) - Rosetta2 vs Native Score %

Finally, in fp2017, it looks like we’re again averaging in the 70-80% performance range, depending on the workload’s code.

Generally, all of these results should be considered outstanding given the feat that Apple is achieving here in terms of code translation technology. This is not a lacklustre emulator, but a full-fledged compatibility layer that, when combined with the outstanding performance of the Apple M1, allows for very real and usable performance of the existing software repertoire in Apple’s macOS ecosystem.

Comments

  • Spunjji - Tuesday, November 17, 2020

    @halo37253 I suspect you're largely correct based on what we're seeing in the benchmarks here.

    Of course, the answer to why Apple would do it is clear: they love vertical integration. They'll eventually be able to translate this into power/performance advantages that will be difficult to assail with apps written specifically for their platform.
  • mdriftmeyer - Friday, November 20, 2020

    Apple will have to modify their future M1s to accommodate PCIe, because a large portion of the professional audio/video world needs it - in fact we all rely on DMA over PCIe for Thunderbolt to reduce latency. There's nothing like throwing away a $5k-$25k stack of audio interfaces, mic pres and more just because Apple wants to drop that - or we simply dump Apple, move back to Windows, and deal with DLLs. I hate Windows, but I sure as hell won't drop expensive gear tied to Dante Ethernet and TB3 interfacing with various audio interfaces and rack-mount hardware just because Apple thinks the Pro market only needed the Mac Pro one-off before dropping us off a cliff.

    No one in the world of professional music uses Logic Pro stock plugins, and the average track has anywhere between 80-200 channel strips to manage one mix. If you think the M1 or its successors with this type of tightly coupled unified memory system will satisfy these users, you're just not familiar with how many resources professional music or film production requires.

    Let's not even talk about 3D modeling for F/X in films or full-blown PIXAR-style film shorts, never mind full-length motion pictures. Working in 8K - and soon 16K - film with real-time scrubbing will demand new versions of the Mac Pro's Afterburner and upgraded Xeons [or, if they were smart, Zens], but definitely not M-series SoCs.
  • Spunjji - Monday, November 23, 2020

    @mdriftmeyer - I don't see that any of the requirements you've mentioned here would preclude Apple from producing an M1 successor capable of fulfilling them. In particular you mentioned 8K video scrubbing, which the M1 can already do better than the average Xeon. I doubt they'd throw away the audio market entirely over this switch - I guess we'll just have to wait and see what the next chips look like.
  • varase - Wednesday, November 25, 2020

    Most people are looking at these first Apple Silicon Macs wrong - these aren't Apple's powerhouse machines: they're simply the annual spec bump of the lowest-end Apple computers, with DCI-P3 displays, Wi-Fi 6, and the new Apple Silicon M1 SoC.

    They have the same limitations as the machines they replace - 16 GB RAM and two Thunderbolt ports.

    These are the machines you give to a student or teacher or a lawyer or an accountant or a work-at-home information worker - folks who need a decently performing machine but don't want to lug around a huge powerhouse (or pay for one, for that matter). They're still marketed at the same market segment, though they now have a vastly expanded compute power envelope.

    The real powerhouses will probably come next year with the M1x (or whatever), rumored to have eight Firestorm and four Icestorm cores. Apple has yet to decide on an external memory interconnect and multichannel PCIe scheme, if they decide to move in that direction.

    Other CPU and GPU vendors and OEM computer makers, take notice - your businesses are now on limited life support. These new Apple Silicon models can compete up through the mid-to-high tier of computer purchases, and if, as I expect, Apple sells a ton of these, many will go to your bread-and-butter customers.

    In fact, I suspect that Apple - once they recover their R&D costs - will push the prices of these machines lower while still maintaining their margins, while competing computer makers will still have to pay Intel, AMD, Qualcomm, and Nvidia for their expensive processors, whereas Apple's cost per SoC goes down the more they manufacture. Competing computer makers may soon be squeezed by Apple Silicon price/performance on one side and high component prices on the other. Expect them to demand lower processor prices from the above manufacturers so they can more readily compete, and the processor makers may have to comply, because if OEM computer manufacturers go under or stop making competing models, they will see a diminishing customer base.

    I believe the biggest costs for a chip fab are startup costs - no matter what processor vendors would like you to believe. Design and fab startup are _expensive_ - but once you start getting decent yields, the additional costs are silicon wafers and QA. The more of these units Apple can move, the lower the per unit cost and the better the profits.

    The real threat to OEM computer and processor makers is economic - and the fact that consumer publications like Consumer Reports will probably _gush_ over the improvements in battery life and performance.

    Most consumers are not Windows or macOS or ChromeOS fanboys - they just want a computer which is affordable, has decent build quality, and gets the job done. There are aspirational aspects to computer purchases, and M1 computers shoot waaayyy above their peers. This can mean a potential buyer _doesn't_ have to buy way up the line for capabilities he or she may want sometime during their ownership window, and these computers will last a long, long time and will not suffer slowdowns due to software feature creep.
  • Eric S - Tuesday, November 17, 2020

    Remember that this is designed to be Apple’s lowest end Mac chip. Their Intel i3. Wait until the big chips come out next year.
  • BushLin - Wednesday, November 18, 2020

    ... Your speculation may or may not be correct, but next year will see 5nm Zen 4, which has actually been announced rather than just rumored.
  • jospoortvliet - Wednesday, November 18, 2020

    Sure, and a 3nm M2. Different generation with different processes etc. But today, the M1 has the best single-core performance and, at lower power, comes close to octa-cores despite having only 4 fast and 4 slow cores. I wish I could buy it with Linux on it...
  • dysonlu - Sunday, February 21, 2021

    "makes we wonder why Apple is so willing to fracture their already pretty small Mac OS fanbase"

    You have it upside down. It is exactly BECAUSE it has a small fanbase that it can afford to do this kind of migration. (The large and heterogeneous "fanbase" of Windows is the big Achilles' heel for Microsoft when it comes to making any significant change.) There will be very little "fracture" of Apple's fanbase, if any at all. The fans will gladly move to Mx CPUs given the advantages over Intel.
  • adriaaaaan - Thursday, November 19, 2020

    People are giving Apple too much credit here; this is only impressive because of the process advantage, which has nothing to do with Apple.

    People are forgetting that Macs have a tiny market share, and that's not likely to change any time soon. You wouldn't know it, because journos tend to use Macs and therefore think everyone does.

    If anything, I hope this kicks AMD into gear; they are still releasing GCN designs. Let's see who's boss when they release 5nm RDNA 2.
  • Spunjji - Thursday, November 19, 2020

    "this is only impressive because of the process advantage"

    False. A crap core on a high-tech process will still produce bad results; you only have to look at the last bunch of Zhaoxin CPUs based on the old VIA tech.

    If this were just about process node you'd expect to see lower power but with limited performance. As it is, they manage both extremely low power *and* very competitive performance. Beating Intel is no small feat, even in their current incarnation.
