Rosetta2: x86-64 Translation Performance

Because the new Apple Silicon Macs are based on a different ISA, the hardware isn’t natively capable of running the existing x86-based software that has been developed over the past 15 years. At least, not without help.

Apple’s Rosetta2 is an ahead-of-time binary translation system that translates existing x86-64 software to AArch64, allowing that code to run on the new Apple Silicon CPUs.

So, what do you have to do to run Rosetta2 and x86 apps? Pretty much nothing. As long as a given application has an x86-64 code path with at most SSE4.2 instructions, Rosetta2 and the new macOS Big Sur take care of everything in the background, and beyond performance you won’t notice any difference from a native application.
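The translation really is invisible to the user, but a process can still check at runtime whether it is being translated, via the macOS `sysctl.proc_translated` key (1 = translated, 0 = native). A minimal sketch; the helper name is my own, and the key simply doesn’t exist on systems where Rosetta2 doesn’t apply:

```python
import subprocess
import sys

def running_under_rosetta():
    """Return True if this process is being translated by Rosetta2,
    False if it is running natively on macOS, and None elsewhere.

    Queries the macOS sysctl key ``sysctl.proc_translated``; on
    non-macOS systems, or on Macs where the key does not exist,
    the answer is simply None.
    """
    if sys.platform != "darwin":
        return None  # not macOS, Rosetta2 doesn't apply
    try:
        out = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=True,
        )
    except (subprocess.CalledProcessError, FileNotFoundError):
        return None  # key missing (e.g. older macOS on Intel)
    return out.stdout.strip() == "1"
```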

Actually, Apple’s transparent handling of things is maybe a little too transparent, as currently there’s no way to even tell whether an application on the App Store supports the new Apple Silicon natively or not. Hopefully this is something we’ll see improved in future updates, as it would also serve as an incentive for developers to port their applications to native code. Of course, it’s now possible for developers to target both x86-64 and AArch64 via “universal binaries”, which are essentially the two architectures’ binaries glued together.
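The “glued together” description is quite literal: a universal binary starts with a Mach-O “fat” header that just indexes the per-architecture slices. A simplified sketch of reading that header, mapping only the two CPU types relevant here (in practice you’d simply run `lipo -archs` or `file` on the binary):

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic of a fat (universal) binary
CPU_NAMES = {0x01000007: "x86_64", 0x0100000C: "arm64"}  # minimal subset

def universal_archs(header):
    """List the architectures indexed by a Mach-O fat header.

    The header is: magic (4 bytes), number of slices (4 bytes), then one
    20-byte fat_arch record per slice whose first field is the CPU type.
    Returns an empty list if the bytes aren't a fat binary.
    """
    if len(header) < 8:
        return []
    magic, nfat = struct.unpack_from(">II", header, 0)
    if magic != FAT_MAGIC:
        return []  # thin (single-architecture) binary or other file
    archs = []
    for i in range(nfat):
        cputype, = struct.unpack_from(">I", header, 8 + i * 20)
        archs.append(CPU_NAMES.get(cputype, hex(cputype)))
    return archs
```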

We didn’t have time to investigate what software runs well and what doesn’t; I’m sure other publications will do a much better job across a wider variety of workloads. I did, however, want to post some more concrete numbers on how performance scales across different types of workloads, by running SPEC both natively and in x86-64 binary form through Rosetta2:
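For reference, the “score %” metric in the charts that follow is simply the translated score as a fraction of the native score for the same workload. A trivial sketch, with made-up scores rather than actual measured results:

```python
def scaling_percent(rosetta_score, native_score):
    """Rosetta2 (translated x86-64) SPEC score as a percentage of the
    native AArch64 score for the same workload."""
    return rosetta_score / native_score * 100.0

# Hypothetical illustration: a native score of 60.0 and a translated
# score of 48.0 would chart as 80% scaling.
print(round(scaling_percent(48.0, 60.0), 2))
```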

SPECint2006 - Rosetta2 vs Native Score %

In SPECint2006, performance scaling varies widely depending on the workload: some tests do quite well, while others not so much.

The workloads that do best under Rosetta2 appear primarily to be those with a larger memory footprint and more memory interaction, scaling to above 90% of the performance of the native AArch64 binaries.

The workloads that do worst are execution- and compute-heavy ones, with the absolute worst scaling in the L1-resident 456.hmmer test, followed by 464.h264ref.

SPECfp2006(C/C++) - Rosetta2 vs Native Score %

In the fp2006 workloads, things do relatively well, except for 470.lbm, which has a tight instruction loop.

SPECint2017(C/C++) - Rosetta2 vs Native Score %

In the int2017 tests, what stands out is the horrible performance of 502.gcc_r, which achieves only 49.87% of the native workload’s performance, probably due to high code complexity and overall uncommon code patterns.

SPECfp2017(C/C++) - Rosetta2 vs Native Score %

Finally, in fp2017, it looks like we’re again averaging in the 70-80% performance range, depending on the workload’s code.

Generally, all of these results should be considered outstanding given the feat Apple is achieving here in code translation technology. This is not a lacklustre emulator, but a full-fledged compatibility layer that, combined with the outstanding performance of the Apple M1, delivers very real and usable performance for the existing software repertoire in Apple’s macOS ecosystem.
