At this year's I/O conference, Google finally publicly announced its plans for its new runtime on Android. The Android RunTime, ART, is the successor and replacement for Dalvik, the virtual machine on which Android Java code is executed. We've had traces and previews of it available on KitKat devices since last fall, but there wasn't much information in terms of technical details or the direction Google was heading with it.

Unlike other mobile platforms such as iOS, Windows or Tizen, which run software compiled natively for their specific hardware architecture, the majority of Android software ships as generic bytecode which is translated into native instructions for the hardware on the device itself.

Dalvik started out in the earliest Android versions as a simple VM with little complexity. Over time, however, Google needed to address performance concerns and keep up with the industry's hardware advances. Google eventually added a JIT compiler to Dalvik with Android 2.2, added multi-threading capabilities, and generally tried to improve it piece by piece.

Over the last few years, however, the ecosystem had been outpacing Dalvik's development, so Google set out to build something new to serve as a solid foundation for the future: a runtime that could scale with the performance of today's and tomorrow's 8-core devices, large storage capacities, and large working memories.

Thus ART was born.

Architecture

First, ART is designed to be fully compatible with Dalvik's existing byte-code format, "dex" (Dalvik executable). As such, from a developer's perspective there are no changes at all in terms of writing applications for one runtime or the other, and no need to worry about compatibility.

The big paradigm shift that ART brings is that instead of using a Just-in-Time (JIT) compiler, it compiles application code Ahead-of-Time (AOT). The runtime goes from having to compile bytecode to native code each time you run an application to doing it only once; every subsequent execution from that point forward runs from the existing compiled native code.
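The practical difference is easy to feel in a microbenchmark. Under a JIT, the first calls to a hot method run interpreted until the compiler kicks in; under AOT, the code is native from the very first invocation. A plain-Java sketch of that warm-up effect (a hypothetical example: the class and method names are my own, and the absolute timings depend entirely on the VM you run it on):

```java
public class WarmupDemo {
    // A small hot method: on a JIT VM it starts out interpreted and is
    // only compiled after enough invocations; under an AOT runtime like
    // ART it would already be native code on the very first call.
    static long sumSquares(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += (long) i * i;
        return s;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long result = sumSquares(1_000_000);
        long coldNs = System.nanoTime() - t0;          // first, "cold" call

        for (int i = 0; i < 1_000; i++) sumSquares(1_000_000); // let the JIT warm up

        long t1 = System.nanoTime();
        sumSquares(1_000_000);
        long hotNs = System.nanoTime() - t1;           // same work, now compiled

        System.out.println("result=" + result + " coldNs=" + coldNs + " hotNs=" + hotNs);
    }
}
```

On a JIT VM the cold call is typically much slower than the hot one; AOT compilation removes that gap (and its per-launch cost) entirely.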

Of course, these native translations of applications take up space, and this new methodology has only become feasible thanks to the vast increases in available storage on today's devices, a big shift from the early beginnings of Android hardware.

This shift opens up a large number of optimizations which were not possible in the past: because code is optimized and compiled only once, it is worth optimizing it really well that one time. Google claims that it is now able to apply higher-level optimizations across the whole of an application's code-base, as the compiler has an overview of the totality of the code, whereas the current JIT compiler only optimizes in local/method-sized chunks. Overhead such as exception checks in code is largely removed, and method and interface calls are vastly sped up. The component that does this is the new "dex2oat" compiler, replacing Dalvik's "dexopt" equivalent. Odex files (optimized dex) also disappear in ART, replaced by ELF files.
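As an illustration of the kind of whole-program optimization this enables (a hypothetical sketch of the general technique, not actual dex2oat output; all names here are invented): when a compiler can see every implementation of an interface in the application, a call site with a single possible target can be devirtualized and inlined, something a method-local JIT cannot safely assume.

```java
interface Shape {
    double area();
}

// If an AOT compiler can prove that Circle is the only Shape
// implementation in the whole application, calls made through the
// interface can be devirtualized into direct (or inlined) calls to
// Circle.area(); a method-local JIT lacks that global view.
final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class Devirt {
    // Virtual dispatch in the source; with whole-program AOT analysis
    // this loop body can compile down to straight-line native code.
    static double totalArea(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Circle(2.0) };
        System.out.println(totalArea(shapes));
    }
}
```

The same global view is what lets the compiler drop redundant exception checks and speed up method and interface calls, as described above.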

Because ART compiles to an ELF executable, the kernel can now handle paging of code pages - this promises much better memory management, and lower memory usage too. I'm curious what effect KSM (kernel same-page merging) will have on ART; it's definitely something to keep an eye on.

The implications for battery life are also significant: since there is no more interpretation or JIT work to be done while an app runs, CPU cycles are saved directly, and with them, power.

The only downside to all of this is that the one-time compilation takes more time to complete. A device's first boot and an application's first start-up will take noticeably longer than on an equivalent Dalvik system. Google claims this is not too dramatic, as it expects the finished shipping runtime to be equivalent to or even faster than Dalvik in these respects.

The performance gains over Dalvik are significant, as pictured above: roughly a 2x improvement in speed for code running on the VM. Google claimed that applications such as Chessbench, which show an almost 3x increase, are a more representative projection of the real-world gains that can be expected once the final release of Android L is made available.

Garbage Collection: Theory and Practice

  • jabber - Wednesday, July 2, 2014 - link

    Been using ART on my Nexus 4 since Jellybean came out. No issues. Life carries on as normal.
  • Krysto - Wednesday, July 2, 2014 - link

    I can't believe Google hasn't also adopted F2FS in Android L. It would've been perfect. How is it that they put it in Motorola devices a year ago, and they still can't make it default on stock Android?
  • uhuznaa - Wednesday, July 2, 2014 - link

    Because changing the FS in an update sucks. You may see this in new devices, but not in updates for existing devices.
  • phoenix_rizzen - Wednesday, July 2, 2014 - link

    Not really. It just depends on how the update is done.

    If it's a "nuke 'n' pave" restore (like the Dev Preview or System Images), then it's not an issue. Back up your data to the PC/cloud, reformat all partitions, install, carry on.

    If it's an in-place upgrade, then it becomes tricky. Unless, of course, you are using F2FS for the /data filesystem, which (really) is the only one that benefits from it. You don't need to make /sdcard (internal storage) F2FS, and you don't want to make /ext-sd (SDCard) F2FS as then you lose all non-Linux reader support. Nothing stopping you from using those as F2FS, though.

    I'd really like to get a custom recovery for the G2 that allowed you to select which FS to use for each partition, and a ROM with a kernel that supported it, though. Just to try it out, and see how it works. :) Any takers? ;)
  • moh.moh - Wednesday, July 2, 2014 - link

    Yeah, I am really hoping for a big push towards F2FS in the coming months. I mean, Moto has shown the significant increase in performance which we can get.
  • Krysto - Wednesday, July 2, 2014 - link

    > but bad programming practices such as overloading the UI thread is something that Android has to deal with on a regular basis.

    I believe they've also added a new UI thread now in L. You should look into that. I think it's in one of Chet Haase's sessions, possibly in "What's new in Android".

    I think I found it: https://www.youtube.com/watch?v=3TtVsy98ces#t=554
  • Krysto - Wednesday, July 2, 2014 - link

    > Google claims that 85% of all current Play Store apps are immediately ready to switch over to 64 bit - which would mean that only 15% of applications have some kind of native code that needs targeted recompiling by the developer to make use of 64-bit architectures.

    Does this mean that OEMs could soon use "pure" AArch64 designs? I think you can implement ARMv8 purely in 64-bit mode, with no 32-bit compatibility, too. I imagine that would make the chips less expensive and also more efficient for OEMs.

    I'm not familiar with how Intel designs its chips, but I think it would be a lot harder for Intel to get rid of the "32-bit" parts; they are pretty much stuck with chips being both 32-bit and 64-bit, at least for the next few years, until nobody in the world needs 32-bit anymore on any platform Intel chips run on, and then they could redesign their architecture to be 64-bit only.
  • _zenith - Wednesday, July 2, 2014 - link

    x86 also has a 16-bit mode AFAIK, so it's more complicated than that still. [80]x86 is just a bitch of an ISA.
  • name99 - Wednesday, July 2, 2014 - link

    I've long suggested that this is exactly what Apple will do. I don't think they'll ditch 32-bit support for the A8, but I honestly would not be surprised if the A9 comes without 32-bit support and iOS9 has a 32-bit SW emulator to handle old apps. Then by iOS 11 or so they just ditch the 32-bit emulator.

    Other vendors have the problem that they don't have a tight control over the entire eco-system. Qualcomm, for example, are not making Android chips, they're making ARM chips --- for anyone who wants an ARM chip. It's something of a gamble to just ditch 32-bit compatibility and tell anyone who wants that "Sorry, you should go buy from one of these competitors". Most companies (foolishly, IMHO) weigh the cost of backward compatibility as very low, and the cost of losing a sale (even if it's to a small and dying industry segment) as very high; so I suspect they're not even going to think about such an aggressive move until years after Apple does it.
  • coachingjoy - Wednesday, July 2, 2014 - link

    Thanks for the info.
    Nice article.
