At the latest I/O conference, Google finally publicly announced its plans for its new runtime on Android. The Android RunTime, ART, is the successor to and replacement for Dalvik, the virtual machine on which Android's Java code is executed. We've had traces and previews of it available on KitKat devices since last fall, but there wasn't much information in terms of technical details or the direction Google was heading with it.

Contrary to other mobile platforms such as iOS, Windows, or Tizen, which run software compiled natively for their specific hardware architecture, the majority of Android software is based around generic byte-code that is translated into native instructions for the hardware on the device itself.
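As a toy illustration of what "translating byte-code on the device" means in principle (this is an invented sketch, not Dalvik's actual instruction set or implementation), a VM without a JIT steps through generic byte-code one instruction at a time, paying a dispatch cost on every run:

```java
// Toy illustration only: a minimal stack-based bytecode interpreter,
// loosely in the spirit of what a VM like Dalvik does before any JIT
// or AOT compilation. The opcode values here are invented for the sketch.
import java.util.ArrayDeque;
import java.util.Deque;

public class ToyInterpreter {
    // Invented opcodes: PUSH <n> pushes a constant; ADD/MUL pop two values
    // and push the result.
    static final int PUSH = 0, ADD = 1, MUL = 2;

    static int run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0;
        while (pc < code.length) {
            // Dispatch happens per-instruction, on every single execution.
            switch (code[pc++]) {
                case PUSH: stack.push(code[pc++]); break;
                case ADD:  stack.push(stack.pop() + stack.pop()); break;
                case MUL:  stack.push(stack.pop() * stack.pop()); break;
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        // Computes (2 + 3) * 4.
        int[] program = {PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL};
        System.out.println(run(program)); // prints 20
    }
}
```

The per-instruction dispatch loop is exactly the overhead that JIT and AOT compilation aim to eliminate by emitting native code instead.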

Dalvik started out in the earliest Android versions as a simple VM with little complexity. With time, however, Google needed to address performance concerns and keep up with the industry's hardware advances. Google eventually added a JIT compiler to Dalvik with Android's 2.2 release, added multi-threading capabilities, and generally improved the runtime piece by piece.

Over the last few years, however, the ecosystem had been outpacing Dalvik's development, so Google sought to build something new to serve as a solid foundation for the future, one that could scale with the performance of today's and tomorrow's 8-core devices, large storage capacities, and large working memories.

Thus ART was born.

Architecture

First, ART is designed to be fully compatible with Dalvik's existing byte-code format, "dex" (Dalvik executable). As such, from a developer's perspective there are no changes at all: applications are written the same way for either runtime, with no compatibility concerns.

The big paradigm shift that ART brings is that instead of being a Just-in-Time (JIT) compiler, it compiles application code Ahead-of-Time (AOT). The runtime goes from compiling bytecode to native code each time you run an application to doing it only once; every subsequent execution from that point forward runs the already-compiled native code.
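The compile-once idea can be sketched in miniature (again an invented example, not ART's real pipeline): translate the "bytecode" a single time into an executable form, keep that artifact around, and let every later execution run it directly with no interpretation left to do.

```java
// Toy sketch of the JIT -> AOT shift (invented example, not ART's real
// pipeline): instead of re-interpreting the "bytecode" on every run, we
// translate it once and reuse that translation on every subsequent call.
import java.util.function.IntUnaryOperator;

public class AotSketch {
    // Invented one-op "bytecode": each element adds its value to the input.
    static IntUnaryOperator compileOnce(int[] addends) {
        // This work happens a single time (in ART, dex2oat does its work
        // at install time); the returned function is the kept "native"
        // artifact.
        int total = 0;
        for (int a : addends) total += a;
        final int folded = total;   // constant-folded during "compilation"
        return x -> x + folded;     // no per-run interpretation remains
    }

    public static void main(String[] args) {
        IntUnaryOperator compiled = compileOnce(new int[]{1, 2, 3});
        // Every execution now just runs the compiled form.
        System.out.println(compiled.applyAsInt(10)); // prints 16
        System.out.println(compiled.applyAsInt(0));  // prints 6
    }
}
```

The trade-off the article describes falls out of this shape: the translation cost is paid once up front instead of being spread (and repeated) across every run.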

Of course, these native translations of applications take up space, and this new methodology has only become feasible due to the vast increases in available storage on today's devices, a big shift from the early beginnings of Android.

This shift opens up a large number of optimizations that were not possible in the past: because code is optimized and compiled only once, it is worth optimizing it really well that one time. Google claims it is now able to apply higher-level optimizations across the whole of an application's code-base, as the compiler has an overview of the totality of the code, whereas the current JIT compiler only optimizes in local/method-sized chunks. Overhead such as exception checks in code is largely removed, and method and interface calls are vastly sped up. The component that does this is the new "dex2oat" compiler, which replaces Dalvik's "dexopt" equivalent. Odex files (optimized dex) also disappear under ART, replaced by ELF files.

Because ART compiles an ELF executable, the kernel can now handle the paging of code pages. This promises much better memory management, and lower memory usage too. I'm curious what effect KSM (kernel same-page merging) has on ART; it's definitely something to keep an eye on.

The implications for battery life are also significant: since there is no more interpretation or JIT work to be done while an app runs, CPU cycles, and thus power, are saved directly.

The only downside to all of this is that the one-time compilation takes more time to complete. A device's first boot and an application's first start-up will take noticeably longer than on an equivalent Dalvik system. Google claims this will not be too dramatic, as it expects the finished shipping runtime to be equivalent to or even faster than Dalvik in these respects.

The performance gains over Dalvik are significant, as pictured above: roughly a 2x improvement in speed for code running on the VM. Google claimed that benchmarks such as Chessbench, which show an almost 3x increase, are a more representative projection of the real-world gains to be expected once the final release of Android L is made available.

Garbage Collection: Theory and Practice

  • moh.moh - Wednesday, July 2, 2014 - link

    Can somebody confirm or deny that the ART from KitKat is the same as the ART from L? What I have read points to ART from Kitkat being different from ART on L.
  • p3ngwin1 - Wednesday, July 2, 2014 - link

    ART in the existing Preview release of "L" already is more advanced than KitKat's.

    the final release of ART on "L" will be even more changed than the current Preview of "L".
  • phoenix_rizzen - Wednesday, July 2, 2014 - link

    Yeah, it's an evolutionary upgrade, not a revolutionary whole-hog replacement.

    Just as Dalvik in 4.4 is different from Dalvik in 2.3; it's an evolutionary upgrade.
  • tipoo - Thursday, July 3, 2014 - link

    The current build of L is more developed and better performing with ART than Kitkat, as will the final be.
  • raghu.ncstate - Wednesday, July 2, 2014 - link

    "Google was not happy with this and introduced a new memory allocator in the Linux kernel, replacing the currently used “malloc” allocator" - The malloc allocator is not in the kernel. I don't think there was any change to the Linux kernel in this. Malloc and rosalloc are both done in user space in the ART lib. Both probably use the sbrk() system call to get memory from the kernel. Also, a quick look at the rosalloc.cc code shows it is written in C++, so it definitely cannot be in the Linux kernel.
  • jospoortvliet - Thursday, July 3, 2014 - link

    On that C++ point - Linus has been coding C++ - http://liveblue.wordpress.com/2013/11/28/subsurfac... so who knows what the future holds ;-)
  • Haravikk - Wednesday, July 2, 2014 - link

    The article mentions that startup times for devices will be worse with ART, but I don't understand why; surely if the code has already been compiled it will simply be cached somewhere, so it's just a case of executing it directly. In fact, this should mean that startup should be faster than normal.

    In fact, the space requirement is another question mark; once an application has been compiled, does the byte code even need to be retained? Surely it can be discarded in that case? Though I suppose it's required to ensure that signatures don't change, it seems like the OS could enforce that differently (i.e - as long the byte code validated pre-compilation, then the compiled code is considered signed as well)?

    I dunno, it just seems to me like there are plenty of ways to not only avoid slow-downs or extra storage use, but in fact there are ways to use ahead of time compilation to accelerate startup and reduce storage use.
  • Stochastic - Wednesday, July 2, 2014 - link

    I think you're correct. First time device startup and app installations will be longer, but once the compilation is done startup times shouldn't be slower.
  • metayoshi - Wednesday, July 2, 2014 - link

    It only makes sense that the application's first startup will take a long time. That first startup is where the Ahead of Time compilation is happening. Where else would it happen? Application startups after that will be much quicker, though, since the AOT compilation was already done beforehand.
  • phoenix_rizzen - Wednesday, July 2, 2014 - link

    AoT happens when the app is installed on the phone; or during the first boot after changing the runtime to ART.
