Hello everybody, good morning. I hope all of you are excited for a nice, wonderful day today. My name is Satyam. I work at Little Eye Labs, where we build performance analysis tools for Android apps. I've been working on this tool for around the last one and a half years, and I've been learning about Dalvik along the way, so I'm here to share some of my learnings with you. Just one small change to the title: it says deep dive, but I won't do a deep dive, just a shallow dive. I don't want to make you uncomfortable; I wanted to keep it a little easier.

This is a quick slide about the Android system architecture. These are the various parts of Android, and you'll see I've highlighted the Dalvik piece, otherwise I thought it would be a little difficult to find. There it is, one small piece among many others. One thing to note, though: it is the part that runs your Android apps, so in a way it is very important. Nevertheless, the other parts matter too, and Dalvik has been designed to work with all of them, taking the overall Android motivation into account. The whole Android system was designed around some constraints. One of them is memory: it has to run on devices with very little RAM, and they didn't want to use any swap space, so everything has to work within low memory. CPU: it has to work on low-end CPUs. And more importantly, it should consume much less battery. These are the design constraints that Dalvik is based on, and we'll look at how Dalvik addresses many of these factors as we go.

So what does the Dalvik VM really do? It is basically like a JVM. You write your source code in Java; that gets converted into Java bytecode, and that in turn gets converted into Dalvik bytecode. This Dalvik bytecode is what runs on the Dalvik VM; the Dalvik VM just runs your Dalvik bytecode. Concretely, like the JVM, it loads the DEX files, it has to interpret the bytecode and execute it on the hardware, it may have to just-in-time compile some of that bytecode directly into machine code, and apart from that, it also manages your memory.

Let's start with what happens when the Dalvik VM has to load a DEX file. It needs a defined file format, and Dalvik designed that format from the start with an eye on memory and on what happens when the file is loaded. If you look at this figure, like most file formats it has a header, followed by the string IDs. The string IDs cover any strings used in your source code: for example, if you print the line "Hello world", then "Hello world" is a string, println is a string, the method names are strings, the class names are strings, and so on.
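To make that concrete, here is a tiny, hypothetical class (not from the slides); the comments list the kinds of entries that would typically land in the DEX string table for it. The exact set depends on what dx actually emits.

```java
// Hypothetical example: each of these names and literals becomes an entry in
// the DEX string_ids table, and each distinct string is stored exactly once.
public class Hello {
    public static void main(String[] args) {
        // "Hello world"                         -> a string literal entry
        // "Hello", "main", "println", "out"     -> class/method/field name entries
        // "Ljava/lang/String;", "Ljava/io/PrintStream;" -> type descriptor strings
        System.out.println("Hello world");
    }
}
```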
That is followed by the type IDs: not only the types you declare in your classes, but every type you reference from your classes; information about all of those is there. Then come the proto IDs, the prototypes, that is, the signatures of all the methods you use. Then the field IDs for the fields, and the method IDs, which hold a reference to a prototype in addition to the declaring type and the method name, and then the class definitions and the data. In fact, this is probably the more accurate figure: you can see the references jump around all over the place. Basically, a type ID does not store the full string; it just says that the name of this type is that string ID over there, and the string ID says that the string being referenced actually lives in the data section. So really, everything lives in the data section, and the other sections are tables of indexes into it. This is meant to avoid a lot of duplication.

If I go back and connect this with Java, since Dalvik is still following Java here: a JAR file that we normally build consists of many class files, and each class file sits in the JAR as-is; they are not merged. Sorry, the slide is a little hard to read, but on the left side you can see all the class files, each with its own constant pool; here is the data, and these are the constant pools for each class. In the DEX file, all the class files have been combined into one single file, so the constant pools that were kept separately per class are merged into one, and all the data is filed together.

Let me illustrate with a small example. We have an interface, a class (Bloat on the slide) that implements it, and another class that uses it; three classes, one implementing the interface and one using the other class. Look at how this comes out in a JAR file, and count the strings. In the using class, its own name is there and the method signatures are there; and if you look at Bloat, the same method signatures are there again, and the same class-name strings appear again. So there is a lot of duplication of strings. In the DEX file (it doesn't look as clean here, with references jumping all over) there is no duplication at all. The file is laid out so that every use of a string points at one single string entry, every method ID points at the same shared entry, and prototypes are shared too: if one method's signature is identical to another's, they share the same prototype entry. It tries to squeeze every byte it can, and that's why the DEX file ends up somewhat compressed. If I compare the .dex against the .jar, in terms of size the .dex comes to about 50% of the uncompressed JAR, at least that's the figure I've been reading.
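As a rough sketch of what that deduplication means (the class names here are made up, not the ones on the slide): two classes that declare the same signature end up sharing a single proto entry, and every occurrence of the same name shares a single string entry.

```java
// Hypothetical illustration of DEX deduplication.
interface Greeter {
    String greet(String name);          // signature (Ljava/lang/String;)Ljava/lang/String;
}

class PoliteGreeter implements Greeter {
    public String greet(String name) {  // same signature -> shares the proto entry above
        return "Hello, " + name;
    }
}

class GreeterUser {
    String use(Greeter g) {
        // The name "greet" and the descriptor "LGreeter;" are referenced from
        // several classes here, but a DEX file stores each of them once;
        // a JAR would repeat them in every class file's constant pool.
        return g.greet("world");
    }
}
```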
So although the compressed JAR may actually ship smaller, so the download size can be a little different, when you load it into memory and expand the class files and so on it blows up again, and the DEX ends up using less memory. One thing to keep in mind, though: with a JAR, a class is really loaded, or in fact even located, only when you actually use it; if you never use it, it is never created at all. With the DEX, the whole file is mapped into memory; the class object still isn't created until you use the class, but the bytes are there in memory. So avoiding class files you think you don't need doesn't save you very much, because the class objects aren't created anyway; in that sense it doesn't differ much, but you may still get a few extra bytes free.

There is also another advantage of using DEX, which I'll get to, but first let me step back to process memory usage in general. Think about how things work in the Windows or Unix worlds with native applications, as opposed to Java applications. There you have shared, dynamically linked libraries: many processes use libc, or something like user32.dll on the Windows side, libc on the Unix side. All applications use them, and the way operating systems handle this is by sharing the text section of these libraries in memory. If program one and program two both load libc, libc is mapped into each process's virtual memory, but in physical memory both of them share the same text pages. This is something Java itself is missing: you have your class files, and if you're running multiple JVMs on a machine, there is no good way of sharing that class data so that it sits only once in physical memory, which would reduce the RAM usage of the whole system, because there's really nothing different between the copies. Android has tried to solve this problem using DEX, along with the other things I'm coming to.

One of those things is the Zygote. Zygote is a process created when the device boots. If you look on your Android device, there is /system/bin/app_process, and that is what actually starts the Zygote; if you look at it, you can see how the Zygote process gets started. It comes up during boot, loads all the framework classes and so on, and then just waits on a socket, so that when a request for a new app comes in on that socket, it simply forks itself. Forking makes it very fast for the operating system to create the new application: it just has to mark all the pages the parent process can read or write for the new process. Text pages stay read-only and shared, and the data pages are marked copy-on-write. Copy-on-write means that if the new process actually writes into a data page, a separate copy of that page is created for it and things continue; the same happens for the existing process. So the launch of the application is faster, and at the same time the pages can be shared. That is the main reason the Zygote exists.
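To make the idea concrete, here is a minimal sketch of Zygote-style class preloading. This is not the AOSP Zygote code, and the file path is hypothetical; it only shows the idea of loading framework classes once in a parent process so that forked children share the already-initialized pages.

```java
// A minimal sketch, assuming a plain text list of class names to preload.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class PreloadSketch {
    // Hypothetical path; AOSP keeps its own "preloaded-classes" list.
    static final String PRELOAD_LIST = "/system/etc/preloaded-classes";

    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(PRELOAD_LIST))) {
            String name;
            while ((name = in.readLine()) != null) {
                if (name.isEmpty() || name.startsWith("#")) continue;
                try {
                    // Loading the class here means the pages backing it exist
                    // before any app process is forked from this one.
                    Class.forName(name);
                } catch (ClassNotFoundException ignored) {
                    // A listed class may not exist on every build; skip it.
                }
            }
        }
        // After this point the real Zygote would sit on a socket and fork a
        // child process for each app launch request.
    }
}
```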
Let me illustrate the application launch through the Zygote a little. This is what happens; sorry, I think the slide is a little unreadable. You tap your application, and that goes to the launcher. The launcher calls the Activity Manager, saying that such-and-such activity needs to be launched, and it checks whether that app is already in memory. If it is, it just brings it back; otherwise it sends a message to the Zygote. The Zygote in turn forks a process, returns the PID, and then invokes the ActivityThread of the app to get it started, which internally loads the classes and gets going. That is how the Zygote actually starts your application.

Now, back to the memory part we were talking about. Excuse me one minute. As I was saying, on the plain Java side we are not really sharing the classes. Here, because the Zygote is the process that starts first, and every app that gets created is just a fork of it, you don't only get those pages: you also share the text regions, you share the data regions, and the loaded DEX structures that were created get shared too. So although your process uses up some of its virtual address space for all this, it is not occupying extra physical memory on the system. Your RAM is protected, even if the process appears to take more memory. If you launch an app that does nothing and bring up MAT, you'll already see 10 MB or so attributed to your process; that is just what comes from the Zygote, all that memory being mapped into your app's address space.

Let me explain with the next slide. These are the various parts of an application's memory, starting with PSS. PSS is the proportional set size of your application. Some pages get shared across various apps and some are only yours: the ones that are shared are called shared pages, and the ones only for you are called private pages. Both shared and private pages are further split into two kinds, dirty and clean. Dirty pages are ones which have been modified, so if you throw them out of memory you cannot restore them; clean pages are ones that can be restored. Restorable pages are things like the DEX file pages, because they are backed by files on storage: if those pages are dropped from RAM, they can be loaded back from the files. So clean pages can be thrown away, which makes them cheaper.
But the dirty pages cannot be thrown away, otherwise your app would stop working. So the proportional set size is calculated for each process like this: all of its private pages are counted in full, and for the shared pages it looks at how many apps are actually sharing them and divides by N. For example, a 300 kB region shared by three processes contributes 100 kB to each process's PSS. That is the PSS attributed to each of your processes. The reason PSS is so useful for Android is that it uses it to figure out which process it should knock out: if it thinks it is running low on RAM, it first tries to remove your cached processes, then it looks at services or background apps with a high PSS and tries to kill those first. It ranks processes across various categories and starts killing them so that the foreground application, the priority processes, can do their work. That is why PSS is very important.

Then there is the shared dirty, and you'll see two parts to the numbers: native and Dalvik. Just to differentiate, native here does not mean native apps or the HTML apps; native is the memory used by the process that is not the Dalvik heap. When the Dalvik VM runs, it effectively has two kinds of heaps. One is the Dalvik heap, which is where all your application's Java allocations live: everything your Java code allocates goes into the Dalvik heap. The memory used by your shared libraries, anything else that gets created, and the Dalvik VM's own memory all go into native memory. That is how native memory and Dalvik memory are separated. If you run adb shell dumpsys meminfo and give it the PID, you get all this information, and you can use it to look at your application's memory usage at any point in time; a small sketch of reading the same numbers from code follows below.
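Here is a small sketch (not tied to any particular app) of reading the same kinds of numbers that adb shell dumpsys meminfo reports, from inside your own process, using the standard ActivityManager and Debug.MemoryInfo APIs.

```java
import android.app.Activity;
import android.app.ActivityManager;
import android.os.Bundle;
import android.os.Debug;
import android.os.Process;
import android.util.Log;

public class MemInfoActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ActivityManager am = (ActivityManager) getSystemService(ACTIVITY_SERVICE);
        Debug.MemoryInfo[] infos =
                am.getProcessMemoryInfo(new int[] { Process.myPid() });
        Debug.MemoryInfo mi = infos[0];
        // Values are in kilobytes. dalvik* covers the managed heap, native*
        // covers malloc'd memory, and PSS divides shared pages by the number
        // of processes sharing them.
        Log.d("MemInfo", "total PSS: " + mi.getTotalPss() + " kB");
        Log.d("MemInfo", "dalvik PSS: " + mi.dalvikPss
                + " kB, private dirty: " + mi.dalvikPrivateDirty + " kB");
        Log.d("MemInfo", "native PSS: " + mi.nativePss
                + " kB, private dirty: " + mi.nativePrivateDirty + " kB");
    }
}
```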
Now let's move to the next part. After Dalvik loads these files, the next thing it has to do is interpret the bytecode, so let's see what Dalvik bytecode looks like. I won't go very deep, just a quick example. This is the Java code; it does nothing special, it just adds up the given array and returns the sum (sorry, there is a small naming mistake on the slide). And this is what the corresponding Dalvik bytecode looks like. To touch on a few small things: Dalvik bytecode uses a register machine, which is a little different from Java bytecode, and I'll come back to that. All instructions are 16-bit units. If you go back to your microprocessor courses, x86 and all of that, you'd see that instructions there are mostly loads, stores, adds and so on. Dalvik bytecode has similar opcodes, but it also has somewhat higher-level opcodes. Look at array-length, for example: array-length is one of the opcodes, where the VM works out which array the register refers to and produces its length.

To run through it: the Dalvik bytecode assumes there is effectively an unlimited number of registers. A traditional compiler knows exactly how many hardware registers exist and has to fit everything into them; here it doesn't have to bother with that, the runtime interpretation takes care of it, so the bytecode just assumes infinite registers and allocates them as needed. The arguments passed to a method sit in the last registers: v3 here is the one argument passed to this method, and that's how it looks. I'll touch on this a bit more later. Now compare that with Java bytecode itself. Java bytecode uses a stack machine, and a stack machine doesn't really take explicit operands. If you go back and look at the Dalvik version, all the operands are given explicitly as register arguments, but here the operands are assumed to be on top of the stack. Some operands can be local variable slots and such, but by default they are taken from the stack and operated on. And if you compare the same function, the slide shows something like 32 versus 46 instructions, and in bytes roughly 16 versus 22, with the Dalvik one at around 23, so the effective byte size is only slightly larger while the instruction count is smaller. The simple point is this: Dalvik compiles into register-based bytecode, and almost all of our real machines are register machines, so it becomes easier to translate the bytecode to the machine if you use a register machine. That is one reason Dalvik went with a register machine.

To summarize the Dalvik bytecode: it uses a register machine and assumes an unlimited number of registers; all instructions are 16-bit units, though some instructions take additional units for their operands; registers are assumed to be 32 bits wide, and if you need 64 bits it uses a pair of consecutive registers; and the N arguments sit in the last N registers. For instance methods of Java classes, the non-static ones, the first argument is this, so that is one of those arguments as well.
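To tie that together, here is a method along the lines of the one on the slide, with rough, hand-written translations in the comments. The mnemonics are real Dalvik and JVM opcodes, but the register numbers and exact instruction selection are only an approximation of what dx and javac would emit.

```java
public class SumExample {
    static int sum(int[] values) {
        int total = 0;
        for (int i = 0; i < values.length; i++) {
            total += values[i];
        }
        return total;
    }

    // Dalvik (register machine), roughly:
    //   const/4 v0, #0          ; total = 0
    //   const/4 v1, #0          ; i = 0
    // :loop
    //   array-length v2, v3     ; v3 is the last register = the argument
    //   if-ge v1, v2, :done
    //   aget v2, v3, v1
    //   add-int/2addr v0, v2
    //   add-int/lit8 v1, v1, #1
    //   goto :loop
    // :done
    //   return v0
    //
    // JVM (stack machine), roughly:
    //   iconst_0 / istore_1     ; total
    //   iconst_0 / istore_2     ; i
    //   ... iload_2, aload_0, arraylength, if_icmpge ...
    //   operands live on the operand stack instead of named registers.

    public static void main(String[] args) {
        System.out.println(sum(new int[] { 1, 2, 3, 4 }));  // prints 10
    }
}
```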
Moving on, I'm not going to cover how interpretation itself works; the interpreter just takes each opcode, finds the corresponding handler code, and executes it. Let me move to verification instead. One thing the Java class loader does when it loads your class is verify that the class actually follows the proper format: are all those string IDs we were talking about a few minutes back within the proper range, are the referenced classes actually defined (it may load those dependencies too), and is the bytecode valid, so that it cannot crash the virtual machine. The Dalvik VM has to do the same verification for all these classes. But Android has a bit more control here than a plain JVM does: it controls installation, it knows when you install your app. So when you install the app, it takes over right then and verifies at install time. There is a tool called dexopt which does the verification and also does optimizations, and after it verifies, it keeps the resulting DEX files in a folder, /data/dalvik-cache. To verify, it loads the classes, looking at your framework JARs and so on to resolve everything, and checks them. If it cannot verify some class, because it cannot find a referenced class or it finds something illegal, it just sets a flag on that class in the file saying it could not be verified, so that an error is raised at runtime instead; at install time it should not show an error message to the user. Note that the error comes only when that particular class is used, not for the whole DEX: only when you really use that class in the running app does it throw the error. It does not do it at install time.

Apart from verification, dexopt also does optimization, so that some optimizations happen at install time rather than later. One question you could ask is: why can't I do these optimizations on my own build machine? Why do they need to happen at install time? The optimizations done here are a little different; they can be platform dependent. One of them is byte order: Dalvik assumes all the instructions and data are little-endian. Many processors today allow both little-endian and big-endian, but there may be processors that are strictly big-endian, and if Dalvik is ported to one of those, the bytes have to be swapped. So it does that; that is one platform-dependent piece. The other thing it does is inline some functions depending on the platform: it can inline certain methods, and for calls it can resolve into the framework JARs and so on, it can patch them to reference the vtable entry directly. Here is a quick example of the inlining. Some methods are used very widely, like String.length() or String.equals(). At the top you see what your normal class file contains: it calls invoke-virtual, which is the bytecode that says "execute this particular method", here String.length(). On the next line, and if you look under /data/dalvik-cache you will find the file containing the optimized DEX, which is called an ODEX, you see execute-inline instead. It carries an index that identifies String.length() in an inline table, and that index can change across platforms.
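A hedged sketch of that rewrite follows. The opcode mnemonics are real, but the register number and the inline-table index in the comments are made up for illustration and differ across platforms.

```java
public class InlineExample {
    static int nameLength(String name) {
        // In classes.dex this call compiles to roughly:
        //   invoke-virtual {v0}, Ljava/lang/String;->length()I
        // In the odex under /data/dalvik-cache, dexopt may rewrite it to roughly:
        //   execute-inline {v0}, [inline #4]   // index into a platform inline table
        // The index is only meaningful for the platform that generated the odex,
        // which is one reason this rewrite happens on the device.
        return name.length();
    }
}
```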
So that is the reason it does these optimizations on the device itself. One consequence: many of you would have got the update to Android 4.4, right? When Android 4.4 is downloaded, all these cache files need to be regenerated, so it takes a little while to install. After the download, it has to go back to your original DEX files and run dexopt over all of them again, regenerating the optimized DEX files and flushing the cache. That is also why your original DEX file has to be kept around even after the new optimized files exist.

The next thing the Dalvik VM does is just-in-time compilation, to avoid interpreting the same code every time. This was introduced with Android 2.2. Before that, and even after, really, if you have a very CPU-bound application, it is better to write that part in native code, because you avoid all that back-and-forth interpretation; that is what is recommended. But to reduce the need for that, it helps to do some just-in-time compilation. So Android introduced a JIT with Android 2.2, and the goals they had were roughly these. It has to become effective quickly: on the server side or on your desktop, Java applications generally run for a long time, so it is fine for the VM to take a while to warm up, look at its counters, decide what to optimize and continue, but here the applications may not run long, so the code has to be optimized very quickly; it cannot wait for a warm-up. Memory is a constraint, so it should not use too much memory. And they had to be careful about how much actually gets compiled to native code.

When they were deciding which kind of JIT to use, there were two traditional approaches in common use: the trace JIT and the method JIT. The method JIT is much simpler: you watch which methods are executed heavily and compile just those methods. In this figure, the first box shows the full program; the yellow pieces are the methods that really get executed, and if you optimize only those, that is how much memory you use for the JIT output. The point they make is that only about 10% of the code actually executes a lot. Doing a method JIT is simple: you only need the method boundaries, and you JIT those particular methods. But one disadvantage of the method JIT is that not every path inside a hot method actually executes every time; there may be if-else branches that never get executed, so you sometimes JIT-compile code that never runs. The trace JIT takes advantage of that: it just-in-time compiles at the level of basic blocks,
or a small chain of basic blocks, a trace, compiling just that rather than the complete method. That reduces the amount of code you JIT, which means the translation cache is smaller, the memory requirement is smaller, and the CPU cycles spent doing the just-in-time compilation go down too. The figure suggests that only about 2% of the program typically ends up being optimized in this scheme.

Here is a quick flow of how the trace JIT works. While your application is running, the interpreter updates a profile counter for each location. When a counter reaches a threshold, it checks whether a translation for that code already exists; if it does, it just uses the translation. If it does not, the interpreter walks the bytecode from that point to decide where the trace should end, basically watching for the code getting too complex, "let me stop here" kind of decisions, and hands that piece to the compiler thread. If you look at the threads your application runs, through DDMS or whatever, you will see one thread called Compiler, which is the one occasionally doing the just-in-time compilation. When the interpreter notifies the compiler thread, it takes those pieces, compiles them, and puts them in a translation cache. The translation cache is apparently quite small; they used to have something like 200K or so. And once the code is translated, the exit points of the trace are chained back to the interpreter, or, if a translation for the following basic block also exists, chained directly to that translation so it does not have to bounce through the interpreter. Lots of small optimizations to get the most out of the just-in-time compilation.
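Purely as a conceptual sketch of the flow just described (counter, threshold, translation cache, hand-off to a compiler thread), the control logic looks something like the following. This is not Dalvik's code; every name, type, and the threshold value here is invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of trace-JIT dispatch, not the real interpreter.
class TraceJitSketch {
    static final int HOT_THRESHOLD = 40;                 // made-up threshold
    final Map<Integer, Integer> profileCounts = new HashMap<>();
    final Map<Integer, Runnable> translationCache = new HashMap<>();

    void onBranchTarget(int dexPc) {
        Integer prev = profileCounts.get(dexPc);
        int count = (prev == null ? 0 : prev) + 1;
        profileCounts.put(dexPc, count);

        Runnable translation = translationCache.get(dexPc);
        if (translation != null) {
            translation.run();                            // jump into the compiled trace
        } else if (count >= HOT_THRESHOLD) {
            requestCompilation(dexPc);                    // hand the trace to the compiler thread
        }
        // otherwise: keep interpreting this bytecode as usual
    }

    void requestCompilation(int dexPc) {
        // The real VM queues the trace for a separate "Compiler" thread and
        // stores the result in a small translation cache; exits from the trace
        // are chained back to the interpreter or to other translations.
    }
}
```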
But is that enough? There is still a cost to interpretation: it has to interpret every time, and you can only optimize so much; there is a limit. So with Android 4.4, Android has introduced, or rather previewed, something called ART, the Android Runtime, as they call it. If you have got the 4.4 update, you can try it out under developer options: you can switch the runtime to ART and it will use it; I have not tried it myself. The way I understand it, when you install an app it does the compilation at that point, compiling down to pretty much native code at install time, so interpretation, the JIT and all of that may no longer be necessary. One question you might ask is: why can't I generate my executable directly on my desktop? Why do it on the device? By doing it at the system level, it can take advantage of the CPU your particular device has. For example, at Little Eye Labs we have some native code, and we wanted to use a library which relies heavily on ARM's Thumb instructions. We could not, just because the ARM chip that comes with the Galaxy Y does not support those Thumb instructions. My options were either to build multiple versions of my shared library and load the right one depending on the device, or to not take advantage of the Thumb instructions at all. To reduce the pain, currently I am not using the Thumb instructions. But if I can outsource that job to somebody else, that is better; I do not want to keep multiple versions of my shared libraries around, which is painful to maintain. By pushing this to the runtime, or to install time, the platform takes that work off your hands: it knows which processor the device has, and it can optimize better for that particular ARM chip, or whatever chip it is. That is the advantage you get when you move towards the Android Runtime. So that is what happens with just-in-time compilation and the runtime.

Now the other main thing Dalvik manages, and I think I will finish with this, is memory. If you look at your logcat, there are two important messages: GC_CONCURRENT and GC_FOR_MALLOC. Before Android 2.3 there was essentially only the GC_FOR_MALLOC style, which stops the world completely: it suspends all the threads and garbage-collects the memory. With Android 2.3 they changed the mode a little and added this concurrent mode, which suspends the threads only briefly, does the marking, then a small sweep, then pauses again briefly to make sure everything is consistent, frees the memory, and resumes. So it does almost everything the old collector did, but it stops the rest of the world for much less time. However, if a memory allocation cannot be satisfied while the concurrent GC is running, the allocation will block, and you will see messages in logcat along the lines of waiting for the concurrent GC to finish. So you can look at logcat to figure this out. At the end of each GC line you have the pause times (for GC_CONCURRENT there are two pauses, one around the start of marking and one at the end), and you should keep an eye on these to see how much pause time your application is incurring.

These are some of the references you can look at; I can share them with you. Do we have time for questions? Okay, any questions?

Hi, my name is Satyam's guest Garima. I have a question: what is the difference between using a JAR file for a library and referencing it as a library project?

Okay, I thought at first you were asking about hybrid applications, the HTML ones.

No: if we are including external libraries, we can either include a JAR file in our project, or we can reference another project as a library project. How do these differ?

All those JAR files get converted into DEX either way; even your hybrid programs end up running as DEX, and Dalvik just runs them. Dalvik does not know whether it came from a JAR or a library project; it only sees DEX code and just executes it.

This is Saravanan, hi.
So my question is about multi-core processors. How does the Dalvik VM take advantage of multiple cores?

Okay, so the Android OS actually takes care of the multi-core part. Dalvik just interprets the code and tries to make sure that reads, writes, contention and so on behave properly, but I do not think it does anything special for multiple cores; it does not have to worry about it. It is the OS that really takes care of that. Thank you.

This is Parkeer. My question is, how does it resolve conflicts between different JAR files and different versions? Different JARs can have different versions of the same classes.

So yes, really speaking, the DEX assumes there is only one class for a particular fully qualified name; it does not allow two. When you compile, it figures out which one comes first on the classpath and includes only that class file in the DEX; it does not include all the copies.

A follow-up on that: so it just picks whichever one it finds first on the classpath? If there are two, it takes the first one?

Yes, and this happens at build time, not at runtime. When dx, which takes all your JARs and class files and combines them into a DEX file, runs, at that point it looks at the classes, picks the one it thinks is right, the one it finds first on the classpath, and uses that. I think in some cases, when there are duplicate names, dx reports errors or warnings; sometimes it just warns and picks one, and sometimes it can refuse and make you sort it out.

Hi, this is Ajay. With the introduction of ART there is no interpreted code anymore, right? It is all ahead-of-time compiled. So is it a sunset for Dalvik, or...?

Let me put it this way. Dalvik does roughly three things, and one of them is memory management; somebody still has to do that. Those kinds of components will still be there as part of ART: the memory management and garbage collection will still be there. You can assume the interpretation piece and the just-in-time compilation piece probably will not be there, but other pieces of Dalvik will still live on in the runtime library.

Can we say that ART will be part of the Dalvik VM?

I think the two will probably be separate. But the various other pieces we discussed today, the DEX files and so on, will still continue; it is just that the interpretation and the just-in-time compilation probably will not be there. And it is still a preview, so I think they are just testing to make sure everything is fine, but maybe in the next version they will make it the default in preference to Dalvik.

Hello, hi. With the introduction of ART, what implications does it have for garbage collection? Can we expect the pauses to be more deterministic, or maybe take less time?
I do not think there should be much difference either way. I think it should be about the same; I am not expecting it to be different.

Just a couple more questions, then that is it.

Hi, I am from Hyderabad. I will add to the earlier question about ART. My understanding is that ART is a big advantage for Android, but from that discussion it sounds like ART is just a complement to Dalvik. Is it just a complement that works alongside Dalvik, or does ART have a really big advantage in real-world scenarios?

Yeah, it has an advantage: interpretation will no longer be necessary, so your applications will probably be a bit faster. Your installation time will be slower, but I think that is okay for people.

Then in that case, will Zygote still initiate the...?

The Zygote will still continue. One of the things they need from it is being able to share all this code; it may not be the class files as such, but it will still be there so that your launch time stays fast.

So will Zygote call Dalvik, or will Zygote use ART?

Zygote will be using ART. Thanks.

How will Dalvik take advantage of the processor on which it is running? Every processor has some special instructions, maybe SIMD, or fast multiplication and so on. How does Dalvik handle this?

Dalvik itself is built for the processor architecture of the Android device, so it knows what the architecture is. The code is mostly the same, with some ifdefs wherever necessary, but if the processor has SIMD or similar features, the build will take advantage of them, depending on what that processor offers. Kind of.

Yeah, hi, I am Karan. With ART you actually get .oat files as opposed to the previous .odex files, and I have heard that Android also includes an OAT to ODEX converter. Using this, how will backward compatibility with previous apps be managed?

I think it should be backward compatible; I do not think there should be any issues with ART being non-backward-compatible. Really speaking, what the Dalvik VM executes is DEX code, and if you look at the DEX bytecode, I do not think it has changed much in the last few years. It can run the old code, so ART should be able to run that code as well; I do not think that should be an issue. Okay?

At 12:30 there is a speakers' room over there; if you go out and take a left, I will be in that room. If you have any further questions, you can meet me there. Okay? Thank you very much. Have a nice conference. Thank you.