Hello everyone, I am Anjali Singhal. My master's thesis project is on energy-efficient applications for low-power devices. Energy efficiency can be discussed at three levels: the architecture level, the system level, and the application level. At the architecture level, processor energy usage can be reduced by using multicore processors. At the system level, we can use hard-disk spin-down and compiler-driven optimizations to achieve energy efficiency. Similarly, at the application level, we have a lot of usage information which the developer can use to change the algorithm, the source code, or the design, so that energy efficiency can be achieved. But for this, the developer must have insight into where the energy is actually spent. So, in this talk, we will be studying some such optimizations.

Earlier, phones offered only the basic functionality: calling, messaging, and so on. Nowadays, phones are used not just for the basic functionality but also for multimedia, mailing, and many other applications. So current smartphones run the basic functionality plus many additional applications, and since each application consumes a significant amount of power, the total power consumption is very high. Now, many developers across the whole world are creating Android applications and uploading them to the Android market so that many users can use them, but they generally care about the functionality of the applications and not so much about the power consumption. Hence, the battery of the phone drains very quickly and the end users have to suffer.
So, if the developer gives as much importance to power consumption as to functionality, the impact on energy consumption will be very high. Now, if we see the division of power consumption among the different components, the major consumers of power are the GSM module, the CPU, and the display, where the display includes the LCD panel, the touch screen, the graphics driver, and the backlight. If those components are not being used, we can simply turn them off so that the energy consumption is reduced; the most effective power-management approach is to shut down the unused components. It has also been found that the free apps we download contain advertisement modules, and these consume 65 to 75 percent of the total energy, because a lot of I/O energy is wasted in the advertisement modules. Hence, in this talk we will see many optimizations that can be done at the application level, so that each developer can contribute to saving energy for the end user. First we will see energy bugs, then optimizations, which include network-intensive applications, and then some other optimizations and the conclusion.

So, energy bugs. In Android, every component remains in the sleep state until it is woken up explicitly. That is how power management works in Android: when we want to use any component in our application, we need to define a wake lock instance for that component, initialized with one of four options, which we will discuss shortly.
A wake lock is an instance of the PowerManager.WakeLock class. While initializing it, we pass a parameter, one of the options we will discuss, which decides what is switched on or off. Switching a component on or off means that the component goes into the high-power state or the low-power state: when we switch a component on, it goes into the high-power state, and when we switch it off, it goes into the low-power state. In this diagram we can see the options that can be passed as a parameter while initializing the wake lock instance. If we pass PARTIAL_WAKE_LOCK, only the CPU remains on: normally, if we do not use the smartphone for some time, there is a period of inactivity and it goes into the sleep state, but if we hold a partial wake lock, the CPU remains on irrespective of the inactivity. Similarly, with SCREEN_DIM_WAKE_LOCK the CPU remains on and the screen stays dim, with SCREEN_BRIGHT_WAKE_LOCK the screen stays bright, and with FULL_WAKE_LOCK the CPU remains on, the screen stays bright, and the keyboard backlight stays on.

Now we will see how to use a wake lock to actually switch on a component. When we define a wake lock for a component that is initially in the low-power state and then acquire the wake lock, the component goes from the low-power state to the high-power state. Similarly, when we release the wake lock, it goes from the high-power state back to the low-power state. In this example, we define an instance of the PowerManager.WakeLock class and initialize it with the option PARTIAL_WAKE_LOCK, which means the CPU will remain on irrespective of the inactivity.
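A minimal sketch of that example, assuming an Android Context named `ctx`; the tag string and `doWork()` are placeholders, and the Android framework is required to run it:

```java
// Create a partial wake lock: only the CPU is kept on.
PowerManager pm = (PowerManager) ctx.getSystemService(Context.POWER_SERVICE);
PowerManager.WakeLock wl = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "demo:MyWakeLock");

wl.acquire();      // from here on, the CPU stays on even during inactivity
try {
    doWork();      // the task that must not be interrupted by sleep
} finally {
    wl.release();  // always release, even on an exception, to avoid a no-sleep bug
}
```

The try/finally is the important part: it guarantees the release runs even if the work throws an exception.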
So, when we call wl.acquire(), it means the CPU should not go to sleep; then we write our code and it executes, and after finishing our task we should release the wake lock, which says that the CPU is now free to sleep if it is inactive.

Now, what is an energy bug? It is defined as an error in the system where the system behaves quite normally: the application works as per its functionality, the operating system works as per its functionality, only that there is a huge, unexpected amount of energy drain, and the end user cannot figure out why so much energy is being drained. Energy bugs are categorized into two kinds: the no-sleep bug and the looping bug. A no-sleep bug arises when you acquire a wake lock in some part of the code and forget to release it, or when you do release the wake lock in the code but, due to some unexpected event or exception, that code never executes. A looping bug arises, for example, when your application tries to connect to an outside server that has crashed: the application keeps trying to contact the external server, the network device that is trying to connect stays on, and hence energy is wasted.

Now, as we have seen in the previous talks, Android applications have activities, broadcast receivers, services, and so on. So we will see the expected way to write an activity that uses wake locks. We know that an activity is started in the onCreate callback and destroyed in the onDestroy callback. When the activity is paused, it is not in the foreground but still visible, and at that time the onPause callback is called.
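The pattern sketched below ties a single partial wake lock to the activity lifecycle; the class name and tag are hypothetical, and the Android framework is assumed:

```java
public class PlayerActivity extends Activity {
    private PowerManager.WakeLock wl;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
        wl = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "demo:Player");
    }

    @Override
    protected void onResume() {
        super.onResume();
        wl.acquire();   // acquire again when the activity returns to the foreground
    }

    @Override
    protected void onPause() {
        wl.release();   // release here, not only in onDestroy, to avoid a no-sleep bug
        super.onPause();
    }
}
```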
If the developer has released the wake lock in onDestroy but not in onPause, that creates a no-sleep bug: the application is paused and not running, but the component is still in the high-power state. So the wake lock should also be released in the onPause callback and acquired again in the onResume callback. Similarly, a service has the onStartCommand callback (onUnbind if it is a bound service, and onHandleIntent for an intent service), and by the end of these callbacks the desired task should be completed and the wake lock released. In a broadcast receiver, too, the onReceive callback should, by its end, complete the desired task and release the wake lock.

Now, to analyze whether an application has a no-sleep bug, we can do static data-flow analysis so that the no-sleep code paths can be found. We will now see through an example how this analysis can be done. Suppose we have code that acquires the CPU wake lock in one place, acquires the GPS wake lock in another, then releases the GPS wake lock, and finally releases the CPU wake lock. Each use of a wake lock is treated as a definition, and we do a reaching-definitions data-flow analysis. In block B1, the CPU acquire is a definition, which I call D1: denoting the CPU wake lock as CW, acquiring it assigns CW = 1. Similarly, the GPS wake lock acquire is definition D2, and since it is an acquire, it assigns GPS = 1. In block B2, I release the GPS wake lock, so I assign GPS = 0; that is the third definition, D3. Similarly, in block B3, I assign CW = 0 because I release the CPU wake lock; that is D4. Now, for each block I will calculate the gen and kill sets.
That is, which definitions are generated by each block and which definitions are killed because of other definitions of the same variable. If we look at block B1, there are two paths into B1: one through B3 and one through the entry. The entry has no outgoing information for B1, but B3 passes its information on to B1. B1 defines D1 and D2, so it generates the definitions D1 and D2. The previous definition for CW is CW = 0, that is D4, and since B1 assigns CW = 1 again, D4 is killed by D1. Similarly, GPS = 0 is definition D3, which is passed on to B1 through B3, and it is killed by D2. So for B1, gen = {D1, D2} and kill = {D3, D4}. Similarly, for B2 I calculate gen and kill: it defines D3, so D3 is in the gen set, and since it assigns GPS = 0 while B1 passes on definition D2, the definition D2 is killed; so gen = {D3} and kill = {D2}. Similarly, in B3 the new definition generated is D4, and the definition killed is D1, because of the definition coming from B1 through B2 to B3; so gen = {D4} and kill = {D1}.

Now we calculate IN and OUT for each block; this is the iterative procedure of the reaching-definitions data-flow analysis. The purpose is that at the exit block I can find which definitions ultimately reach there: if any acquire definition reaches the exit, that means there is a no-sleep bug. So I calculate OUT and IN for each block, and ultimately, at the exit block, I have D2, D3, and D4, that is, GPS = 1, GPS = 0, and CW = 0. These three definitions reach the exit block.
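The gen/kill and iterative IN/OUT computation described above can be sketched in plain Java. The control-flow graph here is my reconstruction from the talk (enter to B1; B1 to B2 and B1 to B3; B2 to B3; a back edge B3 to B1; B3 to exit), chosen so that B1 has the two incoming paths mentioned and the bug path enter, B1, B3, exit exists:

```java
import java.util.*;

// Toy reaching-definitions analysis for the wake-lock example.
// Definitions: D1 = (CW=1) and D2 = (GPS=1) in B1, D3 = (GPS=0) in B2, D4 = (CW=0) in B3.
public class ReachingDefs {

    // Returns OUT[b] for each block once the analysis reaches a fixed point.
    public static Map<String, Set<String>> analyze() {
        List<String> blocks = List.of("B1", "B2", "B3");

        Map<String, Set<String>> gen = Map.of(
                "B1", Set.of("D1", "D2"),
                "B2", Set.of("D3"),
                "B3", Set.of("D4"));
        Map<String, Set<String>> kill = Map.of(
                "B1", Set.of("D3", "D4"),
                "B2", Set.of("D2"),
                "B3", Set.of("D1"));
        Map<String, List<String>> preds = Map.of(
                "B1", List.of("B3"),        // the entry contributes no definitions
                "B2", List.of("B1"),
                "B3", List.of("B1", "B2"));

        Map<String, Set<String>> out = new HashMap<>();
        for (String b : blocks) out.put(b, new HashSet<>());

        // Iterate IN[b] = union of OUT[p] over predecessors p,
        // OUT[b] = gen[b] + (IN[b] - kill[b]), until nothing changes.
        boolean changed = true;
        while (changed) {
            changed = false;
            for (String b : blocks) {
                Set<String> in = new HashSet<>();
                for (String p : preds.get(b)) in.addAll(out.get(p));
                Set<String> newOut = new HashSet<>(in);
                newOut.removeAll(kill.get(b));
                newOut.addAll(gen.get(b));
                if (!newOut.equals(out.get(b))) { out.put(b, newOut); changed = true; }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> out = analyze();
        Set<String> atExit = out.get("B3"); // the exit block's predecessor is B3
        System.out.println("definitions reaching exit: " + new TreeSet<>(atExit));
        // D2 (GPS=1, an acquire) reaches the exit, so there is a no-sleep bug.
        System.out.println("no-sleep bug: " + atExit.contains("D2"));
    }
}
```

Running it yields exactly the set from the talk, {D2, D3, D4}, at the exit.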
That means there is some path through which D2 reaches the exit, some path through which D3 reaches it, and some path through which D4 reaches it. So GPS = 1, an acquire, also reaches the exit through some path, which means there is a no-sleep bug. And as we can see from the paths themselves, if the code takes the path enter to B1 to B3 to exit, then GPS still remains 1; it is never assigned 0. So by going through all the paths we can establish that there is a no-sleep bug.

Now, the other optimizations we can do. The first of them is application optimization at the design level. When you are designing an application, the right choice depends on its needs. Suppose your application is I/O-intensive, say a video-streaming application that needs to read and write a lot: then compressing your data before reading and writing will be more energy-efficient. Whereas if your application is CPU-intensive, you should use uncompressed data, so that the CPU usage stays low. Similarly, for applications with a continuous but variable workload, scaling the CPU frequency according to the workload is most beneficial. So if the developer has all this information at design time, they can use it to optimize the application for energy efficiency.

The next optimization is battery virtualization. Suppose I am traveling somewhere and playing games; then most of my energy is spent on games, but I still need my mobile to have sufficient energy for receiving calls and messaging. What can be done is to have a battery allocation for each application class, that is, navigation, phone, games, and so on. Each application class is assigned a certain fraction of the battery as per the user's policy, and after that fraction is used up, those applications are not allowed to run; if the battery runs very low, for example, the user will not be allowed to play games.
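A sketch of such a per-class budget policy; the application classes, fractions, and consumption numbers here are invented for illustration:

```java
import java.util.*;

// Toy battery-virtualization policy: each application class gets a fraction of the
// battery, and an app is allowed to run only while its class still has budget left.
public class BatteryBudget {
    private final Map<String, Double> allocated = new HashMap<>(); // class -> fraction of full battery
    private final Map<String, Double> consumed  = new HashMap<>(); // class -> fraction already used

    public void allocate(String appClass, double fraction) {
        allocated.put(appClass, fraction);
        consumed.put(appClass, 0.0);
    }

    // Called periodically (e.g. by a background service) with the energy a class drew.
    public void recordConsumption(String appClass, double fraction) {
        consumed.merge(appClass, fraction, Double::sum);
    }

    public boolean isAllowed(String appClass) {
        return consumed.getOrDefault(appClass, 0.0) < allocated.getOrDefault(appClass, 0.0);
    }

    public static void main(String[] args) {
        BatteryBudget policy = new BatteryBudget();
        policy.allocate("phone", 0.40); // reserve 40% of the battery for calls/messages
        policy.allocate("games", 0.10); // games may use at most 10%

        policy.recordConsumption("games", 0.12); // games overspent their share
        System.out.println("games allowed: " + policy.isAllowed("games")); // false
        System.out.println("phone allowed: " + policy.isAllowed("phone")); // true
    }
}
```

On Android, the periodic check would live in a background service, as described next.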
So, these allocations can be enforced as a policy, as my friend mentioned earlier when describing the policy framework: we can create an Android service that periodically checks how much of the energy fraction assigned to each application class is left and takes action accordingly.

Now, the next application optimization is in network applications. When I start a data transfer, my network device first goes into the high-power state. The whole energy consumption during a data transfer is divided into three parts: the ramp energy, the transfer energy, and the tail energy. The energy spent when the device goes from the low-power state to the high-power state is called the ramp energy; the energy spent during the actual data transfer is called the transfer energy; and the energy spent during the rest of the time the device stays in the high-power state is called the tail energy. Now, as we discussed earlier, network devices are among the major consumers of the total energy consumed by an application. If our application uses 3G, Wi-Fi, or GSM: in 3G, the tail energy is very significant compared to the transfer energy, whereas the ramp energy is very small; in GSM, the tail energy is comparable to the transfer energy, but less than in 3G; and in Wi-Fi, the scanning and association energy is very high, and so is the maintenance energy needed to keep the link up. So, as we can see, the tail consumes a significant amount of energy, but no actual work is done during that time. How can we utilize this tail time? In three ways: tail aggregation, tail tuning, and tail theft. In tail aggregation, we defer any incoming transfer request up to its deadline.
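As a toy illustration of that deferral, here is one possible batching rule: delay the most urgent pending request to its deadline, and send everything that has arrived by then in the same batch. The request names, times, deadlines, and the greedy rule itself are all invented for this sketch:

```java
import java.util.*;

// Toy tail-aggregation scheduler: defer each transfer request up to its deadline so
// that requests close together in time are sent in one batch and share a single tail.
public class TailAggregation {
    // Each request arrives at some time and must be sent by its deadline (seconds).
    public record Request(String name, double arrival, double deadline) {}

    public static List<List<String>> batch(List<Request> reqs) {
        List<Request> pending = new ArrayList<>(reqs);
        pending.sort(Comparator.comparingDouble(Request::deadline));
        List<List<String>> batches = new ArrayList<>();
        while (!pending.isEmpty()) {
            // Latest moment we can defer the most urgent pending request to.
            double sendTime = pending.get(0).deadline();
            List<String> batch = new ArrayList<>();
            Iterator<Request> it = pending.iterator();
            while (it.hasNext()) {
                Request r = it.next();
                if (r.arrival() <= sendTime) { batch.add(r.name()); it.remove(); }
            }
            batches.add(batch); // one ramp + one tail for the whole batch
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Request> reqs = List.of(
                new Request("mail-sync", 0.0, 10.0),
                new Request("ad-fetch",  2.0,  8.0),
                new Request("upload",   30.0, 40.0));
        // mail-sync and ad-fetch are deferred to t=8 and share one ramp and tail;
        // upload arrives much later and gets its own batch.
        System.out.println(batch(reqs)); // [[ad-fetch, mail-sync], [upload]]
    }
}
```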
That way, two or three transmissions overlap: in the tail time of one transmission I send the data of another transmission. Hence the tail time is actually used for transfer, and the idle time between transfers decreases. In tail tuning, I instead reduce the tail time itself. But reducing the tail time will increase the number of state promotions: if I reduce the tail time and a transmission arrives again in the near future, I have to bring the network component back into the high-power state, so there are repeated state promotions, from high-power state to low-power state to high-power state and so on, which instead increases the power consumption. Hence, if we are doing tail tuning, very high prediction accuracy is required. In tail theft, a virtual tail time is maintained along with the physical one, so that we can schedule smaller transmissions during the virtual tail time. The drawback is that if the tail time ends before the transmission completes, the transmission is cancelled; to handle this, we can divide the big transmissions into small chunks and send the small chunks during the virtual tail time.

Now, the other optimizations that can be done. As we discussed, the network devices, the display, and the CPU are the major consumers of power. Within the display, the backlight consumes the highest power, then the LCD panel and the frame buffer. For the backlight, we can adapt to the ambience: if the ambience is very bright, we can reduce the brightness of the backlight so that the energy consumption is less. As for the frame buffer, it is refreshed at a very high rate, so we are writing into it again and again.
So, if we can encode the frame buffer, the amount of data that is written again and again becomes much smaller; the encoding compresses the frame buffer, and that reduces the energy consumption.

In conclusion, we studied energy bugs, their detection, and the possible causes that create them, and we also studied different optimizations: for network-intensive applications, how we can utilize the tail time so that we actually use it to transfer more data, and for the display, how we can reduce the power consumption.