Good afternoon. I am Lokesh Mittal, and this is Vishwanath Pratap Singh. We are here from MakeMyTrip, India's biggest OTA, to talk about non-functional testing. So let's talk about why we need non-functional testing: what was the need behind it? As we all know, the e-commerce industry has grown rapidly in the last decade, and online transactions happening on mobile in particular have increased substantially in the last couple of years. What does this mean? It means the mobile application is running on N different device and OS combinations; there is a vast, fragmented spread of devices and OS versions, and we have to cover all of them for both functional and non-functional testing. From there we got the signal that we needed to work on non-functional testing for mobile applications as well. So what had we been doing in the past? We had been doing non-functional testing for server-side applications for quite some time, capturing key metrics like memory utilization, CPU utilization and network utilization. We did not actually have a benchmark for the app-side metrics we needed to publish, so we took our server-side benchmarks and tried to implement and check the same on the app side as well. Other than that, there are a few more key metrics that, luckily, Google started publishing a couple of years back, and you get them for all applications on the Play Store: the activity lifecycle time, the overall app performance, the internal storage the app uses, the shared preferences it uses, and the janky frames. Then there is a battery health check. These, along with what we were already tracking, form the key set of metrics we work on during performance monitoring.
And then we have a performance engineering part as well, where we improve upon things. So what exactly made us feel the need for non-functional test automation? We were a hybrid application a few years back; then we moved to a native, more personalized application. At that point it became absolutely necessary to work on this non-functional testing, because during manual testing and development itself we observed lags when transitioning from one activity to another, and we faced battery consumption issues during manual testing too. But there was no way to get this data in a concise manner we could trust to be correct. For that, we went on a journey, and I'll walk you through it. The journey began with manually testing these performance metrics using third-party applications available on the Play Store and over the Internet. But the data we were getting out of them was not reliable, and every application had its own idea of a benchmark; we did not have a benchmark then, so how do you prove the reliability of the numbers? So we moved on to a better-known tool, the Trepn profiler by Qualcomm. The beauty of the tool was that, instead of juggling multiple applications, it installs as an APK and lays over your application under test, and after your test is executed it gives you all the key metrics and data you are looking for in one place. That concise report was the draw, and we even integrated it with our automation solution as a POC. But there was a challenge with it too: metric reliability was again a problem. With the same set of tests, with the same scenario executed multiple times, the results we were getting were neither accurate nor consistent.
So that got us thinking about the reliability of this tool as well. We moved on and worked with ADB commands: we created a script, integrated it with our automation framework, and started capturing data with it. That was a good success; we were finally getting data, and the same data every time. But the reliability constraint changed to timing: we were not able to get the data when we wanted it. Let's say I am launching an activity in the app, a particular screen; I want the data then and there. That is my scenario. With ADB commands and the automation framework, I was not able to detect the moment the activity launched; I had to wait for an element, or otherwise learn that the screen had completely loaded. By then my metrics are already a little stale; maybe the consumption spike was during the onCreate of that activity. That was the reason we moved away from ADB commands as well. We then tried DDMS, which is provided in the Android SDK itself. It was not an automated solution, but its reliability was quite high: we were able to profile a debug APK with DDMS, run our test, and manually intervene in DDMS to capture all the metrics we were looking for. But that manual intervention introduced a delta of errors quite often, so that is the reason we had to move away from it as well. Then, with Android Studio 3.0, the Android Profiler came into the picture. We started using it: we connected the device under test to the system, profiled it with the Android Profiler, and it gave us all the data and metrics we wanted, nicely visualized as well. But the app became sluggish while being used with the profiler attached, so we were not able to rely on the metrics we were getting, because the app was sluggish.
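The ADB-command approach described here can be sketched roughly as follows. This is an illustrative reconstruction, not the original script: the helper composes an `adb shell dumpsys meminfo <package>` call and captures its output, and the class and method names are assumptions.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Arrays;
import java.util.List;

// Rough sketch of ADB-based metric capture (not the production script).
// buildMeminfoCommand composes the command; runAndCapture shells out to it.
public class AdbMetricCapture {

    // Compose the adb command that dumps memory info for one package.
    public static List<String> buildMeminfoCommand(String packageName) {
        return Arrays.asList("adb", "shell", "dumpsys", "meminfo", packageName);
    }

    // Execute an external command and return its stdout (empty on failure).
    public static String runAndCapture(List<String> command) {
        StringBuilder out = new StringBuilder();
        try {
            Process p = new ProcessBuilder(command).redirectErrorStream(true).start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    out.append(line).append('\n');
                }
            }
            p.waitFor();
        } catch (Exception e) {
            // A real framework would log and retry; here we just fall through.
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // com.makemytrip is the package discussed in the talk.
        System.out.println(buildMeminfoCommand("com.makemytrip"));
    }
}
```

The timing limitation discussed above is visible in this shape: the metric arrives only after the shell command returns, so a spike during an activity's onCreate can already be gone.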
So now we had to move to something with no manual intervention; only then could we rely on the results. That got us thinking about building an automation framework for this non-functional testing and integrating it with our existing automation framework, which was Appium at the time. We integrated with our Appium framework and we got results we were happy with: they were consistent, the reliability was there, and there was no manual intervention. It was quite good, and we achieved what we had been looking for throughout this journey. But there were some scenarios we could not accomplish with the Appium-based automation framework. To list a few: let's say I want my app to reach a clean state after every test, so that the consumption metrics are cleared and I get fresh, non-cumulative data for every test cycle. I was not able to do that with Appium. There are also internal Android APIs and methods you can use to get these metrics, like the Debug API, the PackageManager API, and the activity context; the activity itself holds a lot of information about what you are looking at, what API responses it has, and much more. We could not reach those from Appium, because Appium resides outside the application and works over the UI layer. So we integrated this automated solution with Espresso instead. Espresso resides inside the application, has the context of the code, and gave us access to all the information we wanted, the APIs and the activity context, to accomplish this. That is how we arrived at the solution. Over to you... Thank you, Lokesh, for the introduction and for outlining what we are going to cover in today's session. We have divided the whole topic into two sections.
The first is performance testing, in which we will talk about the numbers, the throughputs and all the key metrics we are going to capture. The second is performance engineering, which is basically how we improve the things that have degraded or where we find optimization is required. For application performance we capture three main parameters: memory, CPU and network data. On Android, memory is further segregated into two subcategories, Dalvik memory and native memory. Dalvik memory is the heap memory allocated for Java objects, and native memory is what the operating system and its system services need to run, such as C or C++ libraries. The Android operating system does not enforce any check on the native side of memory; a process can take as much as is available to the operating system. But there is definitely a check on the Dalvik side: for every device there is a limit, defined by the operating system, beyond which the application cannot allocate. If the application tries to exceed it, we find the out-of-memory kinds of issues. For CPU, we capture both system-space and user-space CPU. System-space CPU is the CPU used by the kernel, and user space covers the processes running inside our application. We also capture I/O wait: if any I/O operation is being done by the application, how much time that particular operation takes. The third parameter is network data, the data used by the application at the level of a particular test case. RX data is the data received by the application, TX data is the data transmitted, and total data is the sum of the transmitted plus received data. So this brings us to our framework.
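The RX/TX/total relationship just described, and the before/after delta that yields one test case's data usage, can be sketched like this. The class and its names are illustrative; on Android the raw counters would come from `TrafficStats`.

```java
// Models the three network counters described: RX (received), TX
// (transmitted) and their total. On Android the raw numbers would come
// from TrafficStats.getUidRxBytes()/getUidTxBytes(); here they are
// plain fields for illustration.
public class NetworkDataSample {
    public final long rxBytes;
    public final long txBytes;

    public NetworkDataSample(long rxBytes, long txBytes) {
        this.rxBytes = rxBytes;
        this.txBytes = txBytes;
    }

    public long totalBytes() {
        return rxBytes + txBytes;
    }

    // Delta between a sample taken before and after a test case gives
    // the data used by that test case alone.
    public NetworkDataSample minus(NetworkDataSample earlier) {
        return new NetworkDataSample(rxBytes - earlier.rxBytes,
                                     txBytes - earlier.txBytes);
    }

    public static void main(String[] args) {
        NetworkDataSample before = new NetworkDataSample(1000, 200);
        NetworkDataSample after = new NetworkDataSample(5000, 1200);
        System.out.println("test used bytes: " + after.minus(before).totalBytes());
    }
}
```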
The framework architecture is common to Android and iOS: the same metrics we capture for Android, we also capture for iOS. For iOS we have used XCUITest, and for Android we have used Espresso. We had achieved the same thing with Appium as well, but our requirements were different; we had to get at information inside the application, so we moved to this framework and kept updating it. The utility lives in our application code base itself; we have an NFR utility of sorts. In Espresso all the test cases are run by a JUnit runner, and we have segregated our NFR test cases in such a manner that we cover every screen: in Android these are activities, and in iOS these are UIViewControllers. So for every screen we have a test case. In the before-step of each test case we clear the application state: we clear the database and the application's internal data, any network-related or DB-related data the application has accumulated, so that every test case runs against a fresh app. If the test case passes, we capture the information through the utility: the test asks the controller for a particular metric, like memory or CPU, the NFR utility captures the data, and we publish it using one internal API, because we cannot hit the DB layer directly from inside our application. We need a utility or an API to transfer our data into our DB, so we use one API for shipping the data, put it into our database, and then we have data we can play with: we have monitoring, and we can capture the insights from it.
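A minimal sketch of the per-test cycle just described: clear state, run, capture, publish through one API. Everything here (the class names, the in-memory "publish" stand-in) is a hypothetical simplification of the real Espresso/JUnit suite.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the NFR per-test cycle: fresh state before each
// test, metric capture after a pass, publishing through a single API.
public class NfrTestCycle {

    // Stand-in for the internal publishing API; collects records in memory.
    static final List<String> published = new ArrayList<>();

    static void clearAppState() {
        // In the real suite: clear the database and the app's internal data
        // so every test case starts against a fresh app.
    }

    static long captureUsedHeapBytes() {
        // Used heap = total heap minus free heap, as read from Runtime.
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    static void publish(String screen, long usedHeap) {
        published.add(screen + ",usedHeapBytes=" + usedHeap);
    }

    // One NFR "test case" for one screen.
    public static void runScreenTest(String screen) {
        clearAppState();                     // the @Before equivalent
        boolean passed = true;               // the functional steps would run here
        if (passed) {
            publish(screen, captureUsedHeapBytes());
        }
    }

    public static void main(String[] args) {
        runScreenTest("HomeActivity");
        runScreenTest("HotelDetailActivity");
        System.out.println(published.size() + " records published");
    }
}
```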
But there was one challenge we faced after using this for a couple of months. We were capturing data at a single point in time: the test case asks the NFR utility for the memory or CPU data, and the utility provides it. But suppose the test case opens multiple screens and we capture the data only after the test case ends: if any spike in memory or CPU happens in between, it can be missed. So we made the memory, CPU and network utilities into services that capture data for us every second. If a test case runs for 50 seconds, we have 50 seconds of data. From that we compute the mean, mode and median. Suppose the CPU starts from 1 and goes up to 10: the mean is around 5, and with the mode and the median alongside it we can state the range within which the CPU varies from one run of the test case to another. By running the utility as a service, any spike that occurs is captured easily. So how do we compute it? We are talking about Android here, and we use the APIs Google provides: the Runtime and Debug APIs for memory utilization. The Runtime API fetches the runtime memory of the application, and the Debug API provides the Dalvik and native memory sizes at a particular timestamp. We can also capture a heap dump using the Debug API. The heap dump is helpful if we find an issue: if a particular test case fails, we automatically capture a heap dump. The heap dump analysis is a manual process: we capture it, analyze the memory segregation at the package level, and manually identify the cause of the issue.
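The summary step for the per-second samples, the mean, mode and median over one run, can be sketched as a plain computation. The sample values in `main` are invented.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Summarizes one test run's per-second CPU samples into mean/mode/median,
// as described for the sampling service.
public class CpuSampleStats {

    public static double mean(double[] samples) {
        double sum = 0;
        for (double s : samples) sum += s;
        return sum / samples.length;
    }

    public static double median(double[] samples) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return n % 2 == 1 ? sorted[n / 2]
                          : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    // Most frequently observed sample value.
    public static double mode(double[] samples) {
        Map<Double, Integer> counts = new HashMap<>();
        double best = samples[0];
        int bestCount = 0;
        for (double s : samples) {
            int c = counts.merge(s, 1, Integer::sum);
            if (c > bestCount) { bestCount = c; best = s; }
        }
        return best;
    }

    public static void main(String[] args) {
        // e.g. a 10-second run where CPU% climbed from 1 to 10
        double[] run = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        System.out.printf("mean=%.1f median=%.1f%n", mean(run), median(run));
    }
}
```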
For CPU, we execute the Linux top command in a separate process on a background thread, running in parallel: when we start our scenario, this thread is automatically invoked, it polls how much CPU is consumed by our application package, and we process this data and dump it into our database. The Debug API that we use for memory is also the API we use for the method tracing in our thread-analyzer tool, which we will come to. For network state we use the TrafficStats and ConnectivityManager APIs. TrafficStats gives us the RX data and TX data, received and transmitted by the app, and ConnectivityManager tells us what kind of network the application is connected to at that particular time: WiFi or a cellular network, and if cellular, what kind of speed, 3G, 4G or VoLTE. We capture this because we have to compare like with like: if we capture a background task's data on 3G one time and the device is on 4G the next time, we cannot compare; 3G and 4G are different, and we have to compare apples to apples, so we need to capture this kind of data as well. Below is the code link: we have uploaded all the code for this functionality to GitHub, so you can check it there. And this is a blog we published on how we capture this data, why it is required, how we calculate it and how it impacts an e-commerce application; you can read it for greater insight. These are the APIs for iOS, because we capture the same things for iOS as well, using XCUITest: for memory we use the ProcessInfo and mach_task_basic_info APIs, for threads we have a different API, and for network we use the arpa/inet APIs. The code link is here, and there is a separate blog for iOS as well, because iOS does not have any kind of
memory segregation like Dalvik versus native, and memory management works differently too (iOS has ARC, automatic reference counting), so the memory and CPU readings are different, and the data capture points and spike points also differ on iOS. This is a small code snippet for memory. Through Runtime we get the runtime memory of the application, and at the same time we can capture the total heap available to it; if we subtract one from the other, we get the heap available at that particular moment. The Dalvik-side and native-side memory we capture through the Debug API. And if we need a heap dump: the heap dump is driven by the test case, so if a test case fails, it automatically asks the utility to capture a heap dump, which is saved into the device storage; we can then pull it and open it in the Android Profiler to debug. Right now that debugging is entirely human intervention, there is no automated solution for debugging memory, but for debugging CPU we do have an automated solution, which we will discuss on a later slide. For CPU, as we discussed, we use the top command. We run it in a separate process inside a thread, and it takes the package of interest: suppose we pass com.makemytrip; for a different application we pass a different package name, and it easily gets the CPU usage for that application. Once we have our data in our database, we can use it for pass/fail data-validation purposes and for reporting. The whole suite is scheduled with a Jenkins job that runs twice a day, morning and evening.
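The top-based CPU capture described here has to pull one package's CPU figure out of raw `top` text. A hedged sketch of that parsing step: the column layout in the sample line is an assumption, since Android's `top` output varies across versions, so a production parser must handle several formats.

```java
// Extracts the CPU% for one package from `top` output.
// The column layout is an assumption; real Android `top` output varies
// by version, so a production parser must handle several formats.
public class TopOutputParser {

    // Example line (invented layout): "12345 u0_a123 10.3 2.1 com.makemytrip"
    public static double cpuPercentFor(String topOutput, String packageName) {
        for (String line : topOutput.split("\n")) {
            String trimmed = line.trim();
            if (trimmed.endsWith(packageName)) {
                String[] cols = trimmed.split("\\s+");
                // Assumed: the third column is %CPU.
                return Double.parseDouble(cols[2]);
            }
        }
        return -1; // package not found in this sample
    }

    public static void main(String[] args) {
        String sample = "1001 system 3.0 1.0 system_server\n"
                      + "12345 u0_a123 10.3 2.1 com.makemytrip\n";
        System.out.println(cpuPercentFor(sample, "com.makemytrip"));
    }
}
```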
The application release cadence is so fast, every 15 to 20 days we have to release some features, and in adding extra features we must not degrade the key parameters. We have thresholds: only if the metrics are within the limits can we go ahead; otherwise the application does not go live carrying the kinds of issues we find in these metrics. Here, in this chart, the running memory used by the application stays around 100 to 115 MB; this trend tells us the application is behaving well and we have no memory constraint. And this is the segregation of native versus Dalvik memory: if we find the application's consumption increasing, we can look at whether it is the native portion or the Dalvik portion that grew. For visualization we keep this further segregation, and if we combine native and Dalvik we get approximately the same figure as the application's running memory. It helps while debugging a memory-related issue to see at which level it lies: in our own code itself, or because of some extra library we are using. The key types of issues we find through these general suites are memory-usage increases and memory-leak kinds of issues. This is the representation of CPU, categorized into the two categories, user-space CPU and system-space CPU; this is the trend, and we also capture the mean, mode and median we discussed, so we can check the CPU envelope for running a particular test case or scenario. This is the network data: how much data was downloaded by the application, how much was transmitted, and the total at a particular point in time. That is one kind of metric we have covered with NFR.
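The release gate mentioned above, where metrics must stay within their thresholds or the build does not go live, reduces to a simple check. The metric names and limits here are illustrative, not the real ones.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the release gate: each captured metric is compared with its
// threshold, and the build is blocked if any limit is breached.
// The metric names and limits below are illustrative.
public class NfrReleaseGate {

    public static boolean withinLimits(Map<String, Double> measured,
                                       Map<String, Double> thresholds) {
        for (Map.Entry<String, Double> e : thresholds.entrySet()) {
            Double value = measured.get(e.getKey());
            if (value != null && value > e.getValue()) {
                return false; // one breached metric blocks the release
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Double> thresholds = new LinkedHashMap<>();
        thresholds.put("appMemoryMb", 120.0);
        thresholds.put("avgFrameTimeMs", 16.0);

        Map<String, Double> measured = new LinkedHashMap<>();
        measured.put("appMemoryMb", 112.0);   // within the 100-115 MB trend
        measured.put("avgFrameTimeMs", 18.9); // over the 16 ms frame budget

        System.out.println("release ok? " + withinLimits(measured, thresholds));
    }
}
```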
The second, we have categorized as application performance and application performance throughput, in which we deal with the numbers. The same information is also available from Google: the Play Console recently started providing it, as mentioned. But the problem is that the Google-provided data is post-production data, from when the application is in its live state; at that point we cannot do much, because the application is already live. We have to capture this data in a pre-production phase, at an initial stage, so we are able to fix things before the application goes live. Here we capture the overall application memory, the overall application CPU, the internal storage used by the application, and the database used by the application, which on Android is basically the shared preferences: how much data has been saved into shared preferences. We cannot stuff everything into shared preferences; if we do, there will definitely be UI delay, and we will find slow application loading, slow UI rendering, and general sluggishness. We also track activity lifecycle performance. As we all know, everything we see in Android is an activity, and an activity has a lifecycle governed by different methods: onCreate, onPause, onResume, onStart. What the lifecycle guidance suggests is that we should not do any heavy operation in onCreate or onPause; if we do, there will definitely be a delay in rendering the UI and the app will feel sluggish. This check helps verify that every operation done in the onCreate and onPause methods is not on the main thread, and that we are not hitting any API in these methods either. Now, on to how we compute it. Previously we had a solution in which we ran a script, but now we are not
running any script; this runs as a service automatically inside our application, but only in the debug build, not in the release build. How do we compute it? We have a debug implementation: we build the application in debug mode and extend the activity lifecycle there, and in the activity lifecycle callbacks we write out the activity's information as logs: the activity's onCreate time, the memory usage, the CPU usage, the network data; all the information is written to the log. Then we have a logger service whose role is to read the data published by the debug implementation. Because the debug implementation extends the activity lifecycle, we do not have to pass an activity name or application context each time; it captures automatically as we move through the different screens of the application. The logger service reads the data, formats it into the shape we require, processes it, and stores it into our DB layer. Once the data is there, we publish it so it can be checked at the device level, because the application runs on multiple devices, and those devices have different Android versions as well. We can check, say, what the average memory or CPU usage is on Android API 23, and what the difference is on API 24 or 25; we have a fragmented operating system in Android, so we check across versions. The DPI level also makes things vary, since rendering is internally processed at different DPIs, and Android has a segmented range of screens too, some devices at 4.5 inch, some at 5 inch or 6 inch, so the metrics will certainly differ.
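The logger service's parsing step can be sketched as below. The pipe-separated log format is entirely hypothetical; the real debug implementation defines its own.

```java
// Sketch of the logger service's parsing step. The pipe-separated log
// format below is hypothetical; the real debug build defines its own.
public class LifecycleLogParser {

    public static class Record {
        public final String activity;
        public final long onCreateMs;
        public Record(String activity, long onCreateMs) {
            this.activity = activity;
            this.onCreateMs = onCreateMs;
        }
    }

    // Expects lines like: "NFR|HotelDetailActivity|onCreateMs=850"
    public static Record parse(String logLine) {
        String[] parts = logLine.split("\\|");
        if (parts.length < 3 || !parts[0].equals("NFR")) {
            return null; // not one of our debug-implementation lines
        }
        long ms = Long.parseLong(parts[2].replace("onCreateMs=", ""));
        return new Record(parts[1], ms);
    }

    public static void main(String[] args) {
        Record r = parse("NFR|HotelDetailActivity|onCreateMs=850");
        System.out.println(r.activity + " took " + r.onCreateMs + " ms");
    }
}
```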
For tracking the individual kinds of devices, we have different test devices configured for capturing the data. Here the shared preferences usage is about 203 KB. Google suggests shared preferences should not grow beyond roughly 1400 KB, but many applications store more than 2 or 3 MB there, and then you will definitely find delays while rendering and during read/write operations, so we provide that health check here. We have also set thresholds at 80 percent: if a metric crosses its threshold, an email is automatically triggered and sent to all of us saying the application's memory performance has exceeded the threshold we set. That is the overall application performance monitoring. The second piece is activity lifecycle performance. As we discussed, in the end everything we see or visualize is an activity. Here is an issue we found while running our suite on our application: initially the onCreate time was 3.65 seconds and the onPause time was about 4 seconds, which is quite high; it should not be greater than 1600 or 1700 milliseconds. We checked and found that many things were being done on the main thread itself, so we optimized, moved work to a background thread, and stopped some heavy operations like bitmap loading, and we were able to bring the onCreate and onPause times under 1600 milliseconds. So it is recommended not to do any costly operation in onCreate or onPause: we are transitioning between screens, and a heavy operation on the previous screen will definitely delay bringing up the next one. The next thing is UI performance and slow rendering; Google provides similar information using the same underlying methods.
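Coming back to the shared-preferences health check from a moment ago: it boils down to measuring how much is stored on disk and comparing against a KB limit. A sketch follows; on a device the directory would be the app's `shared_prefs` folder, but here any directory works, and the names are assumptions.

```java
import java.io.File;

// Sketch of the shared-preferences size health check: sum the file sizes
// in a directory (on a device this would be the app's shared_prefs
// folder) and compare against a KB threshold.
public class SharedPrefsSizeCheck {

    public static long directorySizeBytes(File dir) {
        long total = 0;
        File[] files = dir.listFiles();
        if (files == null) return 0; // missing or unreadable directory
        for (File f : files) {
            if (f.isFile()) total += f.length();
        }
        return total;
    }

    public static boolean exceedsKb(File dir, long limitKb) {
        return directorySizeBytes(dir) > limitKb * 1024;
    }

    public static void main(String[] args) {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        System.out.println("size bytes: " + directorySizeBytes(dir));
    }
}
```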
They call them frozen frames or slow frames, while we capture janky frames; the definitions differ, but internally they are interlinked. Janky frames are frames that take more than 16 milliseconds to render on the UI. In every refresh cycle, the Android UI should render 60 frames per second; if frames are missed, you can definitely feel sluggishness and bad behavior in the application. In Android's terms, frozen frames are frames taking more than 700 milliseconds in a particular session: if a frame cannot render within 700 milliseconds, that is captured and reported in the Google Play Console. They are interlinked in that, if we can optimize away the janky frames, there will certainly be no frozen frames showing up in the Play Console. What do we cover here? Janky frames at the activity level: how many frames were rendered, how many of them were janky, and the time taken by each frame to render on the screen. For this we have a FrameMetrics implementation, an API Google provides for Android N and above, which we plug into our activity lifecycle; we capture the activity name, the total number of frames rendered and how many were janky, and publish it to our DB. This is the actual code snippet: there is a class, ActivityFrameBuilder, which implements a FrameMetrics-based builder and provides the activity lifecycle hooks; it only supports Android N and above. Through the overridden method it automatically captures the frame information as the activity switches from one to another, so we capture the activity name, the frame time, the average frame time and the total number of frames rendered for a particular screen.
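The frame classification just defined is a pure computation over frame durations; the 16 ms and 700 ms cut-offs are the ones from this slide, and the frame times in `main` are invented.

```java
// Classifies frame render times (ms) using the cut-offs from the talk:
// > 16 ms is a janky frame, > 700 ms would count as a frozen frame.
public class FrameStats {

    public static int jankyCount(double[] frameTimesMs) {
        int count = 0;
        for (double t : frameTimesMs) if (t > 16.0) count++;
        return count;
    }

    public static int frozenCount(double[] frameTimesMs) {
        int count = 0;
        for (double t : frameTimesMs) if (t > 700.0) count++;
        return count;
    }

    public static double averageMs(double[] frameTimesMs) {
        double sum = 0;
        for (double t : frameTimesMs) sum += t;
        return sum / frameTimesMs.length;
    }

    public static void main(String[] args) {
        double[] frames = {8, 12, 15, 20, 33, 710};
        System.out.println("janky: " + jankyCount(frames)
                + ", frozen: " + frozenCount(frames));
    }
}
```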
This is the representation: for the whole app, the total numbers, and this one is for the particular UI or activity you are looking at. This is our home screen: the total frames rendered were about 10,000, of which approximately 1,700 were janky, and the average frame time was initially 18.89 milliseconds. It should be in a range of no more than 16 milliseconds, so that was quite high; we needed to improve it, we raised the issue, and we got it solved. Now take things like memory, CPU and janky frames: the metrics are different, but they are interrelated. If frames are not rendered properly, the CPU will certainly process more; if the CPU processes more, the battery consumption will be high. Internally they are all connected, so when we find an issue, it is never only a janky-frame issue; we have to backtrack and find what the CPU level was at that time, what the memory level was, and also check the activity lifecycle performance, such as the onCreate time, so we have an idea of where the issue is actually being injected into our code. They are all interlinked. Moving on to the next metric: battery. Battery performance is quite important. Can anyone tell me, if an app consumes a lot of battery, will you keep that app on your phone? You definitely will not; you will move away from it. There is research showing that people uninstall an app basically, and majorly, because of battery consumption, so it again becomes absolutely necessary to capture this metric. How do we capture it? We use the Google Battery Historian tool, which internally reads the Android bug report file to give you the consumers of your battery. The key metrics we get out of this utility are the battery used by an app while it is running, and by the services running in
the background. So that is the information we get. How do we compute it? As Vishwanath has already described, we have an Espresso suite with multiple scenarios, because a user behaves with an app in different ways: a heavy user may use the app continuously, a moderate user for 30 minutes or so; the app could be in a background state, where you open the app, put it in the background and move away to some other app; or the app and the phone could be left idle overnight. Battery is consumed in all of these situations, and these were the areas where we had indications that the battery consumption was quite high. So we built these scenarios into our Espresso suite, and with Jenkins CI in place we integrated the suite and got it automated. After the execution of our tests, we generate a bug report on the device using an ADB command and store it in the device storage. The Battery Historian Docker image is installed on a dedicated server, and that server pulls the bug report file we generated, because Battery Historian needs that bug report file to compute the data. Once the data is computed, not everything in it is what we are looking for; there are the key metrics we talked about, so we built a service around it to get the relevant data, pulled that from the storage server, and stored it in our OpenTSDB. After that it was visualization. So this is how we do it at MakeMyTrip. The reporting is quite interesting: you see two graphs here, one saying battery use, CPU and the other battery use, device. What is the difference between the two? Battery use, device is the actual amount of battery your application consumes while it is executing; the percentage shown here is 0.3%.
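The "service around it" step described above, pulling only the relevant rows out of the Battery Historian output, can be sketched like this. The CSV layout `metric,name,count,seconds` is a hypothetical stand-in for the real export format.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of extracting the relevant rows (e.g. wakelocks) from a
// Battery Historian export. The CSV layout "metric,name,count,seconds"
// is a hypothetical stand-in for the real output format.
public class BatteryReportExtractor {

    public static List<String> wakelockRows(String csv) {
        List<String> rows = new ArrayList<>();
        for (String line : csv.split("\n")) {
            if (line.startsWith("wakelock,")) {
                rows.add(line);
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        String csv = "service,GeofenceService,20,340\n"
                   + "wakelock,LocationManagerService,40,120\n"
                   + "wakelock,AlarmManager,12,30\n";
        System.out.println(wakelockRows(csv).size() + " wakelock rows");
    }
}
```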
Battery use, CPU is the other part: while your application is running, beyond your own code there are system-level services being executed; the kernel takes its share, the CPU takes its share, so battery use, CPU is the portion where the system services are used, and the total is the sum of the two. Then, as we talked about, we had scenarios built up: at the top you see filters naming the scenario, the half-hour scenario, the idle scenario, the overnight scenarios, across devices and across app versions, so we have app-to-app and app-version-to-app-version comparison. That is how we do it. The graph below gives information about services: the launch count of each service and the average time in seconds the service ran. This also gives us signals; there could be services that are not needed at such a high launch count, or services running for much longer than necessary, so you can work on those areas and get that corrected so your battery improves. The third thing is wake locks: the count of wake locks and the average seconds each named wake lock was held. These three metrics surfaced the major issues around battery for us, and we were able to fix them. The issues were things like: the wake-up alarm frequency had increased; there was a location-manager wake-up, basically used for a feature built on the Google Geofence API, which had increased around 20 times compared with the last app version, which is quite high, and we were able to catch it before releasing to production and getting user feedback; and there were increased WiFi scans, where it could be that the application does not need to scan the WiFi radio adapter as many times as it is doing at the moment, so you can get hold of that as well. These were the issues we were able to fix using this
So now we have captured all the key metrics needed for performance testing — if we cannot measure it, we cannot fix it. Now it is our responsibility to fix these kinds of issues, because this comes under the category of performance engineering. Once we raise any performance-related issue, we have to provide a root-cause analysis: why is this issue happening — is it memory or CPU, is it a slow UI, or is it a battery issue — and what is causing it? Performance engineering is basically a manual task in which we capture the heap dump, the thread dump, the method trace, and the battery data, and we check what is actually happening at a particular point in time; then we extract the insightful information and provide it to the developers so they can look into it and fix it.

So performance engineering is mostly manual, but for thread analysis we have one automated solution in which we capture method traces. Method tracing is a profiler feature you can easily get from the Android Profiler: if you run the profiler in Android Studio, there is a method trace option; when you start and stop it, a trace file is generated, which we then present in a human-readable format. This is our automated solution for the method trace file, or the thread dump: it is a utility that runs per activity. The NFR-related test cases come first, and this utility is combined with them; within this framework we have a separate suite for thread analysis. All the test cases we use for capturing the NFRs are integrated with method tracing as well, so for every activity we capture this information and check how many threads are running at that particular point in time.

We take the top 10 threads here that run for the longest duration on a particular screen, and then we look at the sub-calls of each thread. What is a sub-call of a thread? A thread internally calls multiple threads or multiple methods — that is the information we collect — and, third, those sub-calls in turn call our Java classes and Java class methods. For example, you can see here that the class name is avyutils, and one method inside it is being called multiple times. This is basically the framework we have integrated into our Espresso framework: we run our scenario for a particular test script and capture the method trace file — the file is saved on the device itself — and then we process the .trace file. As it is a huge file, more than 80 MB, we need two threads for parallel processing: the first thread captures the thread information and links it with the thread ID; when we send this ID to a different thread, it processes the internal information of that particular thread — the internal sub-calls, and the methods and classes those sub-calls invoke. It captures all such information, and we push this data to our database and visualize it. Here is the code link for the same. And this is the visualization. We have test cases related to our hotel detail page, in which we launch the application, search for a hotel, and land on the hotel detail page where we show all the hotel-related information. For that scenario, these are the top 10 threads — we can limit how many threads to show; the limit here is 10 — that took the most time. This is the thread information, and here is the number of sub-calls: suppose the main thread makes 100 sub-calls, and those sub-calls in turn call, say, 50 more threads.
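The aggregation step described above — ranking threads by time spent and counting their sub-calls — could be sketched like this. This is an illustrative reconstruction, not the actual utility; it assumes the .trace file has already been parsed into simple (thread, method, duration) records.

```python
from collections import defaultdict

def top_threads(records, limit=10):
    """Rank threads parsed out of a .trace file.

    records: iterable of (thread_name, method_name, duration_ms) tuples.
    Returns the `limit` threads with the highest total running time,
    each with its total time and its sub-call (method invocation) count.
    """
    total_time = defaultdict(float)
    subcalls = defaultdict(int)
    for thread, _method, duration in records:
        total_time[thread] += duration
        subcalls[thread] += 1
    ranked = sorted(total_time, key=total_time.get, reverse=True)[:limit]
    return [(t, total_time[t], subcalls[t]) for t in ranked]
```

A method that appears in this output far more often than the scenario warrants is exactly the kind of signal used to backtrack to a suspect class or method.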
So we need to debug into it to check. Finally, we come down to the method level. Suppose in a scenario a method should not be called twice, but it is being called more than 5 or 10 times: then we can easily tell there is something mismatched at that particular class or method level, and we can backtrack and find that this may be a cause — a possible cause, not a definite one. We can find that kind of information here. We also get the entry time and the exit time of a thread — when the thread was introduced and when it was released by the operating system or by our application — so we have all the thread-level information here. It helps when we are going to fix the issue and provide information regarding its cause.

So all the parameters we have covered make up our performance testing. As we said, we capture the overall application performance as well as the activity-level performance. Suppose the overall application figures are high: then we navigate to the test-script level and check memory and CPU; if we don't find anything there, we backtrack and check the activity lifecycle — whether the timings are within threshold or not; if they are not, we check the UI-related performance, where we look at the frozen-frames information; and we can also check things like the database. All the metrics are internally interconnected, so while fixing an issue we may need to re-run all the tasks we have performed here. These are all the things we covered in our session. You can also achieve the same thing with an APM or any other open-source tool — it is not constrained to Espresso itself; it varies from requirement to requirement what you capture. And while we have captured this for the MakeMyTrip app, you can easily capture it for any application available: you just need to pass the package name and specify in what manner you need the data. We have a graphical representation, but suppose your requirement is different and you want the data in a different format — you just need to build your model classes differently. The same utility is plug-and-play; it can be integrated with any application, any e-commerce application available in the market.

Hi, this is Antosh, and I have a question regarding battery consumption. You mentioned that for Android you use the Battery Historian tool to capture the data. Coming to iOS, is there any tool available to capture battery consumption for iOS?

Till now we are capturing it via Xcode itself. In Xcode there is an energy performance tab where you can check all that information, but it is not an automated solution yet; we are working on it. Via the energy performance tab you can check it — though iOS has different parameters: iOS does not have wake locks or those kinds of scans; it has a different scheme and different checks — so you can find a...
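The plug-and-play point above — that only the package name needs to change to reuse the utility on another app — could be sketched as below. The command set is built from standard `adb shell dumpsys` services; the function name and the exact set of captured services are illustrative assumptions, not the published utility.

```python
def nfr_commands(package):
    """Build per-app NFR capture commands; swapping in a different package
    name is all it takes to point the utility at another application."""
    adb = ["adb", "shell", "dumpsys"]
    return {
        # Clear accumulated battery data before a scenario run.
        "reset_battery": adb + ["batterystats", "--reset"],
        # Per-package battery stats (wake locks, wakeup alarms, scans).
        "battery": adb + ["batterystats", package],
        # Per-package memory breakdown (heap, shared preferences footprint).
        "memory": adb + ["meminfo", package],
        # Per-package frame timing (janky / frozen frames).
        "frames": adb + ["gfxinfo", package],
    }
```

For example, `nfr_commands("com.makemytrip")` and `nfr_commands("com.other.shop")` yield identical command sets differing only in the target package.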