Good afternoon, everyone. It was an awesome lunch. Everyone awake? Yes? I'm glad, or else I'll just go to sleep, you know. I really liked the lunch there, really tasty. So, let's get into the game. My name is Sai and I work for ThoughtWorks. It's been about four and a half years with ThoughtWorks, and overall I have about nine years of experience as a QA. I do quite a lot of open source work and contribute my learnings back to the community, because when I started my career as a manual tester, I actually learned a lot from the community, from people like you all here. So I thought it's just not fair for me to keep all that learning to myself, and I wanted to give it back. So I spend a lot of my time on open source, and I also maintain and contribute to the Appium modules and the Appium Java client. So if there's anything you want to catch up with me on, I'm around here today. Yeah, that's me.

Hello, everyone. I'm Srinivasan Shekhar, an Appium member, and a maintainer and contributor to various open source repositories. So if you have any doubts or questions, or if something is broken on the Java client, feel free to poke me. So, yeah, that's me. Let's go ahead and look at native mobile commands in Appium.

So why native mobile commands? Why do we need to talk about them? Since yesterday morning we have been talking about the WebDriver protocol and the W3C standards. Native mobile commands are APIs defined on the Appium side which are not part of the actual W3C APIs, not part of the WebDriver spec itself. They are platform-specific APIs provided by Google and Apple. To give you an example, we have Siri in iOS. Apple exposes an API to interact with Siri, which is not part of the WebDriver spec. The WebDriver spec, designed by Jonathan, Simon, and other folks, is completely generic in nature.
So it is extensible to any platform, any browser, any IoT device. Keeping all of that in mind, it's designed to be as generic as possible: it doesn't have the flavor of a particular platform, a particular browser, a particular iOS device, or a particular IoT platform. Native mobile commands, on the other hand, are APIs provided by the platform vendors like Google and Apple, specific to their own automation libraries, to perform, say, a gesture, or a Siri interaction in iOS. So let's deep dive into what the mobile commands are in iOS and in Android.

How many of you in the room actually use Appium in the mobile test frameworks in your organizations? Quite a lot of hands. Please don't drop them down yet. And how many of your apps actually have gestures, where you really have some test code written in your framework for gestures? And what APIs do you use for gestures? How does your code look? Anyone? Touch actions? Anything other than touch actions? Come on. Okay, so it's like everyone is using touch actions for gestures, right?

So how many of you agree that touch actions are so easy, just work out of the box, and run reliably all the time, and that you're super happy with them and they've given you all the flexibility? Anyone? Okay, so partially, right? You're partially happy, but mostly everyone's unhappy. Why are you unhappy with touch actions, with whatever we currently have in our mobile automation? Anyone? I felt the same too back when I was writing code for gestures. It was really a pain for me as well, so I'm one of you there. Only a few seem very happy in the room; I want to catch up with you.
So the biggest pain point was that it works only sometimes: you get a 200 from the server and no action actually happens on your screen, right? You look at the logs, because you come to the community and say, you know what, it doesn't scroll up, and we ask, can you get us the logs? Your log says 200. So you just can't give us a log with a 200 response, which means it has no errors, and still no action was performed on your device. What we tend to do then is just close the issue, because we can't solve the problem from that.

And there are cases where you have complex gestures in your app. One of the applications I was automating was really crazy: it wanted a digital signature. You have to actually sign. If I have to compare that with anything else, that was like the checkout feature of the entire app, right? Which means my end-to-end flow was just not achievable without it. We really couldn't do it, so we had to do a lot of workarounds for flaky tests and such. I just wanted someone to rescue me from that, with something really cool.

And that's when we got the Actions API, after the W3C protocol that Srini was talking about. Selenium went to W3C, Appium followed Selenium to W3C, and we said, okay, anything in the W3C standard is what we will follow; from the community side, let's respect all the W3C APIs. Which meant we had to make some changes in our code bases, and we went ahead with the Actions API. That solved the problem: I was able to complete the flow of my digital signature using the Actions API. But again, people who were in the workshop will have seen that the Actions API is not exactly straightforward. There are a lot of sequences, you need to add a lot of actions, and all the crazy things.
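For contrast, here is roughly what the raw W3C Actions sequence behind a single swipe-up looks like. All the coordinates, durations, and the helper name here are illustrative assumptions, not values from the talk; the point is just how much ceremony one scroll takes.

```python
# Sketch of the W3C Actions payload behind one swipe-up gesture.
# Every concrete value (coordinates, durations) is an illustrative assumption.

def swipe_up_actions(x, y_start, y_end, pause_ms=200):
    """Build a W3C Actions pointer sequence: move, press, pause, drag, release."""
    return [{
        "type": "pointer",
        "id": "finger1",
        "parameters": {"pointerType": "touch"},
        "actions": [
            {"type": "pointerMove", "duration": 0, "x": x, "y": y_start},
            {"type": "pointerDown", "button": 0},
            # drivers expect a short pause between press and move
            {"type": "pause", "duration": pause_ms},
            {"type": "pointerMove", "duration": 500, "x": x, "y": y_end},
            {"type": "pointerUp", "button": 0},
        ],
    }]

payload = swipe_up_actions(200, 800, 200)
# With a live session this payload would be sent to POST /session/:id/actions
```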
So typically you need to write at least 10 to 15 lines of code just to do a scroll up or a scroll down. You need to get the coordinates, add some sequences: press down, move, and all of that. And you need the right pauses in between, because XCUITest expects a specific pause time, and Android says the minimum timeout for a double tap is 40 milliseconds while XCUITest says it's 200 milliseconds, and so on. Which means you still have to go and write quite a bit of code. That was really crazy, right?

So let's go ahead and see how cool these native commands are. When I was talking to a lot of people, they were like, yeah, iOS is a pain, iOS does this and that. But iOS also does cool things; Apple also exposes good stuff for testing. So let's look at some of the native commands which iOS has exposed, the mobile-colon commands Appium has written on top of them, and how easy they are to use.

Before deep diving into any of the native gestures, can you tell me some of the gestures you have used while interacting with your mobile applications or devices? Can you name a few? Pinch, swipe, scroll. Yeah. Okay, let's take a use case. A lot of people said scroll, and I heard one pinch and one swipe, so let's take the scroll. Whether it's touch actions or the Actions API: say you have a list with a lot of data, you have to keep scrolling, and there's a button at the end of the page. What you tend to do is write a while loop, a for loop, whatever your logic is, and you keep scrolling until the element exists. And all that you need to do is what?
You just have to crawl all the way down, because your whole test case, your aim, is to hit the bottom so you can click that button, right? That's one case. There could be another case where you have to crawl looking for a certain element; that's still a valid scenario, but let's rule that out and take only this case: you scroll all the way down the list and you just have to click that button. In such cases we put it in a loop and reiterate: is the button present? Not there? Give it a scroll. Still not there? Another scroll. And each time there's a call made to XCUITest, which is quite slow. All that time spent scrolling is not what you want; you just need that button so you can perform your click. So why do we have to do all this crazy stuff, scroll like 20 times, when you don't actually do anything in those 20 scrolls, just to get to the bottom?

So let's look at one of the native mobile commands which XCUITest exposes, and how simple it is: mobile: scroll. The mobile-colon family has a lot more gesture-specific stuff; we're going to look at this one. You see how simple that is? You just say the direction, whether you want to scroll up or down, and then you call executeScript with the script "mobile: scroll". That is just going to scroll all the way down so you can find the button. So you can get rid of all your loops and save a lot of time as well; you can straight away hit the bottom of the screen. And since it is a native API, it is much faster.
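The mobile: scroll call just described can be sketched in Appium Python-client style. The stub driver below only records the call so the snippet stands alone; with a live XCUITest session you would call execute_script on the real driver object.

```python
# Sketch of 'mobile: scroll' (XCUITest driver). A stub records the call so
# this runs without a device; with a live Appium session, use the real driver.

class StubDriver:
    """Stand-in for an Appium driver; just records execute_script calls."""
    def __init__(self):
        self.calls = []

    def execute_script(self, script, args):
        self.calls.append((script, args))

driver = StubDriver()
# One line replaces the whole scroll-until-visible loop:
driver.execute_script("mobile: scroll", {"direction": "down"})
# Other directions: "up", "left", "right". The command can also scroll to a
# specific element by accessibility id ("name") or an iOS predicate string.
```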
I remember I had a hard time with pinch even on the Actions API, because I had to compute crazy coordinates: two pointers have to move in two different directions in X and Y just to zoom. Here it's so simple: you say mobile: pinch and just give it a scale. That's it.

Now, these commands are very tightly coupled to the iOS platform. If you ask me, will this work on my Android? No, it will not. These commands will not work on Android; they are very specific to iOS. So it's something you really need to decide: if you're using a cross-platform framework and your app is cross-platform, you need to see whether you want to use these, or go back to the Actions API so you can have one code base that works for both. You can still have a single code base using these gestures as well; it's just about the different APIs that iOS and Android provide.

Maybe let's look at complex gestures. What do you want to look at as a complex gesture? Double-tap? Yeah. Maybe even a two-finger tap; that's even more complex than a double-tap. Has anyone done a two-finger tap, or even looked at it, with touch actions or anything like that? Two fingers, tapping on stuff. Yeah, one hand there. A quite common use case is when you have Google Maps or something similar integrated: you do a two-finger tap to zoom. When I tried this back then using touch actions, wow, it was so difficult. I had to create multiple actions, merge them, and do all these crazy things. I used to keep my fingers crossed and start praying for my test to pass for that specific action. It randomly works, randomly doesn't. Sometimes it fails, sometimes it gives a 200, but nothing used to happen.
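A sketch of the mobile: pinch call (XCUITest only). The scale and velocity values below are illustrative, and the stub again stands in for a live session.

```python
# Sketch of 'mobile: pinch' (XCUITest driver, iOS only).

class StubDriver:
    """Stand-in for an Appium driver; just records execute_script calls."""
    def __init__(self):
        self.calls = []

    def execute_script(self, script, args):
        self.calls.append((script, args))

driver = StubDriver()
# scale > 1 zooms in (pinch open), scale < 1 zooms out (pinch close).
driver.execute_script("mobile: pinch", {"scale": 2.0, "velocity": 1.0})
```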
And finally I just got rid of the test and started tapping it manually. Fine, at least I'm satisfied that something works correctly in my app. So let's see how simple that specific action is with these mobile-colon commands. If you look at the last one there, it says mobile: twoFingerTap. That's it. Just give the element on which you want to perform the two-finger tap and say mobile: twoFingerTap. These commands are sent down to XCUITest, which takes care of the rest. Which means your gestures get very, very simplified using the native commands. It's similar for double tap, or even for plain taps: you give the X and Y coordinates where you want to tap, and it performs the tap. How many of you now feel relaxed that there is a solution far simpler than what we use today? Cool.

So far we have seen gestures on iOS. Another common use case on any platform, iOS or Android, is switching between apps. There are a lot of use cases in day-to-day apps, at least for payments, where we go from one app to another app, do the payment, and come back to the first app to see whether the payment was successful or not. Or cases where you open a link and you have two browsers installed, and a pop-up asks which one you want to open it with; you go back and forth between that and the existing app. So a lot of these complex end-to-end use cases are things we're used to in our day-to-day life when we do a payment or switch between multiple apps. For example, you are working in a particular app and you get a call from someone; the phone switches to the caller screen and then comes back to the original app.
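The tap-family commands above can be sketched the same way. Note the argument key for the target element has varied across XCUITest driver versions (elementId vs element), so treat the exact key, the placeholder element id, and the coordinates as assumptions.

```python
# Sketch of the tap-family commands (XCUITest driver).

class StubDriver:
    """Stand-in for an Appium driver; just records execute_script calls."""
    def __init__(self):
        self.calls = []

    def execute_script(self, script, args):
        self.calls.append((script, args))

driver = StubDriver()
element_id = "fake-element-id"  # placeholder for a previously located element

# Two-finger tap and double tap target an element; plain tap takes x/y.
driver.execute_script("mobile: twoFingerTap", {"elementId": element_id})
driver.execute_script("mobile: doubleTap", {"elementId": element_id})
driver.execute_script("mobile: tap", {"x": 100, "y": 200})  # illustrative coords
```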
So let's talk about switching between multiple apps in iOS: between multiple native apps, or a web app, or even the Settings app. There are a lot of cases where we go and check permissions in the Settings app in between, to see whether our application has camera permission or whatever other permissions we've given it, and then perform actions based on that. It's a common end-to-end use case which we tend to call not automatable, but it actually is automatable using native mobile commands. And just to add to that, this is doable even on Android; there's a different API for it, and it's been there for quite some time: you can start a new activity, do some interactions, and fall back to your old activity, which is essentially also switching between two apps.

There are two or three APIs involved behind the scenes. One is launching an app: if you give the bundle ID of a particular application, it launches that app. If you want to close the app, you can use the close API. And if you want to query the state of the application, whether it's open or closed, there's another API called queryAppState which gives the state of the application back to the user. So while you have control of one app you can switch to another app, perform some actions on it, and then come back. Maybe a quick demo. Would you like to see a live demo or a recorded one? Live?
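Before the demo, the three endpoints just listed can be sketched with the Python client's convenience methods (activate_app, terminate_app, query_app_state). The stub below mimics them so the snippet runs standalone; com.apple.Preferences is the real Settings bundle ID, while com.example.app is a placeholder for the app under test.

```python
# App-switching sketch: launch, close, and query-app-state.

FOREGROUND = 4   # Appium app-state code: running in foreground
NOT_RUNNING = 1  # Appium app-state code: installed but not running

class StubDriver:
    """Stand-in for an Appium driver; tracks which app is foregrounded."""
    def __init__(self):
        self.foreground = None

    def activate_app(self, bundle_id):
        self.foreground = bundle_id

    def terminate_app(self, bundle_id):
        if self.foreground == bundle_id:
            self.foreground = None

    def query_app_state(self, bundle_id):
        return FOREGROUND if bundle_id == self.foreground else NOT_RUNNING

driver = StubDriver()
driver.activate_app("com.apple.Preferences")   # jump into the Settings app
# ... check Siri registration inside Settings here ...
driver.activate_app("com.example.app")         # back to the app under test
state = driver.query_app_state("com.example.app")
```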
So if you look at the API, I have given the bundle ID of the Settings app, which is nothing but the iOS Settings app you use to check what applications are installed, or to change the state of an application. I want to launch the application under test, then go to the Settings app, check whether the application is registered with Siri, then come back to the original application and perform some gesture on it. I have my Appium server running locally, so let's go ahead and execute.

So it opens the sample application, which just shows some random pictures, and then it switches to the Settings application to check whether this application shows up under Siri. Behind the scenes it is installing WebDriverAgent; this is one of the painful things that used to happen on iOS. So it installs WebDriverAgent, launches the application, then switches to the Settings application, clicks on Siri, checks whether Shortcuts has the application under test, and then performs some other actions.
So now we can easily switch between applications using these simple APIs. Just to reiterate, we have three APIs: one to launch an application, another to close an application, and the next to query the state of an application. And these work on simulators as well as real devices; it's not restricted to simulators. It's not restricted by platform either; it works absolutely fine on Android as well.

So, how many of you have iPhones here? How many of you set your alarms using Siri? At least a few hands. So Apple has recently exposed an API to interact with Siri and test your application. Nowadays we see a lot of applications with Siri interactions: you can perform a gesture, launch an application, or make a purchase using Siri. When I said "Hey Siri, what's trending", our PM just popped up from Twitter. So, maybe a quick demo on Siri shortcuts. A lot of applications now have a conversational interface integrated; some Android applications even have Google Now integrated, and here we have Siri integration. So let's see how you interact with your application using Siri.

I have a sample application, and again a live demo. If I ask the application, "Hey Siri, show picture": in the previous demo we saw the click operation on "show picture", which just pops up some picture. So if I say "Hey Siri, show picture", it pops up a picture. That's how it works manually. And the next time you ask for a picture, it randomly picks another image, so in the next scripted run it should be something else. So again it launches the application, installs WebDriverAgent, launches the application back, switches to Siri, and, how many of you saw that on the Mac? It's cool, right?
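The Siri interaction in that demo boils down to a single command, mobile: siriCommand, which presents Siri with the given text as if it had been spoken. A sketch, with the stub again standing in for a live XCUITest session:

```python
# Sketch of 'mobile: siriCommand' (XCUITest driver, simulators and devices
# where Siri is enabled). The phrase mirrors the demo in the talk.

class StubDriver:
    """Stand-in for an Appium driver; just records execute_script calls."""
    def __init__(self):
        self.calls = []

    def execute_script(self, script, args):
        self.calls.append((script, args))

driver = StubDriver()
driver.execute_script("mobile: siriCommand", {"text": "show picture"})
```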
So we could also automate Siri, which means you're not tied only to the functional parts of the application itself. Every time you execute it, it shows a different picture. So now we have APIs, through native mobile commands, to interact with Siri and perform a specific action in your application.

And how many of you would get super excited to know that we have APIs which can give you the performance of your app? And that's iOS. iOS has APIs with which we can actually measure the performance of your application. Basically there are APIs where you say start profiling, go ahead and perform your scrolls, clicks, or whatever you want to do, and then say stop profiling. That gives you a trace file, and you can drop the trace file into Instruments or Xcode, which gives you a dashboard showing how your app is performing. This is Xcode: if you go to the debugging pane in Xcode you get this view, and if you really want to see which specific threads are doing crazy things in your application, you can drop the trace file into Instruments, which shows your main thread and everything else happening in your app. I think this is a very, very cool feature in the mobile-colon commands, aligned with what XCUITest exposes.

So Apple gives us time profiling: you can look at a specific period of time and see which threads got invoked during it, how much memory was consumed, and which thread led to a spike in CPU usage. There's a lot of profiling that Apple gives us by default. If you invoke the native mobile commands, it gives you a trace file, and as I said, if you open it in Xcode Instruments it gives a bird's-eye view of what my application looks like in terms of performance. And if you
deep dive into the tool, you can see how many threads got spun up and what exact code led to the spike in memory or CPU or anything else related to performance. So it's quite easy to capture performance during the course of your automation execution. Maybe I can show the piece of code. There, on line 82, you set the profile name, whatever profile name you want to keep, and on line 83 you call executeScript with mobile: startPerfRecord to start the performance recording. Then you go ahead and do all the actions your actual test cases are supposed to do, and once that's done, you stop the performance recording. What we are doing here is: for the current process ID, for a particular timeout, capture the time profile through Xcode. We can also ask it to capture a memory profile.

It gives us a trace file, but if you look at it when I execute, it doesn't give you the trace file directly; it returns a base64 string. You have to decode that and write it into a trace zip file, and then you'll have the trace file. If you look at the code, we just take the base64 string and write it into the trace zip file. So I performed some operations with the performance capture running, and after the operations I stop capturing the logs. If you open the resulting trace zip file in Instruments, it gives us a nice view of what the memory consumption looks like for this particular app. Again, this is captured by the Xcode build tools, nothing by Appium, and for a particular process ID, not all processes, so it is completely specific to your application.

And, you know, a lot of people raised their hands saying you use iPhones. What
sort of lock do you use on your iPhones? Face ID? Anyone using a pattern? Anything else? Mostly face and numeric, yeah. So we have a lot of people using face, numeric, biometric. Cool. So Apple also exposes a couple of APIs to interact with the OS biometrics, whether fingerprint or face. This works only on iOS simulators, not on real devices. So you can unlock any Apple device that has biometrics enabled; they recently took fingerprint recognition off the newer devices, and now we have face unlock, and it works perfectly fine for face unlock as well. And again it's behind a mobile-colon command: you say mobile: sendBiometricMatch, and you pass faceId with match true, or with the older devices you pass touchId, and it treats it as a touch ID match. And these are not the only mobile-colon commands; you should go back and look, there are a lot more mobile-colon commands for iOS. Didn't I say iOS also does cool stuff? I'm happy with that; it actually opened a lot of doors for testing, right?

So just to summarize what we went through: we looked at face unlock, we looked at how you can profile your app and measure it, and we looked at some of the cool gestures that used to be a big pain point for us and how easy they are now. Among gestures we saw two-finger tap, double tap, tap, swipe, scroll, and a lot more. Go back and take a look at the other mobile-colon stuff for iOS; I would strongly recommend that.

So let's go ahead and see what's there on Android. Android recently released a feature called Instant Apps. Do you know what an Instant App is, have you explored Instant Apps? If you go to the Play Store and open any of the apps which have instant mode enabled, for example Candy Crush Saga, you don't need to install the application on
the device. It means you avoid a lot of applications getting installed on your device, but you can still try them out without installing the entire application. If you click on Try Now, it just opens as an instant app. You can still go to the apps section in Settings and see a minor version of it installed, but it probably doesn't show up in the main app list. So it helps you try out a particular application and explore it before you even install it. Candy Crush has enabled this feature, so if you want to try Candy Crush you can go ahead and try it. So these are the scenarios; how do you automate this on Android?

Just to add to what Srini said on the instant app stuff, something that came to the top of my head: the URL you see there is not just some random URL. You actually need to configure this URL in the Android manifest file, which means your app should be prepped to be an instant app; I think you set instant app to true in your manifest, and you need to know what the URL is, because clicking it is a navigation step. Your developer should definitely do that, and you need to know the URL and the package for it to quickly open on Android.

Google provides three ways to deep link into your application. One is instant apps. Next, if you know the URL and the package name, you can launch it through the command line as well. For example, if you have made a payment and got the order confirmation email, you can land directly on your application's orders page from that email, without even opening the home page; you launch directly into that activity. That's another way of deep linking. Another way of deep linking is
what we have seen with switching between apps: if you have three or four browsers installed, or three or four payment gateways installed, and you want to make a payment, it gives you a list of the payment gateway UPI options installed. That's another form of deep linking. So again, there are three ways to deep link: one is instant apps; next, if you know the URL, you can land directly on a specific activity of your application; and the third is going to the browser and launching the URL, switching between apps.

Google also provides an API to perform these kinds of actions, the deep link API, where you specify the package you are going to interact with and the URL of the activity you are going to interact with. If you give these two pieces of information, deepLink takes the two parameters and launches that particular activity, not the home activity. Now, older Android API levels don't support this feature, because it's quite new to the market. So what Android does in that case, you might have noticed it: when you launch such a URL, it goes from the application to Google Chrome, opens the URL there, and then redirects back to that exact application. Google Chrome acts as a bridge here. If the Android API level supports the feature and you have the application installed, it directly launches the application; and if you don't have the application installed, it takes us to the Play Store to install it. So these are the kinds of deep linking exposed by Android, and all of this is possible through the deep link
API through the native mobile commands of Appium. Cool. So let's look at another very interesting API, which is very specific to the Espresso driver. Most of you have seen these hamburger menus in applications, right? The Google Play Store has one; most applications have a hamburger menu. What you typically do is either tap on the hamburger so the drawer slides out, or swipe from the edge so it comes out. Previously, with touch actions, you had to find the exact coordinates, go to the corner of the screen, and then move, which means you subtract, divide, multiply to get the coordinates, and then try to make it work. Still a pain, right?

With these APIs we have something called the openDrawer API, which works only with the Espresso driver, not with UiAutomator2. As you see on the screen, you pass a gravity value; the gravity specifies the edge the drawer opens from. So you set the gravity, find the element on which you want to perform the action, and then you say mobile: openDrawer, and it swipes the drawer open based on that gravity. When you try to perform this kind of drawer gesture with the Actions API, it is highly likely that you'll get an element-out-of-bounds exception. So these are APIs that have been exposed natively to solve these problems. And especially, like Srini said, there can be some random crazy errors: it says your coordinates are outside the screen, you go crazy seeing those numbers in the exception, and then you decide, okay, let me debug, reduce the coordinates, and so on. You can get rid of all those things and start using these APIs if you are actually moving to the Espresso
driver. And again, the Actions API with X and Y works fine for this use case on some devices but not on others; even if you adjust the X and Y coordinates, it sometimes gives an element-out-of-bounds exception.

Let's see another API, specific to another problem, which was always a pain for people to deal with, including me and Srini here: trying to set a date and time with the default date pickers and time pickers on that crazy clock widget, right? It gave me a lot of pain, trying to set a time, finding the coordinates. I see a lot of people nodding: yes, I know, I understand. It's quite easy now, and again this works only with the Espresso driver. All you need to do is say mobile: setDate and give the date parameters as you see there, the year, month, and day you want to set, and the Espresso driver helps you with that. Which means you no longer need to inspect, grab your XPaths, and do all that crazy stuff; you really don't need that any more. Just use these APIs; they're straightforward, and they set the date for you. We even have an API for setting the time: you specify hours and minutes with mobile: setTime and it just sets the time. This has been exposed only on Espresso, and it works perfectly fine there. But if you have custom components, it's not recommended: there are a lot of custom date pickers in the market, and if you have a custom date picker you still have to go through the pain.

How many of you would love it if we could do some profiling like the way we did on iOS? Android can do that as well.
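Going back to the date and time pickers for a moment, the two Espresso-driver calls just described can be sketched like this. The parameter names follow the Espresso driver's setDate/setTime commands, and the picker element id is a placeholder assumption:

```python
# Sketch of 'mobile: setDate' and 'mobile: setTime' (Espresso driver only,
# stock Android DatePicker/TimePicker widgets).

class StubDriver:
    """Stand-in for an Appium driver; just records execute_script calls."""
    def __init__(self):
        self.calls = []

    def execute_script(self, script, args):
        self.calls.append((script, args))

driver = StubDriver()
picker_id = "fake-picker-id"  # placeholder for the located picker element

driver.execute_script("mobile: setDate",
                      {"year": 2019, "monthOfYear": 6, "dayOfMonth": 15,
                       "elementId": picker_id})
driver.execute_script("mobile: setTime",
                      {"hours": 14, "minutes": 30, "elementId": picker_id})
```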
Android can do it, and Appium has adopted it, which means you can see what your battery consumption is while you're automating your app, how your Wi-Fi behaves, or what your CPU is doing. Basically, it's your adb shell. So what do we do in Appium now? We can execute any shell command provided by adb, the Android Debug Bridge. For example, if you run adb shell service list, it gives you the full list of services you can reach through the adb shell. If you pick battery, adb shell dumpsys battery gets you information about the battery on your particular device. Maybe capture it before and after execution and see how much battery was consumed, or at a series of intervals, using the mobile: shell command with dumpsys as the command. If you want memory information, you can use dumpsys meminfo; it gives you the memory information as well. And if you want to capture CPU information, how big a spike you hit while the app is running or while you're performing complex actions, you can easily keep track of performance too, whether it's battery, CPU, or memory. If you run that service list command, it gives you every service the Android Debug Bridge supports; if Android supports the service, you can go ahead and use it through mobile: shell. That's so cool.

We just picked three or four random APIs, keeping time in mind, but go back to the docs: there are lots and lots of these APIs exposed for both iOS and Android. Go into the Appium docs and take a look; I think there are roughly 40-plus mobile: APIs for iOS as well as Android. We might think a lot of use cases in our applications are not automatable, but quite a lot of them actually are.
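A sketch of how that flow looks from the Java client. The only real API assumed here is executeScript with the mobile: shell command, which requires the Appium server to be started with the adb_shell insecure feature enabled (e.g. --allow-insecure adb_shell); the helper just builds the argument map:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class ShellCommands {

    // Argument map for "mobile: shell", which runs an arbitrary command
    // through adb on the device. The Appium server must allow this,
    // e.g. started with --allow-insecure adb_shell.
    static Map<String, Object> shellArgs(String command, String... args) {
        Map<String, Object> payload = new HashMap<>();
        payload.put("command", command);
        payload.put("args", Arrays.asList(args));
        return payload;
    }

    // With a live AndroidDriver, a battery snapshot before and after a
    // test run would look like:
    //   String battery = (String) driver.executeScript(
    //       "mobile: shell", shellArgs("dumpsys", "battery"));
    // and likewise shellArgs("dumpsys", "meminfo", "com.example.app") or
    // shellArgs("dumpsys", "cpuinfo") for memory and CPU figures.
}
```

Diffing the dumpsys output from two snapshots, or polling it on an interval, gives a rough battery/memory/CPU profile of the test run without any extra tooling.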
Quite a lot of APIs have been exposed by the platform vendors themselves. In the case of Android we have UiAutomator2-specific mobile commands; in the case of Espresso we have Espresso-specific mobile commands; setDate and setTime are all specific to Espresso. So go ahead and explore all these native mobile commands in Appium. Thank you.

[Q&A] All these APIs are not specific to any client; they're defined on the server side, so you can take any client, Ruby, Java, JavaScript, and it works absolutely fine. It's already there in the Java client; in fact, the demo was in Java. All of this goes via the executeScript API; we don't have dedicated client methods for each command. For example, if I had a dedicated method for getting memory information: there's a lot of information you can get, and if you hit adb shell service list it gives me 120 services. We can't define methods for every one of them the way we'd want to, so we wanted our users to experiment with the APIs themselves.

That is also one of the reasons we wanted to move away from the TouchAction APIs. If you look at the TouchAction APIs and you want to perform a swipe, they're defined such that you have to apply a press, then wait for a second or some seconds, then move to a specific location, then perform the action. But our requirement could be: I press and just move, I don't need to wait. That's how the TouchAction APIs are defined. The Actions API, on the other hand, is defined generically: I can configure whether I want to wait or not, and go ahead and execute. The mobile: methods, in turn, are a simplified set of APIs provided by Apple and Google to interact with your application. If you write a wrapper on top of them, it might make your life easier, but maybe not for everyone, so we wanted to keep these methods as generic as possible.
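To make the contrast concrete, here is a sketch of the raw W3C actions payload a swipe reduces to. In real test code you would build this with Selenium 4's PointerInput and Sequence classes and pass it to driver.perform(...) rather than hand-rolling maps; the version below just shows the shape, with every pause opted into rather than baked in:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SwipePayload {

    private static Map<String, Object> action(String type, Map<String, Object> extra) {
        Map<String, Object> a = new HashMap<>();
        a.put("type", type);
        a.putAll(extra);
        return a;
    }

    // The W3C action steps a vertical swipe boils down to: move to the
    // start point, press, glide to the end point over durationMs, release.
    // Note there is no mandatory pause between press and move -- every
    // wait is opted into, which is the flexibility TouchAction lacked.
    static List<Map<String, Object>> swipe(int x, int startY, int endY, int durationMs) {
        List<Map<String, Object>> steps = new ArrayList<>();
        steps.add(action("pointerMove", Map.of("duration", 0, "origin", "viewport", "x", x, "y", startY)));
        steps.add(action("pointerDown", Map.of("button", 0)));
        steps.add(action("pointerMove", Map.of("duration", durationMs, "origin", "viewport", "x", x, "y", endY)));
        steps.add(action("pointerUp", Map.of("button", 0)));
        return steps;
    }
}
```

If you do want a pause between press and move, you add an explicit {"type": "pause", "duration": …} step; if you don't, you simply leave it out, instead of the chain forcing one on you.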
Audience: I have a question, two questions. Do we have any idea when the APIs exposed by Espresso will also be available with UiAutomator2?

No, that's not possible at all. UiAutomator2 has officially been deprecated by Google, and still UiAutomator2 works fine; Google hasn't stopped the support. If you look at the code base, the code is still residing there, and they exposed a specific set of APIs. Espresso works differently: it's a different team, Google invests a lot of money and time in it, and they expose a different set of APIs; the two don't know each other.

Audience: So would you tell all the people who are using Appium to move on to Espresso?

Again, it depends on the context. The way forward in the future is going to be Espresso. Maybe Google will stop supporting UiAutomator2; they deprecated it two years back, the code base is still there and they're not working on it, but it works absolutely fine, and a lot of workarounds have been done on the Appium side as well. Still, the way forward could be Espresso. That's why we want our community to explore Espresso, raise as many bugs as possible, and help bring Espresso up to the state that UiAutomator2 is in now. Thank you.

Thank you very much, speakers. Thank you so much.