Hi everyone, welcome to this session on leveraging record and playback for mobile games and XR, by Naomi Ferreira. Welcome, we are glad you could join us today, Naomi. Hi, I'm glad to be here. So without further delay, over to you Naomi. It's all yours. Thank you. Just to add to that introduction: you may know me online by my handle — I've put my Twitter account and my blog on the slide. The blog is a little frozen right now, but for a good reason: I'm writing a book, so my energy is going there. It will be back, and it already has a lot of good content that you may want to check after this presentation, to build on the concepts we talk about here. As Roger said, we're talking about leveraging record and playback for mobile games and XR, and this is the agenda for today. First we'll talk about the advantages and disadvantages of record and playback tools: what they are good for, what they are not so good for, and how we can make the best of them. Then we'll look at the AirTest project, which is one record and playback tool — all the concepts in this talk can be applied to other record and playback tools; this is just the example, and I chose it because I've used it before and I've worked with it. Then we'll see how to enhance the page object model — most of you will be familiar with it, but we'll go over it as well. After that, how to deal with screenshots in a maintainable way: record and playback tools often take advantage of screenshots, so how can we handle all of that so the code stays maintainable? How can we deal with different resolutions? And with multiple object finders — when we're dealing with websites, or in this case phones, we have different locators to find the objects, so how do we deal with those?
Then we'll talk about dealing with multiple platforms — Android and iOS are going to be the examples — and about dealing with multiple devices, when you have more than one device. That's very handy, for example, for apps like games that may have two players whose devices have to communicate with each other. And then we'll talk about XR. XR, cross reality, is sort of an umbrella over VR, AR and the other realities — virtual reality, augmented reality and so on, all combined into cross reality — so we'll talk a little about it and how we can still do maintainable automation with it. Then applications and conclusions, and finally we'll have some time for questions. That's a total of eleven points, and hopefully you'll enjoy it all the way through. Okay, so let's get started: record and playback tools. What are their advantages? We generate code faster than if we wrote it manually, because we just click around and get the result. Also, someone with mixed skills who may not know programming can still create tests that are repeatable and rerunnable. We avoid writing repetitive and boring code — when we write automation, some parts are very repetitive and boring and we end up copying and pasting a lot. With a recording tool we avoid all of that, because the tool is doing it for us; the code is still repetitive, but we don't have to write it. We also get other ways of finding objects beyond the DOM. For those who don't know it, the Document Object Model is an API for valid HTML and well-formed XML documents that defines the structure of the document and how the objects in it are accessed and manipulated — it's what we use to find locators and elements when we're dealing with our automation.
And finally, it can work for different applications: it has functionality for XR, for games, and for other hard-to-test applications. But if it has so many advantages, what are the disadvantages — why isn't everyone using it? Well, it has low maintainability: once you record something, it's really hard to add anything to it or remove anything from it. It also has low reliability: especially when the UI is changing frequently, we may have to go and record the whole thing again for the test to work, because the UI has changed — maybe the icons don't look the same — and the test will break. It has low scalability as well: if I want to scale it to more devices or more types of testing, all of that is usually very difficult with this sort of tool. Because of this, it's also harder to reproduce on other platforms: if you have code working for one platform and now you need basically the same steps and tests on another platform, you have to generate that code again. It has low reusability too: you need to record again for another test that is very similar. The most common example is a login followed by something else — if you then record another test, you also have to record the login again before doing the new steps. These sorts of things are what make record and playback tools not so popular. And it also leaves a lot of trash, especially when you're dealing with screenshots, which is one of the features of the tool I'm presenting today. All the screenshots you take are left lying there, and unless you maintain it well and go and remove the screenshots that were there before, there's going to be some trash.
The same goes for the tests themselves: if you create another test, your old test is still somewhere on your computer, you have to remember to delete it, and that's an extra step. So what is the AirTest project? It's the example I'm going to use for this presentation. The AirTest project consists of two things: Poco, which is the backend, and the IDE, which is kind of the front end. I don't want to go too deep into it, but just so you understand how it looks, and so we have a working example for all the concepts we're talking about, here is how it looks — let me just get the lazy pointer. So we have connected to our mobile device here, and we can see it here on the right-hand side of the screen. Here we have all the capabilities for screenshot automation. Here we have the capabilities for many different applications — we can see the DOM of Android, of iOS (I just created a new file), and here is Selenium, if you want to automate with Selenium. So I just click the record button here, and then I can do things on my mobile phone and it will be recorded here. It records all of these different steps via touch. It may be a little small, so I hope you can see it well, but I also have examples with the code on the slides, so you'll see it bigger. You can see here I can click around and play my game. This game is called Dancing Line 2, in case anyone wants to install it, and basically you have to tap in time with the song — there's also a visual cue telling you when to tap to pass the level. It's pretty fun, I really like it. It has a little bit of advertisement, but what can you do — that's normal. Okay, so we've seen how it works.
So let's see how to enhance the page object model. I'm pretty sure most of you know what the page object model is; it looks something like this, for those who don't. Basically we have a number of classes that are going to be our model, where we have our test case logic — for each different feature we'll have test cases related to it. On the other side we have classes called from the model, called pages, in which we have our page element definitions. So what's a page? A page can be a view; it could be a URL endpoint; and for some apps, like games, it's maybe not so clear — it could also be a form, a window, a screen, or a level. What a page is depends on the app. The idea is to keep similar locators together, all in one place, so we can easily change them. If something needs to be changed, we don't have to go through the test case code, figure out everywhere the locator is being called, and change it one by one, trying to work it all out in a large codebase. We separate it and keep everything cleaner. So how can we enhance this to also allow us to do things like record and playback and screenshots? Basically, we'll have a folder where we keep our screenshots; our page is going to reference those screenshots, and we'll be able to retrieve the elements from there — everything else stays the same, so it's very similar to what we had before. Our model is going to be a .air file, because we're using AirTest; that's not so important, just so you know why you see it. Then we're going to have a folder for pages — I call it that, but you can call it anything — and then our .air folder, which is going to be for our model.
In the case of AirTest we'll see the .air extension; otherwise we don't need it. Okay, so how will the pages look? This is very simple Python code, so I hope everyone can follow. For this first example I'm using driver.find_element_by_id, which is closer to a website than to Android or iOS — we'll see later how to do it for those, but page object models are usually explained with websites, so I'm following that to keep the code familiar. So I have a list of elements, and then I have some methods, like this click_button one, where I use touch to act on the element. And the model looks something like this: I import the AirTest API to do the touch and the Android and iOS related routines, and I import the page. The page file name is whatever I have inside that folder, and HomePage is the name of the class — we'll see it in an example in a little bit, so don't get too lost. Then we do the init, instantiate the class we imported, and click on the button. So basically we've separated our pages from our model. There's one extra bit that is always the same, so you can just copy and paste it: this part here — you can also use append instead of insert — tells my AirTest .air file that my path is going to contain the other folder, so the code understands where everything is and how to import everything. It's just there to make it work with the tool and with the two folders. Okay, so how can we handle the screenshots? As we saw in the example before, we have a screenshot, but that screenshot can be turned into code, and when we turn the screenshot into code it looks like this: there's a Template.
There's an r here for a raw string, and then all the screenshots taken with AirTest start with tpl, then some ID, and then .png. That's how they're named by default, but obviously you can rename the file and then rename it here. Then there's a record position, which is not so important, and then there's a resolution. This one is important: if we have different resolutions, we can have screenshots with the same ID but different resolutions, or maybe a different record position. The resolution is very important especially when we're dealing with games, or with automation on mobile phones, because phones differ in resolution from one another. And then we have the click_button method again, same as before. So this is how the screenshot looks when we define it as an element. Let's see everything we've been talking about in an actual example. This is the code from before, with the screenshots it generated. What we're going to do is right-click and choose the option to render the image as code, and that produces our code. This is how it looks now: it just replaces each actual image with a Template. What we do next is this: I've created another folder, and we take the screenshots from the auto-generated folder and cut and paste them into the folder we created. You can see here these are the auto-generated screenshots — this one is the first one, then the second, and so on. I don't need this many screenshots; they were just for the example, so I took too many of them, and you'll see that reduced later on. So I go to my page and copy all the screenshots there, and here I also have a Python file, which I also created with AirTest but which is just a simple Python file.
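To show which of those generated fields matter, here is a small stand-in for the Template call the IDE generates. The real Template class comes from the airtest package; this dataclass only mirrors the three arguments discussed (the tpl…png filename, record_pos, resolution), and the values are made up for illustration:

```python
from dataclasses import dataclass
from typing import Tuple

# Illustrative stand-in for the Template object that AirtestIDE generates
# for each screenshot; the real class lives in the airtest package.
@dataclass
class Template:
    filename: str                    # auto-named: "tpl" + some ID + ".png"
    record_pos: Tuple[float, float]  # where on screen it was recorded
    resolution: Tuple[int, int]      # device resolution at record time

# Example values, invented for illustration.
start_button = Template("tpl1607424190850.png",
                        record_pos=(0.012, -0.383),
                        resolution=(1080, 1920))
print(start_button.resolution)  # (1080, 1920)
```

The resolution field is what lets the image matcher scale the screenshot when the test later runs on a phone with a different screen.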
And I do the same in the Python file: I take all the screenshot definitions and paste them here — I know this looks very manual. This is to enhance the page object model, and all of these steps can be automated; I'm just showing them so you understand the process of going from a single recorded file to a page object model with all the screenshots and everything inside. Now I create some methods in my page: I create a class, as you've seen in the code before, I initialize it, and I start creating objects like this one. I literally take the Template from the recorded file and copy the method here, changing it to use the object I instantiate. We do the same for the other objects — for example, we can make one for the piano — and we apply that to all of them. Again, it looks very manual, but it can be automated. Then we need to remove those objects from our AirTest file, because they have now been defined in the page. And we add the piece of code I showed before: the lines that reference the folder, and the code that calls the class. Okay, this code on the slide is a little bit wrong, so we'll see the corrected version on the next slide, where we'll see how everything looks in the end. So this is how it looks in the end — let me start the video. I have my elements defined — I reduced it to six elements — and I have my methods where I wait for the element and then click it, so my tests are less flaky, because I wait for the element first. In this app you can see the name of the file, and the name of the class is GamePage — that's what I was saying before: name of the file, name of the class. Then I just instantiate the class and start calling the elements through it, each of the elements.
The slides are going to be available at the end of the talk — I'll upload them then, or if not at the end, a little bit later. So you can see here how it's working. Now it's going to fail, obviously, because I haven't continued the automation here — it's going to click through the game and then fail because there's nothing beyond this point. So we've seen how we can automate this game, which — imagine automating it otherwise — wouldn't really be possible; you'd have to do everything manually. Here we can automate it, and we can automate it in a clean way that is maintainable. And all the steps that, I repeat, look very manual don't need to be — we can automate all of that. The slides, again, will be available at the end if the code looks a little small, so you can open them on your computer and see it bigger. So, how can we deal with different resolutions? We're about halfway through the talk now. As I said before, we have different templates, and each template has a resolution in place. So you can define a list of images — the elements that you defined — and put them in a list, and then you just iterate through the list: go through it, check whether the element exists, and if it does, use it and then break, because you don't need to check the other elements. Now, I didn't have an example with multiple resolutions, but I thought, why don't I do the same thing, just to showcase the code, with all the elements I have. And here it is. I have my phone connected again, I have the elements I defined, and I define a list with all of them: first the piano, the game app, the begin game, the finger, and the screen.
Then the logic is: if the screenshot exists, I press it; otherwise I continue. And I iterate in a while loop — so I have one loop to check each element and an outer loop to keep going through all the elements. Let's run this and see how it works. You first see the app, and you see it twice because it found the app and clicked it. Then we go through another iteration — this loop is a little slow, and the piano is the first element, so it looks through everything. It misses the piano at first, but because it's in a while loop it comes back to the piano, finds it, and clicks it. Now it's loading, and the loop continues with all the elements. And, happily for us, because we broke out of the loop, we get the app, then the begin element — found it, clicked it. Now it goes again: the piano, the begin app, now the start, the begin element. Now it sees the finger — see the finger here — so it clicks the finger, and there we go. It starts again, and the next thing would be to click somewhere on the screen, but because it's in a loop it's slightly too slow, and the line already broke. But you see, the interesting bit of this is that I could use it very easily to create a crawler that goes through all the elements on my screen and tries to figure out whether there's any element it can press — I don't even have to tell it which elements are available; I can just give it all my elements and have it go and click whatever it can see. So this is one of the benefits of using multiple templates, but the main benefit is for resolutions: when we have different resolutions — maybe not only an Android device but also an iOS one — we can use it this way. We're about halfway through, so let me just run it again.
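The "list of templates, try each, break on the first hit" logic just described can be sketched like this. Here exists_fn and touch_fn stand in for AirTest's exists() and touch(); they are injected so the control flow runs and can be checked without a device:

```python
# Sketch of the multi-resolution loop from the talk: try each template in
# turn and click the first one that is found on screen. `exists_fn` and
# `touch_fn` stand in for airtest's exists() / touch().

def click_first_match(templates, exists_fn, touch_fn):
    for tpl in templates:
        pos = exists_fn(tpl)     # position if found on screen, else falsy
        if pos:
            touch_fn(pos)
            return tpl           # break: no need to check the rest
    return None                  # nothing matched on this pass


# Example pass with fake templates: only "begin" is "on screen".
on_screen = {"begin": (540, 960)}
taps = []
found = click_first_match(
    ["piano", "app", "begin", "finger"],
    exists_fn=lambda t: on_screen.get(t),
    touch_fn=taps.append,
)
print(found, taps)  # begin [(540, 960)]
```

Wrapping this function in an outer while loop gives exactly the crawler behavior from the demo: keep scanning and tapping whatever currently matches.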
If you want to take a screenshot or something to post on social media, now is a good time — let me pose; hopefully we were quick enough. Okay, let's keep going: dealing with multiple object finders. This is not really a problem for Android and iOS automation with AirTest, because there we always find elements by the ID, as we'll see in a bit, and not for the screenshots either, because there the finder is always the same. But with tools like Selenium, where you have different sorts of finders — find by ID, find by name, etc. — you may want to do something like this. This code on its own doesn't work as-is; the idea is that it goes through all my finders, and if I can find the element by that finder, I click on it. It's very similar to what we did with the screenshots — if I can find the element by that screenshot, I click on it — but with finders instead. The way to deal with this sort of thing is a switch, and this is how you approximate a switch in Python — Python doesn't have a switch per se, like Java or C# do — but you can do it like this: you have an element type that is "id" or "name", and here we have find_element_by_id and find_element_by_name, and you pass in the value. So this is another way of dealing with multiple finders: even if you have multiple finders, that's no excuse not to use your page object model and automate as much as possible. Okay, now we're going to see how to deal with multiple platforms, iOS and Android. I've put iOS code and Android code for AirTest side by side, and it's very interesting: the first line is exactly the same; the second line is pretty much the same, except it gets the driver for Android here and for iOS there; the third line is exactly the same. Then here we initialize Poco — for Android with this bit here, and for iOS with this bit here.
Then there's this auto_setup(__file__) line — I don't remember exactly what it's for, but it's always there, so I always put it. Then we connect the device: here iOS connect_device, here Android — pretty much the same; you have the URL with a port, and you can add a port here. And then you just use Poco on whatever you want to click. In this case it's Dancing Line 2, the game I'm showing today. You can click, you can clear the app, you can start the app, and for more information there are two links — they're not very visible here, but at the end of the slides I'm giving you more links, so you can get them from there. So basically what I'm saying is the code looks pretty much the same. So, how can we use our enhanced page object model — how can we design this so we can use the same code without having to record it twice? First, let's see an example of how to handle Android Poco, because what we saw before was for screenshots — the same sort of syntax, but for screenshots. This one is just for Android: here we select Android — we can select other things like iOS or Unity — and we can click here for it to create these lines automatically. Then we have the DOM, and we can click here to see the objects of the DOM. If you go to the DOM directly, it highlights the objects on the other side; we click here, and we can also record. So we can just pick things, it highlights where everything is, and the elements are there, which is very good. Let me take a sip of water. I'm showing this so we can click record and do the same, which means the code is going to be auto-generated for us. You can see here that it's clicking with the IDs that appear in the DOM here, and it's pretty cool. It works fairly well — in Google's apps we even have IDs for the numbers.
So here in the calculator we have the C, the five, the six — everything has its own ID, and it works like a charm, really well. Unfortunately, other applications don't have such a clear DOM — games, for example. We'll talk a little more in the XR section about why games have to be handled slightly differently. But you can always mix and match, and hopefully it will get clearer as we progress. Okay, back to what I said before: we have Android, we have iOS, the code is very similar — how can we share it? We import everything here: again, this is for getting the folders and the page in the other folder. I'm importing the Android Poco here, and I import the game page here. I get my Poco, my setup, I connect my device, and then here I just call the page and pass it that Poco. If this were iOS instead of Android, I'd also just be passing a Poco — that's all I have to do. And my page is going to look the same whether it's iOS or Android: I just have this touch_game_start method that takes the Poco, and in this case it starts the game by calling the ID rather than by clicking the screenshot. And this is how it looks: this part is the same for Android and iOS, this part is different between Android and iOS, and this part is exactly the same. So I can reuse a lot of code thanks to that — I don't have to go and create all the screenshots again, or identify all the objects again. As long as the IDs are the same, this stays the same. If my IDs change between iOS and Android, then there's nothing I can do — they're different in one and the other, so the pages have to change. But as long as the IDs are the same and my application name is the same.
Then I don't have to change anything in my pages, only in the connections, and that's like two lines. Okay, let's see that actually working. I have an example here with Android — exactly the same example as I showed before. Here I have a touch_game_start_two, because the original touch_game_start is the one for the screenshot version that I created, exactly what was on the slides. So instead of doing it via the screenshot, I can keep the rest the same and start it with Android, passing in the Poco engine I created — the Android UIAutomation Poco. And it is starting. Then we can go home via a key event. So even with multiple platforms, we can still use our enhanced page object model to deal with it. What about multiple devices? As I mentioned at the beginning of the talk, there are things like chat, or some games, where two devices collaborate — maybe two players that have to do things at different times. I'm going to show you another game for that, even though it's not fully automated; I'll show a little bit of that game as well. So how can you test that? Here you can see you can connect an iOS device, you can connect an Android device, and you can set the current device and switch it. So here you'd be clicking on one, and here on the other. For this one I haven't showcased the page object model, and I actually haven't tested with iOS and Android together — I tested with two Android devices; I'll show you the code in a bit. But the interesting thing is that you can switch from one to the other, and you could also potentially do it remotely — I haven't tested it, but if you have your device on the wifi, you should also be able to connect to it.
This is very good for a local test checking that two phones collaborate, but if you want to automate it at scale, you probably want something like a cloud solution, where you call your two devices, you have some sort of controller, and you control one and then the other. Let's see how it works with an example, since the code on its own can be confusing. Here's our code: I have the two devices connected to my computer, so I'm just using localhost for that, and I'm using two Android devices, because that's what I could get for this presentation. I hope you enjoy the recording with my phone — it's not the best screen in the world, it's a little bit cracked, so I put a screen protector on it. So it has started, and it automates the connection, and it starts again. There's some funny behavior where it taps in the middle, and I think it's because I have the phone in portrait instead of landscape, while the game runs in landscape. It just clicks start, and then it starts the other game, and again — I don't have the full screenshot of the start button, just a bit of it, and then it starts as well. The reason I used this partial slice of the start button rather than the full button is that my screenshot was being cut in the middle, and I think there's some bug — I don't know if it's in the ADB drivers, maybe my Android Studio, or maybe the phones themselves — where, when they're connected to the computer in landscape instead of portrait, it breaks and only shows half of my screen. That's why I didn't do the full automation for this. But you can see them both here at the same level, and then they go inside the level and can play together, and we can get something out of that.
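The connect-and-switch flow from the demo can be modeled in a few lines. connect_device and set_current here belong to a local registry imitating AirTest's functions of the same names, so the control flow is visible without two phones; the device URIs are placeholders:

```python
# Local model of airtest's connect_device()/set_current() flow, so the
# two-device control flow can be seen without real phones. The URIs are
# placeholders; the real functions live in the airtest package.

class DeviceRegistry:
    def __init__(self):
        self.devices = []
        self.current = None

    def connect_device(self, uri):
        self.devices.append(uri)
        self.current = len(self.devices) - 1  # newest device becomes current

    def set_current(self, index):
        self.current = index                  # switch the active device

    def touch(self, pos):
        # A touch always goes to the *current* device only.
        return (self.devices[self.current], pos)


reg = DeviceRegistry()
reg.connect_device("Android://127.0.0.1:5037/phone_one")
reg.connect_device("Android://127.0.0.1:5037/phone_two")

reg.set_current(0)
print(reg.touch((100, 200)))  # tap goes to phone_one
reg.set_current(1)
print(reg.touch((100, 200)))  # same tap, now on phone_two
```

The two-player test is then just a script that alternates set_current calls between each player's actions.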
I know this is not the best example because it's broken, but it's fixable — it's probably just some drivers I installed wrongly, or maybe Android Studio, as I said — and with this you are able to test two devices. And again, this is a little bit ugly, because I put everything in the same file, reducing the code, when I could more than easily have it in a page object in a different file and just reuse it: call it with one device, then change the current device, and call it again with the Poco I need. Hopefully that's understandable. I think this bit is pretty cool, in fairness, but it has to be tested a little more. Okay, so what about XR objects? This is the last of our big sections; the rest are smaller, so this is the final big bit. However, we are 35 minutes into a 45-minute presentation, so there's no way I can fully explain XR here, but I can give you some hints. And again, if you go to my blog — I'm not doing advertisement for my blog, I don't get money for visits or anything — you can see past presentations there, and I've done a presentation about VR and XR with a recorded video available; you can check it and maybe go to a deeper level there. I also have some posts about this with more details if you need them. And I'm going to explain here a little how it works. As I mentioned before, games are a little different from other applications: they don't have the DOM available, as we saw before with the calculator. Why is that? Because they're usually created with a different engine, like Cocos or Unity. In this case the XR support is built for Unity, so I'm going to give you the example for Unity. So what does that mean?
It means that to create automation in Unity, I have to add a folder inside Unity, not outside — it is not possible with this tool, at least, to create automation directly against the DOM without touching the code, unless you use screenshots. So it looks a bit more like a unit test than an end-to-end test, but it's still very good and helpful — especially when you're writing automation, I can tell you it's super helpful to have it there. So how does it work? Basically you add the Unity3D folder, available to download from the AirTest website, into your Assets folder — the Assets folder is usually at the bottom of Unity. Then in Unity you're going to need a camera container, a camera follower, and your camera inside. This has changed a little since I took the screenshot, so the newer versions don't work exactly the same, but this gives you an idea of how it works. You need the camera container and the camera follower for reasons that have to do with how objects move in 3D and how they can lose a degree of freedom — I don't want to go into detail on that; it's also in my blog if you want to read more about it. And then you need a GameObject — an empty object — to which you just add the Poco code. That's the basic setup, and it's going to be pretty much the same everywhere. Then you import the Unity3D Unity Poco drivers — instead of the Android or iOS drivers we used before, we use the Unity drivers. And this UnityEditorWindow bit is so you can run the automation in Unity's editor window; otherwise you can use your phone for it. So this is how it looks to get the address and then the UnityPoco — again, this part is the same everywhere.
And this bit here is interesting: I wrote it so I can wait until a movement is done. Because I want the movement to happen the way a person would do it — if you have a headset, or a phone, you don't snap directly onto an object; you slowly move your head, you move your hand, it doesn't jump straight there. So the movement has some speed, which means it may not have finished by the time you want to click. That's why I have this piece of code: I have a call from the AirTest code that tells me whether a movement has finished or not, and I just iterate, waiting a little bit, until it's done. So this is just setup again, and here is the important bit. At the beginning I can take the object, check an attribute — in this case the texture — and save it. For example, if this was a cube, the texture could be red or blue; so I have a cube with a particular texture, and I verify the texture is that one, just to make sure my test case is not going to fail later for the wrong reason. Then I rotate, and there are two ways of rotating: the first line here just rotates up with a speed of 0.5; the other way is to rotate to look at the object that is referenced here, with a speed of five. So I have the camera container and follower that I was telling you about before — unfortunately we do need those two objects, but you can name them any way you want. And then I just do a click. This one is slightly different: it clicks on a particular area, in this case the middle of the screen; for the Unity Editor window it looks a little bit different, and we'll see it in the video in a bit. And then we can do some assertions. For example, we can assert that the texture we get now from that object is different from the texture we had before. In this example, when I click on the object, it changes color.
And then I can also assert that the texture is this new texture here — which I think is green, or red. So I hope this is clear; I may be going a little fast, but I hope it's clear — this is the way you do it. This is the video I have for this. I had to cut this video down, because the Unity I have installed on my computer is from 2017: it doesn't support the Android phone that I was lent for this presentation, so I couldn't make it work with it. And the new Unity works slightly differently from this one — I would have to call different libraries and set up a lot of things, and I didn't have enough time before this presentation to do all of that. But I promise I will update it and put it on the blog, and you can bother me about that any time you want, because I'm making the promise to update all of it. And this is how the Python code looks — it's the AirTest code, but instead of using the IDE I have it just in a document, and it corresponds to the same code. And this is a game that I created for a project: it's a maze, and you have waypoints that you can click on to advance through the maze. So this is what we're going to do. This is the setup code from before, and the object I want to look at is called waypoint seven. So that's the object I want to look at — exactly the same code, the same speed and everything. Then I'm going to wait — I'm going to do this loop here to wait until the movement has finished. And then I'm going to go and click on my camera container. It's slightly different for Unity: before it was just a call to click, but here I have to give it the camera and the element I want — so I tell the camera to click on a waypoint. For Android it looks like what I mentioned. And then I rotate the object.
So basically I rotate 10 degrees up, because I've been looking down and there's going to be a door — I'm going to be showing it. So I rotate, I look at my waypoint, I click on my waypoint, I look up, and I should be looking at the door. And then I can verify things like the door making noises, or opening, and many other things. Again, there's more on this in a separate presentation, so let's just go to the conclusions, because I see Rohit here and I'm pretty sure we're running out of time. Let's go through them quickly. So with all of these concepts, what we did is increase maintainability, reliability and scalability — although it needs a little more automation than what I've shown here to do all of that, even with this little automation we already improved all of them. And we can still use our screenshots, so it's easier to reproduce on other platforms. And there's some test debris — I haven't covered test debris in this presentation, though I have in an earlier one — but it can be handled: you just remove from the folder every object that is not being called in your code, and that's it — your screenshots are gone and everything is clean. It works very well for testing applications such as games and XR — and any application, really — and any person can also use it for a quick turnaround rather than writing code. And now you can insert your own application and conclusions: whatever you feel it offers, and whatever you have learned from these slides, you can put it here as well. I have the links again, and I'm sharing these slides, so don't bother taking a screenshot of this — it's going to be shared afterwards. And that's everything from me. Thank you so much; I don't know if we have much time for questions. Thank you so much, Naomi. I'm afraid we don't have time for questions right now, but—
I ran out of time — there are so many things I could tell you about this, and it's something I'm passionate about, so without realizing it the time just runs out. Thank you, Naomi, for sharing all your experiences with us today. Ping me on Twitter if you can — since not everyone could join the tables, send me your questions on Twitter as well and I'll reply there. You can find me; I'm approachable. Thank you so much, that's very kind of you. Thank you so much, and thank you Naomi again. Thank you all for being here.