Well, hello everyone, my name is Justin Ison. I'm a senior software engineer at Applitools. For those of you who don't know what Applitools is, our booth is right outside here; we specialize in visual testing.

A little background about myself: I've been in IT for about 20 years now. The first ten years I was a manual tester, and the last ten have been development, automation, and so forth. Another bit of history: I too was at the 2012 Selenium conference where I saw Dan's awesome prototype, and I was amazed, because at the time I was using UIAutomation (those of you who remember iOS's automation framework from years ago know it was not the best), and I was also doing Robotium in Java for Android. Having two separate frameworks was quite a frustration, plus Selenium for web. So now we're all here because of Dan and Jonathan Lipps and Jason Huggins, and it's a great achievement that we now have Appium.

Today I'm going to talk about using Appium to build a native app crawler: how I did it, and the pros of using one. The topics I'll cover are today's quick development landscape, especially with agile; the mobile test matrix and all the near-impossible combinations we should validate before we release; how we can leverage machines to do some of the work we've been picking up and doing manually, just because of everything that needs to be done; and how a crawler can potentially uncover more bugs and help you find localization issues, performance problems, et cetera. The agile world moves fast, and it's up to us to keep up.
I put this video in here because this is how I felt. I was this guy in the middle, running away from the bugs as fast as possible, and those were the developers at the top right, laughing at me as they introduced new bugs to torment me.

Especially with CI and CD, time to market is becoming increasingly short. Companies want to push out new features to keep ahead of the competition; they want to push out bug fixes; they want to keep their app fresh; they want to take advantage of new operating system features, or even features introduced in the hardware. Because of this, we have less and less time to test these applications for everything we need to, and it's putting more pressure on us. A lot of companies are relying on end users to report bugs. I see it all the time, and I feel like we're almost desensitized to it; it's almost expected that, hopefully, a user will find whatever bugs there are, report them to us, and we'll fix them. I don't really agree with that approach, but it's the way things currently are.

So the goal, my goal at least, was to put the machines to work, to automate automation, and create something that could collect as much metadata about my application as possible on every single build and report back its findings. Essentially, it's as if I handed you the application and said: here, go test my application and tell me what you found, what issues you hit. I was essentially trying to clone myself, because of the limited resources I had at the time, to get back information about my application that I could check.

So what is an application crawler? Those of you familiar with the web know there are web crawlers out there.
They're also known as spiders or robots. Basically, it's a program that mimics what a human would do, but at the software level, interacting with UI components. Before I begin, let me give you a little history about why I decided to do this.

I worked for a small startup at the time, and we were very big on dogfooding our application. For those of you who don't know that term, it's basically just testing your own software: you create the software, you test it internally. But we had a problem. We had a small QA team of two people who not only had to do QA but also test all the different applications we had, about six of them: iOS, Android, web, and desktop clients on both Mac and Windows. That was a lot for limited resources to test. We also tested internally, but some applications got more testing than others, because some people preferred one platform over the others.

And our applications were constantly changing. We had some very, very talented designers and engineers, and just like any artist, I suppose, they were never happy with what they did last, so they were constantly changing things: changing the UI, tweaking things here and there. I found myself constantly having to fix and revise tests due to the changes our developers and designers were making, and I felt a lot like this guy. Those of you watching can probably empathize, or sympathize. You can spend an hour, two days, a week, maybe months implementing one single test, because mobile automation isn't always so direct; you're always finding a workaround, or doing research to solve a problem, which can take a long time. And after spending all that time and patting yourself on the back, happy with yourself, that one implementation breaks on the next build.
So this is how I felt. About three years ago, around the time I was feeling like that guy, I had this crazy idea: what if I could just automate the automation?

So let's go into the mobile test matrix and all the combinations that, depending on what your application supports, we need to test. Orientations: it's not just landscape and portrait anymore. It's portrait this way, portrait rotated 180 degrees, landscape this way, landscape that way, depending on which orientations your application supports. A lot of people tend to test only portrait, because it makes sense: the phone fits in your hand comfortably like this. But if your application supports all of these orientations, they should all be tested. Things can happen like this when you rotate the screen. This is an actual test I was running where putting the application into landscape left the screen blank. If I had never tested this, we would have gone to production like this, and potentially a customer would have been the one telling us there's an issue.

There are many resolutions now, tons of them, and there will probably be ten more tomorrow. If you're not testing the resolutions your application supports, you'll never know these issues exist. As humans, whether developers, designers, or product people, we make lots of subconscious choices we don't even notice. A designer could design something without taking into account that the design might end up on a very large mobile device, or vice versa, on a very small one, and not realize it. These things need to be found out, or bad things can happen, like this case where the application was installed on a device whose resolution was far too big for it; it should never have been installable at all.

Operating system versions: there are what, 15 or so now for Android alone? So if your application supports
all these different operating systems, how can you actually know it works unless you verify it? The same goes for language. If you have a multi-language app, a lot of people tend to test only in the language they're familiar with. But how do you know it's going to work in other languages, German for example? German strings are predominantly longer than English strings, so again the designer or developer, whoever it was, made a subconscious decision and didn't account for different string lengths, which can completely break the layout of your views. The same goes for languages such as Arabic and Hebrew, which run right to left, whereas Western languages run left to right; again, designers and developers may have made subconscious decisions that don't account for those languages.

So as you can see, just releasing a new version of an application is a test overload: there's so much to do and so little time to do it, especially if you're factoring in agile and trying to push out new releases as fast as possible. You could possibly do this with UI automation, creating tests in JUnit, RSpec, whatever your choice is, but you would literally need an army of engineers to pull it off. Or, if you have access, you could hire an army of zombie testers; if you act now, you get a discount code. But automating every possible combination would be next to impossible to maintain, and even if you did pull it off, any future update to the application potentially means thousands of lines of code.
And God forbid there's a major rewrite: then pretty much all your automation is out the door. Lastly, on the topic of UI automation: it only tests what you program it to test. Any assertions you have in JUnit, RSpec, or whatever, will only explicitly test that. The beauty of crawlers, or monkey testers, is that they test the unexpected. They do things you never programmed, and hopefully find issues you never anticipated.

So now I'm getting into the part about building it. My thought was, it really couldn't be that hard: I'll just get all the UI elements present on my view, loop through them, click each one, and look for some type of action to occur. That turned out to be the easy part, but it's also really the core of any crawler. Let's look at some examples now.

Those of you may have seen this before: this is the hierarchy viewer from the Android SDK (there are better ones now, like Appium Desktop, which has a nice inspector). This is the full hierarchy view, so you can see all the layers of the layouts for just this one view alone, and all the attributes of each object down beneath. That's really valuable information to have when you're building a crawler: is it clickable, is it enabled, is it visible, and especially the bounds; does it have an accessibility label, does it have an ID? There's a lot of information in this one object alone. So let's go ahead.
Let's look at some elements now; let's actually program it. Here, calling driver.page_source gives me a dump of the XML, just like the hierarchy viewer displays, in one big unwieldy blob. There's also another method in the Ruby appium_lib called page, which essentially parses that information and gives you a nice printout. I've used it for years to write my tests: I just go into the console, type page, get all the IDs and locators available in my application, and write my tests in real time. Unfortunately, the page method doesn't return enough information to do a proper crawl.

Now here's the compressed hierarchy view. As you can see, it's a lot less information, which is actually good, because it's only the information relevant to the views I need: it gives me just the objects I can visibly see on the UI, and again with all the attributes I need. So how do we get that programmatically? The awesome Appium devs implemented a capability called ignoreUnimportantViews, which gives you the compressed hierarchy. This is great, because now the crawler can use the compressed hierarchy to get just the objects it needs to interact with, and not all the fluff that was part of the full one.

Next, I create some code to extend appium_lib (when I say appium_lib, I mean the Ruby bindings): I take what it gives me and extend it to extract all the metadata, all the attributes of the objects that you saw in the hierarchy viewer. Here I've highlighted content-desc: if that's returned, it is your accessibility label. So I take the content-desc value and rename the key to accessibility_label.
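The extension I'm describing boils down to parsing the page-source XML into plain attribute hashes. Here's a minimal sketch of the idea in Ruby; `parse_elements` and the sample dump are illustrative stand-ins, not the crawler's actual code, and real page-source attributes vary by Appium and Android version:

```ruby
require "rexml/document"

# Sketch: turn a driver.page_source XML dump into plain hashes the crawler
# can reason about, with an exclude list and the content-desc rename.
def parse_elements(page_source_xml, exclude_ids: [])
  elements = []
  doc = REXML::Document.new(page_source_xml)
  doc.elements.each("//*") do |node|
    attrs = {}
    node.attributes.each { |name, value| attrs[name] = value }
    next if attrs.empty? || exclude_ids.include?(attrs["resource-id"])
    # Rename content-desc so later reports clearly show the accessibility label.
    attrs["accessibility_label"] = attrs.delete("content-desc")
    elements << attrs
  end
  elements
end

# A tiny fake dump standing in for driver.page_source:
sample = <<~XML
  <hierarchy>
    <android.widget.Button resource-id="com.example:id/login"
        clickable="true" enabled="true" content-desc="Log in" text="Log in"/>
    <android.widget.TextView resource-id="com.example:id/banner"
        clickable="false" enabled="true" text="Welcome"/>
  </hierarchy>
XML

clickable = parse_elements(sample).select { |e| e["clickable"] == "true" }
```

From an array like `clickable`, the crawler has everything it needs: bounds, IDs, labels, and which objects are actually tappable.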
That way, after I run my test, or while it's running, I know what is an accessibility label and what is not. So let's go ahead and use this new class. Here I've already imported it. There's an option to pass an array of locators I don't want, which is perfect: if I don't want to click a certain sort of locator, I have a way to exclude it from whatever it finds while parsing. It returns an array of all my objects, the same objects from the compressed hierarchy view, with all the attributes: whether it's clickable, whether it's focused, whether it's enabled, its text, its class, and so forth.

This next example takes that array, finds the clickable objects inside it, and then samples them, which picks a random clickable object. Then I interact with it and click it. So now you see I have the object there, and it clicks. That's the start. Next I take that block of code you just saw me paste in and put it into a loop, which is the basic premise of a crawl. This is just a 10.times do loop, an easy Ruby block that iterates ten times through a block of code. On each page it finds an object and clicks it. And that's it; that is basically crawling in its simplest form.

Now, the hard part. When you write UI automation, you usually know what conditions are going to come up, so you write them in as if statements. But you don't know this when you're crawling. There are so many edge cases and unknown conditionals that can occur, especially if you're trying to write something that can crawl any arbitrary application. That's what's difficult. It's a lot easier to
It's a lot easier to Create a crawler knowing the application you're in crawl because you can write in all kinds of methods to deal with certain things about your application But it's also more difficult to do it for the crawl any application You know, how do I rescue the app if it gets stuck? So as you saw the crawler is is going step by step They could potentially get to a page that there are no elements and there's nothing for it to interact with So you have to put in some logic to rescue it, you know, basically put it back to a state where it could start crawling again You also have to know when a crash occurs because crawling you're running multiple threads or multiple processes If if your application crashes then potentially you're starting to crawl the you know dashboard of the of the Android you on hybrid web views so a lot of applications these days have web views built inside the native application So if you go into one of these web views and start clicking different objects in the web view Before you know it you could be crawling the entire internet So you have to you know put in some type of logic to handle one of these cases occur And then exiting the app by mistake. So a lot of apps you just have authentication You might click the log out button or you might click the back button too many times or the crawler might click Too many times puts the application out onto the home screen of the Android app So you have to have logic to Basically handle that and as you saw when I showed you the there was an exclude array You can put in the locators to in this array to not even display it when you get the heart or get the hierarchy of the page source And such a plan I just talked about So let's crawl. This is just an example of one of the many applications. I've crawled. It's a WordPress testing both landscape and portrait Hence the name of the crawls. It is a bit slow, but there's a lot happening. There's a lot. 
It's collecting the metadata, capturing screenshots, performance data, and so forth, and looking for changes on the screen before it moves on. So now we've crawled, and the great thing is that I now have all my screenshots across the different languages, different operating systems, landscape and portrait, and I can review this information and find out if there are any bugs. Like I said at the beginning, I was trying to clone myself to go out and test as much as possible, and I can now see what my application looks like in all these different combinations. Here, for example, the keyboard opened up so far that there was only a sliver of the view left. These are things I can take to development to talk about and hopefully get fixed.

Here's my application on a tablet. Because the application serves both phones and tablets, it's probably best to have a dedicated tablet version rather than one combined application: you can see a lot of the images are stretched, whereas with a more dedicated tablet design it would probably look a lot nicer.

This next example goes into Arabic. As I mentioned before, Arabic runs right to left while Western languages run left to right, and even though I don't speak Arabic, I know enough to identify that
There are probably some problems here: most likely English showing up where it should be Arabic, or some UI flows running left to right, like the element at the bottom left that should be on the right side. And because I capture the page source, I get the content-desc, so I know what has an accessibility label and what doesn't, and since I map each view to its screenshot, I can identify the objects that have them. This raises a good point, because now I can go to the product person or developer, whoever needs to know, and have a discussion with them: the UI on the right has a label on almost every object, identified by the red dots, but on the left only a few do. For vision-impaired users who need apps to be accessible, maybe we need more of those. This is good information to have so you can make your application better.

Next up is application performance. It's not enough to just test your application's function: you could have the best-designed, best-looking application in the world, it could function perfectly, but if the performance stinks, nobody's going to use it. I think a lot of people tend to forget about performance testing, when it's just as important as functional testing. And a good thing to do with performance testing is benchmarking: if you capture benchmarks of your UI flows, you can put them in a graph and keep track of them, so you know whether your performance is getting better or worse. That's just information worth having. So the crawler, as it crawls through each UI, collects performance data: here it's getting the app memory, CPU, and the user and system resources. It's also capturing the size of the APK file. And why is that important?
It's important because the app stores have hard limits on how large an application can be for upload, and also on the size that can be downloaded over the air through cellular. By keeping track of application size, you'll know if it's starting to creep up or shrink, and with that information you can approach the developer and talk about removing content the application doesn't need, to help reduce the size.

So let's go ahead and watch this run. Since I have all the screenshots, I can map a screenshot to every data point I collect, so I know exactly where the application was when a performance spike hit or dropped, and I can go into the UI and try to reproduce it. Again, having this information for benchmarking purposes matters, so you can keep track and make sure your application isn't starting to degrade.

Next, language detection. I had been scanning all those generated screenshots looking for spelling mistakes or abnormalities, and it was quite time consuming. It has helped me, it's part of cloning myself, but it still takes a lot of time, and it's also prone to human mistakes: I could easily miss a spelling mistake, and if it's a language I don't know, I won't know whether a string is wrong, or even in the wrong language in some cases, because some languages look very similar. So I thought there had to be a better way of automating this part. Well, it just so happens that since I have all the page source, I have all the objects, I have all the strings, and I know which UI each is on, so I could take that information and run it through Google Translate to then create a report.
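The report generation is conceptually simple: pull every visible string from the page source, ask a detector what language it is, and flag mismatches. This sketch stubs the detector with a toy word list; in the real thing that lambda would call Google Translate's detection service, and, as I'll mention in a second, you'd want to be careful with single-word strings:

```ruby
# Flag any string whose detected language isn't the expected locale.
def flag_wrong_language(strings, expected:, detector:)
  strings.reject { |s| detector.call(s) == expected }
end

# Hypothetical stand-in detector for illustration only; swap in a real
# detection API call here.
TINY_DICTIONARY = {
  "es" => %w[hola ajustes publicar],
  "en" => %w[settings publish share]
}.freeze

naive_detector = lambda do |text|
  word = text.downcase
  match = TINY_DICTIONARY.find { |_lang, words| words.include?(word) }
  match ? match.first : "unknown"
end

report = flag_wrong_language(["Hola", "Settings", "Publicar", "Share"],
                             expected: "es", detector: naive_detector)
# report now holds the strings that weren't Spanish
```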
So this application was in Spanish, and there are literally only two Spanish words in this flow; everything else is English. Because I could use Google Translate to detect these strings, it could come back and tell me about anything that wasn't Spanish and report it to me. Now I don't have to rely on myself, or find somebody who knows these languages: I can use a machine to do it. There are other alternatives. I used Google Translate because it gave the best results, but at the time I tried a bunch of modules; there are a ton of open source translation libraries, but they only work well if you pass them three or four words, like a sentence. They do not work well with a single word: they don't give back enough confidence for one word, and most application strings, like titles, are one word.

Next, log monitoring. I tell everybody: whether you're manually testing an application, doing exploratory testing, or even running UI automation, you should always capture the logs, tail the logs, look at the logs as you test, because so many issues go unnoticed while you're testing. There could be API issues, there could be actual exceptions that just don't render on the front end. The same applies when you crawl. Because I'm monitoring the logs, I can pick up exceptions or crashes and handle them appropriately. This application specifically has a button to force a crash, so now it's going to click this crash button, detect that an exception was thrown, and then shut everything down. This is great, because I don't just have all the screenshots.
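The crash detection itself is nothing fancy: in the real crawler a background thread tails `adb logcat` during the crawl, and the detection is just pattern-matching on each line, which is all this sketch shows. The marker list below is a reasonable starting set, not an exhaustive one:

```ruby
# Markers worth watching for in Android logcat output.
CRASH_MARKERS = [
  /FATAL EXCEPTION/,   # uncaught Java/Kotlin exception
  /ANR in /            # application not responding
].freeze

# Return the first log line that looks like a crash, or nil if none.
def first_crash(log_lines)
  log_lines.find { |line| CRASH_MARKERS.any? { |marker| line.match?(marker) } }
end
```

In the crawler, a non-nil result gates the shutdown: stop crawling, keep the screenshots and the step log, and hand everything over with the exception attached.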
I also have all the steps it took to get to that point, plus the exception itself, and with all this information, the metadata, I can send it to development to fix.

We also want to be able to replay a crawl occasionally. Because I ran into an exception there, maybe I want to replay the last steps to see if I can reproduce it. This also matters because, if it's a real issue, the developer might fix it, and then I can replay exactly what the crawler did to verify the fix actually works. So here it's going to replay the exact same steps. Before, you saw just the tail end; this will do every step, and it will be pretty quick. It goes through everything it did before, up until the crash, and in this case it will crash again, because I didn't fix the bug; in theory, if the developer had fixed it, it would not crash. So again, this is like exploratory testing: I went through, found a crash, reported a bug, and now I can reproduce those steps.

Next, automatic visual tests. Reviewing all the screenshots in every single language, every resolution, every orientation became very cumbersome, and again, it's prone to human mistakes. We humans aren't made for visual detection; we might think we are, but we're really not. Machines are much better at it than we are. So I thought: how could I automate this piece of the process and make it more efficient?
And since I work for the company, it only made sense. Now, in theory, I can create a baseline for every combination we've talked about: every language, every orientation, every resolution. I don't have to review these images anymore like I had been doing; a machine can do it for me. If I capture these baselines and run again, and issues are detected by Applitools, I only have to review the ones that broke, and not the thousands that are potentially generated.

Here's a basic example of a configuration I created for running the Applitools tests. In this particular case I'm identifying: when I'm on this activity, and this ID is displayed, and this text is shown, take a snapshot. I have six tests here, and whenever I run it, it identifies and captures the baselines based on the tests I defined. But now I want to prove it catches breakage, so I actually changed something in the application, ran it again, and the crawler automatically detected that something in the UI broke. I don't have to do anything: it just runs and detects any change that occurs.

However, something was still missing. Like I said at the start, I wanted to build a crawler that was self-sufficient, that could run and just do its thing, and I wanted to get away from writing tests, like I started out doing. I didn't want to be that sad, sad person anymore. As you saw, I was still explicitly writing tests: on this view, with this ID shown. If I had hundreds of those, I'd have to refactor them whenever anything changed in my application. So to get away from that,
I had to know more about the application and be more deterministic. There's an awesome tool called Apktool, and with it I can decompile my application and extract the layouts, essentially the views, of my application. Then, whenever I'm running and getting the page source of my application, I can match it against the layouts I extracted.

Here's a general diagram of an Android layout; it's similar to what we saw in the hierarchy viewer. Using Apktool, I extract all the different layouts, which come back as XML files, which I can then parse and match against my page source. So now I have the hierarchy view, which maps to the layouts I extracted, which gives me a deterministic way to know exactly whether I'm on the login view, the home page, or whatever view of my application, and that can actually become a test for me. When I run Applitools again, it detects which layout I'm on based on the page source being returned, which produces a test, and whenever I run that test in the future, it will be matched against the same page. At this point it automatically tests itself, without me having to write a line of code.

Now, this is probably the most fun part I had creating the crawler. I think there's not enough emphasis on monkey testing. I think monkey testing frameworks are great, I think they uncover a lot of issues, and I think we should use them more.
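Before the monkey demo, here's the layout-matching idea from a moment ago in sketch form: collect the ids declared in each decompiled `res/layout/*.xml` file, then score the ids currently visible in the page source against every layout, and let the best match name the view. The layout XML is inlined here so the sketch stays self-contained; in reality these come from `apktool d app.apk`:

```ruby
require "rexml/document"

# Collect the resource ids declared in one decompiled layout file.
def ids_in(layout_xml)
  ids = []
  REXML::Document.new(layout_xml).elements.each("//*") do |node|
    id = node.attributes["android:id"]
    ids << id.sub(/\A@\+?id\//, "") if id   # "@+id/login_button" -> "login_button"
  end
  ids
end

# Name the current view by whichever layout shares the most visible ids.
def best_matching_layout(layouts, visible_ids)
  best = layouts.max_by { |_name, xml| (ids_in(xml) & visible_ids).size }
  best && best.first
end

LAYOUTS = {
  "login" => <<~XML,
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android">
      <EditText android:id="@+id/username"/>
      <Button android:id="@+id/login_button"/>
    </LinearLayout>
  XML
  "home" => <<~XML
    <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android">
      <TextView android:id="@+id/feed"/>
    </FrameLayout>
  XML
}.freeze

current_view = best_matching_layout(LAYOUTS, %w[username login_button])
```

With `current_view` resolved deterministically, the crawler can tag every screenshot and snapshot with the layout name instead of a hand-written test.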
They're part of our tool belt, and this kind of brings me back to being a tester again. I built this because I like to break things, and that's what the chaos monkey does. Android has its own monkey built in, but I replicated the same thing without using it, or tried to at least. On the left is Snapchat, on the right is Twitter, and in the middle is some random console output. It's just doing random swipes up and down, random taps, clicks. Here it's about to tweet: it found this cute little kid and I think it came close to posting, but never quite did.

On the left you can see it randomly tapping, doing all kinds of different interactions. This is good because it stress tests my application; it will hopefully push my application to its limits. We don't know what our users are going to do. We have crazy users out there who might do exactly what this is doing to your application, and if we're using some type of reporting tool or getting analytics back, we might see these crashes with no idea why they're happening. So you can see here, it's just doing random taps and swipes up and down the surface.

How about scaling? So far you've seen me run everything locally. Depending on the resources of your machine, you can run multiple Android emulators, or real devices, whatever you have access to, or multiple machines, but really it comes down to resources. You could do this locally, though running all those combinations might take a while. Or you could run on a cloud service; in this case the crawler can run on Sauce Labs. But again, you're resource constrained, because it's still a remote process: you're still running it from your machine.
So there's still CPU and memory involved, and a limit on how many processes and threads your machine can actually execute. I think the sweet spot is actually containers. There was an awesome talk last year at the Appium conference about the docker-android repository, and I took that and put the crawler inside of it. In theory I can now spawn anywhere from one to a thousand containers covering all the different combinations, all the languages, portrait and landscape, capturing screenshots, all going into Applitools if I choose, with baselines, so I can clone myself a thousand times.

Now, some funny moments from building this. When you first create a Twitter account, you're not following anybody, so your feed would be blank; what Twitter does is geolocate you and fill your feed with people around you. Lucky for me, the police department was in my feed. So I set the crawler off to do its thing with Twitter, came back, and thought, oh cool, it looks like it tweeted, only to find out it had tweeted at the police department. I quickly stopped it and deleted the tweet. Also, I don't know if you noticed some of the text it prints out: I use this gem, a library called Faker, and it has a class that generates hipster text, so it's a bunch of random stuff. And then just recently, as I was preparing for this talk, I ran it on Twitter again, and lately there's this Twitter feud going on between Justin Bieber and Tom Cruise, wanting to fight in the octagon. I ran the bot on that, and it became obsessed with tweeting and liking all the Justin Bieber and Tom Cruise tweets. And you saw my examples with WordPress.
I made the mistake of using my personal account, so as the crawler went through, it got into these blogs, clicked on the comment sections, and put in all that hipster text. I had a bunch of people email me and ask me what was wrong with me. So, word of advice: don't use your personal accounts.

In conclusion, hopefully I've shown you there are benefits to creating these crawlers: they help you find more information about your application, and they help us leverage machines more. The whole purpose of building this wasn't to replace you and me as humans, or even UI automation; there are purposes and places for those. Really, it's just adding another tool to our tool belt to help find more issues in the fast-paced environment we're in. I built this because I thought it should exist, I believed it should exist, and hopefully I've inspired some of you to create your own. And I'd love help too: if you want to help me make this better, I'd appreciate it.

Here are some helpful resources based on the things I've talked about. The page source parser example is a small breakdown of parsing the page source and getting the objects. Apktool is what I used to decompile an application. Again, the docker-android repository is awesome for easily putting your application into containers. And then the Appium crawler itself: most of what you've just seen is out there now in version one, and there are some things I'll be introducing in version two when I get time to work on it. And again, I'm looking for volunteers; I'd definitely appreciate any help. Thank you. I have some shiny pens in case anybody wants them; these are Applitools pens with styluses.

Hello, this is Janet. How do you make sure the app has crawled all the screens? I mean, yes, you've captured all the screens of the application, but how do you make sure of that?
Oh, how do I make sure it goes through every screen and captures all the options?

Right, like if we have context menus, will it get all the context menus? Will it see the difference if we select an item and press the context menu again?

Yeah. What the crawler does, as you saw in the simple loop, is go through your app: it clicks an object, then detects whether anything changed from before to now, and captures a snapshot. I also collect the activity that it's on, because when the crawler first runs it captures all the activities of the app, so it knows every activity that exists; I also know every layout that exists. So there are several things the crawler does: once it detects a change, it looks for a new activity, and if it's the same activity it doesn't check it off. Right now there are two things that kill the crawler: one is a time limit, so I might let it run for five or ten minutes, and the second is if an exception occurs. Because I can run it in multiple processes or containers, I do plan to eventually have them all report to a central database and say, I've already been on this view, you go on to the next one. So you could in theory capture every type of view in your application and then shut everything down.

In some cases we need to log in, and some input may need valid text. How do you handle that?

Yeah, there is a configuration file. If you go to the repository I listed, at the very bottom, the Appium crawler link, there's a whole README that tells you what you need to do. In that small configuration file, if you need to log in, you can set the login steps, and the crawler will detect that you're on the login view, log in, and then continue the crawl.

Okay, thank you. Since you implemented this with Appium,
I was wondering how difficult it would be: you mentioned some Android-specific things, but in general, would this work on, let's say, tvOS, desktop, iOS, Mac apps, etc.? What are the barriers?

Yeah, that's exactly why I implemented it with Appium: it's cross-platform, so in theory I could take this and, I wouldn't say easily, but do it for iOS, and with WinAppDriver for desktop Windows applications. There are lots of possibilities. I have it on the roadmap to eventually do iOS. Android alone was a ton of work and a lot of effort, and I was sort of burned out from it, so I wasn't too eager to dig into iOS yet. But especially now that you can run multiple simulators, which didn't exist when I started this, it would make for a better and more powerful tool on iOS.

One last question.

I have a question regarding language localization, which you showed for Arabic. We work on Arabic and Hebrew, and in those languages the flow itself runs in the opposite direction. Is there any intelligence built in that tells the crawler whether to cancel a pop-up or go in a different direction? With localization you have to handle things intelligently, because it's not English and everything goes in the other direction. And how do you ensure, when you're using Google Translate, that the translation is correct?

Yeah, good question. Because the crawler is getting just the objects from the page source, it doesn't know and doesn't care what language it is.
It doesn't care whether the layout is left-to-right or right-to-left; it's just going to click, either yes or no.

For the same reason, though, suppose it keeps accepting the same thing. I'm not asking about the crawler's random clicks but about an intelligent flow: when you're actually testing and you have the elements, the element identifiers are in English, but the language on screen is Hebrew or Arabic. How do you ensure the flow is correct, so that you're not repeatedly canceling or accepting something?

I mean, there's no way, because it's non-deterministic at some point; it's just going to click randomly, so there's no way to ensure it's going to be perfect. That's what UI automation is for: explicitly writing a JUnit test or whatever. But to quickly answer your question about the translation: since I put the application in, say, Spanish, I tell Google Translate that I'm expecting Spanish strings. It returns a detection for everything, telling me whether each string is Spanish, French, whatever, and I extract only the strings that don't come back as Spanish. Then I know those don't match.

Great, thank you.
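The translation check described in that last answer can be sketched as follows: put the app in the target language, run every on-screen string through a language detector, and flag the strings that don't come back in the expected language. In this sketch the `STUB_DETECTIONS` table stands in for the real Google Translate API call, which is an external service; the function names are illustrative, not the crawler's actual API.

```ruby
# Stand-in for a language-detection API response: string => detected language.
STUB_DETECTIONS = {
  "Ajustes"  => "es",
  "Cancelar" => "es",
  "Share"    => "en",   # a string the developers forgot to localize
}.freeze

def detect_language(text)
  # A real implementation would call a translation/detection API here.
  STUB_DETECTIONS.fetch(text, "unknown")
end

def untranslated_strings(strings, expected: "es")
  # Anything not detected as the expected language is flagged as a
  # localization-bug candidate.
  strings.reject { |s| detect_language(s) == expected }
end

p untranslated_strings(["Ajustes", "Cancelar", "Share"])  # => ["Share"]
```

Note that strings the detector can't classify are also flagged, which matches the talk's approach of surfacing anything that doesn't come back in the expected language for a human to review.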
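The crawl loop described earlier in the Q&A, click a random element, diff the page source to detect a change, snapshot and record the activity, and stop on a time limit or an exception, might look roughly like this sketch. The driver method names and the `FakeDriver` are illustrative assumptions so the example runs anywhere; a real run would pass in an Appium driver instead.

```ruby
require "set"

# Minimal sketch of the crawl loop: random click, page-source diff,
# snapshot on change, visited-activity tracking, time-limit cutoff.
class Crawler
  def initialize(driver, time_limit: 300)
    @driver     = driver
    @time_limit = time_limit
    @visited    = Set.new
  end

  def crawl
    deadline = Time.now + @time_limit
    while Time.now < deadline
      before  = @driver.page_source
      element = @driver.clickable_elements.sample   # random click, as in the talk
      break if element.nil?
      element.click
      next if @driver.page_source == before         # nothing changed, keep going
      @driver.save_screenshot("#{@driver.current_activity}.png")
      @visited << @driver.current_activity          # remember where we've been
    end
    @visited
  rescue StandardError
    @visited                                        # an exception also ends the crawl
  end
end

# Tiny in-memory stand-in for a device, so the sketch runs without Appium.
class FakeDriver
  FakeElement = Struct.new(:driver, :target) do
    def click
      driver.go(target)
    end
  end

  attr_reader :current_activity

  def initialize
    @current_activity = "MainActivity"
    @screens = { "MainActivity"     => ["SettingsActivity"],
                 "SettingsActivity" => ["MainActivity"] }
  end

  def go(activity)
    @current_activity = activity
  end

  def page_source
    "<view activity='#{@current_activity}'/>"
  end

  def clickable_elements
    @screens[@current_activity].map { |t| FakeElement.new(self, t) }
  end

  def save_screenshot(_path); end                   # no-op in the sketch
end

p Crawler.new(FakeDriver.new, time_limit: 0.2).crawl.to_a.sort
# => ["MainActivity", "SettingsActivity"]
```

The real crawler adds more on top of this, as the talk describes: it pre-scans the decompiled app for all activities and layouts, checks newly reached activities off that list, and could report visited views to a central database when running across many containers.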