So thanks, Sean, for the introduction. My name is Martin, and I'm going to share three main learnings from our journey of moving our test automation to the cloud. Following up on the last meetup: how many of you have actually worked with Appium before? OK, about 50-50, so I'll try to cover some of the basics as well to make it useful for all of you.

We're working at Carousell; I think the name should be familiar to most of you. The challenge we have is that we target different platforms: we have an Android app, we have an iOS app, and we also have our application on the web. So there are three platforms we need to test. And we're in the quite lucky situation that the company understands that automation is very important, and we have quite a few resources to work on this goal, which is to automate as much as we can. So this is our current focus.

The first step was test automation on mobile devices, and we're using Appium for that. I think the solution has been around for a bit over a year, and we started very humbly with a couple of phones. On the left side, this is actually the device farm in our office, three stories upstairs: I think it's two iPhones and two Android phones, and we use these four devices to run all our tests. At some point we thought about how that scales. How will it work if we need to run more tests? Does it make sense for us to move to the cloud?

So I want to talk a little bit about the pros and cons of doing it yourself versus a cloud-based solution. The right side, I think, is some cloud farm in China. Unlike our four phones, you can scale that up as much as you want, and there are some quite good open-source tools, especially for Android, for maintaining such device farms.
For us, some of the issues we found with running our own device farm: it's hard to maintain. We have to maintain the phones ourselves, we have to update the phones ourselves, and when something doesn't work, we have to reboot the phone, all these sorts of things. And obviously, we're very limited in which phones we can use. These were the two main considerations when we looked into cloud-based testing, and as we thought about it a little more, we realized there are a couple of other factors to consider.

First: what device models are available? Maybe some background on how our testing approach works. We have a set of, right now, about 100 scenarios that we run. We call them sanity tests: they check the basic health, the basic sanity, of our application. With every build, or at least with every release, we run all these tests and see if they still pass. At first these test cases had to be run manually, and then gradually, over time, we automated more and more of them. These roughly 100 tests take between two and three hours altogether, and we have to target at least Android and iOS; ideally we target as many phones as we can.

That's one scenario. The other: what we want to reach is that with every change someone makes to the application, we have a set of tests that runs to see whether that pull request on GitHub breaks anything that is very important to our application. Does it, for example, break login, or some other basic functionality? Can I still list an item on Carousell, and things like this? So we take a subset of these 100 tests and run them with every code commit; that's roughly 50 runs a day. You can imagine you need quite a few phones, and you need them to be stable, so that you get the feedback on time.
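One way to model that per-commit subset selection is by tag. This is just an illustrative sketch, not our actual framework (our real suite is driven by Cucumber feature files, and the scenario and tag names here are made up):

```java
import java.util.*;
import java.util.stream.*;

// Sketch: model the sanity suite as tagged scenarios and pick the
// per-commit ("fast feedback" / pull request) subset by tag.
public class SubsetFilter {
    record Scenario(String name, Set<String> tags) {}

    static List<Scenario> withTag(List<Scenario> all, String tag) {
        return all.stream()
                  .filter(s -> s.tags().contains(tag))
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Scenario> suite = List.of(
            new Scenario("User can log in", Set.of("@sanity", "@fast-feedback")),
            new Scenario("User can list an item", Set.of("@sanity", "@fast-feedback")),
            new Scenario("Coin balance is displayed", Set.of("@sanity")));
        // Only the tagged subset runs on every pull request.
        System.out.println(withTag(suite, "@fast-feedback").size()); // 2
    }
}
```

The full tagged suite still runs per release; the commit pipeline only sees the filtered list.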
So one factor to consider is which devices the cloud offers. If you don't go for the cloud but do it yourself, you need to buy these phones yourself. Then you have maximum freedom, but roughly every half year you'll probably need to buy the new Android flagship or the new iPhone and invest close to $1,000. It's also worth considering how often these cloud providers get new phones.

The next part: how does the cloud solution integrate with what you already have? Does it fit seamlessly into your build pipeline, whether you use Jenkins, Travis CI, or Circle CI? Does it maybe even come with plugins that integrate very easily, or is a lot of refactoring necessary?

The third part is also very important, and I already mentioned it: maintenance of the devices. If you have your own device cloud, you're responsible; you have to do it. This is something that's often overlooked: when you look at what a cloud provider charges, doing it yourself looks cheaper, but you still need to invest the man-hours to do the maintenance.

Another one is availability. A lot of these cloud providers work like a public service: they have a number of phones, you say "I want to test against an iPhone X", and if an iPhone X is available you can run your test; otherwise you need to wait. Or they offer private devices: they buy an iPhone X which is exclusively for you, and you can use it all the time. These are the two main models. (Sorry; we have a couple of people joining remotely and apparently they couldn't see our screen.)

So, next one: extensibility. This is somewhat linked to the integration point. How does it fit into a build pipeline that's already there? Are there other plugins? Is there an open-source community around it?
Good community, good forum support, and, as it turned out for us, most important: does it have a good API? With a cloud solution that doesn't have an API, the moment it doesn't support something you need, you're kind of lost. And last but not least: how much does it cost?

So we did a lot of evaluation. We looked into all sorts of providers: BrowserStack, Bitbar, pCloudy, AWS Device Farm, and a couple of others. I want to talk about some of the learnings from the POC with AWS. So AWS, Amazon Web Services, also has a device farm, and the model we're using is the public cloud. They have, I think it's safe to say, 100 different models, probably even more, publicly available, and you can run your Appium tests against them.

The first learning, when we looked at all these solutions: they all have their limitations; none of them is perfect. But as one of my mentors at a previous company always used to say, it's only software, so if it doesn't do what you want, you extend it and make it do what you want. It's always possible to work around some of the issues or fix them in some way.

The first problem we ran into: how many of you are familiar with Cucumber, using feature files for tests? We did that as well, and we were quite proud of having our nice set of feature files, more than 100 scenarios. Then you look at AWS and they say: sorry, we cannot run Cucumber. The way it works, you have to compile your test suite, upload it as a zip file to AWS, and then they run it for you. There were a lot of people in the forums saying: we want to use your service, but we're stuck with Cucumber; we don't want to migrate, and we don't want to rewrite all our tests. I don't know what they did in the end; maybe they went somewhere else. We were in the same situation: it doesn't support what we need, but we did not want to rewrite all our tests. So what we did might sound a little bit hacky at first, but it turned out to work pretty well.
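The idea, in a minimal sketch (this is not our actual Maven plugin; the class and method names are illustrative): parse the scenario titles out of a feature file and emit a plain JUnit class, one test method per scenario, which AWS can run without a custom runner.

```java
import java.util.*;

// Sketch of the feature-file-to-JUnit generation idea: given scenario
// titles parsed from a .feature file, emit Java source for a plain JUnit
// class with one @Test method per scenario. Each generated method would
// delegate to the Cucumber machinery at runtime to run its one scenario.
public class JUnitSourceGenerator {

    static String generate(String featureName, List<String> scenarios) {
        StringBuilder src = new StringBuilder();
        src.append("public class ").append(featureName).append("Test {\n");
        int n = 0;
        for (String scenario : scenarios) {
            src.append("    @org.junit.Test\n")
               .append("    public void scenario").append(++n).append("() {\n")
               .append("        // run \"").append(scenario).append("\"\n")
               .append("    }\n");
        }
        return src.append("}\n").toString();
    }

    public static void main(String[] args) {
        System.out.print(generate("Coins",
            List.of("Coin balance matches after API top-up")));
    }
}
```

The real generator works from a template and also has to produce valid Java identifiers from free-text scenario names, which comes up later in the demo.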
I'm not sure everyone can see this; it's quite small. This is an example of how our feature files look. For example, we have something called Coins on Carousell, so one test is: I set the coin balance to 1500 via an API call, which has nothing to do with the user interface, then I log in as that user, check how many coins the user has, and see if it matches. So this is just a very basic check of displaying the coin balance, and we have two more coin-related tests.

The reason AWS cannot run Cucumber: usually, when you run Cucumber with JUnit or TestNG, a JUnit runner takes care of actually running Cucumber and running these scenarios, and AWS doesn't support custom runners. You need plain JUnit tests: something that just has test methods with the test annotation that they can run. So what we did: we wrote a parser that parses the feature files and then generates these JUnit tests.

I think the easiest way to understand what I'm talking about is just to show it. We use Appium and Cucumber with Java, and we use Maven as our build tool. Let me run this command and make that a little bit bigger too. This is our test project; I just run the generate-sources goal. That parses all our maybe twenty-something feature files and generates these JUnit tests. We have a template that says: OK, this test class should look like this. It's actually very simple, kind of ugly in a way, but it works really well. Every feature file corresponds to one Java class, and every scenario in that feature is one test method. And this is something that AWS can actually run. We didn't know; we hoped that it would. It was worth a try. I had kind of a hunch it would work, because they support JUnit, they support TestNG, they support all that, but they don't support Cucumber, and the only reason they don't is what you usually have. I can't edit this. Let me show you.
What you usually have when you run Cucumber tests: this is a small runner class we use to run things locally. You have an annotation that says "run with the Cucumber runner" and passes a couple of options. Since AWS doesn't support that, we got rid of the custom runner and created plain and simple JUnit tests. I didn't see any reason why it shouldn't work, so it was worth a try.

Basically, Cucumber has a Main class, which is what the runner also executes, where you can run features and pass in which package holds your step definitions and where your feature files are. So this is the path to the feature files, and this is the step package. We also needed a little trick, because the runner cannot access the feature files inside the JAR file, so you need to extract them to the file system and read them from there.

Cucumber also has this nifty little syntax for running a single scenario: you just specify a line number after the feature file path. I didn't know that, and I didn't believe it actually existed; I think they added this feature later on, and there wasn't any better way of doing it. So let's see: this is ads visibility. "Users should be able to view ads" is on line six, so this matches this one scenario. I mean, that's just the way Cucumber works. Then we also had to make sure the method names and class names are proper camel case, valid Java. But it actually worked out pretty well.

The next step after that: AWS is quite good when it comes to documentation. The features they have are quite well documented; they tell you exactly how to package your project to get it to run. They have the whole Maven configuration there, and you can basically just use that. So when we then run Maven package, it will do the same, so you don't have to run both of them; I'm just showing you step by step.
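That camel-case cleanup is a small but necessary piece: a scenario title is free text, while a Java method name has identifier rules. Here's a sketch of the minimum such a conversion needs (this is a guess at the rules, not our plugin's exact logic):

```java
// Sketch: turn a free-text Gherkin scenario title into a valid camelCase
// Java method name: drop non-identifier characters, camel-case at word
// boundaries, and prefix if the result would start with a digit.
public class MethodNames {

    static String toMethodName(String scenarioTitle) {
        StringBuilder name = new StringBuilder();
        boolean upperNext = false;
        for (char c : scenarioTitle.toCharArray()) {
            if (Character.isLetterOrDigit(c)) {
                if (name.length() == 0) {
                    name.append(Character.toLowerCase(c));
                } else {
                    name.append(upperNext ? Character.toUpperCase(c) : c);
                }
                upperNext = false;
            } else {
                upperNext = true; // word boundary: space, punctuation, ...
            }
        }
        if (name.length() == 0 || Character.isDigit(name.charAt(0))) {
            name.insert(0, "scenario"); // identifiers cannot start with a digit
        }
        return name.toString();
    }

    public static void main(String[] args) {
        System.out.println(toMethodName("Users should be able to view ads!"));
        // usersShouldBeAbleToViewAds
    }
}
```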
So this first generates the JUnit classes, then compiles them in the next phase, and then packages everything the way AWS needs it: one big zip file that includes all our classes and the JARs of all the dependencies. It gets pretty big, but it works. OK, I'll give a full demo of that briefly in one of the next steps, and just move on here.

The second learning is actually something I had already encountered at another company I worked for, with a different product, and a lot of the products we looked at don't have it. Especially when you want to use your automated tests in a build pipeline, you need to run a lot of tests in a short time, so you need to find a way to distribute them across multiple devices. If we have 100 tests, and we'll probably scale up to several hundred, we cannot just run them on one device, because it will take too long. We need a way to say: we have 300 tests, we have, let's say, 10 different phones, and we want a solution where we just send these tests and it distributes them. This is very important for integrating into CI, and it's quite surprising that it's not a standard feature. A lot of companies who build test automation tools cannot do this out of the box. If you're ever in a situation where you build something like that, please start with it, because it makes a big difference, and a lot of people have to work around it to achieve what they want. I see a lot of nodding; I think others have the same issues.

As far as I'm concerned, there are two ways to do this. One: you take your test suite of 100 tests and split it into smaller chunks. If we have four devices and 100 tests, I make four chunks of 25 and run one chunk on each phone. It's the easier approach, but it comes with some downsides as well.
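The chunk approach is, at its simplest, a plain partition of the suite. A minimal sketch with invented test names (not how our pipeline is literally coded):

```java
import java.util.*;

// Sketch: naive static chunking, e.g. 100 tests across 4 devices gives
// 4 chunks of 25. Each chunk would then be submitted to one phone as its
// own execution. Round-robin assignment ignores test durations entirely.
public class ChunkSplit {
    static <T> List<List<T>> split(List<T> tests, int devices) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < devices; i++) chunks.add(new ArrayList<>());
        for (int i = 0; i < tests.size(); i++) {
            chunks.get(i % devices).add(tests.get(i)); // round-robin
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<String> tests = new ArrayList<>();
        for (int i = 1; i <= 100; i++) tests.add("test" + i);
        split(tests, 4).forEach(c -> System.out.println(c.size())); // 25, four times
    }
}
```

Ignoring durations is exactly the downside discussed next: equal counts do not mean equal running time.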
I'm going to talk about that too. The second way: you have a queue, you put in all your 300 tests, and then you have a worker that sees which phone is available; that phone runs the next test that comes along.

Either way, when you go for something like that and build it yourself, you might lose some features that the test framework, in our case AWS, already has. Maybe they have some reporting. Reporting doesn't work that well once you split things up, because instead of one report you suddenly have five, but you want a single report that tells you the status of your release, so you need to aggregate them again.

One solution I talked about is the queue. In this very simple scenario we have three phones available and five tests. We queue them all up: the first test runs on device one, the next on two, then three, then one again. And if, say, device two is still running its test, the next one goes to whichever device is faster, so the last one ends up here.

If we distribute using chunks, we pre-define which tests run on which phone and then just start them simultaneously. The downside is that there's a lot of manual effort to make sure the chunks are roughly the same size. If you take the very naive approach (tests one and two on device one, tests three and four on device two, the last one on device three), it can happen that one slot takes twice as long. You need to balance them out to roughly the same duration, because if you trigger them at the same time, you still need to wait until all of them finish.

Coming to AWS: as with all the other platforms we looked into, they don't support what is called test sharding, which is what I just talked about. So again, I had some sleepless nights and thought about what we could do.
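Balancing the chunks by duration is a classic greedy scheduling problem: sort tests by (estimated) duration and always drop the next one into the currently lightest chunk. A sketch with invented test names and durations (our actual tagging is done by hand in the feature files):

```java
import java.util.*;

// Sketch: duration-balanced static chunking via the greedy
// longest-processing-time heuristic. Sort descending by estimated
// minutes, assign each test to the chunk with the smallest total load.
public class ChunkBalancer {
    record TestCase(String name, int minutes) {}

    static List<List<TestCase>> split(List<TestCase> tests, int devices) {
        List<List<TestCase>> chunks = new ArrayList<>();
        int[] load = new int[devices];
        for (int i = 0; i < devices; i++) chunks.add(new ArrayList<>());

        List<TestCase> sorted = new ArrayList<>(tests);
        sorted.sort(Comparator.comparingInt(TestCase::minutes).reversed());
        for (TestCase t : sorted) {
            int lightest = 0;
            for (int i = 1; i < devices; i++)
                if (load[i] < load[lightest]) lightest = i;
            chunks.get(lightest).add(t);
            load[lightest] += t.minutes();
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<TestCase> suite = List.of(
            new TestCase("login", 2), new TestCase("list item", 8),
            new TestCase("coins", 4), new TestCase("search", 4),
            new TestCase("chat", 6));
        split(suite, 3).forEach(System.out::println);
    }
}
```

Since all chunks are triggered simultaneously and you wait for the slowest, shaving minutes off the heaviest chunk directly shortens the whole run.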
On the AWS public cloud there's a bit of overhead for each test execution: they have a setup phase that prepares the phone and a teardown phase, and it all takes time. So the first solution, the queue, doesn't work, because you'd go through this setup for every single test. We needed to split things up instead, and to handle the reporting issue we use something we were already using before: TestRail, a test management solution. There are a couple of others as well, but we're quite happy with it, and the basic concept is the same whatever you use. We use it as an umbrella over all the test executions: we create one test run on TestRail, we create all these test packages we've seen before, we upload our application, and then we trigger the executions; right now I think we're using four or five. These executions run at the same time and report results back to the same TestRail run, and once all the tests are complete we can close the run and it's done. So, as I said, it's kind of an umbrella over the separate executions.

Let me demo this as well. For that, let's have a look at our Jenkins; I'm going to make this larger. I don't want to go into too much detail; it's all still work in progress, so it looks a bit hacky. Basically, we have a Maven command for every slot. The first one also downloads the APK or IPA file of the latest released version; we provide a URL where a small web service serves these. It also creates the run on TestRail, which is basically just a bunch of properties you pass in: the TestRail project, the test suite, and which test plan. We have one test plan for Android, another for iOS, and different plans for sanity and for what we call fast feedback tests, or pull request tests. Then we need a little hack to publish the test run ID into the Bash environment so the next slots can pick it up, so that each subsequent execution knows which test run it needs to report to. We trigger all of these, and in the end a little script just polls the results; once all are in, it terminates the Jenkins job.

We've written some Maven plugins to make our lives a little bit easier; I'll share the links later. There's the plugin that does the parsing from feature files to JUnit (if you use TestNG, you can use the same approach; only the template would look different). There's another one for the TestRail Maven integration. This is just a very basic demo; what we do is a little more complicated than that, but basically, if anyone is in a situation where you need something like this, it's all on GitHub, and there should be enough information there to get it running and have a good starting point. The last one we're also using is for AWS Device Farm, so that we don't need additional scripts: we wrap it all in one single Maven execution, which does the generation of the JUnit files, creates the TestRail execution, and then triggers AWS. So basically, our whole test run comes down to a script that is still less than about 10 lines of code.

Which brings me to the third learning, or maybe I'll cover that later. The third one: I think we already covered most of it. When we evaluated which cloud provider we wanted to use, we had a huge matrix: which phones does it support, how often are there updates, how fast is it, performance, cost, and so on. What we didn't look into so much, and in hindsight, if I had to do it again, this would be my number one criterion: does it have an API? It's nice if it comes with all sorts of plugins; AWS has a quite good Jenkins plugin, and that's good. But you don't know all the requirements you'll have in the future, so if there's anything the tool cannot do, and that's quite likely, since no software is perfect, then you need a toolset to extend it, and for that you need a good API. So that's actually my
number one tip: if you look into moving your tests to the cloud, see what you can do with their API.

To conclude, maybe a bonus learning: some of the best practices we came across while revamping our test automation. There's always this distinction: are we doing QA, or are we software engineers? In a lot of places these seem to be completely different people, looked at in a completely different way, when I think it should be: we're all engineers, we're all building software. Many times, if you look at the code in test automation solutions, it's pieced-together, copy-pasted spaghetti code. It works for some time, but once you want to extend it, it can bite you, and bite you quite badly. So we aim to write clean code and make sure it's maintainable. We can all agree that's important for production code you ship to your customers, but we should extend that to the code we use internally too. I don't need to go into too much detail here.

Another thing: please make sure to document what you're doing, because staff turnover happens in QA as well, and just because I know what I did today doesn't mean I'll still know a week from now what the hell I've been hacking together. Especially with solutions like the ones here, which work our way around some of the issues we found, it's quite important to do it in a way that's properly documented and that you can build on in the future.

And maybe just personally, I always find it important to share what we're doing, and we try to open-source as much as we can, so that people running into similar problems can pick up from there. If you read through the forums of all these cloud providers, everyone is having the same issues, so if someone goes through the pain of solving one, I just feel it's good to share it with everyone else.

So, the final slide. I mentioned it before, and it's maybe not too related to the cloud presentation: I don't like being called QA. I think everyone, in whatever company, should be accountable and should feel responsible for the quality of the software, and we should all just be engineers. I like the term automation engineer a lot better, and if you have a software development mindset when you do test automation, I feel you can do a lot of great things.

OK, so that's it. I also want to share that Sham has set up a Slack channel for the meetup, so we can stay in touch. If you have any questions you can also contact me directly; we will share the video, we will share the slides, and if you have any suggestions for future talks that would be interesting, just let us know, get in touch. So before I hand over to Sham, I mean, we have a Q&A at the end, but if you have any questions now?

Audience: How do you actually spread your tests? You need to run all your tests on Android and iOS, and you mentioned dividing into 25, 25, 25. Is it the same framework for both iOS and Android, everything the same?

There are a couple of things here; some of them we discussed in our last meetup, but let me just follow up on this a bit. One thing: we use the same framework for both Android and iOS, and we share the feature files across both platforms, with some exceptions when a feature only exists for Android, for example. For the locators, we try to use IDs and CSS only; we have different annotations for Android, for iOS, and one for web, for Selenium. We use the page object pattern and reuse the page objects across all platforms as much as possible. Sometimes, for example with the coin page again, there's a coin page for Android that overrides some methods because it behaves slightly differently, but in the ideal scenario, especially between Android and iOS, the two are not identical but very, very similar, so the only thing that differs is the locator. And then, when it comes to how we run these tests, I actually
forgot to show this: when we do the parsing of the feature files, we can filter by tag. If I say I want to run Android, I generate JUnit classes only for everything that's tagged with Android. This scenario, for example, exists for Android, for web, and for iOS; others just have Android and iOS. So let's say I want to run for Android: I filter on the Android tag, and the other tags are actually these different slots we talked about. For sanity we have right now four different slots, and, that's the downside of it, at some point we need to go through all the feature files and tag them: this scenario is in slot 1, this scenario is in slots 1 and 2. Let me find one that's somewhere else... unlucky. But believe me, we have them all split up. And this is actually also the link to TestRail. (Sorry? Yes, that's the link to TestRail.)

But coming back to the filtering: if I run it like this, it will only generate, where is it, oh yes, here, everything that's Android. Actually not, because I need to clean first. You can also see that it works pretty fast. These are just these five; if I say I want to run, let's say, the next slot... And then when we upload to AWS, we only upload these five that we want to run; with every run we specify which tests are in the file, and we specify which device we want to run against. If that's an Android device, we need to make sure we upload the correct tests and the correct APK file as well.

So where is it again: you saw these parameters. This is the project in AWS, and this is what's called the device pool; it's here. You can actually assign multiple devices, but we only use one and say this device pool has, for example, an iPhone 7 Plus. If I specify that this test should run against this device pool, then it will run against this iPhone, and this way I can decide which tests run on which device.

Audience: You said every six months you need to buy a new phone. What was the problem that was caused by that? What happened to the phones that you have to buy one every six months?

It gets quite expensive, and you don't have flexibility. It's not that anything happened to the phones. The reason we'd need to buy one every six months is that we want to test against the latest phones: you want to make sure your application works with the latest phones, with the latest Android version, with the latest iOS version, and also that it works with older phones that maybe more people are using. If you have a cloud device farm, you have more flexibility, because you can always target different devices. If I want to test against 20 different devices in my own device farm, I need to buy 20 phones. With, let's take, AWS Device Farm on the public cloud, you just buy a certain number of slots, say five slots for Android, and I can use whatever Android phone I want, and I can change that; maybe next month I want to target different phones. So you're not stuck with a particular type of phone.

Audience: How much time does the execution take, for let's say 100 tests?

For all the tests, probably three to four hours, but it depends; some of the scenarios are quite large, and we're not 100% done with making sure all the tests perform well, so we know we have potential to reduce that, maybe even by 50%. But it's not such a big difference from running them locally; we have these performance problems locally as well. If you run in a cloud it will be a little bit slower, but not by much, if you do it with Appium. And there's also a difference between two ways you can do it: one is the AWS approach, where you upload your whole package; the other is where your WebDriver isn't local but sends all the WebDriver commands remotely; BrowserStack does that, for example. With AWS, they don't support that on their public cloud; you need to upload, and that's why we need to go through a lot of these things, so
that we can run there the benefit of it is that it is faster every request needs to go especially some of them they don't have device farms all over the world especially in Asia they don't have we have AWS I think the closest is in the US so imagine every request needs to travel halfway across the globe it's not much but it slows it down a bit another one we're looking at was Bitbar I think they told us the closest one in Europe I think in Europe they are coming to Southeast Asia so in Poland yeah so that's something you need to take into consideration I don't think it makes it a lot slower so both approaches have their benefits so it's like it's like $260 per month yes I think it's $250 you're buying one farm per month you're buying one farm one farm per month it's our first loads $250 per month so it's a new farm new model farm but it comes also the cost that we need we need some person to manage it we need a farm for the area let's assume we buy like 100 tons whatever we assume that we need 100 tons or 50 tons and we need to keep maintaining that silver and it goes on we can't do that cost and it goes on more than what we do spend do you have any limitations on slots I mean how long should the other test go through so you can run all the tests during one month without any time limitations there is no time limitation if you have the slot you can test as much as you want so we're also trying to make sure we almost test 24 hours we make maximum use of the slots what about the upgrade testing are you doing the upgrade testing sorry what testing upgrade what do you mean by upgrade testing ah I mean the I mean OS updates if you look at like cloud you don't take care of that just say one day I want to test against iOS 10 and then I test against iOS 11 how the update happens I don't care and that's also one of the challenges you would have on your local device farm because you need different versions especially when some bug reports come in and you cannot reproduce on 
the newest one and like especially with Apple there's only upgrade you cannot downgrade easily unless you know how to do so that's one of the benefits when you just have a provider that gives you a huge variety of different combinations have you ever tried to test the simulators instead of real devices yeah we did that too do you really need that you need like test only on like 99% so situations simulators will also cover all the problems that you can find on real devices but I've seen you will not find the real bugs on the simulators no you can't find real bugs on the simulators we had we had a few issues before that it was only applicable on let's say S9 version of other way where because we tried to reproduce it on simulators it was not able to find and then we only were able to replicate on specific hardware version because it was something to do with the that particular moment of that course that build version especially clicking camera taking photos clicking camera taking photos so you have automated test also for taking photos yeah I mean we were looking into like some mixed approach where you simulator and you lesser real devices but we haven't followed through on that because that also makes it more complicated because you need to support both but it's definitely a valid approach if your app doesn't need anything that you can only do on a real device photos also if you need to shake the device or stuff like that and it's totally valid to go for simulator you can simulate also all these actions but we have faced issues with simulators in the past where in a certain version of Xiaomi it was doing its own memory managing quite differently so even though we were not using it in the camera features we were having memory leaks on that phone so it's quite simulators are re-invented by testing that's where we decided also for one we should not scale up with simulators as well how do you check the performance of the device for example do you have any discussions 
for memory check the memory leak or CPU performance not at the moment it was one of the criteria we looked into but we haven't got around to try that out with AWS yet is there any IPA to do that to be honest I'm not entirely sure some of the service providers gives you some logs how much memory consumed during the test so they will give you some nice graph and the log frame but again you need to manually go through the logs and analyze what's happening there and whether it's causing in which all actions it's causing too much memory when you swipe down too much then it eat you more memory something like that there is no ready system available like okay I just ran and this is causing me this much memory there is no such perfect system actually with android test you can check the current number of usage every time so during test you can after each step you can check the memory most of the log service providers with iOS I don't think so with android you can help yourself a wave not really beautiful time so for iOS it's much easier to use manual tools for search for this it's worth purpose it looks like manual way don't have issues but the automations way have issues in terms of swiping or the data inputs or the output yeah all sorts of them takes quite a bit of our time yeah I think take some experience also to figure out how to write the test they are not flaky but some things you cannot cover everything some developer changed some locators again okay your test will fail there are some more subtle subtle differences between manual I think sham has a lot of good examples for that yeah definitely happens but also the other way automated test finds something that no one can reproduce manually which is even trickier because then your automation tells you something is wrong but you don't really know what is it really an issue is it an issue with our automation or is it just a bug that no tester was able to reproduce recently we had an issue we cannot click on one of the 
Recently we had an issue where we could not click on one of the buttons. It was working fine before, but then the visibility property was set to false, and we are not sure why. If you look at the application, you can still see the button, and you can tap it with your finger, but with Appium you cannot click it.

On both Android and iOS?

Yeah, I was seeing this on both platforms. These are some of the issues we run into. But there are workarounds: you can click using coordinates. Even though the element is not visible to Appium, you can grab its coordinates and click on those, and that will work. Obviously you still need to tell your developer to set the visible property back to true, or request that it be fixed; but something like this can always happen.

Do you still do manual testing alongside the automation?

Yeah, we still do manual testing, but less of it.

Is it separate, or the same people?

We have less time for manual testing now because we are focused on automation, and that's one of the challenges in moving forward: I'm quite sure we are missing some bugs, because we cannot put as much effort into manual testing as before. There are also a lot of things that automation, especially at an early stage, and even a hundred scenarios is still quite early, simply doesn't cover; we don't have everything automated. A good manual tester will find a lot of things that automation may find at some point, but not where we are now. So we still have to think about how to balance the two.

Are the plugins and the framework you showed on GitHub?

The plugins I showed are on GitHub; the framework itself is not yet. But during the last meetup I presented a demo framework that's quite similar; that's also on GitHub and uses most of the ideas we use internally. We are planning to open source our framework as well, but it will take a while. It's in the pipeline, but I can't say when it will be ready, because we need to strip out stuff that's currently internal.
I cannot share all of those internal examples at the moment. What I can share is the parser, and everything for the integration into the build process; so if you already have Cucumber tests or JUnit tests, you can use that. All of that is open source already. We will share the links in the Slack channel so you can have a look, and if you have any more questions, just contact us.

How does AWS compare to Microsoft Azure?

Honestly, I don't have experience with Microsoft Azure. A few months ago Xamarin was open-sourced; there was a tool called Xamarin that integrated with Azure, and it was in preview. Basically, we did not focus on Azure because it was not critical for us to go with Microsoft products at the moment. The problem with Azure was also that we would have had to migrate everything, the whole build process, to their side, which we could not do: our iOS and Android build processes are completely separate, and migrating everything over there would have been very costly.

Are there any other questions?

Yeah, we've been looking into that as well, and it was actually one of the better ones. We looked into Sauce Labs, BrowserStack, Bitbar, AWS Device Farm, and pCloudy, plus doing it ourselves; those were the options we considered.

If something is not working, you can switch to whatever is best suited for you, so I'm just interested to know why you kept on using Amazon. What do you need from Amazon?

You want to take that one? OK. I think we had previous tie-ups with Amazon, and we have a good relationship with them. They wanted us to try something they had been pushing since last year, and we still wanted to go out and evaluate what is effective; but we also had good support from their team, so it was much easier.

What about AS/400, or mainframe systems in general?

Probably not with Appium; there are other tools to look into. We had some experience with Silk Test, which was acquired by Micro Focus; they also bought some other test automation solutions, and I think they are the largest vendor now. But even with the distribution stuff we talked about: at my previous company we pushed them for two years before they implemented it. They had been around for twenty years, and only in year seventeen did they build that. For your situation, maybe look into something like that. I don't think Appium can do it; I don't think so, but I can have a look at that. I think Appium can do Windows, but we are only using Appium for mobile applications, and I don't think it's the right tool for that.

If there are no further questions: thank you.