So, the next session: quick and dirty test statistics. Jerry is not with me; Jerry was supposed to start this session. Any idea who this guy is, or have you heard about the principle of factor sparsity? Any idea? Heard of the 80-20 rule? Yeah, actually the same guy. His name is Vilfredo Pareto, and the 80-20 rule is also known as the Pareto principle. Basically it means that 80 percent of the effects come from 20 percent of the causes. In our case: if you fix 20 percent of the issues, maybe you can fix 80 percent of the failing test cases.

Every day our test cases were failing, and we needed to collect all the data and analyze why they failed. Sometimes the analysis shows that fixing one thing will fix many of the other tests too. So we need to collect all the data and analyze the trends: which test cases fail the most, which succeed the most, and what the root causes of the common failures are. You need a database to collect all of this, right? Every day you pull the data from your test results and store it somewhere; only then can you do the analysis. That is basically why you collect the data: it helps you fix the flaky tests, and it gives you statistics like how many successes, how many failures, and the most failing and most successful tests.

How to do it? We were evaluating AWS Device Farm, and we found that the AWS SDK APIs are super cool. We were using Python, so we used Boto3, but the SDK supports a number of other programming languages as well. It's quite easy: you just create a Boto3 client object, call the APIs, fetch all the test results, and do whatever analysis you need.

Here is an example. You instantiate a client and you can list all your projects. We had multiple projects (one for the regression runs, another for the smoke tests), and you can have as many projects as you want. If you want the data for a run, you get the project, say "this is my run ID," and ask for all the details of that run. You can get the suites as well: every test run has suites, and there is a suite ID for each. From the suite you can get whether it passed, was skipped, or failed, all the information you need. Likewise, you can get the results too, like the log files.

Before this, we used to assign one QA member to go to the AWS web console, download all the failing log files, look at why each one failed, and mark it in an Excel sheet. There were at least 20 to 30 test cases failing every day; imagine downloading 30 log files daily, opening each one to see why it failed, and filling up the Excel sheet. It was a boring job, and everyone had to do it, so we rotated it to a different QA each day. One day it was Jerry Steng's turn. Jerry went home, decided it was a boring job, and automated it. How can you do it? It's already available on GitHub. Basically, you create an AWS Boto3 client, pass your project name to get the project ARN, and from there you get all the run IDs.
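For reference, here is a minimal sketch of that Boto3 flow against AWS Device Farm. The project name is a placeholder and error handling is omitted; the calls themselves (list_projects, list_runs, list_jobs, list_suites, list_artifacts) are standard Device Farm client methods.

```python
import boto3

# Device Farm's API is served from us-west-2 regardless of where tests run.
client = boto3.client("devicefarm", region_name="us-west-2")

# List the projects (e.g. one for regression, one for smoke tests).
projects = client.list_projects()["projects"]
project_arn = next(p["arn"] for p in projects
                   if p["name"] == "regression")  # placeholder project name

# Walk every run in the project.
for run in client.list_runs(arn=project_arn)["runs"]:
    print(run["name"], run["result"], run["counters"])

    # A run has one job per device; each job has suites that report
    # passed / skipped / failed.
    for job in client.list_jobs(arn=run["arn"])["jobs"]:
        for suite in client.list_suites(arn=job["arn"])["suites"]:
            print("  suite:", suite["name"], suite["result"])

    # Artifacts include the device logs we used to download by hand.
    for art in client.list_artifacts(arn=run["arn"], type="LOG")["artifacts"]:
        print("  log:", art["name"], art["url"])
```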
Then you pass the run ID and you get all the details: on which device this particular job ran, and which test suites it has. We were using Cucumber, so we fetch the feature file name, when it started, and the name of the scenario; you log all of that and call the Google Sheets API, and it auto-fills the sheet. We were using gspread to handle updating the Google Sheet (there is a small sketch of this below).

I can give you a demo. I have just hardcoded the run ID, so it will fetch only one run. It will take some time. And this is not only the case with AWS; whatever cloud service provider you use, they will give you APIs similar to this. The point is that it's quite important to collect all the test results and analyze them. So it's fetching, and if you look at the sheet, it's auto-filling. Basically I'm collecting the start time, feature, scenario, execution time, status, and device name: all the information. We did something on top of it as well; we created some spreadsheet macros that show which scenarios fail the most and which pass the most. Looking at this high-level overview, you can say: OK, I need to fix this one. This is the most failing scenario, the most critical one, and if you fix it, it may fix some other tests too.

One cool thing is that it also gives you the exception details. For example, this assertion failed, and this one failed because some button was not visible. You can then see that the same issue caused the next scenario to fail as well, which means that if you fix this one, you can easily fix the other scenarios too. If you went through everything individually, it would take time and kill your day, right? So it's always good to build statistics like this, and it's always good when your management asks, "Just give me a dashboard of which test cases are failing now"; you can simply show them this. It's not a perfect system; it would be better to build a web application for it, but for a quick solution a spreadsheet is good enough.

It's already open source, so you can have a look. You just need a little configuration: some setup for Boto3, credentials and config, and for the Google Sheets API you need to download a credentials.json file in order to communicate with the Google Sheet. That's it. I think that covers not all the steps, but almost. You can have a look at the link; it has some examples.

Did you try to use Allure reports? Not with this one, but I have used it. Because it can do the same. Yes, exactly: Allure can be integrated with this. But we are thinking about having our own database and a web application, because then you can do much more, like selecting a date range and getting the statistics within that range. Such things would be quite difficult to do with Allure reports.

This is just a quick implementation; it exists because Jerry didn't want to do it manually, and I thought it was worth sharing with everyone.
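Going back to the gspread part mentioned above, here is a minimal sketch of the sheet-updating side, assuming a reasonably recent gspread and a Google service account; the spreadsheet title and the row values are placeholders.

```python
import gspread

# Authenticate with the service-account key file downloaded from the
# Google Cloud console (the credentials.json mentioned above).
gc = gspread.service_account(filename="credentials.json")

# Open the results spreadsheet (placeholder title) at its first worksheet.
sheet = gc.open("device-farm-results").sheet1

# Append one row per scenario, mirroring the fields pulled from the run.
sheet.append_row([
    "2019-06-01 09:30",   # start time
    "checkout.feature",   # Cucumber feature file
    "Guest checkout",     # scenario name
    "42s",                # execution time
    "FAILED",             # result
    "Galaxy S9",          # device name
])
```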
Any questions? You guys are in the Slack channel? Everyone is in the Slack channel? OK, so what you can do is go to this URL, the Singapore Appium Meetup Slack signup link. Can you share a QR code, maybe? How can I? Just search online for a QR code generator. Or you can use Bitly, for example. This is also OK, right? Yeah, I can scan this one.

Any questions? We were supposed to have one more session, which is a discussion. So if you would like to continue, we can discuss anything related to Appium for another 20 minutes. So, how many of you use any device farm?

It's not really about a device farm. I worked before at Lazada; I was the leader of the mobile automation team there. We mostly used simulators for automation, with several real devices, just to keep the tests a bit faster, because we had about 200 tests for Android and about 150 tests for iOS. It takes a lot of time, because Lazada operates in six countries, and each country has some special features, so almost every test has to be executed at least six times. Multiply 200 Android tests by six and you get a huge amount of tests to execute. We tried to use clouds, but then we found out the real price for our requirements. We were not able to use a public cloud; we had to host everything ourselves, and BitBar offered us a private cloud for our purposes, able to execute all our tests, at about 11,000 US dollars per month. So we decided to grow our own device lab instead, device by device, because in the long run it is much cheaper.

Hi, so for SPH, we are just starting with the automation framework; that is one of the reasons why I actually came here. We are using AWS as well, but we see that there are already available CI/CD tools like Bitrise and Cloud CI. That was one reason I was asking you why you went for AWS only.

Yeah, as I mentioned, we evaluated a lot of things, but this was one of the most flexible: we found we can just plug and play any devices, and we can ramp up and ramp down quickly whenever we need to. It was much more flexible.

Yeah, because even for me, writing the automation framework now: initially I started with everything together, Cucumber, TestNG, Appium, but before going further I just wanted to build up the framework first and then start on the scripts. So those are the challenges I'm going through now, like which tools to start using. Just like with AWS, which I was not aware of (they don't support Cucumber), am I going to face other challenges with the other cloud providers that I'm not sure of?

Like Martin said, it's all about software: there will always be workarounds we can put in. Basically, like he said, for him simulators were easier and worked well, but we had difficulties with simulators, so we opted for this route. It's not one solution for all; it will never be like that. You have to see what your company needs, what the demands are, and based on that you decide which tools to use. Maybe for you AWS won't be an option; maybe it will be the only option.
You never know. But based on your requirements: for us, we wanted a lot of devices, and different devices, to cover whatever variety we need. I mean, Martin?

Yeah, I think the good part is that almost all of them let you try them out. So if you're in the process of deciding which tool to go for, you can demo them and make a POC, and see what fits you. That's basically what we did: we made a POC for, I think, six different solutions and then figured out which route we wanted to take. I would really advise you to first narrow it down to maybe two or three, then get in touch with them. AWS will give you free minutes to do some testing, and all the others will give you free credits as well. You can actually do quite a lot with those, figure out whether you hit any difficulties or roadblocks, and then make a much better, much easier decision.

Just adding on to that: you can create your own acceptance criteria for evaluating the service providers, and measure each provider against them. What we did was create almost 15 to 16 acceptance criteria, give each provider a green, amber, or red signal per criterion, and count which one scored best on the criteria we really needed (a tiny sketch of this kind of scoring follows below). Thank you.

Do you still maintain an in-house lab of actual devices for manual testing? We do have one, but we don't maintain it anymore. I mean, it still runs, but we are not really actively looking at it. Basically we still have the devices, but we don't use them for automation. It's more that when we want to reproduce bugs, we have the devices on site for manual testing, and we also use them for Appium testing on a local machine, for development or for reproducing bugs. But it's not a proper rack of set-up devices like you saw in the picture; that is no longer in place. The rack is still there, but it doesn't have any phones. More decoration.

Yeah, maybe if you have more than two or three products in your company and you want to test all of them, then an in-house device lab may be an option for you. But if you have only one single product, you can think about the cloud solutions. Imagine you have five products and you are testing each of them on AWS: you are paying that much money and time. There are some cloud solutions that charge based on how much time you use rather than offering unlimited usage, and that might not be suitable in that case. And if you have test cases where you need to enable Bluetooth or other special stuff, like scanning an eye or something like that, you cannot simply do that on a cloud solution, right? So if those test cases are very important to you, you need an in-house device lab.

How many unstable tests do you have on the cloud, as a percentage? Coverage? No, not coverage: how many unstable tests out of the total amount of tests? Now it's like 90 to 95 percent just passing, with about five percent unstable tests. We run eight slots, right? Eight slots, and then it takes 40 to 45 minutes. So it's not fixed, I think. Yeah, basically this whole project has been going on for most of the second quarter, almost three months. Oh yeah.
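Going back to the green/amber/red scoring for a moment, here is a tiny sketch of how that counting might look. The criteria, provider names, and ratings are all made up purely for illustration.

```python
# Hypothetical scorecard: rate each provider green/amber/red per criterion,
# then count points to compare them. All names and ratings are illustrative.
RATING_POINTS = {"green": 2, "amber": 1, "red": 0}

scorecard = {
    "ProviderA": {"cucumber_support": "red",   "device_variety": "green",
                  "pricing_model":    "green", "api_access":     "green"},
    "ProviderB": {"cucumber_support": "green", "device_variety": "amber",
                  "pricing_model":    "amber", "api_access":     "red"},
}

for provider, ratings in scorecard.items():
    greens = sum(1 for r in ratings.values() if r == "green")
    total = sum(RATING_POINTS[r] for r in ratings.values())
    print(f"{provider}: {total} points, {greens} green")
```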
We're not only working on fixing tests during that time, but to a large extent, yes. Are most of them API tests or UI tests, or what kind of tests? They're UI tests. But we use APIs in between as well, as I said. For example, on Carousell you need to list an item, right? To test the bump functionality, you really don't need to care how the item gets created. So we create the item using the API and take it from there using the UI. That gives you much faster feedback, and you really don't need to re-test all the other steps: clicking on the camera, filling up the form, pressing the submit listing button. So it's a mix of both API and UI (a rough sketch of this pattern is below).

When you evaluated the cloud providers, did you also evaluate the device farms for Espresso tests? No, we didn't try Espresso tests. Do you have the actual sheet of acceptance criteria that you created? We have, yeah. If you want it, we can share it; not the whole sheet, but we can share the acceptance criteria.

What is the feedback? So basically, for SPH: it has approximately 40 applications, but most of them are quite mature and already in the market, so most of the time we are focusing on the performance of the applications we already have. That is one reason, actually. Because you're saying that if we have multiple applications, we should have our own devices? No, it's not exactly like that. What he's trying to say is: for you there are 40 applications, but not every application is updated every day; not every application is being developed every single day, right? There may be 40 applications, but if a few of them only update once a month, then you don't need to test them every time, only when there is a release, or maybe once per week or so. But certain applications, maybe the news application, get updated frequently, and that would be the main application; you can start with that. And it's not that you cannot have multiple applications on AWS. You can have different applications; it depends on how you use it. You can upload any APK and it will run it.

Is the $250 per month for any number of devices? $250 is for one slot, and one slot is one concurrent execution. You can run a test on one device and then, once it's finished, run against another device on the same slot, so you're not stuck with one type of device or one model. If you want multiple models running at the same time, you need multiple slots. But if you have enough time, say you want to run against three different device models and you can run them sequentially, one after the other, then one slot is good enough. So one slot doesn't mean one particular device. And as Shambu mentioned, some providers charge per minute, others per slot. I think AWS has both: you can also say "I don't want any slots, just charge me for the minutes I use." That again depends on how much you use it; we figured that for us the slot approach is better. Any more questions?
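As a rough illustration of the API-plus-UI mix from the Carousell bump example earlier: the endpoint, token, capabilities, and locators below are all hypothetical placeholders, written against the Appium Python client of that era.

```python
import requests
from appium import webdriver

# Test-data setup via backend API instead of driving the whole listing UI.
resp = requests.post(
    "https://api.example.com/listings",           # placeholder endpoint
    json={"title": "Test item", "price": 10},
    headers={"Authorization": "Bearer <token>"},  # placeholder auth
)
listing_id = resp.json()["id"]

# Only the behavior under test (the "bump") goes through the UI.
caps = {
    "platformName": "Android",
    "deviceName": "Android Emulator",
    "app": "/path/to/app.apk",                    # placeholder path
    "automationName": "UiAutomator2",
}
driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
try:
    # Placeholder locators: open the API-created listing and bump it.
    driver.find_element_by_accessibility_id(f"listing-{listing_id}").click()
    driver.find_element_by_accessibility_id("bump-button").click()
finally:
    driver.quit()
```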
So it would be great if you could give your feedback. I have posted the link in the Slack channel; please feel free to fill it in. Everyone is in the Slack channel, right? So everyone can access the link. It's already there. Do you need a QR code? I don't think so, but here is a QR code if you need it. You have given access to everyone, right? Yeah. Cool.

I just have separate machines with simulators. I start the simulators, run the tests, take all the screenshots, and close the simulators.