So, let me introduce us. Don't go by the photo; yes, that is me behind the beard, this is my original face. I am Priyank and I work for Vuclip as a senior quality engineer, and my co-presenter also works for Vuclip. Before we start, we would like to know what expectations you have of this presentation. Anyone from the crowd, what do you want to learn? Sorry? Okay: no, it is not Sauce Labs, it is a MAD lab, a Mobile Automation Devices lab. Anyone else? A sample test environment for automation? Yes, we will definitely give you data points on that. Alright, if there are no other expectations, we will proceed.

Before we actually get into this session, it is important for you to understand the app we work on. Note that we are not selling our product in any way, we are not promoting it, and we are not saying this solution will work for every one of you on your project. We are presenting this as a case study: we want to talk about the experiments we did, where we failed, what we learned from that, and where we are now.

The app we work on is called Viu. It is an over-the-top (OTT) entertainment app, similar to Amazon Prime or Netflix: a video streaming app with a whole lot of content on display. The content is dynamic, so it keeps changing based on how the programming team curates it, across categories like movies, TV shows, and so on. Of course, we have search functionality, so users can search for and play a video. We have download functionality, so users can watch even when they are offline, without consuming data during playback. Our business model is a freemium model. A whole lot of content is available to users for free, and there is also premium content that users have to subscribe to. Subscribers get an ad-free premium experience, while users who have not subscribed see ads when they play videos. That is how we do our monetization. We also have offers. To give an example: if your device is a Samsung, perhaps from certain series or versions, you may get an offer of 90 days of the premium ad-free experience. So we have such device-based offers, and we also have carrier-based offers: on a Vodafone data network you might see one type of offer, and on an Airtel data network another. So there is a whole lot going on in terms of offers as well.

And just to understand the scale of things: we have an Android app, an iOS app, and a website. We have a presence in a whole lot of countries, including Indonesia, Malaysia, India, the Middle East, and so on. And we support a very large fragmentation of device and OS combinations.

From a testing perspective, I see this as an opportunity along with a challenge. The first consideration: there is so much device and OS fragmentation that we need to test our application across many device and OS combinations.
Second, as she already mentioned, some of our offers are carrier-based, which means we also need to verify that end users get the proper offer; that has to be tested. Third, it is a streaming application, so it should work under various network conditions, be it 3G, 4G, or your home Wi-Fi. And, just like any other application, new features keep coming in, so you need to regression-test the existing functionality and ensure the new features are not breaking anything in the existing application. Last but not least, all of this testing activity should give the business clear visibility into the overall quality of the product. So I would say these are opportunities along with the challenges.

Now, given so much device and OS fragmentation, manual testing across all possible device and OS combinations is not a feasible solution. Manual testing is still a must; some issues are caught only by manual testing. But that breadth of device and OS coverage is not possible manually, so automation is a must; automation is a lifesaver for us. Even so, we cannot run automation across every device in existence, so we needed to find out which devices are most used by our application's users and filter on that. Can someone tell me how you would figure out the most used devices for your application? Analytics, definitely, that is one way. We use analytics too; in our case, the application had already spent a year and a half in the public domain, so we had historical data. Based on that, we identified the most used devices and set about building our own device lab.

Building our own device lab right from the beginning? That doesn't seem ideal. So we conducted a lot of experiments, tried out a few things, failed at a lot of them, and we want to share that with you today. When we first started, we wrote tests against emulators, but that didn't work well for us. First, we struggled to get our app installed on the emulator at all; we kept hitting ARM-translation issues. We could work around that by building the app for the Intel architecture, but then it would not really be the app that end users use, and our end users are on real devices, not emulators. Also, the tests we did run on emulators were extremely slow. So we decided not to use emulators.

The next obvious option was a cloud-based device farm. The issue there is that almost all cloud infrastructure is hosted in the US or in European countries, where our application is not supported. As soon as you launch the application, you get a message that the app is not supported in this country, because the application is IP-based: it first checks your IP, and only if it belongs to an allowed region are you permitted to launch; otherwise it shows the error message. (A small sketch of this kind of check follows below.) The second issue with cloud device farms is that carrier-based simulation is not possible: you cannot simulate a Vodafone or Idea network from the cloud provider's premises. And the third issue is device availability: most of our users are located in Indonesia and Malaysia, and the devices they use are simply not present in the cloud device farms.
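To make the first limitation concrete, here is a minimal sketch of the kind of IP-based region gate just described. It is purely illustrative, not Viu's actual backend code; the country list and the GeoIpService lookup are assumptions.

```java
import java.util.Set;

// Hypothetical IP-to-country lookup; the real service and its data source
// are not part of this sketch.
interface GeoIpService {
    String countryCodeFor(String ip);
}

public class RegionGate {
    // Illustrative allow-list of region codes where the app is available.
    private static final Set<String> ALLOWED = Set.of("ID", "MY", "IN", "AE");

    private final GeoIpService geoIp;

    public RegionGate(GeoIpService geoIp) {
        this.geoIp = geoIp;
    }

    /** Returns true if the app may launch for a client at this IP. */
    public boolean isSupported(String clientIp) {
        String country = geoIp.countryCodeFor(clientIp);
        return country != null && ALLOWED.contains(country);
    }
}
```

A cloud device farm's egress IP resolves to the US or Europe, so a check like this rejects the session before a single test step runs.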
So cloud-based device farms didn't work for us either, and we decided to go ahead and build our own in-house device lab. Now, Priyanka has been talking a lot about building an in-house device lab with the devices most used by our consumers. So, did we build our lab with exactly those devices? What do you think? Yes? Okay. What do others think? We're not really using emulators, but okay, that's one option. So some of you think we actually used the devices most used by our consumers. I'm really happy you think that way, but unfortunately that's not the case: those devices are not automation-friendly. What do I mean by that? Oppo is one of the brands our consumers use a lot. But to run an automated test on an Oppo device, we have to enable developer options, which requires solving a CAPTCHA. Say we did that manually anyway: after 10 minutes of inactivity, the developer options get turned off automatically. So it is extremely tricky for us to run automation on these devices, and we don't have them in our automation lab. The devices we do have are among those very frequently used by our consumers, in the top range, but not necessarily the most used ones. Does that mean we ignore devices like Oppo? No; we use them for our manual and exploratory testing, because those devices are equally critical.

A little about our criteria for automation. First, we need to cover all the critical paths of our application, because we have 10+ million users, so our automation has to cover all the critical user journeys. In our case: search should work well, and you should be able to download and play a video. Along with that, certain specific features, particularly offers, have to be tested very well. Second, your automation should align with your CI/CD; that means any change, be it on the API side or the front-end side, has to trigger the automation automatically, in order to get faster feedback. And third, whatever you do should generate meaningful, content-rich reports that add real value for the business as well as for the QA community.

Now let me take you deep into our automation technical stack. First, we use Cucumber-JVM to define the business rules. All the scenarios behind the step definitions are written in plain English, so they can easily be understood by non-technical people too. Second is tool selection: our application is present on Android as well as iOS, it is a cross-platform application, and we did not want to maintain two separate codebases just for automation. Appium is a good tool for that, since the same scripts can cover both platforms.
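As a flavor of what this looks like, here is a minimal sketch of a Cucumber-JVM step definition driving Appium. The step wording, locators, and the DriverFactory holder are illustrative assumptions, not the actual Viu test code.

```java
import io.appium.java_client.AppiumDriver;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;

// Hypothetical stand-in for however the suite exposes the current driver.
class DriverFactory {
    static AppiumDriver current() {
        throw new UnsupportedOperationException("wired up by the test runner");
    }
}

public class SearchSteps {
    private final AppiumDriver driver = DriverFactory.current();

    @When("the user searches for {string}")
    public void theUserSearchesFor(String title) {
        driver.findElement(By.id("search_box")).sendKeys(title);
        driver.findElement(By.id("search_button")).click();
    }

    @Then("a result for {string} starts playing")
    public void aResultStartsPlaying(String title) {
        driver.findElement(By.xpath("//*[@text='" + title + "']")).click();
        if (driver.findElements(By.id("player_view")).isEmpty()) {
            throw new AssertionError("player did not open for: " + title);
        }
    }
}
```

The Gherkin side stays in plain English ("When the user searches for \"Frozen\""), which is what keeps the scenarios and reports readable for non-technical folks.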
Third, we use Gradle. The interesting aspect is that it does not only build our project; it also does some infrastructure work, like spinning up the Appium servers and dispatching feature files across the devices. That is how we leverage Gradle's capabilities within our automation. And the last thing: we need to analyze our test trends and our failures in order to find flaky tests; some failures are automation issues, some are genuine. For that we integrate TTA, the Test Trend Analyzer, which is the brainchild of Mr. Anand Bagmar. Anand, can you just wave your hand? Yeah, thanks. If you want help integrating it, you can contact him. So that is our technical stack.

Let me briefly talk about CI and how we set it up. Our end-to-end automation tests are triggered in three ways. One, when the app code changes and the app builds, we trigger the tests to ensure the latest version of the app has no issues. Two, when the API changes, that is, when back-end code changes, we again ensure the tests stay green. And three, if we make changes to the test code itself, the tests run again. Those are our trigger points. We use Jenkins as our CI. We have different Jenkins jobs, which basically run the `gradlew` command, and they run tests on Jenkins nodes. These nodes are mapped to different devices; the node selects which devices the tests run on and handles that. At a high level it looks like this: we check out our code, and in the Jenkins node configuration itself we have set environment variables with the device IDs. That is how we do it whenever our pipelines run on certain nodes.

Do we run in parallel? Yes, in some sense. Each job is a pipeline, and we have multiple pipelines that get triggered automatically by the triggers I mentioned, so they run in parallel. A job is always connected to certain nodes in Jenkins, and on those nodes we have set environment variables with the device IDs. Say we have a pipeline that runs tests on Samsung A5 devices, and in our lab we have two Samsung A5 devices: both device IDs are present in an environment variable on the Jenkins node, so when this pipeline gets triggered, tests run on both A5 devices. We have a similar setup for, say, a Moto G5: we might have multiple Moto G5 devices, their IDs are in the environment variables of that Jenkins node, and when a pipeline triggers there, it runs on those devices. Does that help? How does it start the threads? We'll come to the code in a bit; we're using Gradle to control all that. Any other questions at this point? It's great that we're having this interaction, so feel free to ask questions. [Audience] I just want to know why you have to run the same tests on two devices of the same configuration. You said you have two A5 devices and you provided those UDIDs to the Jenkins node, so it triggers on both A5s. Does that add any value? It doesn't run the exact same suite on both: we do distribution there, running a few scenarios on one and a few on the other.
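Going back to the node configuration: here is a minimal sketch of how a test process can read the devices mapped to a node. The variable name DEVICE_UDIDS is an assumption; whatever name you choose, it is set per node in Jenkins, for example DEVICE_UDIDS=ZX1G425D4X,ZX1G425D5Y.

```java
import java.util.List;

public class NodeDevices {
    /** Device UDIDs mapped to this Jenkins node via an environment variable. */
    public static List<String> connectedUdids() {
        String raw = System.getenv("DEVICE_UDIDS"); // hypothetical variable name
        if (raw == null || raw.isBlank()) {
            throw new IllegalStateException("DEVICE_UDIDS is not set on this node");
        }
        return List.of(raw.split(","));
    }
}
```

In a setup like this, the build reads the list once and sizes its device pool from it, which is what the distribution code we show later consumes.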
The pipeline itself looks something like this, very simple: we check out our automation code, copy the app from an upstream pipeline, then build our tests and run them. Right now our tests take around 40 minutes, which is not ideal; we are trying to bring it down further. And then, of course, we publish our reports and archive the artifacts. As Priyank mentioned, reporting is very crucial for us because our business teams look at it, so we have very rich reports; we are using the Cucumber reports. Take this example report showing the scenarios that ran: you can see something red, which means this is a real report, and each test carries a lot of information. We capture the build number, the APK build number, so that it helps us in debugging when we have to look at it later. We also capture API-side details, such as the API version. We also record videos of the scenarios we run, which helps in debugging. We use a gem called Flick for this. Initially, when we started doing Android automation, we used `adb screenrecord`. But we could not use the same thing for iOS, and it also has a limitation of recording a maximum of three minutes, and some of our tests are longer than that. So we moved to the Flick gem, which supports both Android and iOS, and that is what we use now. One thing to add: on iOS it captures a GIF, while on Android it produces MP4.

A very critical thing we do is analytics automation. Why is this important? Because almost the entire business depends on analytics data. For example, which content to show at which location is largely determined from that data. Another example: as I mentioned earlier, we have a freemium model, so we also track how many ads users have seen, and that is what gets us paid by the ad providers. Analytics is therefore extremely crucial for our business, and it is important for us to test it. We test it at the source: our apps are the ones that generate the analytics events, so when we run a test, we verify the events right there, at the source.
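What "checking events at the source" can look like in a test: a hedged sketch, assuming some capture mechanism (a debug log, a logcat filter, or a proxy) represented here by the hypothetical AnalyticsCapture interface. The event and field names are illustrative.

```java
import java.util.Map;

// Hypothetical stand-in for however events are captured from the app.
interface AnalyticsCapture {
    /** Most recent captured event of the given type, or null if none. */
    Map<String, String> lastEventOfType(String type);
}

public class AnalyticsAssertions {
    /** Assert the app emitted an ad impression for the content just played. */
    public static void assertAdImpressionFired(AnalyticsCapture capture,
                                               String contentId) {
        Map<String, String> event = capture.lastEventOfType("ad_impression");
        if (event == null) {
            throw new AssertionError("no ad_impression event was generated");
        }
        if (!contentId.equals(event.get("content_id"))) {
            throw new AssertionError("ad_impression fired for wrong content: "
                    + event.get("content_id"));
        }
    }
}
```

Assertions like this can ride along with the normal video-play scenarios, since the events come from the same app session the test is already driving.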
As mentioned, we use TTA for multiple reasons. Here is an example: a graph showing test execution time over a period of time. The spikes are failures in our tests, and there are quite a lot of them. What this can help with: say you see an upward trend over time. It could mean the performance of the app has degraded, which is why the tests are taking longer. Or it could just mean something is wrong on the test side; maybe you changed a test and added an extra explicit wait that is causing the slowdown. It could be anything. It gives you an insight into things that might be broken, and of course you then do more analysis to figure out the exact cause. The other reason we use TTA is to compare test runs. Say we ran the same suite on a Moto device and on a Samsung device and it passed on only one of them: we can compare those runs and figure out what failed on one and why it did not fail on the other. Maybe that gives an insight into an app-related issue, saying that on certain OS versions there is a problem. Or it could just mean we have not written great tests and there is some flakiness we need to debug. So it does not pinpoint what exactly the issue is, but it gives you insight into where to look further. What are the axes? The x-axis is the days on which the tests ran, and the y-axis is the execution time in seconds. This is one of the graphs; TTA provides a lot more. Do you want to say something about it?

As was just said, a trend in execution time could mean tests are taking longer because of an app performance issue; it could be a test issue, where you have added some extra validations, extra assertions, extra actions; or it could indicate something else, in the infrastructure. TTA just gives insights into the test execution, and with a human mind and that context you have to work out what is potentially going on. In this case you can see pretty much three bands of data points. One is very low, a high density of near-instant execution times; a second is in the middle; and third, there are some peaks. My guess, just looking at this (it is a real graph, by the way, though quite an old one), is that the points at the bottom mean the test did not even run, which is why it failed almost instantly. And that could be for various reasons: there was a problem in the APIs, or a problem in video playback because of ads or pop-ups and the way we handle them, since the tests handle video play, which also means handling the offers and the ads and skipping the ads. In some cases there were very genuine device-specific issues that we found. So this gives an insight into what is happening, and you can look at it and say: there are a lot of failures at this particular point in time, was something happening with the infrastructure then, or were builds or deployments going on? That is the kind of meaning you can infer from this chart about what is going on in the application.

The next one is the execution trend per test case, over a period of time. Each bubble is one test execution. So for any one test, say the login test, you can see how much time login takes over a period of time. Yes, it is for one test only.

[Audience: have you automated the carrier-based offers?] Yes, we have. Our backend is set up so that we have a separate SSID within our office premises, and the backend understands that if a request comes from that particular IP address, it should be treated as, say, a Vodafone offer, or whichever carrier-based offer. That way we have a provision to run carrier-based integration tests within our office premises; it is a simulation of the carrier.
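A hedged sketch of that office-network carrier simulation: the backend treats requests from known test-SSID egress IPs as belonging to a given carrier. The IPs (taken from the TEST-NET documentation range), SSID names, and carrier labels are all illustrative.

```java
import java.util.Map;

public class CarrierSimulation {
    // Egress IP of each office test SSID -> carrier to simulate.
    private static final Map<String, String> TEST_IP_TO_CARRIER = Map.of(
            "203.0.113.10", "VODAFONE", // SSID "lab-vodafone"
            "203.0.113.11", "AIRTEL");  // SSID "lab-airtel"

    /** Carrier to assume for this client IP, or null to fall back to the real lookup. */
    public static String simulatedCarrier(String clientIp) {
        return TEST_IP_TO_CARRIER.get(clientIp);
    }
}
```

A test then just joins the appropriate SSID before launching the app, and the carrier-specific offer shows up exactly as it would for a real subscriber on that network.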
So, as we spoke about, we get parallel execution from the different pipelines themselves, which get triggered when the code changes, when a new build is there, or when a backend deployment happens. We have also done distribution; to answer your earlier question, and I think part of yours as well: using a Groovy thread pool, we pass in the number of connected devices as an argument. It spawns that many threads, and we run the scenarios across the devices. What happens is that we acquire a device that is free. The device pool is just a map of, say, device ID and status, whether it is acquired or not, plus a few other basic things. We acquire a free device and run a scenario on that device. If multiple devices are connected, different tests get assigned to them, and when one becomes free, it immediately gets assigned the next scenario. So we are not determining upfront which scenarios should run where; we are not doing that distribution upfront, we are doing it at runtime, as and when devices become free.

And we do Appium server management. This is actually a very simplistic approach: we decide a port number based on the ID of the device, start an Appium server on that port, and that Appium server then holds a session with that device. Of course, this does not happen on every test run; what you see here is a very simplified version of what we actually have, but once an Appium server is started for a device, we do not stop and restart it until the entire execution is done and all the scenarios have finished. Is that clear? Does that help? [Audience: is it one Appium server with multiple sessions serving multiple device IDs?] No, one device is connected to one Appium server instance. If 10 devices are connected, you start 10 Appium servers. Will that slow things down overall? We have not faced that issue so far; the devices are connected to a Mac mini with 16 GB of RAM. But we definitely want to improve it: now that Appium supports allocating multiple sessions within one server, we want to move to that. We have not moved to the latest version yet, which is why we have this code.
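Our real distribution code is Groovy/Gradle based and open-sourced (linked later in the talk); here is a simplified Java sketch of the same idea: a pool of free devices, one worker thread per device, and one Appium server per device on a port derived from its ID. The port derivation and the Scenario type are illustrative.

```java
import io.appium.java_client.service.local.AppiumDriverLocalService;
import io.appium.java_client.service.local.AppiumServiceBuilder;

import java.util.List;
import java.util.concurrent.*;

public class DeviceDistributor {
    interface Scenario { void runOn(String udid, java.net.URL appiumUrl); }

    private final BlockingQueue<String> freeDevices;
    private final ExecutorService pool;

    public DeviceDistributor(List<String> udids) {
        this.freeDevices = new LinkedBlockingQueue<>(udids);
        this.pool = Executors.newFixedThreadPool(udids.size()); // one thread per device
    }

    /** Runtime distribution: each scenario grabs whichever device frees up first. */
    public void runAll(List<Scenario> scenarios) throws InterruptedException {
        for (Scenario scenario : scenarios) {
            pool.submit(() -> {
                String udid = null;
                try {
                    udid = freeDevices.take(); // block until a device is free
                    int port = 4723 + Math.abs(udid.hashCode() % 1000); // port from device ID
                    AppiumDriverLocalService server = AppiumDriverLocalService
                            .buildService(new AppiumServiceBuilder().usingPort(port));
                    server.start(); // the real setup starts this once per device and reuses it
                    try {
                        scenario.runOn(udid, server.getUrl());
                    } finally {
                        server.stop();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    if (udid != null) freeDevices.offer(udid); // release the device
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.HOURS);
    }
}
```

As mentioned above, the production version keeps each Appium server alive for the whole run instead of stopping it per scenario; the start/stop here just keeps the sketch self-contained.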
[Audience: you run multiple test cases back to back on the same device; don't you have a state problem? Do you need to bring the device back to a known state?] We clear app data between tests, and there is not much else we particularly have to do in our case. Each test starts from a clean app install, and each device has, in effect, one user per device. There are also some APIs we have written that clear that user's details from the backend systems, along with clearing the app data, before the test runs. So a test is always running in a clean state, unless a scenario explicitly needs persistent data across sessions.

Let me just repeat the question: why are we clearing data every time, when in the real world users use the app continuously and do not clear data? We have many scenarios. Say in one test, all we want to do is download a video, go offline, and play it; that is one scenario. And say in another case, we want to play a video from the recently watched section. These two are actually different tests; they are separate concerns. If we did not clear the data, the second test might expect recently watched to start empty and then get an item added after a video play; but if a previous test had already played a video and we just continued, a recently watched item would already be there. That makes them dependent test cases. Every time you run a test, your application should be in a predictable state, and if you carry over all the previous information, you lose that. The tests have to be independent.

[Audience: but as a real user, say I use the Times of India app today and close it, then launch it again the next day for more news, continuously. When you test that, you should not clean the data every time: maybe there is a tutorial on first launch, maybe there are ads, as with AdMob. So the starting point should not be clean, no?] There are multiple things I can think of. One, we distribute our tests at the scenario level, so even if we serialized, we could not just say "this device is free, assign it the next test", because we would face the same ordering issues again. The other thing: it is true that this kind of scenario happens in real life, but if it is really important to us, we can have one scenario that explicitly tests it: launch the app, play a video, close the app, relaunch it, and play something else. We already have tests like that. It also comes down to the intent of the test: what are you trying to verify? There are the APIs I mentioned that clear the user data before launching the app, and there are also APIs that set the user into a particular state, for instance "I want this user to be an expired-subscription user", and then we launch the app and see how it behaves. So not everything has to be chained together as tests. We should leverage different strategies to set up data, and then ask: given this state of the application, how does it perform? It depends on the intent, on what the test is supposed to do.

Another step we took was around ADB utilities. What happened initially is that we used to go home after work with a green build and come back the next day to find some of the builds red. What was going on? When we checked, we found that some of the devices had lost ADB connectivity. A few devices in our lab would lose connectivity at 8 p.m. every evening, and we still do not know why. As a workaround, we wrote some scripts that simply re-establish ADB connectivity, and after that we no longer had test failures for this reason. The other cause of failures was that sometimes devices would randomly lose internet connectivity. So we wrote scripts to connect the device to the Wi-Fi network of our choice, ran them before every test, and then proceeded. The tests stopped failing for that reason as well.
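A sketch of what such recovery scripts can look like, written here as a Java wrapper around adb. The adb server restart is standard; the SSID-join command (`cmd wifi connect-network`) only exists on Android 11+, so treat that part as an assumption that depends on your devices.

```java
import java.io.IOException;

public class DeviceRecovery {

    /** Restart the adb server to re-establish dropped device connections. */
    public static void resetAdb() throws IOException, InterruptedException {
        exec("adb", "kill-server");
        exec("adb", "start-server");
        exec("adb", "devices"); // log which devices came back
    }

    /** Re-enable Wi-Fi and join the lab SSID before a test run. */
    public static void restoreWifi(String udid, String ssid, String password)
            throws IOException, InterruptedException {
        exec("adb", "-s", udid, "shell", "svc", "wifi", "enable");
        // Android 11+ only; older devices need a helper app or UI automation.
        exec("adb", "-s", udid, "shell", "cmd", "wifi",
                "connect-network", ssid, "wpa2", password);
    }

    private static void exec(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + String.join(" ", cmd));
        }
    }
}
```

Run before each test, routines like these removed both classes of overnight failures for us.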
So these are a few things where we failed for reasons we had not expected at all, and we did some workarounds just to help us proceed.

Now let me walk you through the overall MAD Lab journey, how we started. The simplest possible setup: we connected our Mac to one simple Android device, ran the QA scripts, and it worked well. After that we procured a few more devices, again with a very basic approach. Initially all the devices were stuck up with double-sided tape, and sometimes the glue would weaken and a device would drop off. So how did we fix it? We procured industrial Velcro tape, the kind used in workshops for hanging up hammers and other tools, and mounted all the devices with that. And all the cables pass through that one magical black box. Do you know what it does? Any ideas? It is the secret ingredient of our automation, by the way. No, it is just a cable manager. It does not do anything; it is the carpet of our house. We hide all the mess inside that one beautiful black box.

After that we procured a few more devices, and now we have our own device grid within our office premises. You can see a few more devices here. Currently we have 18 to 20 devices connected to three Mac minis, and the coverage spans a range of Android devices, Samsung and Moto among them, as well as iOS devices. There is still plenty of scope to add more; a few slots are still empty, and I hope we will fill them soon. And we have another setup like this one. Can anybody guess what it is for? We don't have a photo of the one with the doors open; maybe we'll share it sometime. Stealing phones, yes, that's what happened. The lab is placed right in the middle of our office, at a spot you see as soon as you enter, so it looks nice and is extremely accessible. If somebody wants to test something on an A5 device, the easiest thing is to just walk over, unplug it, and do their testing. What we realized is that maybe we had not communicated enough that people should not be touching these devices. We cannot put a fence around it, so we did the only thing we could: we took a board and wrote on it. It says: do not touch these devices unless you want to be killed by a mad woman. Guess who the mad lady is?

So that was our MAD Lab journey. Now we'll show a quick demo. It's just a small test that many of you, probably the Appium users among you, will have seen things like: a basic test that launches the app, searches for a video, and plays it. Here we search for a title and play the stream; we pause, check the playback time, and things like that. By the way, the recording is at 2x; it doesn't actually run that fast.

That is what we wanted to cover as part of this session. Anand has written a lot about MAD Lab on his blog; that is one of the nicest sources for more information. The infra part of our code is open source on Anand's GitHub repo. TTA is again open source, so that is also available. And the Wi-Fi related scripts we have written are open source as well; we have the links. Yes, sorry, you had a question? [Audience] I just have a few clarifications.
How did you do region-specific tests? That's a good question. The advantage of our app is that we detect location based on IP. The mapping of IP to location is also stored in a database table; we cache it, in some sense. In our testing environments we have seeded a few IPs into that table. What those IPs are, basically: we have several public IPs, each behind a different SSID. So the Wi-Fi related script we spoke about connects to one of these SSIDs based on which region we want to run the test for. Say I connect to an SSID called "Middle East": my public IP is already mapped in the table in my backend, so I am treated as a user from the Middle East. That's how we do it. Could we use a VPN? We do use VPNs at times for manual testing, but not for running the automated tests. Video streaming over a VPN becomes a challenge for running tests: it adds flakiness, and getting ads over the VPN can become a problem. These are already end-to-end tests that need a full environment setup; adding a VPN makes it more complex.

[Audience] I have a couple of questions. One, can you talk a little more about how you managed iOS-specific device challenges? And two, something related to what you mentioned about the Oppo phone; this is probably a little controversial, but why didn't you consider rooting it? I understand it's not the perfect solution, but did you consider it? What was your strategy there? I'll answer the second question first. We did consider rooting a few phones, but we also looked at our analytics data, and most of our users do not have rooted phones, so we decided that might not really be the best approach for us. It is very contextual; rooting is of course one way of solving the problem, but we thought we would rather use those devices for our exploratory testing, which gives us more value in our opinion. That was the reason. We did have a few rooted phones that were used for testing as well. And for iOS, can you repeat? I've lost the context. [Audience: iOS devices are a bit more of a walled garden. What kind of challenges did you face, especially setting up Wi-Fi through scripts and things like that?] For iOS, to be frank, that kind of network switching is not possible through automation, and we haven't done the switching of networks on iOS yet. Our Android suite is in decent shape; iOS is not that mature yet, we're still working on it, and we haven't been able to do the Wi-Fi changes and so on there. And to your other question: yes, it is the same suite. In both our apps the business flows are actually the same; except for the screens, where the elements are different, the flows are completely the same. So we reuse all of that, except where we have to actually perform actions like click and play. We've basically created a factory that performs either an Android click or an iOS click, based on which driver is initialized. The rest of the code is shared: the whole business layer, what should happen next, where it should go, all of that is reused.
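A minimal sketch of that factory idea: shared business-flow code talks to a screen abstraction, and the factory returns the Android or iOS implementation depending on the driver that was initialized. Class names and locators are illustrative assumptions, not the actual Viu code.

```java
import io.appium.java_client.AppiumBy;
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;

interface PlayerScreen {
    void tapPlay();
}

class AndroidPlayerScreen implements PlayerScreen {
    private final AppiumDriver driver;
    AndroidPlayerScreen(AppiumDriver driver) { this.driver = driver; }
    public void tapPlay() {
        driver.findElement(By.id("com.example:id/play_button")).click(); // Android locator
    }
}

class IosPlayerScreen implements PlayerScreen {
    private final AppiumDriver driver;
    IosPlayerScreen(AppiumDriver driver) { this.driver = driver; }
    public void tapPlay() {
        driver.findElement(AppiumBy.accessibilityId("Play")).click(); // iOS locator
    }
}

class ScreenFactory {
    /** Choose the platform implementation from the driver that was initialized. */
    static PlayerScreen playerScreen(AppiumDriver driver) {
        return (driver instanceof AndroidDriver)
                ? new AndroidPlayerScreen(driver)
                : new IosPlayerScreen(driver);
    }
}
```

The business layer only ever calls tapPlay(), so the same Cucumber scenarios drive both apps.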
Sorry, in the interest of time we'll just take one last question, and then we can have follow-up one-on-one conversations. There was a question over here. [Audience] Thank you for your talk, it was nice. Is your app a hybrid app or a native app? It's a native app. The second question is regarding the in-house device farm, which is nicely implemented. Do you have any suggestion, any metrics, for how many devices one machine can handle? Say a system with 16 GB of RAM and two CPU cores: how many devices can be connected? In our case we have eight devices connected to one Mac mini and it works: a Mac mini with 16 GB of RAM can easily accommodate eight devices without an issue. And if you have iOS devices, leave some room for Xcode. Okay, thank you. [Host] Okay, sorry, in the interest of time we'll have to stop further discussion, but for any follow-up questions I'm sure Priyanka and Launay will be around for some more time. We also have a little time before we get back to track one for the panel discussion, which will be happening over there. So thank you very much, Priyanka and Launay. Thanks, it was insightful. Hope everyone got some value out of it. Thank you.