We're going to show some of the stuff that we've been building at Hike. How many of you have used Hike? Wow, that's good. For those who have not, Hike is a messaging platform. We have 100 million users on the platform, and about 40 billion messages per month going through the system. And obviously, we support all platforms, Windows, iOS, and Android, Android being the biggest market in India, so we have a pretty strong user base on Android. For an application of this scale, running at 100 million users and supporting all kinds of devices, what are some of the typical challenges you would anticipate? Scalability is one challenge for sure. Then there's cross-platform support, especially with the fragmentation in Android; that can be pretty challenging. Plus, the large user base we have ends up using a very wide variety of phones, and a lot of them tend to be on the lower end of the market. With lower-end phones, one of the typical challenges you'll see is performance-related issues, but also, in general, the overall experience of the app might not be as great. So we've done a lot of interesting tweaks to make the app as fast as possible. Plus network: the majority of people in India still end up using a 2G network, and you want this kind of messaging app, which lets you send files, videos, images, all kinds of things, to work seamlessly on 2G without breaking any functionality. So now, imagine you are in charge of testing this app. What kind of things would you do? We used to follow a 15-day cycle for shipping to production. We have since moved to a 30-day cycle, predominantly because we realized that our consumers are not going to pull a new version every 15 days; we adjusted to what our consumers actually do. But we had a pretty short release cycle. If you're releasing every 15 days, what are the kinds of things you need to make sure of from a testing point of view? And I'm sure it's pretty obvious that you can't do this manually, considering the number of features we have and the different kinds of phones we need to support. If you want to ship to production every 15 days, it's almost impractical to try to do this manually. So Vivek and I are going to talk about, and also demonstrate, how we've actually gone about doing these tests. We have a series of demos to give you a quick preview of how the automation suite is built and how it's run, and then we'll dive into a little more detail about some of the tweaks we have done and some future plans. Of course, by no means do we think we have solved all the problems; there's still a lot to be done. But it's in decent shape, and it's actually used every day in our company, so we wanted to showcase what we have so far. With that quick context setting, I'll hand it over to Vivek to jump straight into a quick demo. Thank you. Good afternoon. So we are testing a big application which requires multiple devices to communicate with each other, right? When you send a message, it gets received by your friends.
One of the examples I'll take is group chat, where you add, say, 10 members or even more than that; there's no limit right now. Another example I'll take is file transfer. The earlier model we used in automation was a client-server architecture, where we used to send a message from one client and then hit our server API to verify whether the message had been received by the server or had been entered into the DB. Those were the API calls we were checking. But we found that this wasn't really simulating a real-world scenario. That means: if I send a message to you, is it actually appearing on your mobile screen or not? If I transfer a file, have you received it or not? There was no way to verify that in the UI. We know UI testing is quite challenging in automation; it is very difficult to maintain, and tests fail many a time. So I'll quickly jump into a demo where there are a few aspects I'll be checking. One is verifying one-to-one messaging. It will be end-to-end, and there will be no server verification involved. All of these are live demos, and I'm just praying this demo goes well. It's challenging, but we'll give it a shot. So I have four devices connected over here. Three of them are Android with different OS versions, and one is iOS. You can see the app has been launched on two different phones. The third phone has started a chat with another user, maybe your friend, a family member, your mother, your father, anybody you can chat with. You can also send awesome Hike stickers over here. And we want to verify every single detail that appears in the UI. We verify the counters appearing over there. We verify the stickers, whether the same sticker has been received by your friend or not, because we might have issues where you sent one sticker, something went wrong on the server, and a different sticker was received by the other client. So this is how messaging works, right? You send a message, you get back a message, and stickers, and so on. So we thought, why not take the test automation framework to another level, where it coordinates various devices? This was just a small example where two devices were communicating with each other. I'll show a few more examples where I make more than two devices communicate with each other. A simple example is group chat. Before that, I want to jump into a scenario. If I ask you guys, how would you test this? Being in the instant messaging space, we need the internet to communicate with each other, right? A data connection, or Wi-Fi, or something like that. But what if I say you can communicate with your friend without using the internet at all, without consuming your data? Actually, before we jump there, sorry, Vivek, I did want to quickly pause. So you saw a quick demo before this, a simple one-to-one messaging: one person sent a message and a sticker to another person, and the other person received them. If you were to actually test this, what are the kinds of things you would anticipate, and what would you actually test for? The time it takes to receive the message, someone says. So then we can start saying, okay, some of these fit into performance-related stuff, and some of them fit into the actual functionality-related stuff.
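To make the end-to-end idea concrete: that first demo essentially boils down to two Appium sessions, one per real phone, with the assertion made on the receiver's screen rather than against a server API. This is a minimal sketch, not Hike's actual suite; the locator IDs, package name, ports, and UDIDs are all hypothetical placeholders.

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.net.URL;
import java.time.Duration;

public class OneToOneMessageTest {
    public static void main(String[] args) throws Exception {
        // One Appium session per real phone; UDIDs and ports are placeholders.
        AndroidDriver sender = newSession("UDID-A", 4723);
        AndroidDriver receiver = newSession("UDID-B", 4725);
        try {
            String text = "hello from automation";
            sender.findElement(By.id("msg_compose")).sendKeys(text); // hypothetical ids
            sender.findElement(By.id("send_btn")).click();

            // Assert on the receiver's UI, not on a server API: the message
            // must actually render on the other phone's chat screen.
            new WebDriverWait(receiver, Duration.ofSeconds(30)).until(
                ExpectedConditions.textToBePresentInElementLocated(By.id("msg_text"), text));
        } finally {
            sender.quit();
            receiver.quit();
        }
    }

    static AndroidDriver newSession(String udid, int port) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("udid", udid);
        caps.setCapability("appPackage", "com.example.messenger"); // placeholder
        caps.setCapability("appActivity", ".ChatActivity");        // placeholder
        return new AndroidDriver(new URL("http://127.0.0.1:" + port + "/wd/hub"), caps);
    }
}
```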
We have another talk tomorrow where we'll specifically go into the performance-related aspects, so let's focus this one mostly on functionality. From a functionality point of view, Vivek talked a little bit about making sure the same sticker actually showed up, that the sticker is rendered correctly, is visible, all of that. But there's more interesting stuff that we actually verify under the hood over there. So what kinds of things would you check? What happens if the other person is not online? The different network types you might want to verify. Different file sizes for the files you're going to transfer. Someone also brought up different versions of the app itself: you can't expect everyone to be on the same version, so you might have people on different versions of the app. We'll come to that; there are more, right? You can see how we start opening Pandora's box, and you can very quickly run into all kinds of cases. Yeah? So I think Vivek kind of jumped into the next section, which is that in Hike we support something called offline messaging, which basically means you can use Wi-Fi Direct and communicate with other people without using data or going through the internet at all. So we want to quickly see a demo of that. Or do you want to explain something before that, Vivek? Yeah, so. We'll come to the infrastructure, hang on. That's the meat of the talk. We want to build a little excitement around the kinds of things you can do, so that when we actually talk about the infrastructure, you'll understand why some things are the way they are. There's offline messaging, there are other kinds of things, right? We just wanted to help you understand all the different scenarios first and then jump into the infrastructure side of things. Just to note, this is a local setup. We support a number of devices connected to the machine, and we can make a number of devices communicate at the same time. And all the devices you see here are real devices; we are not using any emulators at all. So I'll jump into another demo. There is a sample case which verifies messaging without internet. Here again I'll make two Android devices communicate with each other. Just note the Wi-Fi signal over there: it will go off, and even then we'll be able to communicate with a friend, send stickers, and even send files up to 100 MB. And it will be really quick. So the other phone gets a request to connect using Hike Direct. Wait for some time. Wi-Fi is turned off now. As you can see, Wi-Fi is turned off, and on one of the phones it gets connected and shows an exclamation mark over there. So the first guy started a chat. He's sending a file. The file gets sent. I am not using any internet, no data connection. All you save is your money. This case is very useful when you are stuck in a very remote village where you don't have Wi-Fi and even your 2G is not working. Your friend is in another room, or stays in another hut; you can directly chat with him, send files, send pictures and all. Right now the range is 100 meters, so within 100 meters you can chat. That's enough, right? Even if you are staying in a single apartment and the other person is next door or something like that, it will work. This is completely based on Wi-Fi Direct. It creates a hotspot; you just have to have your Wi-Fi on. That's it.
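Hike Direct's internals weren't shown in the talk, but the platform primitive it builds on, Android's Wi-Fi P2P (Wi-Fi Direct) API, looks roughly like this. A sketch of the OS-level flow only, with permissions, error handling, and the broadcast-receiver plumbing omitted; it is not Hike's implementation.

```java
import android.content.Context;
import android.net.wifi.p2p.WifiP2pConfig;
import android.net.wifi.p2p.WifiP2pDevice;
import android.net.wifi.p2p.WifiP2pManager;

public class WifiDirectSketch {
    private final WifiP2pManager manager;
    private final WifiP2pManager.Channel channel;

    public WifiDirectSketch(Context context) {
        manager = (WifiP2pManager) context.getSystemService(Context.WIFI_P2P_SERVICE);
        channel = manager.initialize(context, context.getMainLooper(), null);
    }

    public void discover() {
        // Peers arrive asynchronously via WIFI_P2P_PEERS_CHANGED_ACTION
        // broadcasts; a receiver (omitted) then calls manager.requestPeers().
        manager.discoverPeers(channel, new WifiP2pManager.ActionListener() {
            @Override public void onSuccess() { /* discovery started */ }
            @Override public void onFailure(int reason) { /* e.g. Wi-Fi is off */ }
        });
    }

    public void connectTo(WifiP2pDevice peer) {
        WifiP2pConfig config = new WifiP2pConfig();
        config.deviceAddress = peer.deviceAddress;
        // One side becomes the group owner (a software access point); data
        // then flows over plain sockets with no internet connection involved.
        manager.connect(channel, config, new WifiP2pManager.ActionListener() {
            @Override public void onSuccess() { }
            @Override public void onFailure(int reason) { }
        });
    }
}
```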
All right, let's jump into a little more detail. Next. So until now we saw only two devices communicating with each other. What if I have a scenario with more than two devices? The best example is group chat. I'll take a very specific scenario: say I create a group chat with two other friends and I name the group chat, say, "wow" or "hi friends" or something like that, and the group chat name that appears on another phone is different. This can be a case, right? This can be a corner case, a boundary case that we might miss. So here, you can now see that these devices are connecting with each other, and all of this is happening at runtime. I'm not assigning any devices; a test case makes a request to a location and a device gets allocated. It sends messages. We can even pray now. Right now it's a little slow because everything is on one machine and nothing is happening in my demo. That's fine, I think we can skip this. Still, you can see the three devices simultaneously brought up the Hike app, and they can communicate with each other. Something is wrong with that one. Yeah. So there can be specific scenarios, like transferring files across multiple OS versions on Android, that might cause issues, and the same can happen with messaging or some other feature that we provide. The important reason we wanted to show this demo is that a lot of times, at least in the past when I've done this kind of thing, you're mostly testing with one device, making sure it communicates with the server back and forth, like we talked about earlier. But in instant messaging apps, or other kinds of peer-to-peer apps, or think of games where you actually play across two phones, there might not even be a server involved, right? In those kinds of cases, how do you even do testing? What we're trying to demonstrate is that it's not just between two devices; it can be multiple devices, and we can still do the testing across all of them. And it's real time, because you're taking the same test and running it across multiple devices. It's not separate tests running on each device; it's one test which actually runs across multiple devices. Yeah, so the demo till now was just using Android phones. But we are in a market where people are using various OS platforms, iOS, Android, and all. There can be a scenario where a file sent by Android is not supported by iOS, and something can go wrong. So we thought, why not make these test cases run on different platforms simultaneously? I'll put up a demo again. Till now you saw two devices, then we moved to three Android devices of different types. Now we'll move to a demo where all four of these devices will be communicating with each other. So we started with the third Android device. The best part of this is that I'm not maintaining different test scripts to run on different phones. It's a single trigger point, a single test, which triggers all of this. Appium lets you run multiple instances at the same time, and my script takes care of switching between devices.
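One hedged way to picture that switching: keep one Appium session per logical device name and route every step through a small registry. This is our reading of the pattern described, not Hike's actual abstraction layer.

```java
import io.appium.java_client.AppiumDriver;
import java.util.HashMap;
import java.util.Map;

public class DeviceRegistry {
    private final Map<String, AppiumDriver> drivers = new HashMap<>();

    public void register(String name, AppiumDriver driver) {
        drivers.put(name, driver);
    }

    // Every step names the device it runs on, so one script can drive
    // two, three, or four phones without per-platform copies.
    public AppiumDriver on(String name) {
        AppiumDriver d = drivers.get(name);
        if (d == null) throw new IllegalStateException("No device named " + name);
        return d;
    }

    public void quitAll() {
        drivers.values().forEach(AppiumDriver::quit);
        drivers.clear();
    }
}
```

A step like "device A sends a sticker, device B verifies it" then becomes calls against registry.on("A") and registry.on("B"), which is also how one script can span three Androids and an iPhone in the cross-platform demo.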
Yeah, we'll get into how we are doing that. Anyway, Appium lets you run multiple sessions, right, on different port numbers and all that. So what's happening is something like this: you saw the app launched on the third device first; then my script switched to the second device, did the steps over there, and moved to other devices whenever required. It's totally driven by my script. Here you can see it created a chat with an iOS user, and we are verifying each and every detail in the UI that you see, even the user-join events and everything. Now, to verify whether a sticker sent by iOS is supported by Android, we start sending stickers. I move across different screens, switch between devices, and verify whether each device is receiving the correct information, the correct messages, correct counters, and everything. So here I made proper utilization of my resources. This is a local demo where one device was sitting idle, but idle devices can be taken by other tests when they run in parallel. I'll jump into that. So far, did it make sense? That was the quick demo part of it; we're going to jump into the details now. Just to quickly recap what we've covered so far: we've talked about running the same test across multiple devices, and the devices can be on different operating systems. We are of course using Appium under the hood, but there are other things we've built to make this possible, to run the same test across multiple devices, connect those devices in real time, and verify that messages are going through. And we can have a pool of these devices that you pull from on demand, use, and put back into the pool, so you don't have to keep the devices tied to a single machine. Right now we're demonstrating all of this on one machine, but it doesn't have to be that way. That's what we've covered so far, different demos to showcase what we've done. Now we can talk about some of the challenges we ran into and then jump straight into the actual solution. Yeah, so this was a demo in a local setup, as you can see; we have a separate lab which handles all these things. Now, I wrote a few scripts which make multiple devices communicate with each other, including across OSes. The first question was execution time: if I run all my test cases like this locally, the execution time might be far too long. The next problem it posed was running the tests in parallel. We searched the market and looked into a number of mobile labs. We used many of them, but they all assumed a client-server architecture: you just need one device, and the other party is a server. They can handle that easily. However, we had to solve our problem by building our own. I won't jump into every detail of how we achieved it; I'll give you a diagrammatic overview of how the system's pieces interact. So the other problem was the execution time, which was killing us. A build comes in, and if I ask the developer to wait for 10 hours, he won't wait at all. We are moving in an agile environment, working with different teams, and we want to catch bugs as early as possible, so these executions should be as frequent as possible.
So quickly, the demo you saw was essentially running one test case at a given point in time, across multiple devices. Now, obviously we have thousands of test cases; you're not going to run them sequentially. So what we're going to talk about next is how to run these tests in parallel across multiple devices. Yeah. So we did not reinvent any wheel at all. There are a number of mobile labs in the market that give you parallelization and distribute your test cases. We thought, why not make multiple devices communicate with each other, since we were not able to find a solution for that. There were a few solutions available, but being a tester, the person writing a script should be able to write it very easily, right? I just want to initialize my Appium capabilities locally, like I always do on my local machine. I don't want to change IPs or make special configuration in the capabilities, and we were able to achieve that. So what we did was build a system which distributes all these test cases. You saw just now that my resource utilization was not good, because I'm running it locally. At the start, a test used just two Android devices; if another test needs one or more Android devices, it can take the idle ones. We have 15 test suites with around 5,000 test cases, because, as I told you, we want to check each and every minor detail on the device. Second, we built our own tool for parallel execution of multi-device communication. The system, as you can see, is a single-point, single-click solution for automation. It's just a web page: you log in and select a template. One of the things we take care of is that if I'm making a change in one of my modules, I don't want to test the complete app. So we have uploaded a number of templates into our system. Say I made changes to my group chat functionality; I don't want to check any other functionality that is available over there. We just select a template which contains only those test suites that deal with group chat. So any developer who is making changes in his or her module can execute just those cases to check whether it is working fine.
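Conceptually, a template is nothing more than a named subset of suites keyed by module. A purely illustrative sketch, with invented template and suite names:

```java
import java.util.List;
import java.util.Map;

public class Templates {
    // Hypothetical mapping: module template -> the suites it triggers.
    static final Map<String, List<String>> TEMPLATES = Map.of(
        "group-chat", List.of("GroupChatCreationSuite", "GroupChatMessagingSuite"),
        "file-transfer", List.of("FileTransferSuite"),
        "full-regression", List.of("AllSuites"));

    // A developer who only touched group chat runs just those suites
    // instead of the full 5,000-case regression.
    static List<String> suitesFor(String template) {
        return TEMPLATES.getOrDefault(template, List.of());
    }
}
```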
Now, this is a graph; let's talk some numbers on how we reduced test execution time. With the client-server model that we started with in 2014, when we first built out automation, it took us around 12 hours to execute those tests. 12 hours. And this could only be done on a staging environment, or a branch just below that. At that time we were not agile; we slowly moved into agile. Then we moved to the distributed client-server architecture: we used some labs in the market, and we have our own lab on-site. And with Dexter, the complete tool we named, which runs all of this, it came down to 1.5 to 2 hours. Now I can run the complete regression on every build I ship, whether it's a beta, an alpha, an internal build, whatever it is. That's quite a good jump, right? From 12 hours down to about 2 hours. So that's the system; I'll go over its other features shortly. Is that 12 hours the average? Yeah, 12 hours is the average time. I'm not talking about the best time; it's the average, because a run depends on the number of devices available in the lab. If two developers trigger runs, the second run will be queued, waiting for the first run to complete. And we've made some significant changes where you can request the number of devices you want to run your tests on. So if you're giving a run, you can say: I don't want to use too many resources, I have only 10 tests, and it's fine if I get the report in half an hour. There's a lot of room for improvement that we're gradually working through; right now each request runs everything for an entire build, while we're in the process of taking only selective parts of it. So it's not fully there yet, but it's on the way. Yeah, everything runs on real devices. For different versions, the test requests which version of the app it needs on which kind of phone, and we pick a device based on that configuration. And your test cases should have a good mix, so that when something fails you can actually pinpoint that the functionality is failing because of that version, rather than just knowing the test failed and having to go debug. So let's quickly wrap up the next one; it will give a bit more clarity on what we've been talking about so far. These are the components of the system. Test Suite is the client side: a web page, the complete client side with all the templates, as you noted, and everything. Test Case Executor, TCE, is our middleware. And Dexter has all the devices connected to it; that is the lab setup, you could say. What happens is, when you trigger a run, it takes the selected test cases from the complete test store and assigns them to test case executors. Each test case executor takes care of a single test case: if I have four test cases, they go to four different test case executors. This is to enable parallelization. Each test case then makes a request to our lab through an API; we have the API integrated into the test case, so the test case tells the lab, for instance: I need one Android device. The API checks availability in the lab, whether devices with that specific OS version are free, and responds to the test case executor with the devices. So it requests specific OS versions, with a specific Hike version, our app version, installed on them, and that's what gets handed back. Each test case gets access to its set of devices via Dexter. That's good. We'll come to that, right? Surely there are a lot of questions; hold on to that one, it's an important question. Let's first make sure everyone understands this. Yeah, so as soon as the test case executor gets the devices it requested, it executes the test run and collects the report; we collect device logs at the same time, and test logs at the same time. If you are running on a release build, the mapping file is used to deobfuscate the device logs. We provide all of this in the report: screenshots are available, videos are available, everything is available in the report. It's sent back to the client, and a nice report is shown where you can select a test case and see the reasons for failure, and the same report is sent by email.
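Put in code terms, the TCE-to-Dexter handshake is essentially a device-allocation request. The endpoint, payload, and response shape below are all hypothetical, since Dexter's real API wasn't shown; only the flow matches the description: ask by platform, OS version, and app version, and get device handles (or a queue slot) back.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DexterClient {
    public static void main(String[] args) throws Exception {
        // Placeholder versions; a real request would come from the test's Given step.
        String request = "{\"devices\":[{\"platform\":\"android\",\"osVersion\":\"6.0\"},"
                       + "{\"platform\":\"ios\",\"osVersion\":\"9.3\"}],"
                       + "\"appVersion\":\"4.2.1\"}";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
            HttpRequest.newBuilder(URI.create("http://dexter.lab.local/allocate")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(request))
                .build(),
            HttpResponse.BodyHandlers.ofString());

        // Illustrative reply: device udids plus the Appium port to bind,
        // or a "queued" status when no matching devices are free.
        System.out.println(response.body());
    }
}
```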
So there are a lot of things that still need to be implemented. The main one is scalability. Right now, a single Dexter instance supports up to 70 to 80 devices. We are working on scaling it so that we can accommodate more devices in the lab, and in doing so bring the test execution time down even further from two hours. The second thing is security. I've read a lot of stuff on Google about this, and one of the reasons I found that labs don't allow devices to communicate is that if they're on the same Wi-Fi network, there can be a security loophole there. So we are working on these two things. For us, at least, everything's in-house, so security is not the issue. But if we were to decide to put it out for other people to use, then of course security would become an issue, because you don't want random devices communicating with each other. Which is actually a core feature we need, but it can also be a security issue, right? That's why, if you look at most of the real-device farms out there, they don't allow devices to communicate with each other. And if that's not possible, we can't even run our tests, which is what got us started down this whole route. So that security problem is still not really addressed; being in-house, it's not a problem for us right now. Good question; we've not decided. We said, first let's solve the problem and be sure it actually works for all our cases, and then decide. I mean, Hike is not going to make a commercial version of this; that's not the business we are in. So it could be open source, or it could be something we just keep in-house; we've not decided yet. I'll jump directly to the next one. Those are the two open issues, and then there are other things like dependency management between the tests and maintenance of the tests themselves; those are common challenges for most suites. What we really wanted to demonstrate here is having multiple devices communicating with each other and running those tests in parallel. That seems to be a big challenge for most messaging-related apps, or for most games and the like, and that's the specific problem we were trying to address. An emulator will never give you the same experience, right? We do use emulators for lower-level tests: when a developer runs what we call instrumented tests, we use emulators there. We also have unit tests without any Android or iOS dependencies. Those are all at the lower layers of the stack. This is the top of the pyramid, where we're doing the full end-to-end check. We have a lot more tests below this; this is really the topmost layer of tests, where we want it to be as close to the real user experience as possible. So the question here is, how do you manage the cost? Because if you're going to set up your own lab with all the devices and so on, that's going to increase the cost.
Whereas we already had a lot of devices, because we had to make sure we were testing on them. All we were trying to do with this particular exercise was automate all of them so that we could shorten our test execution cycle and give feedback as early as possible, essentially getting to a continuous-deployment kind of model. So cost-wise it's not a concern, because we already had those devices, and if there were a lab outside that gave us all this functionality, we would be happy to just use it. But right now, because of the security issue we talked about, that's a blocker. Just to give you a quick sense: we are using a very low-end machine, so the cost of the machine itself is quite low. It's a complete setup: right now it's one physical machine to which we connect up to 80 devices, which is the most we've gone up to. We actually want to increase that; that's one of the future directions on the scalability side, because as we start scaling, 80 might not be enough, even though it's good enough for now. So we're looking at how to make this more scalable from where it is. But most standard hardware should manage: I'm talking about a server configuration rather than a laptop, and you don't need a very high-end server; most should be able to support up to 80. Is it all USB hubs, all devices connected over USB? Yes, it's all through USB hubs, a bunch of daisy-chained USB hubs. Keep-alive is a good question: how do you keep all these devices alive while they're connected and being tested? As I said, the complete system takes care of that; I'll just go back to this slide. The Dexter piece has a listener that keeps listening. If some device drops out, it sends a request, resets the device, brings it back up, and does the initial setup that we require, because we don't want to redo the initial setup every time we run a test; some initial setup is already done on the device. So we keep tracking those things. If there are failures beyond that, then a human has to go over there and do something, but we are trying to reduce that as much as possible. Right now it's about two people who manage this; it's not a big IT setup. But yeah, that's one of the reasons we said this is not fully ready for open-sourcing or anything: there are still cases where you have to go and physically reset a device, because even though Dexter tries to do it remotely, it might not be possible. But those are rare; in the last few months, maybe we've had four or five cases. It's not an everyday thing.
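The keep-alive listener itself wasn't shown, but at its core the idea is a polling loop over the connected devices. A minimal sketch, assuming adb is on the PATH and treating a reboot as a stand-in for Dexter's real reset-and-setup routine:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class DeviceWatcher {
    public static void main(String[] args) throws Exception {
        while (true) {
            // `adb devices` prints one "<serial>\t<state>" line per device.
            Process p = new ProcessBuilder("adb", "devices").start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.contains("\toffline") || line.contains("\tunauthorized")) {
                        reset(line.split("\t")[0]);
                    }
                }
            }
            Thread.sleep(10_000); // poll every 10 seconds
        }
    }

    static void reset(String serial) throws Exception {
        // Placeholder recovery step; the real routine would also re-run
        // the device's one-time initial setup after it comes back.
        new ProcessBuilder("adb", "-s", serial, "reboot").start().waitFor();
    }
}
```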
Hang on, this gentleman has been raising his hand; we'll go there and then we'll come to you. No, it's not just that one feature, right? In those farms it's one device per machine, right? If I'm sending something from my device to yours, the reason, from what we've read on the internet, is security: each and every device is connected to a different environment, so they won't allow the devices to connect to each other. And even if they did, there's some IT configuration that would need to be done; our setup doesn't require any of that. So to your question: if you just have two devices talking to each other through the internet, with a server in between, sure, most cloud labs can do that. But because of Wi-Fi Direct, and a few other features which require the devices to communicate directly with each other, those are the cases a cloud lab can't solve for us. They also have limitations on how much data you can send and all of that; you can't transfer a one GB file over that, it would cost us a bomb. Hang on, we'll go there. So as Vivek said, we have the same script that runs across multiple platforms; we don't write different scripts for different platforms. There's a nice layer of abstraction that we built on top of Appium which essentially does that for us, so it's the same script running across all of them. It's hard to summarize quickly, but sometimes we use different implementations, because there can be cases where iOS is unique and different; that's what the abstraction layer is for. And there is some OS-related stuff which differs between iOS and Android, like certain pop-ups in between and things like that; those have to be handled differently. Yeah. You already know which device you're talking to, right? Because at every point, you're saying: this specific device, execute this step on it. And when you do that, you don't need conditional logic, because that's all abstracted out in the wrapper we've written. So if there's a pop-up coming that will be handled differently on different devices, the abstraction handles it, because you already have a handle to the device; you don't have conditional logic in the test. That's one of the things we tried really hard to avoid, because if you have conditional logic, then every time a new kind of scenario comes up, you have to go and change n number of places. It's actually a mix; we have a bit of a hodgepodge. We originally started with something completely unstructured, then moved to a Page Factory model, and now we're midway through a refactoring towards a slightly different model, so that part is still half-baked. Hang on, yeah, sorry. Okay. Yeah, to answer your question, there are two different ways to verify this. So the question is: we showed a demo initially where one client sends a sticker or a message to another device; how do you verify that the sticker or message arrived on the other side and is actually rendered correctly? There are two things we've implemented. One is visual verification: we used Sikuli initially. Bad idea, because the maintenance was quite high; every time I change a phone, I need to take a screenshot on that phone and store it somewhere. It's a quick win if you want something quick and dirty, which is how we got started, but then that became the precedent, and we said: now we need to kill this and move to a different model. So instead, on the back end, each sticker has an attribute, and when I send a sticker from one phone to another, I just verify that same attribute on the receiving phone.
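That attribute check is easy to picture with two sessions from the registry sketched earlier. A sketch with an invented sticker identifier, not Hike's actual locators:

```java
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.MobileBy;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.time.Duration;

public class StickerCheck {
    static void assertStickerDelivered(AppiumDriver sender, AppiumDriver receiver) {
        String stickerId = "sticker_42"; // hypothetical accessibility id

        // Pick and send the sticker on the sending phone.
        sender.findElement(MobileBy.AccessibilityId(stickerId)).click();

        // Verify the *same* sticker attribute appears on the receiving phone;
        // pixel-level rendering is covered by lower-layer UI tests instead.
        new WebDriverWait(receiver, Duration.ofSeconds(30)).until(
            ExpectedConditions.presenceOfElementLocated(
                MobileBy.AccessibilityId(stickerId)));
    }
}
```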
The actual rendering itself is all tested at lower layers. You don't need to verify that across every communication. Whether a particular sticker or image or file or video, native cards, a ton of different things, renders correctly when given this content, each of those is tested at lower layers. Here, in the end-to-end test, we're interested in whether, when you send something, it actually arrives. We're not looking at how it's rendered, the colors and all of that; we're essentially ensuring that on the chat thread, or whatever the screen is, this particular ID is visible. Yeah, some properties. So again, like we said earlier, we are not mixing up performance and functionality; we keep those two separate. Here we are mostly focusing purely on functionality. We won't check, as part of this, how long something takes on 4G. We have a talk tomorrow where we're going to cover the performance-related stuff: how we do benchmarking across different network types, across different phone types, and all of that. It's a peer-to-peer protocol, right? It never shows up on the server. You initiate a Wi-Fi Direct connection, and someone connects to your Wi-Fi Direct hotspot. You're asking about security testing in general for Hike itself? I mean, standard stuff, right? Encryption. We wouldn't have that detail, and we wouldn't be able to reveal it even if we knew it. Correct, there are some issues around that, which is why we won't be able to reveal it; I'll talk about that offline, not on camera. What do you mean by delta? So, luckily for us, we don't have that. We try to ensure it's the same experience across different OS versions; otherwise, it would be a very bad experience for a user shifting from one OS to another. So, luckily for us, the experience is the same. However, there are certain features that are available only on Android, for example, and not available on iOS at all. In those cases, the test cases are written specifically for Android devices only. Going forward, that's our direction, but right now feature parity is not at the same level because of certain limitations in the OS: in iOS, you can't do certain things that you can on Android. So we're trying to move towards the same compatibility across devices, and where we don't have it, we simply request only those devices from Dexter, right? So we'll go there quickly and then come to you. To answer that, I won't go deeper into this: we have used Docker, and we have used separate VMs to handle all these things. The Test Suite is basically where you just dump templates; it's homegrown, and we're using Cucumber and other things to manage the test execution and so on. Dexter is homegrown too; it's all handwritten stuff. Those are all different pieces, and Appium is there, of course. We basically named it Dexter because we wanted it to be a single know-it-all kind of thing which handles all your requests: you ask Dexter for something, and Dexter does it for you. You ask it for devices, you ask it for a port, you ask it for something else, right?
The programming language I have used is Java, simply Java. It's a Java server that runs. Again, there are certain difficulties we have in terms of revealing everything here; I'll talk about the reasons why. We can't get into very specific details. But it's a standard Java server that we have built, with pretty much handwritten stuff in it. Again, this part is pretty straightforward: Dexter is a pool of devices with a server running on top of it, which allows you to connect to these devices, request them, and manage them. The Test Suite, again, is a Java client and a Java server that we've built, which manages all the test execution, reporting, all of that stuff. This could technically be replaced by something like Jenkins, but it does something more specific for us, so instead of trying to build a plugin, we just hand-rolled something ourselves. Each test case executor essentially runs in a separate Docker instance. There are specific reasons why you would want to run them in different Docker instances; I think the last talk actually did a really good job explaining that. We were hoping we would just build on top of the last talk; that didn't happen. That's why we're not talking much about Docker here: the session just before went into the details of it, so we thought we'd build on that rather than repeat it. Sorry, yeah, you're correct. So I'll answer this question about running multiple Appium instances; everything is taken care of. Dexter just takes a request for a number of devices, the devices to test on, and takes care of the allocation. The Appium instance is created in the TCE: the port is given by Dexter, but the TCE actually binds to it, creates the Appium instance, and all that stuff. Do you get it? Sure.
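What "the TCE binds the Dexter-assigned port and creates the Appium instance" could look like with the Appium Java client's bundled launcher; a sketch, since the actual TCE code wasn't shown:

```java
import io.appium.java_client.service.local.AppiumDriverLocalService;
import io.appium.java_client.service.local.AppiumServiceBuilder;

public class AppiumLauncher {
    static AppiumDriverLocalService startOnPort(int portFromDexter) {
        // Spin up a local Appium server bound to the port Dexter allocated.
        AppiumDriverLocalService service = AppiumDriverLocalService.buildService(
            new AppiumServiceBuilder()
                .withIPAddress("127.0.0.1")
                .usingPort(portFromDexter));
        service.start();
        // service.getUrl() then becomes the remote URL for the driver session
        // targeting the specific device Dexter handed back.
        return service;
    }
}
```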
Because you want it to be scalable, right? It's not decided up front; requests go on a queue. Because, unfortunately, you can't physically run multiple tests on the same device simultaneously; only one test gets executed at a time. So if the next request comes and no more devices are available, it goes into a queue. You would have multiple devices of the same configuration, yes, to parallelize it, which is what we're saying: right now it takes about 90 minutes or so for us with the roughly 80 devices we have. Now we want to replicate devices so we can run more tests in parallel on the same configuration: OS version, Hike version, other kinds of constraints, the phone type itself, and have multiple identical instances available so more tests run in parallel. No, it's not open sourced yet. Yet, or I don't know if it'll ever be open sourced; I don't know. All of that would be part of the application itself; lower layers of tests would actually verify that, right? You don't want one test trying to verify everything. Here we are essentially making sure that communication across multiple devices works correctly; that's really our main focus. There are lower layers of tests; we have about six different layers, each verifying a specific aspect. So there is a UI test layer which only verifies the look and feel on the UI, how something renders on different devices, landscape, portrait, so many permutations and combinations, offline mode, online mode. You don't want to do all of that here, so you push those down and handle them in the respective layers. Test cases have to be independent, right, if you want to run them in parallel. Easier said than done, I know; he's like, yeah, I know. But of course, because you want to run them in parallel, you have to take extra care to make sure your test cases are actually fully independent. If you put a dependency between test cases, the whole test class has to be executed together anyway, and it's not good practice, right? So the majority of our test cases are independent; you can just execute them directly. But there are some cases where certain steps need to occur before them, and those have some kind of dependency. Again, there's a lot more work for us to do here: a lot of tests are still at the top layer that we're trying to push down to lower layers. Right now it's 5K tests; in my opinion, we can bring that down to about 1K, because about 4K tests can actually be pushed to lower layers of our application. Because of that, we have some dependencies, but those live inside one test class, and that test class gets executed as a whole. Hopefully we'll be able to push most of them down so each one is pretty much independent of the others. We've done this in other places, so we're quite confident it can be done; it's just going to take some time. This is at the class level: inside a class there are methods, and we want each method to be independent too. We already have independence at the class level, but we want each test method to be independent of everything else, so you can execute a specific scenario on a specific type of device and get the feedback without having to run the whole class. Right now we have to run the whole class; that's the general note there. There are some tests that require just one device, right? There are things you can do on your phone alone, like playing games on your own; that requires just one device. But there are ones that require more than one device, and as shown in the diagram, that test is requesting one Android and two iOS devices, so it gets back a handle to three devices. Yes. Is there a limit on the executors? No, but the problem is that if you don't have devices, spinning up more executors isn't useful; that's the bottleneck we're running into right now, which is why we want to scale up the Dexter side to have more devices available. So basically, when you write the script in Cucumber, in the Given part we essentially say: these are the devices that we need. The test then holds a handle to each device, and you say device A sends a message to device B: for device A, you take its handle and send the message, and then whenever you encounter device B, it switches to device B and executes the step on it.
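In that Cucumber style, a scenario and its glue might look roughly like this, reusing the DeviceRegistry idea sketched earlier. The step wording, locator IDs, and allocation call are illustrative, not Hike's actual code:

```java
// Scenario (in the .feature file) might read:
//   Given devices "androidA" and "iosB" are allocated
//   When "androidA" sends "hello" to "iosB"
//   Then "iosB" sees "hello"
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.time.Duration;

public class MessagingSteps {
    private final DeviceRegistry registry = new DeviceRegistry(); // sketched earlier

    @Given("devices {string} and {string} are allocated")
    public void allocateDevices(String a, String b) {
        // In the real flow, this is where the test asks Dexter for matching
        // devices and registers one Appium session per logical name (elided).
    }

    @When("{string} sends {string} to {string}")
    public void sendMessage(String from, String text, String to) {
        registry.on(from).findElement(By.id("msg_compose")).sendKeys(text); // hypothetical ids
        registry.on(from).findElement(By.id("send_btn")).click();
    }

    @Then("{string} sees {string}")
    public void verifyMessage(String device, String text) {
        // Assert on the receiving device's UI, per the end-to-end approach.
        // Android-style xpath; iOS locators differ behind the abstraction layer.
        new WebDriverWait(registry.on(device), Duration.ofSeconds(30)).until(
            ExpectedConditions.visibilityOfElementLocated(
                By.xpath("//*[contains(@text,'" + text + "')]")));
    }
}
```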
iOS is still a big, big headache for provisioning, so we keep battling with iOS in Dexter, but on Android you can just get a handle on a device easily. Yeah, so in my first video, when one guy was typing, there were three dots, right? That was the typing indication. So we verify the attribute; again, I'm saying the attribute. When you are on the conversations screen and all your conversations are there, you get an indication that this particular person is typing, and we can verify that. The question was basically more from a concurrency point of view: how do you test concurrency? Some parts of concurrency do get tested in this layer of tests, but a lot of the concurrency-related stuff gets pushed down to lower layers, because there you have a lot more control and you can actually simulate different kinds of things. So a lot of concurrency-related stuff is pushed layers down. Here it's more about the communication, like I was saying earlier; that's our focus. Obviously there is a tendency, and this is one of the things we struggle with: the tendency to try to do everything in one test. It's actually a really easy thing to do, but it essentially ends up making your test very fragile. So we're working really hard to make sure we push things to the lower layers of tests. Concurrency is one; performance we've separated out; and there are other kinds of things that are also pushed down, like API dependencies and other dependencies. So here we mostly assume, from an API point of view and otherwise, a happy-path scenario. It's a complete single-run report: you go to the client, give a run with whatever test cases you have selected, and once all of them are done, you get a combined report. It takes about 90 minutes. Again, that's with parallelization, right? So if I want to run 10 test cases on 10 different devices? Are you talking about one-to-one, one test case on one device? No: we're talking about multiple devices talking to each other, on different OS versions. When you say across five devices, you mean one test case runs on one device and doesn't require other devices. In our case, a particular test run actually requires multiple devices; that's how we verify that cross-platform messaging and other cross-platform things are working correctly. So for our specific scenarios, it's not that you only need one device. There are a few cases which require only one device, and as and when devices are available, those tests run. Not yet. Not yet. We've kind of almost written our own grid, in some sense; we've essentially built our own grid implementation. There are reasons for that; we can talk about them. Why did we go that way? What you saw here was only for projection: mirroring the devices onto the screen is not how we do testing, it's only for projecting the demo. Yeah, we can, yeah. We can implement it using a number of tools that are available; VNC is one of them, where you make a connection to the device and stream it to the client so you can see what the test is doing. In our case, anyway, you just let these tests run; the video gets recorded if the test has a failure, and you get that in the report. So you don't actually need to sit there and watch it run; it doesn't make much sense to do that. Yeah, why not? That's why these are all separated out, and there's a reason for using Docker instances: to keep that decoupling, right?
So tomorrow, let's say Sauce Labs or BrowserStack decides to provide this functionality. We would just swap all of this out and move to that, right? It's easy to do, and then we don't have to maintain these devices. The problem, though, is that these providers won't have the specific devices we need, and there are other kinds of problems too. In fact, if you come to tomorrow's talk, we will talk about how we even go about figuring out which devices we should test on. You don't just randomly pick whatever devices you have; you do some pretty interesting analytics and profiling of the existing user base to figure out which devices you want to run on, and so forth. So we'll talk about that. And that might still be a problem even if something becomes available in the cloud: whether they can provide those specific OS versions on those specific devices. Some devices are not available outside India at all, so you can't expect, say, Sauce Labs to provide them. iOS, I don't think it's technically possible so far, but yeah. Technically it seems possible, but we've not tried it; when we try, we'll know. The question, obviously, would be: what's the incentive, right? Because this is actually working. Right now, with 80 devices, and that's a lot actually, we're able to run all of this. As we scale, obviously we'll have to re-evaluate. We're also looking at scaling Dexter itself so we can connect a lot more devices, and if we're able to do that, it's just a lot cheaper and easier for us to manage it ourselves. Not necessary; right now it is, but we're moving to a model where that's not necessary. All right, I think we've overshot the time. Thank you again for listening in. Hopefully this was useful. Thank you.