So this session is about benchmarking and performance testing. We had a session yesterday where we talked about how we're doing some of the automation at Hike, and that was more on the functional automation side of things. This one is specifically about benchmarking and performance. KP, Prathish, and I are going to co-present. I'm going to give you a little bit of background and context setting, and then Prathish is going to jump in and do a demo of some of the things we have. Unlike yesterday's talk, this will probably reveal more information, for some obvious reasons. Our goal here is to walk you through our journey: how we started in 2014, and how we've done performance testing up to this point.

How many people are aware of Hike? Wow, awesome. The numbers just went up, right? So, as you know, we have about 100 million users, and about 40 billion messages are exchanged every month. From a volume point of view, we are one of the top apps being used in India right now, which obviously means that performance testing and benchmarking become extremely important for an app like this.

But I want to hear from you. Why do you think we should care about performance and benchmarking as an industry? What are your thoughts before we get started? We as an industry, not just Hike, any other company: why do they care about performance and benchmarking? I want to make this interactive; no one wants to hear a boring talk.

User experience, okay, that's an important one. What do we mean by user experience? We want to make sure the experience on the app is seamless: everything scrolls smoothly, loads fast, there is no lag, and people get to what they want to do very seamlessly. So that's a very important aspect. Anything else you can think of? Time, but that's the same point, right? We basically want to give a seamless experience to the user.

That's only part of the problem, in my opinion. A non-performant app can actually impact the behavior or the functionality of your application itself. It's not just the UX part; it could also lead to people not being able to do what they want to do. If I send you an instant message and it takes three hours to reach you, then even functionally you got the message, but it's kind of broken. It's not instant anymore; I might as well send a pigeon over and you'd get it faster.

So we're talking about these, and there are certainly other reasons why benchmarking and performance are important. Let's look at it from a user's point of view: how would they see an app that is not performant? This would be a user's experience, and we certainly don't want people destroying their phones. If only Hike were in the business of making phones, we would love that.

So let's talk about how this all started back in 2014. We were about to ship the latest, greatest version of Hike. We were shipping every two weeks at that point, and we were just starting our whole automation effort inside Hike. Someone was manually testing the app, and they said, you know what?
When I force-kill the app and start it again, it feels like it's taking a little bit longer. It doesn't feel very crisp. We didn't really have data to say what that meant. So we said, okay, let's actually try to pull some data. We looked into this, and what we found was a jump of 130 milliseconds in the app launch time. 130 milliseconds, who cares? But 130 milliseconds, cumulatively over a period of time, soon hits 300 milliseconds, which is perceivable to the human eye. You can actually notice a 300 millisecond lag, and that's noticeable, so we wanted to catch that and stop it. And if you look at it release on release, the blue one, the red one, and the yellow one, every release we were creeping up slightly. So we said, soon this is going to cross the level where when a user launches, they actually see the app still launching, and that's a bad experience. We wanted to address that issue.

So we decided to pause the release. We decided the release wouldn't go out; we needed to find out what the problem was before shipping. And obviously, when you take a call like this, many people will jump on it, because people want the releases to go out on time. So we had a lot of stakeholders jumping in, and at this point a lot of people were saying, hey, on my phone this works perfectly fine, I don't see any lag. Other people were saying, no, this looks like some lag. And that doesn't seem like a scientific way of going about an app that is used by 100 million people, right? So we decided we needed a more scientific way of doing this. We decided to look at analytics to make decisions on when something will ship and when it won't, and to put a basic framework in place that helps us make more informed decisions. So in the first part of this talk, we're going to talk about how we went about creating this basic framework that helps us benchmark things, and then we'll dive deeper into the performance side of things.

But when we talk about benchmarking, what do you typically do in your company? Do you pick ten random devices? Do you pick ten random use cases and try to do performance testing? Or is there a little bit more structure around this? So you look at the market: what are the top devices being used by your users, the top devices your app is installed on? You want to do performance testing on those devices, because that's going to give you the biggest bang for the buck. What else would you do? That's only one part of the story. You also want to look at specific parts or use cases of your application where you think things might not be the most performant. So one is looking at the devices, one is looking at the use cases. There's more to it, right? Crashes, I wouldn't put into this bucket; that would go into the functionality side of things. Data usage on the customer side is important, yes. So basically what we're saying is: when you do profiling, you need to do profiling on the user side of things, and you need to do profiling on the device side of things. What does this mean?
We'll dive into a little more detail. On the user side of things, we want to look at Hike-specific data. What is Hike-specific data? We want to see how many chat threads you have open, how many active chats, right? If someone has five active chats open, an app launch won't take as much time; if you had 500 chat threads open, it might take longer. How many stickers have you downloaded on your phone? That would have an impact. How many groups have you created, and so forth. So there's a whole bunch of user-specific data that will have an impact, and there's device-specific data that will have an impact too. That's things like how many contacts you have in your address book, how many photos you have in your gallery, things like that.

And then we look at devices. So like Pooja pointed out, what are the top devices being used by Hike users? But that's not sufficient, because we might be missing a completely big market that we are not in, and one of the reasons could be that your performance is really bad on those devices and people are not using your app on them. So you also want to look at what the top devices in the market are right now. You take all of this, you mash it together, and you create a kind of metric. What we came up with is that we segmented our users into different buckets; we call them the 50th, 80th, 85th, and 99th percentiles. And then we look at some of these things in terms of the number of messages, active group chats, one-on-one chats, sticker packs, status updates, address book size, and so forth, a long list of things. These numbers are obviously cooked-up numbers; we can't reveal the actual ones, but this should give you an idea of how we try to segment our users into different buckets.

Once we've segmented our users into different buckets, that's the user-specific data. The second thing is to look at the device-specific data. So in India, what are the top phones on which people are generally using messaging apps? And within that, what are the top devices on which Hike is being used? We looked at both pieces of information. Obviously this is, again, blurred out for various reasons. But the point is that you do this kind of analysis, you pick up this list, and then you need to come up with how you're going to benchmark based on it. A rough sketch of how that segmentation might be computed follows below.
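As a hedged illustration of that percentile segmentation, here is a minimal Python sketch. The metric name and the numbers are invented for the example; they are not Hike's actual analytics.

```python
# Illustrative sketch of bucketing users into load profiles from analytics
# exports. The metric and the sample numbers are hypothetical.
import statistics

# Per-user counts pulled from analytics, e.g. active one-on-one chats.
active_chats = [3, 5, 8, 12, 20, 35, 60, 110, 240, 500]  # made-up sample

# Cut points at the 50th, 80th, 85th and 99th percentiles.
cuts = statistics.quantiles(active_chats, n=100)
profile = {"p50": cuts[49], "p80": cuts[79], "p85": cuts[84], "p99": cuts[98]}
print(profile)  # seed each benchmark device with data at these levels
```

Each bucket then becomes a pre-loaded data set that the benchmark devices can be restored with before a run.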
So I think I've spoken enough; I've given you a broader framework of what we went through. Now we'll jump into a demo and look step by step at what we analyze. So KP, over to you.

So before I jump in: there are companies in the market that do this for you. Nareesh talked about profiling of users and profiling of devices. So, covering all these parameters, we have our own benchmark application simulating the real market scenario. We have created the environment; we have devices with pre-loaded data, and they are the top-most devices that we'll be using. But that's not enough, right? The major question that comes up here is: what do you benchmark your application on? What are the actions, the activities, for which you want to check whether your app is performing well or not? Can I get some ideas? What would you like to check in your application? "If it is working, I'm good to ship," or at least that this basic functionality is working.

That's a very good point. App launch. Anything else? I'm sorry? Yes? That is also a very good point. Of course, if you're making constant calls in the app, some data is getting loaded dynamically. Yes, so that's app loading, and then specific screen loading times you want to measure. Putting your app in the background: rendering of data once you bring your app back to the foreground, right? Background and foreground activities. Exactly. And for us, it's a messaging application, so you'd say chat thread messaging, right? That is one parameter that I would like to shift to the performance side, because the user does not actually perceive it, but we will still be covering it at a later stage. In our case, we work both offline and online, so that's less critical for us. Definitely, if you have some kind of, yes, sure. All right, so we've got a bunch of ideas, and I think that's a rock-solid set. So let's jump into the demo.

Let me show you the application that we have developed. It's hosted on our local machine, of course. So this is what the UI looks like; basically these are the actions we are currently benchmarking our application on. Of course, like we discussed, this is not enough; there's a long way to go from here. Let me open this dropdown. So, like we talked about app launch time: there are two major aspects, force kill and force stop, and we measure app launch time after both of these. Then the next thing we talked about, the core of the application itself, for us that's chat thread opening. That comes next. Then there was seamless dynamic data loading: that is chat thread loading and chat thread scrolling. You don't want scrolling to lag, or any kind of animations to lag. Then we talked about external libraries: when you try to start a new chat, all the contacts on your device are imported and listed in the application itself, so that is one screen which should open seamlessly and instantly. So let me just select any particular action. We have run this on a bunch of application versions; we do it periodically, so I have a list of APKs. Let me just...

One thing I forgot to mention initially: a lot of what we are talking about is Android specific. We're not going to discuss iOS-specific stuff here, but this should give you a pretty good idea of how we are doing it on Android.

So of course, when we run this application, the benchmarking suite, we get all the logs. Now, for the customer insights team or an analytics member, it's not very user friendly to go through raw logs, right? They need it in a formatted manner; it makes their life easier. So we dump all the data into a graph. It's just a basic comparison. There are two points here that I'd like to mention. First: usually you have a set of actions; you perform them, take the reading, parse it, and get the data. But that's not enough. We are simulating a real user's device, and sometimes there might be multiple applications in the background; because of that your phone might be lagging, and you might see the impact of that on your foreground application, which is Hike messenger. So one iteration of the reading is not enough; you might end up with outliers, which can cause a panic. It's always better to take five to ten iterations and take an average of those. Despite that, you might still end up with multiple outliers at the same time, and the only way to overcome that is to run the suite again and get good data. A sketch of this measure-and-average loop for app launch follows below.
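As a rough illustration of that loop (not Hike's actual harness), here is a small Python sketch that cold-launches an app several times over adb and takes an outlier-resistant aggregate. The package and activity names are placeholders.

```python
# Sketch of cold-launch timing over several iterations. Package and
# activity names below are hypothetical placeholders.
import re
import statistics
import subprocess

PKG = "com.example.messenger"          # hypothetical package name
ACTIVITY = f"{PKG}/.MainActivity"      # hypothetical launcher activity

def launch_time_ms() -> int:
    # Force-stop first so the next start is a true cold launch.
    subprocess.run(["adb", "shell", "am", "force-stop", PKG], check=True)
    out = subprocess.run(
        ["adb", "shell", "am", "start", "-W", "-n", ACTIVITY],
        capture_output=True, text=True, check=True,
    ).stdout
    # `am start -W` reports e.g. "TotalTime: 345" (milliseconds).
    return int(re.search(r"TotalTime:\s+(\d+)", out).group(1))

samples = [launch_time_ms() for _ in range(10)]
print("median launch:", statistics.median(samples), "ms")
```

Taking the median rather than the mean is one simple way to keep a single outlier iteration from screwing up the aggregate.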
So here you can see three particular application versions, and there's a jump. I selected this one specifically because this data has the outlier in it. We were getting uniform data, and this is where we realized we should take multiple iterations, because we had one very high outlier which was screwing up the overall average. So this is pretty much the application. Any questions around this? Sure. So for benchmarking, we are using the Wi-Fi speed as is. Right now I'm only concerned with the activities loading; I'm not concerned with whether they're interacting properly with the network or not. That test gets pushed a little bit lower down; we'll come to that part as well, when we start playing around with the network and other aspects of performance testing. Here I'm more concerned with whether my application is working fine as a normal application, without considering other environments.

As for generating the load: we talked about user profiling and device profiling, right? Take the numbers we showed, the 99th, 85th, 80th, 50th percentiles, and pick any one bracket, say the 80th percentile. I have the numbers. We create backup files for those, and we have the functionality of restoring from backup when you sign up: if you reset your application and then sign in again, we restore the backup for you so that your chat messages and all your data in the application don't go away. So we perform a fresh sign-up with the backup file restored. Now I have the user data and the contacts, the device data, as we discussed, is simulated, and so I have the environment set up. Then I run my test suite on it.

So, for example, take chat thread opening. I need ten iterations, so my UI test will perform the action ten times, because that's how a user works, and in the background my listener thread will be running, watching for the log lines. It collects those log lines; then my parser goes through them, extracts the values, pushes them to the DB or wherever you want to store them, and then sends them to the UI in a formatted manner.

Is this the CI system which actually kicks these off? No, this is just the visualization part; it's rendering data that's already been collected. Everything else, we're going to show how you can do. Here you're not kicking off the build; there's another part from where it actually gets kicked off. Any other questions? All right. That was the demo.
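To make that listener-thread-plus-parser pipeline concrete, here is a minimal sketch. It assumes a hypothetical log marker format; Hike's actual log tags and storage are not shown in the talk.

```python
# Minimal sketch: tail logcat while the UI test runs, pick out timing
# markers, and aggregate. The "BENCH: action=NNNms" format is invented.
import re
import statistics
import subprocess

MARKER = re.compile(r"BENCH:\s*(\w+)=(\d+)ms")  # hypothetical marker

def collect(n_lines: int = 2000) -> dict:
    readings = {}
    proc = subprocess.Popen(["adb", "logcat"],
                            stdout=subprocess.PIPE, text=True)
    try:
        for _ in range(n_lines):
            line = proc.stdout.readline()
            m = MARKER.search(line)
            if m:
                readings.setdefault(m.group(1), []).append(int(m.group(2)))
    finally:
        proc.kill()
    # Average each action's samples before pushing to the DB / UI layer.
    return {k: statistics.mean(v) for k, v in readings.items()}
```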
All right. So we talked about benchmarking, but as you saw, a lot of questions came up around the network as well. That was completely from the app's point of view. There are other things in the device itself, especially on Android, an OS which lets you play around; there are a lot of parameters you want to take care of. For example, the network is one of those. Any other examples you can think of that we should consider? CPU usage, memory usage. Right. What we like to call, at Hike, the four pillars of benchmarking: memory, battery, CPU, and network.

If I do not look at these four aspects, I might end up somewhere where my app is working fine on the UI side, on the benchmarking side, but on the performance side it's not that optimized. Let me share an incident that happened with us, quite a few releases back. We saw that our app was getting killed in the background. It was an internal release, of course; we were still in the development phase. We didn't know what was causing the app to get killed in the background, so we dug deeper. We found that we had integrated a new feature then, regional keyboards: we were starting to support ten regional languages. Now, when you go into the chat thread and open the keyboard, and if you have already selected a regional language, the custom keyboard comes up. We found that whenever that was getting loaded, the app was getting killed. Not a good sign; we couldn't ship at that moment. So we decided to dig deeper, and we found that the external library we were using for our custom keyboard was internally using a native Android library, which was shooting up the SO memory-map values. I'll talk about all these terms later. So this was one particular aspect of memory which was shooting up, and that was causing my app to be killed in the background.

Why does this happen? Android has made it very clear that if an app exceeds a particular threshold value of memory consumption, it will kill the application. They do not assure you that if you remain below the threshold they will not kill the app; they might still kill it. But at least you have to be optimized in that respect, because if you cross the threshold value, your app is definitely getting killed. So we dug deeper into that, and that is where we realized that memory is probably the most important aspect among these four.

So let me get a little deeper into memory. In layman's terms, at a very basic level, there are two major parts of memory that Android plays around with. There's the private memory allocated to your application, private dirty, or USS, the unique set size. That part of memory is used only by your application, and it's released only if you force-kill or force-stop your application; it might also be released if you push the app to the background, depending on how your application is making use of the memory. The next part is shared dirty, which feeds into PSS, the proportional set size. This is a chunk of memory which is shared by a lot of processes; one of those processes is your application, and let's say a couple of others are using it too. Now, if we simply add this private dirty and shared dirty together and call that the memory consumed by your application, that's not right, because a lot of processes are sharing that memory. So what Android does is divide the shared part equally among the number of processes sharing it; that share, plus the private dirty allocated to you, gives you the total memory attributed to your app. A back-of-envelope version of that arithmetic is sketched below.
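Here is that attribution as a tiny worked example; all the numbers are invented for illustration.

```python
# Back-of-envelope sketch of proportional attribution (numbers invented):
# shared pages are split evenly across the processes mapping them, then
# added to the app's private dirty memory.
private_dirty_kb = 40_000        # pages only this app uses
shared_dirty_kb = 12_000         # pages mapped by several processes
n_sharing_processes = 3          # our app plus two others

pss_share_kb = shared_dirty_kb / n_sharing_processes
total_attributed_kb = private_dirty_kb + pss_share_kb
print(total_attributed_kb)       # 44000.0 KB charged to this app
```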
Apart from that, Android does not have the concept of swapping memory at all; it plays around with paging and memory mapping. Memory mapping in turn has a lot of different branches. I'll just name a few: there's SO, JAR, DEX, ART, OAT. And there are a lot of aspects; if you start looking at each one of those, it's a day-long conversation. But the most important ones include, like I mentioned, the SO native map.

So what do these contain? SO is native code, the native elements, your code itself; JAR is any external libraries that you're using; DEX is basically your Dalvik executable. Things like that. Now, you as a developer cannot control each and every aspect; you can bring them down as a whole. What you can really control is your private dirty and your heap memory.

So how do you measure these values? A lot of companies out there provide benchmarking and performance services, and I'm pretty sure the base on which all these readings are formed is ADB. The Android Debug Bridge gives us a set of operations you can perform to get readings directly from the kernel level. We have one for battery, dumpsys battery. We have dumpsys cpuinfo. We have dumpsys meminfo along with the package name of your application. And then there is this particular file, /proc/net/xt_qtaguid/stats, which digs into the networking files and tells you how many packets you received and sent, and it gives you a very clear bifurcation. Let me quickly shift to the terminal and show you what kind of readings these give; that will probably give you a better picture. Okay, quick time check, three minutes. All right, so we probably won't dig into everything; I'll show one or a couple of them.

I've got a phone connected right now, a physical device. One quick point I'd like to add here: if the phone is connected over a cable, the question that gets raised is, how do we do battery testing? There are two ways of going about it. You can go directly into the kernel files and fetch the values from there, how much current is flowing and so on, but I won't go into the details; it's very complicated to do that. The other way is to connect your device wirelessly through ADB and then trigger your test suite on that. So for battery, the command is adb shell dumpsys battery. All right, as you can see here, I have only one device connected, but it's showing two. One is the actual device, which shows up when you connect your device over the cable, with the device ID and everything. The other is for running the battery test cases wirelessly: we've connected it through the TCP/IP protocol with the IP address of the device. So what happens is, I disconnect the cable, I run the adb devices command again, and that's the only device connected; I have the device connected without the cable. Now if I run the battery command... all right, this is why we pray to the demo gods. I'll do one thing: since we have a time crunch, I'll just directly connect the device.

So this is what you get. What we are concerned with is the level: when you scroll down and look at the battery value, this is exactly what you see as a user, so that is all I'm concerned with. Then, quickly jumping to dumpsys cpuinfo: you see a list of all the services that are using CPU right now. Right now no operation is being performed in Hike, so the package is not there. Then quickly jumping into dumpsys meminfo, because that is a very important aspect, providing the package name. A lot of data is thrown at you when you do that. The most important pieces, which we talked about, are the native heap, the Dalvik heap, the PSS total, and the private dirty; you can see them there. That is what we should be concerned about. And when you come down, there are the SO map, JAR, APK, TTF, DEX values, a lot of values. You need to delve deeper into the concept of each one of those and see how they impact your application as a whole.
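To tie those four reads together, here is a minimal sketch of how a harness might shell out to them. The package name is a placeholder, and note that /proc/net/xt_qtaguid/stats exists only on older Android versions (it was removed around Android 9), so treat its availability as an assumption to verify on your target devices.

```python
# Sketch of the raw ADB reads behind the four pillars.
import subprocess

PKG = "com.example.messenger"  # hypothetical package name

def adb(*args: str) -> str:
    return subprocess.run(["adb", *args], capture_output=True,
                          text=True, check=True).stdout

# Battery over wireless ADB so the charger doesn't skew the reading:
#   adb tcpip 5555 && adb connect <device-ip>:5555   (one-time setup)
battery = adb("shell", "dumpsys", "battery")        # look for "level: NN"
cpu = adb("shell", "dumpsys", "cpuinfo")            # per-process CPU usage
mem = adb("shell", "dumpsys", "meminfo", PKG)       # heaps, PSS, private dirty
net = adb("shell", "cat", "/proc/net/xt_qtaguid/stats")  # rx/tx per UID
```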
All right, so those are basically the commands, and that's how we run our performance suite.

Now, optimization. When we started with our performance suite, there were four aspects to check across so many templates, and we ended up with a run time of somewhere around 13 hours. That's massive; if I have to run it on an urgent basis, it's a problem. What we were doing then was: first run CPU, then battery, then memory, and finally network; get all the data and then send out a report. And we were taking around 20 iterations. We observed that whether we took five, ten, or twenty iterations, we were getting almost the same value, so why not reduce the number of iterations? That brought us down to somewhere around 10 hours. Still a big number.

So we thought, all right, I can do one thing. I have to connect the wire for CPU, memory, and network, and go wireless for battery. Why not run those three in parallel? I have the same set of templates from the analytics team, like we talked about with user profiling; I know that on a daily basis a user sends out at least 50 text messages, so that's a test case for me. But I have to check CPU, battery, memory, and network for all of those. So let's perform the template once and get the values for all three at once. That brought us down to somewhere around seven hours. From there we said, why not run everything wirelessly? Why connect the wire at all? Run all four in one go; the templates remain the same, they remain constant. Running everything at once brought us down to somewhere around 3.5 to 4 hours. That was a big dip. And this is actually a lot more realistic, because people don't keep their phones plugged into a cable while messaging; they're typically disconnected. So while it improved the performance of the test suite itself, it also gave us a much more realistic read of how users would actually experience the performance. A sketch of this sample-everything-in-one-run idea follows below.
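Here is a rough shape of that one-run-all-pillars sampling, assuming the same placeholder commands as before and a stubbed UI template driver; this is a shape, not a drop-in harness.

```python
# Sketch: sample each metric on its own thread while one UI template runs.
import subprocess
import threading
import time

PKG = "com.example.messenger"  # hypothetical package name
COMMANDS = {
    "battery": ["adb", "shell", "dumpsys", "battery"],
    "cpu": ["adb", "shell", "dumpsys", "cpuinfo"],
    "mem": ["adb", "shell", "dumpsys", "meminfo", PKG],
}

def sample(cmd, out, stop, interval_s=5.0):
    # Poll one metric until the template finishes.
    while not stop.is_set():
        out.append(subprocess.run(cmd, capture_output=True, text=True).stdout)
        stop.wait(interval_s)

def run_ui_template():
    # Placeholder for the UI automation driving e.g. the 50-messages template.
    time.sleep(30)

stop = threading.Event()
results = {name: [] for name in COMMANDS}
threads = [threading.Thread(target=sample, args=(cmd, results[name], stop))
           for name, cmd in COMMANDS.items()]
for t in threads:
    t.start()
run_ui_template()
stop.set()
for t in threads:
    t.join()
```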
Of course, this is not enough; there's a lot more to do. So, talking about future enhancements. Actually, let's quickly pause before we jump into that. Any questions so far? I think we rushed through a bunch of things. Yeah.

So, one thing I did not actually show you in the application: we have been running it on a bunch of application versions, APK versions. Let's say I'm on version one today. I run the suite and I see I'm pretty good to go; I've optimized, it's using very little CPU, very little memory. That becomes my benchmark value, my threshold, my baseline. I build version two tomorrow, I get all the data, and I see what the jump is. If there is a dip, good enough; I should aim for that in every activity of my application. If there is a jump, why is it there? Then we delve deeper into the code: what exact activity caused it? Because we benchmark activity-wise and we perform template-wise. Take the one template we talked about, 50 text messages being sent out: I know that template is causing the shoot-up. We can always run Traceview on it through DDMS, get the exact spike, and then bifurcate everything on a function basis from the code itself. That of course requires a bit of debugging, because you need to verify where exactly your memory is going high. And finally, we run these commands to get the values and see what is shooting up.

So, to your question: there's no golden standard saying you should meet this specific number. It's basically a relative scale, which is where benchmarking becomes important. You keep benchmarking, and any time there's a shoot-up, you know something's going wrong. There are certain thresholds that are set, and the readings should not exceed them; if the thresholds are exceeded, the test fails.
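As a minimal sketch of such a relative-baseline gate: the numbers and the 10% tolerance below are invented examples, not Hike's actual thresholds.

```python
# Sketch: compare each action's reading against the last accepted build
# and fail on a configurable jump. All values are illustrative.
BASELINE = {"app_launch_ms": 420, "chat_open_ms": 180}   # from version N
CURRENT = {"app_launch_ms": 470, "chat_open_ms": 175}    # from version N+1
TOLERANCE = 0.10                                          # allow 10% growth

def regressions(baseline: dict, current: dict, tol: float) -> list:
    return [k for k, base in baseline.items()
            if current.get(k, 0) > base * (1 + tol)]

failed = regressions(BASELINE, CURRENT, TOLERANCE)
if failed:
    raise SystemExit(f"benchmark regressions: {failed}")  # block the release
```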
Someone had a question over there, yeah. [Audience: you made certain assumptions; if I have a 100-million user base, do I need to test against some corresponding number of devices?] It didn't really affect us. If tomorrow we had 500 million users, that does not mean I will increase the device count. What we are talking about here is mostly client-side performance. There's obviously server-side performance testing that needs to be done, and the server side is where the number of users will impact you. On the client side, whether I have 100 million or 500 million users, it doesn't impact you, provided you're handling concurrency and all of that on the server side. So decouple the two. On the client side, you're mostly focused on what we did with profiling: we figured out the average number of chats a typical user would have and bucketed users into different percentiles, because we want to make sure people across the different percentiles have a seamless experience. If we optimized only for the heaviest bucket and screwed the others, that would not be the right thing to do; you want reasonable performance across all the buckets. Of course, if someone's sending a huge number of messages every single day to lots of people, their cost will be slightly higher than someone in the 80th percentile. Those are the things we measure on the client side. In my opinion, it doesn't really matter how large your overall user base is.

Talking about the number of devices we are actually using: at this point, we are testing on three devices. This is something we'll come back to in the future enhancements; we want to increase that number. But for now, we are benchmarking on three devices, our top three most-used devices. Okay, did I answer your question? We are lucky we are only in India. Like we said, we picked the top three devices used on Hike. You need to start somewhere, right? Again, there's a point in the future enhancements where we want to improve this. But what we are seeing is that with the top three, we are able to get 80 percent of the feedback, and 80 percent of the feedback is good enough. There is obviously room to improve and get the remaining 20 percent, but you have to start somewhere; we started with one device, we've gone to three, and gradually we'll grow to more.

We did a talk yesterday where we talked about APM and Dexter and the other tooling we're using. Yeah, it's APM. Let's say your CPU usage shot up and you want to analyze why it shot up; that's what I think KP was explaining about. Go ahead.

So, you know that this particular activity is creating a higher value in your memory consumption. You go to your code, you start a Traceview on that activity's init methods, and then you go function by function. It gives a very detailed analysis of each function: how much memory it consumes, how much time it takes to initialize. So you can benchmark on those values and then analyze exactly which function is causing the problem. In fact, earlier we talked about one example where we saw a jump of 130 milliseconds. That's where we actually dug in, and we realized that deep down in the call chain somewhere, a developer had added a new call which accidentally did an I/O operation. When you do an I/O operation on the UI thread, it's obviously going to introduce some kind of lag, and that lag might vary significantly depending on the device type. But you can catch even things like this, a small millisecond-level jump, and then analyze them. Which is where the benchmarking is important.

So, we have built our own tool. There's a time crunch; once we finish, we can sync up offline. But essentially we are building on top of what is out there; we built our own stuff. You will keep hearing this: the not-invented-here syndrome is very big at Hike. We try to do a lot of stuff on our own. Not a great thing, but on the other side it's a nice thing, for actually pushing the boundaries.

Okay, then let's go through the future enhancements; hold on to the questions, we are running short of time. So let's quickly jump into future enhancements. The first one: right now we trigger this weekly, and we want to make it trigger with every check-in as part of our CI. That's one enhancement we are working on right now. We talked about device coverage improvement: right now we are on three devices, and we want to improve the device coverage. In yesterday's talk we mentioned that we have set up Dexter, which can support up to 80 devices, so the idea is to have this suite run on those 80 devices. That's a future direction. As of now, we only do APK size benchmarking just before making a release, and it's a kind of manual check: someone runs an APK size check and verifies it. But on top of the four pillars, we are planning to add APK size as something that gets checked regularly, because we are very sensitive about not increasing the APK size, given that our users are very data sensitive. So we want to make sure that becomes part of our whole benchmark; a sketch of such a size gate follows below.

Along with these: we run through four commands today, and that's not enough. Android gives you the freedom to dig into the kernel files themselves. There's another command, procstats, which alongside meminfo gives a different kind of data. As you compare the two, you might find different numbers for the same aspect; we're not yet sure what the difference is, so whichever you look into, you have to take it with a grain of salt. So those are the future enhancements.
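Here is a minimal sketch of what folding APK size into the same gate could look like; the paths and the 1% growth budget are illustrative assumptions.

```python
# Sketch: fail the pipeline if the candidate APK grows past a budget.
import os

BASELINE_APK = "builds/hike-previous.apk"   # hypothetical paths
CANDIDATE_APK = "builds/hike-candidate.apk"
GROWTH_BUDGET = 0.01                        # allow at most 1% growth

old = os.path.getsize(BASELINE_APK)
new = os.path.getsize(CANDIDATE_APK)
if new > old * (1 + GROWTH_BUDGET):
    raise SystemExit(f"APK grew {new - old} bytes, over budget")
```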
And so, to finally summarize what we talked about: analytics-driven benchmarking, simulating the real market scenario in our own test lab for performance and benchmarking. Then the four major pillars, CPU, memory, battery, and network, with a focus mostly on memory, because that is probably the most important aspect, and a deeper analysis into memory. And then the parallelization of test suites: going completely wireless, running everything at once, and getting real data the way a user would actually end up using your application. So that's a quick summary. We have three more minutes for questions. Yeah, awesome.

Not true in our case. I'll just repeat his question for the benefit of everyone. He's asking: what about server-side performance? Can you throw some more light on server-side performance, because at the end of the day, if you only measured the client side, it would be massively impacted by server-side performance. In our case, that's not 100% true, because we also have peer-to-peer, offline messaging, which doesn't hit our server at all. For a lot of our use cases, I can transfer a one GB file to you and our server will never even know about it; it goes directly over Wi-Fi Direct. On the server side, there's obviously a really impressive team at Hike which does a lot of performance testing. Here we are mostly focusing on the client side of things and how we do performance testing there, because for us, a lot of use cases exist even without the server. So it's not true in our case; in other cases it might be, where the server is an integral part of every use case.

Okay, one last question, then let's talk offline. All right, yeah, that's a good point. Just repeating what he said: on the performance side of things, there are 2G networks, 3G, Wi-Fi, and now 4G; there are a lot of different kinds of network, and performance can be impacted differently by each of them. So how are we doing performance testing on that? KP talked about how right now we are doing most of our testing on Wi-Fi, so that, I would say, is an area of improvement we need to add. We do do some, go ahead, sorry.

So, adding to that: we are actually testing on these three networks as well, 2G, 3G, and 4G. Normally, if you're sending any data over to a server, the amount and the chunks of data remain the same regardless of the network. But we have optimized our file transfer and message transfer to such an extent that when you change the network, the chunks that go out change accordingly. I won't delve into the details, but at a very abstract level: we always have log lines; we switch the network, run the test case, and check whether the chunks are going out the way they are supposed to for the network we have selected. If I hit that particular chunk value, it assures me that I'm meeting whatever time threshold I have for it. If the chunks are not as expected, it means something has regressed, and I need to dig deeper into my code and see exactly where it's failing. A rough sketch of that check follows below.
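Here is a rough sketch of that chunk check, assuming a hypothetical log format and invented per-network chunk sizes; the actual chunking scheme is not disclosed in the talk.

```python
# Sketch: verify logged chunk sizes against the expectation for the
# currently selected network type. Log format and values are assumptions.
import re

EXPECTED_CHUNK_KB = {"2g": 16, "3g": 64, "4g": 256, "wifi": 512}  # invented

def chunks_ok(log_text: str, network: str) -> bool:
    sizes = [int(m) for m in re.findall(r"chunk_kb=(\d+)", log_text)]
    return bool(sizes) and all(s == EXPECTED_CHUNK_KB[network] for s in sizes)

sample_log = "upload start chunk_kb=64 ... chunk_kb=64 done"
assert chunks_ok(sample_log, "3g")  # chunks match the 3G expectation
```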
So, the reason I said this is an area of future improvement is that there's a lot of interesting work in how we handle different network types, and those kinds of tests we're currently trying to push to lower layers of our test stack rather than keeping them at this level. Right now we have a few tests that we run across multiple networks, but ideally we want to push those down to lower layers, because that's where a lot of the algorithmic and other interesting logic lives that determines how network-specific use cases behave, and right now that is not fully automated. So that's an area of improvement, in my opinion. All right, thank you everyone. I hope you had something interesting to take away from this session. Thanks, guys.