So I have planned 40 minutes of ice breaker, 3 minutes of talk, and 2 minutes of Q&A. That should balance things out well and make sure we wrap up the day on a high note. So thanks for coming. There were a lot of great options today, and I hope you're also finding value from the rest of the conference. But since you are here, it's good to start with a reality check and understand why you are here; it's good to set the right expectations. So can you say why you are in this particular talk? What are you looking to get out of it? Anyone? Yeah? Okay. Other views? Okay.

So just to summarize what Sudipta said: what would a good testing strategy for native apps, Android and iOS, look like, given that it's different from web? Yes, we'll definitely be talking about that. Any other expectations? Tools and techniques for doing testing. So we won't really be talking about the specifics of tools; we'll focus more on the strategy side, and I'll keep mentioning different tools along the way that can help you. This is not going to be a tool-focused conversation; it's more about the strategic planning side of things, about how you can approach this. But yes, we'll cover some aspects of tooling. Anything else? You're making it easy for me, which works in my favor: if there are no other expectations, I can push my own agenda. Okay, cool. Let's get started.

By all means, if any questions come up along the way, please do stop me and let's talk about them. If something is a little off track from the core content, I'll ask you to save it for a follow-up conversation. But don't hold on to your questions as they come up, because everything keeps building along the way anyway.

Quick 10-second introduction about myself. I'm Anand Bagmar. I've been in the software quality space for about 20 years now. I've worked in various product and services organizations, across different domains, and across the globe in different countries as well. So I've had the opportunity to make a lot of mistakes and learn from them; hopefully I've done some good out of that too. You can follow me on Twitter, where my handle is BagmarAnand, and it would be great to keep in contact and keep the conversation going.

Okay, so let's focus on why we are really here: why is there a focus on testing and release strategy from a native app perspective? There was already a question about what the difference really is, so let's try to understand this more. These first few minutes of the conversation could be 5 or 10 minutes, depending on how interesting you make it, because I want you to help drive this particular discussion. Then I can share what differences there might be on top of that, from my experience, which will hopefully add some value.

So let's quickly talk about the similarities between web and native app testing. Anyone? Both of them might be talking to the same backend services. Yes, that's one. Sorry? Similar kind of UI. Maybe, maybe not, because native apps have smaller form factors and web apps are different; mobile web might be similar. Let me rephrase what you said: the functionality of the product is similar, but the way you interact with that functionality depends on web versus mobile web versus native. So that might be one aspect of difference. Anything else? No?
Okay, just try it. What similarities are there? Sorry? Yeah. Hold on to the differences; that's the next slide. Have you seen the slides? No? So hold on to that thought; we'll be talking about differences as well. Let's talk about the similarities first. What else would be similar? Yeah. So a large part will always remain similar: functionality, and aspects of NFRs potentially, right? Performance, security, the browser-OS combinations, or in this case the device-OS combinations. That complexity is again similar. Yes, you'll have to address it in slightly different ways, in how you actually interact with it, but those are similar challenges on both sides. Okay? Anything else? From a testing perspective, a planning perspective, what else would be similar? User experience, right? It's not just functionality; user experience testing and validation is also going to be similar. You have to do that in all cases: web, mobile web, and native apps. Okay? So at a high level, I think we understand what the similarities really are.

What would be the differences? I know you had a head start, so let's start with what you think. The way you do your security testing would be different. Can you explain more on that? Okay, so data security from a native app perspective, because the data is on your device, versus web, where the content mostly lives on the server side. So you need different strategies for that. Okay, that's one. What else? The native features the app uses, and how, specifically from a performance perspective: that might be a difference. But how would that differ from a browser? A browser is in a way a light client, and it can also have a lot of functionality on the client side. So is that really a difference, or is it similar? Maybe it's a gray area; it can go either way depending on the context. Orientation, that becomes a big difference for sure. On the web you're looking at a specific form; for mobile web, maybe, depending on how responsive the product is designed to be, that might differ. Okay?

The other difference is the devices themselves. The capabilities of the device might govern a lot of how the application is going to function and what types of interactions will happen. In the case of web or mobile web, it is just a browser; everything happens inside a browser, and other than screen size and resolution, the dependency on the actual device-OS combination might be very minimal. So that is one aspect of differentiation between web and native apps: we have to consider much more from a device-specific perspective, whether that's resolutions, form factors, or the hardware capabilities of the devices themselves, what is supported or not supported, memory, CPU, and other aspects.

There are also the OS dependencies. From a web or mobile web perspective, we are really talking about a browser, and maybe the browser vendor provides different flavors of the browser for the different OSes it needs to support. The majority of the implementation will be similar; the packaging and certain OS-based dependencies might differ, but other than that, it is not really a problem from a web perspective. However, from a native app perspective, the OS can be very fragmented.
Samsung takes the stock OS and customizes it heavily, and they have different levels of customization, maybe for various reasons, for different types of devices. So take three different Samsung devices, low-, mid-range, and high-end: the OS capabilities might differ even though they are at the same OS level, in terms of what pre-installed software may or may not be there. So there are a lot of different challenges. Likewise, each hardware manufacturer might use either the stock OS or a modified version of it, and the hardware dependencies create many more complexities. So those are the differences between web app and native app testing at a high level.

Now, that was from a testing perspective: what things to keep in mind. What about the release approach itself? I have finished my development and testing; I have done whatever due diligence is required in terms of "am I building the right product?" and "is it built right?". But the release itself is a different aspect. So what approaches or other things are similar between web and native app releases? Push release? What do you mean by that? Is that similar or different? It's similar. I actually think that might be a little different, but let's hold on to that thought and explore it further. In some cases, yes, it's a matter of doing a build and then doing a deployment, but the deployment strategy itself is very different.

What else would be similar? The complete pipeline is going to look very similar across all the different platforms: the CI pipeline and all its steps, even if it's not completely automated. That will follow very similar lines for web and mobile. What do you need to validate before you say it's ready? All the checks, whether static checks or your test cases, your UI test cases: everything is going to be very similar. Absolutely. I would say similar, not the same, because of course the stacks differ, but you're spot on. So the whole dev-QA lifecycle, at a high level, will be very similar for web versus apps.

There are other similar aspects when you think about what you do from a release perspective. Regardless of how I have released, I will run a sanity check on the app after it is released, whether it's a web or a native app: after the release, is a user able to get it and use it correctly? You can still have the same kind of experimentation you would want in your product: A/B testing, or any other form of validation or experiment you want to run, will follow similar lines for web and native. It's not that I can do this only on web, or only on native; there will probably be certain restrictions, but the thought process is the same. Configuration is required, right? Managing environments, pre-prod versus prod, applies to native apps as well: I've built the app and I'm pointing it at an internal set of deployments, or at the data the app needs to be functional and testable. We do the same for web. So from a release perspective, these aspects are again going to follow similar lines. Okay?

But now, what are the differences when it comes to doing the release? Distribution. And that actually comes back to your point about how it's pushed out, right?
And that, in my opinion, is a very, very big difference, because on the web, or for APIs or server-side changes, even if you have messed up badly for whatever reason, you can immediately apply a hot fix and push it out. The next time the user comes to your product, they get the updated fix. Of course there are certain criteria, caching and those kinds of things, but other than that, the user will be able to get the latest functionality on demand, whenever you want. However, for native apps, you have to create a build. You can automate the whole process as well, but you have to create a build and upload it to a server, the Play Store or App Store, assuming you're distributing through them. If you have your own distribution mechanism, maybe in the case of B2B applications, you don't need to be in the Play Store or App Store: users come to your website, or they know how to get your app, and they'll be able to use it. But you still have to think through how you want it distributed. And that is very different from the web, which means we have to understand it in more detail; otherwise we might miss a lot of cases, and a lot of problems that can happen here. Okay?

From the app's perspective you could say, well, it doesn't really matter: like the web, we can ensure users are always on the latest version. But how often do you get a pop-up on an app saying "there's an update" and actually update it? Call me old-fashioned, but the first thing I do when I get a new phone, or when I reset my phone, is go to my Play Store settings and say: do not auto-update the apps. I want to be notified, or better yet, I will go and check whenever I want whether my apps have updates and whether I want them or not. Because I know that for some types of apps, the main "updates" are about tracking you better or pushing more ads at you. News apps, for example. For news content, okay, I don't get the better user experience, but I am able to read; I don't need better tracking of what I am reading. So it's in my control. For some other types of apps, I would just go and update directly.

But the point here is: as the producer of the app, you can say, I want to force my users to upgrade, and if they don't upgrade, they will not be able to use the app. That's a product strategy; depending on the context of the product, you may or may not want to take it. In most cases, what you would have is soft upgrade notifications: "There's an update. Do you want to install it?" And maybe, if the user is way behind in the version history, you might say: no, you have to update now, otherwise you cannot use the app, because some critical change has happened, maybe on the backend side, that makes the old app incompatible. The choice is really in the user's hands at that point, though: do I want to upgrade, or do I dislike this pushy attitude enough to just uninstall the app and find some alternative? And that makes a very big difference in the release strategy you need to think about for your product: how are you going to test it, make sure everything is fine, and push that update out to users? If they use it and it is fine, great. But if they use it and there is a problem, there is a risk that you lose that user forever. "This app crashed when I tried to do a basic thing. It's a crappy app. Let me just uninstall it." How many times would you go back, check for an update, and try again? It depends on your mood at that moment, right?
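To make that forced-versus-soft upgrade decision concrete, here is a minimal sketch of what such a gate might look like, assuming the app compares its installed build number against a version policy fetched from some remote config service. All the names and version numbers below are hypothetical illustrations, not details from any product discussed here.

```python
# Minimal sketch of an upgrade gate. Assumes the client fetches a version
# policy from a remote config service; all names/numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class VersionPolicy:
    minimum_supported: int   # below this, force the upgrade
    latest_available: int    # below this, softly suggest the upgrade

def upgrade_action(installed_version: int, policy: VersionPolicy) -> str:
    """Decide what the app should do on launch."""
    if installed_version < policy.minimum_supported:
        # e.g. a backend change has made older clients incompatible
        return "FORCE_UPGRADE"   # block usage until updated
    if installed_version < policy.latest_available:
        return "SOFT_PROMPT"     # dismissible "update available" dialog
    return "NO_ACTION"

# Example: policy says anything below build 120 is incompatible.
policy = VersionPolicy(minimum_supported=120, latest_available=150)
assert upgrade_action(119, policy) == "FORCE_UPGRADE"
assert upgrade_action(130, policy) == "SOFT_PROMPT"
assert upgrade_action(150, policy) == "NO_ACTION"
```

The design point is that the policy lives server-side, so the product team can tighten or relax the gate without shipping a new build.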
There's also another aspect of app releases that makes things a little easier, and that's the beta release approach. In both the App Store and the Play Store, you have options for doing beta releases, and there are rollout strategies that can be applied as well. In some cases you have more control; in other cases you don't have as much. We'll talk about that in more detail later, but that is another difference. On the web, you can either release to all your servers immediately, or you can use blue-green style deployments and manage the risk in a better way, because your cluster might be big and you don't want a very big impact; you want to see how the release is going. If I have 10 servers to update, let me update one, validate on that one, and if it's working fine, update all the rest. Various strategies are possible.

The big difference I want to highlight here, again, is that from a CI/CD perspective, the web makes it much easier to keep making those small, incremental, one-line-story changes and release them to production, whether web or APIs. For apps, you have to think about the app-version fragmentation that grows with every release you do, and how you are going to control and manage it. So there are a lot of things to keep in mind here.

There's also another very interesting concept that has come out. I have not really used it much, but there is now a feature in the Play Store called Instant Apps. You probably have to build your product in a slightly different way, but what it means is an almost web-like experience with more native interactions possible: you don't have to install an APK, yet you can interact with the product. That's, again, a very interesting approach to investigate for your product, to see whether it makes sense and adds value. It's especially valuable if your APK is large and you want users to quickly see what the product is before deciding whether to install it.

When it comes to the CI/CD side, as you correctly said, the stages in your pipelines really remain the same, as do the tool sets required; there will be different quirks in the pipeline to handle web versus native, but it all looks the same. You can still do continuous integration and continuous deployment to your internal environments and keep the last mile, deployment to production, as a single click. Both are very feasible. Of course, the effort required on the native side will be somewhat more, for the differences we've spoken about.

So far, we have looked at similarities and differences between web and native apps from a testing and release perspective. Let's see how it comes together in an example and how we can apply it. So let's take an example: a random screenshot I found on Google that can be reused. I think this is a Netflix screenshot, right?
But the case study I want to walk you through is for this type of product: a media app, a streaming app, essentially. In this case it's a video streaming app, not a music app, so it has slightly different complexity than a pure audio streaming app, okay? Now, we've spoken a lot about these differences and similarities. Here's a question to make sure you are still not sleeping: how would you test and release such an app? Has anyone worked on this type of product before, any dynamic-content type of product? Okay, one hand. Sorry, she's just passing the mic, yeah.

"So in this sort of scenario, you basically release a client that can handle multiple DLC versions, and each DLC version has..." What is a DLC version, sorry? "Downloadable content. So each DLC version has its own episodes, new series or something like that, and you need to develop your client so that it can handle multiple DLC versions. So one update is sent to the client, and the follow-up updates, when a new series or a new video is released, come in the form of DLC. With this approach, the user doesn't need to reinstall or update the client; everything is done on the backend."

Okay, so just to summarize in my own words, and correct me if I got it wrong: what he's talking about is content versioning. There are different types of content, and updates to content: new episodes, sequels of movies, whatever the case may be. As new releases or new versions of content come up, you should be able to push that content to the client side and ensure it is still playable and viewable. That is one form of testing that is required, because the content is dynamic. Of course, the content is not embedded inside your app; the presentation logic is in the app, and the data being served is real-time, whether over mobile data or Wi-Fi. So that is one aspect of how you would test this.

What else would you test in this type of product? Or what do you think the challenges of testing this type of product will be? Please take the mic from me. "I just wanted to understand how to automate all these things..." You answer a question with a question! We'll hold on to that one for now; we'll get to the solutions side. Rahul? "From a UI automation perspective, identifiers, IDs, are the basic mechanism we use. In this kind of product, the IDs themselves are dynamic, so you can't build on that relationship." So the challenge Rahul is raising is that identifying a specific locator for automation might be difficult. I actually don't think that is a blocker, because if your product is structured in the right way, you can still have stable IDs, with the dynamic content exposed as metadata that your automation can query. So it is feasible. It's a challenge, but it is feasible; it needs more cooperation and collaboration with the development team to make it happen.
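As an illustration of what that dev collaboration can buy you, here is a minimal sketch using the Appium Python client. It assumes the development team exposes a stable resource-id on each content tile and mirrors the dynamic title into the element's content description; the package name, ids, and the helper function are all hypothetical.

```python
# Sketch: locating dynamic content via stable identifiers with the Appium
# Python client. Package, ids, and helper below are hypothetical; the point
# is that the locators stay fixed while the content varies.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

def fetch_expected_titles_from_api():
    """Hypothetical helper: query the same backend the app talks to."""
    return ["Episode 1", "Episode 2"]

options = UiAutomator2Options()
options.platform_name = "Android"
options.app_package = "com.example.app"      # hypothetical app under test
options.app_activity = ".MainActivity"

driver = webdriver.Remote("http://localhost:4723", options=options)

# Every tile in the content feed shares one stable resource-id...
tiles = driver.find_elements(AppiumBy.ID, "com.example.app:id/content_tile")

# ...while the dynamic part (the title) is exposed as metadata, here the
# content-description, which the automation queries instead of hard-coding.
titles = [t.get_attribute("content-desc") for t in tiles]

# Assert against the same source of truth the backend serves, not a
# hard-coded string that goes stale when content changes.
assert set(fetch_expected_titles_from_api()).issubset(set(titles))
driver.quit()
```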
Okay, any other thoughts? No? So let me paint some reality over here, which hopefully gives you a sense of more challenges with this type of product. User base: if you take Netflix, how many millions of users is it? Tens or hundreds of millions worldwide? A huge challenge, right? And with that kind of user base spread across regions, what does the content side he spoke about mean? A piece of content may or may not be available in certain regions because of licensing issues. How do you test that? That becomes a very big challenge. It's not just whether content can be played; the license restrictions are a very important aspect. Otherwise Netflix, or any other streaming product, can get sued over it. It's a very big aspect, okay?

Then, the unique device-OS combinations. Now, for this case study I had actually considered a different app, one I was working on. It was supported in just four or five regions in Southeast Asia. But in those four or five regions, there were more than 13,000 unique OS-device combinations in use by the user base; analytics data told us that. It could be an old version, a slightly upgraded version, Android 4.4.4, 4.4.5, whatever those versions might be, but more than 13,000 unique combinations. Make it 13,001 if you add Apple to the list! The number of combinations will differ, but that complexity still exists, okay?

There are certain non-negotiable release criteria for this type of product. Some are the obvious ones: functionality has to work. Functionality could be whether I can browse, whether I can view the content. A downloaded video should play. If it's a freemium product and ads are going to be served, are the ads coming through correctly? These are all part of the functional aspects, okay?

For releases, we want to release frequently. I cannot wait two months to get a new feature out, because my users always want new features; it might be better ways of finding content or anything else in the presentation layer, the experience layer. So we need an app that can be released often. Frequency is one. At the same time, in this case, we had the constraint of blackout periods. Why? In all the regions where our product was supported, it was a freemium product: users can use it for free and get ads, or subscribe for an ad-free experience. Based on patterns we observed from users, which differed per region, we found that in one Southeast Asian country, our largest user base, the users were people constantly on the road: working people. They get their salary towards the end of the month, and after paying the basic bills, because they spend so much time on the road, which is where they use our product most, they renew their monthly subscription when they get their paycheck. Which means we cannot have any disruption in the app's functionality, front-end or back-end, during the window when we'll be collecting money. Such an important criterion, and it often gets missed. And for each region, this blackout window might be a different rolling window. Now, if you overlay these different rolling windows onto your release calendar, how do you make frequent releases? Where can you experiment and say, it's okay if we fail here, we can push another release? A very big challenge.

The other aspect of blackout dates, from a release perspective, is again how the Play Store and App Store work.
You don't upload apps to the Play Store or App Store per region; you don't have different versions of the app for different regions. It is one app that you upload, and you specify in which regions it is available. If it is not available in yours, the Play Store or App Store will say, sorry, this app is not available in your region. So it's one app, and you tell the store which regions it applies to. I cannot say: in country one I'm going to release at this point in time, then upload another app for another country on a different date, or enable it at a different time. It doesn't work like that. That's another big challenge.

Analytics is as important as the core functional features, and in many cases more important. It's okay if my search does not work very well; from the feed, users will probably still be able to play videos. But if analytics does not work, in a freemium product, then I am restricted to whatever Facebook or Google or any other ad integrator I work with tells me about how many ads were served, and accordingly either I pay them or they pay me, based on that transaction. I have no way to correlate, from a financial perspective, what should have been happening. At the same time, if any of the analytics breaks, I have no way of understanding my user behavior. A core metric, like how many videos were played and whether they were played end to end, tells us whether this type of content was valuable for users. If analytics breaks down, I have no way to figure that out. So in this type of product, analytics is as important as, or more important than, your core functionality.

And then there are, of course, the partners and integrations, such as payment gateways, which might be standard. But there can also be deals like: if I have a Vodafone or Airtel prepaid connection and I recharge with a 999 pack, I get three months of the subscription free, associated automatically when I recharge. How do you ensure that all such integrations, from one app, across regions, work fine? So these are the non-negotiable release criteria.

And the tipping point is that these are startups or smaller organizations; Netflix is a completely different ballgame. If any delay or inefficient testing comes out of this, it's a big problem: if any of these non-negotiables breaks, all hell breaks loose. If automation is missing, the product and business stakeholders don't care how much automation was or wasn't done; they care about whether it is done, and they want it done fast. They don't realize that it can be done fast the first time, but the second time it slows down, and the third time it slows down even more. That's where automation is key. Okay? So limited test automation becomes a challenge. The business metrics, the analytics testing, becomes a very big challenge again: how do you really do that? And if anything breaks, how do we manage the chaos? Internal chaos is not just about being able to fix it quickly; it's about facing the wrath of your own stakeholders. And if you find issues during the blackout periods, how do you manage that? So with this context, I hope you're able to relate to the challenges, right?
There will be different challenges for different types of products, but I chose this example because it amplifies some of these challenges particularly well. Okay? So what is the testing and release approach I took for this particular product? I'll also share certain anti-patterns we adopted along the way; they were conscious choices, given the limitations we were working under. Okay?

First is establishing a structured way of working. We cannot work in a chaotic fashion. We set up the relevant agile and extreme programming practices for the different roles on the team. We enforced the definition of done, and I've highlighted "enforced". Enforcement does not come from agile; it was needed because people were doing things in a very chaotic, ad hoc fashion. The first step to get them on the right path is to explain and enforce, and then slowly, steadily pull back that enforcement and let them become agile and do the right thing on their own. But enforcement was essential at that point to get them started on the right path.

No ad hoc implementation. Product or country managers knew the development team directly. We would have a plan for the next release, but someone would send a Skype message or make a phone call to their favorite developer: can you add this one feature? And then suddenly at release time: why is this showing up, has this been tested? No one even knows how it got into the plan. So we pulled the plug on that, and defined with country managers and stakeholders how we are going to prioritize requirements and bring them into the process. These seem like very obvious things, but believe me, I've seen them fail more often than not. Okay?

The other aspect is finding ways to build quality into the product inside-out, instead of testing for quality from the outside in: preventing defects versus finding defects. How can we establish that? We started doing activities like a feature kickoff: the product team telling everyone involved on the dev side what is expected, what the business requirements are. Next, story kickoffs: how does each individual user story contribute to that overall requirement? Do a kickoff for it, so you know the scope and focus on that aspect only; that again gets everyone in sync when the actual implementation starts. Then dev box testing: don't let the dev say "I am done, you figure out what mess I have made in the codebase." The dev is not going anywhere, because this testing happens on their machine, even though CI has already run all the tests: dev, QA, and product owner go to the dev machine and test the story's acceptance criteria, the most important test cases. If there's a problem, we are not filing defects; it's okay, we found a problem, fix it before you move on to something else. Dev box testing, very important. Okay? Then story testing and exploratory testing; that was obviously required. And then, of course, the product owner has to sign off per story: each story signed off as soon as it is done, not after a two-week sprint. At the end of the two-week sprint there's a demo of what has been achieved to the wider stakeholders, but before that, the sign-off from the product owner has to happen. Again, we are bringing UAT as far upfront as possible from the product owner's perspective; they cannot come back after four weeks and say, where is my requirement, how does it work?
Sorry, you've got an equal stake over here, because quality is a team responsibility. Everyone needs to be in. Okay?

The other thing we did: automation, of course, is the key to moving forward faster with a safety net. So we focused on various types of automation. But automation cannot solve all the problems, especially in this type of complex environment. So we had to be very cognizant and smart about what non-automated testing was required to reduce the risk around the non-negotiables. We had a guiding light for what to watch out for: what fires do we want to avoid, and what fires are okay to take? We had to understand that, and structure our testing efforts around it. So: optimized testing effort based on what is changing, with exploratory testing as the key, and sanity testing.

Now, because of the complexities: our engineering team was in Pune. All engineering, dev, and QA work happened from Pune, but our product supported four regions, with many more countries in those regions. We could not possibly do every type of testing as real-world testing after the final candidate build was available. Why? Because of the payment integrations and partner integrations; you just won't get those in-house. Ad network integrations too: the integration is one, but is the ad actually getting served in one country compared to another? Very important to find out. So we leveraged two things. First, our business teams, our country teams, in all these regions: we gave them a set of test cases or scenarios and said, based on the changes that have gone in, we would like you to go through at least these scenarios, plus whatever else you think of, and give us feedback; test the app in the field and tell us what is going on. Second, we started working with partners who support on-field testing. You can say, I want some testing done in Kuwait on this particular network; find me someone with these types of devices. Same approach: get that feedback. And that became very important.

After all that feedback came through and was okay, we had a pre-release sanity checklist. This checklist was a standard set of things we wanted to validate, because our automation coverage was not at the level we wanted, or there were things we simply could not automate. How do we ensure those parts are checked before going out? Certain analytics events could not be automated as part of functional testing; how do we cover those as well? Based on the checklist, we executed those checks. (The question was about a sample of this release checklist; we can talk about that separately after the core conversation.)
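As one illustration of how such analytics checks can be semi-automated, here is a minimal sketch. It assumes the app's analytics calls are captured through a debug proxy during a test run and exported as a JSON list; the event names and fields below are hypothetical, not the actual events from this product.

```python
# Sketch: validating captured analytics events against expectations.
# Assumes events were captured via a debug proxy and exported as JSON;
# event names and required fields are hypothetical examples.
import json

REQUIRED_EVENTS = {
    "video_play_start": {"content_id", "region", "connection_type"},
    "video_play_complete": {"content_id", "watch_duration_ms"},
    "ad_impression": {"ad_network", "placement"},
}

def validate(captured_path: str) -> list[str]:
    """Return a list of problems found in the captured event stream."""
    with open(captured_path) as f:
        events = json.load(f)  # e.g. [{"name": ..., "params": {...}}, ...]
    seen = {e["name"]: set(e.get("params", {})) for e in events}
    problems = []
    for name, fields in REQUIRED_EVENTS.items():
        if name not in seen:
            problems.append(f"missing event: {name}")
        elif not fields <= seen[name]:
            problems.append(f"{name} missing params: {fields - seen[name]}")
    return problems

if __name__ == "__main__":
    issues = validate("captured_events.json")
    assert not issues, issues
```

Even when the capture step is manual, a checker like this turns "eyeball the proxy log" into a repeatable checklist item.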
On CI/CD, we had things set up pretty well. Any app build, any API build and deployment, or any change to the end-to-end tests would trigger all the relevant test sets, including our E2E functional tests. To run those end-to-end functional tests, we were not able to use any cloud solution, like Sauce Labs or AWS Device Farm, because of the licensing and content constraints: none of these device labs sit in the countries and regions where the product works. If you launch the app there, the first thing it says is that the app is not supported in your country. So we had to set up our own lab. This is, again, a separate conversation; I started calling it the MAD Lab. We set up an in-house device lab that could scale to the level we wanted. The last time I was there, we had about 30 devices, Android and iOS, running in the lab, and the tests were running almost 24x7. I'm proud to say that two devices had their batteries give out in one year of running tests, which tells you how heavily the devices were being exercised. The exploding battery itself is a separate quality issue for the manufacturer, but we actually made use of that kind of information, and it was very helpful.

On the release approach, beyond the testing I spoke about, staged releases are a very important concept. How do you make experimentation fail fast under these non-negotiable criteria? One way is to find a window of opportunity and say: I know this feature is not completely baked, but unless I try it in the field, there is no way to tell whether it's worth proceeding, whether I'm heading in the right direction. The Google Play Store and App Store have this great feature of staged releases, where you upload a build and choose a staged release option. In Apple's case, the last I saw, once you opt into a phased release, it will roll that build out from 1 percent to 100 percent over a period of one week, increasing automatically. You cannot change what percentage it is at. You can pause it, but the minute you unpause, it continues its rollout schedule until it reaches 100 percent. With the Google Play Store, you can upload a build and say: I want this build at 1 percent. In the console it's whole numbers only, 1, 2, 3, no decimal points. So 1 percent, 2 percent, and you could potentially keep different builds at different percentage levels for whatever experimentation you want to run. Okay?

Keep in mind the risk of this approach: your user base will get fragmented. So think about what type of experiment you want to run and what data you want to get. Your release approach includes when you change that percentage. The build is uploaded to the Play Store, ready to be released; I want to release it at 1 percent at 6 p.m. India time, because I know the India user base becomes active in the evening. Even if I keep it at 1 percent for three hours this evening, I will get sufficient data, and then I can stop that rollout. That becomes a very important data point. Of course, within that 1 percent, I'm choosing 6 p.m. India time because my maximum users potentially come from India; users from other regions will also get this build, because 1 percent means 1 out of every 100 users who come in. Since the user base is largest in India, Indian users have a higher probability of getting this build than users in other regions. So you have to look at the volumes too, and what type of data you want. But use staged builds effectively; they can add immense value to your release cycle.
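For illustration, the Play Store side of this can also be scripted through the Google Play Developer API (androidpublisher v3), which expresses the staged percentage as a user fraction rather than the whole numbers the console shows. A minimal sketch, assuming a service account with release permissions and an already-uploaded build; the package name and version code are hypothetical.

```python
# Sketch: setting a staged-rollout percentage via the Google Play
# Developer API (androidpublisher v3). Assumes a service account with
# release permissions; package name and version code are examples.
from google.oauth2 import service_account
from googleapiclient.discovery import build

PACKAGE = "com.example.app"
creds = service_account.Credentials.from_service_account_file(
    "play-service-account.json",
    scopes=["https://www.googleapis.com/auth/androidpublisher"],
)
play = build("androidpublisher", "v3", credentials=creds)

# Edits are transactional: open an edit, modify the track, then commit.
edit = play.edits().insert(packageName=PACKAGE, body={}).execute()
play.edits().tracks().update(
    packageName=PACKAGE,
    editId=edit["id"],
    track="production",
    body={"releases": [{
        "versionCodes": ["150"],    # hypothetical uploaded build
        "status": "inProgress",     # i.e. a staged rollout
        "userFraction": 0.01,       # start at 1% of users
    }]},
).execute()
play.edits().commit(packageName=PACKAGE, editId=edit["id"]).execute()
```

Scripting it this way is what lets you do things like "flip to 1 percent at 6 p.m. India time" from a scheduler instead of a person clicking in the console.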
The other important aspect, which is very crucial, especially from an app perspective, is monitoring these releases. It doesn't make sense to just say "I'm doing a staged release" if you're not able to look at the data coming back and take decisions on it; that's futile, and you might as well go straight to 100 percent. So it's very important: look at the analytics, and look at the crash reports as they come in. And look at the best source of information from users. How many times have you complained about a crappy product on the Play Store, versus on Twitter or Facebook or Instagram? Look at the review comments in the channels your user base actually uses; they can give you tremendous insight into how your product is working for them. It's very good feedback, so social media is again an important aspect. And of course, if you have a structured support system, look at the tickets users are raising, specifically during the staged releases, because that's where your experiments are running and that's the kind of feedback you're looking for. So monitoring becomes a very, very crucial activity.

I did not go deep into the tool specifics of the testing strategy here, because for automation you can use various tools. From a native app perspective, Appium works great for Android and iOS, but there are other tools, including commercial ones. You don't have to go hunting across the market; choose tool sets that give you value and the kind of feedback you're looking for. But these other aspects are things people don't think about enough from a native app perspective. Your goal is not just to test a story or do a sanity pass on the product before release. What happens after the release is as crucial as what you did before it, because you don't control what the users might be doing, or what kind of crappy or high-end devices they might be using. So I hope that gives you some food for thought about the different things that go into a testing and release perspective for native apps.

Do we have time for some questions? About three minutes. We'll talk about the checklist separately, unless others are also interested in it. Any other questions or thoughts? Yes? One minute, we'll get the mic. "You mentioned that you created that lab for testing purposes, and that you were testing on 30 devices. What was the main criterion, when you are testing different OSes, with all the different permutations and combinations?" So, the criteria for how and where to run your automation, whether it's a local lab or cloud-based? I think the criteria are the same. Look at your analytics data: what are users really using, what types of devices, where are they coming from? Identify the right user journeys. From a UI, end-to-end perspective, we automated only the critical user journeys, the ones our users go through every time. Based on that identification and user segmentation, and the types of devices, we chose which devices to use for automation, and which devices are automatable at all. For example: anyone here from OPPO? Too bad, I would have given direct feedback again. To do Android-based automation on OPPO devices, you have to enable developer tools; that's the first requirement on any Android device. OPPO devices are very "secure": there is a captcha to enable developer tools, and it gets disabled after every ten minutes of inactivity. There is no way you can run automation on an OPPO device. And that might be the most used device for your product. So there are a lot of such criteria based on which we selected devices, and we ran all the tests on all of the identified device types.
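A minimal sketch of that selection logic: given an analytics export of device/OS session counts, pick the smallest set of combinations that covers a target share of sessions, while skipping devices you know you cannot automate. The data shape, model names, and thresholds below are hypothetical.

```python
# Sketch: picking a device lab from analytics data. Assumes an export of
# (device model, OS version) per session; shapes/names are hypothetical.
from collections import Counter

def pick_lab_devices(sessions, target_coverage=0.8, blocklist=frozenset()):
    """Choose the smallest set of device/OS combos covering the target
    share of user sessions, skipping non-automatable devices."""
    counts = Counter()
    for model, os_version in sessions:
        counts[(model, os_version)] += 1
    total = sum(counts.values())
    chosen, covered = [], 0
    for combo, n in counts.most_common():
        if combo[0] in blocklist:      # e.g. devices we cannot automate
            continue
        chosen.append(combo)
        covered += n
        if covered / total >= target_coverage:
            break
    return chosen, covered / total

# Toy data; CPH1609 stands in for an OPPO model excluded for the
# developer-tools captcha reason described above.
sessions = [("SM-J200G", "5.1"), ("SM-J200G", "5.1"), ("Redmi 4A", "6.0.1"),
            ("CPH1609", "6.0"), ("SM-G610F", "7.0"), ("SM-J200G", "5.1")]
lab, coverage = pick_lab_devices(sessions, target_coverage=0.6,
                                 blocklist={"CPH1609"})
print(lab, f"{coverage:.0%}")
```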
Okay, one more question. "When you're automating your tests in the native application, you click something, the application crashes, and you don't know the reason. How do you handle scenarios like that?" So, we did instrumentation in our codebase. This we did for Android; for iOS we were not able to get to this level of sophistication, for various reasons. When we ran the Android tests, we used logcat, the console logs for Android, from the executed test: clear logcat before running the test, capture it from the device after the test, and parse it, because there are patterns in it that tell us whether the app crashed. If the app crashed, the functional test might still have completed, but maybe the app crashed at the last stage of completion, so we would still fail the test. And every morning, someone from the QA team, at least one person every day, as a QA responsibility, looked at the crashes that were happening, whether internal or from production, and analyzed the reasons. If you've got millions of users, you might get hundreds or thousands of crashes. Not all of them will be because of your code; it might be some payment integration SDK, a problem on their side that crashes our app. But analyze it and take corrective action if there's a crash we need to fix or handle better on our side. It goes into sprint planning and we address it immediately.
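A minimal sketch of that logcat check, assuming plain adb access to the device from the test harness; the package name and crash patterns shown are examples, not the exact ones we used.

```python
# Sketch of the logcat-based crash check described above: clear the log
# before the test, dump it afterwards, and fail the run if a crash
# signature appears. Package name and patterns are examples.
import subprocess

PACKAGE = "com.example.app"

def clear_logcat(serial: str) -> None:
    """Reset the device log so the dump only covers this test."""
    subprocess.run(["adb", "-s", serial, "logcat", "-c"], check=True)

def app_crashed(serial: str) -> bool:
    """Dump the device log and look for crash patterns after a test."""
    log = subprocess.run(
        ["adb", "-s", serial, "logcat", "-d"],
        capture_output=True, text=True, check=True,
    ).stdout
    patterns = ("FATAL EXCEPTION", f"Force finishing activity {PACKAGE}")
    return any(p in line for line in log.splitlines() for p in patterns)

# In the harness: even if the functional flow passed, fail the run when
# the app crashed at any point during it.
#   clear_logcat(serial); run_test(); assert not app_crashed(serial)
```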
"And were all your test cases running on emulators or real devices?" Real devices. Media streaming doesn't work very well on emulators. Let's see if anyone else has a question; otherwise, if you have more, we can continue afterwards. There's a question on this side. Just one last question, then; I'm scared if you have to ask just one more. "When we do budget planning for testing, what incremental cost do you need to plan for native app testing? In your experience, what would it be, in approximate percentage terms, or do you see no change?" There's definitely a change. Fortunately, I didn't have anyone asking me that question, because we believed in the importance of doing automation and covering the set of things we needed to cover. But it's an important criterion for sure. If I've got 13,000 unique devices, I'm not going to spend on even 1,000 of them; you have to prioritize, pick and choose. So the first criterion to look at is what device coverage you want, from a native perspective, for executing your tests. Then, is it build versus buy? Can I work with a cloud-based solution, where I build the automation but outsource the execution, and what would the cost comparison be? There are different things to consider. Between web and native, the difference is significant, because for web I can just take a browser-OS cloud solution to execute my tests; that is much cheaper than a device farm. So those are the considerations to weigh. There is a difference, but depending on the context of your product, you might say: just five devices is fine for me, I'll run tests once every night, and that is okay. In our case, the last time I was there, we were running tests on 30 devices, 24x7. And we were not worried about the cost, because other than the device cost, everything was built on our own. If it's a cloud solution, that can become a factor for some organizations. So I hope that addresses the question, at least in some way; we can definitely talk more before Deepthi pushes me out of here. I'd like to say thank you for this opportunity, and I'm looking forward to talking with you more outside as well. Thanks a lot.