Okay, good afternoon everyone. How is the first Appium conference in India going? Good? Food? Good food? Okay, I ate a lot and I did not sleep very well last night, so I don't know about you, but I am very sleepy. So it's going to be your responsibility to make sure I stay awake and deliver what I think I should be delivering up here. Can I count on you? And the way we are going to do that is by making you talk and participate more, rather than me just labouring away up here. That's my strategy, just give it away.

So, before anything else, what do you think this particular session is about? Sorry? App release strategy, okay, you read the title, that's good. What else? Anything else? Okay, let me change the question. Forget what this session is about: what do you expect from this session? There's a difference, right? Why are you here? What would you like to find? Maybe continuous delivery? Yes, we'll talk about some aspects of that. Yes? Sorry? How do we build a good-quality product and how do you test based on that? Yes, we'll be talking about that. The optimal way to make changes in the code before we push them to production, yes. Identifying risk at the initial stage of testing: a very important thing, and we'll cover some aspects of that as well. How fast we can complete the release management cycle: that is very interesting, and yes, that is definitely one of the key aspects we'll be looking at.

So, spoiler alert: this is not about testing and release strategy. As some of you rightly pointed out, this is about quality and release strategy. Do we understand the difference? Testing strategy versus quality strategy, anyone? Testing is a part of quality, well said. Before I explain more about what this difference really is, this is where the participation starts, and I'll know if you're sleeping or not. I want to understand the different roles in the room. QAs, can you raise your hands? Wrong question, sorry. Automation QAs, almost the same answer, okay. Anyone focused only on manual testing or exploratory testing? Developers, any developers? Business analysts or product owners? Managers, okay. Which role am I missing, because this doesn't add up to 100%? Sorry? DevOps, okay, DevOps. Still doesn't add up; I'm probably missing some other roles as well. Sorry? Yes, that's automation related, right? Okay, so for the other roles, you are still very involved in what we're going to talk about.

So the difference between quality and testing is this: testing is what the QA team, in a sense, does. That's one way of looking at it. Quality is how you look across the whole product: how you bring in what needs to be changed or implemented as features, how you get it in front of your end consumers, how you understand how those consumers are using your product, and how you take that into your feedback cycle and iterative development again. So quality is a much larger gamut of how the product is built and used. That's what we are going to talk about.

Two seconds about myself. I'm Anand Bagmar; my Twitter handle is there, @BagmarAnand, so feel free to reach out to me for any questions, follow-ups, or discussions. I also blog at Essence of Testing, though not very regularly, but that's where I share my ideas and thoughts as well.
Enough about myself, let's start with the core topic. First we want to understand the differences between web and native apps. So again, I want you to raise hands and share what you think. What are the similarities when it comes to testing web and native apps? What things are similar? Functionality, yes. Sorry? Compatibility, in what way? So browser versus devices, or whatever that combination is: does your functionality still work consistently across all of that? What are the other similarities? Sorry? [inaudible] Can someone repeat that, sorry, I didn't follow; I didn't understand what that means. Can you explain? Sure. So that's similar to the compatibility, or the configurations that you support, right? Okay, what else? UI, the user interface. Again, that's related to functionality in some ways, right? And the user experience is also what you would be testing, not just the UI but the user experience around it. Okay, so we sort of understand this. Any other things that you think are similar, or that you would need to be doing for web as well as native? Performance. So let's expand that: it's NFRs, essentially. There could be various types of NFRs applicable to your product which you would need to test on web as well as native. Great, so we understand this space better.

Good. What are the differences between web and native? How you interact with them, right, so locators, that's one difference. What else? Ecosystem, what do you mean by that? So hardware resources, right: the hardware resources of the device play a bigger role on native compared to web. In some cases the mobile web component also plays a part; poorly written JavaScript in mobile web can make a device go haywire in various ways. Okay, what else? Native libraries, expand on that please. Access to data related to native apps, how is that different for web versus this? Sure, okay. So access to certain device information from a mobile app perspective is different compared to a web perspective, right? Yeah, what else? Progressive web apps, what is different about them? Is that more on the functionality side, or is it really a difference between web and native? So you're talking about the instant-apps kind of thing? Yeah, okay, so that is one difference from a web versus native app perspective, where you can have instant apps; that's a cool new thing, from Google at least. I don't know about the Apple ecosystem on that one, but that's a good one, okay?

So there are many other factors as well, right? The rotation factor can come into the picture, so portrait versus landscape can be another aspect. There are a lot of things when you consider hardware: there are sensors plus the actual device hardware, and how it is going to interact with or support your application in terms of resource usage and availability. The OS dependencies play a much larger role in native apps compared to web, because on the web you are inside the container of a browser, per se. In a native app you are dependent on the OS, and it could be a customized OS or different versions of the OS. So the same app can be across literally hundreds of combinations of OSes and devices, right? And these can have an impact on your functionality and on how your product is going to be used by your end consumers. Good, so we understand similarities and differences. Now, in terms of releases, okay?
You spoke about the testing side; now let's understand the release aspect. What are the similarities between web and native when it comes to releases? Test it and release it: similar, yes. You have to make sure functionality works, the compatibility, the NFRs, everything, and then you release it. Agreed, what else? You cannot roll back the app: that's a difference, not a similarity; you're jumping ahead to the next slide, hold on to that. Percentage rollout to users: how would that be similar for web versus native? Very well said. For those who didn't hear, basically you can roll out your web-based product based on a percentage or other criteria, geographies, whatever way you want to slice it, and there are techniques for doing the same for native apps as well. Of course the level of control is different, because the distribution mechanism is also different, but that is a similarity. What else? What about certification? Okay, that again is a difference, not a similarity. You're right, but it's a difference between the two. Another similarity could be that my app or web app in production will need certain configurations, and you need to test for that as well: it's not just functionality, it's the configuration that drives that functionality. There could be A/B tests or other types of experiments that you are running, and you would need to test those in both cases too.

Okay, now let's talk about differences. Certification was one. You had said something: rollback strategies, right? Rollbacks cannot be done easily in native apps, or really cannot be done at all. Even in web apps you have to take extra effort to make sure a release can be rolled back; it's not out-of-the-box functionality, you have to build for it. What else is a difference? Beta releases: how would that be different? So a beta release as a concept, a released version of the app versus an experimental version, that is one difference. But you can also think about the percentage rollout as a form of beta release, right? So you can think of it in that sense as well. The instant apps we spoke about are a difference: there is no such thing as instant apps on the web. There is a concept of instant apps in native, which means you now also need to test not just the native app functionality and all the other criteria we spoke about, but also whether the instant app is working correctly. You cannot just assume it will be.

The other important aspect is deploy versus upload, which ties back to the certification point. A web app you just deploy to a server or a set of servers, whatever your environments and architecture are; you just do a deployment, and the next time a user comes to the site, he or she gets the latest changes. In the case of native apps, you have to upload to the Google Play Store or the App Store, and each of these has its own certification or review process to decide whether the app is fit to be released. In Google's case, for example, the app can potentially be approved within a couple of minutes. In the App Store's case, I have seen cases myself where it can take more than a week after upload to get approval.
So that has a very big impact on how you plan your releases, right? That's a big aspect. Another very important aspect: I spoke about the web, where you do a deployment and the next time the user comes in, that user gets the new functionality automatically. In the case of native apps, it is the user's choice whether to upgrade to the new version or not. Now, you can build functionality into your app to force users to upgrade to a new version, but you have to build that functionality, and it is still the user's choice: I could very easily say I don't want anyone forcing me to do anything, I'll uninstall the app. It's the user's choice. So you have to think about that aspect as well: how do you plan your releases, what type of features are going out, and how do you want that rollout to go? Another aspect: even if users have auto-update turned on on their devices, it really depends on when the user is online with a network connection so that the app actually gets downloaded for them. So even if I don't do staged rollouts and just do a 100% rollout of the approved app, you can never be sure that all your users will be on that version. In fact, the version usage will be so fragmented across your user base, you'd be surprised to really see it. If you have not looked at that data, you should.

So these are some of the differences. With me so far? Let's talk about CI/CD: is there any difference or similarity for web versus native? Both are mostly the same, right? However, there's one big difference. The upload process can still be done automatically, but you still need to wait for the approval to come in before you release. From an internal perspective, within the organization, you can still implement CD in the same fashion: if it's an internal app, you can make sure it is available automatically as you progress through the environments. But the last mile to production, especially if you are on the Play Store or App Store, will require that manual intervention. So that is something to think about.

With this in mind, let's look at a case study, and using that case study as a reference, I want to share the approach I took for that type of product from a quality perspective and a release perspective, and hopefully that gives you some interesting data points about what you could be doing as well. A streaming app: is there anyone who doesn't use streaming apps? Netflix, Amazon Prime, or any others? YouTube, do you use YouTube? No YouTube either? Great, you're in a good talk, lots of new information for you. But you would at least know about these apps, I'm guessing, and what they mean. These are apps that are basically content aggregation: you have a lot of content, whether it's music or movies or TV shows, and there are various ways users can interact with it and play that particular music or video. Depending on the app and its popularity, you literally have millions of users. So for this type of app the content is dynamic, and it's not just dynamic in the sense that the content can change: there will be licensing requirements from the sources, saying I cannot have this content available in certain regions or geographies, right?
There are a lot of such criteria that can come in, which makes the content even more dynamic than just whether it is available or not. So for this type of product, how would you test, and how would you think about the release strategy? Thoughts, what would you do? Excellent, thanks for sharing that. Can I get your name, please? Akash. Akash would have been a great person to stand up here and do this talk with me, because we are talking about the exact same problem statement. What he said is that they've got CI in place, checking, based on geography, whether certain content is available or not. He spoke about a tenant concept there, how they manage that separation: they trigger the automation with different tenant IDs and check whether that content is available and whether the video can be streamed. Does that summarize it correctly? Great, so that is one of the things you would have in your testing approach.

Anything else others can think of that needs to be done from a testing and release perspective? Now, a streaming app is there on Android, iOS, mobile web, and the web as well, right? In some cases the business strategy might be that I don't care much about web or mobile web, my focus is only on native, which is fine, but it is at least Android and iOS native. What other things would you want to think about from a testing and release perspective for such apps? What about network speed? Okay, so based on the network speed you are on, 2G, 3G, 4G, Wi-Fi, and different variants of these: it's never a fixed number, and you are also switching between them when you're on the road. Does your functionality work well? So it's not just about whether a video can be played or not: when you're switching, when you're on the road moving between these network regions, is it playing correctly? How would you test that? Would you be on the road constantly? Sure, so emulation, emulators versus real devices, that's one option, but there are also ways you can throttle the bandwidth or toggle connectivity and verify that things are going well (a small sketch of this appears after this paragraph). Think about it: a functionality-rich app. What parts of that functionality would you want to test across all these variants of throughput, and how would you manage that? I don't think it's a trivial problem.

So the reality is that such apps have a huge user base, and huge could mean millions of users across geographies, depending on how your product is distributed and made available. When I was working on this particular streaming product, at that point in time, when we looked at the analytics, there were 13,000-plus unique combinations of Android OS and device. The variation on iOS was a tiny fraction of that, because there are only three or four types of devices and OS versions there. But there were literally that many combinations of Android OSes and devices that our users were using. How do I think about a testing strategy, especially for when users are on the road and these network conditions are changing: is my streaming and functionality going to work correctly or not? It's a huge challenge.

There are certain non-negotiable criteria whenever you do a release. Of course functionality has to work; that's the primary thing, and if that doesn't work, you cannot do a release. You have to be able to do frequent releases, because gone are the days of building monoliths and waiting six months or a year before you do a release, right?
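A brief aside on the network-switching point above: with Appium's Java client you can toggle connectivity from within a test, which is one (imperfect) way to approximate a user moving between networks. This is a minimal sketch, assuming a recent Appium 2 setup with the UiAutomator2 driver and the Java client; startPlayback() and isPlaybackSmooth() are hypothetical, app-specific helpers, and the app path is a placeholder.

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

// Sketch: start playback, drop Wi-Fi mid-stream (the device falls back to mobile data),
// and check that the player keeps going.
public class NetworkSwitchDuringPlaybackTest {

    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("appium:automationName", "UiAutomator2");
        caps.setCapability("appium:app", "/path/to/streaming-app-debug.apk"); // hypothetical path

        AndroidDriver driver = new AndroidDriver(new URL("http://127.0.0.1:4723"), caps);
        try {
            startPlayback(driver);                 // app-specific: navigate to a title and hit play
            driver.toggleWifi();                   // simulate losing Wi-Fi while on the move
            Thread.sleep(10_000);                  // give the player time to recover on mobile data
            if (!isPlaybackSmooth(driver)) {
                throw new AssertionError("Playback stalled after dropping Wi-Fi");
            }
            driver.toggleWifi();                   // Wi-Fi comes back
            if (!isPlaybackSmooth(driver)) {
                throw new AssertionError("Playback stalled after switching back to Wi-Fi");
            }
        } finally {
            driver.quit();
        }
    }

    private static void startPlayback(AndroidDriver driver) { /* app-specific page objects */ }

    private static boolean isPlaybackSmooth(AndroidDriver driver) { /* e.g. poll the player state */ return true; }
}
```

On emulators, a capability such as appium:networkSpeed is another way to approximate slower links, but none of this fully replaces being on a real network in the field, which is why on-field checks come up again later in the talk.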
You have to be able to get frequent releases out, and also make sure you respect any blackout dates. Why is that important? Based on your usage patterns: say it's a subscription-based product. You have to understand your users and say, my users have a monthly subscription, and based on the geographies we are targeting, maybe at the end of the month, when they get their salary, they'll pay their rent and the other necessities they need to take care of, and with the remaining money they will probably renew their subscription to my product. Which means I cannot afford to have any problem in my product at the time when they might be paying me, right? It's not just about functionality; when you do a release becomes a very important aspect to think about in these areas as well. In some cases it might not make much difference.

The other aspect: for native apps, analytics becomes a critical way to understand what your product is doing, and in many cases it's not just about understanding what your users are doing with your product. Many times analytics is actually the source of your billing and revenue. For example, if I'm going to be serving ads in my app, I might be integrating with Facebook or Google or whichever other ad aggregator. They will have their own tracking of how many ads were served, and based on the contract arrangements either they give me money or I pay them, whatever it might be. But this usage data is what determines who needs to pay how much to whom. And if analytics is broken in your product, they might still be able to track it, but you are at their mercy, having to just agree with whatever they are saying. You need to be able to correlate what is going on. So analytics is important not just to understand what your users are doing with the product, but also, from a business perspective, to know whether it is working correctly, what patterns you see in it, and what decisions you need to take next.

Another aspect which becomes very important is partner integrations. For example, if the user is on a particular network provider, Vodafone for example, I might have a deal with Vodafone that says if it's a Vodafone subscriber, automatically give them three months free. In another geography it might be Singtel in Singapore, and if they're on the Singtel network they get six months free, or whatever that deal might be. So I need to make sure these partner integrations are also always working as expected, so that the end-user experience doesn't go bad, functionality works well, and the business continues. These are the non-negotiable criteria in this type of app.

Depending on the organization, there are some tipping points at which hell can break loose, literally break loose, on the team implementing all these changes. In our case, it was slow, inefficient, or incomplete testing. I want to do a release quickly; I don't have enough time, because of blackout dates or whatever the criteria might be; I have to do a release come what may. That's a tipping point. I cannot afford slow or inefficient testing going on there. Related to that could be a poor code base, which means I have to spend more time testing the same things again and again, and there's not enough automation either.
That can become a big problem. As for the business metrics: if any of them are affected negatively as a result of the changes you push out, that particular business stakeholder is going to rain down on you, and that's a big problem. It might be okay in some cases if certain functionality does not work well, but analytics has to continue working. At that point in time we had, I think, somewhere between 800 and 1,200 events being sent from the app, Android and iOS combined. Now think about this: if you don't have a good solution from a testing perspective, and that testing could be at any level, how can you check 800 to 1,200 events across all those different business operations, making sure the data inside them is also correct, so that my business owners are not going to come and rain down on me? It's a very big problem. And of course I don't want to be releasing problems into production for the users during the blackout dates, because that's when I'm going to make, or get, my money.

So for this type of product, are you able to relate with me so far? You understand the challenges, yeah? So what is the quality strategy you can think of? How can it help bring the change needed to build a good-quality product inside out, preventing defects from going in in the first place instead of testing for defects later? There are a couple of things that we did. We established a certain structured way of working. I hate the word process. Any managers here? Sorry, I hate process. But practices are very important, and processes are an enabler to do the right things, so I prefer calling it a way of working instead of processes. We established a lot of different practices in the team to make sure we were able to control what was really going into the app and at what quality. We used a lot of agile and XP practices. We established a definition of done, no ad hoc implementation, those kinds of things. We also did a lot of things to test early, and that starts with the requirements coming in: product owner, have you really thought through what you're trying to implement and how you are going to measure its success? Those kinds of things. This is not a process talk, so I'm just going to skip through this; the slides will be available, so you can look at it later. These are standard things, nothing drastically different, but sometimes you tend to miss out on even the obvious. We made sure we were doing the obvious things right, and we started establishing that.

But the key thing, to enable all of this to happen and ensure hell was not breaking loose, or at least not as often, was automation. That was the key to getting onto that journey. So we decided we would automate as much as we could, and there were different levels of automation: unit, integration, functional tests focusing on user journeys, and analytics tests, because we knew the core problems we were facing in the product. We spent good time automating, but the value of automation comes from running the tests on each and every change that goes into the product. And it's not just every change in the product; it's every change in the test code base that is testing the product, too.
If your tests do not have assertions, you might have a million tests running every day and it makes no difference, because without assertions the tests will never fail, right? So the value of automation is running the tests on every change and making sure the tests are giving valid feedback. And once you have automation in place, CI becomes a very natural way to run it often. And if you have done this really well, you can start thinking about continuous delivery: if everything is working correctly in one environment, propagate the same thing to the next environment, run the next set of tests there, and keep going until you get to production.

So what we did is: any change that happened would trigger an app build, and any API deployment would trigger the API builds, which ran their own set of tests, unit, integration, whatever was there. Any test-code change meant we ran the tests again. These end-to-end tests were the last mile, after all the unit, integration, and code-coverage checks had run, and this was the part we ran very, very regularly. The way we did that is what I like to call building a mad lab. That's a separate talk or conversation by itself, why we had to set up a lab. It started off with running a test on my machine, then having a set of devices and running tests against those, and eventually building a proper full-fledged lab in-house which could scale pretty well with very little manual intervention. The last time I was there, we had more than 30 devices in-house in that lab, and the tests were running on them literally 24x7. In one year we had two device batteries explode, or start to explode. There are a lot of very interesting stories from that journey, but we saw a lot of value in the way we ran those tests.

The capabilities we had to enable here: of course, based on the criteria we looked at and the evaluations we did, neither emulators nor cloud solutions worked for our streaming app, so we had to set it up in-house. We had to make sure it supported Android, iOS, phones and tablets, because that's what our apps were built for; the focus was not on browsers, because our key user base was on these platforms. So, starting from there: we used Cucumber-JVM to specify the tests, Appium to drive the application, and custom reporting built on cucumber-reports to get meaningful reports about where the tests ran and what details each test had, including the device details for each test, so that if a test fails you can pinpoint where it really failed. Video recording, screenshots, and all of those were standard things we built into the framework to allow root-cause analysis of any failure as quickly as possible, ideally without having to rerun the test, because that wastes a lot of time. If you build the right instrumentation into your implementation and execution, you get a lot of these details automatically. We also ensured we could do distributed test execution: I don't want to run tests in sequence if I've got three devices of the same kind.
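To make the "whichever device is free first" idea concrete (it is elaborated in the next paragraph), here is a minimal sketch, not our actual implementation: a thread-safe queue of device IDs that parallel test threads borrow from and return to, so two tests never drive the same device at once. The device IDs and the commented-out runScenarioOn() call are placeholders.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal device-allocation sketch: parallel test threads take whichever device
// is free, run, and give it back. A real setup also pairs each device with its
// own Appium server/port and handles crashed sessions.
public class DevicePool {
    private final BlockingQueue<String> freeDevices = new LinkedBlockingQueue<>();

    public DevicePool(List<String> deviceIds) {
        freeDevices.addAll(deviceIds);
    }

    public String acquire() throws InterruptedException {
        return freeDevices.take();          // blocks until some device is free
    }

    public void release(String deviceId) {
        freeDevices.offer(deviceId);        // hand the device back to the pool
    }

    public static void main(String[] args) {
        // Hypothetical device IDs; in practice these would be discovered via `adb devices`.
        DevicePool pool = new DevicePool(List.of("emulator-5554", "emulator-5556", "deviceA1B2C3"));
        Runnable oneTest = () -> {
            try {
                String device = pool.acquire();
                try {
                    System.out.println(Thread.currentThread().getName() + " -> " + device);
                    // runScenarioOn(device);   // hypothetical: start an Appium session bound to this device
                } finally {
                    pool.release(device);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        for (int i = 0; i < 10; i++) new Thread(oneTest, "test-" + i).start();
    }
}
```

The design choice here is simply that allocation blocks rather than fails: ten tests against three devices will queue up and keep every device busy until the suite is done.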
I want to make sure that if I've got 50 tests, they run on whichever device is available first, so that the devices are used in the most optimal fashion, and we also built utilities to manage Appium and the devices so that there would be no conflicts while other tests were running. You have to spend time and effort to make that happen.

Because analytics was such a core piece of our business functionality, and because of the impact of it not working well, we had to figure out how to really validate it. We instrumented the app: we had the developers put additional log statements into the debug build, and we built an analytics automation solution on top of the functional automation. Once a functional test ran, we knew which scenarios had been executed, and based on those scenarios we knew which analytics events should have been triggered. We built logic into our test framework that said: I know the scenario, here is the set of events that need to go through, and it's not just the event names, it also includes the property values of those events; do that validation (a minimal sketch of this follows this paragraph). The tricky bit was that it was a streaming app: I could get no ads while streaming, or one ad, or two; ads could come at the beginning, in the middle, or anywhere. So how do we control for that, given that events are triggered for the ads as well? We built the analytics validation around the minimal set of analytics events that should always be sent, and asserted that they go through every time. You have to spend time understanding that and solving the problem in that fashion.

The other part which was very important: Google has this good thing where, in the Play Store developer console, they give you a lot of insight into how your app is being used by your users, what kind of performance it has, and what people are saying about it; they do a lot of analysis on that. From that we figured out that Google had started flagging our app for a frozen-frames issue, meaning the screens were not changing as quickly as they needed to, which gives the appearance that the app has frozen. So we said, okay, how can we test for these things as part of the functional automation itself? We mixed a lot of things together; there could be some anti-patterns in here, for sure. But given the conditions we had, we could very easily extend our automation: we had added analytics, so could we do some GPU profiling or performance profiling as well? If I have to download a video of five MB, or of five minutes' duration, I want to make sure the download in a given network condition always takes only that much time. We added those checks as functional performance benchmarking in the framework, and we could trend it very easily: for this operation, over a period of time, across whatever versions of the app, is it consistent or not? There could be reasons why the numbers go up, and that's fine, we might have added functionality; but that trend gives you an indication of whether you need to investigate further: is it truly a performance issue, or has the functionality evolved so that, yes, this is genuinely going to take more time? So think about all these different aspects; that gives you a very different insight as well.
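Here is a minimal sketch of the analytics-validation idea described above; everything in it is hypothetical except the shape of the approach. Each scenario declares the minimum set of events (and the key property values) that must be fired; after the functional steps run, the events captured from the instrumented debug build's logs are checked against that expectation. Ad events are deliberately left out of the expected set, matching the "minimal events" rule for content whose ad count varies per run.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of per-scenario analytics validation. CapturedEvent instances would be
// parsed out of the extra log lines the developers added to the debug build.
public class AnalyticsValidator {

    record CapturedEvent(String name, Map<String, String> properties) {}

    // Minimal events (and property values) that MUST be present for a scenario.
    // Ad-related events are intentionally excluded: their count varies per run.
    static final Map<String, List<CapturedEvent>> EXPECTED_PER_SCENARIO = Map.of(
            "play_video_on_wifi", List.of(                       // hypothetical scenario name
                    new CapturedEvent("video_play_start", Map.of("network", "wifi")),
                    new CapturedEvent("video_play_complete", Map.of("network", "wifi"))));

    static void validate(String scenario, List<CapturedEvent> captured) {
        Set<String> capturedNames = captured.stream().map(CapturedEvent::name).collect(Collectors.toSet());
        for (CapturedEvent expected : EXPECTED_PER_SCENARIO.getOrDefault(scenario, List.of())) {
            if (!capturedNames.contains(expected.name())) {
                throw new AssertionError("Missing analytics event: " + expected.name());
            }
            CapturedEvent actual = captured.stream()
                    .filter(e -> e.name().equals(expected.name())).findFirst().orElseThrow();
            // Check only the property values we care about, not every property.
            expected.properties().forEach((key, value) -> {
                if (!value.equals(actual.properties().get(key))) {
                    throw new AssertionError(expected.name() + ": expected " + key + "=" + value
                            + " but got " + actual.properties().get(key));
                }
            });
        }
    }
}
```

The same post-scenario hook is a natural place to record operation timings for the kind of functional performance benchmarking and trending mentioned above.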
Scaling was very important in how we had to build this lab. Our solution was to use Mac minis, 16 GB, with a one TB hard drive, not the SSD, because we didn't want to run into disk-space management issues on a machine we didn't really care much about. But we invested in good powered USB hubs, because we wanted our devices to constantly get power and not drop connectivity because some other device was drawing more power. We invested in good-quality USB cables. The first cable I got was from, I think, Croma, one of the stores there, a one-into-four USB splitter; within a week the cable went bad and started losing connection, and then of course the tests fail because you cannot connect to the device anymore. So invest in good infrastructure so you don't need to spend time on those kinds of issues. That became very helpful in how we scaled up as well. And it was a template: you set up one Mac mini and there is no setup done manually. Setting up a machine is also scripted; just run that script and it will set up everything except the JDK, which you install manually. It sets up the Android SDK, Appium, whatever dependencies you need, with no manual intervention required. So I get a new Mac mini, install the JDK, set a couple of environment variables, run the script, and I'm done; I can start running my tests immediately.

The way this architecture eventually looked, with some of the utilities in it, is shown over here. We can talk more about that later if there is interest. But the key thing, again, is to understand your problem statement, what you're trying to do, and accordingly build a solution that solves that problem. Don't just go with the approach of "I know this tool and it's a great tool, how can I use this tool to test what I have?" That really doesn't match up very well. This is very important, and we didn't get to this architecture from day one; we evolved into it as we continued on that journey.

The key point, and I know this is an Appium conference, an automation conference, but I have to say this: automation alone is not sufficient, and that is where it's very important to find the balance between what can be automated and what cannot, and how you compensate for the latter. That is a very, very important aspect. The approach we took was: we are focusing a lot on automation, but we are going to very intelligently select and optimize the non-automated testing we need to do. What do I mean by non-automated testing? What type of exploratory testing do I want to do there? It's not ad hoc testing; there's a difference between ad hoc and exploratory. So think about what exploratory testing is required, what sanity testing is required, and some of that testing has to happen in the field, because I cannot really guarantee the switch between 2G, 3G, and 4G just by simulating conditions, and there are other conditions, which users really face, that are just as important. So if you have offices across different geographies, what is a simple way to tell them: take this build, do these simple tests in the field while you're moving around, try different networks, and tell us what you find.
Have your business teams or product teams also do some testing, because they'll bring a different perspective, a third-person perspective, and that will give you a lot of value as well. So it's not that you are doing everything only from a testing-team or dev/QA perspective; you are encouraging collaboration with other roles across whatever geographies you are in, and using that. If on-field testing is not possible for your own teams, there are services that will provide people to do on-field testing for you: you give them a template, you tell them how it is to be done, and they'll test in the field with the types of devices you want, and you'll get good feedback out of it. So it's a very important aspect.

In spite of all this, we still had a backlog of testing that was required, or rather, of what was not automated. So we maintained that kind of checklist, and it kept evolving. Whatever was in our automation backlog went into a checklist of things we wanted to validate before a release. Over time that list should keep getting smaller, because you should be adding those items to your automation. But there will be other scenarios which are not worth automating, for example, and which you still want to sign off on before a release, saying yes, this is still working fine. So, for example, payments, and certain analytics events which cannot be automated, were tested manually before the release went out.

The release approach we took was: on-field sanity testing, plus staged releases, which we spoke about earlier. One big difference: in Google Play, when you do staged releases, you can actually control the staged release for as long as you want. In Apple's case, once you go the phased-release route, unless you pause the release, it takes you to 100% automatically within about a week. Unless things have changed now, but that's how it worked at least a year ago. So understand how staged releases work and plan accordingly. Another very important aspect is sentiment analysis, which is not something we tend to think about much, right? We think about functionality, NFRs, and everything else, but how are users really using the product? Are they complaining? Are they happy about it? How many support tickets are they raising? So: extensive monitoring of the releases, which matters even more during a staged release than a full release, because you want to see the impact of the changes you are releasing to that small slice. Sentiment analysis and monitoring are very important for understanding how your product is being used in the market, and that is feedback for you to improve your product going forward. Does that make sense?

What did I miss in this approach? Sorry? Regression? It's all automated. Actually, by the way, slightly off topic: regression, smoke, sanity — how many of us use those terms or similar ones? Get rid of them. Why? Because it just means you've got so much automation in place that you have to split it up in different ways to get quicker feedback. It just means you have not focused enough on unit, integration, and API testing, and you are just continuing to pile UI tests on top. Do the right type of automation, and then you just say, I'm going to run my tests. Do developers say, I've got my smoke tests, regression tests, unit tests? No, they just run unit tests.
They run within minutes, and even that might be too long, right? We need to focus on getting our functional test execution to happen as fast as possible to get that feedback. Splitting into smoke, sanity, or whatever is a band-aid approach to getting quicker feedback; try to see if you can avoid that categorization in the first place, or work out how you can get to that stage. Split it up component-wise instead. I don't care whether I'm running smoke or regression: I want to run the tests related to download, because my product has evolved and I have new functionality for download, so let me run all my tests for download. That's your risk analysis, and you reason through it: download means I also need to be able to search, for example, so I say I want to run my search and download tests (a small runner sketch for this appears at the end of this section). It's very easy to say I want to run my smoke or sanity suite; instead, think about how you can speed up the feedback cycle from these tests. The harm is that you stop thinking about how to reduce the tests that are adding brittleness to your test feedback.

So, repeating the comment for those who didn't hear it: what is the harm of having these categorizations, since at least we'll be able to organize positive tests, negative tests, those types of things? Yes, that is one way of thinking about it. The other way to look at it: the developers made a change two weeks ago, and by the time I build and deploy and run these tests and get the feedback, time has passed and the developers have moved on. Could they have found these issues earlier in the cycle? It's all about testing early. I'm not saying a product should have X number of tests, or that there is some fixed ratio between unit and functional tests. Depending on the context of the app, you might say one end-to-end functional test is sufficient, because I've got really great coverage in the layers below in the test pyramid. Think very consciously about these aspects, instead of just saying, I'm in the QA team, I'm an SDET, I'm just going to keep writing tests. Can that test be automated at a lower layer of the pyramid? If not, can you enable it to be written at that lower layer? If not, and there is still value, then of course write that test at the top layer. The point is all about the feedback cycle and how quickly you can find issues as a team, not as a QA team that implements automation while another team monitors the results, creates defects, and sends them to the devs later on. That is the thought process I'm coming from; the smoke, regression, sanity categorization is a byproduct of not thinking about early feedback. And if, in spite of all this, you feel it is taking too much time, think about parallelization: instead of two threads, can I run 10 tests in parallel? Will that give me better feedback? Think about these different aspects as well.

There were some thoughts, so, do we have time? Okay, I'll just step down and we can continue the conversation while the next speaker gets set up.
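As a closing illustration of the component-wise selection mentioned above: with Cucumber-JVM you can tag scenarios by feature area and run just the slice relevant to the change at hand, instead of maintaining smoke/regression buckets. This is a minimal sketch, assuming a JUnit 4 runner and hypothetical @download and @search tags on the feature files; the paths and package names are placeholders.

```java
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// Runs only the scenarios tagged for the areas touched by the change,
// e.g. everything related to search and download, instead of a generic
// "smoke" or "regression" bucket.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",           // hypothetical feature-file location
        glue = "com.example.streaming.steps",                // hypothetical step-definition package
        tags = "@download or @search",                       // component-wise selection
        plugin = {"pretty", "json:target/cucumber.json"}     // JSON output feeds custom cucumber-reports
)
public class DownloadAndSearchTestRunner {
}
```

In recent Cucumber-JVM versions the same selection can be made at runtime, for example with -Dcucumber.filter.tags="@download", so the tag expression in the runner is just a default rather than a fixed suite.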