So, hi everyone. Welcome to my talk on how to write test cases for a faster regression suite. My name is Chitran Singh and I head the quality team at BrowserStack. Today I'm going to share a few pointers on how you and your team can bring your regression time down from hours to minutes.

"Move fast and break things." Zuckerberg said this 15 years ago, but everyone in the valley, everyone in tech, lived it. Even if you were breaking things in production, you were moving fast enough: releasing working software was better than perfect software sitting on your computer. This changed everything. Software started getting pushed out the door exponentially faster, MVPs were released faster, and that led to the explosion of tech startups we have seen in the past decade. Move fast and break things worked, but only up to a point. Beyond a certain point of product maturity, the quality of the product becomes critical, because now more and more people are affected when a bug hits production. Take Facebook as an example: a minor oversight led to millions of dollars in losses per minute. When this happened, Mark Zuckerberg realized that fixing a bug in production slowed them down in the long run. Facebook had outgrown its motto. Similarly, as more companies in every industry digitally transformed themselves and matured into software companies, the world started running on software, and it became apparent that breaking things was not at all okay. Now we live in a world where you have to move fast and not break things.

Today's most successful companies move fast without breaking things. The companies that win all have one thing in common: they move fast. Trailblazers like Amazon and Netflix deploy to production thousands of times per day. Then you have companies like Adidas that have become software companies in their own right; Adidas deploys to 50 global markets daily. Speed matters. On the other hand, quality matters just as much, and companies that break things pay a heavy price. Let's look at one example. In 2012, Knight Capital was the largest trader in US equities, trading over $21 billion every day. Until one fine day, a deployment error fired off automated trades by mistake. It went on and on without stopping, and unfortunately there was no kill switch. After 45 minutes they managed to stop it, but by then the firm had lost $460 million. The loss exceeded their assets and they went bankrupt. These countless examples of the cost of poor quality have one takeaway for all of us: bugs in production are bad for business. Each of these software failures could have been prevented with the right mechanisms in place. The question we face as software teams is: how do you move fast without breaking things? How do you get quality at speed? There could be many answers, but today I will focus on perhaps the most critical element, your regression suite.

So without further ado, it's time for some learning. How do you make your regression suite faster? I will cover this in two parts. First, how do you write faster test cases? Here I'll talk about the key practices we need to adopt and adhere to. Second, how do you execute these test cases faster? Once you've written the test cases, how do you make sure they run faster and give early feedback? Let's start with the first one: writing test cases that run faster. In fact, how do you enable yourself, or anyone, to write test cases faster?
Anyone developing an automation framework starts with the right intent: the automation should be structured, and it should be easy for anyone to add test cases to it. Typically, a developer or tester structures the framework into feature files, which capture the scenarios and requirements of a feature; step files, which contain the code for the test steps of a particular scenario; and page files, commonly based on the page-object-model design pattern, which create an object repository for the web UI elements on a page. However, as more and more test cases are added over time, the code becomes bloated and you start to observe code smells. Then you start asking yourself: does my code need refactoring?

Here are some signs that your code is bloated and needs refactoring. Is it hard to make code changes? For example, for one simple UI change in a web page, you have to update many files in the automation framework. Or worse, for a UI change like a dashboard revamp, your automation estimates and effort turn out to be huge. Do you get confused looking at the code, unsure which method to use, or do you observe duplicate helper methods that look almost identical? Do your page file classes run to 2,000 lines of code? Usually, when we keep automating test cases under tight timelines and in a hurry, we unknowingly keep adding code to a single place and never notice that a page file has easily crossed 1,000 lines. We tell ourselves we'll refactor later; that "later" never comes. Are your functional UI tests extremely flaky or unstable? A team celebrates having automated 5,000-plus test cases, but quite often ignores that almost 1,000 of them are flaky. This increases the team's analysis time, putting a brake on the velocity with which the team can release features to production. Or do you or your team spend a lot of time maintaining the code? If the answer to any of these questions is yes, then you should consider refactoring your code.
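To make the page-file idea concrete, here is a minimal page-object sketch in Java with Selenium. The page, the locator values, and the method names are my own illustrative assumptions, not from any particular framework; the point is that locators live in exactly one place, so the "one UI change, many files" smell never appears.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical page object for a sign-in page; the locator values are made up.
public class SignInPage {
    private final WebDriver driver;

    // Locators live in exactly one place, so a UI change is a one-line fix
    // here instead of edits scattered across many step files.
    private final By emailField = By.id("email");
    private final By passwordField = By.id("password");
    private final By signInButton = By.cssSelector("button[type='submit']");

    public SignInPage(WebDriver driver) {
        this.driver = driver;
    }

    // One small, intention-revealing method per user action.
    public void signIn(String email, String password) {
        driver.findElement(emailField).sendKeys(email);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(signInButton).click();
    }
}
```

A step file that needs to sign in calls signIn() and never touches a locator directly.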
So how do you fix this? By focusing on these key points: first, instrumentation and adoption of the right metrics; second, revisiting your automation strategy model; third, focusing on and practicing the characteristics of a good test case; fourth, fixing code smells, that is, refactoring your code; and finally, using selectors smartly. Let's revisit the basics of a strategy that helps us fail fast and get early feedback — basically, a shift towards the left.

Metrics and instrumentation. I'm sure all of you would agree that in today's world, before we do anything, we should know how to measure it, so that we know we are on the right track. Basically, how do you prove that your strategy is working? Automation analytics helps. We need to show our stakeholders that every penny spent is worth it — that there is ROI, a return on investment. Instrumentation is key to knowing whether things are working fine. Just as with production systems, it is imperative that we adopt and monitor the right metrics to confirm we are moving in the right direction; monitoring is crucial. Quite often, QA managers are asked some interesting questions: how do automation and testing objectives align with company goals? What is the contribution of quality to the revenue of the company? How fast do you release your code to production? Interestingly, testers themselves sometimes ask: what impact am I creating in the organization? What is my contribution? How do we know that the automation strategy we have adopted, and the automation we have built, is in fact working and giving us the right ROI? The aim is also to drop misleading metrics. Hence, the metrics on this slide are quite helpful.

Number of new automated tests created. Measure the total number of new automated tests created, not how many manual test cases you have automated, in a given period of time. This period can be a week, a month, or a quarter.

Number of bugs found in production. Measure how many bugs make it through to production, not how many bugs are caught during development. This metric helps you measure how effective your automation coverage is. It also indicates the confidence level of the team: if thousands of unit, integration, and end-to-end tests are written and the confidence level is still very low, then all the effort and money spent is wasted, and the morale of the team goes down. The number of P0 or P1 bugs found in production also affects the reputation of the product, and hence of the company.

Execution time. The time taken by the CI/CD pipeline, the build, each test case, and each test step is important. Measure the time taken by each of them, and identify and apply the right thresholds. Also measure the build and the overall CI/CD pipeline time. It is good practice to instrument at a granular level. For example, if you have instrumentation at pull-request time, or at checkout and deploy time, it helps you see how the metric contributes to your company's goals — in effect, what the time to market is once a particular PR is raised.

Number of times a test case failed. Measure the stability of a test case: how flaky is it? The aim should always be to decrease this number. It increases the productivity of the tester who is analyzing the results.

Second to last, the test case failure category. Capture and categorize test case failures. Is it a genuine product failure? Is it an environment issue? Is it an automation script bug? Nowadays there are ML- and AI-assisted tools, like ReportPortal.io, which can automatically categorize a test case failure as a build finishes. This not only increases the efficiency of the team but also lets you automate corrective steps when a particular failure category is detected; for example, if a failure is categorized as an environment issue, the framework can automatically rerun the test.

Last, resource utilization. Every machine that tests run on has a dollar value associated with it; resources are always costly. Monitoring of CPU, buffer and swap memory, disk, network, and so on is crucial. It helps detect bugs and the under-utilization of a machine, and it helps you optimize and ultimately save dollars.

If you ponder the metrics just mentioned, you will observe that they all improve the productivity of the team. If you're releasing features faster, with high velocity, it directly impacts revenue, as customers get features fast and feel cared for and acknowledged. With the adoption of A/B testing frameworks across the industry, these metrics have in fact become even more important. A tester can relate the impact he or she is creating in the team to the profitability of the company. And the more under-utilized your resources are, the more of the company's money and time you're wasting — so the optimization factor aligns directly with your company's goals.
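As a small illustration of granular instrumentation, here is a sketch of a TestNG listener that records per-test timing and failures. The listener name and the plain logging are my own assumptions; in a real setup you would push these numbers to whatever analytics store backs your dashboards.

```java
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Hypothetical listener: records per-test execution time and failure details
// so they can feed the metrics discussed above. Names and logging are assumptions.
public class TimingListener extends TestListenerAdapter {

    @Override
    public void onTestSuccess(ITestResult result) {
        long millis = result.getEndMillis() - result.getStartMillis();
        // In a real setup, push this to your metrics store instead of stdout.
        System.out.printf("PASS %s took %d ms%n", result.getName(), millis);
    }

    @Override
    public void onTestFailure(ITestResult result) {
        // A failure-categorization step (product bug / env issue / script bug)
        // would hook in here, e.g. before deciding on an automatic rerun.
        System.out.printf("FAIL %s: %s%n", result.getName(), result.getThrowable());
    }
}
```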
We all know that automation at scale needs a proper strategy, specifically when multiple engineers are contributing to it, with the aim of releasing fast without compromising quality. Whenever I am assigned a new project, or in fact join a new employer, the first thing I do is understand the testing landscape of the company's product. As you are aware, automation testing usually revolves around three major layers: end-to-end tests, also called functional tests, which focus on user behavior; integration tests, which consist of API or component-level tests; and finally unit tests. So I start by asking questions like: what is the distribution of manual and automated testing? How many UI test cases are automated? What is the strength of the integration tests — in fact, do they even exist? What is the unit test coverage? What is the confidence level of the team in the automation? What is the time to market of a feature as soon as it is developed, that is, once the PR is raised?

As soon as I get answers to my questions, I start observing patterns, and more precisely the anti-patterns. The two most common anti-patterns I have observed are the inverted pyramid, or ice cream cone, and the hourglass. In the ice cream cone, the team relies primarily on end-to-end tests, using few integration tests and even fewer unit tests. The hourglass is observed when the team starts with a lot of unit tests, then uses end-to-end tests where integration tests could be used: it has many unit tests at the bottom, many end-to-end tests at the top, and very few, in fact negligible, integration tests in the middle. Once you have observed the anti-pattern, your aim is always to fix it and reduce deployment time, with a quicker time to market for your business features.

So what is the right automation strategy? One way of getting there is to use the test pyramid, which I'm sure you have already heard of, and probably even thought of just now while I was describing the anti-patterns. Whether you are starting a new software development project or working on an existing one, it is important to have the right strategy in place. The test pyramid, also known as the automation pyramid, tells us that the cost and slowness of automated tests increase as you go up the pyramid. Establish your own test suite with the simplicity of the test pyramid; it serves as a good rule of thumb. Your best bet is to remember two things from the test pyramid: write tests with different granularity, and the more high-level you get, the fewer tests you should have. Stick to the pyramid shape to come up with a healthy, fast, and maintainable test suite: write lots of small and fast unit tests, write some coarse-grained integration tests, and write very few high-level tests that exercise your application from end to end. Watch out that you don't end up with the ice cream cone, or inverted test pyramid; it would be a nightmare to maintain and take far too much time to run. Remember, complexity rises as you go up the test pyramid. With more UI tests, your execution and build time increase, and the probability of writing non-deterministic tests increases, resulting in flakiness — you have to keep flakiness in check. The setup cost increases too, as UI tests require infrastructure, and no doubt your ops becomes very heavy. Having discussed all this, it does not mean that you should not write UI automated tests. They are very much required to gain the right confidence from the user's perspective.
The point is to have the right balance of all capabilities, so that you can get early feedback and the overall automation strategy gives the team enough confidence to release business features faster to market. Sometimes, when you want to adopt and implement a certain testing strategy or model, you find that you cannot, at least not in the immediate future. This could be because of, for example, monolithic or legacy code, where introducing mocking services or unit tests would be a huge effort, with a lot of refactoring adding to the team's tech debt. Or take writing unit tests for 100% code coverage, which is overkill: if you say you want to achieve 100% code coverage through unit tests alone, that may not be needed, and realistically you're not going to get there. Then what is the way to get early feedback and have team confidence in the automation? From my experience, I've observed that even when you cannot write many unit tests, you can still write integration tests. This leads me to another very good testing model, the testing trophy. It is a bottom-to-top approach in which static analysis is introduced in the development phase, just below the unit tests: you use static type systems and linters to capture basic errors like typos and syntax mistakes. Second, unit tests: the intent is not to write unit tests for code that doesn't have any logic, but to write effective unit tests that target the critical behavior and functionality of your application. Integration tests should be the maximum, testing all the logical flows between different components: develop integration tests to audit your application holistically and make sure everything works together in harmony. Keep end-to-end tests to a minimum, automating the mandatory UI functionality that cannot be tested by integration tests: create end-to-end functional tests to automate the critical paths instead of relying on your users to test them for you. There is no hard line on the percentage of each section; I would advise using what works best for your team, based on the metrics shared earlier. A good example: if your business provides infrastructure on the cloud and you have to automate the business logic that handles those infrastructure units — their setup, allocation, teardown, and cleanup — then to make sure all the components work together seamlessly, you may have to write more integration tests, and writing unit tests may not be applicable there; it probably doesn't even make sense. However, I would highly recommend having static code analyzers in the development phase irrespective of the model. Static code analyzers help you enforce coding guidelines and automate code reviews.

Once you've identified the right model for your automation strategy, let's dive into the characteristics of a good test case. A test case should be repeatable: it should yield the same result on every execution and can be used to perform the test over and over. That is, a test case should be consistent. For example, a user usually signs in with a valid username and password on a website; here, the final state of the user should always be "logged in", irrespective of the number of times you execute the test. Identify generic test steps: write them in such a way that they can be reused across multiple test cases. Look for the common steps in all the test cases for the functionality you're automating.
For example, the sign-in and sign-up functionality, or the search functionality, of an e-commerce website. Every test case should have a defined, particular purpose. Test cases should be clear, concise, and complete, so that they support the intended test scenario of the requirement. All test cases should be traceable back to the actual requirement, and they should be easily trackable in the test automation reports too: when a test case fails, the report should make it easy to identify which test failed and for exactly what reason. Test cases should be atomic, always focusing on one aspect without affecting the outcome of other tests. A test case focused on one single feature makes the intent of the test clear: if it fails, you have a clear idea of what needs to be fixed. Atomic test cases consciously test just one thing; they do not attempt to test various conditions in a single test case.

To further explain all these attributes, let me take the example of an e-commerce website. Consider a scenario where an existing user purchases a product via the cart. The usual steps for a user are: visit the e-commerce website; sign in using a valid username and password; search for a particular product with a certain keyword; identify and select a product of choice and add it to the cart; and finally purchase it using the correct payment method. So the atomic test steps would be: test sign-in, test search, test select product, test add to cart, and test purchase. With these atomic steps defined, they can easily be reused across other scenarios. Take, for example: an existing signed-in user removes items from the cart. In this scenario, the user has to add a few products to the cart to complete the test — it's a precondition. The atomic test steps we would reuse are all of them except the purchase step. Further, sign-in, search, and purchase could be required for other flows; testing them independently makes sure they are not retested in every flow. For other flows, we can simply short-circuit sign-in, search, and add-to-cart via backend APIs, or even by setting the appropriate session tokens. Since we are no longer re-running those steps through the UI, testing different flows is considerably faster. Running a single script that tests one huge flow leads to a debugging nightmare; with atomic steps, for the example scenario above, it is easier to know what broke the automation — was it sign-in, search, or purchase? We split the flow, so breakages can be localized, fixed, and retested quickly. Also, as these tests are now independent of each other, we can go faster by running them in parallel.

Let me pick another scenario: an existing user adds a product to a wish list. Here, as you would have guessed, you need to create just one new atomic step, add-to-wish-list; the other atomic steps can be reused again, like sign-in, search, and select product. So, as you can clearly observe, these test cases, or test steps, are repeatable: they give the same expected outcome irrespective of the number of times they're executed. They're reusable: they can be used again and again. They're accurate: they serve their intended purpose. They're traceable: they can be tracked back to the scenario. And they're atomic: they test only one thing at a time.
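Here is a rough sketch of what those atomic, reusable steps could look like with TestNG. The class names, the page objects behind the stubs, and the API-seeding helper are all illustrative assumptions, not the speaker's actual framework.

```java
import org.testng.annotations.Test;

// Hypothetical atomic steps for the e-commerce flows discussed above.
class ShopSteps {
    // Each step does exactly one thing and is reusable across scenarios.
    void signIn(String user, String password) { /* drives the sign-in page object */ }
    void search(String keyword)               { /* drives the search page object */ }
    void selectProduct(String productId)      { /* drives the product page object */ }
    void addToCart(String productId)          { /* drives the cart page object */ }
    void purchase()                           { /* drives the checkout page object */ }

    // Preconditions are seeded through the backend API instead of the UI,
    // e.g. a POST to the cart endpoint -- much faster than clicking through.
    void seedCartViaApi(String user, String productId) { /* calls the product's API */ }
}

public class RemoveFromCartTest {
    @Test
    public void existingUserRemovesItemFromCart() {
        ShopSteps steps = new ShopSteps();
        steps.signIn("user@example.com", "secret");
        steps.seedCartViaApi("user@example.com", "sku-123"); // precondition, not under test
        // Only the behavior under test is exercised through the UI:
        // cartPage.removeItem("sku-123"), then assert the cart is empty.
    }
}
```

Because each step is independent, a failure points straight at the broken step, and independent tests like this can also run in parallel.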
So let's move to the next step: refactoring code. While refactoring, identify code smells and then eliminate them. For example, duplicate code: merge two code fragments that look almost identical. Alternative classes with different interfaces: two classes that perform an identical function but have different method names — consolidate them. Simplify your if statements, possibly replacing them with switch statements; this makes the code easily readable and understandable. Split long methods: generally, any method longer than ten lines of code should make you start questioning the intent of the method. Dead code: delete any variable, parameter, field, method, or class that is no longer used; remove all obsolete code. Long page files: split your page files based on the UI components on the page, and if required, create folders accordingly.

The pointers just mentioned can easily be identified using static code analyzers too. Nowadays, IDEs have very powerful capabilities to highlight code smells and suggest fixes — IntelliJ IDEA, Visual Studio, and so on. In the development phase, static code analyzers are usually used to automatically examine source code before the program is executed. We can apply the same concept to our automation code: when a tester creates a PR, the tool is triggered and executed. This helps avoid human errors during code reviews; in fact, you can automate much of the code review using these tools. For example, if your automation code is written in Ruby, you can use Pronto with the RuboCop static code analyzer and the right rules configured. Such a tool can automate your code review to a great extent, and thus enhance your team's productivity.

Once we are done with the authoring, or writing, of automated tests, the next significant part is faster test execution. You have thousands of tests — how do you make sure they run faster and give early feedback? In an agile world, speed matters, with a focus on high productivity without compromising quality. For productivity, it is crucial to understand and value the time of a resource, whether a developer or a tester. The key is avoiding and minimizing the number of context switches. If a bug is found late, when a developer has moved on to the next story or feature, then he or she has to come back and fix it. Similarly, if there is a high number of dev-to-QA cycles, the productivity of the team is hampered and team velocity goes down. So how do we get early feedback? Fast test execution is essential.

We all know selectors play a crucial role in the automation of UI tests using Selenium; for faster UI tests, we have to use them the right way. Selectors let you search for a particular element on the web page you are testing; you can then interact with the element by clicking, sending keys, and so on. There are multiple types of selectors: find by ID, find by class, find by name, XPath, CSS. All of the selectors listed here can find the same element, but there are differences in their underlying implementation. Find by ID: an ID is unique for a given element on a web page, and find-by-ID uses the browser's getElementById, which is optimized in almost all browsers. Find by XPath: XPath searches by traversing the DOM tree, trying to find the element that matches the expression. This in itself takes more time than finding by ID, and it is not well optimized in all browsers, especially older ones like the older versions of IE. An ID may be missing for some elements, but XPath can be used to locate any of them.
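As a quick illustration, here are the three lookups side by side in Selenium's Java API, fastest first; the locator values are made up for the example.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Three ways to locate the same (hypothetical) search box.
public class SelectorExamples {

    static WebElement findSearchBox(WebDriver driver) {
        // 1. Prefer ID: unique per page and optimized in every browser.
        return driver.findElement(By.id("search"));
    }

    static WebElement findSearchBoxByCss(WebDriver driver) {
        // 2. CSS selector: traverses parent-to-child, still fast.
        return driver.findElement(By.cssSelector("input[name='q']"));
    }

    static WebElement findSearchBoxByXpath(WebDriver driver) {
        // 3. XPath: most flexible (can walk up the DOM) but slowest.
        return driver.findElement(By.xpath("//input[@name='q']"));
    }
}
```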
Let me share some benchmarking numbers that were crunched for find-element-by-ID versus XPath. For comparison, findElement by ID and findElement by XPath were each run 50 times on the same element. The benchmark was done on the Instagram app, and it was observed that using ID was 14% faster than XPath. The same experiment was done on the Wikipedia app; in that case, using ID was 19% faster. On CSS versus XPath: XPath can traverse up the DOM, i.e. from child to parent, whereas CSS can only traverse down the DOM, from parent to child, and in modern browsers and mobile devices CSS performs better than XPath. So for faster test runs, your preference should be findElement by ID, followed by CSS selectors (by class or name), and only then XPath. Before moving to my next slide, I would like to emphasize: understand your application and how it behaves under different network and load conditions. Using appropriate libraries in the programming language of your choice, benchmark the performance of the selectors against the application under test, under the same network conditions in which your regression suite will run.

Moving next to APIs: using APIs and mocking in conjunction with UI tests really helps. Remember the test pyramid I talked about a few minutes back — the right balance of UI and API tests is the key. Wherever possible, use APIs to reach the initial state of the main test objective. Linking back to my earlier e-commerce example, where an existing user signs in and should be able to remove all items from the cart: here we can easily skip the common steps that bring the user to the initial state of already having certain products in the cart, by using the APIs exposed by the product — sign in, search product, add to cart, sign out. So what are the benefits you observe with API testing? APIs provide faster feedback and hence reduce regression time. They reduce cost and help increase productivity without compromising quality. They can be integrated with UI functional tests and mocking services. API testing is interface independent, as data is exchanged using XML or JSON — your main application and your automation can be implemented in totally different languages. And with the use of mocking services, you can test early, as early as the development phase, before the code is even pushed to a test environment; this helps reduce dev-to-QA cycles.

Next, Selenium waits. As we all know, Selenium waits are very important for executing automated test scripts. Waits help us handle variations in timing, such as the loading of web elements on the page. Most web applications are AJAX- and JavaScript-based, so on page load we see various web elements appearing and becoming interactive at different times. This obviously creates difficulty in locating the right element, and we often see an "element not visible" exception. Selenium waits help us a lot here. To use them well, it's important to understand the application's behavior in the test ecosystem and under different network conditions. I'm sure you are all aware of the different Selenium waits — implicit, explicit, and fluent — so I won't dive into their definitions. The points you need to keep in mind while handling these time lags and AJAX requests: first, immediately stop using Thread.sleep. A minimal sketch of the alternative follows.
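Here is what that replacement looks like with the Selenium 4 Java API; the locator and the ten-second ceiling are illustrative assumptions.

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExample {

    // Instead of Thread.sleep(10_000), wait only as long as actually needed.
    static WebElement waitForResults(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        // Polls until the element is visible, then returns immediately --
        // the test never sleeps longer than the element takes to appear.
        return wait.until(
                ExpectedConditions.visibilityOfElementLocated(By.id("results")));
    }
}
```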
I have observed quite often that many developers and automation engineers use Thread.sleep a lot. They usually wrap it in while loops with a duration of one second, and this pattern easily spreads and gets adopted by everyone else across the automation code. So just avoid it; you should use explicit waits instead. Second, implicit waits, once set, are effective for the entire browser session. Once, I observed too many implicit waits used throughout the code, and just by removing them I reduced the execution time by 30%. Third, don't mix implicit and explicit waits; this can cause unpredictable wait times. For example, setting an implicit wait of 10 seconds and an explicit wait of 15 seconds could cause a timeout to occur after 20 seconds. Furthermore, implicit waits are often implemented on the remote side of the WebDriver system, whereas explicit waits are implemented entirely in the local language binding. Thus, if you're using a RemoteWebDriver or a Selenium Grid, you may run into undefined behavior and unpredictability. So don't do that; use explicit waits with expected conditions.

Another powerful tool in our arsenal for faster execution is scalability with parallelization. When the number of automated test cases increases, it is important that we have optimized infrastructure to run them promptly and in parallel. The two important factors you should consider are parallelization and dockerization. With many teams working in tandem, the number of jobs to run their tests also increases exponentially. For example, if you're using CI/CD tools like Jenkins, there should be enough slaves to run all the tests and jobs. So how do you increase the Jenkins slaves on demand — that is, how do you scale horizontally? Quite often I've seen automation or DevOps engineers manually set up Jenkins slave machines, or try to optimize them using AWS Auto Scaling. What you usually miss is that the slave machines' resources are not used fully; your resource utilization is not optimal. The recommendation is dockerization of the test infrastructure. Docker helps us utilize all the resources of the machine you're paying for: with Docker, you can easily configure multiple slaves on a single machine. Use the parallelization capabilities or features of your testing framework to run your tests in parallel, and combine this with a Selenium Grid, which can be set up locally or, preferably, consumed from a cloud provider. Consider the TestNG framework, for example: TestNG provides an XML suite file where you can set the parallel attribute to methods, tests, or classes, and by using Java's multi-threading, you can set the right number of threads and achieve parallel execution — I'll show a sample suite file in a moment. It is a known fact that parallelization helps reduce execution time: you can complete the test cycle faster, resulting in quicker delivery and a better ROI. An interesting fact to note: sometimes just increasing the number of threads may not substantially reduce the execution time — it could be due to external factors, or limits on the resources allocated to you. So choosing the right number of threads, that is, the right thread pool size, is very important. However, there are caveats to parallel testing. For different modules to run in parallel, you need to create independent modules; modules with dependencies cannot be included in a parallel approach.
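For reference, the TestNG suite file mentioned above looks roughly like this; the suite name, class names, and thread count are placeholders.

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<!-- Hypothetical testng.xml: run test methods in parallel on four threads. -->
<suite name="regression" parallel="methods" thread-count="4">
  <test name="checkout-flows">
    <classes>
      <class name="tests.RemoveFromCartTest"/>
      <class name="tests.WishListTest"/>
    </classes>
  </test>
</suite>
```

Setting parallel to "tests" or "classes" instead of "methods" changes the unit of parallelism; the thread count is the thread pool size discussed above.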
For parallelization, one needs a detailed understanding of the product and its flows to get good results. And even though parallelization can help with cross-browser compatibility testing, its coverage of multiple browsers is restricted unless it is complemented by distributed testing, where a setup of multiple machines and browsers is provided. Owing to the need for access to multiple platforms and browsers to run tests in parallel, the cost of compatibility testing with parallel testing increases, and you may reach a point where access to all the browsers and versions is simply not possible. Hence, opting for cloud-based testing really helps: a cloud-based provider enables you to do distributed testing across the globe, which is very resourceful.

Teams that are not on Docker usually face interesting challenges, like: how do I always get a clean state of the setup before the automation starts? How effective is my automation infra? How do I simplify the test setup? Is doing an upgrade a pain? Dockerization of the test automation infrastructure helps us solve these problems. As Docker enables us to utilize resources fully, it gives us ROI and helps us save costs. It helps decrease deployment time. Our test automation runs on Jenkins slaves running in containers that are completely segregated and isolated from each other; containers provide full isolation of the file system, network, and processes. This guarantees us a clean setup and helps reduce flakiness. Docker also simplifies things and gives users the flexibility to take their own configuration, put it into code, and deploy it without any problems. For continuous integration, Docker works well with tools like Travis, Jenkins, or even TeamCity. These tools help us with version management, building a Docker image every time the source code is updated. With the help of Docker, we can build a container image and use that image at every step of the deployment process, with the ability to run jobs in parallel — in fact, you can even queue them. This helps make our CI/CD efficient.
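As an illustration, a small local Selenium Grid on Docker can be described in a compose file like the one below. The images are the Selenium project's official ones; the version tag and the single Chrome node are assumptions for the sketch.

```yaml
# Hypothetical docker-compose.yml for a minimal local Selenium Grid.
version: "3"
services:
  selenium-hub:
    image: selenium/hub:4.8.0
    ports:
      - "4444:4444"          # tests point their RemoteWebDriver here
  chrome:
    image: selenium/node-chrome:4.8.0
    shm_size: 2gb            # Chrome needs extra shared memory in containers
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
```

Tearing the stack down and up again between runs gives exactly the clean, isolated state described above; adding more node services scales the grid horizontally.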
So, to conclude: use the right metrics and monitor them to make sure you're on the right track. Use the right test strategy, whether you opt for the test pyramid or the testing trophy; choose after understanding your testing landscape. Make sure your test cases are atomic and that you practice good test-case characteristics. Start using APIs in your UI functional tests; it helps with faster execution and reduces flakiness. Execute your tests in parallel and move your test setup to Docker. So that's it from my end. Thank you all for attending this session. I hope you learned something new today.

So Chitwin, hi. We have a few questions in the Discuss tab under the Q&A section. Yeah, sure. Can we calculate code coverage for Selenium tests with respect to the web application code? There are some tools with which you can do that, but you need to tweak them. In what ratio should we divide the test cases at the unit, integration, and E2E levels? I would say, work as a team to identify that. Giving numbers — it depends on the organization, the application, and the business use case. That is why I mentioned the use of metrics earlier; those metrics help you gauge the required level for each of the unit, integration, and end-to-end layers. In the Agile Scrum model, what is the best practice for doing automation: the same sprint or the next sprint? Practically speaking, I have seen there's always a spillover, which actually works sometimes, because before the next story comes in for you to test, you can write the automated test cases for the previous sprint. So if you're practicing it right, it takes care of itself automatically, provided you've done proper sprint planning. Any other questions? Sorry — the question is: in Agile, who is responsible for quality? I would say it's the whole team. It's the whole team that is responsible for quality. Can we segregate which tests to run in parallel and which not? It depends. For example, if you're using resources that the tests depend on — it could be desktops, browsers, or devices — in that case, you may have to have a queue concept there; you may have to queue your test cases. So I would say, use proper judgment and see how things are working. Don't set things up without thinking and proper analysis. I really emphasize that everyone should have a proper understanding of how things are working in the framework and monitor it at regular intervals; as a team, you should try to do that. We have one more question — two more, actually. Where can I see those? Does using static variables cause trouble when running tests in parallel? Yes, it can. It depends on how you have written the code and how you are using them; it totally depends on how you have written the automation framework. You have to be very careful while using them. Should we also capture the number of defects found manually in the metrics? I would say, do both of them if possible. It will help you in the long run to know where the errors are and where, as a team, you need to improve. Any other questions? Yeah, there is one more: the difference between parallelization and dockerization. Okay, so it's very simple, right? Say you are starting your tests on your laptop or desktop. You can spawn five different Firefox browsers; in the TestNG example that I gave, using parallel methods, you spawn them and run your tests. This is parallelization. Dockerization helps you when you want to scale as a team. Imagine you have 5,000 or 10,000 tests you want to run, and you want to divide your tests because you have multiple pipelines — many people are moving to microservices, right, so you have multiple repos and multiple pipelines. So how do you get a clean setup? When one test is finished, there should be a clean environment for the next test, or the next job, when it starts. Dockerization helps with that. So dockerization is basically about infrastructure; parallelization is about test cases. You can even run a Selenium Grid locally, or you can run on a cloud Selenium Grid.