Welcome everyone to "Change Tires in a Moving Car: Make Functional Test Automation Effective" by Anand Bagmar. So without any further delay, over to you, Anand.

Thank you. Good morning, good afternoon, good evening from wherever you are joining. I'm extremely delighted and happy to get this opportunity to give a keynote talk at a conference for a project that I very strongly believe in. So thank you everyone for making this happen and giving me this opportunity. I am Anand Bagmar. I've been in the quality space for more than 20 years now. I've been part of product and services organizations and have worked across the globe. I have also been a contributor on various open source projects, including Selenium, and I have a few open source projects of my own. I look forward to continuing this conversation beyond this conference as well. You can connect with me on LinkedIn and Twitter, and I would be happy to discuss testing, discuss quality, and learn more from you in the process.

So let's get started. I'm going to need some interaction from you for the next set of slides at least, if not more. The first thing is, I want you to raise your hand if you disagree with any of these statements. Your automated tests run fast, they need no manual intervention at all, and you can run them across different environments, in parallel, locally, or on any other machine. Can you raise your hands if you disagree? Okay. Great. Thank you.

Next question. Your automated tests are triggered automatically on any product change, and they can run on demand, on any different browsers or devices, without making any code changes or configuration changes at all. So I see some hands from people who disagree about the current state of their automation, and thank you for being honest over here, because this is important.

Next thing: your automated test code is clean, easy to maintain, reuse, and scale, and your tests can run in any order and in parallel. Okay, I see people disagreeing.

Your automated test results come as a consolidated report, you can do some trend analysis from them as well, and the results are rich enough that you can do a root cause analysis very easily on what has happened with your tests. Okay. Great. I see some responses again on that. Thank you.

Last question. Your automated tests give you deterministic feedback. That means there are no flaky tests; the tests give the results exactly as they are coded for, and as a result of doing all this automation, your manual testing efforts are reduced. Okay, I see some people disagreeing on this as well. These are the kinds of results we are getting. So great, you are in the right talk.

The question over here is: why do you disagree? Why is your automation not effective? I think the answer is really simple. One, you've got too many things that you are doing; there's too much on your plate. Second, you're constantly running against time; there's so much to be done in a limited time. And that is what this talk is really about. How can you be changing tires in a moving car? Essentially, the analogy is that you are doing so much automation work, trying to keep your scripts up to date and running effectively while implementing new tests for the new features that are going on, and you have no time to make sure your framework is doing the right thing. You have tech debt.
There is no way you can really address it in an effective fashion, because you're constantly running against time and you constantly have too many things to do. And this is where we come across anti-patterns, anti-patterns which prevent the desired outcome, the desired end state, from your automated tests.

What is an anti-pattern? From a software engineering perspective, it's a common response to a recurring problem that is usually ineffective and risks being counterproductive. These are ideas that seem good but are not really good practically, and in the long run they prevent you from doing the right thing.

But still, why do we do this? That's an obvious question, and the reasons have to be understood. One, it's like a deer caught in the headlights. You just have so many things to do, you are put under pressure, under the gun, under the spotlight; you don't know how to react, and you want to take the quickest decision possible to move forward. The other reason could be that your focus is very narrow, on a specific area of your product or your automation, and you don't think about the big picture. You could also have limited information, based on which you are not able to take the right decision going forward.

So what I want to do today in this session, in the next 35-40 minutes, is look at some common anti-patterns that we unintentionally end up adopting: what the impact of these anti-patterns is in the long run, why they prevent our automation from working the way it needs to, and more importantly, how you can address them. There are a lot of anti-patterns we could talk about, and in this session I'm going to focus on eight of them.

I'm going to start off with code quality. Again, I'm going to request your participation in the form of raising hands over here. How many of us have code that looks like this? This large, complex code, which actually makes it difficult to read and understand what is going on. This is deeply nested code, with assertions all over the place. It is impossible to understand, and if I look at it, I actually get scared, because I don't know what impact any change I make is going to have, what else will break because of it. This code has an extremely high cognitive complexity and an extremely high cyclomatic complexity. If you don't know these terms, look them up; it's not very difficult to understand what they mean and what their impact is.

The main challenge with this type of code is: yes, when you start writing it, that test works fine. What happens after a few weeks, a few months, or when new team members come and have to reuse and extend that code base? That is where the problem starts. It is easy to get to that stage, but it is very difficult to scale and keep using that code as you proceed.

How can we get better? Let's look at some simple things that you can do to help. We are in a modern world with a good set of tools and technologies available very easily. Any IDE you take, your development environment, your test authoring environment, will have mechanisms to inspect code and tell you what the quality of your code base looks like.
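To make this concrete, here is a minimal sketch of the kind of simplification such inspections nudge you toward; the checkout flow, test data, and helper names are hypothetical illustrations, not from the talk:

```java
import org.testng.annotations.Test;
import static org.testng.Assert.assertEquals;

public class CheckoutTest {

    // Before (shape only): nested conditionals with assertions buried inside,
    // driving cognitive and cyclomatic complexity up:
    //   if (loggedIn) { if (cartNotEmpty) { if (couponApplied) { assert ... } } }

    // After: each intent becomes a small, named helper, so the test reads as
    // a flat sequence of business steps and fails fast at the broken step.
    @Test
    public void completesCheckoutWithCoupon() {
        loginAs("standard_user");
        addItemToCart("SKU-123");
        applyCoupon("WELCOME10");
        assertEquals(checkout(), "CONFIRMED");
    }

    private void loginAs(String user)      { /* page-object calls go here */ }
    private void addItemToCart(String sku) { /* ... */ }
    private void applyCoupon(String code)  { /* ... */ }
    private String checkout()              { return "CONFIRMED"; /* stub */ }
}
```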
Such an inspection will very quickly give you an analysis of the challenges, as per the standards that have been set in the IDE, which you can tweak; it is going to tell you, per the programming language, what things you might not be doing correctly. So as you are writing code, if you keep inspecting what you are implementing, you will be able to take quick decisions in the process, because remember: a stitch in time saves nine. I have been hearing this phrase right from my childhood, and I see how it relates to automation, because if during implementation I pay a little attention, a little focus, to quality, I cannot change the whole code base at once, but a small step taken regularly can help prevent a lot of complex issues. So code quality is the first anti-pattern, and this is a very simple way to start getting better in your code implementation: take help from your IDE and make use of the resources that are there. There are many more plugins available, but I hope this gives you some insight into how you can get better as you are writing the code itself.

The next anti-pattern is related to waits. I did the Selenium deep-dive workshop yesterday and asked the participants what some of the challenges are that they face in their implementations. The top challenge they face is waits, and the reason waits come up is flaky tests. They think that because of poor responses from the application, or varying response times, their tests are flaky, hence they need to keep adding more and more waits. As a result, you end up using a lot of Thread.sleep.

By the way, how many people listening to this talk, raise your hands again, are using Thread.sleep in your automation code? Amazing responses, which is also a sad reality check of where we are. I have done assessments on code bases where I have seen a few hundred instances of Thread.sleep. In some cases it is one second, two seconds; in some cases it is as much as 20 minutes. How can you have a sleep of that much time in your automated code? And this is just the unique instances of Thread.sleep; it doesn't count how many times that same code is called, because it might be in a loop, or the same function may be called multiple times. It's a scary situation to be in.

What happens because of this? Of course, your test execution cycle gets delayed significantly and you are not going to get your feedback in time. The other problem, when you're using Selenium, is that we do not understand the different wait implementations that Selenium provides and what they really mean. It is very clearly documented in the Selenium codebase that you should not mix implicit and explicit waits. It has undesired effects; you cannot predict how much time it's really going to wait. It's a very simple thing; if you look at it, it will take you a couple of minutes to understand. But how many of us have read the documentation? That is the problem.

So how can we get better at this? The first answer is extremely easy, and I think you might have guessed it: stop sleeping. Stop using Thread.sleep. It does not help. In exceptional cases, if you very consciously have to use a sleep, I force the team that I'm working with to add a comment saying why a sleep of two seconds is required over here.
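Where something genuinely needs time, prefer an explicit, condition-based wait over a sleep. A minimal sketch, assuming Selenium 4 and a hypothetical home-banner element:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class WaitExample {
    // Waits up to 10 seconds but returns the moment the element is visible,
    // unlike Thread.sleep(10000), which always burns the full 10 seconds.
    static WebElement waitForHomeBanner(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(
                ExpectedConditions.visibilityOfElementLocated(By.id("home-banner")));
    }
}
```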
Either way, using a sleep has to be a very conscious decision; otherwise you should not be using Thread.sleep at all. But if you cannot use Thread.sleep, how are you going to have your tests work the way your application really works? For that, you need to start reading the documentation. You need to understand the different wait strategies. Build custom DSLs on top of them if required, appropriate to your product and application context, so that it becomes easy for team members to use the different types of waits and proceed. But do not mix implicit and explicit waits. This is from the Selenium documentation; it has a very explicit warning: do not mix implicit and explicit waits. Heed that. There are a lot of posts you will find on the internet as well; this was one reference which I thought explained well why you should not mix implicit and explicit waits. So read about this, understand this, practice this, to make sure you understand the usage better.

Let's move to the next anti-pattern: exception handling. For folks who have raised their hands, you can lower them, because the next question is coming up now. How many of us ignore exceptions, or as we call it in other conversations, swallow exceptions? You catch an exception, you just print it to the console or log it, you ignore it, and you proceed. I see a good number of hands raised over here. A similar thing: how many of us return null instead of returning an actual object from our methods? I don't have an example of that right now, but how many of us return null when the desired value is not available from the method? This is a real problem that we are having. And how many of us throw assertions, do asserts, all over the place? These are problems, and they result in very serious situations when your tests fail.

So let's look at how we can get better at this. Remember the objectives of your tests: they have to run fast, they have to fail fast, and once they fail, you should be able to adjust very quickly to fix the problem in the right way. That means if there is an exception, you throw the exception right over there, let the test fail immediately where the problem happens, and have rich, verbose information available as part of the output, so you can quickly find the root cause of the failure and fix the problem. The minute you start swallowing exceptions, somewhere downstream in your test execution some other behavior is going to fail, and now when the test has failed, you try to look for the reason and you have to go through all the spaghetti code, all the complex logs that might be there, to try and understand where your failure is really coming from.

So do not swallow exceptions. Do not return nulls; throw an exception over there if required, because the minute you return a null, every method calling the method that returns null needs to check whether the response is null before doing anything. It's a huge problem, unnecessary code you are writing, and it is not helping anyone.

The next thing: don't have assertions in page objects. Again, this is directly from the Selenium documentation; it is one of the recommended practices. A page object is a dumb object. It does not have any sense of logic; it can just do actions on the page and retrieve information from the page it is representing. The concept of right or wrong lives in the test, or in the layer between the test and the page objects.
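A minimal sketch of what that separation can look like; the SearchPage, the locator, and ResultNotFoundException are hypothetical names for illustration:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;

// Custom exception: the failure explains itself, and there are no nulls
// for downstream code to check.
class ResultNotFoundException extends RuntimeException {
    ResultNotFoundException(String query) {
        super("No search result rendered for query: " + query);
    }
}

// The page object only acts and retrieves; it asserts nothing.
class SearchPage {
    private final WebDriver driver;

    SearchPage(WebDriver driver) { this.driver = driver; }

    String firstResultTitle(String query) {
        try {
            return driver.findElement(By.cssSelector(".result .title")).getText();
        } catch (NoSuchElementException e) {
            throw new ResultNotFoundException(query); // fail fast, right here
        }
    }
}

// The test layer, not the page object, decides right from wrong, e.g.:
//   assertEquals(new SearchPage(driver).firstResultTitle("selenium"), "Selenium");
```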
It is that layer which knows what is right or wrong, and that's where assertions need to happen. That is also where you can have custom exceptions that you throw to say this type of rule failed; it makes it very explicit, when you look at the test failure, why the test has failed. Summarizing: no assertions in page objects, use custom exceptions, do not return nulls, do not swallow exceptions.

Let's move on to the next anti-pattern: test data and data providers. This is one of the classic cases of a great feature used badly. Data-driven testing is a great feature, and TestNG, for example, makes it extremely easy to use a data provider so that the same test can be run against different sets of data. But the way we started using, and in a way abusing, this feature, it becomes a problem when you are scaling your test execution framework or the number of tests you have implemented.

I like to use an analogy over here. Let's say you're going from a source to a destination; I'll use the example of going from Pune to Delhi, or Bangalore to Delhi. A long-distance journey. If you have to go by train, there could be four, five, ten different ways to get from Bangalore to Delhi, because trains go through different routes. If your journey is not explicit, how is the Indian Railways system going to know which of those journeys is not efficient? Which of the journeys, which train, is constantly getting delayed?

A data provider essentially comes down to that. You are giving a big set of data sets to your test, and if one or more of those data sets fail, it is not easy to figure out which data set has failed and what the impact of that data set is from a business perspective. You have to spend much more time trying to understand what is going on, what that specific data set represents, and what the impact is of it not working well. What this means is that root cause analysis becomes challenging. And that is why I do not use data providers at all; I don't like data providers from that perspective.

Why? Because I like to identify and automate unique journeys. It is important for me to know that going from Bangalore to Delhi via Pune is one journey, via Nagpur is another journey, via Bhopal might be another journey, and which journey is interesting, which is not working well, matters to me. It's the same thing in our software as well. Using state diagrams, mind maps, any such tools, map out unique journeys from source to destination. The source is going to be your personas; the destination is the business value you are trying to achieve; and the stages in between are the business functionalities you are going to use to achieve that outcome. Map out these unique journeys and prioritize them, whether by business priority, risk, value, whatever the criteria might be. Based on that priority, you start automating those journeys, making each and every journey explicit. Internally, your code is still the same, so not much is changing over there, but you are making your test intent very explicit, and that is going to help you, when you're running the tests, figure out what is working well and what is not.

The other aspect, which is extremely important over here, is test data itself. Now, test data is a huge topic on its own; I cannot spend more than maybe half a minute on it.
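Before getting to test data, here is what making journeys explicit can look like; a minimal sketch using TestNG, with illustrative route names and a hypothetical travel() helper:

```java
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class JourneyTests {

    // Anti-pattern: one test, many opaque rows. When one row fails, the
    // report says "bangaloreToDelhi failed", not which journey broke.
    @DataProvider(name = "routes")
    public Object[][] routes() {
        return new Object[][] {{"Pune"}, {"Nagpur"}, {"Bhopal"}};
    }

    @Test(dataProvider = "routes")
    public void bangaloreToDelhi(String via) { travel(via); }

    // Better: each unique journey is an explicit, named test, so intent
    // and business impact are visible directly in the results.
    @Test public void bangaloreToDelhiViaPune()   { travel("Pune"); }
    @Test public void bangaloreToDelhiViaNagpur() { travel("Nagpur"); }
    @Test public void bangaloreToDelhiViaBhopal() { travel("Bhopal"); }

    // The underlying implementation stays shared; only the intent changes.
    private void travel(String via) { /* drive the journey here */ }
}
```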
Coming to test data: remember that in reality it is extremely complex. It is not just username and password. The data can be nested, you might need dynamic data, the data may be reused and shared, and many more criteria can apply based on the context of your application. So you have to think about how you are going to specify the test data that your tests will use, for implementation and for execution. You should also think about what type of data is required for your automated tests versus what is required for manual or exploratory testing. I'm using the word manual very cautiously over here: manual tests are tests that cannot be automated but need to be repeated every time, while exploratory testing is where you explore the app based on what has changed, based on risk, and get better at understanding the product. All of these types of testing need data. So what is your approach going to be to seed the data, create the data, reuse the data? You need to think about that, and it needs to be thought of before you start building your test framework, before you start implementing your tests, because if the data is not correct, you might automate your tests the right way, but the tests are not going to be reliable for you. So keep that in mind.

The next anti-pattern is about executing your tests in a particular sequence, or test execution priority. How many of us, again, I would request you to raise your hand if you're doing this, how many of you have your tests execute in sequence, sequentially rather? Okay, great. A follow-up, related question: how many of us are doing this in sequence because there is a dependency between the tests? Okay, a great set of responses; again, people know they are following these anti-patterns.

What this means is you are not efficiently utilizing your resources. A good correlation you can take: those who have visited the US and other countries will have seen a high-occupancy vehicle lane, or carpool lane as it is called. That is a special lane on the expressways or highways which can only be used by vehicles with two or more people seated in them; single-occupant vehicles need to use the regular lanes. What this means is there is infrastructure available, and traffic could have moved forward in a slightly better way if there were no such criterion, because one additional lane would be available for vehicles to move forward; but you're not utilizing that infrastructure fully. Now, don't get me wrong, I'm not saying carpool lanes are a bad idea; I'm just using that as a correlation for what happens in our automation.

So what is the problem with this kind of execution? It means we might have, let's say in this particular case, n devices available, but you're queuing up your tests to run one after the other on particular devices. In the first case, let's say this is a device over here: if its two tests complete, this device is going to lie vacant, lie idle, even though there are big queues for the other devices. That means this device is not utilized well. It is as simple as that; we are talking about resource utilization over here.

So what could be better? Make your tests independent: each test is independent, each test is atomic, and it can run in any sequence without any dependencies.
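One common way to break such dependencies, sketched below: create the state a test needs (say, a user) through an API call instead of relying on an earlier UI test having run. The endpoint and payload here are hypothetical, assuming Java 11+:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Each test provisions its own user via the API, so a sign-in test no
// longer depends on the sign-up test having run through the UI first.
class TestDataSetup {
    private static final HttpClient HTTP = HttpClient.newHttpClient();

    static String createUser(String name) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://qa.example.com/api/users")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"" + name + "\"}"))
                .build();
        HttpResponse<String> response =
                HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // e.g. the created user's id
    }
}
```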
With independent tests, you could have them queue in a single line, and whichever resource is available, whichever device or browser, the next test is passed on to it, delegated there for execution, and the queue moves forward faster. This way your devices, your resources, are going to be more optimally utilized. It is as simple as that. So in your test framework, which could be supporting different types of browsers and devices for native as well as mobile web or web-based execution, you need to structure your tests so that they are independent. Of course, you can still run them based on priority: I want to run my P1 tests first, then my P2 tests. That is fine; that is good. But your tests have to be independent; they cannot depend on any other tests. And if they don't depend on any other test, then your test execution scheduler can just take the next available test, delegate it to the device where it needs to execute, and you are able to move forward. This is extremely important in order to get fast feedback.

This does mean you have to spend more time designing your framework correctly and implementing your tests correctly. You have to get more creative in the way you are doing your test automation: leverage and mix APIs for data creation or for setting the state correctly, and then go through the UI to complete the specific workflow that implements the scenario. So there is a lot of creativity, a lot of innovation, that can come into how you implement the tests, and this again is an area where we can learn and grow as testers, as automation engineers: how to write tests in a better fashion to make them independent.

Next, let us look at how we execute the tests. Okay, how many of us, again I will request you to raise your hands if you agree with this, how many of you use TestNG XML and configure your suites in this fashion? For any new class, any new test that you add, you manually go to your TestNG XML files and update them based on what is new. How many of you also provide some kind of test data in this TestNG XML file? Okay, I see a lot of hands going up again. This is a problem. These TestNG XML files can get huge. In fact, I have seen teams that have created many different TestNG XML files, because one file became so overloaded that they created separate ones, each catering to a different type of execution they need to do.

The problem over here is that your test execution is now not really dynamic. If I add a new test and do not add it to my TestNG XML file, that test will not get executed. That is a problem. If I want to run a specific type of test, it can be challenging, not very straightforward, to figure out how. Which means that if my developers and QAs are collaborating really well, and a developer makes a product change and wants to run a specific type of test, they cannot do that easily without someone who understands TestNG XML telling them how to execute it. That is a big problem, again, for shifting left and for having developers take more ownership of quality as well.

So what can you do differently about this? The answer is actually very simple. The first part, I believe, most of us will already be doing: we tag our tests by priority, or as smoke, sanity, regression, or by the different modules and components that are there. That tagging is essential.
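In TestNG, for example, such tags can be expressed as groups; a minimal sketch with illustrative group and test names:

```java
import org.testng.annotations.Test;

public class PaymentTests {

    // Tags as TestNG groups; any combination can be selected at run time
    // (e.g. include only "smoke") without touching the test code.
    @Test(groups = {"smoke", "payments", "p1"})
    public void payWithSavedCard() { /* ... */ }

    @Test(groups = {"regression", "payments"})
    public void refundPartialOrder() { /* ... */ }
}
```

A build tool can then pick the group names up from a command-line parameter and include only the matching tests, which is exactly the kind of configuration-over-code separation discussed next.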
Cucumber, for example, as a tool makes it very easy to add custom tags on tests and proceed; it becomes very easy to add any type of tag you need. Now, once you have this kind of capability, you need to separate your test configuration from your test implementation. TestNG XML does separate out the configuration in a way, but it is a hard-coded configuration: for any change required, you have to edit the file and then check it in to your repository again. That is not what I mean by configuration.

The outcome I'm looking for is this: given a set of guidelines, I can say from the command line how to execute my full suite of tests; how to run a specific set of tags, whether smoke, sanity, regression, or a particular module; or I should be able to specify from the command line which specific tests I want to run, or that I want to run a subset of tests against a QA environment versus pre-prod versus production; any of these possibilities, without making any code change at all. That means my environment data needs to be separated out, my test data needs to be separated out per environment, my tests need to carry these tags, and my build tool needs to just take the parameters that the user, the tester, provides at the command line, appropriately filter out the set of tests, and execute them automatically without any code change. And the minute you do it in that fashion, the same thing will work in your CI executions as well, because you are doing this from the command line; your command line is essentially driving what your test execution needs to be.

I love to use Gradle as a build tool instead of Maven, because Maven again is hard-coded XML; I cannot really configure it very easily or very effectively. Gradle allows me to do any type of pre-test-execution configuration, and post-test-execution collection, data gathering, reports, whatever it may be. It allows me to separate that from my test implementation, and that makes it very easy to proceed. This is what I mentioned: it has to be on demand, essentially. You should not need to make any code change or configuration-file change to run these types of tests.

Let's move on to the second-last anti-pattern that I wanted to share, and that is magic. The main magic I'm talking about is rerunning failed tests automatically. Again, a show of hands: how many of us have encountered flaky tests in our execution? I see more than 40 responses saying we have encountered flaky tests. It's a big problem in test automation, in functional test automation: flaky tests. So what are flaky tests? Have you ever thought deeply about why the tests are flaky? Here are some of the reasons I can think of. Each one of these needs a lot of investigation, a lot of hard work, a lot of collaboration to figure out what the root cause of the flakiness is and how to fix it.

But how do we typically address flaky tests? How many of us use a retry listener? Again, if you can raise your hands: how many of us use a retry listener to automatically rerun the failed tests? Again, many of us. Do not use a retry listener. A retry listener is a lazy way of saying my objective is just to make the test pass. Do not use a retry listener. In fact, some tools, and I came across this recently, I'm not going to name any tool over here.
But some tools are effectively saying: even if a control in the browser, your button, for example, is disabled, I will still send a forced click action to that button. That is not what the user would be doing. If a button is disabled, no matter how many clicks you do, it will not trigger the subsequent set of actions in your application. This is not a way to handle automation. Your automation is there to simulate end-user scenarios.

There are two reasons why you need waits in your implementation. One, the application actually takes some time to load; it takes time for data to come back from the server and for rendering to happen before the user can use it. Second, the user, after getting the information, also needs some time to process it and take action on it. Your functional automation has to simulate what the user is doing. That's where you need intelligent waits: if I should have been logged in within three seconds, then I wait up to three seconds for the home page to load after login. That's why I need an intelligent wait. But is it a realistic expectation to say I am going to wait 30 seconds for login to complete and then proceed? How many of us have waited 30 seconds for a login to complete without doing a hard refresh of the browser, or closing the browser and trying again? We don't wait forever. That's why Thread.sleep is not good, and that's why not waiting at all is not good either. You have to have a balance of what types of waits you need to use and how to use them optimally.

Likewise, the user cannot force an action in the browser. The user in some cases will do a refresh, but will not automatically say: that one network call failed, so I'm going to refresh and do it again. You cannot automatically retry under the hood because some API call failed or some action did not complete successfully. The user has to be in control of what the test is, and the user, in the case of automation, is the automation engineer. What is the test you are trying to simulate? Take the train journey analogy: if the journey says I need to do a retry to proceed, that's an explicit journey, and in that case the test author should be saying, I want to do a retry over here.

So the important lesson, for me at least, when I encountered these things, is that I cannot say: if I fail, I'll just keep trying again and again and again. I cannot take that approach. I need to spend time to find the root cause. I don't want to automatically retry just so that I don't have to find the root cause, so that I don't have to keep following up with team members to figure out what might be going wrong. I have to start taking more ownership of the implementation I'm doing; only then is the automation going to be effective. I don't want to just rerun the test so that I don't have to do additional work, hoping that if the test passes, I don't need to do anything and I don't have to care about the underlying reason the problem happened in the first place.

So how can you get better at this and not fall into this trap? First, don't follow the band-aid approach. Don't take the easy way out. Take the time, understand what the root cause is. Don't focus on the symptoms; get to the root cause, and once you identify it, figure out how you can solve that problem. This takes a lot of hard work, a lot of effort and time.
But I'm telling you, it is worth it, because in this fashion you're helping make your application quality better. If you just keep blindly retrying, hoping for the test to pass, you have not changed anything in the parameters. The key to learning and evolving is that you change the parameters: you see how you can fail better, get to the root cause, and then solve the problem. Don't just keep retrying and hoping it passes. You have to do proper root cause analysis of the flaky tests. I'm actually going to be talking about this tomorrow in a short session, so we'll focus more on flaky tests over there. But doing proper RCA is the key message that I want to leave you with in this particular anti-pattern.

The last anti-pattern is probably one of our favorite topics, or favorite patterns, that we use but also abuse, and I'll tell you why we abuse it. I would like you to raise your hand again if you have something like this in your implementation: some SearchTest extends from a BaseTest, or a SearchPage extends from a BasePage. How many of us use this type of approach in our implementation? Okay, quite a few. I have unfortunately come across situations where I've seen a very deep hierarchy of how the pages are set up: my custom overview page is going through seven or eight layers of inheritance and eventually inherits from a web page. And I think that does not work, for two reasons.

One, consider the concept of inheritance; this is an example I learned in college, when I was studying OOP. What is inheritance? There is a base class and there are derived classes. If Human is a base class, then Dad inherits from Human: Dad is a Human, Mom is a Human. With that analogy, can you say your search page is a base page? What is a base page? A base page doesn't even exist. Why are we doing that? This creates two problems: one, your design itself does not seem right, and second, in terms of testability, it creates a lot of problems.

How can we get better at this? Again, I'm going to refer back to the Selenium documentation; I'm not going far, and I'm sure there's a lot more information available on this. The Selenium documentation very clearly says a page object does not need to represent an entire page; it can represent snippets of the page as well. For example, if I take this old version of the Amazon homepage, I could break it up into multiple snippets based on what is relevant to me from my testing perspective. And with each of these snippets represented as page objects, I can then use the composition pattern, which Karthik has mentioned in chat, that composition is better than inheritance. Yes, absolutely, Karthik: use composition to build your homepage from the snippets of these other page objects. That way the single responsibility principle also holds, and at the same time you are able to manage your code better. That is when the actual purpose for which page objects became popular as a pattern starts adding value.

The other quick thing: I see a lot of people using PageFactory, and they use it blindly. Simon Stewart, the creator of this pattern and the creator of WebDriver, very explicitly said this was demo code about how you could use locators better. But he put it in the repository, and people started using it so much, without understanding its implications, that now it is difficult to remove PageFactory from the code base. Hi Anand, sorry to interrupt. Yeah, yeah.
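To make the composition idea concrete, a minimal sketch; the snippet names and page regions are illustrative, not from the talk:

```java
import org.openqa.selenium.WebDriver;

// Each snippet is a small page object for one region of the page.
class HeaderSnippet {
    private final WebDriver driver;
    HeaderSnippet(WebDriver driver) { this.driver = driver; }
    void searchFor(String term) { /* type into the search box, submit */ }
}

class DealsCarouselSnippet {
    private final WebDriver driver;
    DealsCarouselSnippet(WebDriver driver) { this.driver = driver; }
    String topDealTitle() { return "..."; /* read from the carousel */ }
}

// The home page is composed of snippets rather than inheriting from a
// "BasePage"; each piece keeps a single responsibility.
class HomePage {
    final HeaderSnippet header;
    final DealsCarouselSnippet deals;

    HomePage(WebDriver driver) {
        this.header = new HeaderSnippet(driver);
        this.deals = new DealsCarouselSnippet(driver);
    }
}
```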
Okay, so do not use PageFactory. Think about the design of your framework; have a proper framework design in place. If you have a big-picture view, then start building towards it; that will help you build your automation in a scalable fashion, in a way that is going to be useful.

So let's summarize. How can you turn these anti-patterns into good behavior? How can you say: I need to take my car to a pit stop and change the tires there, instead of trying to change the tires while I'm racing on the road? First, prevention is better than cure; a stitch in time saves nine; there are so many phrases we can think of for why this applies over here. We need to think about how to avoid the trap of anti-patterns, and for that we need to understand the context of our application really well. We need to understand the concepts: what are the different things I might need to use in an effective fashion? What concepts, what ideas should be implemented to make this work? It's not about tools and technologies over here; these are ideas, genuine concepts you can implement in your automation to make it better. Then you take a step back, think about the big picture, what you're trying to solve, what the context is, what the concepts are. You do a risk analysis, and based on that, you understand and choose the right practices to use in your implementation. Once you have this blueprint ready, you start taking baby steps, one step at a time, from building a simple test framework to making it a scalable, reusable, maintainable automation framework.

In the process, you have to think about tech debt in automation as well. What is the tech debt you are incurring based on certain decisions you are taking? At what point do you need to take a hard stop and start addressing that tech debt, before the automation framework becomes unusable? I've seen this happen many times. Remember, you are not making these mistakes intentionally, right? You are not using these anti-patterns intentionally; at least that is what I would like to assume. Mistakes happen. The key thing is to learn from your mistakes. What your mistakes indicate is that yes, you are trying, and they are also proof that you are getting better: you are learning from them, and you're not going to repeat the same thing again. So remember: forget the mistake, remember the lesson, learn and grow.

I hope these anti-patterns I have shared with you, and these techniques for how you can avoid them, will actually help make your framework better, so you actually start getting more value from your automated tests, and you're not just writing tests because it's your day job; you're actually getting value from the framework, and the product quality is getting better. So thank you so much for this opportunity. Pushma, I know we're probably just short on time, but I'm open for questions.

Everyone, Anand will be taking your questions now. Okay. So there are a few questions that have been posted in the Q&A. Pushma, let me know when we have run out of time; otherwise, I'll just go through these questions quickly. Okay. The first question is: will Selenium have auto waits? I don't know what auto waits really means; there has to be something more qualified to make the statement.
And based on that, again, I'm probably not the right person to say whether it's on the roadmap or not, but we can definitely have a follow-up conversation and see what it means and what could be... Oh, Manoj is saying he'll be speaking about that. So yes, attend Manoj's keynote, the State of the Union, to get an answer to that.

Shreya is asking: what if the test flow requires a sequence? Shreya, it means your test design is not correct. So let's connect separately, let's understand the scenario, and based on the scenario maybe I can give you some tips on how to avoid sequencing in your tests. But again, it's a very generic question, so I cannot really say much more about it at this point without knowing more.

Charlie Pradeep is asking: what if there is a sign-up process and then a sign-in? I have one test case to check sign-up, and another one for sign-in using the test data from sign-up. Can you suggest how to remove the dependency? It's a very good example for that matter, Charlie Pradeep. The first test, sign-up, I will run through the UI and see if sign-up works correctly. But now my sign-in test is going to require some data. So can I create this data using an API? For example, I'll make an API call, create a new user over there, and then use that user in my sign-in test. Okay? So that is one way of doing it.

So, Pushma is saying that we have actually run out of time. There are a few more questions, so maybe we will move to the Hangouts table and continue the conversation over there. I hope this was interesting for everyone. Again, thank you so much to everyone for this opportunity. It was great sharing my thoughts with you. And once again, thank you, Anand, for sharing your experience with us today.