Hello, everyone. Welcome to Appium Conference 2021. We're glad you could join us today. In this session we'll be talking about how to eradicate flaky tests, with Anand Bagmar. So without further delay, over to you, Anand.

Thank you so much, Alicia. Today we will be speaking about how to eradicate flaky tests. It's a tall claim, but let's see how much we are able to accomplish. A very quick introduction about myself: I'm Anand Bagmar. I've been an evangelist in the quality space for more than 20 years now. I work with various types of organizations, in whatever role is relevant, to help build a quality product. I'm also a contributor on the Selenium project, a small contribution, but one I feel very proud to have made. And I'm on the conference committees for Selenium Conf, Appium Conf, Agile India and so on. You can reach out to me on Twitter or LinkedIn; I would love to connect with you and discuss quality and how we can get better at it going forward.

Enough about myself; let's start with the core topic: how to eradicate flaky tests. To understand how to eradicate flaky tests, you first need some holistic context, some prerequisites of sorts, about why flakiness really happens. For that, I want to take you through some examples of what different products might look like and what types of issues can come up, and then we'll see how to address some of those problems.

Any complex product we work on has a typical architecture something like this. There are one or more front-end applications: native apps, hybrid apps, mobile web, or desktop web. Those front-end applications communicate with your backend systems through some authentication gateway. The backend is itself a pretty complex system: it could be a monolith, a microservice-based architecture, or a hybrid, with a lot of data stores. And the important aspect is that however complex your system is, your backend still relies on and communicates with other external systems, such as payment gateways, credit systems and so on, to give your users the functionality they expect. I hope this type of complexity is something you can relate to.

Now, in such a system, how do you typically approach API workflow automation or end-to-end test execution? What we usually do is take the front-end applications out of the picture and simulate the interactions with those applications in the form of end-to-end functional UI tests or API workflow tests. The orchestration of these scenarios mimics the end-user behavior. The purpose of these tests is to ensure the user can execute all the different business flows and transactions across the feature set you provide, and that it all works in a properly integrated fashion. That is the typical approach we take. Now, in the context of end-to-end testing, what challenges come up in such situations? Let's look at some of them. I'm not going to cover all of them, only the ones relevant to today's topic.
First of all, there is the long feedback cycle. These scenarios cut across the product's feature set and modules, so they take a very long time to run, and hence the feedback cycle is slow. That's a big challenge. The other aspect is that certain validations are simply not possible in end-to-end testing, because of dependencies in your system that you cannot control well, and hence you cannot easily simulate those situations. I will say more about this later in the session, but there is also a session later today by Sagar Singh Pawar and Joel Rosario about how to automate the un-automatable scenarios. I strongly encourage you to attend it for a real case study of how this can be tackled.

There is also the challenge of cross-browser, and for that matter cross-device, testing. You want to make sure your product works correctly, functionally, and has the right user experience on any of the browsers or devices your users might be using. How to tackle cross-browser and cross-device execution is something very important that we encounter.

And most important of all, the big pain point for anyone involved in automation, especially UI automation (titles do not matter), is flaky tests. Flaky tests pass or fail intermittently, with no clear indication of why that happened or how to fix it. If you dig deeply into what flaky tests really are, there is a huge set of things you can look at to understand why your test execution is flaky. That is a completely different talk on its own, so I'm not going to go through all the reasons here. But I'm sure you have come across situations where your tests are flaky and it has led to frustration, where you feel like pulling your hair out because you cannot understand why it is happening.

For today's discussion, I want to focus on a few reasons why UI tests are flaky, or, as we also call it, brittle: any small change and the tests fail. One reason could be that the UI has changed: the locators have changed while everything in the UI is functionally fine, or the UI itself has evolved. There can be challenges with downstream dependencies: you don't control whether they are available or whether they respond in the expected time, and that can be a big problem. Data dependencies also become a huge factor, especially when you're working in a big, integrated environment: the data you rely on might change, and hence the test fails. And there can be network issues: maybe the link between your system and an external system is down for whatever reason, causing test failures. The impact of any of these challenges is that your test execution becomes unreliable. So how do we tackle that?
These are the challenges I want to focus on solving in the rest of the session. So let's talk about the way forward; just talking about problems doesn't help anyone. What are some techniques you can try and use in your own context to make this better? How do you really reduce the flakiness of your test execution?

First, reduce the number of UI tests. Simon spoke about this in great detail in his keynote yesterday: move your tests lower in the test pyramid. That gives you faster feedback, and less code at the top layer means fewer reasons for flaky tests. Second, remove the dependencies on external services or systems through intelligent virtualization; we'll spend a good amount of time on this in a few minutes. And third, write less code while getting increased test coverage: visual assertions are a great technique that gives you the power of functional and visual testing together, with less code and more coverage. We'll see that aspect as well.

Let's dig deeper into each of these solutions. How do you reduce the number of UI tests? We are in the automation space, building test automation, so we know the various types of automation that can be done for a product. The test pyramid is a great way to understand the different types of tests that give you fast feedback and increasing coverage as you slowly integrate more and more of your systems together. The test pyramid also includes user experience testing and the NFRs, which is very important to bear in mind. We get so caught up in our way of working that we fail to realize that each and every automation activity you do gives you an indication of your product quality. Product quality is not just whether your UI tests passed. It is all the different types of tests that have been automated, the feedback you get from them, and all of these combined, including the exploratory testing you do on your application, that tell you the quality of your product.

So: reduce the number of UI tests. UI tests are very important, don't get me wrong; I focus a lot on end-to-end testing. But identifying the right candidate tests to automate at the UI layer is extremely crucial. Whatever can be automated at the lower layers of the pyramid should be moved down first. That gives you faster feedback and more stable tests, and it helps you focus on overall quality in a better way. That is how you move tests out of the UI layer and keep the number of UI tests small.

The second technique I was referring to is intelligent virtualization to remove the external dependencies. Again, let's take an example to understand what we mean by this. Looking at the same architecture diagram from earlier: what do we do for end-to-end or automated tests? We remove the actual client application, and the manual interaction with it.
We then implement the simulation of those scenarios either with our end-to-end functional automation tools, say a framework built on top of Appium or Selenium, or at the API workflow level.

With this in mind, let's take a concrete example of a test scenario you would want to automate. Say that in order to accomplish a workflow, the first thing your test does from the UI, a click or whatever action in the app, results in a call to service 2 in the backend. Service 2 does some processing and, at a relevant point, may call some external service for further processing. Now, suppose this external service unfortunately takes a long time to respond and the request times out, or the service is not available, maybe because a new deployment is happening in that external system, or for various other reasons. If this call from service 2 to the external system does not go through as expected, service 2 goes into an error state. This is not really an unexpected error, because your service needs to be implemented to handle the various ways interactions with its dependencies can play out. But because this call from service 2 failed when the external system did not respond as expected, the test gets an unexpected or incorrect response from service 2, and hence the test fails too.

This is a problem, because all your test was really concerned with, in this trivial example, was how service 2 responds in the particular context of the test. Because some external service was unavailable, your focus of validating service 2 has gone for a toss, and you get a false failure. Service 2 might have worked exactly as expected, but in the context of that test it did not do the right thing, hence the test failed. That is a big challenge.

So the challenge here is: how do you deal with scenarios where the external dependencies are themselves flaky? And a second challenge: how do you test the different types of responses that are possible from your external dependencies as part of your test automation? You cannot change service 2, or take the external service down, just to execute this test, because this is all happening in an automated fashion, and there might be other tests running that rely on the external service working correctly. You cannot make physical changes to this interaction with the external dependency, because it would have a big impact on everyone else. These types of scenarios become very hard to automate.

The solution I propose is to stub it out using Specmatic. A very quick note about Specmatic: it is an open-source tool, available on GitHub, and you can find out more about it on the specmatic.in website. At a glimpse, it helps teams start doing contract-driven development.
That is very important when you have producers and consumers and you want to work on that interaction without making the implementation sequential, which would take forever. Specmatic lets you do contract-driven development with executable contracts, backward-compatibility checks and so on. Its biggest advantage is that it decouples producers and consumers, and it does so with confidence because of the various types of checks that can be run with this approach. You can find out more at specmatic.in. There are other tools that allow you to do similar things, but this is the approach I have taken, and it has given me a lot of flexibility in my implementation, as I'll show you.

In the context of the scenario we were discussing, how would it work if I use Specmatic in my test execution environment? Step one: I run Specmatic as a stub server in my environment. Whenever my services need to speak to any external system, that external system is stubbed out with Specmatic, and my internal services point to the stubbed endpoints instead. Step two: my test, given its context, tells the stub server what response to send back when it receives a request matching certain criteria. Say it's the positive-case scenario, where the external dependency is available and returns the right response: I set a dynamic expectation on Specmatic for that very specific use case. Once the expectation is set, my test calls service 2; service 2 does its processing and in turn calls the external service, which is now stubbed. Because I set the expectation on the stub server, when the request criteria are met I get back from Specmatic exactly the response I had set from my test. Service 2 processes that response as per its requirements and returns a result to my test, and the test can now assert the behavior correctly. If the assertion fails, it is because service 2 has not implemented its requirements correctly.

With this approach, we are able to simulate positive cases, negative cases and edge-case responses from our external systems, all under the control of the test you have implemented. That becomes a huge value-add when you want to test different types of scenarios while staying focused on what you control: whether the implementation of service 2 works as expected.
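To make the expectation-setting step concrete, here is a minimal sketch, assuming a Specmatic stub server already running on its default port 9000; per the Specmatic documentation, dynamic expectations are registered by POSTing to the stub's /_specmatic/expectations endpoint. The /users path and the user payload below are illustrative assumptions, not from the talk.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StubExpectation {
    public static void main(String[] args) throws Exception {
        // When the stubbed external service receives GET /users,
        // reply with this fixed, known list (illustrative payload).
        String expectation = """
            {
              "http-request": { "method": "GET", "path": "/users" },
              "http-response": {
                "status": 200,
                "body": [ { "id": 1, "name": "Anand" } ]
              }
            }
            """;

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:9000/_specmatic/expectations"))
                        .POST(HttpRequest.BodyPublishers.ofString(expectation))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // Specmatic validates the expectation against the contract spec,
        // so a 2xx here means the stub will now honour it.
        System.out.println("Expectation registered: " + response.statusCode());
    }
}
```

Because every expectation is validated against the same contract spec the producer is tested against, the stub cannot silently drift away from what the real service has agreed to serve.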
So that is solution number two that I am recommending: stub out the external dependencies intelligently, and have your tests simulate all the different types of interactions required, in the context of the different responses required from those dependencies. You will still be able to validate that your product works as expected in all those cases. Of course, this all happens in your test execution environment, typically an isolated environment where only automated tests run. Once all these tests pass, you run a subset of them in your real integrated environment, focusing just on the positive-case scenarios with the external dependencies actually integrated, to make sure everything is fine.

Solution number three: use visual assertions instead of functional assertions. We already saw what the pyramid does for us and the kind of feedback all the different types of automation give you about product quality. However, you will have realized that even when you do all these types of automation really well, defects still escape to production. The reason is that we are not building or using our automation frameworks correctly for modern apps. Here is an example of the kind of defect you would see: your functional Appium automation for this Southwest Airlines app still passes. There are username and password fields here, but the colors are such that the user cannot see them. Your functional test still passes, because the locators are the same: you can interact with the fields, fill in the username and password, and click on login. From an end-user perspective, however, this login does not work; users don't even know there are username and password fields available there. You will see countless examples like this, whether it is overlapping content, or images and banners not displayed correctly, across every type of product and domain, and that is going to be a problem for your end users. The functional test still works fine in most cases, as long as the elements are visible enough to interact with, but the experience is completely broken.

Assuming you have a curious mindset, I'm sure you're already thinking: why do such issues, which seem trivial, still go out to production? The answer is simple: these issues escape to production because our approach to testing, our test strategy and test planning itself, is incorrect. The approach we take requires a lot of mundane, extremely tedious, repetitive activities, and you are always running against the clock. When you're running against the clock, you are bound to make mistakes and miss things you would have found with the luxury of time. And think about this: it is not just one device or one browser you want to validate. Across these different browsers and devices, even when the functionality is the same, the rendering engines differ, and so the way the screen gets displayed is going to be different.
There are, of course, aspects of functionality that differ in some cases, but those your Appium tests will catch. The rendering aspects are difficult to catch unless you go through each and every screen, on all the relevant combinations, manually. And the approach we typically take to find these differences, whether on one device or many, is to play the puzzle of spot-the-differences: given two similar images, try to find the differences between them. If I tell you that you've got five seconds to find the differences, how many can you really find? Some people are good at this, some take time, and a lot depends on your state of mind when you're looking at the images. In this example there are many differences to be found, and maybe I have missed identifying some of them myself. That's the error-prone nature of looking at these things manually.

But you might say: we work on software products; we are not comparing two arbitrary images. Our problems are real-world problems, on software applications, on different types of browsers and devices. These have text, static and dynamic; images of different forms; responsive pages; form factors to consider; user experience to consider. There is so much context a product has. How do we really work on this? Unfortunately, the answer is no different: from a testing perspective we are still doing spot-the-difference to make sure everything is fine. Given two pages, on a mobile device or a browser, it doesn't matter, we are still asking: is what I am seeing the same as what is expected? In this particular image, the left side is what I am expecting; the right side is how the page behaves when I run against a new build, for example. Are there any differences? These are real, challenging problems, because these are long pages, and going through them in detail to figure out what might have changed is a huge overhead and extremely error-prone.

In fact, the challenge is bigger, because you don't actually have a baseline to compare against every time. What you do is: once you understand the requirement, you look at the new page as it is rendered on the screen, and in your mind you try to correlate it with the expectation and spot any difference. You don't even have the luxury of putting the two side by side. This is a real challenge again, extremely time-consuming and error-prone. And that's where we need a different approach: how can you use the computer to replicate what a human eye and brain can see and do, and give you enough information, quickly, to decide whether what you see on the screen is acceptable? That approach is called Visual AI. Using Visual AI, you are not looking at raw rendering differences; you are focused on finding the differences that really matter to the end user.
You don't want pixel comparison in such cases, because any small change in screen size or resolution results in false positives, which again defeats the purpose of finding out what was truly different. So you want Visual AI, not pixel comparisons. Also, your product might be available across multiple types of platforms: native apps, hybrid apps, web; you could also have PDF documents to validate for various reasons. So you need a platform that works across all your products seamlessly, integrated with your functional automation, to combine the value of functional and visual validation in one test execution. At the same time, you want to make sure your user experience is consistent across all your supported browsers and devices. Do I really want to run the same test on all the different mobile web devices, all the devices my native app will be used on, and all the desktop browsers? You need a more intelligent solution that lets you scale seamlessly. By scaling, I mean this: if I have 100 tests that take one hour to execute, I don't want to run the same 100 tests on 10 different browser and device combinations, which would push my feedback cycle to a combined 10 hours. That is too long. And of course there will be intermittent test failures, because of different devices, network conditions, whatever, so there is additional investigation time to rerun failing tests and figure out whether there was a real problem. You spend a lot of time in this scaling approach, and that is something you also want to avoid.

With that, let's take a concrete example to see how all these things come together. In my context, I am using Applitools Visual AI to do my visual validation along with my functional validation, and I am using Specmatic in my product deployment in my test environment, stubbing out the external services for intelligent virtualization. I am going to orchestrate various scenarios to make sure everything works fine.

The example I want to walk you through is this: these are very basic APIs, where the test first logs in. After logging in, it gets the list of users available in the system. But to get this list, the application actually needs to connect to some external system out on the internet. That list is dynamic, because it comes from an external source. Based on the list of users returned, I am going to edit a specific user, and after editing I want to verify that I see the updated user details in the list of users in the application. But what happens in this case? The external service is, first of all, going to return a dynamic set of users, so how do I know which user to edit and check? What if the list of users is empty? What if the list is very huge? What if the service is down, because of a network connectivity issue or an outage on its side?
The result is that my test fails because of this, even though there is no problem with the intent of my test: edit and verify user details. So the approach we take is this: I still have my system deployed, but instead of reaching out to the external service to get the list of users, I stub it out using Specmatic, and the Specmatic server is part of my test environment; nothing goes out over the internet. With this, step one: I log in. Step two: after login, my test tells Specmatic what response I want it to give back whenever it gets a call to get the list of users. Step three: once that expectation is set, the test triggers the get-list-of-users call; instead of the external service, the application calls Specmatic, which returns whatever expectation I set in step two. Step four: based on this, I edit a specific user and verify everything is fine. This approach scales very well, because I can now very easily simulate how the get-list-of-users API behaves if the returned list is empty, if it is extremely huge, or if the endpoint is down or timing out, whatever scenarios I want to understand. Just by changing step two, I can validate all the different flows my application needs to perform given these types of external responses.
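To show how step two gives the test full control, here is a minimal sketch of those variations as JUnit tests. The expect() helper is hypothetical, the port, paths and payloads are illustrative assumptions, and the UI-driving and assertion steps are elided as comments.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class ListUsersScenarios {

    static final HttpClient http = HttpClient.newHttpClient();

    // Hypothetical helper: register a dynamic expectation on the Specmatic stub.
    static void expect(String method, String path, int status, String body) throws Exception {
        String payload = """
            { "http-request": { "method": "%s", "path": "%s" },
              "http-response": { "status": %d, "body": %s } }
            """.formatted(method, path, status, body);
        http.send(HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:9000/_specmatic/expectations"))
                        .POST(HttpRequest.BodyPublishers.ofString(payload))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
    }

    @Test
    void editUserFromKnownList() throws Exception {
        // Happy path: the "external" user list is now deterministic,
        // so the test knows exactly which user it will edit and verify.
        expect("GET", "/users", 200, "[{ \"id\": 1, \"name\": \"Anand\" }]");
        // ... log in, edit user 1, verify the updated details appear ...
    }

    @Test
    void emptyUserList() throws Exception {
        // Edge case that is nearly impossible to arrange against a live dependency.
        expect("GET", "/users", 200, "[]");
        // ... verify the application shows its empty state, not an error ...
    }

    @Test
    void externalServiceDown() throws Exception {
        // Negative case: the dependency is unavailable; service 2 must degrade gracefully.
        expect("GET", "/users", 503, "{ \"error\": \"unavailable\" }");
        // ... verify the application surfaces a friendly failure ...
    }
}
```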
Okay, so let's look at a quick demo of this. I have a local application running here, and let me switch windows so you can see the terminal: I have the local application running, and I have Specmatic running on my local machine as well, stubbing out a particular endpoint. With this, I'll start running the test and explain what is going on. The test has started, and I'm doing a live demo, so fingers crossed everything works fine. The browser is of course opening in a different window; here it is, I hope you are able to see it. While the test runs, let me tell you what is happening. This test is using Appium for the web, and wherever I want to do any validations, I'm using Applitools to do the functional and visual validations for me; I don't have any assertions in the test code. As the test completes, the results start coming in on the Applitools dashboard. These are the results coming in right now, but let me show you the results from the run I did just before this session started. I ran one test; you saw the browser load up; the test ran once. However, in the Applitools dashboard we see 21 tests have run. I ran just one test, but I'm getting feedback from 21 tests. What are these 21 tests? I can group them by device: this is a Galaxy Note 3 at different resolutions, running in landscape and portrait mode, and you can see the behavior is slightly different in each. I'm also running on desktop browsers, and we see the differences for those as well. Each test has actually taken seven screenshots.

If you open any one of them and highlight the differences (I ran this in layout mode), we see that from a page-structure perspective something is different when that same test ran on this Chrome browser. I can do root cause analysis to figure out the difference in the DOM and CSS. Based on that difference, this looks like a genuine defect, so let me report it as "button missing", which automatically creates a defect in my defect management system, whether that is Jira or Rally or anything else. And at this point I can fail the test. In fact, as a QA, I don't even need to look at this on my own: I run the test, and the developers and product owners should be looking at this dashboard to see how the product behaves in the different contexts the test has run in.

But there is still one gap here: I haven't answered how I ran the test once and saw the results come up so many times. The answer is that, with Applitools, I am using the Ultrafast Grid. Let me find it; there we go. In the Ultrafast Grid configuration, I specify which browsers and devices, and which viewport sizes for desktop browsers, I want that same test to run on. Essentially, every eyes.check call I have is automatically rendered for each of those devices and browsers, and hence I see the results come up for all of them.

So what does this approach do for me? First, it makes your test automation intelligent. Remember, there is no magic here; magic is a very bad thing to have in test automation. You don't want things automatically rerun to paper over failures; you want your tests predictable, realistic and deterministic. So: make your automation intelligent, reduce the number of UI tests, use visual assertions instead of functional assertions, and remove the external dependencies using intelligent virtualization. That gives you a lot more power in your test implementations. You reduce your UI tests by moving whatever you can to the lower layers of the pyramid, hence getting faster feedback and reducing the flakiness and brittleness due to UI or locator changes in the application. Use visual assertions, because one line of code gives you functional and visual validation; we saw the missing button captured that way. One thing I wanted to highlight while we are here: although this test shows as passed, that is because of layout mode; if I view it in strict mode, it will flag each and every difference. So, depending on the context of your test, you can use the appropriate algorithm and get the validations you require. You can also scale seamlessly, in a very easy fashion: all you need to do is specify your browser and device configurations, portrait or landscape modes, whatever they might be, and with the Ultrafast Grid you scale just by running the test once. There are many more advantages of the Ultrafast Grid; I won't go through them now, but the slides will be shared with you, and any questions you have, we can definitely talk about separately.
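As a rough sketch of what such an Ultrafast Grid configuration looks like with the Applitools Java SDK: the exact package paths, device enum values, and the app, test and step names below are assumptions based on the SDK's documented shape and may vary by version.

```java
import com.applitools.eyes.MatchLevel;
import com.applitools.eyes.selenium.BrowserType;
import com.applitools.eyes.selenium.Configuration;
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import com.applitools.eyes.visualgrid.model.DeviceName;
import com.applitools.eyes.visualgrid.model.ScreenOrientation;
import com.applitools.eyes.visualgrid.services.VisualGridRunner;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class UltrafastGridSketch {
    public static void main(String[] args) {
        // One runner renders every eyes.check() call on all configured
        // browsers/devices in parallel (concurrency of 10 here). The SDK
        // reads the API key from the APPLITOOLS_API_KEY environment variable.
        VisualGridRunner runner = new VisualGridRunner(10);
        Eyes eyes = new Eyes(runner);

        Configuration config = new Configuration();
        // Desktop browsers at given viewport sizes...
        config.addBrowser(1200, 800, BrowserType.CHROME);
        config.addBrowser(1200, 800, BrowserType.FIREFOX);
        // ...and emulated mobile devices in both orientations.
        config.addDeviceEmulation(DeviceName.Galaxy_Note_3, ScreenOrientation.PORTRAIT);
        config.addDeviceEmulation(DeviceName.Galaxy_Note_3, ScreenOrientation.LANDSCAPE);
        // Layout match level: flag structural differences, ignore content shifts.
        config.setMatchLevel(MatchLevel.LAYOUT);
        eyes.setConfiguration(config);

        WebDriver driver = new ChromeDriver();
        try {
            eyes.open(driver, "User admin app", "Edit user flow"); // names are illustrative
            driver.get("http://localhost:8080"); // application under test (assumed)
            // One line: capture the full page and validate it, functionally and
            // visually, on every configured browser/device combination.
            eyes.check(Target.window().fully().withName("List of users"));
            eyes.closeAsync();
        } finally {
            driver.quit();
            runner.getAllTestResults(false); // collect results from all renders
        }
    }
}
```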
Most importantly, to conclude: virtualizing the dependencies at runtime, rather than relying on standard static stub responses, helps you test very different types of scenarios, which gives you confidence that your requirements have been implemented correctly and there is no unexpected behavior. There is a set of resources you can refer to. And with that, I'm going to stop. I see a lot of questions in the Q&A panel, and we do have some time to address them, so let me stop my content and go through the questions to see how I can help you.

Okay, the first question: "Is there any solution for dynamic for this problem?" I'm sorry, I don't really understand that question. Can you add more context? I'll come back to it.

Gaurav is asking: in your opinion, how is Specmatic a better choice than Pact for contract testing? Gaurav, this particular question is answered really well on the Specmatic website itself, but in a nutshell: it does complete decoupling of producers and consumers, and you have the ability to implement both sides completely in tandem while ensuring your contract is always adhered to. There is a backward-compatibility check as well, and a lot of other support in Specmatic. So I'd request you to take a look at the specmatic.in website for more details. And of course we are connected separately as well, so we can always talk through the specifics.

Atmaram is asking: does Specmatic support dynamic stubbing, like in a typical CRUD flow, where once you create a resource, calling the list endpoint shows that resource? Yes, it allows you to do dynamic stubbing. But remember, as I mentioned earlier, there are various types of contract tests that can be run; intelligent stubbing is one use case of Specmatic that comes later in the cycle. Before that, there are contract tests and backward-compatibility checks that you can run and use in your test automation. But the answer is yes, Specmatic does support dynamic stubbing.

Gaurav is asking again: one challenge with stubs is keeping them in sync with changes in the underlying service; any approaches to ensure consistency is maintained? Do you recommend some live end-to-end tests to get signals from the live system? That's exactly the point Specmatic helps achieve, Gaurav, and you should definitely attend the session later today by Joel and Sagar about how to automate the un-automatable; they will talk about similar things there as well. The overall contract-driven testing approach means there are various types of tests you need to automate to ensure whatever you have stubbed in your test environment is actually in sync with the contract spec itself. That's the core power of Specmatic: there is one contract spec, used by everyone, producers and consumers, at the various levels of the test pyramid, so everyone is testing against the same stubbed endpoint, the same contract, essentially.

Santosh is asking: doesn't stubbing an external service also mean we are potentially maintaining a parallel system, by having to stay up to date with its changes? Not really a parallel system. In fact, you are testing, early in your development and testing cycle, what your integration points need to be.
Again, this is a much more involved conversation, but with contract-driven testing, the way you create your contract spec, and the way that same spec is referred to by producers and consumers to run all the different types of tests, ensures that you are building and testing against the same contract at all points in time. That is the key. You cannot have one copy of the spec in one place for one type of test and another copy somewhere else; you are bound to fail that way. With the approach Specmatic allows you to take, there is one contract spec definition in place, and it is referred to by everyone.

Next question: any advice for lone testers on an agile team? How would you approach this with context, and which layers would you focus on first? Let me read this again. I'm actually not understanding the question very clearly: what do you mean by lone testers on agile teams? If it's an agile team, there is no lone tester; you're working in collaboration with the other roles as well. So I'm sorry, I don't understand the question completely; maybe you could rephrase it.

What if there are new changes in the external APIs? With this way of testing against a stub, wouldn't we be mimicking incorrect behavior? Same point as before: everyone is testing against the same contract stub. There are no copies of these stubs lying around; one stub is in place, and producers and consumers both refer to that same stub in one repository. You could have, say, one git repository in your organization where all the contract specs live, and everyone, consumers and producers alike, refers to that same spec. Any change that happens in it means one side or the other, or both, will immediately complain that the spec is incorrect, because some test fails as a result.

The last question: is there some open-source tool or library for visual test automation that you would suggest? There are actually a lot of great tools out there, open source as well. But remember: what are the criteria your application imposes on visual testing? What do you need to test? Is there any aspect of dynamic data involved? What about different screen sizes, viewport sizes, landscape and portrait? Would it need to support web, native apps, hybrid apps? All those aspects need to be considered to decide which tool will work well for you. Tool evaluation is a very important exercise and should not be based just on someone saying "this is the best tool". There is no such thing as a best practice or a best tool; it has to be something that works really well for you and your team. So there are a lot of options; understand which parameters you are interested in and which tool will support them best.

With that, I will pause here. Thank you very much, everyone.

Thank you. Thank you so much, everyone, for joining us, and thank you so much, Anand, for sharing your experience today.