Welcome everyone to today's session by Anand. Today's topic is automating real user scenarios across multiple apps and multiple devices. Without further ado, I would like to hand over to Anand.

Thank you, Shri Krishna. Hi everyone. Good evening, good afternoon, or good morning, depending on where you are joining from. Thank you for joining Agile India. It's a great pleasure to be back at Agile India, and today I will be sharing my experiences and my approach to automating real user scenarios across multiple apps and multiple devices as part of your functional test automation.

A little about myself: I've been in the quality space for more than 20 years now, and I've played various roles along the way to help build better-quality products, whether as part of product organizations, services organizations, or as the consultant I have been for the past four or five years. I'm also a contributor on the Selenium project, and I help out and volunteer at the Selenium Conference, the Appium Conference, Agile India, and various other conferences as well. This is a way for me to learn from others' experiences and try to get better. You can reach out to me on Twitter or LinkedIn.

Without further ado, let's get started with the core topic of today: how do we automate real user scenarios across multiple apps and multiple devices? If you have any questions, please keep asking them in the Q&A section of the conference and I'll try to address them as we go. Otherwise, we'll catch up at the hangout table and discuss further.

Before we get into the solution, it's important to understand why we automate functional tests in the first place. It is, of course, to simulate user actions and behavior, so that we get a better sense of the quality of the product before we release to production. It is important to remember that these scenarios include delays and waits: the user inspects the screen and then proceeds with the next action. Real users do this even more than our tests do; users will wait for the screen, read the information, and then proceed, but they will not keep waiting forever. In the same way, our tests should not wait forever either. We should have sufficient delays, the right type of delays, to make sure the product and its functionality render correctly before proceeding, and we should not add random, arbitrary delays: they have to be realistic. How long would you, as a user, wait for a screen to load before your attention gets diverted to something else? You always have to keep that thought process in mind when you are automating.

Now, to do functional automation, you need to understand your application under test. This application could be a web app, a mobile web app, or a native application on Android or iOS phones and tablets. Once you understand your application under test, you would create a set of flow charts, scenarios, or mind maps, and then prioritize what should be automated: what is required and important to automate for your product. And once you have identified these scenarios, automating them is actually a solved problem. Automation is not new. There are many options you can choose from today, and the small set of options listed on the screen shows some of the ways you could do this functional automation, which is what I'm talking about here.
What I want to present to you is one additional option that I have helped create, called teswiz. It's an open-source framework I created that helps you automate real user scenarios, along with the types of scenarios we are already used to.

How does teswiz solve this problem? It's an opinionated framework: it guides you and tells you how you should be doing your automation. This is based on my experience of what has worked well and what has not worked well in automation, and on what I have learned about building a framework that is long-lasting, maintainable, and scalable, one that new users can keep contributing to in a very easy fashion while getting the kind of value you expect from automation.

Concretely, teswiz uses Cucumber-JVM as a BDD tool to specify your test intent, and in this test intent, as you can see, it is very easy to understand, if you define a structure, which steps are implemented using APIs. Annotations determine which platform the test is going to run on. So if your functionality is available as an Android app and as a web app, you can choose to run the test on either of them, simply by providing the annotation and then a parameter saying which platform you want to run against. And because we are using a BDD tool, it is very easy to specify the business logic you are trying to automate. These tests sit at the top layer of the pyramid: they are end-to-end tests, not granular test cases. If you focus on the business logic and define your steps in a declarative form, you can express a very clear, concise set of actions, in the form of business rules, that a user would perform to exercise the functionality.

Now, this much is again a simple, standard, solved problem. If I have to run this test on Android using teswiz, I can simply run it: I have emulators running on my machine here, and I run the test by giving the right configuration parameters on the command line. The test executes, and you see the results as soon as it completes. The framework takes care of managing the drivers, the browsers, the apps and their installation, and so on; all of those aspects are handled automatically for you. So this is a very standard test that you would be able to write using any tool, and teswiz is no different in that regard.
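To make that concrete, here is roughly what such a scenario might look like. This is a hedged sketch only: the step wording, tag names, and command shown are illustrative, not teswiz's exact vocabulary.

```gherkin
# Illustrative sketch, not teswiz's exact step or tag vocabulary.
# The platform annotations say where the scenario may run; a run-time
# parameter picks one, e.g. (invocation shown for illustration only):
#   PLATFORM=android ./gradlew run
@android @web
Feature: Instant meetings

  Scenario: A registered user can start an instant meeting
    Given I sign up as a new user using the API
    When I log in and start an instant meeting
    Then I should be the only participant in the meeting
```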
But there is a different category of complexity, and that is really why I had to think about creating teswiz. This complexity comes from the fact that we live in a hyper-connected world, no thanks to the pandemic. We have got used to being in front of the computer almost 24/7; we are available 24/7, unfortunately. And we are collaborating with colleagues and team members across the world, who could be working with us from a completely different set of devices than we are using. You could be on a Windows laptop; they could be on a Mac, on Linux, or on a tablet, or you might be doing online screen sharing and live coding. The point is that there is a lot of collaboration and a lot of interaction happening.

If you are testing a product that is a collaborative, interaction-based platform, where multiple users come together to achieve a certain objective or exercise a certain functionality, the question really is: how would you automate such scenarios? That is again where teswiz comes into the picture. So let's take some specific examples, starting with multiple users, and see how teswiz solves this problem.

The scenario here, as you noticed from the earlier demo, is a video conferencing platform like Zoom; this was a JioMeet use case. In this example we are saying: given a guest who is on an Android device and a host who is also on an Android device, when they are in a meeting, they can send files to each other over chat as part of the meeting. So the test says the host is going to sign up using the API (we are creating data on the fly), and then the host logs into the Android app and starts an instant meeting using the credentials that were just created. Then the guest joins the meeting from another Android device. Only when there is more than one participant in the meeting is the chat option enabled; hence we say the host should be able to get to the chat window, which is disabled before that. And once you can get to the chat window, you can send a file in a chat message, and the guest should be able to receive it. So each step indicates who is doing the action and where that user is. The interesting thing is that in this orchestration there are two users on two different devices, and they interact with each other as part of the same test. So let's see how teswiz solves that particular problem.

Interesting, the video does not want to play over here; it's not visible, rather. Apologies for that. Let me start it again. There we go. In this case I'm going to run only a small snippet of the test, with two users on Android devices: again, the same two emulators running on my machine, started with a command-line argument. The test starts, the first device is picked up, and the host signs in; we can see the sign-in option come up. Once the host signs in, they start an instant meeting, and when the meeting has started, the implementation captures the meeting ID and saves it in the framework. That same meeting ID is passed to the next user, the guest, who uses the second device and starts the application. In this case, instead of signing in, the guest joins the meeting that the host created. When the guest joins, we can see on the emulators that both users are now in the meeting, and because of that, the chat option is enabled and the next set of functionality can be executed. So in a very easy fashion, where all we did was specify the personas and the platforms the users are coming from, we are able to orchestrate the interaction between them across multiple devices.
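For reference, the multi-user scenario we just walked through might read roughly like this (again a hedged sketch; the persona-and-platform step style is illustrative, not teswiz's exact syntax):

```gherkin
# Illustrative sketch of the two-user, two-device orchestration.
# Each step names the persona acting and the platform they are on;
# the framework drives both Android devices within the same test.
@android @multiuser
Scenario: Host and guest exchange a file over in-meeting chat
  Given "host" signs up using the API
  And "host" logs into the android app and starts an instant meeting
  When "guest" joins the meeting from another android device
  Then "host" should be able to get to the chat window
  When "host" sends a file in a chat message
  Then "guest" should receive the file
```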
But this is not sufficient. Our users might not all be on one platform. What if they are coming from different platforms, where someone is on web, someone is on Android, and so on?

The way teswiz solves this is, again, exactly the same. The only real difference is that alongside the persona doing the action, we specify a different platform. So in this case the host is coming in from a web browser and the guest is coming from an Android device, and that is how they are going to communicate and collaborate in this scenario. Let's take a look at an example. Here, the host is on Android and the guest is on web. We run the test; I still have both emulators running. We see that first the app is installed, the host comes in on Android, signs in, and the meeting is started. And now, instead of the guest joining on Android, a browser opens up and the guest joins the meeting from the browser. When the guest joins, we see two participants in the meeting, and we can continue the orchestration of whatever should happen between those two participants. So this is how you can create multi-user, multi-platform scenarios with a different level of interaction.

Is that all? No. There is something even more interesting that is possible. Let's look at other kinds of scenarios that can be implemented. One is a multi-app scenario. Take an example: I'm ordering something as a consumer, from an Android app, or from the web, or anywhere; the point is that this app is for a consumer to place an order. When the order is placed, it eventually gets to the warehouse: yes, the order has been processed, the payment has been processed, ship the order to the customer. The warehouse does its processing using a completely different set of applications. And from there, when the order finally reaches the last delivery station, the delivery person picks up the items, goes to the delivery address, and delivers the order to the customer. This is a highly simplified e-commerce flow, but at a minimum we see three sets of applications here. What if you want to orchestrate the end-to-end scenario? This is the very tip of the pyramid: pure end-to-end scenarios across different systems and applications. You would have an extremely low number of scenarios for this type of validation, but this type of validation is still important, and you still need a way to automate it instead of having to do it manually. So in this case you have multiple applications interacting with each other as part of one orchestration, and you should be able to automate that.

There is another scenario as well: you might want to know what happens if there are different versions of the application, and whether users can collaborate with each other across those versions. A classic example: the host is on one version of Zoom, the participants are on different versions, and they should still be able to collaborate. How can you run an automated test that lets you validate that? This is where teswiz again solves the problem in a very interesting way: the step now indicates who is doing the action, and you can have any number of personas, any number of users, interacting in that orchestration.
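Before going further, it is worth seeing how little changes between these variants. The multi-platform version of the meeting scenario might look roughly like this hedged sketch, where essentially only the platform named in a step differs:

```gherkin
# Illustrative sketch: same orchestration, different platforms.
@multiuser @android @web
Scenario: Host on Android and guest on web collaborate in a meeting
  Given "host" signs up using the API
  And "host" logs into the android app and starts an instant meeting
  When "guest" joins the meeting from the web browser
  Then both participants should see each other in the meeting
```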
The only limitation is the machine where the test is executing, that is, the number of devices or browsers available on that infrastructure. That is really the only limitation; teswiz can support whatever scale you want. So in this case the step indicates who is doing the action, which application (and which version of it) is being used by that persona to do the actions and be part of the orchestration, and, of course, which platform they are coming from. In this particular example there are three different applications in use: the host is on the latest version of the JioMeet application, the first guest is on an older version of the JioMeet application (both of these on Android), and the second guest joins from the web. And we are able to do this orchestration as well.

So let's take a look at an example of how this works. Here is the scenario we just spoke about. Again, we run it from the command line, with two emulators running. The host starts off on the first device, logs in, and starts a meeting. The meeting is started, and the host is the only person in it. Now the first guest starts, but in this case, although the application looks the same, it is actually a different version of the application that gets installed on that device, and the guest connects from it. The example is a little trivial here because you cannot really see the difference between the applications on screen, but believe me, and the code is actually there on GitHub, these really are different versions of the application being used to run the test. Now the guest has joined the meeting that the host created, and we see two participants in the meeting. And now guest number two, the second guest, comes in from the browser: the browser launches, connects to the same meeting, and we see three participants, and the orchestration can continue from there.

So this way you are able to implement real user scenarios across multiple devices, multiple applications, and multiple platforms as part of the same test orchestration, and achieve the end-to-end validation you want in an automated way, instead of doing it manually for every release. All the code that I ran is run from the command line, using simple properties provided as environment variables or directly on the command line. teswiz just takes these and runs with them; you do not need to make any code change for this.
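Putting the pieces together, the multi-app, multi-version scenario from this demo might read roughly like the following hedged sketch; the step wording, tags, and the property names in the invocation comment are illustrative assumptions, not teswiz's documented interface:

```gherkin
# Illustrative sketch: persona + application (and version) + platform.
@multiuser @multiapp @android @web
Scenario: Participants on different app versions join the same meeting
  Given "host" starts an instant meeting using the latest "JioMeet" app on android
  When "guest1" joins the meeting using an older "JioMeet" app version on android
  And "guest2" joins the same meeting from the web browser
  Then all three participants should be in the meeting
# Run entirely from the command line, no code change, e.g.:
#   PLATFORM=android TAG=@multiapp ./gradlew run
```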
Let's look at what else is there. We saw how the tests run, but just running the tests is not sufficient. We need to see what happened in a test, what kind of reports are generated, and what value we can get from the executed tests, because just running the test is not the important part; knowing the status of the test, and what you need to do next, is. For this, teswiz integrates with three different tools from a reporting and coverage perspective. One is ReportPortal, which gives you a holistic view of your test execution: a one-stop shop for all your test executions. teswiz also integrates with Applitools Visual AI and the Applitools Ultrafast Grid to include visual testing as part of your test coverage. The Ultrafast Grid gives you seamless scalability across different browsers and devices for your web-based execution, so it automatically increases your test coverage; you do not have to write as many validations yourself. And another important aspect, a question I get asked very often: what is the code coverage from your functional tests? We do not do code coverage. I strongly believe you do not get enough value, or any value, from capturing code coverage from your functional tests. What is important instead is capturing the feature coverage, the functional coverage. Let's see how teswiz addresses each of these.

First, ReportPortal. The scenario we define in Cucumber, in given-when-then form, is represented in exactly the same way in ReportPortal for the executed test. You can drill down into these reports and see the screenshots, the visual test results, and any additional artifacts such as log files, all attached to the same test scenario. So when you come here for a failed test, hopefully 99.9% of the time (which is what I would like to believe) you do not need to run the test again to figure out why it failed: we have captured the device logs and the browser logs, we have the visual test reports, and everything is integrated in, or linked from, this one place. This is the only link you need to bookmark and use for your test result evaluations. In addition, ReportPortal has an amazing capability for result analysis and trend analysis. You can create widgets on the ReportPortal dashboard that give you insights such as which of your tests are flaky, how long test execution is taking, and what the reasons for the test failures are. If you start making decisions on the failed tests and marking them as a product issue, automation issue, data issue, environment issue, flaky test, or whatever your categorization might be, and you keep doing this diligently, ReportPortal will automatically carry the analysis forward and tell you, in a very seamless way, what is happening with your test execution. So this becomes a very powerful way of seeing what your test results look like, and the trend of your execution over a period of time, which is very important for understanding how your product quality is evolving.

Second, Applitools for visual testing. Because visual testing is integrated, I have to write fewer assertions in my code. The only assertions I probably still need to write are business assertions ("I should have seen the quantity as X"), because those are business-rule validations. All of your functional, UI-based, user-experience validation happens through Applitools, simply by choosing the right type of comparison algorithm in your implementation. Implement your tests correctly and you get the visual results automatically. And of course, you can provide the Ultrafast Grid configuration in your test implementation itself, and with that, as part of the same test execution, you automatically get increased coverage across all your different browsers and devices. What this means is that your test may have passed functionally, with all the actions completing the way you wanted, but there might still have been visual differences, user-experience issues, or a design that does not conform to your baseline, and you are able to do that validation as part of the same test execution.
And this of course works for native apps as well as browsers; in fact, it also works for desktop applications, which I'll talk about shortly.

The last part of reporting is feature coverage, or functional coverage. Because we write the tests in Cucumber-JVM, we can add as many annotations (tags) as we want to each test, and I recommend you add them for the functional components or modules of your application. When you run your suites of tests (smoke, sanity, regression), a report is generated at the end of the run with a heat map of the tags used in that execution. That way, if I have a banking product, "add a payee" might be one set of tests, "transfer money between accounts" might be another, and "view balance" might be another. Now, adding a payee might be less important than doing a transfer between accounts. With the heat map, you can see how many tests carried the tag for account balance versus add-a-payee versus money transfer. And if the prioritization looks wrong, if, for example, there are more tests for add-a-payee than for the others, you know your tests are not prioritized correctly: you are implementing and running tests that are not the most important for your business. This is how you can make your feature and functional coverage visible from your executed tests, and hopefully that gives you insight into where you need to focus more in your implementation.
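To make the feature-coverage idea concrete: the tags that drive the heat map are ordinary Cucumber tags on your scenarios, along these lines for the banking example (a hedged sketch; the tag names are simply whatever functional modules you choose):

```gherkin
# Illustrative sketch: tag scenarios by functional module; the heat map
# then shows how many executed tests touched each tag.
@accounts @transfer
Scenario: Transfer money between accounts
  Given "customer" is logged into the banking app
  When "customer" transfers money to another account
  Then both account balances should be updated

@payees @add-payee
Scenario: Add a payee
  Given "customer" is logged into the banking app
  When "customer" adds a new payee
  Then the payee should appear in the payee list
```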
Next, let's look at what makes teswiz unique and why you should consider using it. The unique capabilities: it is open source, and you can use it to automate real user scenarios that are multi-user, multi-device, and multi-application. It also has various cloud device farm integrations, so you do not have to run your tests locally; you can run them on device farms very easily as well. In terms of support, it covers all the popular web browsers, mobile web browsers, Android apps, iOS apps, and Windows desktop applications. The tech stack: Cucumber-JVM is how we write the intent of the test; Appium Test Distribution manages the Android, iOS, and Windows applications using Appium; Appium does the last-mile interaction with those devices; Selenium WebDriver interacts with the browsers; ReportPortal is the central reporting server; visual testing is done with Applitools Visual AI, with seamless scaling (the modern way of doing cross-browser testing) through the Applitools Ultrafast Grid; and the build tool is Gradle.

Now, how do you run the tests? We saw the command line I used to run the tests earlier. The command line is a way to override the defaults that teswiz provides. In fact, there are three layers in which properties or arguments can be provided to teswiz to control what is executed. First are the defaults that exist within teswiz itself. Certain defaults are assumed, but teswiz does not assume much. I started off saying teswiz is an opinionated framework, or somewhat opinionated: we do not do magic, and we do not take decisions automatically. If something expected is not provided explicitly, we would rather fail the test and make the error very visible to you, so that you remain in control of how your tests execute. So there is a small set of defaults, and you provide the additional required information in the form of property files, plus some JSON files for the capabilities required for device interaction. And although I have property files, I can override them easily by providing environment variables with the same names, and teswiz will automatically use the overridden values for execution.

This, I believe, is very important. Say the testing team is implementing these tests, but you want your developers to use them as well, and they do not want to run the full suite, only the few tests that matter for the changes they have made. Just tell them: clone this repo, and here is the command to run that subset of tests; they can easily do that. Likewise, if you want to run only a specific set of tests for an investigation, or for any other reason, you can do so simply by passing these arguments on the command line or as environment variables. They override the property-file values, and the tests execute accordingly, without a single code change.
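As an illustration of that override model, a developer could pick a subset of tests purely from the command line. The property names and commands below are assumptions for the sketch, not teswiz's documented interface:

```gherkin
# Illustrative sketch: select a subset of tests via environment-variable
# overrides, without touching code or property files, e.g.:
#   PLATFORM=web TAG=@chat ./gradlew run        # just the chat scenarios, on web
#   PLATFORM=android TAG=@smoke TARGET_ENVIRONMENT=dev ./gradlew run
@smoke @chat @android @web
Scenario: Send a chat message in a meeting
  Given "host" starts an instant meeting on the configured platform
  When "guest" joins the meeting and sends a chat message
  Then "host" should see the chat message
```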
Okay, so that is what teswiz is about. How can you proceed from here? There are two ways. One, you could be a consumer of teswiz: you like what you see and think it will add value for you. The easiest way to get started is the getting-started-with-teswiz project. It is a kick-starter project with sample code, including the same sample tests I ran and showed in the demos. Clone the repo, run the tests, and you will see how teswiz works; then start making changes and adding your own tests in place of the samples, and you are using teswiz for your project. The other way is to help make teswiz better. As I said, teswiz is on GitHub, and I have listed there a set of enhancements and additional capabilities that I think would be important: features, enhancements, and fixes for things that might not be working so well. If you are interested in contributing to teswiz, come over; let's collaborate, figure out how to implement some of these features, and get them merged into teswiz for your own use as well as for others. And of course, it is the age of social media: what is the use if we do not have more stars and likes? The least you could do, if you like this approach and this thought process, is go to GitHub and give teswiz a star, to help make it a little more popular and give me and the other contributors more incentive to make additional capabilities available for you. With that, I would like to say thank you, and I hope to have more conversations with you about what else you would like to see here to make it better. So let's move to some questions.

Pradeep is asking: "You touched upon the automation heat map; I hope this is the coverage report. If we use the right tags, will the tool automatically pick them up and show the coverage?" Yes, Pradeep, this shows the coverage from the functional automation perspective. This is not the code coverage; just to be very clear, this is not code coverage. If I go back to one of the slides: it is really whatever you have implemented as tags on the tests that is used to create the heat map. Now, teswiz also provides a very easy way to say which tags you do not want to show up. For example, login might be a common module across 90-95% of your tests, and you probably do not want to see it in the heat map, so you can very easily exclude it. So yes, this is functional coverage; absolutely spot on. It depends entirely on how you have added these tags; teswiz just creates a heat map out of them based on the execution. Of course, what this means is that a test has to run in order to appear in the heat map: if a test is excluded, its tag will not be included in the heat map. Just clarifying that aspect. So I hope that answers the question, Pradeep.

The next question, from Saurabh: does this framework have the capability to run the tests in parallel? Yes, it does. I strongly believe in certain principles and practices being adopted in a framework to make it usable, reusable, maintainable, and scalable. One of those aspects is, of course, a clear test intent: if I do not know what is being executed, everything else is futile to me. Another is fast feedback, which means the tests have to run in parallel, so teswiz supports that as built-in functionality. You just change the parallel count: the default can be set as a property value, and you can change that value on the command line as well. So parallel execution is there, visual testing is there, and because it runs from the command line, you can easily integrate it into your CI pipelines and run it as often as you want, with any combination you want, automatically, without having to create different TestNG files or anything like that; there is no such manual configuration in teswiz. So, Saurabh, I hope that answers the question: yes, it can run the tests in parallel, for web as well as for mobile apps.

Amai is asking: test data is very critical for end-to-end testing; how is that automated in teswiz? Well, teswiz does not automate your test data itself. It is a framework that gives you the capability to do what you want to do. It supports making API calls, but you have to implement those API calls to define how you create or query data from your application and use it in your tests. What teswiz makes very easy is defining these aspects as part of your framework. For example, if I open the getting-started-with-teswiz project in the browser, under src/test/resources you will see an environments file. Your tests should be able to run against multiple environments (dev, QA, pre-prod, prod), so you can specify the environment data as a separate concern, and your test data for each environment can also be specified there for your tests to use. The parameter that you pass in to teswiz lives over here in the configs.
You would say, for example: if I am running a local test for JioMeet, my target environment is prod. And again, you could override that from the command line to say, let me run the same test against dev and see what happens. The target environment determines which environment data and which test data are used for running the test. I hope that clarifies test data: teswiz does not create it; it gives you the capability to create it and keep it separate per environment, so that your tests can run seamlessly against any environment you want.

The next question is from Prashant: "We already have Cypress-based tests written. Can I integrate teswiz so I can test across devices and measure coverage?" teswiz internally uses Appium and Selenium WebDriver to run the tests, so I do not see an easy way for you to run Cypress-based tests here. Maybe that could be a capability to add: instead of using Selenium, run with something else, since this is just the driver we interact with. How do you manage the driver? I frankly do not know how Cypress manages its driver instances, so maybe that is a conversation, Prashant, we should have separately at the hangout table or afterwards, to understand more. But assuming your tests can run with Cypress, the rest of the infrastructure is still the same: as long as you create JUnit reports from your Cypress tests, you will be able to create the heat map and measure coverage from them as well.

Pradeep is asking: "Is this same facility available in plain Jenkins? I'm not an automation test expert; excuse me if I ask the wrong question." There are no wrong questions, Pradeep; all questions are good questions, so do not hesitate, and please ask any additional questions. In Jenkins, you would create a job, and the way the job runs the tests is by executing a set of commands that say what you want to execute. That command is exactly what I ran from the command line. So the very simple answer is yes, it can work with plain Jenkins as well. I have a lot of teams using teswiz in a very large enterprise with Azure Pipelines, in build as well as release pipelines, and they are using it for web as well as mobile, on device farms as well as local devices. This works fine with any CI tool, because the only requirement any CI tool has is: give me a command that I need to run on a particular agent, and the tests should run from there. So yes, it will work.

Oh, there are some more questions, and I think we still have time, so I will answer them here. "Does this framework spin up pods in real time on GKE" (I am guessing GCP is meant) "or AWS, for parallel execution?" No, this is not going to spin up any pods. Parallel execution is handled on the local machine where your tests are running. If you want to distribute your test executions, you would use a device farm; in that case you do not need any additional local resources, because the device farm provides a device or a browser on which each test runs. So that works fine in that way; it does not spin up any pods, and it is not required to.

Next question: "Can we run teswiz against our application? How different is it from the Selenium automation tool, with which our test cases are already automated?" At the core it is nothing very different, because internally teswiz is still using Selenium WebDriver, and it is still using Appium for native app automation.
What teswiz does for you is abstract away all the pain of managing the browsers and devices, managing the reporting, and doing visual testing, all in a seamless fashion. You do not need to build any of that. You just focus on implementing your tests: there is an approach for how data should be specified and how environment configuration should be specified, so provide the data and focus on implementing your tests and getting the maximum value of automation for your application. You do not need to worry about anything else; that is what teswiz provides for you. You could also very easily take inspiration from teswiz, from the features and capabilities it has, and enhance your own framework to do the same things. The difference is that you would have to build and maintain that infrastructure and framework logic yourself, whereas teswiz takes care of all of that. So I hope that answers that question as well.

And with that, I think we have a few more questions coming up. Interesting, thank you. "Can I use teswiz to test an end-to-end scenario which involves a desktop and an enterprise application?" Of course, yes. The only criterion is that wherever you are running the tests from needs to have connectivity to the applications you are talking about. So yes, you can do that, but if you want a more concrete answer, we would need to talk in more detail, in person or over chat, Twitter, or LinkedIn, and I will be able to guide you on how to make that happen.

"Can we get guidance on its usage through videos or docs?" Yes, I am building out a lot of documentation; that is an area which is slightly weak right now. The existing set of functionality works pretty well, but for getting started, I am still working on the documentation. I am looking for a use case, someone who wants to use it, so that I can create documentation based on actual usage, with their help. I would love to get that participation from you: we create the documentation while helping you get started, and it becomes a reusable asset as well. For now, the best way is to go to the teswiz GitHub; there is a README that I am trying to beef up as much as possible, and I will add more videos of this as well to make it easier for others.

Okay, we are done with the questions. Is there anything in chat? The question in chat was the same as the one in the Q&A, so I guess we have covered all the questions here. Thank you very much for this opportunity. The questions indicate that you do see some value in teswiz; I would love to collaborate more with you, and I will be at the hangout table so we can continue the conversation there. Thank you again very much for the opportunity, and I look forward to working with you in the future as well. Have a great rest of the conference.

Thanks, Anand, for sharing your experience with the automation tool, the open-source automation tool teswiz.