Welcome to this session, Advanced Automated Visual Regression Testing, by Shweta Sharma. We're glad you joined us today, and we'd like to thank BrowserStack for sponsoring this session. So without any further delay, Shweta, over to you. The stage is yours.

Thank you, Harbik. I'm just going to turn off my video because it would interfere with my demo. So good morning to everyone attending from Asia and India, and good afternoon if you're attending from a slightly different time zone. Today's session is about Advanced Automated Visual Regression Testing, as the title suggests. My name is Shweta Sharma, and I work as the Director of QA Services at Axelerant. These are quick social media details about me; if you'd like to stay connected, please feel free to reach out on any of these.

Quickly, about Axelerant: we are a completely distributed, remote-working organization spread across six different time zones, and we've been in the industry for over 10 years — around 13 now, in fact. We're small in terms of strength: 110+ enthusiastic, kind, and open professionals. When I joined Axelerant there were just 25 of us, so we've seen growth in strength as well. We've had over 150 partner engagements, and we are primarily a Drupal agency, which means we contribute a lot of code to Drupal, which is open source; to date there have been around 1,000+ open source contributions from Axelerant.

This is the agenda, and not a readout — it's just for the recording and for you to glance through. This is what I'll be talking about for roughly the next 40 to 45 minutes.

Let me first brief you on the concept. What do we mean by testing visuals? When you say you're going to test the visuals, what you mean is that you're going to test the user interface, or the graphical user interface — just keep adding prefixes to "interface." You're going to test how a particular interface looks on several browsers and several mobile devices, because mobile is both the present and the future, isn't it? And when you add the prefix "automated" to this kind of testing, it means you're going to automate the verification of the user interface. Simple — that's how the term is derived.

So when you say you're going to check the user interface, what exactly do you do, as a tester or as anyone else on the team? First, you check the visual content — whether there's a lot of it or a little, you check the content. Second, you definitely check the page layout: broadly, the page is divided into, say, the header, the footer, and the main section, and you check whether they appear correctly when bundled together. And third, responsive design: you want to ensure your application looks visually perfect at various resolutions. OK.
Before diving into how we do it, it's really important to understand the objective behind it. It's a sad state of affairs that when people want to jump into automation, they focus so much on the tool aspect and on which language to choose, without understanding the objective. So let's first understand why we need this sort of automated testing in place.

The first point is the human factor. What do I mean by that? There are two limitations I have actually experienced. One is the limitation of the human eye itself: take two different hex codes of orange. They really are two different hex codes, but the human eye will perceive them as one orange color. And I've worked with clients who are extremely particular about having the exact hex codes in place — the entire family of orange is not the same to them, and there's the science of accessibility behind it, which is why they wanted it that way. The other limitation is that whenever humans are involved in repetitive, mundane tasks, they are prone to error. I'm sure that if you check the same user interface for the next six months, you're going to miss even the most obvious bug on the homepage. That's a given; that's how we're programmed as humans.

The second factor is the large device and OS matrix, which adds to longer release cycles. Give me a thumbs up if you test on at least three browsers, plus, say, an iOS phone, an iPad, an Android phone, and an Android tablet — that's the minimal requirement we have at Axelerant, for any project. And to add to it, we've also had specific requirements for IE11. Of course, it's a different story that support may be dropped and IE11 is moving to Edge completely — but the number of browsers keeps adding up. Just imagine your QA team needing to verify all this on Chrome, Firefox, Safari, and then the UC Browser on mobile — I've worked with a client who was very focused on UC Browser. Doing it all manually is just going to keep you behind on releases and out of the market.

This is, again, a very important point: your automated functional suite doesn't really verify the UI. Does it? No — unless you've written assertions to verify the CSS, which is again a time-consuming task and not as effective as what a dedicated tool would do for you. We'll see an example.

And last: as human testers, we should be focusing on what we do best, which is verifying usability and understanding how user-friendly the application is. Instead of doing mundane tasks — I'll reiterate — leave the mundane, repetitive work to tools, and use your creativity and innovation to bring the kind of value to the application that humans, and not tools, can bring.

So let's also quickly understand how the basic algorithm works. Even if many of you already know it, I'd like to quickly reiterate.
Once the test runner is initiated, what happens on the first run is that your script captures baseline images and stores them. On subsequent runs, it captures the screenshot again at runtime, compares this current screenshot with the one stored as the baseline image, and runs a comparison algorithm. Once the comparison is done, there are two outcomes: either the test passes, or it reports differences. It doesn't fail the test, because the tool doesn't really understand whether the test has failed or not; it just reports differences.

And when it has reported differences, there are again just two possibilities. If the differences are unintended — differences you did not expect — it means it's a bug: the tool is highlighting a genuine defect. If they're intended — you were expecting that difference — it means your baseline image needs to be updated. For example, say your baseline screenshot of the header did not have the add-to-cart icon, but after the feature was developed, the add-to-cart icon appeared as per the design; the tool reported the difference, and the right response is to update the baseline.

So let's take a quick look at a demo of this basic concept using BackstopJS. I'd request the attendees to go full screen so that this is properly visible — it's a recording. What we're going to do is run the command `backstop reference`. This command captures all the baseline shots, which is the second step; it says "creating new reference files," and once the references are created, it stores those images here. Since I had configured BackstopJS to capture screenshots at three different viewports — desktop, phone, and tablet — it has done that accordingly.

The website I've used for the demo is Urban Hipster, which is built using Drupal Commerce. If you see, I've captured the screenshot of the category listing page for women, and it has captured it correctly for desktop; this is for phone — a full-page screenshot. Now we rerun the test, assuming some parallel development happened in the meantime, and check for regressions using the command `backstop test`. Fantastic — all three tests have passed and nothing broke. The reference image is on the left, the image captured on the actual run is under the test column, and it looks fine for all three viewports. No differences were reported.
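To make that workflow concrete, here is a minimal sketch of what such a BackstopJS configuration could look like. The URL, labels, and viewport sizes are illustrative stand-ins, not the exact config from the demo; `backstop reference` builds the baselines, and `backstop test` runs the comparison against them.

```js
// backstop.json shown as a JS module (BackstopJS accepts plain JSON too).
// Illustrative values only -- adjust the URL, labels, and viewports.
module.exports = {
  id: 'urban_hipster_demo',
  viewports: [
    { label: 'desktop', width: 1920, height: 1080 },
    { label: 'tablet',  width: 768,  height: 1024 },
    { label: 'phone',   width: 375,  height: 667 }
  ],
  scenarios: [
    {
      label: 'Women category listing',
      url: 'https://example.com/women',  // hypothetical URL
      selectors: ['document'],           // capture the full page
      misMatchThreshold: 0.1             // % of pixels allowed to differ
    }
  ],
  paths: {
    bitmaps_reference: 'backstop_data/bitmaps_reference',
    bitmaps_test: 'backstop_data/bitmaps_test',
    html_report: 'backstop_data/html_report'
  },
  engine: 'puppeteer',
  report: ['browser']
};
// Workflow: `backstop reference` captures baselines, `backstop test` compares,
// and `backstop approve` promotes the latest test shots to new baselines.
```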
But things are not really as rosy as they appear to be in demos and presentations, right? There are challenges with this sort of automated testing, as with any other kind of test automation, and what we want to do is look for potential solutions. The primary challenge of automated visual testing that I'll address here is anti-aliasing. What is anti-aliasing, really? In layman's terms: every machine has a different hardware configuration, with different software on top of that hardware, so a picture captured on a Mac will differ from a picture captured on a Windows machine. If you run a comparison between those two images — oh my god, the tool is going to scream every single time. That's one of the primary challenges.

The second challenge is dynamic content, which is interesting. If you go to, say, the Yahoo homepage, or Yahoo News, the layout might be similar every day, but the news is definitely going to change — sometimes twice or thrice a day. That means every time the text changes on the application, the tool will cry out: oh my god, there are so many differences! And there's other dynamic content too: even on a comparatively static page, you'll have different ad blocks, or a slideshow running with different content displayed. How do you deal with such challenges? Because every time there's a difference, the tool is going to scream — and these differences are inevitable. You cannot expect a site to be static.

OK, so for the first primary challenge, anti-aliasing, these are the few solutions we've tried at Axelerant. One: use a Docker setup. If you're not aware of it, it gives you a uniform environment every time. If your Docker setup contains, say, a Linux box, and you run those tests on a Chrome browser within that box, then when other team members run the tests, the probability of running into false positives decreases. Another solution — and this one doesn't come free, because cloud services aren't free — is to run your tests on BrowserStack or Sauce Labs; that is, on the fixed set of browser, device, and OS combinations those services provide. That again solves the anti-aliasing problem if your team is on different machines and different browsers: just go to the cloud. And the third option is to go for a tool which handles anti-aliasing implicitly — there are tools that do, and we'll come to one later.
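As a hedged illustration of the Docker approach: pinning one rendering engine in the config, and running the capture inside BackstopJS's own Docker image, keeps rasterization and anti-aliasing identical across every team member's machine. The flags below reflect my understanding of the tool and may vary by version.

```js
// Fragment of a BackstopJS config: one headless engine for the whole team.
module.exports = {
  engine: 'puppeteer',        // the same rendering engine on every machine
  engineOptions: {
    args: ['--no-sandbox']    // commonly needed when Chrome runs in a container
  }
};
// Recent BackstopJS versions can run the whole capture inside their official
// Docker image, e.g. `backstop reference --docker` and `backstop test --docker`.
```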
Now, I've been talking about the dynamic content challenge, and I'd like to demo it as well so you can relate to it better. Let me go full screen. I'm running the previous test again, but only on the home page this time, which has a slideshow — so it's going to fail for sure. And you see: three failed. The home page test for desktop, phone, and tablet has failed on all three, and you can see the differences highlighted here in pink, in this section. We can clearly see that the slideshow is the reason this particular test failed. This scrubber, by the way, is one of the features provided by BackstopJS. The reference image here was created with the "exclusive styles, new arrivals" slide, whereas during the test it actually captured the first image of the slideshow — and that's why the tool is screaming that there's a difference. By the way, I forgot to mention: BackstopJS is a free tool, so you should give it a try; I'll include it in the references.

So you saw how dynamic content really is a challenge. What are the solutions for it? There are primarily two strategies, called hide and remove; both WebdriverIO and BackstopJS utilize the same strategy here. The hide strategy hides all elements matched by the selectors you list — it fills each element's space with a solid block. So for the slideshow, we simply hide it. The remove strategy removes the elements captured in your configuration files from the page entirely. To help you choose: good candidates for hide are ad blocks and images that change; good candidates for remove are sticky headers and footers, and pop-over help chats — those chat windows that appear on your application. You don't really need those when capturing a screenshot, so use the remove strategy there.

And let's see the strategy in action. If you see here, what I've done is hide the homepage carousel slide: I've put its CSS selector in the hide selectors. This is an array, so you can obviously hide multiple such elements. There's also a flickering beacon marker on the website — I'll show it to you later — which I don't want captured in my screenshots because it just interferes, and since it isn't adding any value, I removed it. This is a screenshot from BackstopJS. What I did was capture a new reference image after introducing that strategy, and you can see how the slideshow area is now filled with a solid block; the next run will compare against this new reference image. And note that it doesn't interfere with the page layout — I haven't removed the slideshow completely, because that would shift the layout, which is not a good thing to do in this case. So let's quickly look at the demo now: you see, all three have passed.
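Here's roughly what those two strategies look like in configuration form — a minimal sketch of a BackstopJS scenario, with hypothetical selectors standing in for the demo site's carousel and beacon markers:

```js
// Illustrative BackstopJS scenario using the hide and remove strategies.
{
  label: 'Homepage',
  url: 'https://example.com/',           // hypothetical URL
  hideSelectors: ['.carousel--hero'],    // slideshow: space is kept but filled
                                         // solid, so the layout is undisturbed
  removeSelectors: ['.beacon-marker'],   // flickering markers: dropped entirely
  selectors: ['document'],
  misMatchThreshold: 0.1
}
```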
All three passed because the reference image is now also one without dynamic content; we've handled that, and hence the test passed. This is how the reference image looked, and the test image likewise. Clear? So that's one of the ways to handle dynamic content.

Coming to testing strategy: capturing screenshots can be done at three different levels. One is element level, the second is full-page screenshots, and the third is the current viewport. (I'll show a small configuration sketch of these levels in a moment.) When would you want each, really? Element-level comparison I wouldn't recommend unless development is in progress, and even then, the grouping of components has to be done logically: for example, run the comparison on the header, or on the footer. Don't just grab some arbitrary dropdown or checkbox as the element to capture. So: development in progress — yes, the element-level strategy is good. And if you're using component-based libraries like Storybook or Pattern Lab, element level makes a lot of sense; there are tools which will help you extract stories from Storybook, and I could do a whole separate talk on a testing strategy using Storybook. But in brief: yes, use element level when you're using component-based libraries.

When would you go with full page? For example, when you want to compare two similar environments. What I mean by similar environments is that you have a website running in production, and a stage site which is a replica of your production environment. You've pushed your recent changes, and before pushing to production you want to ensure things look fine on the stage UI; comparing the two after the push gives you exactly the result you're looking for. Also, once an entire page is developed: we work a lot on constructing pages and websites using Drupal, so we know development there happens in increments — I'm not going to have the entire homepage ready in one sprint. So once the entire page is developed, go ahead and update your automated visual tests — yes, test maintenance applies here as well — and switch the screenshot strategy to capture the entire page.

Also, plan the level of visual coverage that is actually needed, including for your browsers and devices. Every project is different, because our customers are different: I've worked with customers in the Middle East, and they have an entirely different browser and device matrix, trust me. There's absolutely no point in providing coverage for browsers that aren't needed there — just remove them. Plan that beforehand. Identify patterns from your previously learned lessons: if you've observed that certain types of pages break on certain browsers or devices, capture that in your lessons learned, create your documentation, and ensure those kinds of pages are always checked on those browsers and devices, so you don't miss them. And application-wise: don't capture all pages.
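And here is the sketch I promised of the three capture levels, again as illustrative BackstopJS scenarios with hypothetical selectors; `'document'` and `'viewport'` are the tool's special selectors for the full page and the visible area respectively.

```js
// Three capture levels as BackstopJS scenarios (hypothetical selectors).
[
  {
    label: 'Header component',            // element level: useful while that
    url: 'https://example.com/',          // component is still in development
    selectors: ['header.site-header']
  },
  {
    label: 'Homepage viewport',           // only what's visible without scrolling
    url: 'https://example.com/',
    selectors: ['viewport']
  },
  {
    label: 'Homepage full page',          // once the entire page is developed
    url: 'https://example.com/',
    selectors: ['document']
  }
]
```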
Recently we got a project with hundreds of pages — a legacy application developed using ABC technology that they now want rebuilt in Drupal 8. Do you think we're going to capture screenshots for all those hundreds of pages? Definitely not. We're going to choose appropriate samples. What I mean by samples is that we have different categories of pages — a landing page, a listing page — and within landing pages, different combinations of how they want the page to look. We'll choose appropriate samples and capture screenshots of those; we're certainly not capturing screenshots of hundreds of pages.

Good practices to follow — sorry, I think the first point is repeated, so let's move on to the second, which is organizing the test suites. As I mentioned, take your lessons learned: it's not compulsory to execute everything on all browsers. I'll show you how I've actually categorized features and tests per browser: you can have certain tests running only on Firefox, or only on IE11. If you feel there's a certain set of important test cases, you can have a smoke suite for your automated visual tests as well, executed on all browsers and devices, while the rest run on fewer — that's simply about reducing your execution time. And we'll also be talking about a few commercial, not completely free, tools; they charge per screenshot, so you have to use your plan wisely. That's another reason we need this kind of strategy in place.

Identify the frequency at which to run your visual tests. Do you really want to run them with every build? In one of our projects, what I asked the team to do is run them after every deployment — that's enough, because that's the strategy that project needed. We didn't want to run them on every commit; it's not that kind of project, so it wasn't needed. You save on execution time — and forget execution time, you save on your plan. If you're an organization newly looking to adopt these tools, you'll have to think about that.

So this is a practical example of how you can organize tests per browser. If you see this panel, what I've said is: run the blog listing test only on Firefox, and run the contact-us form test only on IE11, because we found that forms were breaking on IE — so I said, fine, run the contact test only on IE. And similarly, these are our common tests: the blog node page — the main blog page — is an important test for me, and that's why I run it on all browsers and devices. Talking about running them in the cloud, this is the configuration from WebdriverIO. You can specify BrowserStack capabilities here, clubbed with Selenium: the browser, the OS, the exact version, the build name, the project name — and you can also specify which tests should run on each particular combination.
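A hedged sketch of what that per-browser organization could look like in a WebdriverIO config with BrowserStack capabilities — the spec file names, versions, and project labels are placeholders, not the actual project config:

```js
// wdio.conf.js fragment (illustrative): capability-level `specs` route
// particular visual tests to particular browser/OS combinations.
exports.config = {
  user: process.env.BROWSERSTACK_USERNAME,
  key: process.env.BROWSERSTACK_ACCESS_KEY,
  capabilities: [
    {
      browserName: 'Firefox',
      os: 'Windows',
      os_version: '10',
      project: 'visual-regression',
      build: 'nightly',
      specs: ['./tests/visual/blog-listing.spec.js']  // Firefox-only test
    },
    {
      browserName: 'IE',
      browser_version: '11.0',
      os: 'Windows',
      os_version: '10',
      specs: ['./tests/visual/contact-form.spec.js']  // forms broke on IE11
    }
  ]
};
```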
Creating suites is another good practice I'd suggest, because when you want your tests running as part of the CI pipeline, you may want only the smoke tests to run as part of the build, if that's the strategy you're adopting in order to reduce build time. So start creating your suites well ahead of time — once you have a lot of tests in place, doing it later is time-consuming. Take a proactive approach rather than a reactive one. And divide suites per feature: if tomorrow there are changes only to the blog feature, I wouldn't be interested in running all the tests; smartly, I'd run just the tests related to the blog suite, for quick and immediate feedback. So divide your automated visual tests into suites as well — there's a small sketch of this coming up after this part.

And how did we integrate it into our development workflow at Axelerant? This is how the CI/CD pipeline looks. Once the code is committed, we have a DB Docker running. The advantage of the DB Docker is that we don't have to worry about creating the test automation data every time; the data is seeded quickly into the database, and our acceptance and visual regression tests run in no time, because the scripts primarily focus on having more valuable tests and assertions in place rather than worrying about test automation data. And once those tests have run, the code is deployed to the various servers.

Now let's talk about some practical concerns when you want to introduce automated visual tests as part of the CI pipeline. One big challenge is storing and maintaining images. Just imagine a scenario where two developers are each working on their own UI feature: how do you expect each of them to capture a screenshot and store the image in CI? Where exactly is it stored — in your repository? And what about the maintenance part? These are actual challenges in CI. Another is identifying the comparison environment. The first time a developer pushes her code, the baseline is created; when she pushes again, what does the comparison run against? She's on her feature branch, isn't she? Does the comparison run on the same feature branch? And after she's merged to the master branch, what happens then? These are a few things you'll have to think about in CI.

The resolution: at Axelerant we've used tools like Percy. Percy is primarily our tool for the CI pipeline, and for smaller projects we use the free tier available from Applitools, because we all personally like that tool — it has a lot of advantages over the others. But when you talk about CI, Percy is an affordable tool, really: it takes care of storing the images, and it has built-in logic for comparison in CI.
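As promised, a small sketch of suites: WebdriverIO lets you group spec files into named suites, so CI can run just the smoke suite on every build and the feature suites on demand. The suite names and paths here are hypothetical:

```js
// wdio.conf.js fragment (illustrative): one suite per feature plus a smoke suite.
exports.config = {
  suites: {
    smoke: ['./tests/visual/homepage.spec.js'],
    blog: [
      './tests/visual/blog-listing.spec.js',
      './tests/visual/blog-node.spec.js'
    ],
    checkout: ['./tests/visual/checkout.spec.js']
  }
};
// Run a single suite from the CLI, e.g.: npx wdio run wdio.conf.js --suite blog
```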
The actual implementation I'll show is from one of our in-house projects: last year we migrated the Axelerant website to Drupal 8, and that's when we had visual tests implemented for this project as well. Here, on the left, you see there's nothing: it says it's a new snapshot, because we've captured the screenshot for the first time. And when you rerun, it says there are no visual changes. Fantastic, right? These are changes from develop — we've set the logic to run these visual tests on the develop branch. Our pipeline consisted of a few code quality checks — the Drupal code quality check, the front-end code quality check — and then our visual tests using Percy. This is the implementation in GitLab.

Now, there was one difference reported by the tool. If you look at the footer down below, the tool just screamed in orange, saying something's wrong here. One of our testers collaborated and commented that it's a padding issue — you can see the comment here. The tool immediately flagged that a difference was found. And what was the difference? You can see it here: this padding below "Careers" was missing. That was the issue. We've also integrated it with Slack. You'll have to do this if you're looking beyond local visual validation: if you want the feedback to reach the entire team, you should have your collaboration tools integrated with your visual testing tools as well. Percy provided that, so we just leveraged it.

So what were the key results achieved? One was test data: as I mentioned, since we used the DB Docker, we didn't really have to work on handling the data — and we didn't even run into the dynamic content issue at all, because the database was consistent every time. The tool stack we used integrates seamlessly: we use Cypress, we use WebdriverIO, and the integration of Applitools and Percy with Cypress and WebdriverIO is out of the box. All you have to do is use their libraries and APIs and connect them; it's not difficult. The visuals we validated for the site were a replica of the wireframe: since we had populated the database to be as close to the wireframe as possible, our visual tests ran very close to the wireframe. And fewer assertions were required for acceptance test automation — I'll show you how.

If you look at this example, it tests the contact-us form on the Drupal Commerce website we saw earlier. If you observe this particular test closely, there isn't a single functional assertion written here. All you can see is one visual assertion, at line number 19, which checks the entire page — and it validates the functionality too: the header, the footer, and, after submitting the contact-us form, whether the response message is correct. I haven't written a single functional assertion for whether the header is visible, the footer is visible, or the response message is visible — there's even a map associated with this page. That's the power of having automated visual tests.
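The original test isn't reproduced in full here, so as a hedged reconstruction of its shape — using Cypress with Percy's snapshot command, and entirely hypothetical routes and selectors — one full-page visual check can stand in for many functional assertions:

```js
// cypress/integration/contact.spec.js (illustrative sketch)
// Assumes @percy/cypress is installed and imported in the Cypress support file.
describe('Contact us form', () => {
  it('submits the form and visually validates the whole page', () => {
    cy.visit('/contact');                          // hypothetical route
    cy.get('#edit-name').type('Jane Tester');      // hypothetical selectors
    cy.get('#edit-mail').type('jane@example.com');
    cy.get('#edit-message').type('Hello there!');
    cy.get('#edit-submit').click();

    // One visual assertion covers the header, footer, map, and success
    // message -- no individual functional assertions required.
    cy.percySnapshot('Contact page after submit');
  });
});
```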
What's the biggest limitation so far, really? We've spoken of BackstopJS, we've spoken of Percy, and — though we haven't covered them in detail — there are visual regression services available for WebdriverIO as well. All these tools do pixel-to-pixel comparison, which means you run the risk of false positives. That's the biggest limitation. These tools work on what's called a fuzz factor: you have to specify a mismatch tolerance. The lower the mismatch tolerance, the better — I wouldn't say the more robust — the feedback from your tests. But to cope with pixel noise, if you keep increasing the mismatch tolerance, that becomes a pain: you might miss out on actual issues. Let me show you where the mismatch tolerance sits — if you see here, this misMatchThreshold is set to 0.1. If you run into pixel issues and raise that 0.1 to, say, 0.8, you might miss actual issues as well.

And that's how we arrived at Applitools. Why Applitools? Because of this biggest limitation — pixel-to-pixel comparison — which Applitools doesn't do. I'm not going to run through all these points; they're for you to glance at after the session. But broadly, Applitools provides AI-based comparison, and that's why we don't run into false positives: it doesn't do pixel-to-pixel comparison at all.

Talking about match levels in Applitools: depending on your website, you can choose the match level — forget the website, even within certain parts of a website you can switch between match levels. Strict is the default: it compares the content along with the font, layout, color, and position of elements, and it's recommended when you want to compare content exactly. Then there's the Content match level, which is similar to Strict except that it compares content but ignores colors: if you have a page where everything is the same but only the colors differ, use Content there. Then there's the Layout match level, which is perfectly applicable to the Yahoo News case I mentioned, where the news changes every day but the layout stays the same. So they have these match levels in place, and you can use them.

OK, when to use what? If you have static content, really any of the tools I've mentioned would be able to handle it. If you're going to have a lot of dynamic content, you can use the visual regression service for WebdriverIO, or Shoov, or BackstopJS — we just saw how they handle dynamic content. And then there's shifting content.
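Before moving on, a hedged sketch of the Applitools side, based on their Selenium JavaScript SDK as I understand it — treat the package and method names as assumptions, since the API surface varies between SDK versions. Match levels can be set globally or overridden per check:

```js
// Illustrative Applitools usage (sketch only; verify against your SDK version).
const { Eyes, Target, MatchLevel } = require('@applitools/eyes-selenium');

async function checkNewsHomepage(driver) {
  const eyes = new Eyes();
  eyes.setApiKey(process.env.APPLITOOLS_API_KEY);

  // Strict (the default) compares content, font, layout, color, and position.
  eyes.setMatchLevel(MatchLevel.Strict);

  await eyes.open(driver, 'News site', 'Homepage visual check');

  // Per-check override: Layout tolerates changing text and images as long as
  // the page structure holds -- the "Yahoo News" case from the talk.
  await eyes.check('News homepage', Target.window().layout());

  await eyes.close();
}
```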
What I mean by shifting content is, for example, a portal where the username displayed differs based on who's logged in; there, I'd say just go ahead with Applitools.

Now, quickly, some facts. There are efforts needed to maintain baseline images: even if you're using tools which provide the facility to store images, you'll have to maintain the baselines as and when your application changes. Respect the test pyramid: you need unit tests, you need service-level and acceptance tests, and at the top level, I'd say, sit the visual tests. A visual test is not a silver bullet — you need those other tests in place as well; it's not a substitute for them, it just complements them. And these are a few best practices we've looked at before: avoid too many element-level tests, because you'd have to maintain the element locators; ensure full-page validation; choose the right candidates for automation; and do not expect overnight success. I've been trying this for five years, and only for the last two years have we had it in the CI pipeline. So do not expect overnight success — if I've painted a rosy picture in this session, that's not the whole truth.

So where do you go from here? If you don't have coding knowledge at all, use configuration-based tools like BackstopJS. You needn't write a single line of code; you just configure the JSON file with the URLs and you'll be able to run your tests —

Hello, Shweta — there are just three minutes left, so you can run through the last couple of slides.

Right, right — and this is in fact the second-last slide. Also, pair up with developers: if you don't have coding knowledge, pair up with them and have the structure ready using any code-based tool. If your developers, or your SDETs, are capable of writing code — which they are — pair up with them and help them understand which test cases are the right ones to add to this automated visual suite. If you are good with coding — if you fall into the second category — look for harmonious integration with your functional suite. What I mean is: if you already have a functional suite in place, help the team by identifying the right visual tool. Reduce the unnecessary functional assertions: if you're adding visual tests, go ahead and start refactoring your suite, deleting those unwanted functional assertions to speed up execution. And of course, be brave and bring the tests into your CI pipeline — that's where your technical knowledge is really going to be challenged and really going to help. Let the whole team benefit from this strategy. These are a few references you can refer to later. And — thank you. I'm ready to take questions.

Thanks a lot, Shweta. I think we have just a minute left, so we can take a couple of questions. The most-liked question, from Abhishek Gupta: can we integrate BackstopJS with a Selenium framework?

Okay — why would you want to integrate it with Selenium? You mean with your functional suite in Selenium? I would say there are better tools for that.
BackstopJS is not really a tool that can be integrated seamlessly with other tools. If you're looking for seamless integration, then I'd say don't go with BackstopJS. First of all, it's a config-based tool. It's not that you cannot run them side by side, but Selenium has nothing to do with BackstopJS — it's a totally different setup. You can go ahead and use both, but there's no real integration between them.

Second question: what is a good number for setting the mismatch threshold? That's a good question — you'll have to identify it for your own project. I've seen cases where a mismatch tolerance of 0.8 worked well for us, but that's when running locally. If you move to more sophisticated tools like Percy, you can't really control much there: the tool's own algorithm identifies that for you. With BackstopJS or the WebdriverIO visual regression service, you'll have to figure it out by trying multiple runs. 0.5 has worked for us, 0.8 has worked for us — but don't go beyond one, or you're going to miss actual issues. That's my experience: don't go beyond one.