Here is the link to a sample framework with some example code that we'll be using in this talk; it's available on my GitHub. How can you find me on the internet? I typically go by the alias automationhacks, mainly because it's easy for people to remember. You can check out my blog at automationhacks.io, or find me on Twitter and LinkedIn. So let's get started. A bit of information about what this talk is about and what it's not about. Whenever someone learns Selenium or Appium or any other tool, the next step they typically encounter is deciding how to build a framework on top, and figuring out what the building blocks around it are. We'll discuss some of them in this talk. We'll also discuss the considerations you should have while building a framework, along with the benefits and trade-offs of each choice. We'll spend some time on useful design patterns that come heavily recommended by many industry experts and are known to provide value. And knowing the good parts of everything is fine, but we should also know the pitfalls of whatever we're working with, so we'll discuss some anti-patterns. What this talk is not about: it's not a discussion of the Selenium API or how it works internally; there are much better talks for that, and Selenium Conf is a great medium to learn about it. It's also not a definitive guide to building frameworks, because framework building depends largely on the organization and the project you're working with, and on your context. These are some guidelines that have worked well for me; I'm sure you have your own set, and we can talk about those after this talk as well.
If you have questions, please post them in the Q&A section and we'll try to address them at the end, or during the conference. Okay. The first step when building a framework is deciding what language you will choose, and there are typically a few available choices. You can choose a statically typed language like Java, Kotlin, C#, or TypeScript, where the compile-time feedback is really nice: if you break something, your compiler immediately flags it. On the other hand, you can choose a dynamically typed language like Python, JavaScript, or Ruby, which is quite flexible and quick to author tests in, since for the most part you don't need to worry about types. Writing your tests is quite easy either way, but here are some additional considerations when choosing a language for your framework. Try to choose something the devs can also use. Why? Because in the end, quality is supposed to be the whole team's responsibility, and if you choose the language the devs are already using to code the app, there will be less friction for them to look at the automation and maybe contribute; at the end of the day, that is very desirable. Choose a language that's easy to learn and easy to find skilled people in, because beyond the initial POC you'll want a language that's easy to onboard more people onto, and if you choose a popular language, you'll find people in the industry quite easily. Try to choose a language that's undergoing active development and has a thriving, supportive community. This basically translates to a good ecosystem of tools and libraries you can use: less work for you, more done by the community.
And if it's a very supportive language community, you might also get a chance to contribute. If you're looking for some suggestions here: you can't really go wrong with Python, Java, or Kotlin, and even JavaScript and Ruby are very good options. Selenium and Appium have language bindings for all of them. Cool. Now that you have decided on a language, the next logical step is to choose a test framework and an assertion library. The most common choices are either an xUnit-style framework like JUnit or TestNG on the JVM side, or pytest or unittest on the Python side, or a behavior-driven tool like Cucumber, Behave, or pytest-bdd. The primary way I think about this is to ask: who is going to be the primary author of these tests? If it's going to be an audience of developers and automation engineers, then choosing an xUnit-style framework might be quite good, and we'll discuss some of its benefits and trade-offs. But if your audience also involves product managers and manual testers, then BDD can be an option. Let's see why. Here are some of the benefits of choosing an xUnit-style framework. First, it's quite easy to author your tests: they're quite flexible and there's typically very little boilerplate, so you can quickly get down to the act of writing your tests. Most of these frameworks come with very good setup and teardown hooks at different levels based on your requirements, so you can choose to do setup or teardown at the suite level, class level, or test level. It's very easy to data-drive your tests, since at the end of the day it's a very minimal layer you're working with. And it's very easy to parallelize these tests as well, using capabilities that the build tool and framework provide out of the box.
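To make the setup/teardown and data-driving points concrete, here is a minimal sketch using Python's built-in unittest, one of the xUnit-style options just mentioned. The session dictionary is an invented stand-in for real application state:

```python
import unittest

class LoginValidationTest(unittest.TestCase):
    def setUp(self):
        # Test-level setup: runs before every test method in this class
        self.session = {"logged_in": True}

    def tearDown(self):
        # Teardown: runs after every test method, even if it failed
        self.session.clear()

    def test_login_validation(self):
        # Data-driving: subTest runs the same body once per input row
        for username, valid in [("alice", True), ("", False)]:
            with self.subTest(username=username):
                self.assertTrue(self.session["logged_in"])
                self.assertEqual(bool(username), valid)
```

Frameworks like JUnit and TestNG offer the same hooks through annotations, plus suite-level setup and built-in parallel execution.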
The main trade-off with choosing an xUnit-style framework is that the tests can turn out to be a bit less readable if not carefully designed. There are many patterns you can follow to work around that, and one of the most common is the Arrange-Act-Assert pattern, which we'll discuss a bit when we come to the design patterns section. The other obvious choice is a BDD tool like Cucumber or Serenity. But before we start: the creator of Cucumber himself says that if you think Cucumber is a testing tool, you should read his blog post, because you're probably wrong. The post is quite funnily titled "The world's most misunderstood collaboration tool," and I'd encourage you to check it out. The main benefit, or the selling point, of choosing a BDD framework is that Gherkin gives you a very readable layer in which you can specify your business acceptance criteria as Given/When/Then steps. It can also provide a layer for people who are not familiar with coding, such as product managers, to contribute to your framework. And the report it prints is generally quite readable because of the English-like steps. However, there are some pitfalls in choosing this kind of framework. Essentially, if you're not going to follow BDD practices and involve all three amigos of the agile cycle in development, where your product managers define the specifications and the developers treat them as an outside-in perspective for authoring tests, then maintaining the BDD layer just for the sake of using Cucumber is going to be an overhead: you have to deal with feature files, write step definitions, and so on, all on top of page objects. The other trade-off with BDD is that it can turn out to be a bit less flexible.
To parallelize tests, you have to depend on the feature level; this has been worked around, but it still takes a few more steps compared to an xUnit framework. And it's quite easy to fall into the trap of writing feature files that are imperative rather than declarative. In an imperative feature file, you use the Gherkin syntax to write out all the steps: open the browser with a URL, enter a username and password, click on submit, and so on. But this is not what the business actually cares about, and reading this across all the feature files you write can be quite tedious. Instead, it should be written in a more declarative fashion, where you go one level higher up the abstraction and write something the business can understand: given that you're on the login page and you enter correct credentials, you should see the welcome page. This is derived from one of the Sauce Labs blogs on best practices for running tests, and I'd encourage you to check it out. Now that you've decided whether to use an xUnit or a BDD framework, the next logical step is to choose a good assertion library. We want to do that because we don't want to reinvent the wheel: there are good assertion libraries that give you a lot of the common methods without you having to write them on your own, and there are good fluent libraries available. What do we specifically mean by fluent? You can chain multiple assertions one after the other, and it generally tends to be a bit more readable than the usual syntax. I've been playing around with Google Truth for some time, and it's an evolution on top of AssertJ, so that's probably something you can evaluate. On the Python side there is assertpy, and the other choices are obviously Hamcrest or even the native JUnit or TestNG assertions.
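To show mechanically what "fluent" means, here is a tiny hand-rolled sketch: each assertion method returns the same checker object, so calls chain. This illustrates the style only; it is not the actual Truth or assertpy API:

```python
class FluentCheck:
    """Minimal fluent assertion helper: every method returns self so calls chain."""

    def __init__(self, actual):
        self.actual = actual

    def is_not_none(self):
        assert self.actual is not None, "expected a value, got None"
        return self

    def contains(self, item):
        assert item in self.actual, f"{self.actual!r} does not contain {item!r}"
        return self

    def has_length(self, n):
        assert len(self.actual) == n, f"expected length {n}, got {len(self.actual)}"
        return self

def assert_that(actual):
    return FluentCheck(actual)

# Reads almost like a sentence:
assert_that(["home", "login", "welcome"]).is_not_none().contains("login").has_length(3)
```

Libraries like Truth and assertpy follow this same shape, with far richer methods and much better failure messages.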
So choose whatever gives you the maximum value for your framework. Awesome. Now that you have chosen a test framework and an assertion library, the most important part, the lifeline of your test automation framework, is actually logging and reporting. Because at the end of the day, if your test fails, you want to be able to quickly figure out why it failed: did it catch a legitimate problem with your application, or is it an automation bug that you have to fix? Good logging and reporting enables that. While making your choice, you should consider something that makes it very easy to see exceptions and stack traces, to quickly spot the failed assertions, and to see what the inputs to the test method were and what the logs around it said. Your reporting tool should give you good pass, fail, and skip metrics. Most important, you should have historical results comparison, because if your test fails, you need the historical perspective of whether it started failing just now or has been failing for a long time. If a test is consistently failing, that's probably a lesser concern, because you know you have something to fix. But if a test is failing in a flaky manner, then you have to really dig deep and identify the root cause behind that flakiness. Some of the available options: you can use the native reports provided by Gradle or TestNG, or even Cucumber reports, which integrate very well with Jenkins, where you can see that historical perspective. But if you want something with a bit more power and features on top, I've been trying out Report Portal for some time; it integrates nicely with TestNG or Cucumber, takes care of most of these reporting requirements, and even integrates very well with logging libraries.
Other choices are obviously Allure and Extent Reports, and I'm sure there are even more reporting tools you might be using in your framework. Ultimately, choose something that gives you good feedback about why your tests are failing. Awesome. With logging and reporting done, the next thing you should consider is continuous integration. CI is very important to build into your framework right from the start, because typically when someone starts building a framework, they think, "I can write a set of sequential tests and make them stable in my local setup before trying CI," and the moment you push to CI, you start seeing failures. You want to avoid that trap and make sure that right from the start, from the very first test you write with the framework, it runs in CI, so it delivers value to your project and your developers in the form of quick feedback about your application builds and whatever you're testing. Some choices: Jenkins, which is a very popular CI tool, or whatever CI your project or organization already uses. Other options include Azure Pipelines, GitLab, Travis, and CircleCI. A couple of other key practices for CI: make sure you have a good smoke suite defined that identifies the critical flows in your app, and make sure those are stable before you dig deep and run the big regression suite. You can set up your regression suites to run either nightly or bi-weekly; it really depends on how quickly and how frequently the developers and project managers want that feedback. Another good tip is to make sure your test suites are always healthy, so you really need to spend time with your test framework and keep it very maintainable.
So if something goes wrong and it's a problem in the automation framework (which is going to happen, because even the application it automates keeps changing), you need to make sure that when a test fails, it always fails for legitimate reasons. This means your developers and project managers will have good confidence in the framework, instead of discarding it as a failed project because everyone has started ignoring all the yellow and red builds. Okay, so I think we've covered some of the basic pieces that make up a framework. In terms of Selenium, the next logical choice is to decide where you're going to run these cases, and you have a couple of options for setting up a Selenium Grid: either in-house or in the cloud. If it's an in-house setup, there are some obvious benefits. The cost is less, because it's something you maintain and it runs on your own servers. You also have more control and can customize for all the unique needs of your organization. The common trade-off is that there's a good amount of maintenance overhead: it ultimately needs a dedicated team of engineers if you're going to scale it across your projects and multiple teams. And there can obviously be a limit to the number of VMs or containers you can really have. The cloud solution works around some of these, so let's discuss that. The main benefits of the cloud solution: it's going to be highly available, and the management is handled by a dedicated team of engineers, so if something goes wrong, you always have a set of people to offer support and to debug and fix the problem. A cloud infra promises a large number of platform and browser combinations.
Honestly, it can be more than what you could maintain in an in-house setup, and the automatic updates and cleanup of these machines and containers are taken care of for you. There's also a good amount of monitoring dashboards, so you can very quickly see which tests are running, watch the video, and even get a good amount of initial reporting there. The main trade-off with the cloud solution is that at the end of the day it can be pricey for certain organizations, which is probably one of the reasons smaller organizations don't go with it. There can be concerns around network latency if the data center is in a different geographic location, but this is a common enough problem that most cloud solutions have worked around it. There can also be concerns around data security, so your security team should be clear on how your data is going to be managed and torn down. Some of the popular choices are Sauce Labs, BrowserStack, Applitools Ultrafast Grid, Perfecto, and HeadSpin, among many others, and some of them are even gracious enough to support this conference so that you and I can interact and share these learnings. Awesome. So now I think we've discussed most of the basic considerations you should have before starting a new framework. Let's discuss some of the patterns and practices that have been heavily discussed by industry experts and proven to deliver value. I want to first discuss atomic tests. This is a concept that has been written about a lot, and what it essentially means is that you should write tests that are very targeted at the specific piece of functionality you're testing. This is in contrast to the long, winding end-to-end tests you might be writing, and we'll discuss why atomic tests make more sense.
The general, aggressive recommendation is to write no more than two assertions per test, and initially this can be quite challenging, because most of us start out writing longer tests. You should also ensure that the outcome of one test cannot cause another test to fail. This is a hint, because there are features in TestNG and other frameworks where you can chain one test method to depend on the outcome of another. The problem is that this becomes a maintenance headache: if the initial test fails, a lot of tests get skipped, and the overall debugging is that much tougher. However, if you write atomic tests that are sealed within their own boundary, you know that if a test fails, it fails for only one or two possible reasons, and you have a very easy debugging path. Another heuristic around this is the TRIMS pattern that Richard Bradshaw from Ministry of Testing has suggested; you can check out his blog, which talks about other qualities your tests should have to make them more maintainable and better in the long run. Now, any discussion of sensible patterns in Selenium should include explicit waits. However, we're not going to discuss them too much, because they've been talked about a lot and they're one of the most accepted good practices. In a nutshell: avoid writing any hard-coded sleeps or implicit waits in your framework, and if you have any, start replacing them with an explicit wait on a given expected condition. The next important pattern is to use an API, a database, or some other non-UI method for setup and teardown. And why is this important?
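Going back to explicit waits for a moment: under the hood, an explicit wait is just a poll-until-condition-or-timeout loop. Here is a stdlib sketch of the idea behind WebDriverWait and expected conditions; the names and the simulated "element" are illustrative, not the Selenium API:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mirrors what an explicit wait does: no fixed sleep, just repeated
    checks with a hard upper bound, failing loudly on timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout} seconds")
        time.sleep(poll_interval)

# Simulated "element becomes visible" condition
state = {"visible": False}
state["visible"] = True  # in a real page this would happen asynchronously
found = wait_until(lambda: state["visible"], timeout=2.0, poll_interval=0.1)
```

The key contrast with a hard-coded sleep: the wait returns as soon as the condition holds, and fails deterministically when it never does. Now, back to why non-UI setup matters.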
Because even though Selenium is capable of handling a lot of your setup, doing it through the UI involves more steps, and the UI is inherently slow and can be flaky. If you remove this application-state setup from the UI and replace it with one of these methods, your tests will ultimately be more reliable. The most common method is to integrate an API client into your UI tests and use it to create any test artifacts you need. You can also inject JavaScript, modify cookies, or modify the database directly. These are just some of the approaches that have been written about a lot, and this honestly really increases the efficiency and reliability of your automation framework. I think we mentioned the Arrange-Act-Assert pattern earlier; it's been written about a lot on the C2 wiki, and I have a link in the links section you can refer to later. It essentially divides your test into four main blocks: you set up state, you perform an action on the application (essentially just the one action you want to test), you write a single assert on top, and then you clean up. If you're using an xUnit-style framework, you can even replace the setup and cleanup parts with annotated methods that run automatically. However, I like to divide tests into these distinct blocks so that anyone reading the test can see very explicitly what state the test is dealing with. The next useful practice is to make sure you're able to run your tests in parallel right from the start. This can turn out to be the difference between a successful automation project and one that gets disregarded. I've been in situations where the test suite you start authoring takes a long time to run, and the longer it runs, the more difficult it is for devs to actually wait that long for feedback.
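One concrete way to keep tests parallel-ready is to give every test its own driver instance and no shared state. A stub sketch, where StubDriver stands in for a real WebDriver so this runs without a browser; in a real suite you would create the driver in a per-test fixture or setup method:

```python
import contextlib
import itertools

_ids = itertools.count(1)

class StubDriver:
    """Stands in for a real WebDriver so the sketch runs without a browser."""

    def __init__(self):
        self.id = next(_ids)
        self.quit_called = False

    def quit(self):
        self.quit_called = True

@contextlib.contextmanager
def fresh_driver():
    driver = StubDriver()  # a brand-new instance for every test...
    try:
        yield driver
    finally:
        driver.quit()      # ...always cleaned up, even if the test fails

# Two "tests": each gets its own driver, so they can run in any order or thread
with fresh_driver() as d1:
    first_id = d1.id
with fresh_driver() as d2:
    second_id = d2.id
```

Because nothing is shared between the two blocks, a scheduler is free to run them concurrently, which is exactly the property atomic tests need.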
So don't fall into the trap of only ever writing sequential tests; make sure your tests can run in parallel right from the beginning. Atomic tests tie nicely into this principle, because each test is sealed within its own boundary, so you can run them in any order and on as many parallel threads as your infra supports. Selenium Grid obviously helps a lot here, and with Selenium 4 we have the new Grid, which is in alpha right now and has a good amount of documentation, so you can check out how to move to it. A general bit of advice: create a new WebDriver instance for every test and do not share WebDriver instances, because the moment you do, you're going to start seeing weird behavior. Avoid static keywords in your framework and minimize the amount of shared state between tests; shared state is the reason most tests end up heavily coupled and not parallelizable from the start, so take care of that. Now, any discussion of Selenium patterns is incomplete without giving the page object the respect it deserves. This pattern has been talked about a lot, so I'm not going to deep-dive into how it's structured, but I want to add one additional insight: when you think of a page object, think about the components within the page as well. Try to break your page down into reusable components, which can then be composed into a larger page object. That way you also follow the don't-repeat-yourself principle: you write only the minimal components and can then reuse them very easily. Also, good naming is at the heart of any good software development project.
So make sure your page objects are well named, with names that clarify the intent and the specific component they're dealing with. Cool. The next pattern I've come across in UI automation, and automation in general, is the fluent pattern. Here you have the simple TodoMVC app that many of you might already be familiar with, where you can add a couple of todos, see how many todos are left, and switch between completed todos and those still active. This is how you would write a test for it without the fluent pattern: you create a page object for the home page and then call addTodo and the other methods one at a time. But with a few small changes to the page object, you can make it more readable, so that it reads more like an English sentence. This is how it looks once you've refactored to the fluent pattern: you create the page object once and then chain all the methods one after the other. This can also be extended so that whenever there is a next logical action to be performed, it's accessible right in your editor's autocomplete. It creates a very friendly API for people to use to write long, chained tests. So how is this enabled? You need to return the current page object (or whichever page object comes next) from each method to make them fluent. In the addTodo function, for example, at the end of the function I return "this," which is the current page object, enabling you to chain the call as many times as you want without specifying the page object explicitly. You can also use this style in assertions: with the Google Truth library I mentioned earlier, you can write a simple assertion over two values in a way that turns out to be a bit more readable and understandable.
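Translating the TodoMVC example into a sketch: the trick is simply that each action method ends with `return self`, the Python analogue of the Java `return this` just described. The TodoPage here is a stub with no real browser behind it:

```python
class TodoPage:
    """Stub page object for the TodoMVC example; no real browser behind it."""

    def __init__(self):
        self.todos = []

    def add_todo(self, text):
        # In a real page object this would type into the input and press Enter.
        self.todos.append({"text": text, "done": False})
        return self  # returning the current page object enables chaining

    def complete_todo(self, text):
        for todo in self.todos:
            if todo["text"] == text:
                todo["done"] = True
        return self

    def items_left(self):
        # A query method ends the chain and returns a plain value to assert on.
        return sum(1 for todo in self.todos if not todo["done"])

# The chained call reads close to English:
left = TodoPage().add_todo("buy milk").add_todo("write talk").complete_todo("buy milk").items_left()
```

If an action navigates to a different page, the method would return that next page object instead, which is what makes the "next logical action" show up in autocomplete.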
However, this can also be a personal preference, and the fluent pattern has been talked about in earlier Selenium conferences; I have links to those in the links section, which you can refer to. Okay, so the next popular pattern for Selenium and UI frameworks is Page Factory, which comes built into the Java bindings of Selenium and Appium. What it means is that you can annotate your web elements with @FindBy, and then whenever you use one of those elements, a lookup is performed. This is quite friendly and cleans up your page objects a lot, so I try to use it as much as possible. The key thing is that in the constructor of the page object, you initialize all the elements once. Also, if you know there are static elements on your site, you can use @CacheLookup, which performs the lookup once and keeps the result, so a fresh lookup isn't performed every time you interact with the element. That's a neat performance boost on top of Page Factory. Okay, the next pattern I want to discuss briefly is the Screenplay pattern. Its selling point is that it follows all the SOLID principles of software development and takes a user-centric view of writing automated acceptance tests. Discussing its structure in detail is out of scope for this talk, but you can check out the Serenity BDD page, which has a lot of good documentation on how to build it out, and there's a good course listed there as well. The next pattern I want to talk about is the builder pattern. While this isn't something Selenium directly uses as such, it's a very nice test automation pattern in general.
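As a sketch of the builder pattern, here is a hypothetical User test-data object; the names and fields are invented for illustration:

```python
class User:
    """Hypothetical test-data object with several constructor parameters."""

    def __init__(self, name, email, role, active):
        self.name, self.email, self.role, self.active = name, email, role, active

class UserBuilder:
    """Each with_* method mutates and returns the builder, so calls chain;
    build() produces the finished object. Defaults keep tests short."""

    def __init__(self):
        self._name = "test-user"
        self._email = "test-user@example.com"
        self._role = "member"
        self._active = True

    def with_name(self, name):
        self._name = name
        return self

    def with_role(self, role):
        self._role = role
        return self

    def inactive(self):
        self._active = False
        return self

    def build(self):
        return User(self._name, self._email, self._role, self._active)

# A test spells out only the fields it actually cares about:
admin = UserBuilder().with_name("alice").with_role("admin").build()
```

Compare this with a constructor call that forces every test to pass all four arguments, most of which it doesn't care about.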
So if you have to construct a complex data object with multiple parameters in its constructor, you can instead create a builder, which keeps returning the same builder object and gives you the flexibility of customizing different sets of data. Where I typically use this: if my API needs different sets of inputs to create different sorts of test data, I'll have a builder that I can reuse in helper classes to create exactly the object I need. With that, we've had a good discussion of the available patterns, but we should also know the common pitfalls we can fall into, so I want to spend some time here. The first anti-pattern is putting assertions in page objects, and I've been guilty of this myself. Initially, when I started with Selenium, my viewpoint was that putting an assertion in the page object made sense, because the page object is the representation of the UI page, right? But what I've come to realize, and it's also been said by other industry experts, is that your assertions should actually live in your tests. What that allows you is cleaner page objects that simply provide an interface for driving your browser, so you can use the same method to test both positive and negative cases, and overall it leads to a good separation of concerns. One exception could be a single assert, probably in the constructor, to verify that your page and its elements loaded properly; but apart from that, keeping assertions in the tests keeps things nicely organized. Okay, the next anti-pattern is putting WebDriver methods directly in tests.
Simon Stewart, the creator of WebDriver himself, has this quote: if you are putting WebDriver API calls in your test methods, you're probably doing it wrong. What that means is that there shouldn't be any code dealing with WebDriver in your tests; put it in the page objects instead. Another common practice is to make sure all your page objects inherit from, or compose, a base page that has good wrappers over the Selenium methods. What that affords you is less duplication: if there's a good implementation for finding elements or getting certain values, you can reuse it easily in all your page objects, and if something changes in a later version of Selenium, you don't have to make that change in every page object, just in the base page. This also ties quite nicely into the don't-repeat-yourself principle. The next common anti-pattern in many automation frameworks and projects is writing long end-to-end tests with multiple actions and assertions. Why is this a problem? It's initially quite easy to just write another step and maybe add another assertion to an existing end-to-end test. But the main problem is that it turns into a debugging nightmare: the more steps you have in your test, the more work it takes to figure out why an earlier step failed, and the tests are not very deterministic. Also, if the test fails at an early stage, you don't get feedback for the remaining assertions. Breaking down your long end-to-end tests into smaller atomic tests will make your framework much more maintainable. Okay. The other common misconception with UI automation is writing too much of it.
Most of the time we tend to treat Selenium as a silver bullet: once you have that Selenium hammer in your hand, you try to hit every nail with it. Selenium is a good tool and a good library for driving your browsers, but that doesn't mean every test has to live there. In these cases, less is more. Take a minimalistic stance on writing UI tests, and try not to test everything from the app or the browser, even though that might be possible, because it means you end up with a thousand or more cases, depending on the size of your project, and that just turns out to be very difficult to maintain in the long run. A good use of Selenium or UI tests is to test the visual aspects of your application; that should be the primary use case, making sure your application looks fine and you can get through the major functionality without any breakages. Prefer writing more API tests, and push tests down to the integration and unit levels where applicable. In the long run, this ensures you respect the test automation pyramid and have tests at the correct layer. And probably the last anti-pattern, which is not really an anti-pattern specific to Selenium but more about the thought process when you start writing UI automation, is the idea that you should write and automate every possible test in your suite. What that typically leads to is that, in chasing 100% automation, you start automating tests that are not going to have value.
So exploratory testing and functional automation can go hand in hand. You can use UI automation to augment your testing: take the repeatable, boring tasks off your plate with automation, and use the time you get back to do more exploratory testing, put a pair of human eyes on the application, and figure out which areas you haven't even explored yet, right? A general way I think about UI tests is that they are good change detectors. You can run them any number of times and always get feedback on whether your application has changed, but they won't necessarily give you confidence that you're covering everything, right? So on that slightly controversial note, I'm coming to the end of the talk. Where can you go from here? Here are links to the blogs and videos that I've referred to and have been reading or following over the years. Take a look at them, because they may give you more insight than I can fit into a 40-minute talk. What I can suggest is to learn constantly; there is a very good testing and automation community out there to learn from. I would encourage you to experiment, try some of these approaches in your frameworks, and over time develop your own heuristics about what works and what doesn't. And once you have figured that out, please come and share it in a talk at SeleniumConf, or write blog posts so we can all learn from each other. There are also blog posts on BDD with Cucumber, on running tests in parallel, and on some of the patterns I've talked about. Also, the newly revamped Selenium docs are really nice; they have very good guidelines on writing effective automation. So with that, I would really like to thank all of you for taking the time to attend this talk. Thank you, Gaurav, for sharing your experience. Sure.
So I think we have around seven minutes left. Let's see if we can answer the most common questions, and if not, discuss more in the VIP booth. Sure. So the first question: can we integrate any SEO tool with Selenium for an SEO audit? This was asked by Vibhav Shukla. Okay, thanks for the question, Vibhav. I don't really have insight into whether that integration would make sense. But at the end of the day, Selenium is a library for driving browsers, right? So if that is something the audit entails, you can probably write something on top; the API is quite flexible for that. I'm sorry, but I don't have hands-on experience with this. Maybe try asking other people at SeleniumConf; they might be able to give more insight if someone has actually dealt with it. Okay. The next question: how do you customize the reporting system to give complete clarity between passed and failed test cases? This was asked by Karthik Rai. Okay, thanks, Karthik, for the question. There are two sides to that. If you are using a reporting framework that someone else has developed, you can explore its features. For instance, Report Portal integrates with your TestNG and Cucumber tests, and also integrates very well with logging frameworks like Logback or SLF4J on the Java side; even if you are coding in a language other than Java, it can integrate with that too. What it essentially does is listen to your tests as they run and push the results into a database, and it has a nice UI on top where you can get a good visualization of how many tests ran, how many failed, and also the historical trends. So I would recommend trying that out; it's been working quite well for me. And if you know of other tools, do suggest them. Thank you, Gaurav.
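As a rough illustration of that pass/fail clarity point, here's a tiny, hypothetical result collector. Real tools like Report Portal do far more (framework listeners, a database, a UI, history), but the underlying idea is just grouping outcomes so the state of a run is obvious at a glance.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, minimal run report: real reporting tools listen to the
// test framework and persist results, but the grouping idea is the same.
class RunReport {
    private final List<String> passed = new ArrayList<>();
    private final List<String> failed = new ArrayList<>();

    // Record one finished test by name and outcome.
    void record(String testName, boolean ok) {
        (ok ? passed : failed).add(testName);
    }

    // One line that makes the pass/fail split unambiguous,
    // naming the failed cases so they are easy to chase down.
    String summary() {
        return passed.size() + " passed, " + failed.size() + " failed"
                + (failed.isEmpty() ? "" : " -> " + failed);
    }
}
```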
So the next question: can these atomic tests be used in a sanity or smoke suite, to verify whenever there is a new build? This was asked by Adarsh Kumar. Okay, thanks for the question, Adarsh. Definitely yes. Atomic tests are not something unique; it's a way of writing all your tests, right? What it essentially means is that you figure out smaller chunks of test cases that you can run on every smoke or sanity run. The only difference is that instead of long end-to-end tests, these would be much smaller tests, which you can run in parallel, so they run much faster than your end-to-end tests and will probably be less flaky. Thanks, Gaurav. There is one more question: can you share best practices for measuring network latency? This was asked by Vegeta. Okay. There are a couple of proxy tools that you can integrate with your Selenium automation, which should give you good metrics on how long your pages are actually taking to load. You can check out BrowserMob Proxy, and I'm sure other tools have come up since, but integrating something like that into your Selenium automation might help you get the metrics you want. Okay, there is one more question: why do we limit our assert commands to two? This was asked by UV Tran. Okay. So that is a very aggressive recommendation; I wouldn't say it's a line in the sand that you need to follow, but the general idea is to keep the number of assertions low. I'll give you an example. When asserting on an application, you might think you can write a single verify method that asserts many things on a page, right? But what that means is that if even one of them fails, that verify method fails, and the rest of the asserts don't even get executed.
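To make that single-verify-method problem concrete, here's a hedged sketch. `CartPage` is a made-up value holder standing in for data a test would read off a page; the point is that several small, named checks report independently, whereas one big `verifyCart()` stops at its first failing assert and hides the result of every later one.

```java
// Made-up holder for values a test would read off a page.
class CartPage {
    final int itemCount;
    final String total;

    CartPage(int itemCount, String total) {
        this.itemCount = itemCount;
        this.total = total;
    }
}

// Each check is small and named, so a failure points at one behavior.
// Contrast with a single verifyCart() asserting all of this at once:
// its first failing assert would mask the outcome of the others.
class CartChecks {
    static boolean hasItemCount(CartPage page, int expected) {
        return page.itemCount == expected;
    }

    static boolean hasTotal(CartPage page, String expected) {
        return expected.equals(page.total);
    }
}
```

Each check then backs one small, focused test rather than a line inside one long verification.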
So the general idea is to break your tests down to a much more granular level. If you have an end-to-end suite of, let's say, ten cases, you can probably break it down into maybe a hundred or a hundred and twenty cases that run in a very short amount of time, because they are very focused. And if you go to the links section in the slides, there are a good number of talks by Nikolaj Advolodkin from Sauce Labs, and he explains a lot about what these atomic tests are and how you can run them more quickly, so I would encourage you to check those out. But don't treat the two-assertion limit as a golden rule; use your best judgment about what makes sense. The general idea is just to limit assertions as much as possible. Great. There's one more question: is there a difference between the arrange-act-assert and given-when-then patterns? Sure. They're essentially different names for the same thing. You can decide whether given-when-then makes more sense for you, and there are frameworks like REST Assured that even give you constructs to write your tests that way. Arrange-act-assert was proposed a long time ago, and it's just the one I personally resonate with more, but you can choose to structure your tests with given-when-then instead. It's a matter of dividing the blocks, or the responsibilities, in a way you and your team can agree on. Putting it into a coding guideline for your automation project probably makes sense, so it's consistent for everyone. Thanks, Gaurav. There's one more question, by Anil: why do we avoid using static variables? Can we not have the WebDriver as a static? So the main problem with a static is that it's essentially shared across multiple classes; anything that's static is not really tied to an object.
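Concretely, the usual fix for a shared static driver, sketched here under the assumption that each test thread should own its own driver instance: hold it in a `ThreadLocal` instead of a plain static field. `FakeDriver` is a stand-in so the sketch stays self-contained; with Selenium you would store a real `WebDriver` the same way.

```java
// Stand-in for a real WebDriver so the sketch is self-contained.
class FakeDriver {
    final String owner = Thread.currentThread().getName();
}

// One driver per thread: a plain static field would be shared (and
// mutated) across all test threads, while ThreadLocal lazily gives each
// thread its own isolated instance, which is what makes parallel runs safe.
class DriverFactory {
    private static final ThreadLocal<FakeDriver> DRIVER =
            ThreadLocal.withInitial(FakeDriver::new);

    static FakeDriver get() {
        return DRIVER.get();
    }

    // Call in teardown so drivers don't leak between pooled threads.
    static void quit() {
        DRIVER.remove();
    }
}
```

Tests then call `DriverFactory.get()` wherever they previously read the static field, and each parallel thread quietly gets its own driver.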
And if you are running your tests sequentially, it's going to work properly; there is no problem. But if you try to run your tests in parallel, then multiple threads start interacting with the same thing, and if you're mutating its state, you will get very flaky tests, in the sense that some of the threads that executed first will pass, but the remaining ones, which depend on a different state, might start failing. So this is just an anti-pattern that kills any of your parallelization dreams. And thank you very much, Gaurav, for sharing your experience with us today. Thanks a lot, Hardik. Yeah, thanks a lot for moderating this. And I really would like to thank the audience. I hope you have a good SeleniumConf ahead.