Yeah, welcome again to another exciting session. We have Pip with us, telling us about the road to reliable automated tests, and this is a burning topic, at least for me, when we all have uncertain, unreliable tests in the suite and flakiness everywhere. Pip is a Head of QA, and she has been actively contributing to open source libraries on GitHub, writing on her blog, and publishing a comic series as well. You can follow her; I can paste the link in chat. So, over to you.

Thank you so much for the introduction. Hello everyone, and good morning, if it's morning where you are, as it is for me right now. Thank you for joining. I'd really be interested to see where you're all coming from, so if you can just type something in the chat, just for me to see that you're awake. I'm here in Romania. It's kind of a cloudy day, so I hope the thunderstorm will not have any impact on our talk. Anyway, I will start sharing my screen because I have a very nice picture to show you. Let's see. So here we are. I see people have already started writing. Oh wow, so many interesting locations. Thank you for joining from everywhere, basically.

Let me get into the reason we're actually here: reliable automation and what I have to say about it. First of all, if you don't know me yet, I've been doing automation for the past ten years now. I've done mostly automation with Selenium throughout my career, but I'm also involved in the backend part, and some mobile testing and so on; the biggest chunk of my work is focused on Selenium. Today I'm going to talk about automated testing in general, but because we're at Selenium Conf, I will refer to some scenarios we encounter when we're doing Selenium testing. I want to give you my tips and tricks on how to achieve reliable automation: automation that you can actually count on, automated tests that you can run and be certain are properly validating the feature you're testing.

I want to start by discussing a little what I mean by reliable tests. This is not a definition per se, because we don't really have one, but our instinct tells us that a reliable test is one we can count on, so that every time we run it, if the software we're testing hasn't changed since the previous run, the result of the test will be the same. If there is no bug in the system and no new deployments of the software were done, our test is supposed to pass. However, if we see random failures when we run the same test and nothing changed in the infrastructure, then we're talking about a test that needs to be addressed, a test that is unreliable and that we cannot count on to validate our product. Another cause of unreliability comes from not writing the code properly. I will show you some examples of what I mean by this, but one of the most frequent situations where our tests are not reliable is when they report false positives or false negatives. Maybe you don't even realize it: you think your test is going to pick up on a bug, but in fact it will never pick up any sort of bug because of the way it was written. We will see some examples there. But what is our end goal when we're doing automation?
So what we want is not to create just an automated test. We don't just want to write another test to add to our repos. We want to write tests that are reliable, that we know for a fact will not require any more updates in the future. We want tests that, once added to, let's say, the CI/CD pipelines, can run there every single day and continuously validate the software we're working on. In order to create such a reliable automated test, we need to do it properly from the start. Whenever we start working on our automation, we need to make sure that by the time we finish that piece of work, it is the best version of it we can come up with. We shouldn't do something like: okay, I'm just going to write a test right now, I know it fails here and there, but I'll tweak it sometime in the future. Because it's never going to come up again. We've probably all seen, if we're working in sprints, that once we consider a task done, it's really difficult to come back and modify the tests we created as part of that task, because there's never any time. There are always new priorities coming in, always new features we are working on. So the tests we worked on initially will probably not be revised for a very long time, unless we have a really good reason to address them.

Having said that, when we start working on a test, let's make sure we're creating the best version of it. Let's set enough time aside to write it in a way that will actually help us validate the software we're testing. Because in the end, why are we creating automation? We are developing a new feature and we want to test it in a certain way, so we create automation to test that particular feature. Once the feature goes into production, we don't want to have to manually revisit it every time we do a deployment that might impact it; we don't want to test it manually every single release. Having an automated test that covers this for us helps very much: we don't even need to remember the feature exists, we just have our automation doing the checks for us, as long as the automation, as I said, is reliable. If it's not, you will need to invest time and look, first of all, into the feature you're testing to make sure it's working properly, and secondly into the test to see why it is failing. Maybe it was failing randomly and just needed a bit of tweaking; but if you didn't tweak it from the start, as I said, it will be difficult to tweak it somewhere along the way.

So, in order to create the automation for the feature you're working on: usually, when a new piece of software is being created, there is an attached JIRA user story or JIRA epic, a ticket to which the developers assign their tasks. Similarly to the developers, we, the QAs, can also reserve automation time by creating our own dedicated tickets. And we should only consider the entire feature done when the automation tickets are fully done, closed and validated, when we are happy that the tests have been properly implemented.

I'm sorry to interrupt you there.
Are you changing your slide? Because we can... No, not yet. But thank you for checking. Yeah, not yet. Please continue.

Yes. So let's say, okay, we have now created the automation; we now have the reliable test we've dreamed of. Does that mean we will never have to update it ever again? Well, not really. Sometimes we will still need to do maintenance on that test, because, as we know, features change over time. Let's say we've implemented a feature today and validated it. For six months we run the tests, but after six months something changes in the requirements. Of course, that means there will again be an attached JIRA ticket with the new requirements and the dedicated development tasks assigned to the developers. Similarly, because the software is going to change and our tests will now fail, we need to assign a QA task to the same ticket to update our tests, so that the tests reflect the fact that the software has changed. Whenever a change is made to the software, we automatically need to add a QA task for it.

I know that sometimes people say: okay, we created a test, but it's failing because the software has changed, so the test is flaky. That's not really correct. If the software changes, it's obvious that the test is supposed to fail, and it's obvious that the test also needs to change to reflect the impact the new implementation has on it. So, to update the tests, QA tasks should be added to the same ticket where the development work is tracked. Big changes to the code should in general be treated as new functionality. If it's something like only a label change, that's just a minor tweak; but if something in the logic of the feature you were testing is changing, then this is definitely new functionality and it's obvious you need a QA task assigned to it. So don't be shy about creating these tasks whenever you see that the features have changed and your tests need updating.

I say update the tests; some people say you need to fix the tests. From my perspective, it's not really right to say you need to fix them, because the tests are not wrong. When you created the tests, they were testing a certain feature; when the feature changed, your tests failed because they detected that something changed. You don't need to fix them, you just need to update them together with the code of the product that was updated. I hope that makes sense.

Now let's actually start creating some automation, and where are we going to start? The first thing to consider is following best practices when we create our automation, especially if we are working with programming languages; if you have tools that automatically generate the code for you based on some drag and drop in the UI, that's a different story. But when you are writing the automation code yourself, you need to write it according to coding best practices. Otherwise, whenever a feature changes for which you have already created a test, it can be very, very difficult to actually update the test. I can give you an example of something that happened in my case: we were testing a feature which was quite large.
Our tests, created well before I joined the project, were basically copy, paste, copy, paste, copy, paste. There were maybe 20 tests with the same pattern, and only a few things differed: certain values that were passed in, an additional step here or there, and so on. So there was little difference between the actual steps taken by these 20 tests. When it came time to update the code, I spent probably three days trying to update them, and I realized it was way too complicated: they were really difficult to understand and to read. You didn't really know where you had to make the changes, because there were a gazillion steps, and it was hard to pinpoint exactly where in the code the changes needed to go. So after three days I decided: you know what, I'm going to rewrite this from scratch with a different approach and better coding practices. And it actually took less time to create the new tests than to update the old ones. This is a clear example where copy-paste was used when a basic thing like extracting the repeating code into methods could have been used instead, which would have made it much easier to update all the chunks of the tests we had (I'll sketch what I mean right after this).

That is just an example with methods, but think also about, for example, multiple ifs you might have in your code, an if inside an if inside an if, and so on. When you have such complicated code, that's not best practice and it's not really readable. So consider coding best practices whenever you're writing your automation; that's the first thing. And always try to keep the code as simple as possible, not just for readability, which is of course one of the biggest parts, but also, as I said, so that you keep the option to change the code easily if you have to. Don't write a ton of code for a simple task. I know we are very passionate about writing code as testers, and when we need to implement something for testing, we might come up with something very complex, because we love to write a lot of code. But in most cases, less code is better. If you find you can reuse existing code, either from the same project or from an external library, instead of writing a whole complicated piece of logic yourself, just use that existing code rather than creating a monster. Simple tasks require just a little bit of code. And when you have fewer, smaller units of code, it's much easier to pinpoint failures when they occur. Keep in mind that if you have a failure whose cause is not obvious, you might need to do debugging, and if the code is very complex, you have no idea which branch of that complicated if structure you're on, or what values your variables currently hold after being changed in so many places above the current line. So less is better, because it's easier to read, easier to understand, and easier to debug.
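To make the copy-paste point concrete, here is a minimal sketch of what I mean in Java with Selenium; the page, the element IDs and the parameters are made up for illustration, not the real project code. The values that used to differ between the twenty copy-pasted tests become method parameters, so a change to the flow is made in exactly one place:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class CheckoutSteps {

    private final WebDriver driver;

    public CheckoutSteps(WebDriver driver) {
        this.driver = driver;
    }

    // One parameterized method replaces twenty copy-pasted variations:
    // the values that differed between the old tests become arguments.
    public void submitOrder(String couponCode, int quantity) {
        driver.findElement(By.id("quantity")).clear();
        driver.findElement(By.id("quantity")).sendKeys(String.valueOf(quantity));
        if (couponCode != null) {
            driver.findElement(By.id("coupon")).sendKeys(couponCode);
        }
        driver.findElement(By.id("submitOrder")).click();
    }
}
```

Each test then becomes a short call like checkoutSteps.submitOrder("SPRING10", 2), and updating the flow no longer means touching twenty almost-identical blocks of steps.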
It's very important to be able to look at a test and have it easily tell you what it's doing, so you can quickly figure out: okay, this test performs this particular scenario, that test performs that scenario, and so on. Regarding this, I have a few tips. Always name your test classes to reflect what the tests inside them will test, so the class name kind of reflects the scenarios found in that class. And each test method, or function depending on your language, should also reflect the scenario it is testing. The naming should be as clear as possible; sometimes that means it's a bit longer, but make sure that from the method name you can understand what the test is going to do, so that when you want to find a particular test you know exactly where to look for it.

When you're writing your automation code, your test should focus only on the requirements. We have seen a lot of reliability issues with our tests because of the environments we run them on. Sometimes environments are slow, sometimes they have all kinds of sync issues between the different services running there, but your test shouldn't really care about that. Your test should focus on what it's supposed to test, namely the feature, and the feature is reflected in the requirements. Your test should only have the steps that reflect the requirements or the scenario you're trying to cover. If you have environment issues, or a lack of testability, for example no IDs on your page, which makes the selectors really difficult to identify, address that separately by talking to the people who are dealing with the environment or who are implementing the code, so that they add testability. You shouldn't have dedicated lines of code that say: if I'm on this environment, perform these steps, or on that environment use a certain timeout because each request takes three minutes there. That's a bad approach. Focus on the test, and address anything outside its scope, like environments or testability, separately, so that your test can run smoothly on every environment without knowing anything about it. Your tests should be environment agnostic; they should not know anything about infrastructure or any other things that are not related to the actual requirements.

Of course, coming back a bit to Selenium: when we do have environment issues where the environment is slow, well, surprise if you haven't heard me talk about this before, I'm going to recommend you use WebDriverWaits, or methods based on WebDriverWaits, that help you wait a little before considering an action done or before starting an action, so that you add more reliability to your tests. That's about the only thing I recommend adding to the test that is, let's say, compensating for an issue in the environment: making sure you wait the proper amount of time before interacting with a page, so that you don't have tests that fail simply because you should have waited one more second before clicking a certain button.
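As a small illustration of the kind of wait I mean, here is a sketch assuming Selenium 4's Duration-based WebDriverWait; the locator and the timeout are just examples. Instead of clicking right away, you wait until the element is actually clickable:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitBeforeClickExample {

    // Wait until the button is actually clickable instead of clicking immediately,
    // so a slightly slow environment doesn't fail the test.
    public static void clickWhenReady(WebDriver driver, By locator) {
        new WebDriverWait(driver, Duration.ofSeconds(30))
                .until(ExpectedConditions.elementToBeClickable(locator))
                .click();
    }
}
```

The timeout is an upper bound, not a fixed sleep: if the button is ready after one second, the test moves on immediately.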
But if you see that you have WebDriverWaits whose waiting time is three minutes, five minutes, eight minutes for a single request or a single action, then that's not a good approach. You shouldn't need a timeout of eight minutes for a button click to generate a request to the backend service or for a page to load; that's way too much. Then you clearly have an issue with the environment, and that's where you have to talk to the people who are actually managing the environment in order to fix it. And again, if you haven't heard me talk about WebDriverWaits, I'll be on Hangouts afterwards and we can chat about it. I've been talking about this for years, and for me it is one of the key points for making your Selenium tests reliable. What I do in my tests is that I don't use the Selenium methods like click or sendKeys directly; instead I have my own custom wait methods, which handle the different exceptions that might occur when I'm trying to click, and which, for sendKeys for example, also clear the field before typing into it and then check that what I typed is still there when I exit the field and switch focus. Because we have JavaScript these days, which might influence the behavior of our fields. Using these custom waits that deal with all of these steps for a simple task like sendKeys can help a lot with the reliability of the tests (I'll show a rough sketch of the shape of such a helper after this part). We can discuss this afterwards in the Hangouts.

Now, a very important thing for me when I write tests is, first of all, to have short tests. The shorter the test, the better, from the perspective of debugging it and of running it independently. I shouldn't have a test with a hundred lines of code; the test should be small. It helps with debugging, and it helps with simply understanding what the test is doing, because if it has 100 steps, you don't know what the test is about, and you don't know whether all of those steps are really required to get to the end. Maybe you're just doing too much in one test; consider that.

What I like to do is write the test in small chunks. Let's say I have 12 steps in the test, covering three different pages. I will first focus on writing the steps for the first page, and once I'm finished with that, I run the test again and again and again. If there is any issue with the code I wrote, it's just a little bit of code, so it's easy to pinpoint any random failure. By running these small chunks of code, I can easily identify the issues in them. Once I'm happy that what I wrote so far is reliable, I move on and add more chunks of code, then rerun everything again. This time, in theory at least, if I did a good job the first time, any possible failures should only be in the latest chunk I added. So by rerunning the test multiple times, I'm revalidating the initial code I wrote, the code for the first page, and I'm also properly validating the code for the second page.
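Here is the rough sketch I mentioned of a sendKeys-style helper. This is not my actual library code, just the shape of the idea, with a hypothetical locator and timeout: clear the field, type, switch focus, then wait until the typed value actually survived whatever JavaScript is attached to the field:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.WebDriverWait;

public class TypingHelper {

    // Clear the field, type the text, move focus away, then wait until the
    // typed value is still in the field despite any JavaScript running on it.
    public static void typeAndVerify(WebDriver driver, By field, String text) {
        WebElement element = driver.findElement(field);
        element.clear();
        element.sendKeys(text);
        element.sendKeys(Keys.TAB); // switch focus away from the field
        new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(d -> text.equals(d.findElement(field).getAttribute("value")));
    }
}
```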
Multiple runs mean more reliability, because I know for a fact that I have run the test so many times that if there was an issue, I would have picked it up while running these smaller chunks. So I just keep adding chunks, rerunning, and fixing if there is something to fix, and by the time I'm done with the test I have more confidence that it's going to be reliable, because I've exercised it a lot. By the time I commit it to the repo, the test has already been, let's say, tested by me, while I was running it and fixing it.

And don't forget a very important thing: if you want your test to be reliable, you need to add checks. It might seem obvious, but I have seen quite a few tests throughout my career that didn't have any sort of check in place. Especially with Selenium, people sometimes just assume everything went fine because there was no Selenium exception: we tried to click on something, there was no exception, so it must have been successful. That's true, the click was successful, but we also need to check what happened after we clicked that button. For example, say you have a form with 10 fields that you need to fill in before submitting the page. Okay, you filled in the page, you clicked submit, the button was there, it wasn't stale, there was no exception, so everything seemed fine. However, let's say that for one of those 10 fields we didn't input correct data. Hitting submit didn't actually submit the page; it just made a call to the backend, and the backend said: hey, that field is not okay, do something about it. In this case, without a check that says "now I got the success message", or some other confirmation that the page was submitted successfully, my test is going to pass even though it shouldn't have, because we didn't actually do what we wanted. So make sure that every time you perform an action, you check the response to that action or the behavior it generated. Don't just rely on Selenium to tell you something happened; Selenium can only show you exceptions when you try to interact with something that wasn't in the right state, for example it wasn't there, or it was stale, or similar.

Checking can be done in two ways. You can use assertions if you want to, but I prefer to use waits, as I discussed earlier with WebDriverWaits. For example, let's say we want to make sure a success message appears on the screen. We know which web element the success message should be in, and we know its expected text. The classic approach would be to call an assertion: assertEquals, the getText of that element equals the expected text. However, because again we might have JavaScript and other things going on, by the time the assertion runs it may be a bit too early; maybe we should have waited 30 more seconds before calling it. If instead we have a wait method that says "wait for the element's text to be this particular value", we're giving the test a fair chance to pass, because we actually wait for the success message to be there.
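For instance, a wait-based check for the success message could look like this sketch; the selector and the message text are made up. The idea is that the wait keeps polling until the text appears or the timeout expires, instead of asserting on getText immediately:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SubmitCheckExample {

    // Instead of asserting on getText() right after clicking submit,
    // wait for the confirmation text to appear; the check only fails
    // if the message never shows up within the timeout.
    public static void waitForSuccessMessage(WebDriver driver) {
        new WebDriverWait(driver, Duration.ofSeconds(30))
                .until(ExpectedConditions.textToBe(
                        By.cssSelector(".confirmation"),
                        "Your form was submitted successfully"));
    }
}
```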
The response after submitting a page can take a few seconds. It can be very fast, but it can also take a lot of time, especially in a test environment. By not waiting for the success message, you might cut it too short: you do the assertion too early and the test fails, even though the success message was going to be displayed, just not right at the moment the assertion ran. So waiting for the result of the activity, rather than asserting on it immediately, adds more reliability to your test; you're not going to fail it right away in those cases where you should have just waited a little longer. A wait makes sense in this particular case.

When we run our tests, it's a good idea to run them as much as we can. We have a lot of tests and we usually put them in a CI/CD pipeline, but it's also a very good idea to actually watch the tests while they are running. Especially with Selenium, we have the option to see what is going on: when the test is running, we have access to the browser. It's a good idea to look at a few test runs just to see what happens, because in some cases, even though we're waiting for the success message to appear, an error might also be displayed, and by looking at the browser we can actually see that the error was present. So watch the test a few times to make sure you don't encounter anything you didn't expect; and if you do see something unexpected, add a check for it, so that the next time it happens you can actually catch it with one of your checks and then report it as a bug if it is one. You can visually inspect the tests while running them on your machine, or, if you run them remotely, you can access that machine and look at the browser instance there to see what was going on.

Another reason why tests might pass when they shouldn't: whenever your test is created and your checks are in place, you need to make sure they are the right checks. If they are not, they will give you false positives. So every time you create a test, run it under normal circumstances: you expect it to pass, and it passes. But then update the expected results just to see that the test can fail, so you know it doesn't pass no matter what values are present on the page (see the small sketch below). For example, if you expect, let's say, three elements on the page and your check passes, change the code to expect five elements and make sure it fails while the page still shows three. That's a basic example, but when there is more complex logic involved, you might have try/catches, for example, and handling them in a certain way might lead to your test passing all the time, no matter what is going on on the page. This is why I'm saying: try updating the expected results just to see how the test behaves when you do. Look out for false positives, because this is very, very important; you don't want a test that just passes no matter what. There are two large causes of false positives: try/catches, and if/elses.
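Just to make that point concrete before we get to those two causes, here is a tiny illustration of checking that your check can fail; the selector and the count are hypothetical and I'm assuming JUnit 5 for the assertion. Write the check, see it pass, then temporarily change the expected number and make sure the test actually goes red before you trust it:

```java
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class ResultCountCheck {

    // The check from the example: expect exactly three results on the page.
    // To prove the check can fail, temporarily change 3 to 5 and confirm the
    // test goes red before trusting it in the pipeline.
    public static void checkResultCount(WebDriver driver) {
        List<WebElement> results = driver.findElements(By.cssSelector(".search-result"));
        assertEquals(3, results.size(), "Unexpected number of results on the page");
    }
}
```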
So whenever you're writing a try/catch, make sure you're addressing both branches, the try and the catch: think about what happens when the code in the try succeeds, and what happens when it doesn't. For example, you have a try and then a catch-all: sometimes people just write catch Exception and that's it, and Exception is the mother of all exceptions in Java. Let's say your code is supposed to throw only one particular exception. You have the catch; if that particular exception was caught, you do what you're supposed to do when it happens. However, if some other exception is thrown, you end up in the same catch, which says: hey, there was an exception, perfect, let's do the action we were instructed to do, and that's it. But your code only expected a certain exception, not the other one. If the other one happens, you should actually raise a flag; you should have the test tell you: hey, there is a problem here. So don't use catch Exception when other exceptions might come up apart from the one you expected, and make sure you consider the behavior in both cases (I'll sketch this in a moment).

Similarly with if and else: if you have an if, always consider what happens when the code in the if doesn't run. You expect the if to happen, that's great, but what if it doesn't? Is that a bug, or is it something we don't care about? Always consider this part: add the else, or at least think about whether the else properly highlights the presence or absence of a bug. Many times it does matter, and especially if you have nested ifs, an if inside an if inside an if, many ifs without the corresponding elses are a very easy way to not uncover bugs that are actually present in the software you're testing.

And always understand the code you are using, whether it's internal utility code or code from another library; make sure you know exactly what it does. For example, you might use a method (I'm talking about Java, but it can be a function) with a certain name, say isDisplayed in Selenium. Just by looking at the name, you would say it's going to return a Boolean, either true or false. And it does return true or false, but it can also throw an exception: in most cases, when the element is not on the page at all, it's going to throw an exception rather than return a Boolean. You need to know that this can happen in order to properly use code taken from somewhere else, code that was probably not written by you. Always understand what it does, and whether it actually performs any checks, because, as I said, you always need your checks: if it's doing a larger chunk of work, like submitting a page, make sure the corresponding checks are attached. And always try to debug an issue when your tests are failing and you don't know why. Select only the required test steps, so that you don't have to run a huge chunk of the test just to debug.
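Coming back to the catch-all point for a moment, here is a small sketch of the pitfall versus a safer version; the element ID is hypothetical and JUnit's fail is used for the explicit failure. In the first method any exception, expected or not, is swallowed; in the second, only the exception the scenario actually expects is handled, and anything else fails the test loudly:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import static org.junit.jupiter.api.Assertions.fail;

public class CatchExample {

    // False-positive prone: any exception, even an unexpected one,
    // lands in the same catch and the test quietly moves on.
    public static void badCheck(WebDriver driver) {
        try {
            driver.findElement(By.id("optionalBanner"));
        } catch (Exception e) {
            // swallowed: a StaleElementReferenceException or a WebDriverException
            // is treated exactly like "the banner just isn't there"
        }
    }

    // Safer: only the exception the scenario expects is handled;
    // everything else propagates and fails the test.
    public static void betterCheck(WebDriver driver) {
        try {
            driver.findElement(By.id("optionalBanner"));
            fail("The optional banner should not be present in this scenario");
        } catch (NoSuchElementException expected) {
            // this is the outcome the scenario requires, so the test passes
        }
    }
}
```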
Add breakpoints and go from breakpoint to breakpoint. Evaluate variables: in debug you have the option to take a variable and assign different values, just to see what happens. Also attempt different actions. For example, if you have a case where a web element is not found on the page by your test, while you're in debug you can actually build the selectors and try to identify the element right there in the debugging session. Try to evaluate as much as you can while in debug, because when you're debugging, you control the page you're testing: you can attempt different actions and easily see the outcome of each one. You don't have to just run the test, wait for it to finish, and try to understand what it was doing. You can actually control the test run from the debugging process.

If it's very difficult for you to debug, or you just cannot do it, do a lot of printouts to the console to identify where you are in the code. For example, sometimes it's not obvious that you went into a certain branch of those nested ifs; if you add some System.out printouts, you can actually say: hey, I'm in the third if, or I'm in that if where I didn't expect to be. Give yourself hints and clues regarding what the test is doing and which steps it has already taken before it failed; give yourself as much information as you can. But debugging is usually the best way to go, because you have so much control over what is going on, and you can try different things and see their output directly. For example, when you have a selector for a web element, you're pretty sure it's correct, and you're pretty sure the element is there on the page, in debug you can discover that the element is actually inside an iframe: you try the selector and see the exception, then you try switching to a frame, you try the selector again, and you realize, okay, it was in a frame, so that switch is the step I actually need to add to the test itself (you'll see what that step looks like right after this).

And always make sure you have code reviews on the code you're writing. Every person who looks at the code can come up with additional useful tips to improve the test, or point out reasons why your code isn't behaving properly. Apart from that, before the code reviews, do your own code inspections. There are quite a few tools for this, and one of the best is the one embedded in IntelliJ: if you're creating your tests with IntelliJ, you can run an inspection on the code before you commit it, and it will point out all kinds of issues, for Java for example, like improper use of the language or improvements you can make. So do reviews to get a second opinion, and do code inspections as a kind of automatic review, for hints on what you can improve. And if you have colleagues helping you debug the code you wrote, make sure you are giving them the latest code to debug.
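For example, the missing step uncovered by that kind of debugging might end up looking like this in the test; the frame name and element ID are hypothetical:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class FrameExample {

    // The kind of missing step debugging can reveal: the element was always on
    // the page, but inside an iframe, so the selector only works after switching.
    public static void clickInsideFrame(WebDriver driver) {
        driver.switchTo().frame("paymentFrame");   // hypothetical frame name
        driver.findElement(By.id("payNow")).click();
        driver.switchTo().defaultContent();        // return to the main page
    }
}
```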
And as I was saying about colleagues helping you debug: always commit the latest changes, to make sure that you and they have the same code before they start debugging it. I've had a situation where a colleague was having an issue with a test. He said: oh, there is an error here. When I ran the test, I wasted an hour trying to get to the error he was telling me about. He then said: oh, no, no, I commented out this particular line; comment the line out and you'll see the issue. After an hour of telling him I just couldn't reproduce the problem, it turned out he had more changes on his machine that I didn't have, so I wasn't able to reproduce his issue because we weren't looking at the same code. So whenever you have colleagues helping out, make sure you're all looking at the same code in order to actually identify the source of the problem.

And always run the tests on a schedule. You have CI pipelines; run the tests you created as much as you can, daily, nightly and so on, because that will help you quickly identify any issues in your software. From time to time, also run your tests on your own machine, because that will help you notice, for example, something on the page that you didn't realize was introduced: maybe you need to add extra steps in your tests, or, by watching the test, you might pick up on errors that are not very obvious and that you didn't have checks for, and you can add those checks. Running the tests from time to time also means running them under different environment conditions, because there might be deployments going on in the meantime, and if those deployments somehow affect the product, you'll pick up on the problems fast and can report any bugs that occur.

And don't just retry your tests. Always fix failing or flaky tests, because if you don't, those tests will simply be considered irrelevant. Every time the same test fails, people will just ignore it and say: oh yeah, it's that test that always randomly fails. Even though in some cases it might fail for a very good reason, because it has a bad history, people won't look at it anymore, and you're going to miss out on some bugs because of that. And of course, a test that randomly fails cannot be part of a CI/CD pipeline. You don't want that; you want reliable tests, tests you can count on to validate your software. You can fix a bad test at any time: whenever you find there's an issue with it, just fix it. Create JIRA items to fix or update the tests and set aside some time for them, especially if the feature being tested is very important. Don't leave bad or unreliable code in the project, because somebody else might use it in the future, and then you're just propagating bad patterns into the tests somebody else is writing.

And just two tips for less maintenance. First, try to use CSS selectors, because they are less dependent on the HTML structure.
So when the HTML changes, you don't need to make so many updates to your selectors. And second, try to extract repeating code into methods and use parameters, obviously, to cover more scenarios. This way, when you have to update a particular part of the code, you just go into that one method, update it, and all of the tests using it get the change at once. So that was me. Thank you so much; I hope I didn't bore you too much. And let's see if we have anything in the Q&A.

This is a question from Charlie Pradeep. Yes. Okay, so: what if two buttons with the same properties are available on different screens? How do you check whether the button on the first screen is the one that was clicked? Yeah, we would have to take a look at the scenario. There is a way to do it, but we'll just have to see; we can discuss this in the Hangouts, maybe with a bit more of a concrete example.

Okay. What is the ideal amount of time to wait for an element? So, by the way, there is a library I created, called Waiter; it's in my repository. After you receive the slides, if you go to my GitHub project, you can look up the Waiter library and see a few examples of the wait methods I have there. For example, your wait can be something like: only exit the wait successfully when the click on a button actually happened; or only exit the wait method if you managed to scroll to a particular element; and so on. There are different wait methods for different actions. Wait for the correct value to be selected from a dropdown, for example: that means you first wait for the dropdown to be there, then you perform the actions with the dropdown, like selecting the value, then you tab out to focus another element and make sure that, after focusing out, the value you selected is still the one selected. So there are a lot of waits you can create; it all depends on the code you are testing and on the scenarios you are trying to cover.

Okay, we will go ahead with Gaurav's question in the Q&A section. Yes. So: any experience to share with devs writing end-to-end tests? How do you encourage them and provide guidance? I don't, really. I've had a rather bad experience with developers writing tests: there was one particular case where I asked a developer to write ten scenarios, he automated only nine, and I was doing a demo and demoed exactly the one for which we didn't have automation, and it was failing, and I had to explain that in the demo to the stakeholders. It's a bit more challenging. Usually I prefer testers to write the automation for what the developers are doing, from two perspectives. First, the developer has a certain understanding of the requirements; if they are wrong about it, they need somebody else to validate that, somebody who is not them. Otherwise, they are testing with the same misconception with which they implemented the feature, so they might not necessarily have implemented the correct requirement. Secondly, they don't have the same understanding of the product, or the same vision of it, let's say, as the testers. If you do want your devs to write automation, you need to sit down with them, walk them through the demo a little bit, make sure they understand more about the product, and of course present to them the scenarios they need to test.
And whenever they have questions, they should feel at ease coming to you and asking for your guidance in case there is something wrong. So it's all about communication. If they do want to do that, and if that's something you want them to do, make sure there is constant communication and constant review of what they are doing, because code reviews will validate whether they're actually testing the thing you wanted them to test. So I hope that answers the question.

Thanks, Pip, for the insightful talk; this will really be helpful in our daily lives.