Welcome to "Say Goodbye to Test Automation Flakiness Using AI Tools." Today I'm going to talk about how you can use AI tools in your automation to reduce flakiness in your framework.

First of all, let me introduce myself. My name is Amit and I'm the head of solution engineers at Test Project. I'll be walking you through how you can leverage AI tools to achieve more stable automation and more stable tests overall, in reasonable detail. If you have any questions, feel free to ask them in the Q&A section; I'll be watching it from the start and throughout the entire session, so let me know if there's anything you're looking for. If you need anything specific, we will also be at the Test Project booth at the end.

So, let's begin with the session outline. The first thing we are going to cover is what a common test automation framework and architecture looks like. From there, we will continue with the main factors behind automation flakiness. Then we will go over how AI tools can help reduce test flakiness, save us valuable time and resources, and shorten the time our tests take to execute. And we'll keep some time for a Q&A section at the end.

So, a common test automation framework looks similar to this. We have a programming language of choice, for example Java or JavaScript. Then we have an automation framework of choice, usually something like BDD or the Page Object Model. And we have a driver: a WebDriver if we're using Selenium, or UIAutomator or XCUITest if we're using Appium. That is what a common test automation framework looks like. Now let's understand how AI tools can help us achieve more stable tests and spend less time on development.

Before we get to how we can reduce it, let's talk about what causes automation flakiness in the first place. Flakiness can occur for multiple reasons. We'll look at a few of them here; we're not going to go over all of them, as we only have 20 minutes.

One is false positive reported actions: actions in your automation that were reported as successful but did not actually do anything. For some reason they had no effect on the UI or on the application.

Mobile device variation is another very common factor. In today's day and age we have so many different mobile devices and configurations, plus virtualization and different screen resolutions, and all of these affect how stable our automation is.

Things like internet connection speed and different loading times can also affect it. For example, running the same application against dev or against production can make a major difference in how fast it responds.

And we have dynamic and auto-generated locators: a lot of applications have a very complex structure that generates ambiguous or invalid locators, as in the short sketch below.
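To make these causes concrete, here is a minimal sketch, assuming a hypothetical login page, of the kind of brittle Selenium step where this flakiness typically shows up: a hard-coded sleep instead of a proper wait, and locators built from auto-generated IDs. The URL, element IDs, and values are illustrative only, not taken from a real application.

```python
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.test/login")  # hypothetical application URL

# Hard-coded sleep: too short on a slow environment, wasted time on a fast one.
time.sleep(2)

# Auto-generated IDs like "input-7f3a" are likely to change on the next build,
# producing ambiguous or invalid locators and breaking the test.
driver.find_element(By.ID, "input-7f3a").send_keys("demo-user")
driver.find_element(By.ID, "input-7f3b").send_keys("demo-pass")
driver.find_element(By.CSS_SELECTOR, "button.btn-4421").click()

driver.quit()
```

Timing, environment speed, and changing locators can each make a step like this fail intermittently even though the application itself is perfectly fine.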
So how do we deal with that at Test Project? We developed an approach to reduce flakiness by including three AI components.

The first is the automation assistant, a mechanism that automatically detects false positive actions.

The second is the adaptive wait, a mechanism that combats one of the biggest automation enemies: timing. The adaptive wait is a global setting which can be increased or decreased according to the application speed, and which automatically waits the amount of time needed for any element in the application to load, so that the environment conditions are sufficient for the automation action to succeed.

The third is self-healing, a mechanism that automatically constructs multiple alternative locators for a single element. This allows us to find the element in multiple ways instead of just one, so our automation will not break if the element is moved or, for example, its name is changed; we cover multiple ways to find the element and effectively heal the automation.

We have a question: do we have open source AI tools? Test Project is built on open source tools such as Selenium and Appium, and Test Project itself is free software. The agent itself is not entirely open source, but it is free software, and the components it is built on are open source. We also have an open SDK, which is a completely open source solution.

So let's move on and understand how it actually works. I know it looks a bit complicated and maybe a bit intimidating, so first let's talk about how common automation works without AI. Basically, we look for an element; if we find the element, we perform an action; and if we found the element and could perform the action, our action succeeded and the test passed. This is common automation without AI. If we didn't find the element, the action failed because we couldn't find the element, and the test failed, or whatever behavior we have defined for such an exception. If we could find the element but could not perform the action, usually the test will also fail. So this is how an automation framework looks without AI: we have a test, we ask for the element, we perform the action, and we get a simple result, succeeded or failed.

Now let's take a look at what happens when we combine AI with automation. First of all, we run a step, and then a second step: we look for the element, and if we find it, we perform the action. If the action was performed, again, the test passes. Pretty easy so far.

If we couldn't find the element, we apply the adaptive wait, which essentially waits for the element to become available on the page. All of this happens automatically by default, and as I mentioned, it is a global setting which you can change for a specific step or for your entire test. So the adaptive wait automatically waits for the element to be found.

Let's say the element still was not found. What do we do then? We waited for the element, explicitly waited for it, and we couldn't find it. Here is where basic automation without AI would fail, but instead we invoke self-healing. Self-healing is the second mechanism that, as mentioned before, keeps multiple locators, at least three to five different locators for a single element, so the test will not fail so quickly. We check which locator can heal this step and then go back to looking for the element again. A minimal sketch of this wait-then-heal idea is shown below.
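To make that part of the flow concrete, here is a minimal sketch in plain Selenium, not Test Project's own implementation, of how a wait plus locator fallback could be approximated. The timeout value, the element, and the list of alternative locators are assumptions for illustration; in Test Project these alternatives are built and applied automatically by the agent.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

# Hypothetical alternative locators for the same "Login" button,
# ordered roughly like a self-healing priority list.
LOGIN_LOCATORS = [
    (By.ID, "login-button"),
    (By.NAME, "login"),
    (By.XPATH, "//button[normalize-space()='Login']"),
    (By.CSS_SELECTOR, "form#auth button[type='submit']"),
]

def find_with_healing(driver, locators, timeout=10):
    """Wait for each locator in turn and return the first element that appears."""
    for by, value in locators:
        try:
            return WebDriverWait(driver, timeout).until(
                EC.presence_of_element_located((by, value))
            )
        except TimeoutException:
            continue  # this locator failed, try the next one ("heal" the step)
    raise TimeoutException("No locator matched the element")

driver = webdriver.Chrome()
driver.get("https://example.test/login")  # hypothetical URL
find_with_healing(driver, LOGIN_LOCATORS).click()
driver.quit()
```

In the agent, the adaptive wait and the alternative locators are applied without any extra code; this sketch only mirrors the order of the decision flow described above.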
Now let's say that even with five different locators we still could not find the element. Should we fail the test now? No, not yet. We apply the automation assistant logic.

What the automation assistant logic does is ask a few questions and calculate whether or not the last step was a false positive action. If it was, we simply rerun the previous step and then try to find the element again. A lot of the time in automation, if we had a false positive action, for example we clicked login, we now expect to be on the logged-in page. But let's say that for some reason the login step was reported as passed, yet the page didn't actually move to the logged-in state. A false positive action happened, so the step after it will fail. This is very common in automation. The automation assistant knows how to detect such cases, run the previous step again, and then try to find the element again; I'll show a small sketch of this idea in a moment. So there are basically three mechanisms that are applied when an element is not found.

Now let's say we could find the element and we are performing an action, but the action itself failed, maybe because it's not the element we intended to get, or something else got in the way. The automation assistant again kicks in and tries to change the result: it reruns the previous step and then runs the current step again. So basically, all of these mechanisms are here to ensure that our test does not fail when it shouldn't. We also have one more setting which allows us to set a "pass anyway" value: if on a specific step we don't mind whether it succeeds, we can apply this logic and the test will not fail because of it. So this is how we can leverage AI to heal our tests.

This is what the execution flow looks like. You can see that we have over 1,500 different actions to choose from, and a step is basically made of an action and an element to perform that action on. And this is how it looks on the actual screen: we have multiple locators built for a single element, as you can see here, and an action to perform. If we couldn't find the element, the adaptive wait is applied, the automation assistant is applied, and self-healing is applied in between. We can also set on the step whether or not it should fail the test. You can see the action was performed successfully, but a lot of safety mechanisms stand between an action succeeding or failing.
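Here is a minimal sketch of the rerun-the-previous-step idea just described, again in plain Selenium rather than Test Project's implementation. The step structure, the element IDs, and the single-repair limit are assumptions for illustration; in the product this decision is made automatically by the automation assistant.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import WebDriverException

def run_with_assistant(steps, driver, max_repairs=1):
    """Run steps in order; if a step fails, treat the previous step as a
    possible false positive, rerun it, and then retry the failed step."""
    i, repairs = 0, 0
    while i < len(steps):
        try:
            steps[i](driver)              # perform the current step
            i, repairs = i + 1, 0         # move on and reset the repair budget
        except WebDriverException:
            if i > 0 and repairs < max_repairs:
                repairs += 1
                steps[i - 1](driver)      # rerun the previous (suspect) step
            else:
                raise                     # no repair left: report the failure

# Hypothetical login flow: click login, then expect the logged-in banner.
steps = [
    lambda d: d.find_element(By.ID, "login-button").click(),
    lambda d: d.find_element(By.ID, "logged-in-banner").is_displayed(),
]

driver = webdriver.Chrome()
driver.get("https://example.test/login")  # hypothetical URL
run_with_assistant(steps, driver)
driver.quit()
```

The sketch only mirrors the rerun-and-retry shape of the flow; the real automation assistant also weighs signals such as how fast the suspect action completed before deciding to repair it.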
So let's move on. The role of AI tools in this automation is basically to improve our day-to-day work. They help us create robust locator strategies based on the application type; all of this happens automatically, you just need to apply the strategy type. They automatically apply self-healing when the automation breaks, so you don't need to build multiple locators yourself: they are already built during development, and the AI can use them during execution to stabilize the test. When the automation breaks as a result of an element change, we can apply self-healing and then switch to the faster locator, so the locator that detected the element in the least amount of time moves up in priority for the next execution; this way we can also make our automation more efficient. The AI is also used to eliminate failures that result from loading time, invalid element states, or environment factors, and this is what the adaptive wait does. It doesn't just wait for no reason; it waits for this specific element, or more precisely for the first locator defined in the self-healing, so it's basically two components working together. The AI can also be used to determine false positive actions and automatically repair them, which is what the automation assistant does: it looks for these actions, and when it detects them, it goes back in the test run, reruns previous steps, and automatically repairs them. It can predict these steps and execute them as part of the automation flow, and again, the automation assistant does this automatically. Finally, it also provides us with visual testing and image analysis when all other methods fail.

So thank you all for listening. We have five minutes left for Q&A, and I see we already have a question.

There is a question about the SDK. Basically, the AI components are built into the agent. When you are using our open SDK, you are saying: I want to write all of these things myself, I don't need an automation assistant, I don't need self-healing, I want to write my own code. So of course you will have to give up some of these capabilities, not all of them, but some of them. When you're using the open SDK, what you are doing is developing your own code, and that means combating these automation issues with your own skills. This is why running tests through the agent as recorded scripts is so powerful.

We have a question about how the AI finds false positives. It does this with multiple calculations: it checks how fast the action was performed, whether or not we can find the next element, and whether or not we could find the previous element. It applies a lot of logic based on the test and that specific automation to decide whether or not we should rerun the previous step. Let me just go back to that slide and show you how this works. Basically, we can see that when an element is not found, we can detect that a previous step was a false positive step, and then we simply rerun it and look for the element one more time. So we rerun that entire path.

Let me see if we have any more questions. That one has been answered live. That one has also been answered live. So I think we're pretty much done. Thank you so much for sharing your time with us today.