All right, well, welcome everybody. This talk is on the systems thinking approach to test automation. I'm Gerard Meszaros. You might know me from my book on xUnit test patterns. Another project I was involved in a number of years ago was the Acceptance Test Engineering Guide, with Grigori Melnik and Jon Bach, who happens to be James Bach's little brother. So there's a fun fact for you. And this is Jon Bach's report card story. It's a very short story. Johnny comes home from school on report card day, and his father asks him, "Where's your report card? I'd like to see it." He says, "I don't have it." "Why don't you have it?" "A friend of mine wanted to borrow it to scare his parents."

So why are we talking about report cards? Testing, or QA, quality assessment, is effectively a report card: it tells us what our quality is. This is why we can't test quality into a product. When we test downstream, as a separate testing group, all we can do is catch the defects coming out of the end of the pipe, which isn't a very effective way of improving quality. So why is this an issue? The problem is that if we treat testing as a separate activity downstream from development, we end up playing a game I call test-and-fix Battleship. Did anyone here play Battleship? Development throws a software release over the wall. Testing plays around with it for a while, writes up some problem reports, says "missed," and sends back the bug reports. Then the developer gets to take another shot. They keep taking shots until eventually you run out of time and say, "OK, we're going into production anyway," even though there are still lingering bugs.

I call this the agile test problem. In the old days with waterfall, we had distinct phases, each months long, and in that last phase we did test and fix. Not just testing: testing and fixing and testing and fixing. As we've shortened our time frames, the amount of time available for testing has gotten shorter and shorter. Now that we're delivering every week, every two weeks, some people every few hours, how can we have testing as a downstream activity from development? To solve this, we need at least some automated testing. That's not to say we don't need any manual testing, but automation is a good way to get the drudgery part of the testing work done. It's not a matter of automated tests replacing human testers. It's a matter of automation helping human testers find the easy stuff so they can focus on the real high-value testing activities.

So, who here is doing automated testing? Put up your hand. Keep your hand up if you think it's working really well for you. At least some of you think it's doing well; good to see. For those of you who aren't looking around: maybe 10% of the people put their hands up saying they're doing automated testing, and maybe two thirds of those kept their hands up. So I'm encouraged to see that people are having reasonably good results with it. The problem is that, historically, traditional approaches to test automation have had terrible results. Which brings us to the famous Einstein poster; I don't know if he actually said it, but it makes for a great slide.
We kept doing things the same way, using the same kinds of approaches and tools, and we kept getting the same results, which was that the automation was unsustainable. So the thing to think about is that automated tests are just code: code that happens to test other code. That raises two important questions. Who tests this testing code? And who should write it in the first place?

In the agile world we draw a lot of inspiration from Toyota, from the Toyota Production System and the Toyota product design system. One of the most important practices that comes out of the Toyota Production System is root cause analysis, which Toyota calls the Five Whys. The reason it's important is that when a problem occurs on the production line, anyone can stop the line and say, "Hey, this part won't go on. It won't go on right." The production line stops, and everyone from the neighboring work area comes over and asks, "What's wrong? Why'd you stop the line?" "Well, this part won't go on." So they ask the question: why doesn't it go on? Because the holes are drilled a bit crooked. Well, let's go find out why. They walk upstream to where that part came from and figure out why it was drilled wrong. It was drilled wrong because the jig moved. Why did the jig move? And they keep tracing it back to the root cause. Now, it's called the Five Whys, but it's actually however many whys it takes to get back to the root cause. What we're trying to do is find out what caused the problem so we can put measures in place to make sure it never happens again. Ironically, when a Toyota factory fires up, it stops a lot. Slowly they work the kinks out of the production line, and eventually it rarely stops at all, despite the fact that anyone on the line can pull the cord. Whereas in a non-Toyota-style factory, a more traditional factory like in North America, the only one who can pull the cord is the general manager or their direct delegate, and the cord gets pulled a lot. What's the difference between the two? The traditional factory doesn't do the root cause analysis and put in place the measures to prevent defects.

So let's apply this to test automation. Let's think about why test automation is hard. Automated tests are very complex; they're hard to write. Why are they complex? Because they typically interact with the system through the user interface. Why do they have to interact with the user interface? Because the product isn't designed for testability. Why isn't it designed for testability? Because development doesn't know how to design it for testability, or maybe they just don't care. Why don't they care? Because it's QA's problem. Why is it QA's problem? Because tests are automated after development is complete, and development has moved on to something else. Why are they automated after development is complete? Because it's QA's problem. I feel like we're going in circles here. So why does development not know how to make the code testable? Because QA isn't involved in the product definition, so the testability requirements aren't clear. Why is that? Well, because QA is supposed to be independent of development: independent testing. You wouldn't want the developers and the testers colluding, with the testers telling the developers ahead of time how they're going to test, because then the developers might actually make the software work that way. Okay, so why is QA not involved in the product definition?
Because they're too busy testing the last release. Why are they too busy testing the last release? Because test automation is hard, therefore they're doing all the testing manually. Okay, that was a bit of a journey.

Now let's take a small side step into systems thinking. Who here is familiar with systems thinking? Okay, good; this will be a nice, very brief introduction for the rest of you. Suppose we have some things in the system: A, B, and C. These things are related in some way. If increasing A causes B to increase as well, we draw a little arc between them and put a plus sign next to it. What that really says is that these things are correlated, and causally linked: if something happens to A, the same general direction happens to B. And if increasing B, for example, causes C to go down, that's a negative link. We can also have a loop, so C increasing can cause B to increase. Now, the interesting thing is that when you've got a plus and a minus together like that, you get what's called a balancing cycle. It's self-regulating: increase one, the other goes down; increase that one, the other goes up. It finds an equilibrium point. In contrast, when increasing A causes B to increase and increasing B causes A to increase, that's what's called a reinforcing cycle. The bad news for all of us is that climate change is one of these: the warming climate causes the ice caps to melt, which causes more warming, and all sorts of other consequences like that. But that's an issue to deal with outside this room. Another name for a reinforcing cycle is a vicious cycle, or a virtuous cycle. What's the difference? A vicious cycle is one where the thing increasing is bad, and a virtuous cycle is one where the thing increasing is good. So that's our introduction to systems thinking. Didn't take very long.

Now let's go back to our five whys of test automation. To turn them into a systems thinking diagram, we basically reverse all those arrows and ask: what's the relationship? You'll notice that all of these are positive relationships, which means each one of these things is reinforcing the next. What kind of cycle is that? A reinforcing cycle. Is it a good reinforcing cycle or a bad one? It's not a good one, because we're basically guaranteeing that our products remain hard to do automated testing on. So this is a vicious cycle, and we need to break out of it somehow. That means that somewhere along here we need to break one of these links, or change its sign so it's negative rather than positive.
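As an aside, that plus-and-minus notation is simple enough to capture in a few lines of code. Here's a minimal sketch in Java, just my own illustration, assuming a loop is represented as a list of signed links. The standard system-dynamics rule is that a loop's polarity is the product of its link signs: an even number of negative links makes it reinforcing, an odd number makes it balancing.

```java
import java.util.List;

// A causal loop is a cycle of signed links (+1 or -1). Its polarity is the
// product of the link signs: a positive product means a reinforcing loop
// (vicious or virtuous), a negative product means a balancing loop.
class CausalLoop {

    // e.g. A -(+)-> B -(-)-> A is encoded as List.of(+1, -1)
    static String classify(List<Integer> linkSigns) {
        int polarity = 1;
        for (int sign : linkSigns) {
            polarity *= sign;
        }
        return polarity > 0 ? "reinforcing" : "balancing";
    }

    public static void main(String[] args) {
        // The two cases from the diagram: a plus/minus pair self-regulates,
        // a plus/plus pair feeds on itself (like the melting ice caps).
        System.out.println(classify(List.of(+1, -1))); // balancing
        System.out.println(classify(List.of(+1, +1))); // reinforcing
    }
}
```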
So, who here knows Conway's Law? Okay, a couple of people. Conway's Law basically says that the architecture of a system will resemble the organization that built it. You organize a certain way, and your architecture will be organized the same way. This is because people working in teams tend to create interfaces, or boundaries, between themselves and others. So let's look at the typical way we organize our teams: by function. We've got a business team that comes up with the vision and the requirements; here's what we want the system to do. We have a development team that builds the software, and then we have a test team. So what is the architecture going to look like? Well, we have our specifications, with a little business guy there working away on them: business analysts, or other kinds of roles on the business side of the house. We've got our developers beavering away writing product code, and we have our test people. And if those are test engineers writing automated test code, they're writing test code based on the same specifications. What are the odds they're interpreting those specifications the same way as development? Your mileage may vary. What happens is that at some point we integrate our test code with our product code, and we discover that either the tests aren't written right, or the product code doesn't work the way the business intended, but either way, it's one of those classic examples of big bang integration. Everyone familiar with big bang integration? You know why it's called big bang? Okay. Enough said.

So what can we do to change this? One of the things we can do is use a process that is sometimes called example-driven development, or acceptance-test-driven development, or storytest-driven development. The idea is that we write our requirements in the form of potentially executable specifications, which are basically business examples of how we want the system to behave; I'll show you some in a moment. Then, as part of the job of building the system, we build an interpreter that takes those potentially executable specifications and uses them to interact with the product code. Now, one of the interesting things that happens here is that the specifications tell development what the code needs to do, and the interpreter tells development the testability requirements. Remember that one of the issues was that developers didn't know what it would take to be testable? Well, if you make it their job to build this example interpreter, then it's in their best interest to make the code testable, because otherwise they're making that job a lot bigger than it needs to be. As you're building these things, you start running the examples, and they execute the code: the example interpreter reads the executable specification, interacts with the product code by calling the appropriate functions, passes in the appropriate data from the examples, compares against the expected results the examples include, and generates a report card. This means development knows at any point how well they're doing against the specifications. Now, at some point they may find that the specifications are inconsistent, so you get feedback about the specifications as you're building the interpreter: they're not using these things consistently, sometimes there are two of these and sometimes only one. So it forces, it improves, the consistency, and at that point the people who prepare the specifications can fix them up. This is debugging the specs. And then, as we actually interact with the product code, we get feedback on whether there are inconsistent requirements. The first kind of feedback is about inconsistent formatting, inconsistent grammar or syntax; the latter is about inconsistent semantics.

The important thing to note here is that test, business, and development, which we sometimes shorten to TBD, and which Ken Pugh calls the three amigos (there are several other names for this triad of expertise), aren't necessarily separate people. It's separate expertise, or skills. It's certainly not separate teams. It could be that you've got some business analysts and some testers and some developers on the same team, a cross-functional team like in Scrum, or you could have people who are good at all of these things, depending on the nature of the domain you're working in.
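To make the example-interpreter idea concrete, here's a minimal sketch of what one might look like in Java. This is an illustration only: `SystemUnderTest` and the row shape are stand-ins I've invented, and a real interpreter would parse the example tables rather than receive them pre-parsed.

```java
import java.util.List;

// A minimal example interpreter: it feeds the rows of a business example
// table into the system under test and prints a pass/fail "report card".
class ExampleInterpreter {

    // The product-side hook the interpreter drives. The product team
    // implements this, which is exactly what creates the testability
    // requirement described above. (Hypothetical interface.)
    interface SystemUnderTest {
        boolean processAndCheckNotified(String account, double amount);
    }

    // One row of the example table: inputs plus the expected outcome.
    record ExampleRow(String account, double amount, boolean expectNotification) {}

    static void run(SystemUnderTest system, List<ExampleRow> rows) {
        int passed = 0;
        for (ExampleRow row : rows) {
            boolean notified = system.processAndCheckNotified(row.account(), row.amount());
            boolean ok = notified == row.expectNotification();
            if (ok) passed++;
            System.out.printf("%s account=%s amount=%.2f notified=%b expected=%b%n",
                    ok ? "PASS" : "FAIL", row.account(), row.amount(),
                    notified, row.expectNotification());
        }
        System.out.printf("Report card: %d of %d examples satisfied%n", passed, rows.size());
    }
}
```

The report card at the end is the point: run it any time, and development knows how well they're doing against the specifications.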
So, a question you might ask at this point is: wait a minute, isn't this just test automation by another name? Is it? I'm glad you asked. Names are important, and one of the problems we often run into is people using different words to mean the same thing, or the same word to mean different things. There are some people working very hard to clarify what these various names mean. If you go to this particular URL, you'll see a lengthy description of what is meant by tests versus checks. I'll summarize it for you: checks are things you can automate; tests are things that only sentient human beings can do. It seems like a fairly arbitrary distinction, but that's the terminology they use; this is people from the so-called context-driven school of testing. Adding on to that: examples are things that can be used as checks, but which also help you understand what the system should do. So ugly test code does not make a good example. Nicely formatted and laid-out business examples, the kind of thing business people would draw on a whiteboard while explaining to you how the system should work, make excellent examples. And with a little bit of work, they can often be made executable.

So let's look at a little example. This is a banking system where we're going to configure notifications: we want to be notified when certain activity happens on our credit card. The purple things are the user stories that would have helped us build out those use cases; the ovals are the use cases, for those of you too young to remember UML notation. And this may be what the user interface might look like for configuring those notifications. Then, every time a transaction is processed through the banking system, it checks whether notifications are configured for this account and type of activity, and sends us some kind of notification. You can configure what kind of notification you want: a text message, an email, a phone call, a telegram. People are making up new ways to do this all the time.

So here's an example of checking that this notification system works. This is a potentially executable specification. If it's written rigorously enough, it can actually be automated; there are tools you can use for this. Robot Framework will interpret something like this, as long as there's a consistent grammar: "click on" would be a keyword, and these things are the names of labels and field contents and so on. Here's another version of it using a framework called Fit, or FitNesse. This one is tabular, and in fact Robot Framework lets you do tabular things as well. Those happen to be two of my favorite tools for doing this. So let's just walk through a little example of checking notifications. We go into the system, we set up some notifications, we check to make sure there are no error messages, and we check that the notifications are configured properly. Then we run the actual financial transactions: here are all the transactions we put through the system, and here are the notifications we expect to get out. Now, how good an example was that? Was it terrible? Was it fantastic? Somewhere in between?
If you were building the system, could you use that example to understand what the system is supposed to do, what we mean by thresholds, the different kinds of events that trigger notifications, and so on? You probably could. It's way better than a lot of specs: "the system shall blah, blah, blah." And it's entirely executable; those are actual chunks of screenshot from the tool. Now, one of the problems with it is that there's a lot of detail, and you really need to go through and look at everything very carefully, trying to figure out which of these things up here appear down there and which ones don't, how that maps back to the rules, and how you interpret the rules. So it's good, but it's not great.

Who here has heard of the test automation pyramid? Mike Cohn came up with that quite a while ago. The idea is that you want to have a lot of unit tests: you build a broad base of unit tests that test the details of your code. As you move up to larger-grained components in your system, you have a much smaller number of component tests, which, by the way, are a lot more expensive to build than individual unit tests. So you might have tens of thousands of unit tests, maybe hundreds of component tests, and then ideally, at the full-system level, you only need a few tests for a particular piece of functionality. Now, I talked about unit tests yesterday, so I'm going to drop those off this picture. What I want to do here is focus on the different kinds of tests we can do, and how that influences what kind of examples we want to produce. Down the left side is the level of detail we want to include in the examples: high at the bottom, low at the top. Across the bottom is the scope of the functionality we're testing: are we testing the whole system or small pieces of it? Broad scope on the left, narrow scope on the right. Now let's overlay the different kinds of examples, or tests, whatever terminology you want to use. Workflow tests that exercise an entire workflow through the system, like the one we just saw, fit up here. They're very broad in scope: the entire system, multiple use cases. And ideally you want a low amount of detail, because what you're trying to illustrate is the overall workflow, not the details of the rules in the system. Individual transactions with the system, individual use cases, fit in the middle: medium detail, medium scope. And the detailed rules and algorithms in the system should be down here: narrow scope, individual components, and lots of detail.

The problem is that most automated testing happens at a high level of detail no matter what kind of testing you're doing, so it ends up down in this corner. And when you're down in that corner trying to do multi-use-case tests, workflow tests, et cetera, the tests become very big and very hard to understand. You don't want to be down in this corner, because there's too much detail, and that makes the tests unmaintainable. The opposite corner, too narrow a scope for the level of detail you're providing, almost never happens. If it were to happen, it would basically give you an incomplete spec, one that wouldn't tell you enough about what you need to build, but I don't think I've ever seen that. The natural tendency is to end up in that bottom left corner. So how does this diagram help us judge the test we just looked at, and tell us what we can do about it?
So if I find myself down in that bottom left corner, which is roughly where I am with the example I just showed you, I want to reduce the detail a lot if what I want to demonstrate is the overall workflow. But if I want to describe the details of individual transactions, or the rules, like whether or not I should notify, I want to reduce the scope and keep lots of detail. So let's apply that to our overall workflow example. What parts of it are not important for understanding the overall workflow, assuming that information will be provided in other examples? Well, all this stuff up here is really quite irrelevant. Look at that: we're down to just one thing. It's important that we know the customer set the thresholds, but nothing else. The key principle here is that if it isn't essential to conveying an understanding of what the behavior should be, we should leave it out. Now let's look at page two of that example. There are all these individual transactions here. Do we need all of them? No: if we just pick a few, we can still describe things, and we get everything onto one page. Then let's ask which of the remaining information is not essential to understanding the overall workflow of notifications and their configuration. Do we need to know the details of the rules? No; we'll put that in another example that's about the rules, so let's get rid of a whole bunch of that. Do we need to know how we're going to notify? No, we just need to know that we're going to notify; we don't need to know whether it's email or text message or whatever. So we keep removing information.

Now, who was at my talk yesterday? Okay. I talked a lot about the Given-When-Then notation in unit tests, so let's apply Given-When-Then here. Given that the customer has set up notifications for all transactions over $10,000 on this particular account, when we process these transactions, then here are the notifications that should occur. We can actually test the sufficiency of this example by looking at each piece of information we're showing and asking whether we need it. So let's go through them. Here's the account number. Do we need it? It shows up in all three places, so it's probably a useful piece of information. But we haven't shown what happens when a different account number is used. If notifications are set up on this account number, we should include an account number that isn't set up, to show that we don't notify on it: the negative case. So let's add that line. Here's the amount, and these examples show us that the amount matters. We could use more precise numbers, 9,999.99 and 10,000.00, or 10,000.01, but that just makes the slides hard to read, so I'm using generous margins here. We set this up for all transaction types, highlighted here in green. These are all debit transactions ("debit" spelled incorrectly, by the way). If we want this example to illustrate that it really is for all transactions, at least some of these should be some other kind of transaction, so let's change one of them just to make the example more useful. What else do we have? We're sending notifications to a particular user. Yes, and we set up the configuration for that user as well. And look at this time information: since this is a workflow, the configuration was done at 9:00, the transactions are processed at 9:30, and we need to record when each transaction occurred as part of the notification.
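Pulling those cases together, here's roughly what they might look like as an automated check, written Given-When-Then style in Java with JUnit 5. The names (`NotificationSystem`, `Transaction`, `Notification`) are hypothetical stand-ins for whatever the product actually provides; the point is the boundary amounts and the negative case.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;

// Given-When-Then applied to the refined workflow example.
class LargeTransactionNotificationTest {

    @Test
    void notifiesOnlyTheConfiguredAccountAboveTheThreshold() {
        // Given: notifications for all transactions over $10,000 on account 12345
        NotificationSystem system = new NotificationSystem();
        system.configureThreshold("12345", 10_000.00);

        // When: boundary amounts on the configured account, plus the negative
        // case of a large amount on an account with no notifications set up
        system.process(new Transaction("12345", 9_999.99));
        system.process(new Transaction("12345", 10_000.01));
        system.process(new Transaction("99999", 10_000.01));

        // Then: exactly one notification, for the large transaction on 12345
        List<Notification> sent = system.sentNotifications();
        assertEquals(1, sent.size());
        assertEquals("12345", sent.get(0).account());
    }
}
```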
So now we can see that all the information left in the example is essential, and we've made it into a really good example of the overall flow, the general essence of what this system needs to do. The key thing here: because this covers multiple use cases (these are like three different use cases), it has, by definition, a broad scope, so we need to minimize the detail. What is the absolute minimum information we need to illustrate the behavior?

Now let's look at the opposite end of the scale. How do we describe the rules of whether or not we should notify? We had a hint of them earlier, but we only focused on one aspect of the rule. Let's look at some examples for the rules. Here we've got our customer and their account, then we have some thresholds set up, and here are the transactions we want to process. Given that we have travel configured with a threshold of 1,000: over here we've got a transaction for $999 that shouldn't notify, and another one for 1,000 even, which should notify. That makes it very clear what the precision is, so we're not going to have a greater-than versus greater-than-or-equals kind of misunderstanding; we need to be very precise about what that means. We can have other kinds of transactions here, restaurant charges, grocery charges, and we can have other tables, just like this one, describing geography: depending on where the charge happens, you might have lower thresholds for overseas transactions and higher ones in your home market, that kind of thing. So we can describe all the details of the notification rules. Here again we've got our Given, which is the rules; the When is these columns here; and the Then is what the outcome should be.

All right, so the lifecycle of an example is basically this. We start off with a user goal as we're doing our product visioning and design. We end up with features; we break those down into work items, typically user stories, and we elaborate on those to describe them in more detail. We define the acceptance criteria, which leads to scenarios. We make those examples more concrete by adding data to them, like the numbers and things we saw there, and that turns them into story examples. We formalize them so they're consistent. We write the interpreter, which automates them, and now we have executable examples. And as we develop the product, we have satisfied examples, and then passing tests; I kind of use those words interchangeably. The key thing is that this happens for each and every example, user story, feature, whatever granularity you're working at: you keep adding to the set of existing examples as you come up with more functionality for your system. If we go back and look at what expertise is involved: the steps down the left are primarily business-driven; the ones up the middle take a mix of testing and business knowledge to come up with the examples; and on the right we have primarily development activities, the automation of the examples, which needs to be developed along with the product. Again, these aren't necessarily separate teams; this is just knowledge. Dev and test could be the same person; test and business could be the same person; and so on.
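As a sketch of how a tabular rule example like the one above gets wired to code: with Fit or FitNesse, each table is backed by a fixture class. Here's a minimal Java column fixture, assuming a hypothetical `NotificationRules.shouldNotify(...)` decision method in the product code. The input columns bind to the public fields, and the trailing question-mark column is evaluated by calling the matching method.

```java
import fit.ColumnFixture;

// Backs a Fit/FitNesse rule table such as:
//   | notification rule fixture |
//   | threshold | category | amount  | should notify? |
//   | 1000.00   | travel   | 999.00  | false          |
//   | 1000.00   | travel   | 1000.00 | true           |
// Input columns bind to the public fields; the "should notify?" column is
// checked by calling shouldNotify() and comparing against the table value.
public class NotificationRuleFixture extends ColumnFixture {
    public double threshold;
    public String category;
    public double amount;

    public boolean shouldNotify() {
        // Delegates to the product's decision logic (hypothetical API).
        return NotificationRules.shouldNotify(category, threshold, amount);
    }
}
```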
So what does this approach do to the structure of the software we're going to build? Remember, these examples are acting not just as business requirements but also as testability requirements. If I try to automate tests against an existing system, I have to figure out how to interact with whatever the existing system provides. Typically that's going to be a user interface or, if I'm lucky, some server APIs I can interact with. But I have to test through whatever interfaces I'm given. The difference with an example-driven approach is that I'm driving my architecture from the examples as well. I'm going to have a keyword interpreter for my workflow examples that interacts with various components in my system, and it can bypass the user interface, for example. As I get into the more detailed examples around the notification rules, whether I should notify and how I should notify, I end up with example interpreters for those tables. The tables I just showed you, the ones that drive the should-we-notify decision, are going to cause me to have a should-we-notify component in the architecture. And since that component doesn't actually notify, it just gives me back a yes or a no, it's very easy to automate those examples. All my example interpreter does is spin up an instance of this thing, which could be running right in my IDE (it doesn't even need to be deployed), pass in the appropriate arguments, run the code, and look at the decision. It even lets me think about where the configuration data comes from. It doesn't need to be in a database: I can pass the data in along with the request. Given this configuration and this transaction, should I notify? That makes it really easy to automate these tests. You don't have to go poke stuff into the database before you can run the test, or as part of running the test; you're just calling the code a bunch of times with different sets of data. So it totally changes the way you go about designing your systems.
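A minimal sketch of that kind of component in Java; the names are illustrative, not from the talk's system. The decision is a pure function, and the configuration travels with the request, so the rule examples can run in-memory with no database and no deployed environment.

```java
import java.util.List;

// The "should we notify?" decision as a pure, in-memory component.
class ShouldNotifyDecision {

    record Threshold(String category, double limit) {}
    record Txn(String category, double amount) {}

    static boolean decide(List<Threshold> config, Txn txn) {
        return config.stream()
                .filter(t -> t.category().equals(txn.category()))
                .anyMatch(t -> txn.amount() >= t.limit()); // 1,000.00 notifies; 999.00 does not
    }

    public static void main(String[] args) {
        List<Threshold> config = List.of(new Threshold("travel", 1_000.00));
        System.out.println(decide(config, new Txn("travel", 999.00)));   // false
        System.out.println(decide(config, new Txn("travel", 1_000.00))); // true
    }
}
```

An example interpreter, or a fixture like the one shown earlier, can call `decide(...)` directly, once per table row.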
This brings me to a famous quote from Sun Tzu in The Art of War. Who's heard of The Art of War? And who's heard of the software version? I'm not joking, there is seriously a book: somebody took The Art of War and redid it to describe software development. Sun Tzu said: strategy without tactics is the slowest route to victory; tactics without strategy is the noise before defeat. What I've described here is a strategy. You need some tools to go with it, so let's replace "tactics" with "tools." This is why you need to understand your test automation strategy before you pick your tools. When the tool vendor comes to you and tries to sell you a testing tool (who was at Fred George's last session? Run away!), what they're doing is setting you up for the defeat, because picking tools without understanding how they support your strategy is a big mistake. As for tools: these are the ones I happen to use a lot. Robot Framework, which is excellent for overall workflow tests, can interact with APIs; similarly, Fit or FitNesse can interact with APIs; and both support using Selenium WebDriver as a way to interact with user interfaces. Like I say, these are the ones I have experience with, and they've served me well. There are all sorts of tools you can use for this; it's just a question of picking the appropriate ones for your particular context.

Given that we've taken this approach, how does it affect our systems diagram? Is automation hard now? We've effectively removed a lot of these issues. QA is involved in the product definition, so a lot of the things that caused disappear. The tests are automated during development, not after development, so that removes a lot of those dependencies. The product is designed for testability, because we have testability specs along with our behavior specs. The tests don't need to interact with the UI. And the tests are not very complex: you saw actual tests there; those were executable, and every one of those tables I showed you was an actual screenshot from the tool. So test automation isn't hard. And because it isn't hard, the people with the testing skills are freed up to spend time on product definition and preparing these examples, rather than always being busy, busy, busy doing manual testing. So this becomes a virtuous cycle. It's just a question of finding where to start working on this, so you can break out of the situation you're in, with too much work for all your test people. And that brings me to the end of my talk. How are we doing for time? We're good? Time for some questions? How much time, five minutes? Perfect. We have a microphone being passed around. All right, looks like we have no questions. Oh, we have a question up front.

"You described using FitNesse as one of the tools. In your experience, do you get the business users to write the scenarios themselves?" Sometimes. I've had business people who were more than happy to prepare the tests in the tabular formats. Once I showed them how to do it, they went off and prepared a whole bunch of scenarios, hundreds of scenarios, using the tables. And there are other business people who would rather draw them, or write them out, or put them in Excel or whatever, hand them over to you, and let you turn them into the more rigorous format. One of the reasons for using FitNesse is that it's a wiki, which means anyone can access it; it's very easy for someone to just go to the website. And it does version control, so you don't have to worry about teaching people how to do that. It does introduce a few of the problems that come with wikis, but it handles things like versioning very well. Really, though, it comes down to who the people are and how trainable they are. The act of preparing the examples is the most important part: even if you never automate them, just preparing the examples makes a huge difference in your success rate at building the right thing.

"This strategy works if you have a new product or new features. What if you already have a legacy product and you're adding new features? How do you apply this to the legacy product?" Yes, so if you have legacy code, meaning a system that doesn't have automated tests around it, what you want to do, when you're asked to change a certain part of the system, is pull that part out into a separate component. Very careful refactoring, using automated tools so you don't break it, plus some quick manual regression testing to make sure the integration still works. Then you can retrofit some tests around that component. Say you're changing a particular rule: put a few tests around that rule to make sure it still does what it currently does, and then write new examples for the changed behavior that you want. That's a good piecemeal way to go about introducing the strategy.
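Those "make sure it still does what it currently does" tests are often called characterization tests. A minimal sketch in Java with JUnit 5, assuming a hypothetical `LegacyNotificationRules` class: the expected values are captured from the code's current behavior rather than from a spec, so any change to them is a deliberate decision.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Pins down what the legacy rule does today, before we change it.
class LegacyNotificationRulesCharacterizationTest {

    @Test
    void capturesCurrentThresholdBehaviour() {
        LegacyNotificationRules rules = new LegacyNotificationRules(); // hypothetical
        // Recorded by running the existing code; change these deliberately,
        // never just to make a surprising failure go away.
        assertFalse(rules.shouldNotify("travel", 1_000.00, 999.00));
        assertTrue(rules.shouldNotify("travel", 1_000.00, 1_000.00));
    }
}
```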
You'd typically be doing this at the component-test level, which is kind of the business unit testing. But you can also retrofit end-to-end tests using tools like Robot Framework with Selenium WebDriver, if it's a web-based system. I've done that as well: put some smoke tests around a system I've inherited. Don't try to go all the way and retrofit all the tests; just focus on the areas you could break with the changes you're making. One of the nice things about these tools is that, because of the way the drivers that talk to the system are layered, you can just replace one keyword library with another keyword library that talks to a different interface. You can have one implementation that runs through the UI and another that goes through an API, and run the same test with or without a browser. So there are all sorts of things you can do to really leverage these examples and tests.

Is there another question? "We mostly do unit and API testing, and in our experiments we tried to integrate Selenium-based testing. One of the issues we frequently encountered was that Selenium was using the class path identifiers, and as the specs changed, as things changed, we ended up realizing we'd spent way too much time on the UI testing, and we slowly abandoned it on that project." When you say class path, I assume you mean the XPath to individual fields, buttons, input areas, and so on. Yes, and that's one of the reasons why I don't say "use Selenium WebDriver"; I say use Robot Framework, or FitNesse, to interface to the system using Selenium. I prepare my tests independently of the UI. The nice thing about Robot Framework is that you can have multiple layers of keywords. A keyword can call other keywords, which can call other keywords, and somewhere two or three or four levels down, the "edit customer" keyword or "change customer name" keyword translates into Selenium keywords that go find this field and type data into it, or go find this button and press Save, that kind of stuff. So you're not writing your tests at that detailed level. This is a good example of how a tool can shape your strategy: if you set out to use Selenium as your testing tool, it sucks you into that bottom left corner of too much detail, a uniformly detailed view regardless of what kind of test you're describing or what kind of example you're writing. That's why I prefer to use tools that use Selenium WebDriver as the back end to talk to the system, and write my tests and examples independent of that technology.

"Just to clarify, it's like aliases, is it?" Say again? "You were saying, basically, keywords; you have multiple aliases of the keyword." That was in answer to his question: I can use the same keyword and have multiple implementations of it. Robot Framework lets me swap out the keyword library, just point at a different library, and I could have one that makes direct Java calls, for example, and another that implements the same functionality but goes over HTTP to hit the web interface. The latter would use Selenium WebDriver; the other would use a Java library to talk directly to the application.
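Here's a rough sketch of that swap in Java. The keyword is an interface; one implementation drives the web UI through Selenium WebDriver, the other calls the application directly. `CustomerService` and the element IDs are hypothetical, and each class would normally live in its own file.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// One logical keyword with two interchangeable implementations, so the
// same test can run through the browser or against the code directly.
public interface ChangeCustomerNameKeyword {
    void changeCustomerName(String accountId, String newName);
}

// Hypothetical application-side service used by the direct implementation.
interface CustomerService {
    void renameCustomer(String accountId, String newName);
}

// Drives the web UI via Selenium WebDriver (element IDs are hypothetical).
class WebUiChangeCustomerName implements ChangeCustomerNameKeyword {
    private final WebDriver driver;

    WebUiChangeCustomerName(WebDriver driver) {
        this.driver = driver;
    }

    @Override
    public void changeCustomerName(String accountId, String newName) {
        driver.findElement(By.id("account-search")).sendKeys(accountId);
        driver.findElement(By.id("customer-name")).clear();
        driver.findElement(By.id("customer-name")).sendKeys(newName);
        driver.findElement(By.id("save")).click();
    }
}

// Bypasses the UI entirely and calls the application code directly.
class DirectChangeCustomerName implements ChangeCustomerNameKeyword {
    private final CustomerService service;

    DirectChangeCustomerName(CustomerService service) {
        this.service = service;
    }

    @Override
    public void changeCustomerName(String accountId, String newName) {
        service.renameCustomer(accountId, newName);
    }
}
```

The test itself is written once, against `ChangeCustomerNameKeyword`, and you pick the implementation when you wire things up.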
So that's one of the nice things you get out of using these more abstract tools, like Robot Framework, rather than writing your tests directly in Selenium WebDriver, because that commits you to interacting with the system through Selenium. Especially if you're dealing with systems that have multiple interfaces, like the example we saw here, where the transactions aren't going to come in through the UI. I'd really rather not fill in the UI to simulate the introduction of transactions. I can call a Java service or microservice directly from a keyword, because I'm not using Selenium for every keyword; I only use a Selenium implementation for keywords when I'm interacting with the user interface. Thank you. I see we're out of time now. We are indeed out of time. Thank you very much, Gerard. Thank you all for coming. Thanks for having me.