Well, hello, and welcome to another DevNation Live. I'm super excited to have you all here today, and we have another great guest speaker spending time with us right now. You're going to hear a lot about microservices and specifically testing, and maybe less about microservices and more about testing, because one of the things we want to do in these sessions is make sure you also get great background information that covers your day-to-day development needs. So if there are other topics you're interested in, please feel free to email me. Most of you have my email address at this point, since the announcements go out with my reply-to. Just send me an email and tell me what else you'd like to hear from us. You can also hit me up in the chat. I've been trying to follow it here, welcoming folks from all around the globe. It's super exciting to see everybody from around the world come in and spend a little time with us every other Thursday or so. We meet the first and third Thursday of the month, and you can always go to developers.redhat.com/devnationlive to see our upcoming schedule. But at this point, I'd like to turn it over to Alex Soto Bueno. He's coming to us from Spain, I believe specifically the Barcelona area, but I'll let Alex introduce himself. He has a great testing topic to talk to us about today.

Hello. Yeah, today I'm going to talk about a new tool that we've been developing over the last year in the testing group. It's called Smart Testing, and it's a tool that lets you run your tests faster and get results in a better way. My name is Alex Soto Bueno. I work at Red Hat, I'm the creator of lordofthejar.com, and you can follow me on Twitter. As I said, I'm also the co-author of Testing Java Microservices for Manning. And as I said, we're going to talk about testing.
And I would say that the world is moving really fast. Traditional businesses are changing into software businesses; we've seen this with companies like Uber or Airbnb. These companies have taken a traditional business and turned it into a software company, and as software companies ourselves, we need to adapt to these changes to survive as well. This means we need to release our software faster, because maybe there is a bug in our software and we need to get the fix into production really quickly; sooner, maybe because we want to ship a really cool new feature before our competitors do; and of course better, because you can have a really cool feature, but if it doesn't work, it's somewhat useless. And going faster, sooner, and better means writing more tests, and it means we need to automate everything.

So here is a really quick overview of the continuous integration and continuous delivery lifecycle from the point of view of testing. Usually, as a developer, you do a git pull or a rebase against master, so you get the latest changes from your Git repo. Then you probably create a new branch and run mvn clean test to check that everything works as expected on your machine. After that, you start coding your new feature: writing the automated tests, writing the new feature, and so on. Then you run mvn clean test again to validate that everything works as expected. After that, you probably commit your changes, check out master, and pull the latest changes from the Git repo, because between the time you started developing and now, there may have been more changes. Then you rebase your branch on top of those changes, and finally you need to run mvn clean test one more time, because if there have been changes, something may have broken between your tests and the new changes pulled from the repo. So you need to run all the tests again.
Finally, if everything works as expected and you're confident, you push this new branch to your Git server. Then there is probably a webhook which opens a pull request, and this pull request runs some pre-commit tests on your CI/CD server. So again you're running tests to validate that everything works, and after a code review you can decide whether or not to merge these changes into your master branch. If you decide to merge, you get an approval, and then you have the commit-stage tests, where again you run tests. These are basically the quick tests that give you feedback about quality and make sure you haven't broken something important. And finally you run acceptance tests to validate that the whole solution works as expected, and then you can deploy to production, right? So you can see that you run tests, and tests, and more tests, in several places.

There is a quote that says that as an engineer, you should constantly work to make your feedback loops shorter in time and wider in scope. So basically what we want is to not spend time in front of the monitor just checking whether all the tests are green in all of these stages. And the good thing is that most of the time you already know which tests you want to run, because you know which tests test which production code. Usually what you do is open your IDE and say, okay, I've added this feature, and I know this feature changes this part of the code, so I know which tests I can run. But that's a manual process at the end of my normal workflow; the automated process still takes time and time again. So what we have done is take this knowledge and automate it, so you get it in an automatic way. And I think the best way to show this is with an example. I've got here a really simple application, which in this case is a game. I've commented out this test, so it should not run now.
So I've just commented out this test. I create, for example, a new feature and I want to test it. So I go here and run mvn clean test, which is what you usually do, and you'll see the build start working. The idea is that this project has several tests; a real one might have 20,000 or so. You're going to compile the whole project and run all of its tests, which is going to take some time. If you check the project, it's a multi-module project. It's a monolith here, but this works with microservices, with monoliths, whatever; it doesn't matter. This approach works with any Maven project, because this Smart Testing tool is basically a Maven extension. Right now I'm not running Smart Testing at all; I'm just doing things as usual, though it's going to take a while because I'm sharing the screen. You can see that I'm running tests here, then it gets to the next module and runs more tests, and if you check here, you'll see that the one I've changed is the game service test. Again, it runs all of these tests, including the service tests. It's a Java EE application built with JPA, CDI, and so on, so to run the tests you need to boot up the database, the entity manager, and so forth. So, wow, it's just building the whole project. Of course, this is a small project, but if you think about a big, big project, this is usually what happens, right? You're just sitting there compiling and running all the tests of your project. And in this case, for example, this Game Resource REST API test is like a smoke test, which packages everything, boots it up, and runs a smoke test to validate that all my changes still fit together.
In this case I'm using WildFly, so it needs to start WildFly, create the WAR, deploy the WAR inside WildFly, and so on, so it takes some time to execute. But to save time I'm going to abort the build, because it takes a while, and as you can see in the log, we have been executing a lot of tests which were not related at all to the changes that I made. So now let me show you how Smart Testing works. I already have Smart Testing installed here; I'll show you later how to install it, and it's really easy. To install the extension, you just need to create a .mvn folder with an extensions.xml file and register the Smart Testing lifecycle extension there, and that's all. I'm going to show you how to create this file later. So now, the first thing: you just run mvn clean test with -Dsmart.testing=new,changed. What happens with just this property is that only the tests that are new or that have been changed are going to be executed. I know that's not so smart, but believe me, with just this you're going to speed up your pipeline a lot, because maybe on the first commit you only want to validate the new tests, not the old ones, as a first step, to check that everything works as expected. So now you'll see that it compiles the reviews module again, but now when Surefire runs, it asks: is there any test in the reviews module that has been changed or added? And as you can see, there is nothing new, so if you check here, you'll see zero tests run. Although this module has tests, since there is nothing new and no changes, it just skips them. The same happens with the details module; it reports zero tests run. Then it goes on to the game module.
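As a sketch of what that setup looks like, the registration file would be something like the following (the exact artifact coordinates and version are assumptions from memory of the Smart Testing docs; double-check them at arquillian.org/smart-testing before copying):

```xml
<!-- .mvn/extensions.xml in the project root -->
<extensions>
  <extension>
    <groupId>org.arquillian.smart.testing</groupId>
    <artifactId>smart-testing-mvn-ext</artifactId>
    <version><!-- check arquillian.org/smart-testing for the latest release --></version>
  </extension>
</extensions>
```

With the extension registered, the build shown in the demo is just `mvn clean test -Dsmart.testing=new,changed`.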
And now in this case it says yes, there is one new or changed test class, which is the game service test. So now it's going to run just this test. Notice that game service test is the one that runs because it's the one that changed: I added a new test method. It runs, the build succeeds, but it has only run this service test. Now I'm going to commit this so I have a clean state. Now you'll say, okay, that's great, but it only checks whether there is something new in the tests, and usually my changes are in the production code, not in the tests. So what I'm going to do now is make another change. In this case I'm going to change the Review class, which is a JPA entity. I'm going to change the rating so that instead of going up to five, it goes up to six. Of course, if I run it now with the same strategies, it's going to run nothing, zero tests. Why? Because there is no new test and no changed test; I've changed production code. To handle this, we created another strategy called affected, which does something really smart. It says: okay, I know you have changed Review.class, so I'm going to explore your whole classpath, finding which classes use this Review class, and from those classes, again, all the classes that use them. So it's creating a hierarchy of direct and indirect relationships between the modified class and all the classes that use it. After that, it says: okay, out of this hierarchy, let's find which tests exercise some of these classes, and add them to the test plan. So with the affected strategy, what we effectively do is find which tests verify a given piece of production code. In this case, you'll see that I've changed the Review entity, so obviously it's going to run tests from the reviews module, because that's where this Review class lives. And if you check now, for example, here are the reviews.
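The transitive walk that the affected strategy performs can be sketched roughly like this. The class names and the hand-built "who uses whom" map below are invented for illustration; the real tool builds this graph by inspecting bytecode, not from a literal map:

```java
import java.util.*;

public class AffectedSketch {
    // Given a "who uses whom" graph and a changed class, collect every test class
    // that depends on it directly or transitively.
    static Set<String> affectedTests(Map<String, List<String>> usedBy, String changed) {
        Deque<String> toVisit = new ArrayDeque<>(List.of(changed));
        Set<String> affected = new LinkedHashSet<>();
        while (!toVisit.isEmpty()) {
            for (String user : usedBy.getOrDefault(toVisit.pop(), List.of())) {
                if (affected.add(user)) toVisit.push(user);  // follow indirect users too
            }
        }
        affected.removeIf(c -> !c.endsWith("Test"));  // only test classes join the plan
        return affected;
    }

    public static void main(String[] args) {
        // Hypothetical import graph for the demo's reviews/details modules.
        Map<String, List<String>> usedBy = Map.of(
            "Review", List.of("ReviewService", "ReviewTest"),
            "ReviewService", List.of("ReviewServiceTest"),
            "Detail", List.of("DetailService", "DetailTest"));
        // Changing Review selects its direct and indirect tests, but nothing from details.
        System.out.println(affectedTests(usedBy, "Review"));
    }
}
```

Note that DetailTest never enters the plan, which mirrors why the details module reports zero tests run in the demo.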
Notice that it's running the review service test, because the review service test is a test that uses the Review class, and it's also running the review test, because the review test verifies the Review behavior. We detected that these classes use Review, so we run these tests, but no more tests are going to be run. The details module only uses Detail, not Review, so since there is no direct or indirect dependency between details and Review, no tests are run from the details module; notice that zero tests are run. And exactly the same happens with the game module: it executes no tests. Notice that it runs pretty slowly, because all the CPU is being spent on sharing my screen; that sometimes happens on my computer when I'm sharing the screen. So, everything is successful, but I've executed only the tests where the Review class is used. Let me commit this as well to keep things clean. Now you'll say, okay, that's great, but what happens with other files? For example, if you check here in the game module, you can see that there is a persistence.xml, and this is not a Java class. But if I change it, I probably also want to validate that things still work as expected. For example, let me make one change here: I change the database location. Then, if I want to execute a specific test because of this change, for example this game test, the only thing I have to do is go here and say: look, I also want to execute this test when I watch this file. And the file I'm watching is src/test/resources/.../persistence.xml. So what I'm saying is: if I change persistence.xml, please execute this test, because this test validates that this persistence.xml has been created correctly. So now I can go here and run mvn clean test again.
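A rough sketch of what that looks like in the test class follows. The annotation name and the file path here are assumptions based on what is said in the talk; check the Smart Testing documentation at arquillian.org/smart-testing for the exact API:

```java
// Hypothetical: re-run this test whenever the watched resource changes,
// even though the resource is not a Java class.
@WatchFile("src/test/resources/META-INF/persistence.xml")
public class GameTest {
    // tests that validate the persistence unit boots correctly
}
```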
And then what's going to happen now is that it's going to execute only this game test, because the change is just a file change, and I instructed Smart Testing to also run the test called game test. Again, it's running the whole build, but you've now seen three strategies: the new strategy, for when you create a new test; the changed strategy, which runs a test when you change it; and the affected strategy, which works by inspecting your code. Basically we inspect the bytecode, build this graph of dependencies, and then run only the tests that really matter for that change. You've also seen that the affected strategy can be used when you change a file which is not a Java file. But we have other strategies too. For example, the failed strategy checks the previous run, and all the tests that failed are marked as important. This is really useful when your tests pass from the IDE, but then you run mvn clean test and they fail, and you say, wait, it works in the IDE and not in Maven, and you start debugging; but each time you run mvn clean test, you have to wait until all the tests run, or at least until it's your test's turn, before you can start debugging. With the failed strategy, you only run the ones that failed before, which really speeds up this use case. Well, now let's see if this test works. I think that now game test runs, and it's just game test; none of the other tests are run. Well, I've killed the JVM because the screen sharing is struggling. Now let me show you another strategy, the last one, which is called categorized. Suppose that what you want is to run a test just because it has a category.
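Assuming the failed strategy follows the same property pattern as the others shown in the demo (an assumption, since the talk doesn't show it on screen), enabling it would look something like:

```shell
mvn clean test -Dsmart.testing=failed
```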
For example, the Game Resource REST API test is a test that maybe you always want to run because it's really important, right? Then you can annotate it with the @Category annotation, the one provided by JUnit, and create a new interface called SmokeTest. What I'm saying here is that this class belongs to the smoke tests. Then I run mvn clean test with the categorized strategy, and you also pass the class of the category via the smart testing categories property. When you run this, only the tests annotated with the SmokeTest category are executed. Notice that so far I've been using one strategy at a time, but you can concatenate them: the smart.testing property is a comma-separated value, so you can put, for example, affected,categorized. That way you always run the tests that we think are important plus the tests that you, as a person, think are important. So you can categorize your tests and then let Smart Testing plan the test execution by selecting the important tests plus the ones you've instructed it to run. So now you'll see that the reviews module runs no tests, and the details module runs no tests, zero, because there are no tests annotated with SmokeTest there; and then you have the other test, which is annotated with SmokeTest, so it's going to run. I'm going to leave it running here; this is the part that takes a bit of time, so let's wait. Notice that now only Game Resource REST API is run, no other test, and this is because I'm applying the categorized strategy and this class is annotated with the category I've set.
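Putting the categorized pieces together, a sketch might look like this. The @Category annotation is standard JUnit 4; the exact Smart Testing property name for passing categories is an assumption from memory of the docs:

```java
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interface used as the JUnit category.
interface SmokeTest {}

@Category(SmokeTest.class)
public class GameResourceRestApiTest {
    @Test
    public void shouldBootAndServeTheGamePage() {
        // smoke assertions go here
    }
}
```

and then, roughly: `mvn clean test -Dsmart.testing=categorized -Dsmart.testing.categorized.categories=SmokeTest`.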
Okay, you've seen the demo. Smart Testing has a website; all the documentation is at arquillian.org/smart-testing. The important thing is that it's a Maven extension, not a plugin, which means it needs to be registered in .mvn/extensions.xml. But you don't need to remember this, because you can just run curl -sSL with the install URL piped into bash in the root of your project, and Smart Testing will be installed as a Maven extension automatically. We have seen the selecting mode, which means it only runs the tests we think are important, but we have another mode called ordering: instead of only running the tests we think are important, we order the tests by importance, so the most important tests run first and the ones we think are least important run last. This way you can enable fail-fast behavior: okay, I don't want to skip any test, I'm going to run all of them, but if the first one fails, I don't want to wait until the end of the whole run to see that the build is failing; just fail at that point, because I already know the build is broken. We have seen several heuristics: new, changed, affected, categorized, and also failed, which is missing here, but as I mentioned before, it runs tests that previously failed. We've also always been running Smart Testing against the current commit, but you can set a range of commits too. For example, you can say: I want to run all the new and changed tests from this range of commits, say from version one to version two. So instead of running your whole build, you run only the tests that were added between these two versions. All of this can be configured as system properties.
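The commit-range idea would be expressed with system properties along these lines; the property names below are assumptions from memory of the Smart Testing docs, so verify them before relying on them:

```shell
# Only consider tests added or changed between these two refs (tags, SHAs, etc.)
mvn clean test -Dsmart.testing=new,changed \
    -Dscm.range.tail=v1.0.0 \
    -Dscm.range.head=v2.0.0
```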
We have seen them here as system properties, but you can also create a YAML file, which can be placed globally in your root directory or per module, and then everything you've seen as a system property can be configured in that YAML file instead. Then you only need to run mvn clean test; we detect this special YAML file and configure Smart Testing with those parameters. We also provide a Jenkins pipeline shared library, so if you're into Jenkins 2 pipeline-as-code, you can use our shared library to run all of this automatically in your Jenkins. And as a bonus track, we are working on a test hub, which is basically a hub where we are going to store all the test results. This will let us exploit all this information to get better results out of Smart Testing. And that's all. We have covered just one part of speeding up your development; this is just one way of speeding up your release process. If you want to speed up your deployment further, you also need to try other techniques such as consumer-driven contracts, service virtualization, or testing in production, apart from using Smart Testing. That's all. If you have any questions, feel free to ping me on Twitter or send me an email, and of course, feel free to join the Red Hat Developers community. Thank you very much.

Hey, Alex, are you ready for a couple of questions? Or did he take off on me? Oh, there he is. So, a really good question based on what you're demoing, and I know you're having CPU problems there: it would be nice to see the before and after from a timing standpoint, you know, dumb testing versus smart testing. Is that possible, or did you have that in the presentation? No, we don't have this, because it means we would need to know what happened in the previous build to be able to compare. The only thing we are doing is removing some tests from the test plan of the Surefire plugin.
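A configuration file equivalent to the properties shown earlier might look roughly like this; the exact key names are assumptions from memory of the docs, so verify them at arquillian.org/smart-testing:

```yaml
# smart-testing.yml in the project root (or per module)
mode: ordering
strategies: affected, categorized
```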
So we never know what would have happened. You can try it yourself as a developer, but not automatically. It's true that when we have the test hub server integration, we will be able to provide this information, because with the test hub we will have information about previous runs. So we'll be able to say: look, the previous run was not using Smart Testing, this one is, and this is how much the plan has improved.

Okay, another question I thought was nice: can we make it the default, so that when I just run mvn test, Smart Testing runs? Yeah, well, more than making it the default, what you can do is create a smart-testing.yml file and configure there the mode you want to use and the strategy you want to use. There are a lot of parameters to set for all these strategies; you can configure them there and then just run mvn test, and instead of using system properties, it reads the configuration parameters from that file. Okay, an example of that file would probably be good to get out to the world, maybe on Twitter.

One other question: what about mocked classes? If you have mocks in your tests, how are those impacted? Is it picking up mocked classes too? Yes and no. If you are mocking an interface, then if you change the interface, it's detected, because the interface is imported in the test. But if, for example, you change the Impl class, the implementation of the interface, then since the implementation is not referenced in the test class, because it's not imported, we are not able to detect it. This is going to change, but there is a workaround: we have a special annotation, like the watch-file one we saw, and you can use it to say, if I change any class of this package, or any class matching this pattern, run this test as well.
So if you're mocking, you can still instruct it to run that test. And in the long term, what we are working on is inspecting coverage: if you are collecting coverage, we will know that this test covered these lines of code, so we will be able to detect these changes automatically. But in any case, if you are mocking an interface, it means you are not testing the implementation, right? Because you are mocking it. So I don't know if it makes that much sense to run this test when you change the implementation, because at the very end you are mocking the implementation. But I suppose there is a use case for that.

So we are unfortunately out of time for today. Thank you so much. But there was one question and an item in the chat you'll want to think about more, and maybe put this out on Twitter also: what about Gradle? We'll just leave it right there; you can think about what you're going to do from a Gradle standpoint, unless you're just saying, oh, it's available already. That's not true, right? No, that's right, it's not available, because we tried, but it's really hard for Gradle. So yeah. Well, thank you so much, and thank you all for joining us today from all around the globe. We appreciate you showing up, and if you ever have questions, feel free to hit us up on Twitter or find me on email. And feel free to give us new ideas for future events. And always go to developers.redhat.com/devnationlive for more information. Thank you so much. Thank you, Alex. Thank you very much. Bye.