Okay, the recording has started. Hi everyone. Today we have the first demo of the GSoC Role Strategy plugin performance improvements project. One of the first objectives in the first coding phase is to have a benchmarking framework so that we can evaluate changes, and we will show the current state. We have something to show, and I think it's great. So yeah, I'll mute myself. I've made you a presenter, so you can start at any moment.

Hi everyone, I'm Abhyudaya. Today I'm showing you the benchmarking framework that I've created to run our microbenchmark tests. So how does it work? It works directly from your JUnit tests, so it's easy to run from your IDE. I also added the functionality to run benchmarks automatically: the classes which contain them are detected automatically when they're annotated with @JmhBenchmark. And for each fork that JMH creates, a temporary Jenkins instance is started. To configure the instance, we can either use Java code, as in functions inside your plugin or Jenkins core itself, or we can use Jenkins Configuration as Code. Both of these are in the Role Strategy plugin now.

So what does a benchmark look like here? Basically, what you have to do is create a static inner class which extends JmhBenchmarkState. A state is the mechanism the Java Microbenchmark Harness (JMH) uses to pass information to a running benchmark; it's the only way to do that. What this class does is start up a temporary Jenkins instance, almost exactly like what Jenkins Test Harness does. If you have some specific setup for your own benchmark, you can just override the setup method here, and if you want to do some cleanup, there's the tear-down method. And to get the Jenkins instance inside the benchmark, at the bottom, we have the benchmark itself.
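A minimal benchmark along the lines described above might look like the sketch below. The annotation and state class names (@JmhBenchmark, JmhBenchmarkState) follow the framework shown in the demo; the package name, state method signatures, and the body of the benchmark are my assumptions, not code from the project.

```java
import jenkins.benchmark.jmh.JmhBenchmark;       // assumed framework package
import jenkins.benchmark.jmh.JmhBenchmarkState;
import org.openjdk.jmh.annotations.Benchmark;

// Classes annotated with @JmhBenchmark are discovered automatically by the runner.
@JmhBenchmark
public class SampleBenchmark {

    // The static inner state boots a temporary Jenkins instance once per fork,
    // much like JenkinsRule does for ordinary tests.
    public static class MyState extends JmhBenchmarkState {
        @Override
        public void setup() throws Exception {
            // benchmark-specific setup, runs after Jenkins has started
        }

        @Override
        public void tearDown() {
            // benchmark-specific cleanup, runs before Jenkins shuts down
        }
    }

    @Benchmark
    public void benchmark(MyState state) {
        // Use the Jenkins instance from the state (or Jenkins.get() directly);
        // the call below is only an illustrative workload.
        state.getJenkins().getAllItems();
    }
}
```

The state is per-fork, so the Jenkins startup cost is paid once per fork rather than once per iteration.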
So what we can do here is either use the Jenkins instance provided by the state, or just directly use Jenkins.get(). And after that, you can put in whatever benchmark you like. When you use Configuration as Code, we do the same thing, except we extend CascJmhBenchmarkState instead, and to give the path to the YAML file, we just need to override one method. Jenkins is then configured automatically. The setup and tear-down methods are still available; the only thing you have to take care of now is to call the super method first.

The benchmark reports can be evaluated from inside the JUnit tests: JMH, when run, returns a Java object, and the values from there can be used to fail or pass the test. Otherwise, the reports are written to disk as JSON, and these reports can be fed directly to the jmh-report plugin. I did share some examples on the mailing list. I haven't added them here yet, but I'll do that for the final presentation.

JMH allows us to measure multiple things. We can measure the average time over a lot of operations. The way JMH works is that it creates multiple forks, and for each fork it first runs some warm-up iterations to warm up the JVM, so that it doesn't keep recompiling the code, I mean JIT-compiling it to machine code. After that, it runs the measured iterations. Besides the average time of these iterations, there is also throughput, the number of operations per unit time, and single-shot time without any warm-up. There are multiple modes; you can just look at the JMH examples.

Some of the challenges I faced were these bugs. The TestCrumbIssuer, which I was using because JenkinsRule from Jenkins Test Harness uses it, was not allowed to be configured from Configuration as Code.
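The Configuration-as-Code variant just described could be sketched as follows. The overridden method names (getResourcePath, getEnclosingClass) reflect my understanding of the CascJmhBenchmarkState API and should be treated as assumptions, as should the YAML file name.

```java
import javax.annotation.Nonnull;
import jenkins.benchmark.jmh.CascJmhBenchmarkState; // assumed framework package
import jenkins.benchmark.jmh.JmhBenchmark;
import org.openjdk.jmh.annotations.Benchmark;

@JmhBenchmark
public class CascSampleBenchmark {

    public static class CascState extends CascJmhBenchmarkState {
        // Points the state at the YAML file; Jenkins is then configured
        // automatically before the benchmark runs.
        @Nonnull
        @Override
        protected String getResourcePath() {
            return "config.yml"; // hypothetical resource next to the enclosing class
        }

        @Nonnull
        @Override
        protected Class<?> getEnclosingClass() {
            return CascSampleBenchmark.class;
        }

        @Override
        public void setup() throws Exception {
            super.setup(); // the super call must come first, as noted above
            // additional setup goes here
        }
    }

    @Benchmark
    public void benchmark(CascState state) {
        // illustrative workload against the JCasC-configured instance
        state.getJenkins().getAuthorizationStrategy();
    }
}
```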
The other thing, again discussed on the mailing list, is that when we stop a Jetty server, it leaves some threads behind. After the benchmark is completed, JMH waits up to 30 seconds for these threads to terminate, but they do not; they stay in the waiting state, so JMH has to forcefully kill the JVM. And these are the pull requests. Thanks.

Yeah, thank you for the quick introduction. It would be great if you could briefly show how it works for the Role Strategy plugin. Maybe take a look at the tests which we already have as benchmarks, and maybe even run them.

Yeah, okay. What we have here is a benchmark runner. As you can see, it's just part of the standard JUnit tests, so we can simply run it from here. We currently have a lot of benchmarks which take a lot of time, so let me remove them from being discovered automatically. Now we just have this one benchmark running, which uses Configuration as Code. These here are all the checks that we are making, just to make sure it's being configured correctly, and after that we're checking the time it takes to generate an ACL object. Let me run it. It's starting the Jenkins instance now; this is the first fork, and this is Configuration as Code doing its work. Now the benchmark has started. That's 70% CPU usage; it's using two threads currently. As you can see, first the warm-up iterations run, and after that we should see the actual iterations which are measured. It will do the same thing again, terminate this fork, start a new fork of the JVM, and repeat. After that, we get a JSON report; this is a sample one. And this is exactly what the JMH report plugin consumes. You mean the Jenkins jmh-report plugin, right? Yeah.
So it shows you all the benchmarks and visualizes them nicely. Here we're just looking at a single report from a single test, but if you have multiple reports, you can compare them all. Now, this is what I was talking about, Jetty leaving threads running; what finally happens is that JMH shuts the JVM down. This is the second fork running. I'll also quickly show you how to create a small benchmark right here. That's all you have to do to create one.

And how do you configure the number of iterations and all those things? Yeah, they're here in the benchmark runner. Like I said, we're using average time right now, and we can configure the number of iterations: we have two threads, two forks, and two iterations. Now I think the benchmark has completed; we're just waiting for it to terminate. Yeah, we got a fresh result. Our score was 0.83 microseconds for one run of the benchmark; that's the average time. Yeah, that's cool. These are the results this time around, and this black bar is the confidence interval. That's cool.

Any questions? If you have any, feel free to ask them in the chat, or just unmute yourself and ask. Yeah, maybe it makes sense to talk a bit about the next steps, because currently we just started, and we're a bit ahead of the proposal. The main interest is to actually make this a part of standard Jenkins tooling, so that it can be used by default and from plugins. It would be great to make it available to all Jenkins plugin developers using standard tools. If you have any feedback about that, please let us know. If there is no feedback right now, we can proceed to the common project sync-up. Does anybody want to say something? Okay. So yeah, I think we can just sync up on the current activities.
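The benchmark runner shown in the demo is a plain JUnit test driving JMH. A sketch of how such a runner might configure forks, threads, and iterations via JMH's OptionsBuilder is below; BenchmarkFinder is assumed to be the framework's helper that discovers @JmhBenchmark-annotated classes, and the concrete numbers and file name are illustrative.

```java
import java.util.concurrent.TimeUnit;
import jenkins.benchmark.jmh.BenchmarkFinder; // assumed discovery helper
import org.junit.Test;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.results.format.ResultFormatType;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.ChainedOptionsBuilder;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkRunner {
    @Test
    public void runJmhBenchmarks() throws Exception {
        ChainedOptionsBuilder options = new OptionsBuilder()
                .mode(Mode.AverageTime)              // average time per operation
                .timeUnit(TimeUnit.MICROSECONDS)
                .forks(2)                            // fresh JVM (and Jenkins) per fork
                .threads(2)
                .warmupIterations(2)                 // warm up the JIT before measuring
                .measurementIterations(2)
                .result("jmh-report.json")           // JSON report for the jmh-report plugin
                .resultFormat(ResultFormatType.JSON);

        // discover all annotated benchmark classes automatically
        new BenchmarkFinder(getClass()).findBenchmarks(options);
        new Runner(options.build()).run();
    }
}
```

Running it from the IDE is then just running this JUnit test, as shown in the demo.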
Yeah, let me share my screen so that we can take a look there. Do you see it? Yes. By the way, this is our Gitter chat; all the discussions for the Role Strategy plugin are happening there. And by now we already have the first releases with the changes integrated. If you take a look at the releases: two days ago there was a new release of the Role Strategy plugin with a lot of changes for performance testing already integrated. Right now performance testing is disabled in the Jenkinsfile, because we need to speak to the infrastructure team about how to properly implement it. We have Olivier on the call, so maybe it's a good opportunity to discuss that. We also got JCasC support and some bug fixes, with more changes to come soon.

And yeah, maybe we can really use the opportunity. What was originally done is that performance tests run as a part of our pipeline. We first invoke the standard plugin build, and then we use another node acquisition to run the benchmarks with the -Dbenchmark flag. This is the flag which now enables JMH in the Maven profile. But the question was: what would be the best agent types for such performance benchmarking? Olivier, if you have some suggestions about that, it would be much appreciated. I'm not sure; maybe he's not on the line, or he cannot talk right now. Then we can just take it offline and discuss it in the Jenkins developer mailing list thread. Thank you, Olivier.

So yeah, you see my chat pop-ups, and everything is visible here. The ultimate goal for the project would be to re-enable these components; right now we just wait for feedback, and then we will get it done. Regarding the rest, maybe you would like to give an update? Yeah. So I did add a few more pull requests adding more performance tests. Yeah, these three, right? Yeah. Okay.
So the folder benchmark is one of the benchmarks which was created according to community feedback. We had some discussion about use cases to check, and those use cases are implemented now. If somebody has any specific use cases for Role Strategy usage, feel free to provide that feedback, because we could probably implement benchmarks for them as well. It will be a base for the next coding iteration so that we can improve performance there. And I guess you need reviews for all of these pull requests, right? Yeah. We still need to request the reviews, because right now he doesn't have write permissions to the repository, which are needed to request reviews. My plan is to have automatic reviewer assignment so that it will be picked up and reviewed automatically. Okay.

Anything else about the project? Any other topics you would like to bring up? I would like to discuss the times that these benchmarks are taking, because they seem to be very small. Yes. I guess it's somehow related to this pull request, the one with the heavier permission configurations, is that right? No, what's happening is that the time taken is too short. So yeah, maybe you could share your screen and show it. Or I can navigate, but you need to show. Yeah. If we just take a look at the previous results, I'll share my screen. Okay, this one. It takes 0.8 of a microsecond to do one iteration. I mean, this may not be the most complicated benchmark that we have, but...

So what would be your preference? One of the ways is to create a test which involves more permission checks. Or maybe we could add some artificial tests, for example rendering the web UI. One of the stories from the project application: a user created an instance which really reproduced the slowdown of the Jenkins web UI with the Role Strategy plugin and a massive number of regular expressions.
So what we could do in a benchmark is try rendering the Jenkins web UI, because Jenkins Test Harness bundles HtmlUnit, and with HtmlUnit you can navigate the web UI as a part of the benchmark. Maybe it's a topic to consider, because if you want a really heavy smoke test, we could emulate requests to the web UI there and make Jenkins render the pages. HtmlUnit executes JavaScript and other such things, so it emulates real usage pretty well, and I believe it could give you a fairly long execution time for the benchmark. I'll try that. Yeah, it would actually be a good experiment for this performance testing framework, because HtmlUnit itself shouldn't require JenkinsRule, but there may be some interaction.

If you want, I can show you how it looks. I'll share my screen. Okay, do you see my screen now? Yeah. Okay, let's try to find a test which creates a web client. Yeah, there are some tests. Here you may see that a web client is actually initiated by JenkinsRule, via createWebClient. You may see where the problems start, because the web client is actually created by Jenkins Test Harness. If we take a look at the code, it comes from JenkinsRule directly. I don't think it's too complex; it basically just creates a new web client, nothing really complicated here. It uses this inner class, and the inner class actually interacts with the Jenkins context. You may see that it takes some information, like the crumb, to track things, and I believe it interacts with the parent class to set up URLs and other such things. Most likely this code can be moved to the performance testing framework so that we can make web client requests from the performance test framework.
Why not? Basically, what you could do is create a web client, and in your instance, via JCasC, create multiple views with different sets of layouts. For example, we had a discussion of the All view this morning in the chat. We could have this All view and a number of other views, and then a number of benchmarks which just access the different views, so that we get a good set of smoke-test configurations. Does it make sense? Yeah, it does. It definitely wasn't in the original scope of the project, though. Your voice is not clear anymore. Sorry. I just wanted to say that although it's not something we planned for this phase, it's something you could implement if you want. Yeah, we are ahead of schedule right now, so if you want to experiment with such heavier tests, I think it's a good time to do so. Okay. So basically, maybe create a separate task for a web-UI-based test, and you can do it as a part of this sprint. Okay.

We have a recording of that, so you can take a look at the implementation and maybe take some examples. The web client is pretty straightforward until you start parsing web UIs, and I believe that in your case you don't really need to parse them, you just need to retrieve them. So it should be pretty trivial. Okay. Do you have any other questions or topics you would like to discuss?

So, should I start extracting the framework to somewhere else? Maybe it's something to discuss on this call. We have several Jenkins contributors here, and my question to all of you: what would you say if we had it as a part of Jenkins Test Harness, so it's just offered out of the box? What do you think, Marky Rieck? I'm here. I think having it as default functionality would be really good, from what I'm looking at so far. Yep. So for default functionality, we basically have two ways to implement it.
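A view-rendering benchmark along the lines discussed above could be sketched as follows, using HtmlUnit's WebClient directly against the temporary instance's root URL rather than JenkinsRule's helper. The view URL, the state setup, and the idea of disabling JavaScript (since the pages only need to be retrieved, not parsed) are my assumptions.

```java
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import jenkins.benchmark.jmh.JmhBenchmark;       // assumed framework package
import jenkins.benchmark.jmh.JmhBenchmarkState;
import org.openjdk.jmh.annotations.Benchmark;

@JmhBenchmark
public class ViewRenderingBenchmark {

    public static class ViewState extends JmhBenchmarkState {
        // The views themselves would be created via JCasC or in setup(),
        // as discussed above (an All view plus several others).
    }

    @Benchmark
    public void renderAllView(ViewState state) throws Exception {
        try (WebClient client = new WebClient()) {
            // Retrieval only, no parsing or scripting needed for this smoke test.
            client.getOptions().setJavaScriptEnabled(false);
            HtmlPage page = client.getPage(
                    state.getJenkins().getRootUrl() + "view/all/"); // hypothetical URL
        }
    }
}
```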
One of the ways is to make it a part of Jenkins Test Harness directly. The other is to have a separate repository; for example, in Jenkins we have our own implementation for Docker-based testing, docker-fixtures, under the jenkinsci organization. But our main consideration with the framework, well, it's not really a problem, is that we have a serious overlap with JenkinsRule in Jenkins Test Harness. By putting it directly into Jenkins Test Harness, we could share the code instead of duplicating it. We could keep the two implementations aligned, which would definitely help all contributors who want to adopt the framework, because they would have a similar set of features as in JenkinsRule. Yeah, I could see the benefit in that.

So my preference, if you're fine with it, would be to start working on this topic by suggesting a pull request to Jenkins Test Harness which moves the framework part there, I mean the Java classes and other things. We will need to think about the packaging. My suggestion would be to have something under the jenkins package, for example jenkins.benchmark or something like that, so that the code goes into the newer packages. Most of the code here actually comes from hudson packages, but it really makes sense to use the jenkins package for the new code base.

And once that's done and a new Jenkins Test Harness is released, what we might need to do is update plugin-pom, which is our default parent POM for plugins; Jenkins Test Harness is offered directly as a part of plugin-pom. So once a Jenkins Test Harness release with the patch is out, you can suggest a patch here. It's also possible to add profiles: as you can see at the bottom, we have profiles for integration tests, and we could do pretty much the same for performance tests.
So, like we're doing in the Role Strategy plugin, we can just add a profile for benchmarks and keep it there for now, so that anybody who is interested has a profile and some documentation in the repository; that would be a good step. And then, if you want, we can modify the pipeline library to use it by default under some conditions. This is the file we were discussing before. But yeah, the first step is Jenkins Test Harness. If you want to start on that, I think it's a good time to do so. I believe we can build consensus, and I don't expect real issues with including JMH directly in Jenkins Test Harness, at least not for now. So this story is also something we can proceed with.

Do we have any other blocked stories right now? You mentioned the issues with Jetty and the issue with JEP-200. The JEP-200 issue may actually become a problem once we start doing REST API tests, because REST API tests really depend on the TestCrumbIssuer. Is it blocking you right now? Not with the current implementation. But if we share code with Jenkins Test Harness, it enables that by default, which would contradict the Configuration as Code part. I think that's fine.

So, if there are no other blockers, I think we can just proceed. Thanks a lot for the demo; it's really good progress. Coding just started this week, but we already have something to show, so thanks a lot for doing that. If there are no other topics, thanks to everybody for participating. This meeting was recorded, so it will get posted on YouTube, and if you have any questions, just ask in the Gitter chat so that we can discuss them there. Okay, have a good day, everybody. Yeah, you too, and thanks a lot. Bye. Thanks, everyone. Okay. Bye-bye.