Hi, my name is Sergei Volkov and I'm part of the storage team. I'm going to briefly talk today about Jenkins and how OpenShift Container Storage 4 (OCS 4) can help Jenkins run faster. I'll start with a very brief 60-second intro to Jenkins, look a little bit into some scenarios where Jenkins and persistent storage can live together, and then run a demo from the command line of a Jenkins pipeline using OCS 4.

So, Jenkins in 60 seconds: basically it's an automation server, software that helps a developer or a group of developers automate things that used to be done manually, like building or compiling, testing, deploying, and releasing. It works with all the major version control systems, and you can trigger it in all kinds of ways. Probably one of the most common is a git commit: a commit can automatically call a Jenkins server to build whatever you just committed. A Jenkins pipeline has many plugins that hook into the other external software that a development team is using to build whatever application they are developing. On Kubernetes, Jenkins runs as a single master pod, but it can spawn agent (slave) pods as well.

There are a lot of scenarios where Jenkins can use persistent storage. There are too many ways to use Jenkins to trim them all down into just one or two patterns, and every application built with Jenkins is different from the next one and uses different plugins, so it's not like we can say everyone is doing this or everyone is doing that. Here are three examples of using persistent storage. The first: if during the build process you need to share data or storage between all your pods, you can use CephFS via OCS 4 to share the storage between all these pods during the build process.
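The shared-storage scenario just described boils down to a single ReadWriteMany claim that many pods mount at once. Here is a minimal, hypothetical sketch; the claim name is made up, and the storage class name is the usual OCS 4 default for CephFS, which you should verify with `oc get storageclass` on your own cluster.

```shell
# Hypothetical sketch: request a shared (RWX) volume backed by CephFS.
# The PVC name "shared-build-data" is an invented example, and
# "ocs-storagecluster-cephfs" is the typical OCS 4 default class name.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-build-data
spec:
  accessModes:
    - ReadWriteMany          # CephFS lets many build pods mount it at once
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
```

Any pod template that references this claim then sees the same files as every other pod in the build, which is what makes the data-sharing scenario work.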
Another example is when the build process spawns a lot of pods that need many persistent volume claims, and you start to hit the limits of what your cloud provider can actually provide in terms of mounting; for example, AWS EC2 instances have a limit of 28 attached volumes. Together with OCS 4 you can overcome these limits.

The third scenario, and the one our demo concentrates on, is persisting some of the data used in a build across all the consecutive builds of the same application that we're trying to test and build. The software I'm going to show in the demo is the OpenShift Tasks project: it builds a Java RESTful web service for some sort of application. The application itself is not really what's important here. What's important is that during the continuous-delivery phase of development, we have pipelines that usually hold four stages: build, deploy, test, and release. Each of these stages depends on the previous one, and each might go to other software components and receive information. That is the kind of pipeline we are running. In our demo we concentrate on the build portion of the pipeline in order to show how we can save time during the build. The build stage of our pipeline will clone the repository of the OpenShift Tasks project, build the artifact, which also goes and downloads all the dependencies and libraries that are needed, and end the build. This is the stage where we are actually going to save a significant amount of time, by keeping these dependencies and libraries around from one build to the next, and the next, and the next. This is the build config that we are going to use in OpenShift; a few things to emphasize.
We are literally inheriting from the Maven image that comes with OpenShift 4, and on top of that we build a pod template that we call maven-s. Every time this pod template is called, it will attach a persistent volume claim named dependencies at the path /home/jenkins/.m2.

Now let's run the demo itself. I have downloaded the scripts that I've shown in the slides, and we are going to deploy Jenkins. The deploy script has five functions in it. We create a project where we want to deploy Jenkins; this is where we do all of our work. We create a Jenkins app using oc new-app, and we then wait for this Jenkins master to come up, which takes roughly 60 to 120 seconds. To prepare for the demo, I'm creating a PVC to be used by our Maven pods in the build, and we create the build config that I've also shown in the slides, the one that holds the Jenkins pipeline. The script accepts basically two parameters: one is the project that we want to create and hold everything in, and the other is the storage class that we want to create the PVC with. In my case, I'm going to use the Rook-Ceph block storage class. So I'm going to run the script, do a little bit of a time warp, and come back real soon.

Now that we have Jenkins deployed, you can see that the script showed us Jenkins is ready. Let's look at the run-builds script, which also has a few functions in it. It starts a build via the oc start-build command, which we provide, of course, with the build config that we created before. We then wait for the build to run, fetch the build log, and calculate how long each build and each section have taken. What's important to notice is that we are doing all of this in a loop, in our case five times, one build after another.
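The five steps of the deploy script can be sketched roughly as follows. This is a hypothetical reconstruction, not the author's actual script: all function names are invented, "jenkins-persistent" is the standard OpenShift template name but may not be what the demo used, and the build-config filename is a placeholder.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the deploy script's five functions.
set -euo pipefail

PROJECT="$1"          # project to create and do all the work in
STORAGE_CLASS="$2"    # storage class to create the PVC with

create_project()   { oc new-project "$PROJECT"; }
deploy_jenkins()   { oc new-app jenkins-persistent -n "$PROJECT"; }
wait_for_jenkins() {
  # the Jenkins master usually takes 60-120 seconds to come up
  oc rollout status dc/jenkins -n "$PROJECT" --timeout=300s
}
create_pvc() {
  # the "dependencies" PVC that the maven-s pod template mounts
  cat <<EOF | oc apply -n "$PROJECT" -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dependencies
spec:
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 5Gi } }
  storageClassName: ${STORAGE_CLASS}
EOF
}
create_build_config() {
  # placeholder filename for the pipeline build config shown in the slides
  oc apply -n "$PROJECT" -f pipeline-buildconfig.yaml
}

create_project; deploy_jenkins; wait_for_jenkins; create_pvc; create_build_config
```

The two positional parameters match what the narration describes: the project name and the storage class for the PVC.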
So once one build finishes, the next one starts, then the one after that, and so on, and we will see the time difference between the first build and the next four consecutive builds. To run the script, we just need to provide the project that we created before, with Jenkins and the build config in it, and a directory that will hold the output from all the builds. Again, we're going to do a little bit of time-warping here, and once all five builds are done, I'll come back and show the results.

Now that the run-builds script has finished, we have had five runs of the same build; this is the end of the output. After each build, I actually download the build logs locally. So let's see what we have. We have a directory, and you can see the files: all the files that start with my-jenkins-something are our build logs, and the files named log-something are the timing calculations. You can already see that the first build log is massively bigger than any consecutive build that follows. What's interesting is that if we open the first build log, we see our pipeline script start to run, followed by a lot of lines downloading all kinds of dependencies and libraries needed for the actual build. If we now calculate how each of these builds behaved, how much time each consumed, we can see that the first build, where we actually had to go and download all the dependencies, took almost two minutes, one minute and 55 seconds. Every build that follows is dramatically faster, because when the build process starts, it attaches the same persistent volume claim that we called dependencies to our Maven pod, sees that all the dependencies are already there, jumps immediately into the Maven build section of our code, and completes very fast.
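The per-build timing calculation can be sketched as a small function. This is a hypothetical sketch, assuming each build log line starts with an epoch-seconds timestamp; the real scripts may well parse Jenkins' own timestamper format instead.

```shell
# Hypothetical sketch: build duration = timestamp of the last log line
# minus timestamp of the first, assuming an epoch-seconds prefix per line.
build_duration() {
  awk 'NR == 1 { first = $1 } { last = $1 } END { print last - first }' "$1"
}

# Tiny synthetic log to demonstrate the calculation:
printf '%s\n' \
  '1000 Started build' \
  '1060 Downloading dependencies' \
  '1115 Finished: SUCCESS' > build.log
build_duration build.log    # prints 115, i.e. 1m55s like the demo's first build
```

Running the same function over each downloaded log gives the per-build numbers the narration compares.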
These are just the times for literally what's going on inside the Maven pod once it's up. There is another time consideration, which is how long it takes Jenkins to start the build and create the Maven pods; that comes from Jenkins and Kubernetes and is out of our control. What we're showing is that the build time is considerably shorter once the first build has actually downloaded all the dependencies.

As the next step, I'm going to create 60 of these projects using another script, and then run 60 of these builds in parallel. Now that we have 60 projects with 60 Jenkins pods running, we're going to use another wrapper bash script, called run-builds-parallel, to basically run the same build that we ran on the single project on all 60 projects at the same time. The only parameter you can see it receives is a directory to keep all the data and logs from the runs. So I'm going to start the run, and when it's done, I'll continue the video.

Now that we have finished the run of five builds on each of the 60 projects, with 60 Jenkins pods running in parallel, what we have is our test directory; that's the name we gave the run. Inside is basically a directory per project, and each of these directories has the log file of the build and how long it took to run the build. So we can run a small bash script, really just a couple of calculations, that goes through this directory and computes the average of all the first builds on all 60 pods, all the second builds on all 60 pods, and so on. As you can see, the average of the first builds is about 91 seconds, and the second builds drop significantly to eight seconds, followed by five, five, and four seconds. So what we've basically shown in this demo is how we can take three AWS nodes.
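The averaging step can be sketched in a few lines of shell. This is a hypothetical reconstruction assuming each project directory holds one duration file per build (named build-N.time here); the real run directory layout may differ.

```shell
# Hypothetical sketch: average the duration of build number N across all
# per-project directories under a run directory.
average_build() {
  local n="$1" run_dir="$2"
  awk '{ sum += $1; count++ } END { printf "%d\n", sum / count }' \
      "$run_dir"/*/build-"$n".time
}

# Synthetic data for three "projects" to demonstrate the calculation:
mkdir -p run/p1 run/p2 run/p3
echo 90 > run/p1/build-1.time; echo 92 > run/p2/build-1.time; echo 91 > run/p3/build-1.time
echo 8  > run/p1/build-2.time; echo 8  > run/p2/build-2.time; echo 8  > run/p3/build-2.time
average_build 1 run    # prints 91, like the demo's first-build average
average_build 2 run    # prints 8, like the demo's second-build average
```

Looping this over build numbers 1 through 5 produces exactly the per-round averages quoted in the narration.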
This whole OpenShift cluster consists of three workers that also run OCS 4. So on just three worker nodes, we ran 60 Jenkins pods in parallel, each running our build, and we're showing that even in a multi-tenant or multi-group environment, OCS 4 is consistent in saving time on this type of build by preserving all the dependencies and libraries. Just to put things in perspective: if we were not using persistent storage from OCS 4, the roughly 90 seconds would continue to be the average length of every build we ran, so we're saving a lot of time here. I hope you enjoyed this demo; the links to the scripts are over there, and if you have any questions, Karina will be able to share my email and I'll be happy to answer them. Thank you.