Hello, this is Burr Sutter. No, sorry, I'm kidding. Today I'm Edson Yanaga, replacing Burr and hosting this awesome DevNation Live. And for today's presentation we have the great Siamak, who is our pipeline master. I'll hand over to you, Siamak — go ahead and delight us with your presentation.

Thank you, Edson. Hi, everyone. I'm Siamak Sadeghianfar. I do evangelism and tech marketing for Red Hat around containers and OpenShift. Today I want to talk to you about Jenkins and Kubernetes, and how we can do cool stuff with these things. I'm going to jump directly into it, as we have a limited amount of time and I have a lot to show you. I'm going to start by sharing my screen — just confirm if you can see it. I'm going to take that as a yes.

So: continuous integration and continuous delivery with Jenkins and Kubernetes. Why am I talking about Jenkins? Jenkins is probably the most well-known continuous delivery engine — or, I should say, automation engine — out there that is open source. It originally came from Hudson. There is a lot of love and hate for it — mostly love, I would say, because it's a tool that fits pretty much anything. It's extensible, it's pretty easy to use, and it connects to every single system out there because of its huge ecosystem of plugins. Some people don't like the UI or some other aspect of it, but nevertheless, because of that flexibility, it is one of the most popular tools out there. I could say it's the de facto tool for building certain types of automation. I wouldn't go as far as saying it's the de facto tool for CI/CD, but for any type of CI automation it's definitely the first tool that people try and use.

Kubernetes, on the other hand, with the rise of containers, has become super popular. Containers are an essential part of almost any type of deployment these days, usually starting with new apps, because they're lightweight and easier to work with. I can deploy them on my own laptop.
I can build them, and I can take the same thing and move it around to a public cloud — Amazon, Azure — or elsewhere. But when we build a lot of them, we need something to help us orchestrate these things: to automate how we build them and how we deploy them across the multiple hosts that we have. Kubernetes has become the de facto standard for doing that. And these two share a special love — it's a match made in heaven, if I may say, between these two tools. Jenkins covers a lot of the automation that we need around the application, around the different types of bespoke processes we need to run. And Kubernetes automates a huge portion of the infrastructure heavy lifting we want to do around containers. Together they are a really powerful combination for building really flexible pipelines that do pretty much whatever we want with our applications. Our applications are quite unique and they need their own things — their own way of deployment, their own way of configuration management — and I doubt there is much you can't do by combining these two tools, with their huge ecosystem of plugins, of course. What I'm going to do for the rest of the session is give you a couple of practical tips that help you build advanced pipelines — how you can do a bunch of tasks that most teams have to deal with, using these tools and some of the plugins.

The first tip is Jenkins source-to-image. Through this session I'm going to use OpenShift, which is Red Hat's Kubernetes distro — an enterprise Kubernetes for anyone who wants a solid, stable Kubernetes for running production workloads. Source-to-image is a specific feature of OpenShift that helps you combine configuration and code into a container image that runs. Usually we use that for building container images for applications, but that's not what I'm going to get into.
I'm going to talk specifically about Jenkins, because most of the time we take Jenkins, install a bunch of plugins on it, add a bunch of credentials — usernames and passwords for the different places it needs to connect to — and configure memory and various other aspects of Jenkins itself. But how can we save all of that? How can we treat it as code and make it repeatable, instead of what we are normally used to doing, which is: deploy Jenkins, go into the GUI, and start changing the configuration? If something happens to that Jenkins and everything gets removed, we have a fair bit of trouble recreating it. What source-to-image does — apologies for the little mistake here, I'll update this as we speak — is take the Jenkins image that comes with the platform (or another image based on it), and then you point it at a Git repository that contains your Jenkins configuration. That might be the main configuration file in Jenkins, config.xml; it might be your credentials.xml file; it might be other files that you need within Jenkins — one of the most famous ones is settings.xml for Maven. You might also want to list the plugins to install, and so on. OpenShift takes these two, combines them, and builds a new container image that contains your configuration. Let me show you how that works. I have already created a CI/CD project and deployed Jenkins in it — since the deployment takes a minute or two, I wanted to save a little time. The way it works is that within the service catalog in OpenShift, once you log in — this is the OpenShift web console, on top of Kubernetes; at the core it's Kubernetes, but we're using the OpenShift web GUI, which makes some things a little easier compared to the Kubernetes console — under CI/CD we have the Jenkins image, and that's the one I deployed.
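For readers following along, a customized Jenkins image like the one described can be built with an S2I build from the CLI — the repository URL and names below are hypothetical placeholders, and the repo would hold files such as plugins.txt and the configuration XML files mentioned above:

```shell
# Build a customized Jenkins image from a Git repo holding configuration
# (plugins.txt, configuration/*.xml, etc.). Repo URL and names are examples.
oc new-build jenkins:2~https://github.com/example/jenkins-config.git \
    --name=custom-jenkins -n cicd
```

The resulting image stream can then be deployed in place of the vanilla Jenkins image.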
Using the same approach, you can point at your own source code, and instead of using the vanilla image that comes with the platform, you can create your own image and add your own configuration on top of it. That makes it really easy to build custom container images for Jenkins without having to deal with Dockerfiles or making configuration changes at runtime, which is error-prone and hard to repeat — we want everything to be saved as code. So that's the first tip.

The second one is a plugin called the OpenShift Sync plugin. This plugin helps you synchronize objects that you create as Kubernetes resources on OpenShift with objects that are relevant for Jenkins. An example of that is the pipeline itself. For example, I want to create a pipeline for a simple application running on this platform. Let's first deploy a simple application. I have two projects here, Dev and Prod — these are namespaces, so I can separate my objects. In the Dev project, let's deploy a simple Spring Boot application. I have a simple mapping application — it's not in my shell history, let me copy it from my notes — a simple Spring Boot application that shows a map, and you can add geolocation data to the map. Nothing extraordinary, a simple Java application. I want to deploy it inside the Dev project on Kubernetes so we can use it for building a pipeline. I'm going to use the OpenShift CLI to say: create a new application, new-app; it's a Java application, so put it on a JDK image; here is my source; and here is the namespace to create it in. That creates the objects we need — again, it uses the source-to-image feature that I mentioned.
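The oc commands used in this part of the demo look roughly like the following sketch — the builder image, repo URL, and application name are illustrative, not the exact ones from the session:

```shell
# Deploy the Spring Boot app from source using S2I on a JDK builder image
oc new-app redhat-openjdk18-openshift~https://github.com/example/mapit-spring.git \
    --name=mapit -n dev

# Add a readiness probe: wait 30s, then probe HTTP on the root path, port 8080
oc set probe dc/mapit --readiness \
    --get-url=http://:8080/ --initial-delay-seconds=30 -n dev

# Expose the service through a route so we can send traffic to it
oc expose svc/mapit -n dev
```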
I could have built the container image for this application myself, but I'm a little lazy in that regard, so I just ask OpenShift to use source-to-image. You can see a build for the mapping Spring Boot app is running. It's a normal Maven build: it clones the project, runs the build, packages the result as a container image, and deploys it on top of the platform. As soon as that happens we can take a look at the application and then go create a pipeline for it. Another thing, before it comes up: health probes. That's a Kubernetes feature; I want to set it up so that Kubernetes checks whether the application is up based on an HTTP request. So I'm going to say: wait 30 seconds, then start probing HTTP on the root path of the application on port 8080, and whenever it's ready, start sending traffic to it. All right, the build is running. As soon as it finishes, we'll have the automatic deploy. But to save time while this is running, I'm going to move one step forward and start creating the pipeline, which the OpenShift Jenkins Sync plugin will sync back to Jenkins, creating a pipeline automatically for us, using the declarative syntax that Jenkins has. I have already created a GitHub repo where we keep all our pipeline definitions — like I said, everything as code; we don't want to keep anything as runtime configuration. We have a basic pipeline using the Jenkins declarative syntax, quite straightforward: it clones the source code of the app we just deployed, runs a Maven package, and then runs the tests. So let's create a pipeline using this pipeline definition. We create it on top of Kubernetes, and then the OpenShift Jenkins Sync plugin syncs that definition into the Jenkins environment. So let's do that here. In the new build config, I refer to where my pipeline definition sits, I give this pipeline a name, I run create, and I specify which namespace it should be created in: the CI/CD project, where I have Jenkins deployed.
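A minimal declarative Jenkinsfile along the lines described — clone, package, test — might look like this sketch; the repo URL and agent label are illustrative:

```groovy
// Jenkinsfile sketch: clone, package, and test a Maven project.
pipeline {
    agent { label 'maven' }   // runs on a dynamically provisioned slave pod
    stages {
        stage('Build') {
            steps {
                git url: 'https://github.com/example/mapit-spring.git'
                sh 'mvn package -DskipTests'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
    }
}
```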
Okay, let's take a look — our mapping application is also deployed. Let's create a route for it, which exposes the application on the load balancer so that we can send traffic to it. There we go, the mapping application is up. So let's go to the CI/CD project and see what the Sync plugin has done for us. I created this object, and it created a build config for the mapping app; you can see it's referring to the GitHub repo and the Jenkinsfile that was in that repo. So the entire definition of our simple pipeline is in Git, and we have just created it inside our Kubernetes platform. But what happens if I go to Jenkins now? Let me follow the URL here — I have Jenkins deployed. We can see that a pipeline was automatically created for us in Jenkins with the exact same definition that I created in OpenShift. Basically, the pipeline itself does not run in OpenShift; only the definition of it exists in OpenShift, and the Sync plugin replicates that object, creates it for me in Jenkins, and starts executing it there. The value of this is that I can keep the entire definition of my application as Kubernetes resource files, and the Sync plugin replicates that into Jenkins and creates those pipelines for me. There are other uses for this, because you can replicate, for example, secrets and use them as credentials, and other types of configuration — we'll get to those a little further into the session. But so far we're only using the pipeline definition that the Sync plugin replicated into Jenkins. You can look at a pipeline within OpenShift or inside Jenkins — I like the Blue Ocean interface of Jenkins, it's quite nice. Right now the build is running: it has checked out the pipeline from the Git repo and then it starts building. If I click on view logs, of course, it goes to Jenkins and shows me the logs in the older GUI of Jenkins. Give it a little while as this is deploying.
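On the OpenShift side, the pipeline definition object is a BuildConfig with a JenkinsPipeline strategy — roughly like this sketch, where the names and repo URL are illustrative:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: mapit-pipeline
  namespace: cicd
spec:
  source:
    git:
      uri: https://github.com/example/pipeline-definitions.git
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile   # path to the pipeline definition in the repo
```

The Sync plugin watches for these BuildConfigs and creates the matching pipeline job in Jenkins.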
Within Jenkins, of course, you can also look at the logs as they progress. Like I said, it's a normal Maven build that we're running inside our pipeline. If you look at the pipeline definition, in the build stage we had the clone and the Maven package, so it takes a little while to download all the Maven dependencies. After it finishes, it runs the tests. Right now that project doesn't have too many tests, so the test stage is quite fast — it's not a thorough test suite, for the sake of keeping this demo short. So that's the first phase of our pipeline: very simple, just using the declarative syntax to build the application as a jar file and test it. But that doesn't do us much good yet, because we have built a package and tested it — good — but we should also be able to deploy it inside that Dev project. And the next step for doing that is another plugin, which is called the... actually, I think I skipped a little ahead; let's go back one step. If I start the build again, there is another aspect of running the combination of Jenkins and Kubernetes, and that's the Kubernetes plugin. The Kubernetes plugin can be added to any Jenkins instance, and instead of Jenkins having dedicated slave servers, it can automatically and dynamically spin up slave pods on top of Kubernetes and delegate the build jobs to them. It could be a pipeline or any other type of build job. Historically, Jenkins does not scale on its own: if you want to scale Jenkins and have many instances of jobs running at the same time, you have to dedicate a large number of slave servers to it, which is a waste of resources, because you don't always have a lot of jobs running at the same time — especially during nights, when no jobs are running, a lot of resources are wasted by those servers standing by all the time. With the Kubernetes plugin, you can spin slaves up automatically.
Like I said, you can install it on any Jenkins instance — just configure it and point it at a Kubernetes instance — and every time you run a build, it spins up a new pod and runs that job for you. So within this project, if I go to the pods, we see there is a Jenkins pod running — that's our Jenkins master — and also a Maven pod that is actually running the job for us. As soon as the build finishes and the tests complete, if you go back and look at the pods again, the Maven pod disappears. So when I start a job, Jenkins talks to Kubernetes — on OpenShift here — spins up a new pod, and runs the pipeline on it. When it finishes, regardless of failure or success, it takes the pod down, releasing the resources as soon as you don't need them anymore. It's a great way to scale Jenkins to hundreds and thousands of jobs, possibly with a lot of them running at the same time, without dedicating a huge amount of resources upfront. Of course, while they're running the pods require resources, but you don't have to have them standing by all the time just so they can run jobs for you. So that's a very useful plugin when you're dealing with Kubernetes — I definitely recommend using the Kubernetes plugin. On the Jenkins image that comes with OpenShift, it's pre-installed and already available. The next tip for using Jenkins on Kubernetes is the way you can customize the Jenkins slaves. This is the same plugin I mentioned before, and another thing it does is help you customize these slave pods. I already mentioned that we have the Maven pod running the job, but immediately — especially if you're dealing with Java and Maven — the question comes up of how much memory this pod runs with. Sometimes you have builds that require a lot of memory, sometimes little; sometimes you do source generation in the build that takes a lot of CPU resources.
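With the Kubernetes plugin installed, a scripted pipeline can even declare its own pod template inline — a minimal sketch, where the image, label, and limits are illustrative values rather than the ones used in this demo:

```groovy
// Scripted-pipeline sketch: provision a one-off Maven slave pod for this build.
podTemplate(label: 'maven-pod', containers: [
    containerTemplate(name: 'jnlp',
                      image: 'registry.redhat.io/openshift3/jenkins-slave-maven-rhel7',
                      resourceLimitCpu: '2',
                      resourceLimitMemory: '2Gi')
]) {
    node('maven-pod') {
        stage('Build') {
            sh 'mvn -v'   // runs inside the dynamically created pod
        }
    }
}
```

When the node block finishes, the plugin tears the pod down and the resources are released.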
So we need a way to customize the configuration of this pod. The way to do that is normally through the Jenkins configuration: if you go to Manage Jenkins, you can go into the configuration of the Kubernetes plugin and change the configuration for the slave pods. But that, again, is not configuration as code, because as soon as that Jenkins pod is deleted, all your customization disappears with it. What you can do instead is use the same Sync plugin with a ConfigMap, which it automatically reflects into Jenkins to configure those slave pods the way you want. For example, I'm going to create a ConfigMap — a Kubernetes construct for containing any type of configuration — and call it maven-template; the names could be anything, it does not matter. The Kubernetes plugin has an XML format for configuring the slave pod: when you do this configuration within the Jenkins UI, this XML is generated for you and saved on the Jenkins pod. What I usually do to figure out the XML format is make changes through the Jenkins GUI, then shell into the pod, copy the XML that exists there, and define a ConfigMap based on it, so I can replicate it every time. So this pod template defines the name of this Jenkins slave and which image should be used for it — here the Jenkins slave Maven image that comes from Red Hat. You can define how much in resources should be allocated; I'm saying that up to two CPU cores and 2 GB of memory may be allocated to this pod. And that's pretty much it. You can add any other type of configuration the exact same way — environment variables, persistent storage, and other things. So we have created this ConfigMap, but there is one more step before Jenkins picks this up and updates its own slave configuration, and that is a label we need to assign to this Kubernetes object.
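Put together, such a ConfigMap might look like the sketch below. The XML body follows the Kubernetes plugin's PodTemplate format, and the label is the one the Sync plugin watches for; the image and resource values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: maven-template
  labels:
    role: jenkins-slave        # the Sync plugin only picks up ConfigMaps with this label
data:
  maven-template: |-
    <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      <name>maven</name>
      <label>maven</label>
      <containers>
        <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
          <name>jnlp</name>
          <image>registry.redhat.io/openshift3/jenkins-slave-maven-rhel7</image>
          <resourceLimitCpu>2</resourceLimitCpu>
          <resourceLimitMemory>2Gi</resourceLimitMemory>
        </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      </containers>
    </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
```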
The Sync plugin does not pick up every ConfigMap; it only looks for ConfigMaps that have a label role=jenkins-slave, and based on those it configures the slave templates with whatever configuration you request. As soon as we did that, Jenkins automatically fetched it from Kubernetes and configured itself. So that's the next thing we can do: customize the Jenkins slaves through ConfigMaps that you define on Kubernetes, which allows you to repeat this on any number of Jenkins instances, always starting from scratch. The next tip is the OpenShift Client plugin. If you want to interact with OpenShift from your Jenkins pipeline, the easiest way is to install the OpenShift Client plugin on Jenkins; it gives you a fluent API for use in your pipeline definitions. As you can see on the slide, there is a construct openshift.newApp that generates what the oc new-app command would, and this is full-fledged Groovy, so we can do any type of navigation or processing we want within our pipeline based on the information we get back from OpenShift. If we look at the same repo that holds our code, we have an extension of the pipeline called -deploy. The beginning of it is similar to what we had before — it just builds and tests — but it has two more stages. It has a build-image stage that starts interacting with OpenShift within the Dev project: it builds a new container image based on our jar file, waits until the build finishes, and then deploys the new version of our image into the Dev project. So let's replace the Jenkinsfile we had with the deploy one and see how we can use the OpenShift Client plugin within our pipeline. Let me go to the pipeline for the mapping app's build.
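The extra stages described here, using the OpenShift Client plugin's fluent DSL, could be sketched roughly like this inside a declarative pipeline — project and resource names are illustrative:

```groovy
stage('Build Image') {
    steps {
        script {
            openshift.withCluster() {
                openshift.withProject('dev') {
                    // start the image build and wait for it to finish
                    openshift.selector('bc', 'mapit').startBuild('--wait')
                }
            }
        }
    }
}
stage('Deploy') {
    steps {
        script {
            openshift.withCluster() {
                openshift.withProject('dev') {
                    // roll out the new version of the image
                    openshift.selector('dc', 'mapit').rollout().latest()
                }
            }
        }
    }
}
```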
All I need to do to update this — because it is already picking the Jenkinsfile definition up from GitHub — is to modify it and say: instead of the default Jenkinsfile, take the deploy one. Let's run it once more. It's going to check out the pipeline again from GitHub, execute the updated pipeline, and interact with the Dev project as it makes progress through the stages. This is going to take a few seconds to start. Yeah, it's started. It will check out the code, run through build and test, and then execute the build-image stage and deploy into the development environment. We'll come back to this while it's running to show the result of using the Client plugin. The next tip for using Jenkins on Kubernetes is credentials. Jenkins credentials are a feature that allows you to centralize all the authentication material you need — for authenticating against Git, against your image registry, certificates, anything — so that instead of spreading it across all your projects, you define it all in one place and refer to it within the pipeline. And we already talked about the Sync plugin: another great thing it can do is that you can create a Secret object in Kubernetes — similar to a ConfigMap, but designed for holding sensitive data — and the Sync plugin will automatically convert it to a credential object in Jenkins that I can use in a pipeline. The value of that is that I can keep all of this inside the Git repo — except, of course, the username and password data itself — and you don't have to go manually configure Jenkins and define credentials through the GUI anymore. You just build a secret, and it gets replicated into Jenkins. So let's do that.
One thing I forgot to do here: by default in OpenShift, every time you build a new image it redeploys the existing deployment. We want to manage that through our pipeline, so I'm going to disable that and let our pipeline decide when deployments should take place. Before I create it, let's check — yeah, exactly, it failed because there was an automatic deployment happening. I'm going to start this again, and now we let the pipeline control the deployment after building the image, instead of OpenShift doing it automatically. So, going back to credentials. A lot of the time we have an enterprise registry in our organization — I'm using Quay Enterprise in this instance. After we build our images, we want to release them into our enterprise registry; this is a private registry for images that we don't want to share publicly, so I need to authenticate myself in order to push images to this repo. What I'm going to do is create a secret — a generic secret — and give it a name. It holds my username and my password for authenticating against Quay; the password here is encrypted, so no worries, you can't reuse it. And like the ConfigMap, I also need to give this a certain label so that the Jenkins Sync plugin picks it up and replicates it into Jenkins for us to use. So I did that. Also, for pushing images to Docker registries, we don't need to have a Docker daemon running: I'm using something called Skopeo, which is a tool for copying images around without a Docker daemon. So within the same resource file where I had the ConfigMap, I'm going to define a new item, call it skopeo-template, and define a custom slave image, referring to an image that we have on Docker Hub.
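Creating and labeling such a secret from the CLI might look like the sketch below. The secret name and values are placeholders, and the label key shown is the convention the Sync plugin documents for credential synchronization — worth double-checking against your plugin version:

```shell
# Create a generic secret holding registry credentials (placeholder values)
oc create secret generic quay-credentials \
    --from-literal=username=myuser \
    --from-literal=password=mypassword -n cicd

# Label it so the OpenShift Sync plugin turns it into a Jenkins credential
oc label secret quay-credentials \
    credential.sync.jenkins.openshift.io=true -n cicd
```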
And this is the image that's going to be used as the Jenkins slave, which has all the tools that I need. Within the pipeline, I can now refer to this new slave pod template and use it to run a certain build. So I'm now going to create a pipeline for releasing our product — basically our mapping application — into our enterprise repository. You can see we have a Jenkins release pipeline. It's quite simple; I just want to show you the sync. Within the pipeline we have a construct called credentials, which refers to the credentials that were replicated from OpenShift, and we use Skopeo to copy the image into Quay. We don't have time to show a full run of this, but I can show that within the Jenkins credentials, if I refresh here, a new credential has been replicated into Jenkins for us that I can use within this pipeline to copy images around. So that's the next tip: use secrets in Kubernetes for building credentials in Jenkins, keep them as code, and be able to replicate them across environments. And the last tip I want to share with you today is canary releases. A canary release is a way to do zero-downtime deployments on top of Kubernetes: a new release of the application comes, you deploy it to a new pod, you put zero traffic on it, and you gradually grow the traffic as you gain confidence that this version is working; then you remove the previous version. All of that is also handled within our pipeline. We have a Jenkins canary pipeline, and within that pipeline you can see that first we get a selection from the user: which version of the application image they want to deploy.
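A release stage like the one described — Jenkins credentials feeding a Skopeo copy — could be sketched like this; the credential ID, image paths, and registry host are placeholders (the Sync plugin typically names the credential after the source namespace and secret):

```groovy
stage('Release to Quay') {
    steps {
        // 'cicd-quay-credentials' is assumed to be the credential the
        // Sync plugin created from the Kubernetes secret
        withCredentials([usernamePassword(credentialsId: 'cicd-quay-credentials',
                                          usernameVariable: 'QUAY_USER',
                                          passwordVariable: 'QUAY_PWD')]) {
            sh """
              skopeo copy \
                docker://docker-registry.default.svc:5000/dev/mapit:latest \
                docker://quay.example.com/myorg/mapit:latest \
                --dest-creds "$QUAY_USER:$QUAY_PWD"
            """
        }
    }
}
```

This step would run on the skopeo slave pod template defined earlier, so no Docker daemon is needed.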
That stage interrogates the registry, fetches all the versions that exist, asks the user which one to deploy, and then creates a new instance of that application — our mapping application — deploys the new version, and puts 10% of the traffic on it. If everything is good, at every step of the way we need approval from someone — a release manager or deploy manager who has permission to do this — and the pipeline grows the traffic on the new release all the way to 100%. And this is all good except for one problem: if somebody aborts the pipeline in the middle — say after 10% we test it, we find a problem, and we don't want to go to production — all the objects that we created, the new deployments for the new release, would be left out there, and that would make our production environment very messy. So what we can do, in the rollback pipeline — which is an extension of the pipeline I showed — is define some kind of exception handling: a post block within your pipeline that says what happens if something goes wrong. If we have a failure, go remove all the new deployments we created and make production look like it did before — put 100% of the traffic on the existing release, don't touch anything, don't disrupt traffic. And we do the exact same thing if somebody cancels the release because, for example, we found a bug in the new release: we roll back everything, put everything back as it was — 100% of the traffic on the version that was working fine — and remove all the extra objects.
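In declarative syntax, that exception handling maps onto a post block with failure and aborted conditions — a minimal sketch, where the approval step, resource names, and rollback actions are illustrative:

```groovy
pipeline {
    agent any
    stages {
        stage('Canary 10%') {
            steps {
                // ... deploy the new version and shift 10% of traffic to it ...
                input message: 'Grow traffic to 100%?', submitter: 'release-managers'
            }
        }
    }
    post {
        failure {
            echo 'Rolling back: removing canary objects, restoring 100% traffic'
            // e.g. openshift.selector('dc', 'mapit-canary').delete()
        }
        aborted {
            echo 'Release cancelled: restoring the previous stable release'
            // same cleanup as on failure
        }
    }
}
```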
So within the pipeline — using the OpenShift client syntax, using the secrets, using the ConfigMaps — we can manage the entire deployment of applications throughout their lifetime, run very complex processes, and, most importantly, keep all of that information as code within our Git repo, so that we can recreate the entire thing every time, should the need arise: if we need to change environments, if we need multiple instances of this, or if something happens to the existing environment. We can recreate every single piece of our configuration from scratch again. And with that, I'm going to finish the session here and see if there are any questions we need to answer. I haven't been looking at the chat at all — let's see.

I can't help you with that, Siamak. Thank you very much for your awesome presentation; we only have time for two quick questions. The first is: does the Sync plugin work with plain Kubernetes, or do you need OpenShift for that?

It does work with Kubernetes for certain objects, but not for all of them, because some objects are specific to OpenShift — they're added as extra resources, like the pipeline build config itself, which is built for OpenShift. But secrets and ConfigMaps, yeah, those would work with plain Kubernetes.

And the next question is: is it possible to use a single Jenkins instance to work with all of the other namespaces, or would it be a best practice to have one Jenkins per namespace?

You can use a single Jenkins for all namespaces — in this demo we also used one Jenkins working with the Dev and Prod namespaces. But you explicitly give permission to Jenkins: OpenShift is very strict when it comes to access control, Kubernetes is a little more relaxed, so you explicitly give Jenkins access to the namespaces it should reach, and then from a single Jenkins you can manage the entire pipeline, regardless of how many environments it involves — and even how many clusters.
So from the same instance you might promote your images to the next OpenShift cluster and do a deployment in your production environment that is elsewhere.

Okay, perfect. I think we had a lot of information today, and I hope that everybody is as amazed as I am. I would like to thank you again, Siamak, for this presentation, and thank you all for being here. I hope to see you all again on the next DevNation Live. See you, guys. Thank you.