Awesome, thanks everyone for coming. I'm Kingdon. This is my first time at KubeCon in person. Really glad to be here. So welcome, everyone, to the Jenkins and GitOps talk. This is my first talk at KubeCon, obviously. Well, in person. I've presented a couple of times before. As Taylor introduced me, I'm an open source support engineer at Weaveworks. I'm also a Flux maintainer. And I'm the cow on Twitter. If you want to follow me, that would be cool. So, GitOps. That's what you're all here for, I think. First, we're going to talk really briefly about what GitOps is. GitOps is an operational model for cloud native operations. A modern model, I meant to say. So GitOps is formally defined at this point, which should give you an idea of its maturity. We've gone through lots of rounds in a vendor-neutral setting. AWS, I believe Red Hat, Microsoft, a number of companies have participated, Weaveworks, Codefresh, to come up with this definition for GitOps. And we think it's pretty solid at this point, so it's been made 1.0.0. That's awesome. So the GitOps definition is a set of principles. And I want to preface this by saying that these are not prescriptive requirements of GitOps. GitOps is a progressive process. You have to take one step before you can get all the way there. So step one is understanding declarative artifacts. You already have declarative artifacts if you're at KubeCon. Great. Step one, out of the way. Step two is they should be in a versioned store, which holds immutable artifacts. The versioned store, in this case, could be Git, could be something else. This is the vendor-neutral definition, so it tries to be as inclusive as possible. And another one of the GitOps principles is that artifacts should be automatically pulled from your source. And it's a continuous reconciling process. So continuous delivery, continuous reconciliation. You have reconcilers that sit inside of the cluster.
They pull artifacts, and they apply them to the cluster from a versioned store in a continuous process. And the artifacts are declarative. The Git repository should completely describe the state of your system so that it can be reproduced. So, okay. Back to this talk and why you're here. I have an idea of who my intended audience is, but I don't know every one of you, so I'm not sure if you're all current Jenkins users. Can we get a show of hands? Lots of current Jenkins users. Okay, great. You're my main audience. So the disclaimer is: I will show in this talk a way to deploy Jenkins, and this might be better than what you have now. But I cannot take responsibility for your Jenkins infrastructure. It's very big, I'm sure. I'm here representing Flux; we do CD. I'm here to show you a good way to deploy Jenkins, and also to give you a good introduction to GitOps and how you can use Jenkins with Flux. So I'm going to start with... I don't know how you have Jenkins deployed now, but the common use of Jenkins is not especially declarative. You have lots of these things. They're dialog boxes. You've got to click on them. And that's what we're doing away with. That's the point of this talk. So let's make the Jenkins install declarative with Flux. So I have this Git repository. I should probably pull the Git repository up so you know where to go. We'll save that for later. Okay. Now we've got it, here we go. So this is the Jenkins intro repo in the kingdon-ci org. That's my org. And the way that I have this set up is with Flux. It's going to take a little while to load, apparently, because the network is having issues. So let's just go right to it. This is running on my pet cluster. It's called the moo cluster. And I'm looking for the Jenkins folder. This is a Flux Kustomization. So if you've never used Flux, or Flux for GitOps, this might be unfamiliar to you. I will draw your attention to the spec section. There's lots of stuff in here.
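For anyone reading along who hasn't seen one, a Flux Kustomization looks roughly like this. This is a hand-written sketch, not the file from the demo repo, so the name, path, and API version here are illustrative:

```yaml
# Hypothetical sketch of a Flux Kustomization; names and paths are illustrative.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: jenkins
  namespace: flux-system
spec:
  interval: 10m            # how often to reconcile from source
  path: ./jenkins          # folder in the Git repository to apply
  prune: true              # delete cluster objects that are removed from Git
  sourceRef:
    kind: GitRepository
    name: flux-system
```

The spec section is the part to focus on: it points at a source and a path, and the controller continuously applies whatever is there.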
We're going to go find this patch down here. There's a patch. So I've already installed Jenkins, because this takes a while. What we're actually going to do is scale Jenkins up. So what I just did there, sorry, that was a little too fast: I incremented the replicas count to one from zero. And I've had to use a patch here because the Jenkins Helm chart does not include a replicas count. It's not meant to be more than one replica. So let's commit that and push. And this is what GitOps is. I'm going to skip a step or two because this is a fast-paced demo. Normally I would open a pull request. I'm just going to push straight to master. Uh oh. Okay. Oh no, that's going to make it not a fun demo. If we can't push to master, maybe we'll have better luck outside of the VM. So some things are bound to go wrong as we do this. And like I said, this is my first time presenting at KubeCon. I'm going to laugh. I hope you all laugh too. Okay. So I have to first pull and make sure I have the latest revision, because it's a Git repository. Here we go. We'll put that one in and push. All right. We've got a git push. And the next thing I'm going to do is go to Slack. And as you see, things are happening already. So let's see if we can see what is happening. We should get some pods. This is an alias here for kubectl and the Jenkins namespace. Oh yes, of course. And the reason we can't is because I disconnected from the VPN to try and make that work. So I have a VPN here. I have this set up to be a little bit paranoid because, like I said, I can't take responsibility for your Jenkins infrastructure. And I don't want to put a Jenkins server on the internet. I don't know about you, but I imagine many of you who have Jenkins have it deployed in a private sort of setting. We're going to move on if this doesn't go right away. We're going to need Jenkins in a minute, though, so I hope it does work. And I'm going to try to catch up with that guy at the bottom there as we go.
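Since the chart doesn't expose a replica count, a patch like the one I scrolled past can be expressed as a post-renderer on the HelmRelease. This is a hedged sketch, not the exact patch from the repo; whether the workload is a StatefulSet named jenkins depends on your chart version:

```yaml
# Illustrative HelmRelease fragment; the target kind/name depend on the chart.
spec:
  postRenderers:
    - kustomize:
        patchesJson6902:
          - target:
              group: apps
              version: v1
              kind: StatefulSet
              name: jenkins
            patch:
              - op: replace
                path: /spec/replicas
                value: 1   # scale up from 0 to 1
```

Committing a change to that value, and pushing, is the whole deployment workflow.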
This is a progress indicator to help us keep the pace. So we're going to outline what we're going to show exactly here. We're going to do a Helm update, and actually we're going to do it in a mostly automated way. And then we're going to do an image update. The first time, we're just going to do a plain image update. It'll be quick. And the second time, we're going to use Flagger, so we can see the image update go in a progressive delivery style. So first we said we're going to do an automatic upgrade with Jenkins and the Helm controller. So let's do that. Good. We are getting Jenkins to come online. So that's good. So it's in the HelmRelease file, which is another CRD. Like I said, there are a lot of new concepts if you haven't used Flux. I'm just going to blast right through all of them, and there will be an opportunity for questions later. Okay, so what we have here is our HelmRelease. This describes our Jenkins installation declaratively. This is really, I think, what you came here to see. And this is very long. Once you get to the values section, these are actually things that we're passing into the Helm install command. So we've got persistence, so that the Jenkins server, when it comes online, knows whether it's run jobs before or not. But we're not using it for anything else. And we've got a place where we can set the image if we want. The Jenkins Helm chart tells you that you should probably bake the plugins into the image. You don't have to; there's an option in the Helm chart. The Helm chart is great. I'm just going to plug it right now: if you haven't checked out the Helm chart to install Jenkins, you should definitely check it out. It's kind of a prerequisite of this demo. Okay, so before we go into detail about that, we're just going to upgrade Jenkins. And that's what it takes. That caret there says we would like the latest release in the same major version.
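In case the caret went by too fast, here's roughly what that HelmRelease chart spec looks like. A sketch from memory; the demo's actual names and version numbers may differ:

```yaml
# Illustrative HelmRelease; the chart version range is made up for the example.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
spec:
  interval: 10m
  chart:
    spec:
      chart: jenkins
      version: "^3.0.0"    # caret: track the latest release within major version 3
      sourceRef:
        kind: HelmRepository
        name: jenkins
        namespace: flux-system
  values:
    persistence:
      enabled: true        # so Jenkins remembers whether it has run jobs before
```

The values block is passed through to the chart exactly as a `helm install -f values.yaml` would be.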
You can also put a tilde there if you would like the latest patch release in the same minor version. But we're going to get the latest version here. We'd like the latest; it's actually an upgrade to the next minor version. So like before, we're just going to do a commit and push. And we're going to go back to our notifications channel to see if that did anything. And it did. The Helm upgrade has started. Great. We can watch it progress if we want, but we've got to move on to something else. We've got lots of stuff to show. So this is the live demo part, as we said. Okay, we've installed Jenkins. Okay, we did that already. Helm controller. We've seen the SemVer automation to upgrade Jenkins. Hopefully that wasn't too fast. I'm showing that first because it's the easy way. There is an easy way and a hard way to do automation in Flux. And we established that many of you already have Jenkins. Maybe you don't. But now we're going to move on to the app. Hopefully Jenkins is ready so we can actually do a build. And the build will also automatically be deployed. And then I'll show you the artifacts that make that possible. So I imagine that you have some kind of app or apps, maybe a bundle of microservices. Podinfo is our app today. Yes, I'm in the right place. So what we're going to see here is our deployment patch. There is a deployment beneath this that has all the details, but this is the easy thing to look at. It's the easiest thing that I can show, at least. And this over here is called a kyaml setter. That's a Kustomize thing, I think. And that tells us which namespace to look in for our image policy. So we have an image policy defined that expresses how to find the latest build for podinfo dev. So let's kick it off before we're too late. And I suppose I should make a change here rather than just do a build, but this is quicker. And we are actually short on time at this point, if you are watching the tracker. So let's do it this way. The build will take a minute.
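The kyaml setter in that deployment patch is just a comment that marks the image field for the image automation controller. A hedged sketch; the image name, tag, and policy name here are placeholders, not the demo's exact values:

```yaml
# Illustrative deployment fragment; the trailing comment is the setter marker
# that tells Flux which ImagePolicy governs this field.
spec:
  template:
    spec:
      containers:
        - name: podinfo
          image: docker.io/example/podinfo:master-abc1234-1652000000 # {"$imagepolicy": "flux-system:podinfo-dev"}
```

When the policy selects a newer tag, the automation rewrites exactly that field and commits the change.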
We're not going to watch. We'll just move on. But when we come back, we'll see. So there's our build that fired off. Jenkins is running on Kubernetes, and it's using the Kubernetes agent. So when a build happens, it creates a pod, and it pushes instructions to the pod through a side channel. And many of you probably are doing this in your Jenkins installs already. But what I hope that you'll take away from this is what happens when you create the new image. The structure of this is a little bit arcane. So this is the branch name, and this is a commit hash. Those are not super important, depending on how you define your policy. I suppose we should look at the policy. So where does it come from? Well, this is a Kustomize overlay. This is one of the confusing things in Flux: we have the Kustomize kustomization, and we have the Flux Kustomization. I don't know how we got here, but we have to get over it. It's okay. I believe I've defined the policy here, okay, in a podinfo folder. And that's what it looks like. So it's got the literal string master in it. And it has a space for the commit hash. And this part says we're looking for a timestamp. Now, if you're familiar with Flux's v1 automation, this is a little bit different. There's a technical reason for it, which I can explain later, at the opportunity that I'll have later. I hope that you all will join me later at the Flux booth at, I think, 4:30 maybe? It's four o'clock. The office hours, where I'll be going into detail about the things that we had to gloss over here. So you will have an opportunity to grill me with questions. Anyway, this part says the group named timestamp is the part that we're extracting. And that's the sortable part of the image. So we're using sortable image tags. And if you Google "Flux sortable image tags," you'll find documentation about this with this very example. So we see our job is finished. We're in the podinfo dev namespace now. And we see two pods that are two minutes old. That's awesome.
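That policy, with the literal master, the commit hash, and the extracted timestamp, looks something like the sortable image tags example in the Flux docs. Treat this as a sketch; the regex and names may not match the demo repo exactly:

```yaml
# Illustrative ImagePolicy: sort dev builds by a timestamp extracted from the tag.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: podinfo-dev
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: podinfo
  filterTags:
    # tags look like: master-<commit hash>-<unix timestamp>
    pattern: '^master-[a-fA-F0-9]+-(?P<ts>[0-9]+)'
    extract: '$ts'
  policy:
    numerical:
      order: asc           # the highest timestamp wins
```

Only the extracted timestamp is compared, which is why the branch name and commit hash parts "are not super important" to the ordering.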
And for anyone who hasn't seen podinfo before, this is what it looks like. So this is what we're upgrading. So let's move on. Okay, so there is a Helm chart for podinfo. It does work with Jenkins. We're not going to use it today. Like I said, there's a lot of stuff in the abstract for this, and it's all in the demo repo. We're not going to have time to look at everything today, but we can at the office hours. So we've added Jenkins support to podinfo. Podinfo does not natively have a Jenkinsfile. And we've done this using the Flux guide to Jenkins, which I will show you. It is on our lovely docs site, and you can find it in the menu under use cases: Jenkins and Flux. So there's a lot of text in here. Go ahead and read it, if you like, or just come down here and snag this example. And this is a Jenkinsfile. You may or may not have a lot of these, but this is part of declarative Jenkins. So we've installed Jenkins declaratively, and now we're configuring our jobs declaratively. Great. And how does this work? We have some variables defined at the top. You're meant to change these for your own version. So if you're not kingdonb, put your own Docker Hub username there, or wherever your registry is, and give it a secret so it can push. This is a Jenkins secret. In the HelmRelease, there's a section where you can add credentials declaratively also. And then it takes this Docker pod definition. We're actually using Docker in Jenkins to build images. So we do a build. Here's how we get the Git commit hash in Jenkins. This is Groovy code. And if you've built any Jenkinsfiles or used Jenkins for any length of time before, you've seen stuff like this. And that's also how we get our timestamp. And what happens next is probably better explained over here. So here are our pipeline stages. Oh, I didn't install the Blue Ocean plugin. That would be really nice. We could look at it. How many of you have seen Blue Ocean before? Okay. All right. So that's not a big loss.
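The credentials section I mentioned lives in the chart's JCasC (Configuration as Code) values. A rough sketch, assuming a recent chart version; key names have changed between chart releases, and the plugin list and credential ID here are made up for illustration (pin real plugin versions in practice):

```yaml
# Hypothetical values fragment for the Jenkins Helm chart.
controller:
  installPlugins:              # or bake these into your image instead
    - kubernetes:latest
    - workflow-aggregator:latest
    - git:latest
  JCasC:
    configScripts:
      push-credentials: |
        credentials:
          system:
            domainCredentials:
              - credentials:
                  - usernamePassword:
                      id: registry-push      # referenced from the Jenkinsfile
                      username: example-user
                      password: "${REGISTRY_TOKEN}"
```

With this in the HelmRelease values, the push secret exists as soon as Jenkins comes up, with no dialog boxes involved.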
Lots of people have seen it. Anyway, what you would see, if you haven't seen it, is a fork. So when the build is done, what we'd really like is for that image to be deployed immediately in dev. And it should also be tested in parallel. So that's what's happening here. And we have a different definition, if we look at the actual Jenkinsfile in my podinfo fork, that tells it what to do when a tag is pushed. So this is a little bit more elaborate. But we would like, when a commit is pushed to a dev branch, for that to be built and pushed. And then when we push a tag, we'd like for that to be built and pushed too. But instead of the crazy three-part tag, now we're going to use the actual Git tag as the image tag. Great. So we'll do that in a second. All right. So I think we've gone backwards. So there are some Helm test examples in the docs that you should all check out. We're not going to have time to go through Helm test today. You see, we're getting further and further behind here on the clock. We're going to catch up, though. It's okay. All right. Let's go to the linkerd-demos podinfo. And this is going to show us Flagger. Oops, spoiled the thing. I've borrowed this example from Jason Morgan at Buoyant. They're the makers of Linkerd, which is another CNCF graduated project. So I've just cloned it inside of my repo here. I'm not using any submodules or anything. Is it in the right place? I think so. Yes. Okay. So there's a Canary defined here. And the Canary is the moving part that does the progressive delivery. So this is going to take longer, so we're actually going to start it right away before we explain anything. And okay, we've got a podinfo. So this is our production podinfo. It has a patch to change the background color and do some other things. So we're going to change the background color. I'm not sure what this color is, but I'm sure it's good. And we're going to do something else.
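For the tag-push path, where the Git tag becomes the image tag, the matching policy would be a SemVer one rather than the timestamp one. A sketch, with an assumed version range:

```yaml
# Illustrative SemVer ImagePolicy for release tags.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: podinfo-prod
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: podinfo
  policy:
    semver:
      range: '>=1.0.0'     # pick the highest release tag
```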
I'm going to check our channel real quick to make sure something happened. It did. Okay. Let's go back to the browser. And I think I'll have a minute to explain what we're going to see exactly. So first, if I can find the Canary, we will see that it has a status, initialized. And if we watch the status, we should see momentarily that it changes to progressing. And what that means is Flagger has detected that there's a new release, and it's time to roll it out. And what it does to roll it out is: first, if you have defined any Helm tests or other kinds of tests, it'll run them before it routes any traffic at all to it. It is actually moving. And then once it's started routing traffic to it, it's going to progressively increase the amount of traffic, starting from 5%. Great. Here we go. It is moving. Wow. This might actually work. Okay. So in a moment, what we should see is our change, but just a little bit. And what's happening is Podinfo is actually refreshing the page for us. If you run podinfo in your progressive delivery demos, it's really nice. It kind of pokes the page over and over again until something happens, so that you can see the thing happen and you can just keep talking. You don't have to press refresh. Let's go back to the slides. Advance as fast as we can. Okay. So I hope that, so far, this is nice. We should see some progress over here soon. But I want to talk for a minute about this: if you've seen some Flux demos before, or maybe you've tried it yourself, and it didn't turn out like this, why is that? So you probably should take a look at the webhook guide. There's a guide in the Flux docs called the notification guide, and then there's another one called the webhook receivers guide. And a webhook receiver is taking notice from GitHub or from an image repository when something happens, to tell Flux it's time to reconcile something. Otherwise, what Flux is doing is continuously reconciling on an interval.
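To give you a feel for the Canary resource while we wait, here's a minimal sketch. The port, intervals, and thresholds are illustrative, not copied from the linkerd-demos repo:

```yaml
# Illustrative Flagger Canary for a Linkerd-meshed deployment.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: podinfo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 30s          # how often to step the rollout
    threshold: 5           # failed checks before rolling back
    stepWeight: 5          # start by shifting 5% of traffic
    maxWeight: 50
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99          # revert if success rate drops below 99%
        interval: 1m
```

The analysis section is where you tune how cautious the rollout is: smaller steps and tighter metric thresholds mean a slower but safer release.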
And that's very expensive if you want to do it on a tight interval. Okay. So we'd like to see some part of this here. It's supposedly serving 50% of the traffic, but we still haven't seen anything. Am I looking at the right one? I'm afraid I did something wrong in the setup. One of the things that can go wrong at this point is your pods are not meshed. I bet that's what we'll find. Yeah, that's it. So I've been tearing this down and standing it back up repeatedly. And I have a dependsOn statement, which is another feature of Flux, to make sure that things come up in the right order. But sometimes, when my cluster goes down, because it's a homelab cluster and things go wrong all the time, like literally every 24 hours at this point, it's practically chaos engineering, the thing that goes wrong is you have things come up before Linkerd, and so they don't get a sidecar injected. That sidecar is a proxy that all traffic goes through, so that Linkerd can collect metrics and do things like revert the deployment if you see an elevated rate of errors, or if you see your latency numbers going up, your request latency. You can take a look in that Canary and see how that's defined. So that's all customizable, like everything else about Flagger. And it's progressing every five seconds until it's done. And we'll see that it finishes at some point. And we'll see a transition all at once, unfortunately, because I flubbed that part. But it's still pull-based. And yes, the reason that it's actually pull-based, even though we're using a webhook, is because Flux only pulls. It's not receiving instructions through the webhook. It's just receiving a notice: it's time to pull again. And this is the shortened loop. So development is an iterative process, and the fast inner loop is super important. And this cannot be overstated. And I'm sure you've heard it a million times before. But this is like one of the important principles of DevOps.
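That webhook receiver from the guide is itself a small declarative object. A sketch, assuming a GitHub push webhook pointed at the notification controller; the secret name is a placeholder:

```yaml
# Illustrative Flux Receiver: GitHub pushes trigger an early reconcile.
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Receiver
metadata:
  name: github-receiver
  namespace: flux-system
spec:
  type: github
  events:
    - "push"
  secretRef:
    name: webhook-token    # shared secret used to validate the webhook payload
  resources:
    - kind: GitRepository
      name: flux-system    # the source to reconcile when the hook fires
```

Note that the payload itself is ignored; the receiver only shortens the interval, and Flux still goes out and pulls.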
And also it makes a better demo. So five minutes is, I think, the guideline in the DevOps guide: five minutes is not too long, but any longer is probably too long. And it should be shorter if possible. The shortest possible is the best. So, all right. You can use any CI; it does not have to be Jenkins. And Flux gives you this great boundary between CI and CD. Now they're separate jobs. CI's job is to build, test, and push an image. Flux's job is to deploy it in the cluster. Flagger makes this even safer. You can do this without Helm, or with Helm test, or in any combination. There are lots of features to explore in the Flux docs. And image automation, which we saw in the earlier step, is probably the feature that made Flux v1 popular, in my opinion. So GitOps is great. But what we really wanted was for it to be easy. Everything should just happen. We don't want to really think about it. And CD, if it gets out of the way, that's great. That's what we wanted. We don't have to look at it. Super. Infrastructure is unfortunately not easy all the time. And so this is a little harder than it was in the past, because we're building infrastructure. We're not just building CD, right? These are composable units of abstraction that you can use for any purpose you can imagine. So, image policy and image repository: Flux will go out and look at the images that are there and reflect them into the cluster, so that it has a record of what image tags exist, filtered through your image policy. And image update automation takes that information and writes it back into the Git repository. And that's cool. These are the CRDs to know. These are the controllers. GitRepository is also part of this. It's used for its secret. So you pass a GitRepository ref to the ImageUpdateAutomation, and it harvests the secret out of it. For this reason, they have to be in the same namespace. And because these secrets are used for writing into your Git repositories, they're super important to protect. Oops.
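Putting the write-back piece together, an ImageUpdateAutomation is what ties the GitRepository (and its secret) to the setter markers in your manifests. Again a sketch; the branch, path, and author details are placeholders:

```yaml
# Illustrative ImageUpdateAutomation: writes new image tags back to Git.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system   # must match the GitRepository's namespace
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system      # its secret is reused for pushing commits
  git:
    checkout:
      ref:
        branch: main
    commit:
      author:
        name: fluxcdbot
        email: fluxcdbot@example.com
      messageTemplate: "Automated image update"
    push:
      branch: main
  update:
    strategy: Setters      # rewrite fields marked with $imagepolicy comments
    path: ./
```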
So these are all advanced topics. You can template and scaffold around them so that your team members can use them, if you understand them and they do not. Okay. So we've got our Flux and Jenkins guide. Please go read it. It took me a long time to write it and get the PR approved. It explains how to use Jenkins together with Flux. And if you read the abstract, I mentioned Porter. Why is Porter involved in this? So if you're using Docker to build images, Porter does that too, but the interface is smaller. And unfortunately, we don't have time to show Porter. But what jobs does CI do? And should I use Jenkins for this? Those are questions that I cannot answer. I'm not here to talk you out of it. But Jenkins has made me extremely paranoid. If you go and look in this repo and see all the things that I've done to protect it from random strangers on the internet while I give this demo, you will be impressed. Are you prepared for Doomsday? I hope so. So there's a 45-minute session later on. Please visit the Flux booth. You can do that from wherever you are. You don't have to visit in person. We have a hybrid booth. It's really cool. Also, the GitOps One-Stop Shop is our milestone event coming up next week. And we're having Microsoft, AWS, VMware, D2iQ, and speakers from Weaveworks, all to show us the products that they're building on Flux that you can go out and buy. So please check it out. We're going to have some really fantastic speakers. Flux is great for disaster recovery. Thirty-five minutes is not enough time to show disaster recovery. But in 45 minutes, we're going to tear the whole cluster down, and I will show you how it all comes back up. And it'll actually only take about 10 minutes. It's my homelab. So there's lots of stuff that we don't actually have time to talk about. But it will be easy. Maybe parts of it, at least. And there's a strategy here that I really want to show you and really want to explain, but we don't have time, unfortunately, right here.
But it's super important. So remember this part, at least. So this was the easy thing that I showed. I'm going to show a hard thing at the Flux booth that actually solves a problem that you're probably thinking about in the back of your mind when you think about how image update automation works. We have a chicken-and-egg problem. We'd really like to use the Git tag, so that we can take the manifests there in the Git tag of the application and apply them to the cluster with our new image. But that's not how image update automation works. So tags, that's what you use them for: you want to mark a release, and then things should happen. And when the image build starts is when you push the tag. So that's not the time for Flux to deploy the manifests, right? Because your build could fail. And it may fail for surprising reasons, like you ran out of disk, or anything else happened, like pods came up in the wrong order. So the solution that I will allude to is that a new SemVer image tag actually implies that there is a matching Git tag. And if you're really thinking right now about how to use this, and if you know about a feature that I haven't shown yet, then the solution might be obvious to you already. You can use that Git tag. You can use that image tag as a Git tag. Anyway, I think we're completely out of time. No? Please join me at the Flux booth. Are we out of time? Yeah, we're actually out of time. Okay, great. I hope you enjoyed it.