Good morning, everybody. Here's what we're going to get into today. I'll do a brief introduction about myself, my background, and my passion for continuous integration and continuous delivery and deployment. I also want to mention that the session right after mine, Kruno's talk, goes into a lot more detail on our continuous environment setup: how it's set up and how it works.

I usually focus this presentation just on what we're doing, but this time I want to give a little more context on the landscape. I want to talk about the evolution of what the platform is today, where it started and how it's evolved, and how the DevOps landscape has changed considerably in the last five to ten years. I'll talk about some of the DevOps infrastructure components and tools we're using, and how to integrate your GitHub project with our DevOps infrastructure. Then I'll do a brief demo of the pipeline. I recorded it; I did not want to tempt the demo gods or the conference Wi-Fi. I can show some things live, but I recorded most of it and will talk through the recording. Finally, I'll talk about where DevOps infrastructure and CI/CD are headed and some of the new innovations happening in the space, mostly around OpenShift and Kubernetes. Spoiler alert: we'll be talking a lot about OpenShift and Kubernetes.

My name is Riley Vigny. I'm a senior principal software engineer at Red Hat, in the systems engineering organization, on a continuous productization team. What that means is that we're focused on how we can release our products more efficiently: quicker to release, and with fewer bugs. How do we enable that process? We're building a lot of things internally, like a service to enable it, so developers and QE folks don't have to figure it out themselves. At the end of the day, my passion really is continuous integration and continuous delivery and the overall DevOps umbrella that encompasses them. Again, don't forget to go to Kruno's talk after mine.

So, the evolution of the platform landscape. Everything started with installing your OS on a bare-metal machine, way back when; we all did that. Then it went to the virtualization stack. With all these different pieces, we just had machines, and there was no consistency between them. You could take a bare-metal machine or a VM, set up your dependencies, and install everything you needed, but it wasn't very repeatable or consistent. Configuration management helped out there considerably. Then we went to the cloud, where you could get your VMs and all your components from any of these providers. There are some I've left off, but these are examples of what exists today.
To me, the real evolution of where we've come to is Kubernetes, and OpenShift in particular, which is Red Hat's distribution of Kubernetes. Some history on how fast this has grown: the first DockerCon was in 2014. The first KubeCon was only two years ago, and about a thousand people were there. At the KubeCon I went to in Seattle this year, there were eight times as many people. Think about that for a second. Also, the original projects under the CNCF umbrella were just these three: Kubernetes, obviously, plus Prometheus and Fluentd. Now, if you look at the landscape of projects that have grown out of it, whether they're in the sandbox, incubating, or graduated stages, there are over 30. That's because people identified that this is really the new platform. This thing running on top of bare metal or VMs is where you get consistency in rolling out your applications and any systems you want to run on top of it. It also means it doesn't really matter whether your cluster is on-prem or in a cloud; the experience is pretty much the same across the board, which is probably different from what previous platforms offered you. I think that's really important, and it's what has also evolved the DevOps landscape of how we set things up on a platform.

Like I mentioned before, there's configuration management. A lot of these tools have evolved: Ansible and CFEngine are what's current, what people use today or have been using. The newer approach to configuration management is APBs, Ansible Playbook Bundles. For monitoring these systems, we used to use Nagios and Zabbix; now, in the cluster itself, we're using Prometheus.

Just as the platform has converged into Kubernetes and OpenShift, a platform to run any application we want, whether it's databases, front ends, or back ends, some of these tools have also come together. I deliberately didn't label the CI/CD tools as "current" and "new" because I think all of them are still relevant and still evolving. Some are services and some are straight tools: CircleCI is a service, just like Travis CI is a service; Zuul and Jenkins are more like tools you can implement yourself or get off-the-shelf implementations of. In the case of Jenkins, CloudBees is the main contributor and owner. Zuul is definitely open source, has been widely adopted in the OpenStack community, and is now expanding to other areas. A lot of these things are converging on the platform itself. A lot of companies are making moves now, not just to have a separate CI system that runs on the side, but toward integrating CI into the platform itself, and that's really what I'm going to get to when I show my demo.

One thing I want to point out about Jenkins: classic Jenkins is not cloud-native. It's not Kubernetes-ready; it's really just a Java application that runs. Then it got containerized, which is cool. It's containerized. That's great.
But it's still not really cloud-native. You can't scale it, and it's not ephemeral in the sense that you could spin it up in a pod on OpenShift or Kubernetes, have it do a task, and spin it back down. What Jenkins and CloudBees are doing as part of this evolution is making Jenkins more cloud-native, and there's this thing called Jenkinsfile Runner. With it, you can spin up a headless Jenkins that exists just to orchestrate and execute a Jenkins pipeline, without having a persistent Jenkins sitting around. You don't need storage to back it; you can collect the logs and any artifacts off of it and push them to another location. CloudBees recognized that Jenkins itself doesn't really pass muster for running on OpenShift or Kubernetes as a platform, so how do they change that? Some of you may have heard of Jenkins X, which is their complete solution that runs on an OpenShift or Kubernetes cluster.

The other interesting thing: I went to Jenkins World this past year, as I have for the past four or five years, and the conference was always called Jenkins World. This was the first year it was called Jenkins World and DevOps World. They're shifting away from Jenkins being their flagship product, because they've identified DevOps as the space they want to be in, and they're asking how to enhance it with their products, not just with Jenkins running on top of OpenShift or, in their case, probably Kubernetes.

Now I'll talk about some of the infrastructure components and tools that we use. On OpenShift we have S2I templates, which are just YAML configurations of build configs to apply to your cluster (when I say Kubernetes or OpenShift, treat them as interchangeable), plus the Dockerfiles we want to build. We're using Jenkins for this demo, but that doesn't mean you have to use Jenkins; we have shared libraries and pipelines that we also roll out, and it's an optional piece. And we have a thing called hooks, so after the setup runs we can do any post-processing we want. What I'm going to show you today is an OpenShift instance run locally on Minishift, but the same stuff could just as easily target a remote OpenShift endpoint: apply the S2I templates and our containers, load any Jenkins pipelines, and run any post-hooks after the cluster has been set up.

The other thing our team has worked on is contra-hdsl; these are all upstream projects, by the way, and I'll have links at the end of the presentation. It's a really simple YAML DSL that acts as a front end for Jenkins libraries and pipelines, because a lot of people don't want to have to know Jenkins, or pipelines, or what that entails. It makes this easy, along with some helper containers that we build in OpenShift and deploy at the time a pipeline runs. One is our LinchPin executor container, which is used to provision resources in any cloud.
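To make the S2I template idea concrete, here's a minimal sketch of the kind of build config such a template carries; the names, namespace, and repository URL are placeholders for illustration, not the project's actual files:

```yaml
# Illustrative BuildConfig: build an image from a Dockerfile in a Git repo
# and push it to an image stream tag. All names/URLs here are placeholders.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: fedora28
  namespace: devconf
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/devconf-demo.git
    contextDir: containers/fedora28
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: fedora28:latest
```

Applying a file like this with `oc apply` gives the cluster everything it needs to build and track that image, which is essentially what the setup playbook automates for each container.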
So even if you needed a bare-metal machine, or a VM in AWS, GCE, or OpenStack, you could use this mechanism to provision it. Then we have an Ansible executor container, another helper container, that configures the resource once it's been provisioned, executes tests, and then collects logs and artifacts. A very simple workflow. The pipeline code behind it is a little annoying to maintain, so we've handled that for you in our libraries and use these helper containers to administer it.

Now I'm going to lay out how all our pieces come together. As you can see, I've blocked out the future stuff in this diagram; I'll bring it back on the future slide of where we're headed, but I wanted to show the whole diagram. At the bottom, whether it's Minishift or OpenShift, whether it's a staging or production environment or your local Minishift, you have one playbook that sets up a cluster for you. Our contra-env-setup infrastructure, the pink area with the future pieces blocked out, has our helper containers, our metrics containers like InfluxDB and Grafana, and all the Jenkins infrastructure ready to deploy and roll out. If you have pipelines, there are sample pipelines in the contra-hdsl code on the left that can be deployed, if you so choose; it doesn't have to be. Same for the infrastructure: if you don't want some of these pieces, you can leave them out. Then we come to the yellow part on the right side. This is you. This is your project, designed to build and deploy on the Minishift or OpenShift infrastructure, and this setup allows you to do that. Kruno's talk will go into a little more detail on some of that, but this shows everything we're working with across the board, and you'll see some of it firsthand when I show the demo.

Everything comes down to this. It's all Ansible-based, so you could pass the parameters one by one, or you could just generate the kind of file I have here for contra-env-setup, which is a bunch of parameters I'll walk through, along with the S2I templates in the project, the Dockerfiles, and the Jenkins libraries and pipelines I mentioned before. This is basically the structure of the sample project I built for the conference, which will be available afterwards in the references on the slides.

So this is that file: some easy parameters. Cleaning things up; whether to run the prerequisites for nested virtualization, which in this case we don't care about. I'm going to set up my containers, and I'm also going to set up pipelines. I don't want to set up the sample project, and I don't want to modify any security contexts. I'm setting up Minishift here, and my profile name is "minishift"; it can be whatever you want to call it, but that's what I chose for this example and this demo. The file also shows the Minishift version and the OpenShift version to use with it, and a basic username and password to set up. The bottom of the file has some of the other stuff. Kruno actually added this whitelist/blacklist feature: for this demo I really don't need my helper containers, so I'm going to leave them out of the installation. (A rough sketch of what such a file looks like follows.)
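Here's that sketch, with hypothetical key names that mirror the options I just described; check the contra-env-setup repo for the real variable names and defaults:

```yaml
# Hypothetical parameter file for the setup playbook. Key names are
# illustrative of the options described above, not the project's exact vars.
run_cleanup: true           # tear down any previous cluster first
run_prereqs: false          # skip nested-virtualization prerequisite checks
setup_containers: true      # build and deploy the containers
setup_pipelines: true       # seed the sample Jenkins pipelines
setup_sample_project: false # skip the bundled sample project
modify_scc: false           # leave security context constraints alone

setup_minishift: true
profile: minishift          # profile name is arbitrary
minishift_version: v1.34.0  # versions here are examples
openshift_version: v3.11.0
username: developer
password: developer

# Blacklist: leave the helper containers out of this installation.
blacklist_containers:
  - linchpin-executor
  - ansible-executor

project_namespace: devconf
project_repo: https://github.com/example/devconf-demo.git
memory: 6GB                 # resources for the Minishift VM
cpus: 2
```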
Back in the real file, what the blacklist means is that, where the diagram showed the infra setup with those helper containers listed, it will identify these two as ones we do not build or deploy in the environment at all. Then there's the name of my OpenShift namespace and the project repo I'm going to use; you can go to that URL, and everything I'm showing you today is available there. Then some information about the memory and CPUs I want to give my Minishift instance. There are more settings down below, but they're less important, mostly around the InfluxDB and Grafana setup, and you can look through them as you wish.

All right, so we're ready to do a demo. This is the command I ran to kick it off; I just wanted to show it, and it's in the slides, but let me bring the video over and talk through it. First, some pre-cleanup I did before running the demo from scratch: I cloned the contra-env-setup repo, and I also cloned my project repo, which in this case is the devconf-demo. That's really all you need to get started with this. At this point I'm ready to kick off the command line I just showed you on the demo slide. Actually, sorry, just so you can see it: this is the file I just showed you. I'll also make this video available after the conference. I'm just going to skip through some of this.

So here's the command line we're kicking off; I just have to put in my password. Backing up just a bit: all I did was provide my user on my system, point to that setup file, and pass the sudo flag, so it asks for my sudo password and my SSH password, and then we proceed. In my case it's going to go through and clean up the previous cluster I had set up there, because I enabled the cleanup option. At this point it goes and gets the minishift binary and starts actually creating the cluster. It deploys Minishift, basically a VM, and once the VM is up and running, it deploys the version of OpenShift I want, in this case 3.11. It also pulls down the oc binary, so you can run oc commands as well.

So now the Minishift cluster is set up, and from my project I'm deploying this Fedora image, which comes from a Dockerfile via the S2I template. I grab the IP of the VM that's running my OpenShift environment. At this point I've kicked things off and I'm waiting for the Fedora container to build, but even before that, we can log into the Minishift instance and have a fully up-and-running OpenShift cluster. We can go to our devconf namespace. Nothing is deployed there yet, but under builds you can see we're already building, or in the process of building, the Fedora container from my project; it's not one of the infrastructure pieces. After that first project container, it moves on to the helper containers: the Jenkins master and slave, as well as the Grafana and InfluxDB containers. We also have a container-tools image, with Buildah and Podman, as part of our helper containers, and it gets rolled out as well. Let's speed this up.
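By the way, the cluster-creation step you just saw boils down, conceptually, to Ansible tasks like the following. This is a simplified sketch, not the playbook's actual tasks, and the minishift version, URL, and paths are illustrative:

```yaml
# Simplified sketch of the cluster-creation step: fetch a minishift release
# and start a profile running OpenShift 3.11. Versions/paths are examples.
- name: Download and unpack the minishift binary
  unarchive:
    src: https://github.com/minishift/minishift/releases/download/v1.34.0/minishift-1.34.0-linux-amd64.tgz
    dest: /usr/local/bin
    remote_src: yes
    extra_opts: [--strip-components=1]

- name: Start the minishift profile (creates the VM, then deploys OpenShift)
  command: >
    minishift start --profile minishift
    --openshift-version v3.11.0
    --memory 6GB --cpus 2
```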
I don't want to fast-forward too much, but now we're just finishing up building these containers. You can see we just had Fedora before; now all of them are either complete or in a running state, deployed to my Minishift instance. Since the Grafana, Jenkins, and InfluxDB containers have deployment configs, they get deployed out to the OpenShift instance after they're built, and I can show you that too. Since you have oc available, besides the UI you can also see the same containers on the command line: the complete setup of everything on the cluster at this point. It gives you a full view of that. Sorry about the video choppiness.

So now everything has been built, and we're deploying our InfluxDB, Grafana, and Jenkins. Jenkins is the last one to come up: Grafana is now deployed, then InfluxDB, and Jenkins' pod is in the process of coming up, I believe. But we can already go to the Grafana UI; it's up and running. And this is all local on my machine. We could take this whole thing, skip the Minishift part, and deploy it to an OpenShift cluster; that's perfectly feasible as well. Jenkins is still coming up. Now all the playbooks have finished running on that full system. If you're deploying Minishift, the whole thing takes somewhere from 15 to 20 minutes.

Once this is deployed, we have a Jenkins instance we can go to, and we get to the end part of this: now that it's all set up, which is great, how do we utilize it? We have a pipeline that we seeded; a seed job picks up that pipeline file. It runs, basically, a stress test, using stress against CPU and memory. We put the results in InfluxDB, and Grafana graphs them for us. Then we can see it here; you just have to do the initial login. That pipeline is already running. At this point, what it's doing is: Jenkins says, I have this pod template with the Fedora 28 image, which is built; I want to deploy it and run this metrics.sh test on it to stress my CPU and memory. From there, you can get stats in Grafana not only on the CPU and memory metrics, but also on things like how long your pipeline builds took to complete, and how many succeeded or failed. You can get all that information from there. This just shows, again, the full stack up and running.

On to the cool stuff. This shows the test executing. The timeout is 90, so it runs the stress test for 90 seconds, and it does that about five times. Initially, obviously, the data wasn't that impressive to show, so what I did was let the system run for almost 48 hours to get more data. But I'll show you the initial data that gets put out there first. Now we can go to Grafana. It's not as exciting, because we did basically one run: one build succeeded, one little dot in the last hour. But I have another video that shows this running over a period of time, and if the network permits, we could also look at this a little bit live. So this is the second video. It shows we've run 50 builds now, so now we have more of a data set to show.
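Before we look at the graphs: the pod that pipeline schedules is, in spirit, something like the sketch below. This is hypothetical; the image and script names follow the demo's narration rather than the actual project sources, and the metrics.sh arguments are made up for illustration:

```yaml
# Hypothetical pod of the kind the Jenkins Kubernetes plugin schedules for
# the stress-test stage; metrics.sh then pushes its numbers to InfluxDB.
apiVersion: v1
kind: Pod
metadata:
  generateName: stress-test-
  namespace: devconf
spec:
  restartPolicy: Never
  containers:
    - name: fedora28
      image: fedora28:latest
      command: ["/bin/bash", "-c"]
      # Five iterations of a 90-second CPU/memory stress run, as in the demo
      args: ["for i in $(seq 5); do ./metrics.sh --timeout 90; done"]
      resources:
        limits:
          cpu: "1"
          memory: 1Gi
```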
You can see, based on the last 24 hours, what was stressed in CPU and memory, and, for the sample pipeline, how many runs have gone through it: 49, and we're on the 50th right now. We can show the 24-hour view of what that looks like, which also gives you an idea of how long your builds took to run. Was it faster during certain parts of the day? This is just a sample of what you can do with it; your project may have a different target or goal. Ours was just to show you how all this stuff wraps together, using the different components, all running inside the new platform, which is OpenShift and Kubernetes. I do have another demo, of Jenkinsfile Runner and how that's all set up, but I'll leave that for the end if we have time. So that was basically the demo of how all this comes together. It's all maintained in two repos: contra-env-setup is the main repo, and then there's your project repo; in this case, the sample project I set up.

So what does the future look like? I mentioned APBs, and they're sort of the near future. They're really good for deploying a configuration into OpenShift, but not really for maintaining its lifecycle; that's where you get into Helm charts or operators. Helm is a tool for managing packages called charts, and it provides three important concepts: charts, repositories, and releases. The chart bundles the information necessary to create an instance of a Kubernetes application; it's basically a builder, or packager, for the Kubernetes application you're going to run.

Then we get to operators. Operators are a bit more involved: a method of packaging, deploying, and managing the complete lifecycle of your Kubernetes application. As you make changes, your operator controls whether something gets redeployed or torn down; it's almost self-healing for an application in that sense, and it just manages it. It's software to manage software, as it's been called.

And then we get to one of the new kids on the block, Knative. It really extends the Kubernetes API and provides a set of middleware components that are essential to build applications: serving, which you might have heard of as serverless, so you can deploy an application on the fly via a CRD; eventing; and, the part that comes back to CI/CD, the build pipeline. They have a component there that may fill in some of the gaps, or take over some of the things Jenkins does, natively in Kubernetes itself. Knative is an upstream project; I recommend checking it out, I have links at the end of the presentation, and you can contribute to it. The key thing: just like source-to-image is OpenShift's way of taking source code and building an application out of it, this is another way to do that, and probably more upstream, and more widely used, than S2I is at this point. Going back to the Knative setup: if you're using serving and eventing, it does require Istio as the mesh.
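To give a feel for Knative serving, here's a minimal Service manifest of the usual hello-world shape; the exact apiVersion depends on your Knative release, and the image is the upstream Knative sample, used here purely for illustration:

```yaml
# Minimal Knative Service: Knative generates the Deployment, Route, and
# autoscaling (including scale-to-zero) around this single resource.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: devconf
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "DevConf"
```

Compare this with the BuildConfig earlier: one short CRD instance stands in for a whole stack of deployment objects, which is why people see Knative filling some of the gaps CI/CD tools cover today.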
Knative is source-centric, built to deploy container-based applications, and it can run anywhere: on the platforms we've been showing, OpenShift or Kubernetes, whether in a cloud, in third-party data centers, or on-prem. The idea behind Knative is also to identify common application patterns for deploying your application and really codify those best practices.

To round out the future of what I showed before: today we have S2I templates, Dockerfiles, and Jenkins shared libraries, which is great, but as we look toward other things we could deploy as part of the setup I showed you, this infrastructure could use APBs, Helm charts, the Knative templates that are available, or even operators. So, bringing that blocked-out box back into the diagram: maybe we would deploy operators and Knative templates that help the infrastructure itself, or maybe your project brings its own operators and Knative templates to use as well. The idea is that we want to make this platform flexible so we can keep adding to it, and you can deploy your own pieces into it. All you really need is the OpenShift platform, plus some Ansible post-hooks to do any extra work you want after you've deployed it.

So those are my links. I think we're really only halfway through, so I do have some time to show you that other demo, and then we'll open it up for Q&A.

I wrote a script that pulls down the Jenkinsfile Runner repo and builds the package, the Jenkinsfile Runner jar. At that point I can pull down the latest Jenkins war file, inject my own plugins into it, and then actually run a Jenkins pipeline, which you'll see at the end of this. This part is just doing some of the install. Here we're pulling down the war file; that was the build of Jenkinsfile Runner, and now we pull down the war file, and you'll see how we inject the plugins into the system. At this point we had a list of plugins, and now we're installing them into the Jenkins home directory. This is the command at the end: the jenkinsfile-runner command, pointing at the Jenkins home directory, at where we put the plugins, and at our Jenkinsfile, which it then executes. That was basically it: we ran an ls in a shell step of a Jenkins pipeline. So instead of a full-blown Jenkins with the UI and all the persistent state it needs, you can run Jenkins right from the command line with Jenkinsfile Runner. I just wanted to show where CloudBees is headed with Jenkins in that case. And there's the pipeline file itself. That's it. (A rough sketch of that whole flow follows.)
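Here's that sketch, assuming local checkouts and following jenkinsfile-runner's documented -w (war), -p (plugins), and -f (Jenkinsfile) options; the paths, download URL, and build output location are illustrative, not my exact script:

```yaml
# Illustrative recreation of the demo script: build Jenkinsfile Runner,
# fetch jenkins.war, then run a Jenkinsfile headlessly. Paths are examples.
- hosts: localhost
  tasks:
    - name: Clone Jenkinsfile Runner
      git:
        repo: https://github.com/jenkinsci/jenkinsfile-runner.git
        dest: "{{ ansible_env.HOME }}/jenkinsfile-runner"

    - name: Build the runner package
      command: mvn package -DskipTests
      args:
        chdir: "{{ ansible_env.HOME }}/jenkinsfile-runner"

    - name: Download the latest Jenkins war
      get_url:
        url: https://get.jenkins.io/war/latest/jenkins.war
        dest: /tmp/jenkins.war

    - name: Unpack the war so the runner can use it
      command: unzip -o /tmp/jenkins.war -d /tmp/jenkins
      # the desired plugin .hpi files would be copied into /tmp/plugins here

    - name: Run the pipeline headlessly
      command: >
        {{ ansible_env.HOME }}/jenkinsfile-runner/app/target/appassembler/bin/jenkinsfile-runner
        -w /tmp/jenkins
        -p /tmp/plugins
        -f ./Jenkinsfile
```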
Now I'll open it up for any questions about what I presented; I know I covered a lot, and we can go back through anything you need. Any questions?

[Audience question, partly inaudible] Oh, the Jenkinsfile Runner, or when I was showing the demo? The base is actually the OpenShift Jenkins image that's available upstream. I don't think that's Fedora-based, though. Minishift is the overall platform it's running on; we're deploying a container that Jenkins runs in, which is based on OpenShift's version of that image. That base is what allows us to configure Jenkins with any globals we want to set up, any of the plugins, all that stuff. The same thing I showed you with Jenkinsfile Runner, installing those plugins, OpenShift does with the same mechanism via the S2I templating. Any other questions?

[Audience question] So the question is: can you have a Kubernetes cluster on Amazon? We could provision a cluster in Amazon through this, or we could talk to an existing cluster. We've focused on OpenShift and Minishift, which is probably a little different from a vanilla Kubernetes cluster, but there's no reason you couldn't build an OpenShift cluster in AWS as well, because, as you heard, I used OKD, the open-source version of OpenShift, anyway.

[Audience question] Just the yellow part, which is... yes, exactly. In this case I only tied it to Jenkins, but there's no reason, if you knew how to do it some other way and just wanted to set up Grafana and InfluxDB, that you couldn't deploy those templates your own way using our mechanism here; it would live in your project, and you can configure it any way you want.

Are there any other questions? All right. Well, thank you. Appreciate it.