My name is Carlos Sanchez and I'm going to talk to you about continuous integration, continuous delivery, all those cool things that you can do with containers.

A little bit about me. I'm an engineer at CloudBees, where I work on scaling the Jenkins platform. I started the Jenkins Kubernetes plugin over three years ago, more or less, when Kubernetes was still not very popular. I'm a big open source contributor at different foundations, mostly Apache — Apache Maven — and the Eclipse Foundation, and nowadays Docker images, Puppet, you name it. I'm also part of the Google Developer Experts program for the work I'm doing with Kubernetes.

Before we start: who's using Kubernetes? I'm not going to ask who knows what Kubernetes is. Okay, keep your hands up — who's using it in production? Okay, all right. Who's still sleeping at night? Okay, who's using Jenkins? Jesus. Who's using Jenkins with Kubernetes? I could just stop and leave and it would be perfect — it's the perfect audience. If I don't talk about something you expected me to talk about, please grab me later. I'll be around the whole week, and I'll be at the CloudBees booth downstairs, so if you need to talk to somebody about something, especially Kubernetes and Jenkins, you can find me around. That's why I'm here. I've also had a cold since yesterday, so this is not my normal voice, I think — I cannot hear it very clearly myself.

So I'm going to talk a little bit about what we did and what we learned from this experience, covering a little of what we do at CloudBees and also the open source stuff. I hope to be very clear about what is open source.

Our main goal a few years ago, when I joined CloudBees, was to scale Jenkins — a traditional application — using containers. One of the reasons is that everybody came to us asking to run CI/CD in containers, right? Because why not? Docker is the golden hammer for everything. So basically we wanted to run isolated Jenkins masters and agents, enforce memory and CPU limits to make sure that people don't abuse the system, and run it at a bigger scale. Obviously this is not trivial. That's how they moved containers around in some place in the Caribbean until a few years ago, and sometimes what we do resembles that.

There are two ways to scale Jenkins: you can have more agents per master, or you can have more masters with their own agents. For the more-agents approach there are plenty of plugins you can use to dynamically create agents on demand. It started years ago with the EC2 plugin, where you could just start VMs; obviously today with containers you can just start containers, which is a lot faster if that works for you. The problem is that the master is still a single point of failure. In Jenkins you can have different versions of plugins and different configurations, so it's not always useful to have one big master for everybody, especially if you have multiple teams working on different things like mobile, Java, JavaScript, whatever. There's also a limit on how many agents you can attach, although that limit keeps getting higher. The other solution is to have more masters, where different teams or different organizations use different masters.
So it's like a sharding model, where these guys who develop Java applications have their master with their own plugins and their own agents, and those guys who develop something else have a different master. The problem is that you lose single sign-on, and you lose the centralized operation and configuration of your cluster.

So what we built over the last two years was CloudBees Jenkins Enterprise, which has the best of both worlds, using something called Operations Center, where you can manage multiple masters, and then dynamic agent creation in each master using one of the plugins you can find in the open source community. The first implementation, for historical reasons only, was using Mesos, and we used Marathon, Terraform, Packer, all these cool technologies. And just recently it was announced that we are launching this on Kubernetes.

So, Jenkins and Kubernetes: we can run both the masters and the agents on Kubernetes. The interesting part, especially for the masters, is the storage — how you handle the storage of the masters — because they just use the file system to store build outputs, logs, configuration and so on. Agents typically don't need this storage. Storage is one of the typical problems that people face using Kubernetes or any distributed container cluster. We use persistent volume claims, so you can plug in your own persistent volumes using whatever backend you want, which is a cool thing. The problem — and I just added this quote from one of our engineers — is that Jenkins is pretty much the worst-case scenario for network-based storage, because it does a lot of writes of many small files. So when people try to run Jenkins on EFS, or on an NFS server that doesn't have provisioned performance on the backend, they hit all these performance issues. Also, one thing we had built with Mesos that we don't have on Kubernetes today is multi-availability-zone storage with EBS, because Kubernetes doesn't provide that yet.

The other important part of running Jenkins at scale is the networking. You have to know that there are two ports that Jenkins uses: HTTP, and the JNLP port that the agents connect to. HTTP is typically the web UI and APIs; JNLP is just for connecting the agents. We use an ingress controller and ingress rules in Kubernetes, so basically we can have multiple clusters running at the same time just using different host names, and we use path-based routing to go into the Operations Center or into any of the masters you run. For JNLP, we have the agents that start dynamically using the Kubernetes plugin, but you can also attach agents manually — let's say you want a Windows box attached as a Windows agent, or a specialized machine with something that you cannot containerize or cannot run in the cloud. For that we expose these ports using a NodePort in Kubernetes, which allows you to plug in whatever agent you want.

What the Kubernetes plugin provides is agents with infinite scale, for whatever definition of infinite you want to use — it's only limited by the amount of resources you have in your cluster. It's all dynamic; each agent runs in a pod. It has something unique that the other provisioning plugins in Jenkins don't have, which is allowing groups of containers in pods, and I'll show you demos of that now. It also supports Jenkins Pipeline.
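Just to make that concrete before the demos: the simplest possible pipeline using the plugin asks for a pod by label and runs steps in it. A minimal sketch — the label is arbitrary and the pod only gets the default JNLP agent container:

    // Minimal scripted pipeline using the Jenkins Kubernetes plugin.
    // 'demo' is an arbitrary label; the plugin provisions a pod for it and
    // discards the pod when the build finishes.
    podTemplate(label: 'demo') {
        node('demo') {
            stage('Hello') {
                // Runs in the default JNLP agent container inside the pod
                sh 'hostname'
            }
        }
    }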
Who is using Jenkins Pipeline today? Jesus, this is my dream made reality. You find this all boring so far, right? So with the Pipeline support you can define your agents in the pipeline itself. You can attach persistent workspaces and whatever volumes you want to attach — there's a small sketch of that further down. And we auto-configure it by default: if you run your master in Kubernetes, your agent configuration is automatically set up to use the same namespace, the same credentials and all that.

So from now on it's demo time; you can ask me questions if you want. Let's make sure I'm in the right place. Here I'm watching the pods that are running. This is an instance of Operations Center, shown in Blue Ocean — which I'm not going to ask about because I'm sure everybody is using it already; it's the new UI for Jenkins that doesn't look like the 90s. So let's go back to the 90s for a second. I have two masters already running. This is Operations Center; I can click here and create a new master, but the interesting bit we have is that I can create what we now call a team, which does all the cool things.

What is this doing? In the back end, this is creating a StatefulSet, creating the storage for the master, making sure there's one pod running, creating the Service, creating the Ingress rules. All of that is handled by Operations Center behind the scenes — this part is still CloudBees Jenkins Enterprise. And if I go to the logs — not this one; this is the one I just started — I can click on the logs and see everything that has been happening: the persistent volume claim getting bound, the pod being created here, and yes, it's creating the Service, creating the Ingress using the host name, all that. Basically I can see from here what's happening to my master and how it's coming up. And it gets all the cool things that Kubernetes provides for free: if the pod dies, if the node dies, it's all highly available.

Yes, this is also running in the Kubernetes cluster. It's called CJOC, CloudBees Jenkins Operations Center. If I show you everything that is running in this namespace — because I can also install multiple clusters of this in different namespaces — I have one StatefulSet for the Operations Center and one per master, three masters in total. This is the one that I just started, the team. You see that I get the Service, I can get the Ingress. We also use Roles and ServiceAccounts to limit the exposure of privileges. These are all the Ingresses that we have — sorry, maybe at the back you can't see it. At some point this is going to come up, but I'll show you in the other one. Well, it already came up. From there on you have your own Jenkins master where you can install your plugins and use it for your team. So it's one-click provisioning.

Yes, this is running on a Kubernetes cluster on Amazon. It uses EBS. We tried EFS; with EFS you have to commit a very large amount of storage to get decent performance. So let me switch to the other server I have, the other master. These are automatically configured to use the Kubernetes plugin for provisioning, and I'll show you what the Kubernetes plugin can do. The one I'm showing you now is an open source Jenkins running on Google Kubernetes Engine. It has the plugin configured, so it would be the same in any of the masters. But I'll show you some examples. All right.
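Since I mentioned attaching volumes: with the open source Kubernetes plugin that's just another parameter of the pod template. A sketch, assuming you already have a persistent volume claim — the claim name and mount path here are placeholders:

    // Attach an existing persistent volume claim to the dynamic agent pod,
    // for example to cache the local Maven repository between builds.
    // 'maven-repo' and the mount path are placeholders for whatever claim you have.
    podTemplate(label: 'with-volumes',
        volumes: [
            persistentVolumeClaim(claimName: 'maven-repo', mountPath: '/root/.m2'),
        ]) {
        node('with-volumes') {
            sh 'ls /root/.m2'
        }
    }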
If I create a new item, call it KubeCon and make it a pipeline, what I can do here is write my own pipeline and define where I want this build or this job to run. So I can define a pod template: I say the name is kubecon and the label is kubecon2. And then I can say: in a node with the label kubecon, run something — sh, and let's run hostname to see the container hostname, and sleep for a bit so we can look at it. So I'm going to save this and run it. And let me switch context to the Google Kubernetes cluster — yes, I'm there — and let's watch the pods start coming. So there's a node that has been created, a pod that has been created, as this is running. With a very simple syntax I just got an agent dynamically provisioned, and once the job finishes, the agent is discarded. There it is — this is the hostname. And if I wanted to run this multiple times, it is basically limited only by the resources you have in your Kubernetes cluster. There should be more agents coming up, more pods.

While that's running, let me edit and create the next pod template pipeline. What's happening here is that, by default, this creates a container inside the pod that is basically the agent — the remoting agent for Jenkins. That container, by default, is an alpine image that just has a shell and Java and barely anything else, so I cannot do a lot of useful things with it. What I can do is create my own container image that has all the tools I want — it may have the Docker client, it may have Java, JavaScript, whatever — but that's a pain in the ass because you basically have to mix things from different Docker images. Or I can just compose a pod with the tools I want. You can see there are more pods coming up and being created as the jobs keep running.

So what I can say is: in this pod, you're going to create some containers. Let's use a container template and let's call it maven, because I'm sure everybody here either loves Maven or hates it, one or the other. And the image is going to be maven:alpine. Two important things, because containers die if they don't have a long-running process: since this is not a database or a web server or anything that keeps running, I set ttyEnabled to true and I run a command that is cat. This keeps the container running the whole time. And now I can go and say — well, actually, I'll show you first that mvn does not exist here — then, inside the container maven, run mvn -version and sleep for a bit.

So I run this now — I should have changed the name, so it's showing with a different name. Okay, I made a typo somewhere. Okay, mvn not found — somebody's paying attention, I like it. Some mistakes I was going to make on purpose to see if you were paying attention; others I wasn't, but you'll never know which ones. Anyway, you see that in the first call mvn did not exist, because that runs in the other container, the JNLP remoting agent. Again, I forgot to change the name, and now I have a different error: container maven does not exist. Oh yeah, because it's using the old pods. So, kubecon-maven it is. This looks very simple, but it's really powerful, because it allows you to reuse all the images that are already available in Docker Hub, so you don't have to go and create your own images. Okay, now we have the pods being created and this is running both — I'll go there in a second.
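Put together, what I typed looks roughly like this — a sketch, with the label and the sleep only there so we have time to watch the pod:

    // Pod with an extra maven container alongside the default JNLP agent container.
    // maven:alpine has no long-running process, so keep it alive with a TTY and `cat`.
    podTemplate(label: 'kubecon-maven', containers: [
            containerTemplate(name: 'maven', image: 'maven:alpine', ttyEnabled: true, command: 'cat'),
        ]) {
        node('kubecon-maven') {
            stage('Maven') {
                // Without container(), steps run in the JNLP container, where mvn does not exist
                container('maven') {
                    sh 'mvn -version'
                    sh 'sleep 30'
                }
            }
        }
    }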
And this is running in a maven container inside the pod that I just launched, okay. Now, the gentleman here was asking whether this also works with declarative pipelines, which you probably all know — a simpler, more concise version of pipelines. By default you can use any Groovy you want in a pipeline, which is really good because you can use any Groovy, and really bad because you can use any Groovy you want. So there's this idea of declarative pipelines, where you can define the agent in a more concise way. I can say the same thing I was saying before: set some label, use a pod template that has a maven container in it, and then run some stages — there's a sketch of that a bit further down. If I run this, and I show you the previous one, it's exactly the same. So you can use this much simpler syntax if you want, because the problem with the Groovy one is that it gets complex pretty easily.

Another example of how powerful it is to have pods with multiple containers in them — I'm going to kick this off and then explain it as I go. Let me just copy it and show it here. This is a complex one: a Selenium pod template on a pipeline that will run two sets of tests in parallel. It's running two maven containers, one for the Firefox tests and one for the Chrome tests, then a Selenium hub in another container, and that Selenium hub is connected to two other containers, one with Selenium Chrome and one with Selenium Firefox. So you have five containers plus the remoting one: it's a pod with six containers. I'm reusing the container images that were released by the Selenium project. The interesting bit here is that everything in a pod runs in the same network namespace, and these images were designed to run with Docker Compose, so you have to tweak different environment variables so the containers don't collide. That's why there's all this customization — you have to change the display port, you have to change where to look for the Selenium hub — but without touching the images, you define this pod that will run all of it for you. In parallel I'm running the Firefox tests with Maven, just passing the Selenium browser as Firefox, and the Chrome browser tests the same way. And then there's another option in the Kubernetes plugin to get the container logs, so at the end I'm just printing the logs from both executions.

If I go back, you will see I have the tests running in parallel in Chrome and Firefox, and this should have shown the pods launching here — let's run it again; I want to go to the previous one. So these steps are running from the maven container, testing against the Selenium hub, and at the end I can also get the logs from the container, so this is whatever was printed to the standard output of the container.
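Coming back to the declarative question for a second, the Maven pod from before would look roughly like this in declarative syntax — a sketch, assuming a plugin version that supports the kubernetes agent block with an inline containerTemplate; newer versions also let you pass raw pod YAML instead:

    // Declarative pipeline with a Kubernetes agent: same maven container as before,
    // defined in the more concise declarative form.
    pipeline {
        agent {
            kubernetes {
                label 'kubecon-declarative'
                containerTemplate {
                    name 'maven'
                    image 'maven:alpine'
                    ttyEnabled true
                    command 'cat'
                }
            }
        }
        stages {
            stage('Build') {
                steps {
                    container('maven') {
                        sh 'mvn -version'
                    }
                }
            }
        }
    }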
So the other interesting thing you can do is deploy to Kubernetes from Jenkins. This is not tied to the Kubernetes plugin that provisions the agents — you can have the agents in one Kubernetes cluster and deploy to that one or to a different one. Let me show you an example. For some reason this was already started — yes, you can get the output if you want; the kubectl output here is just showing you that pods are starting, pods are finishing, and so on. Let me kill these pods first. By default there's going to be a new pod created every time, but you can also set how long they stick around, so if you want a pod to stay around for doing other things, you can do that. That's typically the confusing part for people: if the pod sticks around, Jenkins will schedule more work onto that pod based on the labels. They do get deleted at some point, but they are not necessarily clean, and because it depends on how often Jenkins goes through the provisioning phase, you may find pods that stick around longer. One thing we do is use unique IDs for the labels, so every job runs in its own pod, and you ensure it only goes to that pod because the ID is generated for every build of every job.

So I have this croc-hunter application, which you may know from other people's demos. — Am I providing Docker? I'm running the containers inside pods; yeah, that was the question. Yes, you can mount things and make them available: you can mount persistent volumes, you can mount files from the host if you want, you can do all that. And yes, it's using the Kubernetes API; it's not talking to the Docker daemon at all. Let me just launch one thing, because it may take a little bit to come up. So I'm launching this execution — and you have a question here. Okay, so the question is how do I get this out of the pod into somewhere else — no, okay, you want to use your pod to build a Docker image. You can do it the same way you would do it in Kubernetes: you mount the Docker socket. Actually, I think this example does it.

So this example builds a Go application, packages it in a Docker image, pushes the image to the Google Container Registry, and then deploys it to Kubernetes — a more complex pipeline. I have again several containers running — actually, this is the branch, oops, okay. I have the agent container, the JNLP container, here; I have a docker container, which is just the Docker client; the Google Cloud SDK, which I only need to push to the Google registry; the golang image to build my Go application; and the kubectl image to run kubectl commands. So I don't have to create one image with all of this, I just plug them all together. And what I'm doing is mounting the Docker socket into the pod, right? This has the benefit that you can use Docker commands, and it has the problem that you can use Docker commands, so it's up to you whether you want to use it or not. Yes, if you use the Docker socket, you're on your own, basically. Sorry — no, the plugin itself only interacts with the Kubernetes API, and that's it. We can talk after the talk if you want to discuss how to build Docker images in a Kubernetes cluster; that's going to be a long conversation. Let me just finish this example.
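The shape of that pipeline is roughly this — a sketch, not the exact demo code; the image names, registry project and manifest path are placeholders, and the gcloud authentication for pushing to GCR is left out:

    // Pod composed of single-purpose containers, plus the node's Docker socket mounted in,
    // so the docker container can build and push images using the host's Docker daemon.
    podTemplate(label: 'go-build', containers: [
            containerTemplate(name: 'golang', image: 'golang', ttyEnabled: true, command: 'cat'),
            containerTemplate(name: 'docker', image: 'docker', ttyEnabled: true, command: 'cat'),
            containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl', ttyEnabled: true, command: 'cat'),
        ],
        volumes: [
            hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
        ]) {
        node('go-build') {
            checkout scm   // assumes a multibranch or SCM-backed job
            stage('Build') {
                container('golang') {
                    sh 'make'
                }
            }
            stage('Image') {
                container('docker') {
                    // placeholder project name; pushing to gcr.io also needs credentials configured
                    sh 'docker build -t gcr.io/my-project/croc-hunter:$BUILD_NUMBER .'
                    sh 'docker push gcr.io/my-project/croc-hunter:$BUILD_NUMBER'
                }
            }
            stage('Deploy') {
                container('kubectl') {
                    // placeholder path to the Kubernetes manifests kept in the repository
                    sh 'kubectl apply -f k8s/'
                }
            }
        }
    }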
So, interesting bits here: I'm building the Go application inside the golang container, running make — nothing fancy there, but you see I'm reusing existing containers, I'm not creating my own images. Then in the docker container, which has the Docker CLI installed, I'm running docker build and then gcloud docker push, so this is creating the image and pushing it to the Google registry. And in the kubectl container, what I'm saying is: hey, I have some deployment definitions in GitHub, just execute them and install this application, whatever it does.

So if this ran fine — which I don't think it did; waiting for next available executor — yes, and that's on the slides too, I think. Yeah, I'm in the right place. Of course the demos never work. Okay, so this is waiting — oh, there it is; it's just patience that was lacking. This is going to go through all of that, and you can also have approval steps. What I'm doing here, if time permits — there it is — is asking: do you want to approve this deployment? And this pipeline is going to sit there waiting for one hour for me to click. Unfortunately it took a little bit longer to start. This pipeline also has a GitHub hook, so I can commit changes and the updates get triggered and rolled out with kubectl. And if I hit this croc-hunter URL — there's nothing running here yet, it's still going through compile and test.

I think this is going to make me run out of time, but if you have questions now — yes: if any of the containers in a pod dies, it's what would happen with any Kubernetes pod. The pod ends, it tells you it exited or whatever, and you get messages in the log. One thing we do is set a cap, so it doesn't keep spinning things up over and over again. Let's see — okay, so this is deployed, and I got my app built in a container. I'm missing the part where I was pushing things to GitHub, but you get the idea: this is building the image and... okay, time's up. All right, well, we can talk later, don't worry. Thank you.