So, whoever's not talking can do it, I mean, if there's not, once I hand off to Rika, I'll just sit here. So, thanks again for coming. There's a few organizational messages while the guys are solving the technical difficulties again. So, please mute your mobile phones, and please keep the room as clean as possible. When you really need to leave, please open the doors as quietly as possible and close them again, please. Don't forget to vote for the Lightning Talks that take place tomorrow. And there's a party tomorrow, as you may know. I think the tickets are gone for today, but there should be more tickets tomorrow at the Red Hat recruitment booth at the entrance. That's it for me, and now I hope the guys from OpenShift will take over. Okay, good afternoon. Nobody breathe, okay? Because this thing is... Did it just go off again? Yeah, it's on. Do you want me to get my laptop? So, we're going to go with this. We have some color abnormalities, but that's okay. So, welcome to our talk about CI/CD, Continuous Integration and Delivery with Jenkins and OpenShift.
This is really a tools kind of discussion, but we want to recognize from the beginning that CI/CD practices are not just about the tools, right? This is about a culture where we're thinking differently about how we deliver software, how we test software. It does have a big impact on the tools, but it's just as much about how we collaborate, how we think about who does testing and who does development and who does the deployment to production, that type of thing. So, we want to keep that in mind, but we're really going to be talking primarily about the tools today. What we're going to go through briefly is the goals of the project and the demo that we're going to show, the Jenkins setup and those kinds of details, the workflow that we've set up in a generalized way that we'll be demonstrating, and then we'll provide some opportunity to see things work with a short demo, and then we'll take some questions. So, just briefly, why containers? Why are we using containers in our testing and in our deployment and development work? Speed is a huge thing, and just the whole agile nature of how quickly you can work with containers, deploying them, replacing them very easily and quickly. You're forced to treat them as cattle, you know, there's lots of these things, and you don't care if one dies or one's broken; we're going to rebuild, we're going to redeploy, that kind of thing. And we're trying to bring that whole microservice buzzword, if you will, into the way we test, not just into the way that we build our application and deploy it, but also into the whole infrastructure that provides that deployment pipeline. So, we want to dogfood containers across the board.
And the other is: how many of you have a system that you maintain that you've spun up at some point, and if it goes down you really don't know what you would do? Because, you know, you just spun it up one weekend, and you maybe didn't provide storage that was backed up, and it's kind of on this other part of the network that isn't real robust. So, if you're in that situation, if you're like me, you can get lazy very easily. And we find that working with containers often requires you to think about persistence: if this thing dies, what's going to happen? What if my database isn't backed up, and all that kind of stuff? So, we become a little more disciplined about these things. It's kind of a pain sometimes, but it's actually probably a good thing in most cases to think about persistence, state, decoupling your application from your data. I don't remember what the last thing was. What was the last thing? Reproducibility. Okay. And I think that gets to the other point, you know, about persistence and being able to spin this thing up in a reproducible environment. Okay, next. So, why OpenShift? For me, whenever I started working with containers, it was kind of as a toy, right? You have this thing and you play with it, and then you have another thing and you play with it. But when you're serious about things, you've got problems and you need some type of platform to orchestrate these things, to provide networking, to expose ports and provide routing and DNS to certain services, to link services together, all that kind of stuff. OpenShift starts to really solve some of those problems, so you don't have to worry about them so much. Just having templates to manage your microservices is really a useful pattern. And also, just having this kind of integrated system: you know, when you're dealing with containers, you're always going to need to build, and you need a place to put that build, like a registry.
You need a way to manage that whole pipeline, so we've got a Jenkins service and maybe some slaves over here, and you've got to build those too. So, now you've got lots of containers you've got to build, lots of containers you need to store, lots of containers you need to maintain and orchestrate when they come up and down and when they're updated. I mean, this gets complicated, right? So, that's one of the benefits of having a platform. And I think one of the really interesting things that OpenShift provides, which is not at all utilized to its fullest and is really just being introduced, to my thinking, is source-to-image. That's a way of layering a small amount of your application onto a Docker image, and it provides a very quick way of customizing an application. It's a very lean, quick pattern: provide a common set of images and then just have the unique data layer on top of each one. And we'll see how that works in the Jenkins slave example that Mika will show. So, the goals were really to generalize this container-based pipeline. You know, everyone has unique needs, but we do want to provide some of these basic hooks and automation patterns that you will need in any project. And certainly there's going to be lots of customization that's going to be required, but hopefully you don't have to spend so much time on the basics, all the basic problems that we all share in common. And really the goal is to prevent the promotion of broken builds. We can build all day, a thousand times, and if it's broken, that's okay, because we're not going to promote it to a place where it's going to be critical. When we promote it to a stage environment, then we're going to have some confidence in that 1,001st build. So, I think I'm going to hand it over to Mika to talk about the Jenkins work. Yeah.
So, hi, my name is Mika and I'm working at Red Hat as a software engineer. I'm actually responsible for the developer experience part of OpenShift, so I'm working on the Docker images that OpenShift provides. And as part of that, we think that CI/CD flows and Jenkins fit into this, so I also work on Jenkins and continuous delivery and continuous integration with OpenShift v3. This will be part of the demo, so you will see how this works in practice. I will just briefly talk through what you will see there. So, what you can do in OpenShift nowadays is run Jenkins inside a container in OpenShift. And OpenShift can also run the slaves that Jenkins uses for building your application, right? In the normal world, the slaves are the virtual machines on which you build your application, and usually you keep them running, you know, and you assign the jobs to them. What we think is a more cloudish way to do this is that you spawn a slave as a pod in Kubernetes when you really need it, right? Like, I don't need to have a virtual machine hanging out doing nothing. So, every time I need a slave, every time I need to test or build my application, I will just create the pod, run the test or whatever I need there, and get the results back. So, the Jenkins image that we have in OpenShift: initially, we started adding a lot of features to it. We started adding things like master-slave replication, you know, changing the passwords, configuring the jobs. And then, at some point, we said, like, no, this is too much, right? We are putting a lot of features into a Jenkins image, and this is not how the world works. We can't satisfy everybody, and people usually use different configurations for Jenkins, they use different jobs, they use, you know, artifacts that they want to deploy in Jenkins. So, we thought it would be cool if Jenkins could be an S2I builder.
So, we treat your Jenkins configuration, your jobs, your plugins as a source. And you can use Jenkins to build your own customized Jenkins image that will have all the jobs you want, all the plugins you want, you know, all the configuration and everything that fits your infrastructure. So, S2I currently allows customization of the Jenkins image: you just give us your git repository and we'll rebuild Jenkins based on your configuration. So, one important piece here is the slaves, as I was talking about. Currently, what does it mean to have a slave in the Jenkins world? You need to have a Docker image that has Java on it, right? So you can run this JNLP Java agent. By the way, if somebody in this room goes ahead and rewrites this in Go or something sane that we can run without installing Java, I will send them a t-shirt personally. Signed, you know? So, we have a special template that basically converts every builder image, which in OpenShift terminology means every image that builds your source code, into a valid Jenkins slave. So, yeah, that sounds nice. What we really do is install Java on that image, nothing else. And we also install one shell script that will then act as a connector to Jenkins. So, what really happens is that Jenkins starts a pod, which starts a container, which runs your builder, and that builder will launch a shell script that will download the agent from Jenkins, start it, and connect to Jenkins, right? So then, from Jenkins, you can execute shell scripts on that slave and you can build your application, execute unit tests, execute whatever. When that is done, Kubernetes automatically takes the pod down, so, you know, it's not consuming any resources. What is important, as I said, is that you are creating slaves on demand. You don't run them, you know, all the time. So, you create a pool and you say Kubernetes can launch 30 slaves in parallel, right?
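As a rough sketch, converting a builder image into a slave image with the template described above might look like this from the `oc` CLI. The template file name and parameter name here are illustrative assumptions, not necessarily the exact ones from the demo repo; check the actual template with `oc process --parameters` before using it.

```shell
# Hypothetical invocation: instantiate the slave-converter template for
# an existing builder image stream (names are assumptions for illustration).
oc process -f jenkins-slave-template.json \
  -p IMAGE_STREAM_NAME=ruby-22-centos7 \
  | oc create -f -

# This creates a build that layers Java plus the small JNLP connector
# script on top of the builder image, producing a slave image that
# Jenkins can launch on demand as a Kubernetes pod.
```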
You know, you are okay to go. Next slide. Other way around. And this is Ari. So, hi, I'm Ari Labigny. I got my mic right here. Can you hear me? So, another part of this is how you manage your jobs. All Jenkins jobs, if you were to go under the covers of the UI, are basically XML files, which we actually have in our example repo right now, but there's a better way to manage a large number of jobs. If you have a lot of microservices, a lot of these applications that you're testing, and you want to run them through a set of jobs, an easier way to do this is to use Jenkins Job Builder. It's a nice YAML format. It has a templating system that allows you to define many jobs in a very limited amount of YAML. You can use this notion of defaults that allows you to reference and reuse a lot of the same data instead of having all this duplication of XML code and names and things like that; you can just reference it. You know, it reduces the redundancy, and it can easily be uploaded into the Jenkins system. And you're maintaining YAML, which is very easy to read as opposed to XML. You can put all this under source control management, and it's much easier to manage. That's another piece that we added into this CI/CD workflow. Just to also mention, Jenkins Job Builder is an upstream OpenStack project. It's very extensible as well, and you can contribute to it. So, before we go into the demo, we wanted to show a picture of what this workflow looks like. As Mika mentioned, we have a test phase where we're running it. These basically break down into the Jenkins jobs that you're going to see in the demo. And we have a test job that will spin up the slave pod in Kubernetes, as Mika indicated; when that starts, we can do some level of testing on that pod, which will be a Jenkins slave, and then destroy the pod.
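To give a flavor of the Jenkins Job Builder YAML mentioned above, a minimal definition with a defaults block and a reusable job template might look like this. The job and project names here are illustrative, not taken from the example repo:

```yaml
# Hypothetical JJB definition: one defaults block, one parameterized
# job template, and a project that stamps out a concrete job from it.
- defaults:
    name: global
    description: 'Managed by Jenkins Job Builder, do not edit in the UI.'

- job-template:
    name: '{name}-test'
    defaults: global
    builders:
      - shell: |
          bundle install
          rake test

- project:
    name: sample-app
    jobs:
      - '{name}-test'
```

Running `jenkins-jobs update` against a directory of files like this expands every template into the XML Jenkins expects and uploads it, which is what lets a few lines of YAML replace a lot of duplicated XML.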
Now, in the example we're showing, it's a sample app, so it's just one container, but there's no reason this couldn't be multiple containers that we run and validate and test in that phase. If that is successful, we get an automatic promotion using the promotion plugin to then kick off a build. Now, this is not a Jenkins build. This is building in the OpenShift environment, kicking that off in a namespace called Stage, or a project called Stage. Once that's done and the build was successful, you could do this automatically, but we chose to take a manual approach: you can then deploy this in your Stage namespace. You can think of it as a company that has a Stage environment where they're testing out an application, and they want to test that out and deploy it. In real life, you don't deploy to Stage on every single commit you make to source control. That would just be crazy; the QA engineers and testers would hate you for that. Usually, you have some testing lifecycle where you say, this is the version we want to deploy to Stage, and we want to have QA or testers spend one week on it and tell us if it's good or not. You don't want automatic promotion to Stage every time you have a new image. I would also mention the opportunity for integration testing. You may have a test environment where you can do integration-type testing before it goes to Stage. There's nothing preventing you from having a pre-Stage or something like that where you do all this evaluation. Our sample really shows these four stages, but there's no reason why you couldn't have another stage in the pipeline as well, if you wanted to expand that or be more cautious in what you're rolling out. Then the other idea is, once you're successful in your integration testing, you would manually promote to the production namespace or project, and then you would deploy your application and everyone would be happy.
Your new tested application is out there in the world for your end users to use. What is important here is the connection to OpenShift, which allows you to set different policies for staging and for production, so you can have people who can work only on stage, and you can have people who can touch production. Those two groups don't talk to each other. In the real world, usually they are the same people, but in some companies you have these security policies in place, so OpenShift really allows you to do this scenario, because you have a separation of privileges between the two projects. So I think we're ready now to show the demo. The reason why we play a video here and are cheating is that the conference Wi-Fi really sucks, and we need to download like two gigabytes of Docker images; I spent like three hours trying to download them and that failed, so that's why we play the video, but it works the same. Yeah, the other thing to note too, while Kale's getting that set up, is that all this is available; at the end we'll have a list of references. You can just go get this project off of GitHub and use it. So yeah, this projector really sucks. So this is the OpenShift console. It's not that green in real life, it's white. And yeah, this is OpenShift v3. So basically what we are going to show you is starting from nothing. I mean, like, literally empty projects. I just have the projects created, which are ci, prod, and stage. You can see the projects there, and there is nothing in those projects; they're completely empty. So I'm going to show you how, in eight minutes, and maybe now it's even faster because we fixed one plugin, so it's like five minutes, you can get from nothing to having the full CI/CD flow for your single application. It's really easy now. The key thing, too, is that the communication between Jenkins and OpenShift is done through an OpenShift pipeline plugin that was developed so the two can talk together.
Yeah, so now I'm going to create the Jenkins slave image from my Ruby 2.2 image. This will be a Ruby application because, you know, Ruby. And, you know, I'm just going to build the slave image using the template I was describing. So here I basically set the image stream name: I want to use ruby-22-centos7 and convert it to a Jenkins slave. So, yeah, that's the name of the image stream. You can also specify an alternative repository where you have your customized build script, so if you want to add something more than the slave stuff, you can also add more, like, testing libraries or something like that. So we just hit create. Yeah, there's nothing to see, because it's just the build, nothing else. It will just produce the slave image. So now I'm going to create the Jenkins server, the Jenkins master. And, yeah, for the Jenkins master you can specify the service name. That's useful when you want to run something more than Jenkins and you want to talk to the Jenkins service from within your pods or within your application. You can specify the Jenkins password, which is the admin password for Jenkins. You can specify an alternative Jenkins image, so if you don't want to use our image, you can specify whatever image you want. And then the S2I repo, that's the thing I was talking about. That repository contains all the configuration, plugins, jobs, basically, you know. And this is fully customizable. So if you wanted to add your own plugins: we've downloaded all the plugins into our example, but you could also have a text file that has a name and a version, and maybe you want to go to a different update site than the Jenkins update site. You can do all that and customize Jenkins the way you want using that template. So now I deployed Jenkins. The Jenkins master image is currently being built, so I can view the logs from this build. I hope I will show that. I don't remember if I...
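The master creation just described could be sketched from the CLI roughly as follows. The template name and the first two parameter names resemble the OpenShift Jenkins example templates, but treat all of them, and especially the S2I repo parameter, as illustrative assumptions; list the real ones with `oc process --parameters` on the template you actually use.

```shell
# Hypothetical invocation: instantiate a Jenkins master template,
# overriding the service name, admin password, and the git repository
# that S2I uses to bake configuration, plugins, and jobs into the image.
oc new-app jenkins-ephemeral \
  -p JENKINS_SERVICE_NAME=jenkins \
  -p JENKINS_PASSWORD=changeme \
  -p JENKINS_S2I_REPO_URL=https://github.com/example/jenkins-config.git
```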
This is from the slave. So what is really happening here is that you can see we're doing a yum install of Java, and then this is a wrapper, nothing else. And now you can see the highlighted part. Back to the builds. So you can see we copy the repository files. We install 141 Jenkins plugins, because Ari really likes to have a rich Jenkins. And we remove the sample Jenkins jobs that come with the original image and install our own configuration. We push it to the internal registry in OpenShift, and that will basically cause Jenkins to be deployed automatically. So now you can see I have one pod of Jenkins running, and I have the route bound to that pod. These small buttons, which I think you will see later in the OpenShift presentation, can be used for scaling up and scaling down. So if you want to have more Jenkins instances running, you can just hit the button. It doesn't work with this Jenkins image, though. So now we just go to Jenkins. Yeah, I have a self-signed certificate. Yeah, and it's still coming up. But once the Jenkins service goes up, you can see what's really going on in Jenkins. So it's copying the configuration, it's doing the password change, setting the password. It is automatically discovering the images for slaves, so Jenkins will be pre-configured with those images. And it's also downloading Jenkins Job Builder, which is a Python project. This is the kind of customization you can do on the official Jenkins image: if you want to install more stuff, you can just write your own shell script that will do that. And Jenkins is now coming up. And now you'll see the flow that we showed before; you'll see the four jobs in there that reflect the same flow. So it's up. So I use the password I set in the template. Yeah, and this is the screen. All these four jobs are the jobs that I pre-configured. So I put the XML files there; I could use Jenkins Job Builder to create those automatically if I wanted. So this is on the bottom. You can see the test.
I think it's barely visible, but it's called test. Then you have a build here. Then you have a deploy-to-stage here. And then you have a deploy-to-production. So Jenkins finished. So now what I want to show you is what I already spoiled, which is that Jenkins will automatically be configured to have the slave images, with the Kubernetes configuration already done here. Yeah, so the credentials will be set, the image stream will be set, the Docker image repository will be set for you. Everything will be pre-configured. The Kubernetes plugin we are using here is done by Google, so it's a Google Jenkins plugin; it's an official plugin for Kubernetes. So now I kick off the test. I don't push the source code, but I could configure it to trigger on a new commit to master. So I trigger it manually. You see that the job is now in the queue, and now we will just wait until Jenkins tells Kubernetes to spawn a pod. It's running. It's taking some time, because when Jenkins starts, it's still not finished initializing all the plugins we have, so it takes some time to download all the metadata and everything, and after that it will start operating reasonably. So yeah, it just sits here in this queued state waiting for that slave to show up. And now it's starting to come online. So you can see that this is the pod in OpenShift; it started the slave automatically. And in a second, the job should be assigned to the slave and you will see the console. Yeah, you can also tell up here that it's running on the slave; the other ones are running on the master. So this is the log. You can see here it's building remotely on this crazy-named slave. It's running inside the ruby-22-centos7 image. So what this job is really doing is it gets the source code, installs all the dependencies, and executes rake test. So it will just execute the one unit test, so we can prove that you can do testing here. That was a tribute to the Jenkins developers. And no, yeah.
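The shell step that the test job runs inside the slave pod is essentially the sequence just described: fetch the source, install dependencies, run the tests. A sketch, with a placeholder repository URL, since the real one isn't named in the talk:

```shell
# What the test job executes inside the Ruby slave pod (repo URL is a
# placeholder): clone, install gems, run the unit test suite.
git clone https://github.com/example/sample-app.git
cd sample-app
bundle install
rake test
```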
So now the test job is done, and you can see the build job was automatically started. So now we are building the image in OpenShift. If you go back to OpenShift and go to stage, you can see that the image for the sample app is currently being built. It's a Ruby app, so OpenShift will build your application and push it to the registry. And the next step is that you have to deploy it to stage. And this is the bug we found when we were recording this demo. The build job is now trying to make sure that the application was built and is deployed, but in this case we are not deploying it, so it will just hang for two minutes. That's the old version; this was fixed last week, so, super fast. I'll just skip this. So right now, since we had the automatic promotion from test to build, once that's done: right now it's doing the build, and that was successful in OpenShift. And now, and you can see the promotion status plugin there in the console, now we'll manually deploy in OpenShift to the stage project. And there it goes. So now you can see the sample app has been deployed out into the stage project up top. You can see the stage project, and we've deployed it, and that's ready to go. Yeah, so, I mean, deploying to production works the same: you kick off the job, and it will deploy the same image to production. So how this actually works in OpenShift is that we built the Docker image in stage, and it's in the registry. What you can do in OpenShift now is tag the image from one repository to another. In OpenShift, every project has its own repository; in this case, you have the stage repository in the Docker registry, and you have a prod repository in the Docker registry. So to do the promotion, basically, that means you run oc tag stage/sample-app:latest, or whatever Docker image you want to promote, to prod/sample-app:something, right?
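The promotion command just described, spelled out (the image stream and tag names follow the talk's example; the destination tag is whatever your prod deployment config watches):

```shell
# Promote the image built in the stage project to prod by tagging it
# across project image streams. In prod, a deployment config with an
# image-change trigger on this tag then redeploys automatically.
oc tag stage/sample-app:latest prod/sample-app:latest
```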
And that will basically link the image from stage to prod. And in prod, you have a deployment config that is configured to trigger a deployment automatically when there is a new image, so it will automatically deploy your application. I think that's it. We can flip to the slides. That's basically the whole pipeline shown in Jenkins and OpenShift, and that connection. Yeah, so this was really simple, because, what the hell is this? So this was a really simple example, where we have just one application with just one source repository; it's a really simple Ruby application. But in the real world, what we expect people to build are applications that consist of 10, 20, 50 different microservices, and all these microservices will be deployed as containers, or as their own services inside OpenShift, and they will talk to each other and together provide the application itself. I know that marketing hates when we call it an application, but we call it an application for now. And, you know, what this is really useful for: if you have this scenario where you have different services and you need to make sure they all talk to each other and work with each other, you really need to have something like a CI/CD flow. Otherwise, you will just shoot yourself after some time, because if one container gets broken, that can mean your whole application is broken. So it's really important to test the whole deployment, like end-to-end testing, testing the whole thing, and then promote the thing that works and was verified by the QA engineers, the testers, or whoever, to production. And what Docker really allows us here: we are not working only with source code now. In the past, when you wanted to promote something, you said, I want to promote this Git commit to production. That's no longer true.
Now you promote the Docker image that has the application built from that commit, which really means that you promote the whole environment, the whole operating system, everything, you know, as the thing that you are going to promote to production. So it's not just the single source code commit, it's the whole environment. That's what Docker allows us to do now. Yeah, so in conclusion, it should be fairly easy to take what we've done in this example repo and create your own CI/CD pipeline. This plugin and this feature should show up in release 3.3 of OpenShift, which I think is about the summit timeframe or something like that. And some things that we'd like to add, or I'd like to add, are delivery pipelines, or more of a visualization in Jenkins of the whole flow. So we can see what OpenShift shows graphically, but it would be great to see that kind of pipeline view in Jenkins, you know, with icons and stuff moving around. Nice and pretty. And then there are some features to add to JJB, which I'd like to work on, just to extend the promotion plugin, because there are some options that aren't available today through that. And I think that's it. We're open for questions, please. Yeah, we really want to... Hold on, one sentence, one final word. So what we are looking for: I know that a lot of you will not ask questions now because you are shy and there are a lot of people here. So we really want to talk with people about what their use cases are, what they are trying to build, so we can make this stuff really helpful for them. I mean, you know, this works for the sample app, which is a Ruby application, but if you run some crazy Scala thing with a million dependencies or something, and you have like 10 different services, or just crazy scenarios, we really want to hear about these crazy scenarios, because we want to learn, you know, to make things better for you as developers. So...
Yeah, and now you can ask questions. [Question about builder images:] Like, how do you map a project to an image, if it needs a specific environment to test in? So the builder image I used to convert to a slave, that can be whatever image; you can define whatever image you want there. It can be your custom image. It doesn't need to be the official OpenShift image or something like that. The template I was showing in the demo has this image name parameter or something like that in it, right? It was ruby-22-centos7; that's the name of the image stream that exists in OpenShift. So if you create your own builder image and create your image stream, you can just use that name there, and it should work. So one of the patterns that I've been interested in is containerizing your test suite and then making that a slave through that mechanism. And all of a sudden, you know, maybe it's an Ansible playbook that you kick off and you do some work, and now you've got a lot of things you can do. So it's an interesting pattern that can be extended. However, yeah. Who else? [Question:] Guys, sorry if you covered this. How is it similar to what Fabric8 is doing, or what ManageIQ is doing? Can you find any similarities or differences? Yeah, that's a good question. I mean, Fabric8 is also putting something together that's similar, but I think they have some specific use cases when it comes to Java applications. This is more, I think, a general tool for any type of application, and that's what we were going for: to make it as general as possible. They also use, I think, the Workflow plugin in Jenkins to accommodate some of this, and the front end to that is basically a Groovy file that people really have to learn how to use. And that's, I think, where JJB is a little bit different; it gives you some benefits, it's a lot clearer to follow. Yeah, I think there are two problems that we are trying to solve.
The Fabric8 guys are trying to create a really first-class Jenkins CI/CD experience for Java, for JBoss applications or Java-based applications, which is a great goal. I mean, they're doing a great job doing that, and their demos are awesome. The thing is that we want to make this more generic for other people, right? So if you don't use Maven or you don't use JBoss, you know, if you want to build a Ruby application or a Node.js application or a Python application, we don't care what builder you have. We want to provide the Lego pieces to people, and they can then use them to construct their flows, basically. So where does ManageIQ fit here? I don't know exactly. ManageIQ is really CloudForms, the tool that you can use to manage or orchestrate the whole cluster. So it's one level on top: you're sitting there and monitoring all the nodes that OpenShift is using, getting information from them, collecting data, creating all these charts and management views and so on. So you know what's going on in the OpenShift cluster, but it's not really something related to the applications or building the applications, you know, from the developer's side. One more question? I'm not going there, so you need to scream. [Inaudible audience question.] We know about that. So this is a very common question. In OpenShift builds, you have a concept called incremental builds: if you already built something and that image exists on the node, we will reuse that image, so we don't download the whole internet over and over. We know that there is a limitation if you have multiple nodes, since the image built on one node isn't available on another; that's one case, and it slows down the build as well. So it is like a chicken-and-egg problem: you're either going to download all the dependencies from Maven, or you're going to download the entire Docker image that you built previously.
So the answer to this could be to use one Docker registry that has shared storage across the cluster; that's one answer for me. Another answer is that we recently added Docker image source input to builds, so you can specify a Docker image that we pull and extract the artifacts out of, and put them into the build. Thank you. You are out of time. You can continue the discussion. Thanks a lot, guys. Thank you. Are you a speaker? No, I'm not. I could be. I have another question. Sure.