We're going to continue the program. We now have Tomasz Noziczka from Red Hat Engineering, who will talk about continuous integration and delivery with OpenShift.

OK, so we all know that setting up CI/CD from scratch takes a lot of time. I'm Tomasz, and today we will do it in 20 minutes. So yeah, let's start.

We all want to deliver frequently and reliably, and that's hopefully why you are all here. But managing the infrastructure takes unnecessary time. A lot of us remember the times before containers came to the industry: you had a Jenkins master and a lot of machines for the slaves, with pre-installed dependencies and different labels, and several virtual machines providing databases and other stuff you needed for testing environments. This was not ideal. Then a Jenkins plugin came along that provided you with clean, repeatable build environments, and it solved the dependency issue, among others. But you still had a lot of machines you had to take care of and manage, and those were really our pets.

A few years back, when containers came to the industry, we started to move from pets to cattle — you know the analogy, right? With the rise of Kubernetes, the Kubernetes plugin came to Jenkins and provided you with all the slaves running inside the Kubernetes cluster, so you didn't have to take care of any of that. You were left with just the master. But this really didn't work for big companies, and there was a scalability issue, because one master wasn't really enough. So what you did was shard it: one team had one Jenkins master, and another team or group in your company had a different one. And you still had to take care of all those machines and installations.

But Pipelines, the new feature in OpenShift 3.4, takes it even further. You don't even need a master.
All you need is an OpenShift cluster and one file in your Git repository. That's it. But enough talking about it — this is actually a demo session, and this is my only slide for today. So if the demo doesn't work, this is going to be a really short presentation. I hope the demo gods are with me today.

Just before we start: how many of you know OpenShift? Oh yeah, that's a lot. How many of you know OpenShift and aren't from Red Hat? Oh, that's nice as well — I wasn't expecting that many. Cool.

We will start from scratch today, so we won't have any cluster prepared or anything. There are several ways to provision an OpenShift cluster, and today I will show you oc cluster up, which is quite a new tool for it. It runs everything in containers, which is really cool, and I like it. Just to mention, for those who come from the Kubernetes world and know minikube: OpenShift also has Minishift, which is a direct equivalent. For the production use case, you can use the openshift-ansible scripts, which are open source — another great way. There is also OpenShift Online coming, but right now it's only a developer preview; if you have a GitHub account you can log in there, they will give you some free resources, and you can try all these things out.

But today we are going with oc cluster up, which is integrated into the oc client tool, so this is really easy. How many of you know oc cluster up? OK, just a few. So, just to show you, there is nothing running — this is a clean machine, and hopefully it works. This is all you need, but because I'm a bit afraid of changing IPs, I will actually specify a special bridge that I have made here; you don't need this last parameter. And because I was a bit afraid of the network here, I've pre-pulled all the images, so this should go really fast, hopefully. And we have an OpenShift cluster running — it took five seconds.
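The provisioning steps just shown can be sketched as follows. This is illustrative only — the machine-specific bridge flag from the demo is omitted, and the exact flags available vary between oc releases:

```
# Bring up a local, all-in-one OpenShift cluster inside containers.
# (The demo adds a machine-specific network/bridge parameter, not shown here.)
oc cluster up

# Check that the cluster is running and find the web console URL.
oc cluster status
```

After oc cluster up completes, the client is already logged in to the local cluster, so you can go straight to creating projects and objects.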
So yeah, this is a really cool way. The way we'll do the demo today is that I will get things started — there will be some commands, but just two of them — and we will look into the files and the implementation at the end.

So let's start with the cool thing. What we need is to create a pipeline. This looks a bit scary, but what it does is just template substitution: oc process takes a template which defines a pipeline config, we set the Git URL of the repository we are working with and the branch, and we create those objects. We will look into those at the end of the demo. Then what we need is to start the pipeline. It's called Hello Universe, because the universe is much bigger than the world.

So yeah, let's bring it up. We need to check what the IP of our cluster is, because we will be looking at this from the web console. So we check the cluster status — and yeah, we are on localhost, so don't be afraid of this. The username is developer, the password doesn't really matter here, and we are in the OpenShift console.

Hopefully the pipeline is already starting, so we can look into it. While it's building the image: it has actually provisioned a Jenkins for us, which is just an implementation detail. You don't have to care about it, maintain it, or anything; if you delete it, it gets provisioned again. So we will just hide it. Let me make this a bit smaller so we can actually see all the stuff — I'm sorry about the resolution, apparently only VGA works here, so I will try to fit it all in, hopefully.

Because this is just a demo talk, I have created only two environments: one is dev and one is production. In the real world you would have more of those and do some real testing in between, like integration tests. It's all configurable, and we will see at the end how it works. But yeah, the pipeline has been started, and it's actually shown in the overview.
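The two commands mentioned above can be sketched roughly like this. The template path, parameter names, and repository URL are my assumptions rather than the exact contents of the demo repository, and older oc clients spell the parameter flag `-v` instead of `-p`:

```
# Substitute the Git URL and branch into the pipeline template and
# create the resulting objects in the current project.
oc process -f deploy/pipeline-template.yaml \
    -p GIT_URL=https://github.com/example/hello-universe.git \
    -p GIT_REF=master \
  | oc create -f -

# Start a pipeline run; later runs can also be triggered by a GitHub webhook.
oc start-build hello-universe-pipeline
```

These are the only two commands the demo runs by hand; everything else is driven by the pipeline itself.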
So it has actually done some preparation, created the image, and deployed to dev, which we will see in just a minute. Right now it's waiting for approval, because this talk is named continuous delivery, not deployment, so we will wait for approval before deploying to production.

Let's open up our environment — ideally in a new window; I should have brought a mouse. We will set up auto-refresh, so we will actually see what's going on while we are working on the pipeline. And let's open up the production environment; we'll make it a bit smaller. You see an error page, because we have no deployments yet in the production environment — it's still waiting for our approval. This 503 comes from the router, which is an OpenShift thing. So there are no deployments in production, and there is one running for our dev environment. (And I haven't set up the refresh there, sorry.)

So let's deploy to production, because the development environment seems to be working all right. The first time, we have to click through it. The Jenkins that has been provisioned for us has all the OpenShift plugins, so we use the same credentials to log in, and next time it will let us through. In the future, this kind of approval will actually be integrated into the web console; right now it redirects you into Jenkins, but that's just a detail — we will make it better in the next releases, hopefully. So let's proceed. You can see it deploying to production, and hopefully it will work. Yeah — and we can see Hello Universe in production. This is quite cool. And we can see we have three pods ready in production, because there will be more traffic there.

So let's change some stuff and see how the pipeline does. I've cloned the sources here, and let's change the message. This is a Go app, so you're probably familiar with Go, but it's pretty easy.
So let's say something like "Hello DevConf 2017", commit it, and push it. Because we are not on a public IP and I didn't want to complicate things by redirecting webhooks from GitHub, we will trigger it manually — but it works just fine if you set it up through GitHub hooks. So let's start it again and look at what it's doing. We have auto-refresh set up everywhere, and it's creating the image, so I can refresh myself in between. And this should be deploying here. This is actually doing a rolling update, which is a great OpenShift feature, and we will see it after approving this stuff.

We can see that the dev environment deployed successfully. If this were a real website: the reason you usually don't do continuous deployment is that you don't effectively test your whole application. If it's something complicated that uses a lot of database stuff and has a lot of microservices involved, you don't usually have tests that cover everything. So at least for sanity, a product manager or someone responsible can look at the website running with all the other microservices connected and see if it's working, or if it makes sense. That's why we have all these environments and wait for the approval — because you never test everything, or if you do, you are the lucky one.

So let's proceed, because it looks all right in development. Let's close this and look at the pipeline — it's deploying to production, and we can actually watch it. This is the rolling update. Because I've seen so many hands from people who know OpenShift: you know this rolling update actually has zero downtime, does health checks, and all the cool stuff, and there are some great talks about it this year, so it's not in scope here. So yeah, the whole pipeline worked. I'm really happy. Thank you.
Yeah, so let's have a quick look into how you can do this — if I can actually spell "GitHub". There's a deploy folder where I've grouped all this stuff, including the development parts. Here's the pipeline template, which is something you would need; it's a Kubernetes/OpenShift object, and the template is here just for variable substitution.

So let's look at the development part. You actually create a build config in OpenShift. Some of the metadata is not important; here you can set up the GitHub webhook, and — the most important part — you define a strategy of type JenkinsPipeline, which is what actually allows you to do all this cool stuff. If you don't have the Jenkinsfile in your repository root, you have to specify the path to it. And we want to run it serially, because doing parallel deployments to the same environments doesn't end well. You also specify the Git repository from which Jenkins clones the sources.

So this is just a one-time thing; the real control is in the Jenkinsfile, which actually controls what the pipeline does. It's a Groovy script, but don't be scared just yet — you can actually run shell inside it, and that's what I'm doing. The only thing you need to do up front is select a node. We are selecting master, but you could select a slave, which would be provisioned inside the Kubernetes cluster, and you could have some pre-installed dependencies there if you needed tools for your testing or building. There is, for example, a Maven one which provides you with all the tooling, but we have a Go app and we used S2I, so we didn't need that. Then you mark your stages, which divide your pipeline into stages, and checkout scm. That's about all the Groovy you actually need; the rest can be shell if you want, or you can do some Groovy stuff.
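The build config just described might look roughly like this. The names, repository URL, and webhook secret are placeholders, not the exact values from the demo repository:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: hello-universe-pipeline
spec:
  runPolicy: Serial                # no parallel runs against the same environments
  source:
    type: Git
    git:
      uri: https://github.com/example/hello-universe.git   # placeholder
      ref: master
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile # only needed when not in the repo root
  triggers:
    - type: GitHub
      github:
        secret: secret101          # placeholder webhook secret
```

The JenkinsPipeline strategy is what makes OpenShift provision the Jenkins instance on demand and run the repository's Jenkinsfile as the pipeline.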
So yeah, we just log in. My point here is that we use the oc tool, just as you would in your shell when you are controlling the cluster, so we have just one way to do stuff. There is actually a DSL which you may choose to use or not, but I like to do things one way. We log into Kubernetes through the internal DNS — we can do some reflection and find Kubernetes — and Kubernetes actually injects a token into the pod for the service account in the namespace, so we read that, and there is a certificate authority, and that's it.

Then we process some templates. You would have to have these either way — this is what defines the deployment — and because we have multiple environments, we have an OpenShift template for it and we substitute just an application name and a number of replicas; these are the parameters that actually distinguish our environments. We pipe it to oc apply, because oc create would fail on a second run — oc apply is the way to go here. We also create an image stream, which is in this template, and that's it.

This is how you build stuff with binary builds and S2I: you start the build, which is the one we defined above. And this is just a hack for now, because OpenShift doesn't really let you find the hash of what you have built — this will come in the next versions. Then there are two stages for triggering the deployment. You could do it a bit more easily from the command line, but this is for keeping you sane when you look at the logs and something went wrong: we set the image to the exact SHA of what we have built, and then we basically say we want to roll out our deployment, we follow the logs, and we ask for the status. If something went wrong, the pipeline would actually fail here.

We have a special stage for deploying to production. That's because every stage has a kind of counter or clock, and we don't want our deployment to production to look like it took two hours just because someone didn't approve it. So I've separated those two, and deploy-to-production does almost the same thing as deploy-to-dev, just referencing a deployment config with a different suffix. And this is a script, so you can actually write anything you want here — whatever your CI flow is, just write it here, create the stages you want, and that's all you need.

And we are out of time, so thank you, that's all I had for today. We still have five minutes for questions.

OK — would you mind speaking up a bit? OK, yeah. So the question is — and there are two parts to this — are you asking about how you authenticate from Jenkins to OpenShift? That's the one. Oh yeah, that's the easy part; that's the login part. You find the API endpoint through the internal DNS, which is pretty easy — it's kubernetes.default.svc — though you don't have to use it. But you need a token to authenticate. Every namespace has a default service account, and every service account has a token associated with it, and if you run a pod inside Kubernetes or OpenShift, it will inject the token into your file system: at /var/run/secrets/kubernetes.io/serviceaccount/token you get the token you need to authenticate back. At this point these are in the same namespace, but you can do any combination, because it's just a script. The way you would do it with a different namespace or a different cluster: you create a secret with the authentication for the other cluster, and you can get it through the oc tool, just as you regularly would from the command line — you can do the same here.

OK, so you're asking if only Jenkins is supported. The point here is that Jenkins is just an implementation detail right now — you can use it, but you don't have to interact with it. How you control this stuff is through the Jenkinsfile, which is the glue that actually does the work, but all the configuration is passed from the build config object created in OpenShift. So you don't have to interact with Jenkins, and it's just an implementation detail — in the future we could have something else running it. In the ideal case, you shouldn't actually see the Jenkins at all.

OK, yeah, that's all I had for today, and thank you for coming.
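Putting the walkthrough together, the Jenkinsfile might look roughly like this. This is a sketch under assumptions — the resource names, template paths, and parameters are illustrative, not the exact contents of the demo repository, and the image-SHA hack mentioned in the talk is simplified away here:

```groovy
// Scripted-pipeline sketch of the Jenkinsfile; names and paths are illustrative.
node('master') {
  stage('checkout') {
    checkout scm
  }

  stage('login') {
    // Authenticate with the token and CA that OpenShift injects into every pod
    // for the namespace's default service account.
    sh '''
      oc login https://kubernetes.default.svc \
        --token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
        --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    '''
  }

  stage('build') {
    // S2I build defined by a separate build config; --follow streams the logs.
    sh 'oc start-build hello-universe --follow'
  }

  stage('deploy to dev') {
    // oc apply, not oc create, so the second run does not fail on existing objects.
    sh '''
      oc process -f deploy/app-template.yaml \
          -p APP_NAME=hello-universe-dev -p REPLICAS=1 | oc apply -f -
      oc rollout latest dc/hello-universe-dev
      oc rollout status dc/hello-universe-dev
    '''
  }

  stage('approval') {
    // A separate stage so waiting on a human does not inflate the
    // production deployment's timer.
    input 'Deploy to production?'
  }

  stage('deploy to production') {
    sh '''
      oc process -f deploy/app-template.yaml \
          -p APP_NAME=hello-universe-prod -p REPLICAS=3 | oc apply -f -
      oc rollout latest dc/hello-universe-prod
      oc rollout status dc/hello-universe-prod
    '''
  }
}
```

Since it is just a script, any CI flow — extra test environments, integration-test stages, notifications — is a matter of adding stages in the same style.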