In this demo series, we're going to go through how to set up a continuous integration and continuous delivery environment on OpenShift using the regular components we usually use in Java development: for example, Maven, Jenkins as the CI/CD engine, a Nexus repository for the artifacts, SonarQube for static analysis, and so on. In this first part, we're going to look at how to set up this environment on top of OpenShift. To go through how the environment is going to look after we set it up: we have a couple of components involved. First, we have the Git repository where we're going to keep our source code. We can use GitHub for this purpose, but we also have a version of the demo that uses Gogs, a Git server written in Go. We're going to use that on OpenShift to see how we can run the entire thing end to end on OpenShift. We're going to have Jenkins, of course, driving the whole thing. We'll have SonarQube as the static analysis tool, backed by a PostgreSQL database. There's also going to be a Nexus repository that we'll push our build artifacts into. So how is the pipeline going to look when we set this up? Jenkins is going to check out the source code from the Git repo, regardless of whether that's Gogs or GitHub. Then we do two tasks in parallel: we run the unit tests, but since we want to save time, we also run the code coverage and static analysis at the same time through SonarQube and publish the results inside SonarQube. So when the unit tests are finished, we can look at the same time at how much coverage we've had with the tests, whether we have violated any of our code quality rules, and so on. And if the tests fail, or if we fail the static analysis because of code coverage or other criteria, then the pipeline is not going to execute any further.
If both of them were successful, then Jenkins is going to archive the artifacts in Nexus; it's going to push them into Nexus, the JARs or, in this case, the WAR files, with the version that they have. And from that point, if that step is successful, it triggers an image build in OpenShift. So we have the artifacts for our application, and Jenkins is going to trigger a build on OpenShift, so OpenShift builds a Docker image from that WAR file that we have built in the process and pushed into Nexus. Then the image is ready. We're going to create the application as a Docker container on OpenShift, deploy the application on it, run a couple of integration tests, and if that is successful, we're going to promote the image: tag that Docker image with the version that we want in our staging environment, and recreate that container in the staging environment to, for example, notify the testers to go test it in the staging environment. So let's go through the steps and see how we can set this thing up from scratch. We have the entire instructions for running this demo and setting this environment up on GitHub, under openshift-demos/openshift-cd-demo, where you'll see the exact pipeline, what it looks like, and the series of steps that you need to go through to set this up. It's pretty much just a few steps; it's not that long. If you look at my OpenShift environment, this local environment, there are no projects set up on it yet, so we're going to start by creating some projects for spinning up Nexus, Jenkins, and the other components that we need. We're going to create a new project, call it cicd, and also give it a nice display name so it shows up nicely in the web console. All right, we've got a project. We can also create two more projects for when we deploy the application: one for the development environment and one for the staging environment. So I'll go create the dev project. I call the application Tasks; Tasks is the name of the application we're going to use in this pipeline.
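The project setup described above can be sketched with the `oc` CLI. This is a minimal sketch; the exact project names and display names are assumptions based on what's said in the demo, so adjust them to your environment:

```shell
# Create the CI/CD infrastructure project (Jenkins, Nexus, SonarQube, Gogs)
oc new-project cicd --display-name="CI/CD"

# Create the application projects for the development and staging environments
oc new-project dev --display-name="Tasks - Dev"
oc new-project stage --display-name="Tasks - Stage"
```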
We'll use a simple application running on JBoss EAP, and also a stage project for the staging environment: if we're happy with our changes in the development environment, then we're going to deploy them into the staging environment. All right, let's list the projects. I have three projects created and ready for me. The other thing I need to do is to allow the Jenkins container to access the OpenShift API, so that it can interrogate it and figure out what images are available. We're going to use Jenkins slaves running as containers on OpenShift, so Jenkins is going to discover which images are available to be used as Jenkins slaves, and for that it needs to call the OpenShift API. Also, we're going to need to take images from one project, the dev project, and tag them in the other projects, or deploy containers based on those images. So we need to give the Jenkins service account access to do these kinds of operations through the OpenShift API. And we do that through the `oc policy` command, `add-role-to-user`, and we give edit access to the service account that runs Jenkins. Since Jenkins is going to be running in the CI/CD project, I use the default service account of that project, and I do the same for the dev and stage projects. All right, now we're good to go. I switch to the CI/CD project to look at the console as well. We have our projects ready. Inside CI/CD, we haven't instantiated any components yet. How are we going to instantiate the components? We have two templates here. One uses GitHub as the Git repository. The other one spins up a Gogs instance, which looks pretty much like GitHub but is going to run on OpenShift as a container. And you have those choices. In this demo, I'm going to use Gogs to show you the full CI/CD flow. I'm going to copy the URL to this template, use `oc process` on this template so that it replaces the parameters in it with proper concrete values, and then create the result.
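These two steps can be sketched as follows. The service account and template file names are assumptions following the demo's conventions (Jenkins runs under the `default` service account of the `cicd` project); substitute the actual template URL you copied from the repository:

```shell
# Give the Jenkins service account edit rights in all three projects so it can
# query the API, tag images, and trigger builds and deployments
oc policy add-role-to-user edit system:serviceaccount:cicd:default -n cicd
oc policy add-role-to-user edit system:serviceaccount:cicd:default -n dev
oc policy add-role-to-user edit system:serviceaccount:cicd:default -n stage

# Process the Gogs-based template, replacing its parameters with concrete
# values, and create the resulting objects in the cicd project
oc process -f cicd-gogs-template.yaml | oc create -f - -n cicd
```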
It creates a bunch of objects for us: a number of services, pods, builds, deployments, and so on. In the project, all the components are up and running; we have a number of containers instantiated for us. We have Gogs, which is our Git repository. Right now it's empty. Let's register a Gogs user with a password and log into Gogs. We don't have any repositories yet, but the application that I talked about, openshift-tasks, is available on GitHub. What we're going to do is migrate that into Gogs: we can import that repository and manage it within Gogs instead. What should we call it in Gogs? openshift-tasks. Migrate. And now we have that project available inside Gogs, running on OpenShift. What else do we have? We have a Jenkins instance. We have Nexus for managing our Maven artifacts; we have already configured the Red Hat GA Maven repository in it for the JBoss artifacts. We also have the PostgreSQL backends for SonarQube and Gogs, and SonarQube itself, which is our code analysis tool. Since we haven't analyzed anything with it yet, it doesn't have any projects in it; once we run the pipeline, we'll get more data. The last component is Jenkins. We have one project defined in it already, and it's a classic CD pipeline. Let's open it and take a look at the pipeline definition. It uses the Pipeline plugin, which is a really convenient way of building continuous delivery pipelines on Jenkins, and in Jenkins 2 this plugin actually comes built in and integrated into Jenkins itself. In Groovy, we define our pipeline here. We say: run on nodes that are tagged with jdk8, so that we have a JDK installed there. This Jenkins is already configured to use containers on OpenShift as slaves, so it's going to spin up a new container on OpenShift that is tagged with this label, jdk8, and run the job there; that way we can scale and run many, many jobs at the same time. It runs the build, and by default, this goes to GitHub to get the source code for the openshift-tasks application.
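A minimal sketch of what such a Groovy pipeline definition can look like is below; this is not the demo's exact Jenkinsfile, and the stage names, service hostnames, and Maven goals are illustrative assumptions:

```groovy
// Run on a Jenkins slave container labeled 'jdk8' (spun up on OpenShift)
node('jdk8') {
    stage 'Build'
    // Check out the source; by default this points at GitHub
    git url: 'https://github.com/openshift/openshift-tasks.git'
    sh 'mvn clean package -DskipTests'

    stage 'Test and Analysis'
    // Run unit tests and SonarQube static analysis in parallel
    parallel(
        'Unit Tests': { sh 'mvn test' },
        'Static Analysis': { sh 'mvn sonar:sonar -DskipTests' }
    )

    stage 'Push to Nexus'
    sh 'mvn deploy -DskipTests'

    stage 'Deploy DEV'
    // Trigger an OpenShift binary build from the WAR, producing a Docker image
    sh 'oc start-build tasks --from-file=target/openshift-tasks.war -n dev --follow'

    stage 'Promote to STAGE'
    // Tag the dev image for the staging environment
    sh 'oc tag dev/tasks:latest stage/tasks:latest'
}
```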
We want to modify this to make it go to our Gogs instance within our project, which is listening on port 3000, and the user we defined and imported the project under was gogs, so this is the URL for our repository in Gogs. And the rest are the different steps of the pipeline: we run the unit tests and static analysis in parallel, and if either of them fails, the entire pipeline fails; we push the artifacts to Nexus to manage them over there, and if that is successful, we start deploying onto the development environment, building the project from scratch, and if that is successful after running the tests, we promote the image to the stage environment, tag it with the version, and deploy it into the stage environment. So let's run this pipeline once and see how it looks. Say Build Now, and an instance of the pipeline is instantiated. The job is actually running in a new container on OpenShift. So if you look at OpenShift now, a new container, a new pod, has spun up; it doesn't have a friendly name or anything. It was spun up by Jenkins, which told OpenShift to create this pod, and you see it uses the JDK Jenkins slave image; this is something that we built as a part of this demo, and on GitHub you can see the source for it. It's running the job there, so we have an instantiated pipeline, and the Pipeline plugin has a really nice visualization as well. You see the first stage is running, and if I click on the logs, you'll see exactly what's happening. It's cloning the source code from Gogs, and afterwards it's going to run the Maven build and start the testing and static analysis at the same time. It's going to take a few seconds. The pipeline moves on to the next stage, the test and analysis stage, and if you look at the logs here, you'll see there are two stages running at the same time.
We have the tests running, the Maven tests, and at the same time we are doing the Sonar analysis, so it first downloads some artifacts, since we're running this for the first time, and then starts running the static analysis on the code. It can do the code coverage and flag any bad patterns that are used in the code. This is going to take a few seconds as well. If you look at the logs, you'll see that Sonar starts running the analysis at this stage: it downloads some of the JAR files for Sonar, then loads the rules and runs the analysis. And since both stages have been successful, both running the tests and the static analysis, we now move on to the next stage of our pipeline, which is pushing to Nexus. At this stage, it pushes the WAR artifact that we have built to Nexus, archiving it in the repository. Push to Nexus has been successful; deploy to the development environment also went fine; it ran a few tests, and since those were fine, it tags the image and runs the deploy to the stage environment. The whole pipeline has been successful. Let's go analyze a little of what's been going on within our pipeline. So if you go to Sonar and refresh, you see a new project shows up here: the tasks JAX-RS app. Well, the code coverage is pretty red. We don't have really good coverage on this, 17.6%. If I click through, you can get a little more detail about which classes have good coverage and which classes have bad coverage. One class is to blame: we have good coverage on one of our services, but the rest is too bad to speak of. We also see how many lines of code there are, how many bugs, vulnerabilities, and some signs of bad patterns used in the code, if I go into it. So we get some data about our code, and in Sonar we can define rules so that if some of the rules are violated, for example, the build fails. That's always a good way to guard the quality of our code when we're running a pipeline. So we get two bugs that are flagged by Sonar.
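The push-to-Nexus step mentioned above can be sketched with the Maven deploy plugin. This is an illustrative sketch only: the Maven coordinates, the Nexus hostname, and the repository path are all assumptions, not the demo's actual values:

```shell
# Push the built WAR into the Nexus snapshots repository.
# Coordinates and repository URL below are placeholders for illustration.
mvn deploy:deploy-file \
  -DgroupId=org.jboss.quickstarts \
  -DartifactId=tasks \
  -Dversion=1.0-SNAPSHOT \
  -Dpackaging=war \
  -Dfile=target/openshift-tasks.war \
  -DrepositoryId=nexus \
  -Durl=http://nexus:8081/content/repositories/snapshots
```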
We create issues based on those, so a developer can go fix them. We also had Nexus as a part of the pipeline, so that when we build the artifacts, they are pushed into Nexus. If I go look into the repositories, into the snapshots, we'll see one more WAR file has been pushed there. It's the version that is deployed here, and we use this WAR file for deploying into a container on OpenShift. So in OpenShift, if I look at the dev project, you can see the application is up and running here, the tasks app. It doesn't have any GUI if I go to it, but it has a set of REST services, and in the documentation we can see how to call those REST services to create tasks and get tasks out of it. And since that has been successful, look at the stage environment: we also have a project here with the tasks application deployed. If you look at the image stream tags, you see that we have an image tagged with the version of the application, and that version comes from the POM file, the same version we use in Nexus, so that we can recreate new containers based on this specific version of the image in our stage or other environments. So that's how it looks: a complete pipeline all the way from source code to deployment in multiple environments, and as we move through the pipeline from left to right, we gain more confidence in our build. In the next part of this demo series, we're going to look at how to make a change in the source code and trigger a deployment based on the changes we make in the configuration or the source code. Thank you for watching this demo, and make sure you check out the next part.