Welcome to the second part of the OpenShift continuous integration and continuous delivery demo. In the first part, we looked at how to set up the entire environment of components we need in our CI/CD infrastructure: Nexus as the artifact repository, Jenkins, SonarQube, and Gogs as the Git repository. And we ran through the pipeline once by triggering the build manually. In this part, we're going to make a change in the code, enable a test for a REST endpoint, push that through the pipeline, and watch how our pipeline gets triggered. The pipeline is the same as in the first part. We have the Git repo, and Jenkins builds the code based on it. Then it does static analysis through SonarQube and at the same time runs the unit tests. If either of these steps fails, the entire pipeline fails. If both succeed, we move forward: we push the artifact to Nexus and start deploying the application by building a Docker image based on the WAR artifact we've built and pushed into Nexus. OpenShift, in fact, is doing that for us. We just give OpenShift the WAR file we have, and OpenShift, through its Source-to-Image (S2I) process, builds a Docker image based on that WAR file, layers it on top of the JBoss EAP image this application requires, and deploys it to a development environment. We run the integration tests, and afterwards we tag the image we built as a good image: we tag it with the version of this application, of this particular pipeline run, and deploy it into the stage environment for further testing. When we make a change in the Git repository, we want the Jenkins job to get triggered. Let's take a look at the Jenkins job. There are at least two options.
One is to have Jenkins keep polling the Git repo for changes on a specified interval, every five or ten minutes and so on; the other is to have the Git repo notify Jenkins whenever a change happens in the repository. We're going to do the latter, to reduce the load on the Git service; in general, it's not good to poll all the time, every minute or so. So I'm going to copy the webhook URL for this build job; it's under the build trigger here. Then I go to our Git repo in Gogs, to the settings, and under webhooks I add a webhook. A webhook is a mechanism for triggering events on other platforms whenever something happens in a Git repository. So here is the webhook URL for the Jenkins job I want to trigger. And since this Gogs instance is running in a container, it can just use the service name to reach Jenkins; we don't need a complete route URL. We send only the push events, whenever we have a push to the Git repo, and the webhook is active. There we go, the webhook was added. So now we can open JBoss Developer Studio, the Eclipse-based IDE with JBoss plugins that we can use for coding. Let's look at the Git view. We don't have any repositories yet; we want to clone our Git repository into the workspace. So I go back to our Gogs instance, copy the URL of the Git repo, and paste it here. It needs credentials, and I'm going to use the gogs user and the password we created earlier. It finds the master branch and asks where to clone this; I'm fine with the default, and we can import the projects into our workspace. Okay, it is cloned now, and we import it into our workspace as a Maven project so we can start coding with it. If I go back, we have the JBoss Tasks project imported in our workspace, and you can see that it's already connected to the upstream Git repo. So we have a set of REST endpoints for manipulating tasks.
Like getting tasks, creating tasks, deleting tasks, and so on, all based on JAX-RS. And we have a users resource, a users endpoint that gives the list of all users. For each of these we also have test classes, unit tests that we run. We have the tasks resource test that runs a small set of unit tests; they're all mock tests against the interface. I can run the tests once — okay, all green, three tests. There is also a user resource test with a number of tests here as well, but one of the tests in this case is actually ignored. That's not a good thing; we don't want to exclude tests from our pipeline. So I'm going to remove the @Ignore annotation so that this test runs as part of our pipeline as well. We have made a change now, and I'm going to commit the change to Gogs: "Included user endpoint tests", commit it, and push it to the upstream repository. Okay, there you go. If we take a look at Gogs, we should have a new commit here: "Included user endpoint tests", just committed to the platform. Okay, let's see what's going on in Jenkins. You see a job is triggered based on the git push we made to the repository. If I click on that run, I see a new instance of our pipeline has been triggered. It's running through build, test and code analysis, the push to Nexus, deployment to the development environment and the stage environment, as we have defined in our pipeline. It takes a few seconds to run through the pipeline. Okay, this pipeline failed: we went red in the test and analysis phase. Let's take a look at the logs to see what's been going on. In the test phase, one of our tests has failed: the users-sorted-by-tasks test. That's exactly the test we just enabled, so that was probably the reason someone had ignored it. Let's go take a look and see why this test failed. I go back to the repository in my IDE, and if I run this test right here inside the IDE — yes:
It fails the same way. What's the message here? We expected user two, but the actual value was user one. In this method we get the users from the users endpoint, and we expect them to be ordered based on the number of tasks they have created, but the order we get back is not correct. So let's take a look at our method, and there's the problem: we're getting a list of users, and the code that sorts them based on the number of tasks is commented out, and that's causing the issue. I'm going to uncomment the sort function and run the test once more. All green, super. Let's commit the change to Gogs: commit and push to the repository. Okay, go back to Jenkins. A new job is triggered, and it's running through the change we made. Let's see how the pipeline executes this time. We also get some metadata around the job execution: how many commits there have been in this particular run of the pipeline, and the exact hash and short hash of the commit, so we can track which resources have been touched. The pipeline plugin can also calculate average times for the stages and for the entire pipeline, so it gives us some nice statistics about the pipeline. We can monitor the trends: if the pipeline time suddenly spikes and increases by a couple of minutes, we need to go look at the pipeline and see what's going on, why it takes so much more time. Well, this run has been all successful. So that was the issue with our build, and since we fixed the test, all the stages have passed this time, and SonarQube also shows that it has done a successful analysis. We should have better code coverage now since we enabled the test: this was red before, and now it's a little more orange, so it got a little better. The users resource is very green, and the tasks resource is slightly less green. Okay.
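The fix in the demo is simply re-enabling a sort over the users list. Here is a minimal, self-contained Java sketch of that comparator logic; the `User` record and the descending order by task count are assumptions for illustration, since the real code lives in the project's users resource:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortUsersSketch {
    // Hypothetical stand-in for the demo's user model.
    public record User(String name, int taskCount) {}

    // Sort users by the number of tasks they have created, most first --
    // the kind of sort that was commented out in the users resource.
    public static List<User> sortedByTasks(List<User> users) {
        List<User> sorted = new ArrayList<>(users);
        sorted.sort(Comparator.comparingInt(User::taskCount).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        List<User> users = List.of(new User("user1", 1), new User("user2", 3));
        // user2 has more tasks, so it should come first.
        System.out.println(sortedByTasks(users).get(0).name());
    }
}
```

With the sort commented out, the list comes back in insertion order and the test sees `user1` where it expects `user2`, which matches the failure message in the demo.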
Since the pipeline was successful, we're deploying into our development environment. If I switch to the dev project, you can see a new build was created: it uses the WAR file we just built in Jenkins and pushed into Nexus, builds a Docker image based on it, and redeploys it in the development environment. It was successful in bringing the container up. And it did the same in the stage environment, since the tests passed, so we've got a new instance of the container running in the stage environment as well. So that's about it for today's continuous integration and continuous delivery demo on OpenShift. Using Jenkins, Nexus, Gogs, SonarQube, and the other components, all running in containers on OpenShift, you can build end-to-end continuous delivery pipelines that trigger actions on OpenShift, with the entire infrastructure also running on OpenShift, as you've seen in this demo. Thank you for watching.
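As a footnote on the webhook mechanism set up earlier: what Gogs delivers to Jenkins on a push is just an HTTP POST with a JSON payload to the job's trigger URL. Here is a rough Java sketch of constructing such a notification; the service URL, job name, and payload fields are illustrative assumptions, not the exact Gogs payload format:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class WebhookRequestSketch {
    // Builds (but does not send) a push-event notification like the one
    // Gogs would deliver to Jenkins. Inside OpenShift the service name
    // ("jenkins") is enough to reach the pod; no full route URL is needed.
    public static HttpRequest buildPushNotification() {
        String payload = "{\"ref\":\"refs/heads/master\","
                + "\"commits\":[{\"message\":\"Included user endpoint tests\"}]}";
        return HttpRequest.newBuilder()
                .uri(URI.create("http://jenkins:8080/job/tasks-cd/build")) // assumed job URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildPushNotification();
        System.out.println(req.method() + " " + req.uri());
    }
}
```

This is why only "push events" needed to be enabled in the Gogs webhook settings: the webhook fires one such request per push, and Jenkins starts the pipeline in response.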