Welcome to the second video in a series of three, showing how to get a BRMS decision service working on OpenShift with all the webhooks configured. In this second video, we'll create the decision service on OpenShift, we'll configure Maven to use a proxy, and then we will test the service.

So let's come to OpenShift. This is a blank project, and I'm going to add my decision service. My decision service is going to be based on a template that's included with the OpenShift 3.1.1 installation. The template is essentially a preconfigured form, with information that gives you an easy jump start. The repo that I'm using is my Git repo here; this is where the project is, and now I ask OpenShift to build the project. So right now OpenShift is going to GitHub and cloning my repo, and it will start building my BRMS decision service on OpenShift based on that repo.

As you can see, it's downloading all my dependencies from Maven, over the internet, and we do not want that. We want the dependencies to be downloaded from a Maven proxy. I have a Maven proxy configured here on OpenShift, in this project here. I configured this Maven proxy using instructions from a blog post called "Improving Build Time of Java Builds on OpenShift". I followed the process in the blog, and now I have my Nexus proxy configured and working properly.

So let's change the build configuration to make it use the proxy. I'll leave this build here running, but I'll change the build configuration so the next builds can take advantage of the proxy. It's very easy to point the build configuration at the proxy: you just need to add another environment variable, the Maven mirror URL, and specify a value, which is the URL coming from Nexus. I have a group here, which is a group of all my repos, and this is the URL I'm going to use in my project.
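The steps described so far can be sketched with the `oc` CLI. This is a rough outline under stated assumptions: the project, template, repo, and Nexus names below are illustrative, not the exact ones used in the video.

```shell
# Create a project and instantiate the decision service from a bundled
# template (template and repository names here are assumptions).
oc new-project decision-demo
oc new-app --template=decisionserver-basic-s2i \
  -p SOURCE_REPOSITORY_URL=https://github.com/example/decision-service.git

# Point future S2I builds at the internal Nexus group repository by
# adding the MAVEN_MIRROR_URL environment variable to the build config.
oc set env bc/decision-service \
  MAVEN_MIRROR_URL=http://nexus.nexus.svc:8081/nexus/content/groups/public

# Trigger a new build and follow its logs to confirm that dependencies
# are now resolved from the local Nexus instead of the public internet.
oc start-build decision-service
oc logs -f bc/decision-service
```

A running build that started before the change keeps its old environment; only builds started after `oc set env` pick up the mirror.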
Okay, here it is. I added the Maven mirror URL, and it's all fine. That means the next builds will always use this URL. So we can come here and see which variables have been applied, and I'll make sure that my Maven mirror URL has been applied correctly. It seems that value is not there for some reason, so let's do it again. Okay, let's save and see if it's there. Now it is good; I may have missed something there. So all the next builds will use that Maven mirror.

This build took around 1 minute and 40 seconds to execute. Now I'm going to start the build again, and this next build should take less than that. So let's go to the build, access the build logs, and verify that it is using the mirror. The build is starting, and as you can see, it's downloading from my Nexus. Starting on this line, where the downloads begin, you can see that it's going to my local Nexus instead of the public internet. That will make my build happen much faster, because the downloads take much less time.

While it does the build, I will show you what this project is about. It's a very simple rules package where I receive a person, and this person has a name. If this person's name happens to be Retori, I reply with a salutation that says "at your service, my master". If that person is not Retori, I say "get out of my way", whatever the name is. So this is a very simple decision service.

Let's come back to OpenShift while it's doing the build. The build has been completed, and we can see that the build time has improved quite considerably: it went from 1 minute and 40 seconds down to 55 seconds. That's one of the advantages of having a proxy. Now OpenShift is starting my Docker image with my decision service, and as you can see, it is light blue here. Light blue means that the container has started, but it is not yet ready to respond to requests.
And that is because we have included a readiness probe on the container: OpenShift will only send requests to the container once it has passed all its readiness probes. You can configure readiness probes to be anything that you want, like a simple bash script that verifies whether a certain port is open, or whether a certain log message is present. Here we can see that the container has passed the readiness probes, and that means we can already send requests to the service.

So this is the service URL, and I'm going to test my very simple BRMS decision service. I have a REST client, and the URL here is correct. I'm going to send a name that's not Retori, and we should get back "hey, you, get out of my way", as in the first video. Now we change it to Retori, send, and it says "hello, Retori, at your service, my master". So this was the second video in the series of three. Thank you.
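The two pieces above can be sketched from the command line. Everything here is an assumption for illustration: the deployment and route names, the endpoint path, and the JSON payload shape, since KIE/BRMS REST endpoints vary by version.

```shell
# A readiness probe can be attached to the deployment so OpenShift only
# routes traffic once the container reports ready; here, a simple TCP
# check on the service port (resource names are hypothetical).
oc set probe dc/decision-service --readiness --open-tcp=8080

# Once the pod is ready, exercise the service through its route.
# The hostname, path, and payload are illustrative assumptions.
curl -s -X POST "http://decision-service.apps.example.com/execute" \
  -H "Content-Type: application/json" \
  -d '{"person": {"name": "Retori"}}'
# Sending a different name should instead return the
# "get out of my way" salutation.
```

A probe could equally be an HTTP GET on a health endpoint or an exec'd script, as mentioned above; the TCP check is just the simplest form.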