The topic of this video is Building and Deploying Spring Boot Microservices on Red Hat OpenShift Container Platform. This publicly available reference architecture paper describes various items like the software stack, and provides a reference architecture application in an associated GitHub repository with all the source code and configuration artifacts required to duplicate the environment. Chapter 4 of the paper, Creating the Environment, walks the reader step by step through setting up the application. This is duplicated in the GitHub repository's readme file here.

I've already set up this application in a local OpenShift cluster, so I'm going to go here and do oc get pods. You can see these various pods running. I'm going to do oc get routes to see which OpenShift services have been exposed for access from outside the cluster. You see one of them here is presentation; that's the entry point for the client UI of the application, so I'm going to open that in the browser. This application is basically a flight search application. I can search for a round-trip flight from Los Angeles to New York; let's pick JFK and some arbitrary dates here. You see there are 729 combination flights found. I can open any single one of them to get the details and so on. I'm going to change this to a one-way flight, and here you can see there are 27 flights found.

Now that's the working application. The next thing I want to demonstrate is how we can do external configuration for this application. To do that, I'm going to go here and, first of all, take a look at the presentation pod log and grep in there for the word "batch". Notice it says it's pricing a batch of 20 tickets, followed by 7 tickets; there were 27 flights. To see how this is happening, I can go to the source code and show you. What's going on here is that there is a Hystrix thread pool being used, and the thread pool size depends on how Hystrix is configured.
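The batch size is governed by the Hystrix thread pool's core size. As a rough sketch, the relevant configuration entry could look like the following, assuming the standard Hystrix default thread-pool property; the repository may use a named thread pool instead of the default one:

```yaml
# Hypothetical excerpt from src/main/resources/application.yml.
# Hystrix sizes its command thread pool from this property; with a
# core size of 20, pricing calls fan out in batches of at most 20
# concurrent requests.
hystrix:
  threadpool:
    default:
      coreSize: 20
```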
So this uses the Hystrix reactive API to make the calls separately and bundle them together, and it does it in batches of 20 right now. That's because if I look under src/main/resources, I can see the application YAML file, and I can see the core size for this thread pool is set to 20. Now imagine you want to change this depending on your deployment environment. That's what we're going to do here with external configuration.

So what I'm going to do is create a new application YAML file in my OpenShift environment. For the content, I'm going to copy this content, which sets the thread pool to 30 instead of 20, and save it. Then I'm going to create an OpenShift ConfigMap based on this application YAML file I just created. With the ConfigMap in place, I'm going to edit the deployment config for presentation and mount this application YAML from the ConfigMap into the deployments folder off the root, under config. So I'm going to put it right here and save this.

Now I'm going to watch oc get pods, and notice here that there is a second presentation pod being created and deployed. As soon as the second pod is running, the first pod is going to be terminated and all traffic is going to get rerouted to the second pod. Once that happens, if I do a new flight search, what I should notice is that instead of batches of 20 like before, it should be batches of 30, which means all 27 flights should be processed in a single batch.

Now that the second presentation pod is running, let's switch back to the application and do another couple of searches; let's just change the date here. It's taking a little bit longer now because it's a new pod and it has to be initialized. But here are the 27 flights again. So let's go back and repeat that same log command for the new pod that's been deployed, and look at the batch size.
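For reference, the ConfigMap created here could look roughly like this. The ConfigMap name is an assumption; what matters is that the application.yml key ends up mounted under the directory where Spring Boot reads its external configuration:

```yaml
# Hypothetical ConfigMap holding the override. Mounting the
# application.yml key into the deployments config directory lets
# Spring Boot pick up the new thread pool size on the next rollout.
apiVersion: v1
kind: ConfigMap
metadata:
  name: presentation-config
data:
  application.yml: |
    hystrix:
      threadpool:
        default:
          coreSize: 30
```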
And you can see it's saying it will price a batch of 27 tickets, because the thread pool is 30 and there are only 27, so they're one batch now. So that's using OpenShift ConfigMaps for external configuration of a Spring Boot deployment.

The next thing I want to demonstrate is A/B testing. If I walk through my application really quick to see what's happening here: I have a presentation project, and it has an API gateway implementation, which is essentially an aggregator service running in the back end. It also has, of course, the front end, the kind of Web 2.0 client JavaScript front end that's running in my browser. These two are running here. All the calls from the aggregator are actually going through a proxy that's implemented by the Netflix Zuul service. The Zuul service implementation typically just goes to its own property file and says: where do you want to go? Do you want to go to airports? I'll route you to airports. If you want to go to flights or sales, I will route you as appropriate. And that's what's happening here.

Now, in addition to this, there is also logic here that looks off the root of the file system for any Groovy scripts, and if it finds one, it uses that Groovy script for dynamic routing. So what we want to do here is use a Groovy script to provide dynamic routing, so that half of the callers get priced differently for their flights. The way we do that is to copy this Groovy script into a shared file system that's been made available.

Okay. After copying the file, what we need to do is create a persistent volume. I'm going to do oc get pv, and then I'm going to copy this. So the persistent volume is created, and then a persistent volume claim based on it; that's also created. If I check what persistent volume claims there are, there is now a groovy claim that's bound to this persistent volume here.
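The persistent volume claim mentioned here could be sketched like this; the claim name, access mode, and storage size are assumptions for illustration:

```yaml
# Hypothetical PersistentVolumeClaim bound to the shared volume that
# holds the Groovy routing script, so the Zuul pod can mount it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: groovy-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```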
Now that that's done, we're going to mount this file system into the container that's running Zuul. All right. And now if I do a watch again, you can see that a new version of the Zuul container is being deployed and the old one is going to get terminated.

So what does that do? Let's open up this Groovy script. What this Groovy script does is implement a Zuul filter. When there's a request, and the request is going to the sales service, which is the service responsible for pricing, it goes and looks at the HTTP header to find out what the caller's IP address is. It looks at the last digit of the caller's IP address: if it's an even IP address, it does not filter; if it's an odd IP address, it does filter. That's obviously arbitrary; we just want to filter for half of the callers and not for the other half. For the half that we do filter, what we want to do is reroute them so that instead of going to the sales service, they go to the sales v2 service.

You can see here I have the sales service and the sales v2 service to determine the price of each flight. They're marginally different in that, for example, in the sales service I have this discount of 0.8, whereas in sales v2 I have a discount of 0.9. So the theory is I want to test to see how customers react to these different prices, what the difference in sales is, and so on.

Now let's follow the log for this new Zuul service, and let's go do another search. All right, so you can see the caller IP address is even, and based on the Groovy script, an even IP address will not get filtered. So in order to have this do the filtering that I wanted, I'm going to have to edit this file and swap the true and false around: so this is true, and this is false. I'm going to save this. And then, just like the instructions would tell me to do,
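The routing decision in the Groovy script comes down to the parity of the last digit of the caller's IP address. Here is a self-contained sketch of just that check in plain Java; the class and method names are hypothetical, and the real filter additionally checks which service the request is targeting before rerouting:

```java
// Sketch of the A/B routing decision: filter (i.e., reroute to the
// sales v2 service) when the last digit of the caller's IPv4 address
// is odd; leave even addresses on the original sales service.
public class AbRouting {
    public static boolean shouldFilter(String callerIp) {
        // For a dotted-quad address like "10.0.0.17", the last
        // character is the last digit: '7' -> odd -> filter.
        char last = callerIp.charAt(callerIp.length() - 1);
        int digit = Character.digit(last, 10);
        return digit % 2 != 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldFilter("10.0.0.17"));
        System.out.println(shouldFilter("10.0.0.24"));
    }
}
```

Swapping the true and false, as done in the video, amounts to negating this return value, so even addresses get filtered instead.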
I'm going to roll out a new version of this pod so my changes get picked up. Once again, all I have to do is watch and make sure that the zuul-3 pod gets deployed and that zuul-2 is terminated and replaced by it. With the zuul-3 pod running now, I'm going to stop the watch, and once again I'm going to follow the log of this pod and do another search. And now you can see it says it's running the filter. The fact that it's running the filter means that if I look at the sales v2 pod, it's doing pricing, which it wasn't doing before. Okay. You may also notice that the prices are different: if you keep an eye on an actual airfare at an exact time, you can see a difference in the price compared to what it was before.

Now, the other thing I can look at here: if I do oc get routes, other than the presentation pod there's also Zipkin, which is used for distributed tracing and is exposed as a route. If I go to the Zipkin UI and do a search for the presentation pod, let's see here. There's a time difference between these machines, so I'm just going to do a search over the last full day. This is actually the one we just ran. If I open one of these, it says 119 spans, and you can see the tracing of the call as it goes from presentation to another span within it, to the flight service, or to the sales service to do the pricing. And if I refresh this and get a new one, I should also be able to see it when it goes to the sales v2 service instead of the sales service.

Basically the whole code is there in GitHub to look at, and hopefully a lot of it is self-explanatory. But the reference architecture paper also includes Chapter 5, Design and Development, which goes over all the different patterns and components in a lot of detail.