The topic of this video is WildFly Swarm microservices. This publicly available reference architecture paper describes the design, implementation, and deployment of WildFly Swarm microservices on Red Hat OpenShift Container Platform. The associated GitHub repository contains all the source code and related artifacts. Chapter 4 of the paper walks the reader step by step through duplicating the reference architecture environment in an OpenShift cluster, and those same instructions are duplicated in the README file of the GitHub repository. I've set up this environment in my own OpenShift cluster, so if I go there and run oc get pods, I can see the various pods, including the ones that are running as well as the builder pods. To filter those out, I'll add --show-all=false, and here are my running pods. Now, to access them: OpenShift pods are exposed through routes, so I'll run oc get routes, and I can see there's a presentation app, which is the main application and entry point. I'll copy that hostname and open it in my browser, and that is how I access the application front end. This is a flight search, so if I search for flights from Los Angeles to New York (let's pick JFK) and pick dates for a round-trip flight, I get the results, which is 729 flights. I can see the details of each one individually, or I can change this to a one-way flight and try various dates. Now, if we look at the source code, the way this happens is that there's a presentation app which provides the client, which is JavaScript based, and which also follows an API gateway pattern to create an aggregator that accesses the microservices in the back end and provides them to the front end.
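The commands narrated above look roughly like this; they run against a live OpenShift cluster, so treat them as a command sketch rather than something to paste blindly:

```shell
# List pods, hiding completed builder/deployer pods
oc get pods --show-all=false

# List the routes that expose the services; the presentation route
# is the application entry point to open in the browser
oc get routes
```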
There is an airport service, which is essentially in charge of the airport names, distances, and so on; there is the flight service, which holds the static itinerary for all the flights based on the schedule; and there is a sales service, which is the pricing service that figures out how to price each flight depending on the date, how long the flight is, and so on. On top of these, there are a couple of other things we're going to discuss. There is the edge service, which is essentially a reverse proxy with both static and dynamic routing, and there is a second version of the sales service that we're providing for A/B testing; I'll get into those a little later. Right now I'm going to go back here, because there was another route exposed, and that's for the Jaeger query service. Jaeger is the distributed tracing software, so I'm going to open that in the browser and search, let's say, for the presentation service being hit. I find one, and here you can see, in a tree drill-down form, how the presentation service makes other calls as other spans open: it calls the flight service, which calls the airport service, or presentation itself calls airports, or it calls sales, and so forth. So that's the basic functionality. In terms of the code, if we open a simple service like airports and take a look at what's there, the main entry point is the airport service, and the controller here is the REST application. You can see it has a path and defines two different operations: /airports by GET, and /airports/{code} if you want to get a specific airport. There is an initialization class for it, which is a web listener that takes care of eager loading all the airports from the provided spreadsheet, and there is a configuration file so Jaeger can connect to it, and so on.
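The two REST operations on the airport service can be exercised with curl; the hostname below is a placeholder (substitute whatever oc get routes shows for your cluster), and the JFK code is just an example:

```shell
# Placeholder route hostname; replace with the real one from "oc get routes"
AIRPORTS=http://airports-myproject.apps.example.com

# GET /airports: list all airports
curl "$AIRPORTS/airports"

# GET /airports/{code}: fetch a specific airport by code
curl "$AIRPORTS/airports/JFK"
```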
So this is a pretty simple WildFly Swarm REST application, and really a lot of the work just happens in the pom.xml: by pointing to WildFly Swarm and so on, we're able to simply spin it up and have a REST service as a microservice right there. Some of the more complicated ones include presentation, but let's go back to the application and its functionality. One of the things I'm going to show here is that if we look at the code in presentation, say the gateway controller, it uses a thread pool whose size comes from a Hystrix setting. Now I'm going to get a log of the presentation pod and grep for the word "batch". Notice that it sometimes reports a batch of 20 tickets followed by 7 tickets. That's because there were 27 results and it's doing batches of 20: it does 20 and then it does another 7. Now imagine that, based on your environment, you want to change this; you want to provide external configuration. The way we can do that is by creating a project-defaults.yml file and specifying a different Hystrix configuration in it. So I'm going to do that here: copy and paste that in, and then I can create a config map based on this file (I already had the config map), and once I have that, I can apply this volume configuration. What I'm doing here is mounting the project-defaults.yml I just created into the pod under the deployments directory, which puts it on the class path. What that will do is selectively overwrite some of the properties. If I look at the pods right now, I can see presentation-2 is coming online; as presentation-2 gets deployed, presentation-1 is terminated, and we're left with the new one.
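A sketch of that external-configuration step. The Hystrix property path, the config map name, and the mount path here are assumptions for illustration; the exact file and commands are in the repository's instructions:

```shell
# project-defaults.yml overriding the Hystrix thread pool size
# (property path is an assumption for illustration)
cat > project-defaults.yml <<'EOF'
hystrix:
  threadpool:
    default:
      coreSize: 30
EOF

# Package the file as a config map
oc create configmap presentation-config --from-file=project-defaults.yml

# Mount it under /deployments so it ends up on the class path,
# selectively overriding the baked-in properties
oc set volume dc/presentation --add --name=config \
  --type=configmap --configmap-name=presentation-config \
  --mount-path=/deployments/config
```

Adding the volume changes the deployment config, which is what triggers the new presentation-2 rollout described above.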
What's going to happen as a result is that our thread pool size changes: instead of 20 it's going to be 30. So if we do a search for a flight to New York that gives us, say, 27 flights back, it should happen in one batch of 27 instead of two batches of 20 plus 7. If we check back, presentation-2 is online now, so I can pull the same log, this time from the new pod, but before doing that I have to do another flight search for New York. This is a little slower now because it's the new pod. And it is 27 flights. If we look at the log now, we can see it's pricing one batch of 27 rather than 20 plus 7, because the configuration for the Hystrix thread pool has changed. So that's one of the secondary changes we make to this deployment. The other one is A/B testing. I have the sales service in charge of pricing, but I also have the sales2 service, which is essentially the same except that I'm trying a different price calculation, and I want to have both of these serving users at the same time, with the users split in half consistently. So what I'm doing is, based on the IP address of the user, sending half of them to sales for pricing and half of them to sales2. The only difference is that the extra-hop discount used in the pricing formula is changed from 0.8 in sales to 0.9 in sales2. That's what sales2 is. But how do we actually get half the callers, say those with an even last digit in their IP address, sent over to sales2 for pricing? We do that by using the edge reverse proxy, which is essentially an in-house solution that does some of the same things that, say, Netflix Zuul or similar software does in other ecosystems. If I look in here, there's a miscellaneous folder that includes a routing JavaScript file. The routing script says: if somebody is trying to reach the sales service, let's look at their IP address.
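The batching effect above can be sketched with a little shell arithmetic. This is a toy model of the behavior, not the application's code: with a pool of 20, 27 tickets split into 20 + 7; with a pool of 30, they fit in a single batch.

```shell
# batches TOTAL POOL -> prints one batch size per line
batches() {
  local total=$1 pool=$2
  while [ "$total" -gt 0 ]; do
    if [ "$total" -ge "$pool" ]; then
      echo "$pool"            # a full batch
    else
      echo "$total"           # the final partial batch
    fi
    total=$(( total - pool ))
  done
}

batches 27 20   # prints 20, then 7
batches 27 30   # prints 27
```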
Let's print their IP address and figure out whether the last digit of the IP address is even, and if it is, send them to sales2 instead of sales. To do this, I first have to copy this JavaScript file to my shared file system. Once it's there, I have to create a persistent volume for OpenShift and use the persistent volume claim in the pod. Let's first verify this. There we go: the edge persistent volume claim is bound to the shared file system. Now all I have to do is use this command to tell OpenShift to go to the deployment config for edge, add a new persistent volume claim called edge, and mount it off the root at /edge. I'm going to do this and watch as the pod gets deployed again. Notice we now have an edge-2 deployment. It's the same thing again: edge-1 terminates when edge-2 is ready to take over. Now, what happens once this is in place? If I look at my edge mapping, I have a JavaScript mapper that looks for /edge/routing.js. If that JavaScript file is there, the mapper reads it and, based on its result, reroutes the request however the script says, which in this case means it goes to sales2. Once the edge-2 pod is up and running, let's go back to the application and do another search and see if we can notice the price change. It's $247 for the 9/22 flight. I'll change it to the 23rd (again, this is the first time hitting it, so it's slow) and then change it back to the 22nd: now it's $271 for the flight. So let me look at the log for edge-2, and notice it says it detected my IP address ending in 100 and is rerouting to the B instance.
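The routing decision made by routing.js can be sketched in shell as a toy equivalent (the real logic lives in the repository's routing script): take the last octet of the caller's IP address and send even addresses to sales2.

```shell
# route_for CLIENT_IP -> prints the target service name
route_for() {
  local ip=$1
  local last=${ip##*.}        # last octet, e.g. 100 from 10.128.2.100
  if [ $(( last % 2 )) -eq 0 ]; then
    echo "sales2"             # even: the B instance with the 0.9 discount
  else
    echo "sales"              # odd: the original pricing service
  fi
}

route_for 10.128.2.100   # prints sales2
route_for 10.128.2.101   # prints sales
```

Keying the split off the client IP is what makes it consistent: the same caller always lands on the same pricing variant.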
If I look at the log for the sales2 service, I can also see that it's being asked to price tickets, which wasn't happening before; this is the result of the edge service reading that JavaScript, checking the IP address, and routing to sales2 instead of sales. So the code is all available in the GitHub repository, and you can go through it; the reference architecture paper contains detailed explanations for each of these parts. Chapter 5, Design and Development, basically walks through every aspect of the microservice development and what's going on here.