Hi, in this screencast I'm going to show you how to get up and running with the Instrument Reseller software system, a platform that runs under OpenShift and Kubernetes. First I'm going to show you how to install MicroShift, which is a condensed version of OpenShift. Then I'm going to show you how to install the instrument reseller code on Kubernetes running under OpenShift, and finally how to configure and run the tenants. But before we get started, let me go over the basic structure of the application. Instrument Resellers is a multi-tenant application that runs under Kubernetes. There are three tenants: one is called Clyde's Clarinets, another is called Betty's Brass, and the last is called Cindy's Saxophones. They all run under a single Kubernetes cluster, and each tenant achieves isolation through a distinct Kubernetes namespace dedicated to it. The tenants all share the same code base, which is stored as container images in a Quay.io image repository, as you can see on the left. There are two images: one is called instrument reseller seeder, and the other is called instrument reseller. The instrument reseller seeder runs as an init container under Kubernetes, and its job is to load seed data for each tenant. Each tenant gets data specific to its instrument type: Clyde's Clarinets gets clarinet data, Betty's Brass gets brass data, and Cindy's Saxophones gets saxophone data. The second container, instrument reseller, is the actual application code that each tenant uses; each tenant runs its own instance of it. The code itself is generic, but between the tenant-specific seed data and the generic application logic, we get a distinct tenant for each application running in the cluster.
So that's how it works; now let's get down to business and show you how it all goes. On the screen you can see a terminal window into a Fedora server, and the Fedora server is where the OpenShift Kubernetes cluster will run. As I mentioned earlier, we're going to use MicroShift, the condensed version of OpenShift. On the right-hand side of the screen you can see the MicroShift getting-started page. So what we're going to do now is install MicroShift. The first thing we need to do is install the CRI-O container runtime, which MicroShift depends on, so we run dnf install cri-o; it asks for my password, and that's okay. Then we enable the repository that provides MicroShift, which just takes a little bit, and we use systemctl to enable CRI-O. That's done. Now we need to install MicroShift itself; let me clear the screen so you get a better view. We enable the repository and install MicroShift. Okay, good. Next we need to open up some ports on the firewall: we add the cluster's IP range as a trusted source, and then we open port 80. The application will run over port 80 using virtual host names, which we'll talk about later in this video. We also open port 443, the HTTPS port, and port 5353, which MicroShift needs, and then we use systemctl to enable MicroShift. Okay, so MicroShift is installed, but now we need a way to talk to it, and the way we're going to do that is with the OpenShift and Kubernetes clients. So we download the OpenShift oc and kubectl clients and decompress the tar files. Then we need to create a directory where the Kubernetes configuration file will go, so we'll do that.
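Here's a sketch of the install steps I just ran. The package and repository names, the pod network range, and the exact ports are my assumptions from memory, so verify them against the MicroShift getting-started page for your Fedora release.

```shell
# Sketch of the MicroShift install steps above. Repo names, the pod
# network CIDR, and port numbers are assumptions -- check the MicroShift
# getting-started page for the exact values for your Fedora release.
sudo dnf install -y cri-o cri-tools           # CRI-O container runtime
sudo systemctl enable --now crio

sudo dnf copr enable -y @redhat-et/microshift # repo that provides MicroShift
sudo dnf install -y microshift

# Firewall: trust the cluster network, then open 80, 443, and 5353.
sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
sudo firewall-cmd --permanent --zone=public --add-port=443/tcp
sudo firewall-cmd --permanent --zone=public --add-port=5353/udp
sudo firewall-cmd --reload

sudo systemctl enable --now microshift
```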
Then we'll move the configuration file into a friendly place on the Fedora server. Now let's do oc get pods and see what's going on. No resources found. Okay, let's try this: let's use all namespaces. And you see there are definitely pods running. Some of the pods are still working their way up; it takes a little bit, and there's a lot going on in the background. But now MicroShift, the condensed version of OpenShift, is installed and running, so that part's done. All right, the next thing we need to do is install the source code, and the source code lives in a Red Hat repository called instrument resellers. There it is. So let's clone the source code onto the Fedora server. I'm going to do that by copying and pasting, because as you're seeing, I'm probably not the best typist in the world. And, as it turns out, not the best at copying and pasting either; let's try that again. All right. Okay, so now we have the source code; you can see it's in the instrument resellers directory. Let's go into instrument resellers, and then into the OpenShift directory. This is all the code: the source code, the documentation, everything you need to know about the project is in here. Most of the work has been done already; actually, all the work has been done already. The only thing we really need to do is bind an external MongoDB database into the Kubernetes manifest files. Let me make this a little bigger, move the browser down since we're not going to be using it for a while, and clear the screen. Now if you look in here, you'll see these YAML files, one for each tenant: brass, clarinet, and saxophone. Within each YAML file is a manifest declaration for each Kubernetes resource that the tenant will use.
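The client setup I just stepped through can be sketched like this. The download URL and the MicroShift kubeconfig path are assumptions from the docs I remember, so check the OpenShift client mirror and the MicroShift docs for the current paths.

```shell
# Sketch of the client setup. The download URL and kubeconfig path are
# assumptions -- verify them against the OpenShift mirror and the
# MicroShift documentation.
curl -LO https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux.tar.gz
tar -xzf openshift-client-linux.tar.gz oc kubectl
sudo mv oc kubectl /usr/local/bin/

# Copy the cluster's kubeconfig into the friendly place oc/kubectl expect.
mkdir -p ~/.kube
sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config

oc get pods --all-namespaces   # system pods should appear as they spin up
```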
So let's take a look at the brass YAML. I'm going to do that by running cat brass.yaml. You can see this is the manifest file in YAML, and here are all the resources, so let's go through them one at a time. The namespace resource is called Betty's Brass; the namespace is how each tenant is isolated in the cluster. Then we have a deployment, and the deployment defines what pods are going to be running within the cluster for the tenant. If we scroll down under the deployment, you'll see there's one type of pod, but it has both an init container and a regular container, as I showed you on the opening slide. The init container pulls the instrument reseller seeder image from Quay.io, and the regular container pulls the Red Hat Developer instrument reseller image from Quay.io. The seeder is the container image that does the data seeding, and instrument reseller is the container image that has the actual application logic. If we look further down, we'll see there's an environment variable for the MongoDB URL. These environment variables define something particular about the tenant. In this case, all the tenants share the same MongoDB cloud instance but bind to a different database within that instance, so we need to define that database, and the database is defined by the MongoDB URL. But you can see that the actual URL is hosted in a Kubernetes secret. If we go down and look at the secret, which is here, you'll see there's a URL property, but it holds a placeholder value. I put that in because what we need to do is substitute the actual URL of the MongoDB instance running in the cloud for the placeholder. Next we have the service, and the service is how the application exposes itself internally within the Kubernetes network.
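To make that structure concrete, here's a trimmed skeleton of what one tenant manifest looks like. The resource names, image paths, and placeholder token are illustrative guesses at the shape described above, not the repo's exact values; the service and route resources would follow in the same file.

```yaml
# Illustrative skeleton of a tenant manifest -- names, images, and the
# placeholder are examples, not the repo's exact values.
apiVersion: v1
kind: Namespace
metadata:
  name: bettys-brass
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: instrument-reseller
  namespace: bettys-brass
spec:
  replicas: 1
  selector:
    matchLabels: {app: instrument-reseller}
  template:
    metadata:
      labels: {app: instrument-reseller}
    spec:
      initContainers:
        - name: seeder               # seeds tenant-specific data
          image: quay.io/redhat-developer/instrument-reseller-seeder
          env:
            - name: MONGODB_URL
              valueFrom:
                secretKeyRef: {name: mongo-secret, key: url}
      containers:
        - name: app                  # the generic application logic
          image: quay.io/redhat-developer/instrument-reseller
          env:
            - name: MONGODB_URL
              valueFrom:
                secretKeyRef: {name: mongo-secret, key: url}
---
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
  namespace: bettys-brass
stringData:
  url: <mongo-url-placeholder>       # replaced with the real URL later
```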
And finally, we have the route, and the route is the mechanism by which those outside the cluster access the internal tenant. Part of the route is giving the tenant its host name, and in this case we're going to use bettysbrass.local; likewise Clyde's Clarinets will have clydesclarinets.local and Cindy's Saxophones will have cindyssaxophones.local. This domain name is what we'll use to access the tenant running within the cluster. Okay, so that's how it all works within the YAML file. Now what we need to do is bind in the MongoDB instance. So let's go back here. You can see that I created a utility shell script called set_mongo_url. The way it works is you run it with sh, and it takes as a parameter the URL of the MongoDB database running in the cloud. I'm going to cut and paste that in here, and as you can see, that is the actual URL, with user name and password. I'm going to change those after the video, so you'll never be able to get in, but that's how it is now. And now you can see that all the database URL substitution has taken place. Just to prove it, let's go back, scroll to the secret, and there's the database URL, which will change after this video is over. All right, that's done. So now what I have to do is actually inject the clarinet, brass, and saxophone resellers into the cluster, and the way I'm going to do that is with kubectl apply -f, passing each file. First the brass: you can see that the namespace Betty's Brass has been created, and all the other resources were created too. Next the clarinets: the namespace Clyde's Clarinets was created. And then the saxophones: you can see the namespace for Cindy's Saxophones was created as well.
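A helper like set_mongo_url can be little more than a wrapper around sed. Here's a minimal sketch; the placeholder token and the manifest file names are my assumptions for illustration, not necessarily what the repo actually uses.

```shell
# Minimal sketch of a set_mongo_url-style helper. The placeholder token
# and manifest file names are assumptions, not the repo's actual ones.
set_mongo_url() {
  local url="$1"
  local f
  for f in brass.yaml clarinet.yaml saxophone.yaml; do
    # Replace every occurrence of the placeholder with the real URL.
    sed -i "s|<mongo-url-placeholder>|$url|g" "$f"
  done
}
```

You'd invoke it with the full connection string, something like `set_mongo_url 'mongodb+srv://user:password@cluster.example.net/bettys-brass'`, and each tenant manifest's secret would then carry the real URL.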
And all the other Kubernetes resources that are needed were created also. So let's see what's going on. First, kubectl get pods. Nothing's found, because, remember, all the pods for the deployments were put into distinct namespaces. What do I mean by that? Let's do get pods with all namespaces, and, with my bad typing again, let's type kubectl, the Kubernetes client. What you can see now is that in the Betty's Brass namespace the init containers are running, in Clyde's Clarinets the init containers are running, and in Cindy's Saxophones the init containers are running. Everybody's trying to get up and running. Let's take another look: the pods are initializing, good, so things are happening in there. It takes a little while for this to spin up. One more look, and everybody's running. So right here, right now, the tenants have been installed. But there's one last little thing we need to do. Let's do oc get routes, using the OpenShift client, with all namespaces. As you can see, here are the routes going to each tenant: bettysbrass.local, clydesclarinets.local, and cindyssaxophones.local. Now, this is fine, but there's one problem: in order to actually call a route, we have to let the Fedora server know that the route's DNS name exists. The way we do that is to put an entry in /etc/hosts that binds each DNS name to the IP address of this server. I run ip addr and grep for the address; I happen to know it's a 192 address, and you can see it ends in 250. So I'm going to run sudo nano /etc/hosts, enter my password, and right now you can see that nothing's in here.
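The /etc/hosts entry is just one line mapping the tenant hostnames to the server's address. Here's a small sketch; the hostnames are as I've been rendering them in this video, and 192.168.1.250 is a stand-in for the demo server's address (only the 192 prefix and the trailing 250 appear on screen), so substitute your own.

```shell
# Build the /etc/hosts line that maps the tenant hostnames to the
# Fedora server's IP. Hostnames and IP here are demo stand-ins --
# substitute your own values.
make_hosts_entry() {
  local ip="$1"; shift
  printf '%s %s\n' "$ip" "$*"
}

make_hosts_entry 192.168.1.250 \
  bettysbrass.local clydesclarinets.local cindyssaxophones.local
# Append the printed line to /etc/hosts, e.g.:
#   make_hosts_entry ... | sudo tee -a /etc/hosts
```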
So what I'm going to do over here is copy and paste. Here it is: you can see I'm binding bettysbrass.local, clydesclarinets.local, and cindyssaxophones.local to the IP address, so the server will know how to get there. Now, what's interesting is that the domain names will be called against port 80, and OpenShift/Kubernetes is smart enough to map a domain name on port 80 to the internal tenant service. It's very smart; there's a lot of automagic going on. When I call Clyde's Clarinets, the request comes in on port 80, but internally within the cluster, OpenShift/Kubernetes maps that request to the appropriate service and port. So let's give it a try with curl. We're going to call clydesclarinets.local, under v1, and I happen to know that I built in a health check endpoint, so let's call the health check and see what happens, after I learn how to type. Okay, v1 health check, and you can see the health check is working: Clyde's Clarinets is up and running. This means the cluster is active and connected to the route. The next step is to check that there's some data behind it, that Clyde's Clarinets is indeed accessing its MongoDB database with the seed data. So let's ask: are there any purchases? Again, I don't expect you to know this, you'd have to know the API, but there's an endpoint for purchases. If we go to purchases, you can see it's actually connected to the database. That's really, really interesting, and really, really good: we're bound to the database, and Clyde's Clarinets is working. Let's see what happens if we go to bettysbrass.local and its purchases. Oh, there's some stuff for Betty's Brass: a nice horn, and that's brass. That's really good.
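The smoke tests I just ran look roughly like this. The /v1 paths are my rendering of the endpoints spoken in the demo, so confirm the exact paths against the project's API documentation before relying on them.

```shell
# Smoke-test the tenants through their routes. The endpoint paths are
# my rendering of the ones used in the demo -- confirm them against
# the project's OpenAPI docs.
curl http://clydesclarinets.local/v1/healthcheck
curl http://clydesclarinets.local/v1/purchases
curl http://bettysbrass.local/v1/purchases
```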
So there's data for brass; that's interesting. Now, the last thing I want to share with you is how we created this application, which is built on the OpenAPI specification. In other words, we created the spec for the API first, and then we implemented the code. OpenAPI has a utility that you can bake into your code that displays the documentation for the API. Now, since this server is headless, with no web browser or graphical UI on it, what I need to do is bind my local machine to the remote cluster, and then I'll be able to view the documentation. That's a lot of words, but it will make sense as I do the demo. So I need to go to my local machine, and here it is. What I'm going to do here is run sudo and edit the /etc/hosts file on my local machine, entering my password after I learn how to type. Okay, and as you can see, there's nothing for bettysbrass.local, clydesclarinets.local, or cindyssaxophones.local, so I'm going to put that in now. And because, as you've learned, I'm a typing-challenged developer, go figure, I'm actually going to cut and paste it. Let's see if I'm cut-and-paste challenged as well. Okay, there you go. Now my local /etc/hosts knows how to call the IP address ending in 250 using the domain names, and that will map into the cluster, to the tenant service running within the cluster, against its own internal IP address. Boy, that was a mouthful; you don't have to memorize it all. The important thing is that now I can use my web browser to view the documentation served from the Fedora server. Let me save this. But before I can call the tenants running on the Fedora server from my local web browser, I need to open port 80 on the Fedora server; I didn't open that yet, so port 80 is still locked out.
To do that under Fedora, using the Cockpit web UI, I go to Networking, then edit the firewall rules and add a service. I'll keep it simple: I search for 80, and you can see there's an HTTP service on port 80, so I add that service to open up the port. And there it is, all running; in the terminal you can also see that Cockpit is open on 9090 and SSH is open on 22. Okay, so that's done. Now let me go to my local browser and enter http:// followed by clydesclarinets.local/docs. docs is the endpoint that the baked-in OpenAPI code supports; when I call docs, the automagic within the application code brings up the interactive documentation for the Clyde's Clarinets API. That's the goal. So let's do that, and there you go: there's Clyde's Clarinets, and you can see how all the mappings are working. One of the nice things about the API documentation is that it's interactive, so I can go to Try it out and run the health check, and you can see the health check: Clyde's Clarinets works just fine. If I want to learn more, say I want to get the instruments, I can go to Try it out and Execute, and nice, there are the instruments, and this is all live data. And if I want to learn more about the models in general, in other words, what is an instrument, the API documentation shows me the description of the instrument model and a purchase. There it is. It's really informative; it's one of the things I like about OpenAPI, and about the automagic of the built-in API documentation display. So there you have it. Let's review. In this video I said I was going to show you how to install MicroShift, which I did.
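If you'd rather do those Cockpit steps from the terminal, the firewall-cmd equivalent would be something like the following (run on the Fedora server itself):

```shell
# CLI equivalent of opening the HTTP service in the Cockpit UI.
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --reload
sudo firewall-cmd --list-services   # the list should now include http
```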
Then I showed you how to install the instrument reseller code on a Kubernetes cluster, which I did. And then I showed you how to configure and run the tenants, each distinct tenant, Clyde's Clarinets, Cindy's Saxophones, and Betty's Brass, from within a single Kubernetes cluster, and I did that too. So I hope you enjoyed the video. I know it's a little detailed, but the devil is always in the details. Take a look at the code and keep moving forward with it. Thanks for watching. Bye.