Hello, everyone. This video is about building and running microservices on OpenShift. I'm going to do a couple of videos that bring the whole story together, and this is the first one of them. Before we start thinking about building and running microservices, we need a quick overview of how microservices can be thought of as running on OpenShift as a platform. And before we get into microservices themselves, let's see how applications run on OpenShift. If you have seen my earlier videos, you are already aware of this, so this is just a quick recap and I will not spend a lot of time here.

So how do applications run on OpenShift? First, for those who are new to OpenShift, everything in OpenShift runs as a pod. The pod is the first-class citizen in OpenShift. If you take an application instance or an application component instance and create one instance of it, that runs as a pod. A pod gets an IP address, and in general a pod has a single Docker container running inside it. It could have more than one Docker container: if those containers are tightly related and have to live or die together, then you could run more than one Docker container inside a pod. The containers running inside a pod share the IP address given to the pod, and they also share the pod's storage.

Next, where do pods run? Pods run on the application hosting infrastructure, which is the OpenShift nodes. If you look at the bottom of the diagram, those orange boxes are real machines on which OpenShift is installed; that is the application hosting infrastructure. On top of that hosting infrastructure, your containers are running inside pods. On a particular OpenShift node you may be running a bunch of pods, and these pods get distributed across different OpenShift nodes.

Next is the concept of a service. You may have a bunch of pods for the same application component. Say you take your WildFly Swarm application that's deployed as a fat JAR, and that's running as one pod instance. If you want to scale it up to 20 instances, those are 20 pods, and those 20 pods can be on any of the OpenShift nodes. How do these all tie together, and who does the load balancing across them? That's done by a Kubernetes service. A service groups the pods based on labels and does load balancing. The clients talk to the service, and the service load balances across the pods.

Next, how does this all come together? Now you understand that there are a bunch of pods that you could run to scale up your application. If you are running just one instance of your application component, you are running one pod; if you are running 10 instances, you are running 10 pods, and these pods could be anywhere on your cluster of OpenShift nodes. They are front-ended by a service. When you group this together, you could call it an application component. This term is not coming from Kubernetes; it's just a word I coined. You take an application component, where you have a bunch of instances of your application running as pods, front-ended by a service, and that service exposes a service name and port. As long as you know which service you are talking to and which port to use, you can talk to that service.
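To make the pod and service ideas concrete, here is a minimal sketch of what those two objects might look like in YAML. The names, labels, image, and port are illustrative assumptions, not taken from the video; the point is simply that the service selects pods by label and exposes one stable name and port.

```yaml
# Illustrative pod: one container of a hypothetical image, labeled so the
# service below can select it. In practice you would usually create pods
# through a deployment config rather than directly.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-1
  labels:
    app: myapp                     # the service selects pods by this label
spec:
  containers:
    - name: myapp
      image: example/myapp:latest  # placeholder image name
      ports:
        - containerPort: 8080
---
# Illustrative service: groups all pods labeled app=myapp and load balances
# across them under one stable name and port.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 8080                   # clients inside the cluster use myapp:8080
      targetPort: 8080
```

With something like this in place, any client inside the cluster could reach the application at myapp:8080, regardless of how many pod replicas sit behind the service or which nodes they land on.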
This concept of a service is internal to OpenShift, so any client running on OpenShift can access the service using the service name. The service name is resolved using the internal SkyDNS within OpenShift, so as long as you know the service name, it resolves to the IP address given to the service. Assume the client is another application or something else that wants to use this service. This whole thing put together is an application component.

Now let's say you want to expose this service to the outside world, meaning you want a URL and you want to access the service from outside. When you expose the service, OpenShift creates another object called a route (see the sketch at the end of this section for what a route definition might look like). A route is nothing but an object that assigns a name to your application component: you get a URL, and that URL in the router is tied to a bunch of pods as endpoints. When a request comes in from an external client, and by external client I mean something outside OpenShift that wants to access this as a RESTful call or via the browser, you type in a URL and access it. The request goes to the router, the router knows that it is servicing these N pods, and it does the load balancing. So when you want to expose your application component to the outside world, you would use a route.

Now, if you think about a holistic application that you want to expose to the outside world, it may be a multi-tiered app. In such a case you may have, for example, a database that's running in its own pod, front-ended by its own service, and you may have a pod that runs business logic; if that business logic needs to access data, it would use the service that is front-ending the database. Your business logic component has its own service that can be exposed to internal clients, and if you want to expose it to external clients, then you would expose a route. This is what a multi-tiered application looks like when you group together a bunch of components.

Next, you may have a set of services that are independently deployed and running, which may depend on each other. For example, your multi-tiered application may want to use another service deployed on OpenShift. It requires an interface to talk to that other service, the other service exposes its own interface, and then your application components can talk to each other. Your multi-tiered app can talk to another application component running on OpenShift via the route it exposes. This is how your applications and services come together, and this is how things run on OpenShift.

Now that you understand how applications run on OpenShift as pods front-ended by services, how routes are defined, and how the different components come together and run on OpenShift, the next step is to understand a little bit about microservices and how they tie into this kind of architecture. We will cover that in the next video.
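As a closing illustration, here is a minimal sketch of what exposing the example service above as a route might look like. The hostname is a made-up placeholder; on a real cluster you could also simply run oc expose service myapp and let OpenShift generate the route and hostname for you.

```yaml
# Illustrative route: gives the myapp service an externally reachable URL.
# External requests to this hostname hit the OpenShift router, which load
# balances across the pods behind the service.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.apps.example.com     # placeholder hostname
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8080
```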