Hello, everyone. This is the second video in the series on building and running microservices on OpenShift. In the first video, we covered how application components are made up of pods and services, and sometimes routes if you want to expose them outside the cluster. In this video, we will see how microservices fit into the context of OpenShift. The intent here is not to explain what microservices are or to teach the advantages of adopting microservices; there is plenty of good material and documentation available for that. This video is only about how microservices run on OpenShift. To quickly define microservices, I'm using Martin Fowler's statement on how he defined them: microservices are a way of building applications as suites of smaller services. Each service should be independently deployable, and each service should scale independently of the other services it works alongside. Each service has a firm module boundary; that means it exposes specific interfaces so that other services can depend on that boundary and use it without worrying about side effects. Every service can be written in its own programming language, and we will see an example of that in a future video. And each service can be managed by its own separate team. These are, in general, considered to be the features of microservices. So how do we realize microservices using containers in OpenShift version 3? I'm again using the figures from Martin Fowler's articles so that it's easy to follow, because most people coming here will have already read them. He talks about how a monolithic application puts all its functionality into a single process, whereas in a microservices architecture you take each element of functionality and make it a separate service. So what you're seeing here are the individual services, the microservices.
These services run on a host as a bunch of separate processes, whereas in the case of a monolith, you have a host running the entire monolithic application. When you think about this from the perspective of OpenShift, the container host on which these services run is an OpenShift node, and the individual processes are microservices running in containers on that host. Now, each microservice itself may be made up of one or more containers. If you look at the microservice on the right side here, this white box is one microservice, and it is made up of more than one container: you have an application container and a database container, and the whole thing put together is the microservice. Considered from the OpenShift perspective, each of these is a Kubernetes pod. So this microservice is made up of two Kubernetes pods, while that other microservice is made up of a single Kubernetes pod. The next question is how these pods get placed on the nodes; where do they run? Your application may be made up of X number of pods, but are those pods running together on one box, or are they distributed across different boxes? That is handled by OpenShift: the OpenShift master provides a scheduler, and the scheduler decides where the pods are placed. Your pods are placed on the nodes of the OpenShift cluster based on policies that you can define in advance. You can define pod placement policies using affinity and anti-affinity rules, based on which the pods are distributed across the cluster. So if you're running multiple instances of an application component, that is, multiple pods for the same application, those pods get distributed across the nodes so that all your eggs are not in one basket.
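As a rough sketch of what such a pod and its placement policy might look like (all the names, images, and labels here are hypothetical, not from the video), this pod carries an anti-affinity rule asking the scheduler to prefer spreading replicas of the same app across different nodes:

```yaml
# Hypothetical pod definition for one microservice instance.
# The podAntiAffinity rule asks the scheduler to avoid placing two
# pods with the label app=catalog on the same node when possible.
apiVersion: v1
kind: Pod
metadata:
  name: catalog-app
  labels:
    app: catalog
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: catalog
          topologyKey: kubernetes.io/hostname   # spread by node hostname
  containers:
  - name: catalog
    image: registry.example.com/catalog:latest  # placeholder image
    ports:
    - containerPort: 8080
```

In practice you rarely create bare pods by hand; a controller (covered next) creates them from a template, but the affinity stanza looks the same either way.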
That placement is taken care of by the Kubernetes/OpenShift orchestrator, specifically the scheduler on the master. Another point is that microservices can scale independently; that was one of the features we talked about. Think about this particular application: there are two instances of it, multiple application pods, which have scaled independently, and in this case they are talking to a shared database. The next thing is scaling. OpenShift allows you to scale the specific pods you want up and down, independently, as needed. You can use either manual scaling or automatic scaling; both are possible. As you scale, the scheduler again places the pods, distributing them across the cluster, and scaling can go either up or down. Next is the replication controller, which steps in when pods are not responding or go down for some reason. There is an automatic check built into Kubernetes: if a pod dies, OpenShift finds another node where that pod can be spun up. So your microservices are not only scalable, you are also building in a degree of reliability: if the number of running instances drops below what you desired because a pod went down, this automatic mechanism spins up the required number of instances on the available infrastructure. Even that is built into OpenShift. To summarize, going back to our discussion in the previous video, applications come together from multiple application components running as independent pieces, or even from different applications talking to each other. Each of these can be considered a microservice, and as you can see, each microservice can run a number of pods.
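The desired-count behavior described above can be sketched with an OpenShift v3 DeploymentConfig (again, the names and image are hypothetical). The replication controller it creates keeps the stated number of replicas running, recreating pods on other nodes if one fails:

```yaml
# Hypothetical DeploymentConfig for the same microservice.
# The underlying replication controller maintains spec.replicas pods
# at all times, replacing any pod that dies.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: catalog
spec:
  replicas: 2                 # desired number of pod instances
  selector:
    app: catalog              # pods this controller owns
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: registry.example.com/catalog:latest  # placeholder image
        ports:
        - containerPort: 8080
```

Manual scaling would then be something like `oc scale dc/catalog --replicas=3`, and automatic scaling can be configured with a HorizontalPodAutoscaler tied to CPU usage.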
That means each microservice can scale independently of the others, and within a microservice, each tier can scale independently as well. A service discovery mechanism is also built into OpenShift, so you can talk directly to the Kubernetes service that exposes the pods; the different microservices running on OpenShift can talk to each other this way, and the microservices you want to expose to the outside world can be exposed via routes. So all the infrastructure needed to run and scale microservices is already built into OpenShift; that's how the platform supports microservices. In the next video, we will look at some example microservices. These microservices were written by my colleague Chakri; we will introduce what those microservices are, and we'll see how to build them and how to deploy and run them at scale.
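The discovery and exposure pieces mentioned above can be sketched as a Service plus a Route (hypothetical names again). The Service gives other microservices a stable, discoverable name inside the cluster; the Route exposes that Service to the outside world:

```yaml
# Hypothetical Service: other pods in the cluster can reach this
# microservice at the stable DNS name "catalog", regardless of which
# nodes its pods land on or how many replicas are running.
apiVersion: v1
kind: Service
metadata:
  name: catalog
spec:
  selector:
    app: catalog          # forwards traffic to pods with this label
  ports:
  - port: 8080
    targetPort: 8080
---
# Hypothetical Route: exposes the Service outside the cluster through
# the OpenShift router; a hostname is generated if none is given.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: catalog
spec:
  to:
    kind: Service
    name: catalog
```

Internal-only microservices would get just the Service and no Route, which is one simple way to keep private tiers, such as a database, off the public network.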