In the last video, we stood up an OpenShift cluster using IPI on AWS. Let us now explore this cluster. First, let's log on to the web console. When we created the cluster, the output logs from the setup gave us the web console URL, and I'm using that same URL here. We'll accept the self-signed certificate and move forward, and I'll type in the kubeadmin username and password that we got from the output of cluster creation. We are logged into the cluster.

This web console has an administrator interface and a developer interface. The administrator console includes everything needed for cluster management, while the developer console addresses the application view for anyone trying to deploy applications to this cluster. Since we are logged in with the kubeadmin user, the cluster shows a message at the top that we are logged in as a temporary administrative user. By clicking this link, we can set up authentication using a provider of our choice. In my case, I will set up htpasswd-based authentication. I have an htpasswd file created with a couple of users in it, and I'll just add it here. One of those users has the username admin. Let's go to role bindings and create a cluster-wide role binding to make this admin user a cluster administrator. Now I'll log out of the cluster. The login page now shows an option to log in using the htpasswd authentication mechanism, and I'll log in as the admin user.

By default, the administrator web console takes us to a dashboard that shows the health status of this cluster. We see that there are six nodes and all of them are healthy. Let's go to the Compute area, where the Nodes section shows us the same six nodes: three masters and three workers. Let us log into this cluster using the command line as well. If we run oc get nodes, we get the exact same output. OpenShift 4.x sets up a highly available cluster by default; hence, you can see three masters here.

Let us quickly understand what runs on the masters and the workers in this cluster. An OpenShift master is always based on Red Hat CoreOS as the operating system. As the host comes up, it runs CRI-O, which is the new and secure container engine. Note that we are not using the Docker container engine anymore. CRI-O is an implementation of the Kubernetes Container Runtime Interface specification, and it replaces Docker in OpenShift 4.x. The kubelet also runs as a systemd service on the host, so when the host is up, CRI-O and the kubelet are already there. There are a few other systemd services, like SSH, that run on every host. Now that we have a container engine and a kubelet, we can run pods on this host. Essential Kubernetes components such as etcd, the Kubernetes API server, the controller manager, and the scheduler all run as static pods on the masters. So how do we know this? Check for the pods with the host name as part of the pod name; those are the static pods running on the cluster. etcd, for example, runs on all three master hosts, indicated by the host name being included in the pod name. Next, control plane services like the SDN, OAuth, the web console, the OpenShift API server, and others run as regular pods on the master nodes. They are created and managed by their respective operators; we'll discuss operators in a different video. An OpenShift worker node can use either Red Hat CoreOS or Red Hat Enterprise Linux as the operating system. However, for a fully automated OpenShift install with IPI, the operating system is always Red Hat CoreOS.
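If you prefer doing the same thing from the command line instead of the web console, here is a rough sketch of the equivalent steps, following the standard htpasswd identity provider flow. The file name users.htpasswd, the secret name htpass-secret, the provider name htpasswd_provider, and the admin user's password are placeholders from my environment, so adjust them for yours.

```
# Create an htpasswd file with an admin user (bcrypt hashed)
htpasswd -c -B -b users.htpasswd admin '<password>'

# Store the file as a secret in the openshift-config namespace
oc create secret generic htpass-secret \
  --from-file=htpasswd=users.htpasswd -n openshift-config

# Point the cluster OAuth resource at that secret
oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
EOF

# The cluster-wide role binding: make the admin user a cluster administrator
oc adm policy add-cluster-role-to-user cluster-admin admin

# Three masters and three workers, same as the console view
oc get nodes

# Static pods carry the host name in the pod name; etcd, for example,
# runs one pod per master host
oc get pods -n openshift-etcd -o wide
```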
Worker nodes also run CRI-O, the kubelet, and other systemd services. Then there are a few platform services that run as pods on the workers; these include the registry, router, logging, monitoring, marketplace, et cetera. And then the worker nodes run the end-user applications as Kubernetes pods.

An OpenShift cluster gets set up with three masters as an HA cluster by default. These three masters are fronted by an API load balancer. The API servers listen on port 6443, so when we log into the API server using the command line interface, we use port 6443. Then there are worker nodes that run your applications; you can have as many workers as your workload demands. The masters and the kubelets running on all the nodes together form the control plane for your cluster. The OpenShift SDN enables node-to-node communication and application communication on the cluster. You have multiple software-defined network choices; currently, the OpenShift SDN is based on Open vSwitch. OpenShift ingress is implemented using OpenShift routers. Routers are pods running on some of the worker nodes. By default, these routers are implemented with HAProxy, but you have other choices too. These ingress router pods accept HTTP and HTTPS traffic, and an application load balancer fronts them. While it is currently not implemented by default, a common pattern is to segregate infrastructure workloads such as logging, monitoring, the registry, and the router onto their own specialized nodes called infrastructure worker nodes. When implemented this way, you can have specific worker nodes reserved for infrastructure workloads and others for the application workloads.

Now let's look at this cluster on the cloud provider, AWS in our case. Here are the six machines that got spun up: three of them are masters and three of them are worker nodes. Let's look at the Route 53 DNS records. This one is the DNS entry for the API server accessible internally within the cluster, this is our externally visible API server, and the *.apps entry is the application domain exposed by the router. Let's scroll down and look at the load balancers. We have three load balancers. This one is the internal API load balancer used within the cluster, this is the API load balancer exposed externally, and this one is the application load balancer that fronts the router pods. Let's look at the instances that this load balancer is fronting. These are the three worker nodes the application load balancer is fronting, and the router pods can come up on any of these three worker nodes. For the internal load balancer, let us look at the listeners. All the traffic coming in on port 6443 gets forwarded to this target group, and the targets for this target group are the three masters, which means that the API traffic is delivered to the masters by this load balancer. Let us also look at the security groups. One of these security groups is used by the masters and the other one is for the workers. Note that all of these are tied together by a single VPC ID, and this VPC is automatically created by the cluster installer. A route table is associated with the subnets, one per availability zone, and an internet gateway is created and associated with the VPC. Elastic IPs are created and associated with each of the NAT gateways, again one per availability zone.
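Before we wrap up, here is a quick sketch of how you could cross-check some of what we just saw in the AWS console from the oc command line instead. This assumes a stock IPI cluster, where the default IngressController creates the router-default service in the openshift-ingress namespace; adjust if your setup differs.

```
# The API endpoint we log in to includes port 6443
oc whoami --show-server

# The ingress router pods run in the openshift-ingress namespace;
# -o wide shows which worker nodes they landed on
oc get pods -n openshift-ingress -o wide

# The router-default service is of type LoadBalancer; on AWS this is
# what creates the application load balancer fronting the router pods
oc get svc router-default -n openshift-ingress
```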
So to summarize: in this video, we have explored the structure of the OpenShift cluster that our IPI installer created for us on AWS. We have understood what makes up an OpenShift cluster, how it is made highly available, and which infrastructure components the IPI installer spun up before setting up OpenShift, with IPI automatically creating all of that infrastructure for us. If we were to use UPI, we would have to set up all that infrastructure ourselves and then install OpenShift on top of it. So you can now appreciate the advantage of the IPI installer, which automatically sets up and manages the entire infrastructure in addition to OpenShift for us. I hope you liked this video. Thanks a lot for watching. I'll see you next time.