Hello, my name is Veer Muchandi. I work for Red Hat as a middleware specialist. I'm going to introduce you to OpenShift version 3 in three short videos. We will look at how OpenShift v3 works, and we'll also walk through a short demo with an example.

Let us start by looking at OpenShift version 3's technology stack. Most of you may already know that OpenShift version 3 is designed to run Docker containers. Docker provides a means of packaging applications in lightweight containers. These containers are smaller than virtual machines, have better performance, and are flexible enough to run in multiple environments. But while Docker defines a container format and builds and manages individual containers, you still need an orchestration tool to deploy and manage sets of those containers. Kubernetes is that tool: it orchestrates and manages Docker containers.

The base for all of this is RHEL, Red Hat Enterprise Linux. Red Hat Enterprise Linux Atomic Host is a variation of RHEL 7 that is optimized to run Linux containers in the Docker format, and it has been designed to take advantage of the powerful technology available in RHEL 7. RHEL Atomic Host uses SELinux to provide strong safeguards in multi-tenant environments where multiple containers run on the same host, and it also provides the ability to perform atomic upgrades and rollbacks, enabling quicker and easier maintenance with less downtime. Both RHEL 7 and RHEL Atomic Host can be used for installing OpenShift, and RHEL Atomic Host comes pre-installed with Docker and Kubernetes.

On top of this container-driven model and Kubernetes orchestration are the containerized services that OpenShift provides. The first is xPaaS: application servers on top of OpenShift, JBoss EAP for example, or integration on OpenShift using JBoss Fuse, or BPM Suite and BRMS, or mobile offerings.
All of these are xPaaS services that come on top of the PaaS. You also get a Docker registry, Docker Hub, where your Docker images reside and are registered, and a marketplace through which other vendors can provide Docker images for you to use. Cartridges are going to be replaced by those Docker images.

On top of all of this is the user experience layer. We provide an enhanced developer and administrative experience through, for example, a web console, a command line interface, or an IDE of your choice that provides tools to deploy and run applications on the OpenShift environment. All of these pieces put together are what you get in the technology stack for OpenShift version 3.

Let's look at what is different between OpenShift version 3 and the earlier versions of OpenShift, specifically OpenShift version 2. The new base OS is going to be RHEL 7 instead of RHEL 6.x. For example, the OpenShift version we are using right now runs on RHEL 6.6; with v3 we will be moving to RHEL 7. The containers are going to be Docker containers instead of gears. Gears used the same underlying technologies Docker uses today: they were still Linux containers, and they used SELinux and Linux control groups. However, the packaging format in Docker is different, and since that format has become something of an industry standard, we are moving toward the Docker-based model. The new orchestration engine replaces the OpenShift broker from version 2: Kubernetes, and specifically the Kubernetes master, will take the place of the broker. The new packaging model for technologies is Docker images in place of version 2's OpenShift cartridges. The new platform routing layer replaces node-based routing, and you get better services and a better developer experience.
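To keep the version 2 to version 3 replacements straight, here is a small sketch that captures the mapping just described as a plain Python lookup table. This is purely illustrative, not an official migration tool or API:

```python
# The v2 -> v3 replacements described above, as a simple lookup table.
# Keys are the OpenShift v2 concepts; values are their v3 counterparts.
v2_to_v3 = {
    "RHEL 6.x": "RHEL 7 / RHEL Atomic Host",   # new base OS
    "gears": "Docker containers",               # new container format
    "broker": "Kubernetes master",              # new orchestration engine
    "cartridges": "Docker images",              # new packaging model
    "node-based routing": "platform routing layer",  # new routing layer
}

print(v2_to_v3["gears"])  # Docker containers
```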
Now, there are a number of concepts you will be introduced to as we talk about OpenShift version 3, and I would like to explain those concepts using some figures. The first concept to understand is the pod. A pod runs Docker containers: a single pod gets an IP address, and you can run multiple related Docker containers inside each pod. The pod gets the IP address, while all the containers running inside the pod share that pod's port space. A pod comes up or goes down as a unit, so all the containers inside it start together and stop together with the pod. So you would put a set of related Docker containers inside a pod. In the example you're seeing here, the MySQL database and its administration tool, phpMyAdmin, are two different containers running within the same pod. The MySQL container exposes port 3306, the standard MySQL port, for connections to the database, and phpMyAdmin uses port 80. These two ports are within the same pod, and the pod itself gets an IP address.

When you think about what these pods look like at runtime, in terms of the hosts all the way at the bottom, a host can run multiple pods on top of it, and each pod contains one or more containers. So a single physical host runs one or more pods.

There is another concept called a service, which load balances a group of pods of the same type. Think of it as multiple instances of a single pod, all load balanced through the service. Each pod gets a label, and a service uses that label to select the pods.
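The pod just described can be sketched as a plain data structure. This is a minimal illustration in Python (a dict, not a real Kubernetes API object or manifest; the names and image tags are assumptions), showing one pod with two related containers that share the pod's single port space:

```python
# A toy model of the pod described above: one pod, one IP address,
# two related containers (MySQL + phpMyAdmin) sharing the pod's port space.
# All names, labels, and image tags here are illustrative assumptions.
pod = {
    "kind": "Pod",
    "metadata": {"name": "mysql-admin-pod", "labels": {"app": "db"}},
    "spec": {
        "containers": [
            {"name": "mysql", "image": "mysql",
             "ports": [{"containerPort": 3306}]},   # standard MySQL port
            {"name": "phpmyadmin", "image": "phpmyadmin",
             "ports": [{"containerPort": 80}]},     # phpMyAdmin web UI
        ]
    },
}

# Because both containers live in the same pod, they share one IP and one
# port space, so their container ports must not collide.
ports = [p["containerPort"]
         for c in pod["spec"]["containers"]
         for p in c["ports"]]
assert len(ports) == len(set(ports)), "containers in a pod must not reuse a port"
print(ports)  # [3306, 80]
```

The point of the sketch is the grouping: the two containers go up and down together with the pod, and anything else connecting to MySQL or phpMyAdmin reaches them through the pod's one IP address.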
In this case, web is the label: both of these pods are the same type of pod, they carry the label web, and the service groups them together and load balances across them. Replication controllers, another Kubernetes concept, ensure that n copies of a pod exist at all times. You define how many copies of a pod should exist, and the replication controller ensures that many copies exist at all times.

Now let's look at how all of this comes together. As we discussed before, the Kubernetes master takes the place of the broker, and the Kubernetes minions, now called nodes, take the place of the OpenShift v2 nodes; this is the application hosting infrastructure. etcd takes the place of MongoDB, and we already spoke about replication controllers. From the developer's point of view, the experience is similar to before: they can use a web console, a command line interface, or an IDE, and all communication, for creating a new application for example, goes to the OpenShift v3 master through the RESTful API. The master ends up spinning up pods on the nodes, and when there are multiple pods to spin up, they can land on any of the nodes. As we spoke about, they are load balanced by the service layer, which groups them together using the labels assigned to the pods, and the routing layer ensures they are load balanced. As you can see, a pod can contain one or more related containers, and pods can also talk to each other via the service layer.
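The two ideas above, a service selecting pods by label and a replication controller topping the pod count up to the desired number, can be sketched as a toy simulation. This is plain Python, not the Kubernetes API, and every name in it is an illustrative assumption:

```python
# Toy simulation of label-based service selection and a
# replication-controller-style reconcile loop. Not the real Kubernetes API.

def select_pods(pods, selector):
    """Return the pods whose labels match the service's label selector."""
    return [p for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

def reconcile(pods, label, desired):
    """Replication-controller idea: ensure `desired` copies of the pod exist,
    creating new pods with the same label until the count is met."""
    matching = select_pods(pods, {"app": label})
    while len(matching) < desired:
        new_pod = {"name": f"{label}-{len(matching)}", "labels": {"app": label}}
        pods.append(new_pod)
        matching.append(new_pod)
    return pods

# Start with one "web" pod; the controller tops it up to three copies,
# and the service then finds all three via the shared label.
pods = [{"name": "web-0", "labels": {"app": "web"}}]
pods = reconcile(pods, "web", 3)
backends = select_pods(pods, {"app": "web"})
print(len(backends))  # 3
```

The design point the sketch illustrates: the service never tracks individual pods by name, only by label, which is why replicas created later by the controller are picked up for load balancing automatically.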