Good morning, everybody, and welcome to my talk. My name is Janki Chatbar, and I work as a software engineer at Red Hat. I contribute to OpenStack and OpenDaylight; those are open source projects. OpenStack is what I'll be talking about in detail, and OpenDaylight is a software-defined networking controller. That's my Twitter handle and my email ID.

Running OpenStack in containers, that's the topic of my talk today. What it means is that all the OpenStack services, the services we'll be looking at later, run inside containers and not as systemd processes on the host itself. So how many of you here know what OpenStack is, or have heard of it, deployed it, contributed to it, used it? OK.

Its official page says it's a cloud operating system. A cloud is a collection of servers; an operating system is software that manages the resources on those servers. OpenStack is governed by the OpenStack Foundation. The source code is available on GitHub, bugs are filed on Launchpad, and code review happens on Gerrit. Its 17th release, named Queens, came out on 28 February. The next release is named Rocky; the releases are named alphabetically.

OpenStack falls into the cloud computing model of providing infrastructure as a service. The other two models are platform as a service and software as a service. We'll see what these cloud computing models mean. They are classified based on the resources that the service provider has to manage. To get a better understanding, we'll try an analogy with eating a pizza.

On-premise infrastructure is where most of the industry is right now. Everybody has their own servers, and they manage their own hardware, operating systems, virtualization, networking, storage, and applications. It's like making a pizza at home, right from kneading the dough to baking it, to setting the table, to cleaning the dishes later on. Infrastructure as a service is when you get a ready-made infrastructure: you get servers ready and deployed.
The networking among the servers is taken care of by the service provider. You just need to install the OS of your choice and run an application on it. It's similar to buying a frozen pizza and baking it at home. Examples are OpenStack, Microsoft Azure, and AWS (Amazon Web Services).

The next computing model is platform as a service, where you just need to take care of your application and data; the rest of the stack is managed by the service provider. This is like getting a pizza delivered at home. You just need to worry about the table and the plates. Examples are Google App Engine and OpenShift.

The third model is software as a service, where you do not have to manage anything at all. You just use the whole stack for whatever purpose you need, and the whole of the stack is managed by the service provider. This is like dining out: you just go to the restaurant, order a pizza, enjoy, and come back. An example is Gmail. Google takes care of hosting the Gmail servers and ensuring that the mails get delivered; you just need to write a mail and send it.

OpenStack is a collection of projects. These are a few of them. For example, Nova is the compute service, which takes care of spawning a VM. Ironic is the bare metal service, which manages the bare metal servers connected to the cloud. Swift and Cinder are for storage. Glance stores operating system images, which are then used while spawning the VMs. Neutron is for networking. Heat is an orchestrator. Horizon provides the UI. Kolla provides container images for the OpenStack services; we'll see this again later in much detail. And TripleO is the installer. This is what I'll be talking about.

So TripleO is an OpenStack installer. The other two are DevStack and Packstack. TripleO means "OpenStack on top of OpenStack". It first deploys a cloud called the undercloud. The undercloud then goes on and deploys another OpenStack cloud, which is called the overcloud.
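As a rough sketch, those two stages correspond to two tripleoclient commands, run on the undercloud host. Exact flags vary by release, so treat these as illustrative rather than a complete recipe:

```shell
# Stage 1: install the undercloud on the current host (reads undercloud.conf)
openstack undercloud install

# Stage 2: from the undercloud, deploy the overcloud using the default
# tripleo-heat-templates, plus any extra environment files passed with -e
openstack overcloud deploy --templates
```

In practice the deploy command takes many additional environment files describing the node roles and network layout.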
And this overcloud is the OpenStack cloud that is actually accessible to the users; that is where the end user spawns VMs. TripleO provides a CLI and a UI for deployment. And from the Pike release, the one before Queens, six or seven months back, TripleO also has support for containerized deployment of the overcloud.

So this is the deployment architecture. There's the undercloud, which has services like Nova, Neutron, Swift, Heat, and Ironic. These are the few basic necessary services; the rest you can add as and when your use case needs them. The undercloud then goes on and deploys the overcloud. The overcloud generally contains two nodes. Nodes, when I say it here, generally means hardware servers: the overcloud controller node and the overcloud compute node. Generally, these are the names that are given. Two nodes is a very common deployment architecture. If you want HA, you can also specify that you need the controller in HA, and you can increase the number of compute nodes that you need.

TripleO itself, again, is a collection of different projects that work together to provide a fully functional, deployed cloud. There's TripleO Client, which provides the CLI for deployment, update, and upgrade. TripleO Common is where the deployment logic is actually implemented; it serves as a server to the TripleO Client. TripleO Heat Templates is a collection of YAML files that holds information about the deployment architecture: how the services will be deployed, and which service will be deployed on which node. Like I said before, if you want, say, an HA deployment, there is a dedicated TripleO Heat Templates YAML file in the source code which says: please deploy my controller in HA mode. Puppet TripleO is a collection of Puppet manifests that describe how the services will be configured on the nodes, on the compute and controller overcloud nodes.
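The node counts and HA layout, for instance, are just heat parameters. A custom environment file passed to the deploy command with `-e` might look like this (the file name and counts here are illustrative, though `ControllerCount` and `ComputeCount` are the parameters TripleO uses):

```yaml
# ha-counts.yaml -- illustrative environment file for the overcloud deploy
parameter_defaults:
  ControllerCount: 3   # three controllers give an HA control plane
  ComputeCount: 2      # scale compute nodes up as the use case needs
```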
And TripleO Upgrades manages the upgrade of clouds from one major release to the next major release. Apart from TripleO, other OpenStack projects also help. Ironic goes and provisions the overcloud node servers. Heat creates the resource stacks for the cloud. Ansible configures the servers and the services. Puppet again configures the services inside. Then there are Mistral and Zaqar: Mistral divides an action into different tasks and then executes them, and Zaqar is the messaging service between OpenStack components. Kolla provides the Dockerfiles and Docker images for all OpenStack services to be containerized.

So what we have established so far is: there's an undercloud OpenStack cloud, which goes on and deploys the overcloud. The overcloud generally has two nodes, a controller node and a compute node. The controller node will have Nova, Neutron, Glance, Keystone, Horizon, Heat, Ceilometer, any service that you want. And the overcloud compute node generally has the nova-compute service, which is what spawns the VMs. The deployment architecture is defined in the heat templates. Ironic provisions the controller and compute nodes. Ansible manages the steps and configures the nodes. Heat creates the necessary resources. And Puppet configures the individual services running on those nodes.

What are containers? So far what we have seen is that the services run as systemd processes on the controller and compute nodes. Now we want the same services to run as containers on the controller and the compute nodes. Long back, there was a time when dedicated hardware was used for a single application. The problem there was under-utilization of the resources. So we moved to using VMs. But VMs are quite heavyweight: each needs its own OS, and they take space. So we moved to using containers. These are lightweight, they share the host OS itself, and they don't need much time to boot up.
With all these advantages of containers, OpenStack also moved to containerized deployment. A few terminologies with regard to containers. The first one is the Dockerfile. A Dockerfile is a file with a set of commands that says what my Docker container will be: what service it will run, when it will run, what port it will run on, what the base OS for the container will be. A Docker image is, you could say, a ready-made, turned-off instance of a container; a running instance of a Docker image is a Docker container. And Docker Hub is a registry. A registry is a place where you can push all your Docker images; it stores images for different OSes, different applications, different services. That public registry is called Docker Hub.

For OpenStack, there is a dedicated project called Kolla, which provides Dockerfiles for all the services. The Dockerfiles are locally editable and can be locally built. The Docker images are built by the Kolla CI and pushed to Docker Hub. These images can also be customized per OS: you can specify which host OS you want the Docker image for, and you can specify how to install the packages and services that run inside the container.

Paunch is another project in OpenStack that manages the lifecycle of a container. Paunch takes the heat template, a YAML file, as input and starts a Docker container based on the definition given in the heat template. All the Docker containers use host networking, which means they listen on the same ports as the host itself. They are configured to restart whenever the service fails: whenever a service running inside a container fails, the container automatically restarts. All of this is prescribed in the heat template; there is a specific docker_config section in the TripleO heat templates that specifies all these details. I have a small demo.
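To give an idea of what that looks like, here is a simplified, illustrative sketch of a docker_config entry. The keys follow the Paunch container-definition style, but the image name and volume path are made up for this example:

```yaml
docker_config:
  step_4:
    nova_api:
      image: 192.168.24.1:8787/kolla/centos-binary-nova-api:latest  # illustrative
      net: host          # host networking: the container listens on the host's ports
      restart: always    # restart the container whenever the service inside fails
      privileged: false
      volumes:
        - /var/lib/config-data/nova/etc/nova:/etc/nova:ro  # illustrative path
```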
So here I'm already inside the undercloud, logged in via SSH as the stack user. I sourced the stackrc file, which has the username, password, and all the credentials needed to access the undercloud. As you can see, there are many OpenStack services. Some of them are enabled because you actually want that service to run on the undercloud node; the others are disabled because you basically do not need them. There's a specific service, the nova-api service, which takes care of spawning VMs on the node. The service is running here because this is the undercloud node.

openstack server list is one of the commands that will list all the VMs that are running right now. As you can see here, I have two compute nodes and one controller node; these three nodes are the overcloud, actually. And now I'm logging in via SSH to the overcloud controller node; you can see the IP from there. On the controller node, when you list the systemctl services and grep for OpenStack, you'll see all of these are disabled, because this is a containerized deployment; we are not running the services under systemctl. Even the openstack-nova-api service is disabled. Instead, these are the many OpenStack service containers that are running on the overcloud node. See, the nova-api service is running as a container here, not as a systemd process like we saw on the undercloud earlier.

So what have we established so far? Again, the undercloud is the same. The overcloud controller and compute nodes are the same. All the projects are the same. The heat templates define the deployment architecture, the nature of each service, and what service will run on what node. Ironic provisions the compute and controller nodes. Ansible again configures these nodes. Heat creates the necessary resources. Puppet configures the individual services inside. Another utility called Paunch was introduced, which manages the containers.
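As a rough illustration of that idea, here is a minimal Python sketch of a Paunch-style manager. This is not the real Paunch API; the function name and the config dict are invented for the example. It turns a docker_config-like definition into docker-run-style commands:

```python
def start_step(config, step):
    """Build docker-run-style commands for every container defined at `step`."""
    commands = []
    for name, spec in config.get(step, {}).items():
        cmd = ["docker", "run", "--detach", "--name", name]
        if spec.get("net") == "host":      # host networking, as TripleO uses
            cmd += ["--net", "host"]
        if "restart" in spec:              # restart policy for failing services
            cmd += ["--restart", spec["restart"]]
        cmd.append(spec["image"])
        commands.append(cmd)
    return commands

# A docker_config-like definition, as the heat templates would provide it:
config = {
    "step_4": {
        "nova_api": {
            "image": "kolla/centos-binary-nova-api",  # illustrative image name
            "net": "host",
            "restart": "always",
        },
    },
}
print(start_step(config, "step_4"))
```

The real tool does much more (idempotency, labels, cleanup of stale containers), but the shape is the same: YAML definition in, container lifecycle actions out.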
And the individual OpenStack services are running inside these containers. So how is Paunch beneficial here? First of all, it takes YAML files as input, so replacing any YAML file becomes very easy. Second, the containers are managed statelessly. All the containers are given a particular tag; there are a few tags defined, and based on the tags, the lifecycle of the containers is managed. For example, if there is a tag that says validation, then during the validation step of the deployment, all the containers with that tag would be started.

I'll leave it as an open question for all of us to think about: how easy would it be to upgrade from one OpenStack major version to another when the services are running in containers, and how easy would it be to move from development to production? Questions? Yes.

"Thank you for your presentation. You said that if you run OpenStack in containers, you can easily upgrade, right? So how long is the downtime? Do you have any downtime at all?"

There would be a downtime of about five minutes, and five minutes is a very high limit that I'm giving. Again, it depends on how fast your network connectivity is and how fast your deployment is able to fetch the Docker images from your registry. If you want to make it faster still, TripleO also provides a local registry. You can push your images to the local registry before the deployment, because all of this happens at run time: during deployment, when an image is pulled, it is then pulled from the local registry and not from Docker Hub, which takes time. Does anyone have any other questions? Yeah?

"Is there any rollback solution for a failure during the upgrade process? What happens if the rollback process itself fails?"
"Is there any rollback solution? I mean, is there any backup?"

Backup, OK, rollback. So, what happens whenever an upgrade fails. Let me repeat the question for everyone: basically he's asking what happens if the upgrade fails. If the upgrade fails, your cloud is simply not upgraded; you're still on the same version, so your original cloud is still running. It's not gone anywhere. There's no rollback, because the upgrade does not happen in steps that leave you halfway; it happens as one complete process. So if the process stops or fails somewhere, you're still at the original state. You have not moved ahead, and you still have your functional, running cloud as it is, for use.

"We are talking about the database. If we upgrade the database, MySQL, how will we roll back the data in the database?"

Right. So that also happens in steps. The upgrade process is all defined in the heat templates: every service has a heat template, and it has a section where the upgrade tasks are defined. It's not like you started an upgrade, MySQL got upgraded, and then it failed; if the cloud is going to be upgraded, everything happens as a single unit.

So, can anybody guess why the architecture is OpenStack on top of OpenStack, why you need two clouds, why you need OpenStack to create another OpenStack cloud? How this architecture came about is: a cloud basically needs servers, the compute resources, storage, and networking. So if you are building a cloud, you will need something to manage your compute, your networking, and your storage. How this project came into the picture was that people thought: why not use the existing services that we know very well, and use those services to deploy those same services again?
So we are using OpenStack's Nova, Neutron, and Cinder and Swift, because these are the main compute, networking, and storage components, because we know them very well, and because that is what we are going to use in the deployment. So that is how the concept of OpenStack on top of OpenStack came about. Any more questions?

"I know that Nova and Neutron already support rolling upgrades in OpenStack, so we can apply a rolling upgrade to the cloud. Is there any solution for when I don't want any downtime?"

I guess there would be a downtime. We can minimize the downtime, but during an upgrade, every service has to upgrade. Even Neutron has to upgrade, and while Neutron is upgrading, the service will be off. When Neutron is off, there is no networking: you cannot connect, you cannot talk from one server to the other. So downtime is, at the moment, expected. It could be as low as five minutes, like I said before. And you can still reduce it with containers and with having your images pre-uploaded to the local registry. What was your other question? That answers it, right? OK. Any more? OK. Thank you.