Good evening, everyone. Today we will be talking about the future of containers in OpenStack. OpenStack has evolved a lot to support containers in its ecosystem, and several projects are now trying to support containers the same way OpenStack supports VMs today. This is the agenda for our presentation: we'll talk about the container-related projects in OpenStack and how they are trying to support containers in the OpenStack ecosystem. Then we'll talk about Zun in detail, because we want to focus more on Zun, as we are all core developers on Zun. We'll cover the architecture of Zun and the OpenStack services Zun integrates with; we are trying to integrate with all the OpenStack services to take advantage of those projects. Then the Zun features, and finally we'll show a demo of running an application on the OpenStack ecosystem using Zun. OpenStack is, overall, an integration project that tries to bring together the technologies available for cloud computing, and of course the same is true for containers. Containers emerged many years back, and in 2014 OpenStack recognized the importance of supporting containers the same way it supports VMs today. Since then there have been several container-related projects in OpenStack: some use containers to make OpenStack operations simpler, while others let users run containers on the OpenStack ecosystem. Here are a few of the projects that help you run containers on OpenStack. The first is Nova-Docker. Nova-Docker started as a driver inside Nova: it was simply a Docker hypervisor driver for Nova, so users could run containers the same way they run VMs with Nova. You just say nova boot and give the image for your container.
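As a rough sketch of that workflow (the image and flavor names here are illustrative, not from the talk), booting a container through the Nova-Docker driver looked like an ordinary Nova boot:

```shell
# With the Nova-Docker hypervisor driver configured on the compute host,
# a Docker image registered in Glance could be booted like any VM
# (image and flavor names below are hypothetical):
nova boot --image nginx --flavor m1.small my-nginx-container

# The resulting "instance" is actually a Docker container on the
# compute host, listed alongside ordinary VMs:
nova list
```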
It will run the containers at the same level as your VMs. But VMs and containers are not the same, so the driver was forked out of Nova into its own project; that is how Nova-Docker started as a separate project. Eventually, because containers do not fit the lifecycle of a VM, the project was retired this year. After that, several other projects tried to let you run containers on the OpenStack ecosystem. The second project is Heat Docker. Heat Docker provides a Heat resource for containers, so it lets you run containers on OpenStack just by writing a Heat template. For example, say you want an Nginx server: you provide the details of that image in your Heat template and specify where your Docker daemon is running, and Heat will run your application on whatever infrastructure you point it at. The advantage of Heat Docker is that you have the option of running your containers inside a VM or on a bare metal node. The next project is Magnum. It was founded in 2014 and aimed to provide support for containers, trying to enable containers as a first-class resource in OpenStack. It started by supporting Kubernetes, and today it supports various other container orchestration engines (COEs) such as Kubernetes, Swarm, and Mesos. What Magnum does is help you run a Kubernetes, Swarm, or Mesos cluster on the OpenStack ecosystem, and it supports both VMs and bare metal. For example, you can use Magnum to run your Kubernetes cluster on OpenStack VMs or on bare metal, and similarly for Swarm and Mesos. Initially Magnum aimed at managing containers themselves on the OpenStack ecosystem, but it later went in a different direction, and today it manages only the infrastructure for containers.
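A hedged sketch of deploying such a COE cluster with the Magnum client (template name, keypair, image, and flavor below are placeholders, written from memory of the CLI):

```shell
# Define a cluster template describing the COE and the infrastructure
# it should run on (all names here are hypothetical):
openstack coe cluster template create k8s-template \
  --coe kubernetes \
  --image fedora-atomic \
  --keypair mykey \
  --external-network public \
  --flavor m1.medium

# Create a Kubernetes cluster from that template; Magnum provisions
# the Nova VMs (or bare metal nodes) and installs Kubernetes on them:
openstack coe cluster create k8s-cluster \
  --cluster-template k8s-template \
  --node-count 2
```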
In other words, after you deploy your Kubernetes cluster or any other COE cluster, you use the native CLIs to run your containers on that infrastructure, backed by different technologies. The next project is Murano. Murano is the application catalog for OpenStack. Murano supports various applications, and Kubernetes is one of them; it has ready-made applications, such as Docker applications. You can go to the Murano UI and see these applications, and Murano lets you run any of them with just a click. It is very simple: you don't have to worry about all the configuration at the back end, because Murano handles the configuration by itself and gives you a production-ready application from the Murano UI. Any application Murano has is just a click away. The projects I talked about so far let you run containers on OpenStack; there are also projects that use Docker to make OpenStack operations simpler. For example, Kolla. Kolla provides production-ready container images that you can use to run your OpenStack cluster on any kind of infrastructure. If you want to run, say, Magnum or any other OpenStack service, Kolla has container images for it, and you get a production-ready OpenStack environment. It really eases the operation of an OpenStack cluster: it simplifies upgrades as well as scaling up and scaling down. The next project is Kuryr. Kuryr provides networking for container resources: it uses the Neutron API to give containers the same networking resources that are available to VMs. For example, if you set the network driver to Kuryr in Docker, Kuryr will talk to Neutron to provide those resources.
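With the kuryr-libnetwork driver installed alongside Docker, that flow looks roughly like this (network name and subnet are placeholders):

```shell
# Create a Docker network through the Kuryr driver; Kuryr translates
# this into a Neutron network and subnet behind the scenes:
docker network create --driver kuryr --ipam-driver kuryr \
  --subnet 10.10.0.0/24 --gateway 10.10.0.1 kuryr-net

# Containers started on this network get Neutron ports, so they are
# reachable from Nova VMs attached to the same Neutron network:
docker run --net kuryr-net -itd nginx
```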
With that, you can have your containers and VMs on the same network, and your container can talk to the VMs in the OpenStack ecosystem. Container-to-VM, container-to-container, and VM-to-VM communication are all possible using the Kuryr project. Finally, I would like to talk about Zun. At the Austin Summit, the Magnum team met and decided not to support container endpoints anymore, because they wanted to focus more on the infrastructure. So they decided to fork a new project, and that is how Zun evolved. It is now called the container service of OpenStack. It provides APIs to manage your containers on OpenStack, with many of the same operations you find in other container technologies, so you can use the Zun APIs to run container applications on OpenStack infrastructure. We aim to support various container runtimes. We started with Docker, which has very good support in Zun today, and we also aim to support other runtimes, such as rkt (Rocket), Clear Containers, et cetera. We also aim to integrate with Kubernetes and Docker Swarm, providing a set of APIs you can use to run your Kubernetes applications on OpenStack infrastructure. This is the architecture of Zun. Zun mainly has two services, the Zun API and the Zun compute agent. When you say "create a container for me," the Zun API forwards that request to a Zun compute agent. We support multiple compute hosts in Zun today, and a simple scheduler in Zun picks the compute host where your container will run. After that, Zun talks to either Glance or Docker Hub to download your image. Once the image is downloaded on the compute host, Zun talks to Docker to create your container. Docker then talks to Kuryr for the networking resources, and Kuryr sends the request to Neutron to provide them.
After this, you have a running container on the same network as your OpenStack Nova VMs, so communication between a container and a VM is possible through Kuryr. This slide is the detailed version of what I just explained: the Zun API talks to Zun compute after the scheduler picks the compute host where the container will run. Zun compute asks Neutron for the list of available networks, we pick one, and we ask Kuryr to create a matching network in Docker. Finally, we create the container on the network Kuryr created, and once the container is created we ask Neutron to do the port binding for us. After that, you have a full-fledged application running on the OpenStack ecosystem. Now Namrata will explain how to use the Zun CLI to run your applications on OpenStack. We will see how to run containers through Zun. There are two containers on the same network: one is WordPress, the web server container, and the other is MySQL, the database container. The WordPress container needs to interact with the MySQL container, and they interact via IP address. In the first command, zun run sets the environment variables and uses the MySQL image. In the second command, for the WordPress web server, we look up the IP address of the other container we want to access, then set the environment variables and the image. The two containers can then interact using the IP address we fetched. Next, orchestration with Heat. This does the same thing we just did, but with a Heat template. Here we define two resources: one is the database, MySQL, and the other is WordPress. The resource type is the Zun container.
We set the properties: the MySQL image and its environment variables. For the WordPress web server container, we set an environment variable that fetches the IP address of the container we want to communicate with, so it can reach that container. Now we will see how Zun integrates with other OpenStack services. First, Keystone: Keystone provides authentication support for Zun, so to access the Zun API, users must be registered with Keystone. Zun uses Glance to manage container images. We can use Heat templates to run our applications. The OpenStack client supports Zun commands. There is a Horizon UI plugin for Zun, so you can use the browser to manage your applications. Zun supports Neutron via Kuryr, which enables communication between containers, and between containers and Nova VMs. We can also attach storage to Zun containers. This diagram shows the integration with the other OpenStack services we have seen: Keystone for authentication, images through Glance, storage for Zun, and networking through Neutron. We are planning to integrate with Magnum so we can orchestrate containers on different COEs: Kubernetes, Docker Swarm, and so on. Zun features. This is the list of features Zun provides. First, Zun provides a container API so that users can interact with containers. Container host management: Zun manages a set of container hosts and their resources. Like other OpenStack projects, container resources are isolated by Keystone tenants. Neutron integration, as we have seen earlier. Zun supports multiple container image repositories, so we can fetch images from Docker Hub or Glance. It is integrated with Heat, so we can use Heat templates to orchestrate containers.
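Tying back to the WordPress and MySQL example from earlier, here is a hedged sketch of both the CLI and Heat approaches. Image tags, environment variable names, the addresses lookup, and the OS::Zun::Container resource type are written from memory and may differ in your deployment:

```shell
# --- CLI approach: two containers on the same network ---
# Start the database container with its credentials:
zun run --name db \
  --environment MYSQL_ROOT_PASSWORD=rootpass \
  --environment MYSQL_DATABASE=wordpress \
  mysql:5.7

# Look up the database container's IP, then start WordPress pointing at it:
DB_IP=$(zun show db -f value -c addresses)
zun run --name wordpress \
  --environment WORDPRESS_DB_HOST="$DB_IP" \
  --environment WORDPRESS_DB_PASSWORD=rootpass \
  wordpress

# --- Heat approach: the same topology as a template ---
cat > wordpress.yaml <<'EOF'
heat_template_version: 2017-02-24
resources:
  db:
    type: OS::Zun::Container
    properties:
      image: mysql:5.7
      environment:
        MYSQL_ROOT_PASSWORD: rootpass
        MYSQL_DATABASE: wordpress
  wordpress:
    type: OS::Zun::Container
    properties:
      image: wordpress
      environment:
        WORDPRESS_DB_HOST: { get_attr: [db, addresses] }
        WORDPRESS_DB_PASSWORD: rootpass
EOF
openstack stack create -t wordpress.yaml wordpress-stack
```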
Horizon integration: Zun has a Horizon UI plugin, so we can use the browser to manage applications. OpenStack client integration: we support Zun commands through the OpenStack client. Now ShooSan will demonstrate a presentation on a VM, with its content served by an Nginx server running inside a Zun container. Next. No, it's not working. OK. You are seeing the MATE desktop of an Ubuntu VM in Horizon's console view. This VM has an HTML-based presentation player powered by Libio.js. It contains static HTML, stylesheet, and JavaScript files, stored locally on the VM. The presentation player is customized to load its content from an external Markdown file. The Markdown content should be served over HTTP, but it has not been provided yet, so the player cannot play the presentation. This VM is on the private network of the demo project and has IP address 10.0.0.10. So let's provide the presentation content from a container inside the same private network. You are now seeing the Zun UI plugged into Horizon. This panel is added to the project menu for users. Let me create a container to serve the presentation content; its name identifies it as the presentation content provider. In this case we use an Nginx container image, the Alpine-based Nginx from Docker Hub. We set the command to bash, and the container will start after it is created. We can also specify the number of CPUs, memory size, working directory, and environment variables. Interactive mode enables a TTY and standard input/output, and it also enables access from the web console via WebSockets. We specify labels for the container and click Create. The create-container request is accepted successfully. The Zun UI has actions for almost all container operations: stats, top, reboot, pause and unpause, execute command, send kill signal, and delete container. The container will be created and started soon. Next, we set the presentation content using the console: create the web root directory, create the content file in Markdown using the printf command, and then start Nginx. OK, Nginx has started.
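The console steps just performed can be sketched like this (the web root path and the Markdown snippet are illustrative, not taken from the demo):

```shell
# Inside the container's web console: create the web root and write the
# presentation content as a Markdown file:
mkdir -p /usr/share/nginx/html
printf '# Containers in OpenStack\n\n- Zun demo slide\n' \
  > /usr/share/nginx/html/slides.md

# Start Nginx in the foreground so the container keeps serving:
nginx -g 'daemon off;' &
```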
To confirm the network settings for this container, change the tab to Overview. For Nginx, ports 443 and 80 are exposed, and we can see this container has IP address 10.0.0.8. Please remember that address, 10.0.0.8. In the Network panel, we can see ports in the private subnet for the router (10.0.0.1), DHCP (10.0.0.2), and the Ubuntu VM (10.0.0.10). After reloading this panel, a new port for the container appears. Its IP address, 10.0.0.8, is the same one shown in the Zun UI. This port was created in Neutron by the Kuryr driver. To allow access to this port via HTTP, a rule has already been added to the default security group for this demo project; ICMP is also allowed, so we can confirm networking by accessing Nginx on the container from the VM. Let me ping between the VM and the container in both directions. The container has IP address 10.0.0.8 and the VM has 10.0.0.10. First, ping the VM from the container using the web console: ping 10.0.0.10 five times. We get responses from the VM. Next, ping the container from the VM using the terminal: ping 10.0.0.8. We get responses from the container. So now we have confirmed networking between VM and container via ICMP. Before accessing the container from the VM via HTTP, let's watch the access log for Nginx on the container. Switch the view to the container's console and use the tail command to watch the Nginx access log; any new HTTP access will show up here. Now let's change to the VM. It's time to access the container from the VM and load the Markdown content for this presentation player. We load the content from the container at 10.0.0.8 and play the presentation. OK, it's accessible. Finally, the presentation player got its content from the container, and we can confirm the HTTP access from the VM (10.0.0.10) in the Nginx access log. As you see, it works with Neutron and Kuryr for container networking, and, like VMs, we can operate containers provided by Zun. That's all for the demo. Thank you for listening. OK, thank you, everyone. We are done with the presentation. If you have any questions, feel free to ask us.
We do have time for a Q&A round. Also, tomorrow we have an operators' feedback session for Zun, so if you have any use case or any kind of question, please feel free to come and join us. We'll be happy to answer any kind of questions. Thank you, everyone.