Hi everyone, sorry for the late start. This is the Zun project update. Let me start with a brief introduction of us. My name is Hongbin Lu from Huawei, and I'm the PTL of this project.

I'm Madhuri Kumari from Intel. I work as a core reviewer on this project.

I'm Shu Muto from NEC, implementing the Zun UI. Thank you.

So this is our agenda for today. We will have a general introduction of containers on OpenStack, then I will talk about the basics of the Zun project, its internals, and other things, and then we will show a demo; Madhuri will take that part. Okay, thank you.

Yeah, so, containers on OpenStack. Container evolution in OpenStack started in the year 2014, when the OpenStack team felt the importance of supporting containers the same way they support VMs on OpenStack. Since then many projects have evolved: several projects are trying to run containers on OpenStack, and some of them use containers to make OpenStack operations simpler. So we have different kinds of infrastructure where we can run our containers on OpenStack, and this slide shows how we can run containers on OpenStack.
By that I mean at what level our containers can run. For example, in this diagram we are running our containers at the same level where we run our VMs. That means we have a compute node where a container runtime is running, and we can host containers directly on that compute host. We can also run a hypervisor for VMs on that same compute node, so VMs run in parallel with the containers on any compute node. This scenario is what Zun does today: we run our containers directly on a compute host.

The second deployment scenario shows containers running inside VMs. Here, our Nova VMs run the container runtime tool that manages the containers. On the compute host we run the hypervisor and launch VMs on it, and inside each VM the container runtime hosts the containers. This is basically what Murano does today: Murano launches Nova VMs as host nodes and then runs applications inside those VMs.

The third deployment scenario runs COEs on Nova VMs. We have a compute host where we run the hypervisor and launch some VMs, and on those VMs we run a container orchestration engine, which can be Kubernetes, Mesos, and so on, to manage the container applications. We can have multiple VMs making up the COE cluster; in Kubernetes, for example, we can have a master node and worker nodes, or any combination that suits the customer's use case. Then we run containers inside these VMs, backed by the different COEs. This is what Magnum does: Magnum installs Kubernetes on Nova VMs and then runs container applications on those VMs.

So this basically shows the levels at which we can run containers on
OpenStack. Next: what projects are needed to support running containers on OpenStack? This slide shows that we need a COE or a container runtime tool, maybe Docker, Kubernetes, or any other tool, that can run on either a VM or a bare metal node, and then we need a project that provides the API to run container applications on those compute hosts. Zun provides you the API to run your containers on those compute hosts.

After we run our containers, there are several things our containers need. For example, Keystone is used for authentication, of the Zun API or of any other project that needs authentication. Then we need an image for our container, so we can use either Docker Hub or Glance; today Zun provides support for managing container images as well. We can also provide container volumes, which can be done using Cinder and Fuxi. Then we need container network resources, because we might want communication between two containers, or between a container and a VM. That is done by Kuryr today, and Kuryr leverages the Neutron API to manage all the network resources for containers. Finally, we could have a monitoring tool for our containers, like Ceilometer, but today we don't know yet which project is best suited for monitoring containers on OpenStack. So this gives a general picture of all the projects we can use to enable containers on OpenStack.

The second topic is Zun basics. I'll start with an introduction of what Zun is, and then go into a bit of detail about which projects we have integrated with, how the Zun API looks, and what the features of Zun are.
So Zun is an OpenStack project that manages containers on OpenStack infrastructure. It provides you the API to run your containers, today on bare metal nodes, backed by the Docker runtime. We also integrate with other OpenStack services: Keystone for authentication; Neutron and Kuryr for providing the network for our containers; Glance for managing the images; Horizon, where we have a UI plugin for Zun; and Heat for orchestration of containers. There is a Zun resource in Heat which you can use to run complex applications composed of multiple containers running your microservices. And we have support for the OpenStack client as well.

Nova we do not support today, but in the future we may want to run our containers on VMs too, so we would support Nova for that. For the Placement API: today we have a very simple scheduler in Zun which we use to place our containers on a compute host based on some filters. We don't want to maintain that long term, so after the Placement API is ready we will use it for scheduling our containers onto different hosts. For telemetry, we are not sure yet which project we will use, and Swift we can use to store some metadata about containers. So these are the lined-up projects we may want to integrate with Zun in the future.

This example shows how you run containers using Zun on OpenStack infrastructure. The first diagram shows a Neutron network where you have an application, let's say one that needs to run a web server and a DB. Before containers and Zun, we used to launch our web server on one Nova VM and the DB on another VM, but that is a very resource-intensive way to do it.
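The "very simple scheduler based on some filters" mentioned above can be sketched in a few lines of Python. This is a minimal illustration of the filter-scheduler idea, not Zun's actual code; the host attributes and filter functions here are hypothetical.

```python
# Minimal sketch of a filter scheduler: keep only the hosts that pass
# every filter, then pick the first survivor. Illustrative only.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpus: float
    free_ram_mb: int

def cpu_filter(host, request):
    # Keep hosts with enough free CPU for the requested container.
    return host.free_cpus >= request["cpu"]

def ram_filter(host, request):
    # Keep hosts with enough free RAM for the requested container.
    return host.free_ram_mb >= request["memory_mb"]

def schedule(hosts, request, filters=(cpu_filter, ram_filter)):
    """Return the first host that passes all filters, or None."""
    for host in hosts:
        if all(f(host, request) for f in filters):
            return host
    return None

hosts = [Host("node1", 0.5, 512), Host("node2", 4.0, 8192)]
chosen = schedule(hosts, {"cpu": 1.0, "memory_mb": 1024})
print(chosen.name)  # node2 is the only host with enough resources
```

The Placement API mentioned in the talk would replace exactly this kind of hand-rolled host selection with a shared resource-tracking service.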
So we can run these lightweight services inside containers. Today, with Zun, we can run these microservices inside containers. In this example we just create two containers: one runs the web server and another runs the DB. Or we can keep the DB in a VM, because maybe it needs lots of storage; we can make these kinds of combinations based on the use case. In this diagram we have our web server running in a container, which is a very lightweight service, so we don't need to run it inside a VM, and we are running our DB in a Nova VM. Because we support Kuryr, both the VM and the container are on the same Neutron network, so communication between the web server and the DB is possible over the Kuryr network.

This is the list of Zun APIs we support. We support all the CRUD operations for containers: you can create, delete, list, and update containers. Beyond that, we support various other APIs: you can retrieve the logs of a container, execute a command inside a container, and even attach to a container and run commands inside it in interactive mode. There are other APIs as well that we have not listed here; these are a few of the APIs that are really important for containers.

This example shows how you can use the Zun CLI to run your containers. First we search for the image, like `docker search cirros`, and then `zun run cirros` with a command. This command will create a container and then start it. In this example we are pinging Google four times, so after the container is created you can look at the logs and see that our container has pinged Google four times. The next command is entering into a container. There are ways to attach to a container, for example `zun attach` with the container name.
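The CLI flow just described looks roughly like this, assuming a deployment with the `zun` client configured; the image and ping command are the ones from the talk, but exact flags may differ by release:

```shell
# Find the image on Docker Hub
docker search cirros

# Create and start a container that pings Google four times
zun run --name ping-demo cirros ping -c 4 google.com

# Inspect what the container printed
zun logs ping-demo

# Attach to the running container for interactive use
zun attach ping-demo
```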
So you'll get an interactive session, which you can use to run commands in your container. The second example shows that you can open a shell in your container in interactive mode and then run your bash commands in that shell.

Now we wanted to show you a real example where Zun actually provides some value. Containers should be able to communicate, because we launch single services inside containers, and they need to talk to each other so that we have a complete application running. This example uses two containers. We create the first container, the MySQL database container; we set some environment variables, like the root user password, and the image we are using is `mysql:latest`. After this command we have a container running the MySQL server. After that we create the second container, which is the WordPress server. We have to provide the IP of our MySQL server, that is, where the MySQL server is running. You can see in the example that we have provided the WordPress DB host, which is the MySQL IP. You can get this IP from `zun list`, or from `zun show` with the name of the container, which gives you the list of addresses for the container, and you provide that IP in the second command. After that you have a running application where the WordPress server has access to the MySQL server. This is possible because both are running on the same network and communication between them works.

Yeah, so this is the same example, just using the Heat orchestration tool, so we are not doing the orchestration ourselves; Heat is the project doing the orchestration for us. You can use Heat for running complex applications on OpenStack infrastructure using the Zun resource; the resource is called OS::Zun::Container. In this example you can see that we are creating two resources: one is the DB and the other is the WordPress resource. This is the
same as the first example: the first resource is our MySQL server and the second resource is our WordPress server, which knows the IP where our DB resource is running. You can see the environment variable for the WordPress DB host; we are getting the address from the DB resource. After this we'll have a stack with two containers that can communicate with each other. This is how you can run complex applications using Heat on OpenStack.

So now Hongbin will dive into more details about the Zun internals. Thank you.

Thanks. Yeah, so this is the architecture of Zun. Zun has two components. The first one is zun-api; the second one is zun-compute. zun-api exposes the REST interface to the end users. The end user sends a request, and zun-api dispatches the request to zun-compute. zun-compute is the component that does the actual processing: for example, it pulls the image from the image repository, creates the Neutron resources, and runs the containers.

Right now Zun has a default behavior: it can create a container from an image located in Docker Hub. But as an alternative, you can specify that the image comes from Glance. What you need to do is package the container image and upload it to Glance as a tarball, so that when you create a container you add another option.
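A Heat template for the two-container stack described above might look like the following. This is a minimal sketch assuming the OS::Zun::Container resource type mentioned in the talk; the property names and the way the DB address is wired into WordPress are illustrative, and a real template may resolve the address differently.

```yaml
heat_template_version: 2017-02-24

description: Two Zun containers, WordPress backed by MySQL (illustrative sketch)

resources:
  db:
    type: OS::Zun::Container
    properties:
      name: mysql
      image: mysql:latest
      environment:
        MYSQL_ROOT_PASSWORD: rootpass

  wordpress:
    type: OS::Zun::Container
    properties:
      name: wordpress
      image: wordpress:latest
      environment:
        # Hypothetical wiring: feeds the address reported by the db
        # resource into WordPress's DB host setting.
        WORDPRESS_DB_HOST: { get_attr: [db, addresses] }
        WORDPRESS_DB_PASSWORD: rootpass
```

Because Heat tracks the dependency from `wordpress` to `db` through `get_attr`, it creates the database container first, which is exactly the ordering the speaker did by hand in the CLI example.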
Let's say the image driver option is set to glance. Then the request goes through zun-api to zun-compute, and zun-compute calls the Glance API to pull the image onto the host.

The second thing is that we use Neutron for the container networking. Before creating a container, zun-compute needs to talk to Neutron to verify that everything is fine, get the information about the networks, and create all the necessary resources in Neutron, for example a Neutron port for each container. After that we call the Docker API, and we require Kuryr to be set up as the Docker networking driver, so that at runtime Docker can call the Kuryr API, and Kuryr is responsible for connecting the container to the Neutron network. All the communication is via REST APIs, secured by Keystone.

This is the typical deployment of Zun. As with other OpenStack services, we can divide the nodes into control nodes and compute nodes. zun-api is supposed to be deployed on a control node, running alongside other OpenStack control plane components such as the Keystone and Neutron servers. The compute nodes are the nodes that run the containers, so zun-compute should run as an agent on each node that runs containers. Kuryr also needs to run on each compute node, because Kuryr is the networking driver for Docker. And there should be a Neutron agent on each compute node, controlled by the Neutron control plane, to manage the virtual switch on that node.

This is the sample sequence to create a container. It starts with the client sending a request to zun-api: create a container for me. Based on the specification of the container, for example how many CPUs and how much memory, zun-api will select a host to run the container, and the Zun
API will return a response immediately, and at the same time zun-compute will continue creating the container. The setup involves several steps. The first step is to talk to Neutron: give me the Neutron network that the container will be created on. Then we talk to the Docker API to create a Docker network, which is basically a representation of the Neutron network, and Docker will call Kuryr to actually create the network, and Kuryr returns. Then zun-compute talks to the Docker API again to create the container and connect it to the network, and Docker calls Kuryr to do the Neutron port binding, which binds the Neutron port and connects it to the Neutron network. Internally there are several steps and it is quite complicated, but in general the important step is to create a veth pair, plug one end of the veth pair into the Neutron virtual switch, and put the other end into the network namespace of the container, so that the container gets the IP address that is provided by Neutron.

Now I'm going to talk about other things. The first thing I want to highlight is the features that Zun provides. First, we have an API that is designed for containers, and we manage all the hosts running the containers; the end user doesn't need to worry about that, because we manage everything and hide the complexity. We are fully compatible with the Keystone multi-tenancy model, which means each container you create is isolated by Keystone tenants. We fully integrate with Neutron, we support multiple image repositories, and, most importantly as mentioned, we integrate with Heat. That allows you to orchestrate containers together with any OpenStack resource, such as virtual machines and Neutron resources, and you can set up very
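The veth-pair plumbing just described can be sketched with plain iproute2 and Open vSwitch commands. This is a hand-run illustration of what Kuryr automates; the interface names, bridge name, and address are made up, and the commands need root:

```shell
# Create a network namespace standing in for the container
ip netns add demo-container

# Create a veth pair: one end for the switch, one for the container
ip link add veth-host type veth peer name veth-cont

# Plug one end into the Neutron-managed Open vSwitch bridge
ovs-vsctl add-port br-int veth-host
ip link set veth-host up

# Move the other end into the container's namespace and assign the
# IP address that Neutron allocated for the port
ip link set veth-cont netns demo-container
ip netns exec demo-container ip addr add 10.0.0.8/24 dev veth-cont
ip netns exec demo-container ip link set veth-cont up
```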
advanced network topologies and place containers anywhere in them; that is totally possible. Then we have the Horizon integration, and Shu will show a demo of the dashboard in Horizon later, and we integrate with the OpenStack client.

This is the roadmap, based on feedback from the community. Perhaps it's not fixed, we can still discuss it, but this is the list according to the feedback. First, what we already support is running Docker containers on bare metal, which is basically a machine that has already been set up by the cloud administrator. In the future we are planning to run containers not only on bare metal but in VMs, or even on COEs, for example Kubernetes. For the COE part we are still debating whether it is a good idea or a bad idea, but we can continue the discussion and we will see later. Then we are planning to support additional container runtimes. We designed an architecture that is not only for Docker, so in the future it should be very easy to add additional container runtimes. We are working on the Cinder integration; if this feature is implemented, we expect to support stateful containers. Then we are going to support groups of containers, which is probably very similar to Docker Compose or the Kubernetes pod: grouping a set of containers that are highly coupled to each other and managing them as a unit. We are planning to integrate with the Placement API so that we can leverage its scheduler to do the container scheduling. Other features we are planning to support are keeping containers alive, monitoring them, and doing snapshots of containers.

The last thing is the scope of this project.
We try to limit the scope of this project, but we also want to work with the other OpenStack projects and collaborate with them to achieve a bigger goal. We are not going to do the orchestration ourselves; instead, we are going to integrate with Heat to do the orchestration. We are not going to do COE provisioning; if you want to do that, you can consider the Kargo or Magnum projects. We are targeting application containers; if you want to run system containers, you can consider other Nova virt drivers, such as nova-lxd. And we are not going to build a container image from source code, but you can use Solum to do that.

So this is the comparison of Zun to other technologies. For example, we are possibly very similar to nova-docker, which also runs containers on compute nodes, but the major difference is that we are not limited by the Nova APIs. nova-docker is actually a Nova driver that allows Nova to manage Docker containers, but we provide our own API that is designed for containers, so we won't be limited by the API of a compute project. Compared to Kubernetes, we are focused on a different use case. Kubernetes is designed for applications that are very complex, and it is very powerful; if you need that kind of technology, you can use Kubernetes for it. But Zun is a very simple tool to manage containers; we don't do the orchestration. That said, right now we are also looking at the possibility of integrating with Kubernetes so you can use both together.

This is the community of the Zun project, and you can see that the contributors are diverse. If one of the companies stops supporting this project, the project is still going to survive
because it's supported by many companies. And now Shu will show a demo.

Okay. You are seeing the desktop of a VM. This VM has an HTML-based presentation player powered by reveal.js. It contains static HTML, stylesheet, and JavaScript files, and the files are local on the VM. The presentation player is customized to load its contents from an external markdown file. The markdown content should be provided over HTTP, but it has not been provided yet. This VM is located on the private network of the demo project and has IP address 10.0.0.10. Let's provide the content for the presentation from a container inside the private network.

Now you are seeing the UI plugged into Horizon. This panel is added to the Project menu for users. Let me create a container to provide the presentation contents. I set a name for the container, in this case using the nginx container image, and set the command, in this case bash, and this container will be started after it is created. We can also specify the number of CPUs, the memory size, the working directory, environment variables, interactive mode, which enables a TTY and standard in/out and also enables access from the web console via WebSocket, and labels for the container. Then I create it; the container request is accepted.

The UI has actions covering almost all the features of Zun: start, stop, restart, pause and unpause, send execute command, send kill signal, delete. The container will be created and started soon.

Next, set the contents of the presentation using the console. Create a web directory, create the contents file in markdown using the printf command, then run nginx. nginx has started. To confirm the network settings for this container, change the tab to Overview. To serve nginx, the 443 and 80 ports are exposed. We can see this container has IP address 10.0.0.8; please remember the IP address 10.0.0.8 for this container. In the Network panel we can see ports for the router, 10.0.0.1, for DHCP, 10.0.0.2, and for the Ubuntu VM, 10.0.0.10, in the private subnet. After reloading this panel, the port for the container appears. A new port exists.
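The console steps of the demo (create the web directory, write the markdown, start nginx, then watch the access log) look roughly like this inside the container; the paths are guesses at what the demo used:

```shell
# Inside the container's console (paths are illustrative)
mkdir -p /usr/share/nginx/html

# Write the presentation markdown that reveal.js will fetch over HTTP
printf '# Zun Demo\n\nContent served from a container\n' \
    > /usr/share/nginx/html/content.md

# Start the web server, then watch for the VM's HTTP request
nginx
tail -f /var/log/nginx/access.log
```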
This IP address, 10.0.0.8, is the same one shown in the Zun UI. This port is provided via the Kuryr driver by Neutron. To allow access to this port by HTTP, the rule has already been added to the default security group for this demo project; ICMP is also allowed, which lets us reach nginx on the container from the VM.

To confirm the networking, let me ping between the VM and the container in each direction. The container's IP address is 10.0.0.8 and the VM's IP address is 10.0.0.10. Ping each other. Change the view to the container. First, ping the VM from the container using the console: ping five times to the VM, 10.0.0.10. We got responses from the VM with no loss. Change to the VM. Next, ping the container from the VM using the terminal: five times, to the container, 10.0.0.8. We got responses from the container with no loss. So now we have confirmed networking between the VM and the container via ICMP.

Before accessing the container from the VM by HTTP, let's watch the access log for nginx on the container. Switch the view to the console of the container and, using the tail command, watch the access log for nginx. If nginx is accessed, a new access log line will be added here. Let's change to the VM; it's time to access the container from the VM to load the markdown contents for the presentation player. Now load the contents from the container, 10.0.0.8, and play the presentation. Finally, the presentation player got its contents from the container. That's all from me. At last, we can confirm the HTTP access in the access log for nginx.

Yeah, so we are running out of time, so I'm going to stop here. If you have questions, you can talk to me offline.