Hi, everyone. My name is Hongbin Lu. I'm from Huawei Technologies, and I was the Magnum PTL in the last release. I'm going to present a new project called Zun. This work is a collaboration with other people, Qiming, Alice, and Madhuri, but they were not able to come, so I'm going to present it by myself. I'm going to talk generally about how to run containers on top of OpenStack, then introduce the Magnum project and the Zun project, and then give a demo of the project.

So there are several ways to run containers on top of OpenStack. The first way is to treat a container like a Nova instance. OpenStack is deployed on top of the infrastructure, and Nova is the component that abstracts the underlying compute resources and provides a general API for users to provision those resources. Nova integrates with different hypervisors, and the hypervisor creates a Nova instance, which is normally a virtual machine. But in the case of containers, Nova interacts with a container runtime through the hypervisor interface, and it creates Nova instances that are actually containers. In this case, each container belongs to a tenant, so it is possible to have two containers that belong to two different tenants scheduled on the same physical host. This causes security issues, because containers generally don't have strong isolation capabilities, so it is generally not good practice to schedule containers from different tenants on the same host.

The second way is to use virtual machines to run the containers. There's a hypervisor, there's a set of virtual machines, and the containers run on top of the virtual machines. In this case, the virtual machine is used to isolate the containers of different tenants, which addresses the security issue. But as the number of containers grows into the hundreds and thousands, it becomes very hard to manage all the containers distributed across different virtual machines.

So this is possibly the most common way to run containers on OpenStack: take a set of virtual machines and deploy a container orchestration engine, a COE, on top of them, then use the COE to manage the set of containers. The COEs are great tools and very popular, but a COE doesn't fit into OpenStack by itself. It needs a set of projects or tools to hook the COE into OpenStack. The first is deployment: it needs a tool to deploy the COE onto a set of virtual machines and make sure it is managed and scaled. The Magnum project was created for this purpose. Then a COE needs authentication: it needs a list of users so that, for each API call, it can identify the user making the call and authorize the API access. But a COE generally doesn't store a list of users in its etcd or data store, so it needs an external authentication service. In OpenStack, Keystone is the authentication service, so it is quite natural to integrate a COE with OpenStack. An example of that is the Keystone authentication plugin in Kubernetes, which exists for this purpose.
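As a side note, here is a minimal sketch of what that Kubernetes-to-Keystone integration looked like around that time. This is an assumption on my part: it uses the experimental Keystone password authentication flags that kube-apiserver shipped in that era (they were later deprecated and removed), and the Keystone URL is a placeholder; verify the flags against your Kubernetes version.

```sh
# Hypothetical sketch: point kube-apiserver at Keystone so Kubernetes
# users are authenticated against the OpenStack identity service.
kube-apiserver \
  --experimental-keystone-url=https://keystone.example.com:5000/v3 \
  --experimental-keystone-ca-file=/etc/kubernetes/keystone-ca.pem
```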
Another set of problems is networking and storage. For networking, in the container community the general solution is to do port mapping or to use a virtual network solution such as Flannel. There's a drawback to this approach in OpenStack: the port mappings are very complex to manage, and solutions such as Flannel create overlay networks, which have a performance problem because the traffic is encapsulated twice, so the performance between containers is not good. In OpenStack, the natural solution is to use Neutron to provide the networking for the containers, and the Kuryr project was created to bridge the COEs to Neutron. Generally speaking, what Kuryr provides is a set of plug-ins for the different COEs. If a COE wants to set up a network, it calls a plug-in provided by Kuryr; the plug-in receives the request from the COE and translates it into a set of API calls to Neutron, and Neutron actually does the work to set up the network for the containers (a sketch of driving Kuryr from Docker appears a bit further below). Storage is similar: there's the Fuxi project, which bridges the COEs and allows them to use Cinder or Manila for container storage.

So we can see there are several projects created to integrate the COEs with OpenStack, but some things are still missing in my view, for example images and monitoring. For images, the common solution is to use the Docker registry and deploy it per tenant, but in OpenStack the problem is that we would need to deploy an instance of the Docker registry for each tenant, which is undesirable in some use cases. Maybe Glance can be reused to serve the container images, and for monitoring, maybe the telemetry services can be reused. But these are just possibilities.

Then I'm going to give a general introduction to the Magnum project. What Magnum provides is a service that deploys a COE on top of a set of Nova instances; after the COE is deployed, you can run containers by using the COE. What this picture shows is that Magnum does not actually manage containers: it manages COEs, and the COE provisioned by Magnum is the one that actually manages containers. This is a list of the major features Magnum provides. You can use Magnum to provision Kubernetes, Docker Swarm, or Mesos, and you can scale the cluster at runtime by adding or removing Nova instances from the cluster. There is also a set of security features that Magnum provides which possibly many people don't know about. First, Magnum serves as the CA, the certificate authority, for the COEs. That's because Magnum configures the COEs by default to use TLS to secure the API endpoints, and a TLS solution needs a CA, so Magnum serves as the CA: it provides an API to issue certificates and to sign keys with the certificate. The second feature is that Magnum generates a dedicated user and Keystone trust for each COE. That is because Magnum wants to limit the credentials the COE uses to access OpenStack services such as Neutron and Cinder. To avoid security risks, it creates dedicated credentials to make sure the permissions are right, so the COE gets exactly the permissions it needs and nothing more.
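To make the Kuryr flow above concrete, here is a minimal sketch of creating a Docker network backed by Neutron through the kuryr-libnetwork plug-in. The subnet and network name are made up for illustration, and this assumes kuryr-libnetwork is installed and registered as a Docker remote driver.

```sh
# Create a Docker network whose ports are actually Neutron ports,
# using the kuryr network driver and IPAM driver.
docker network create --driver kuryr --ipam-driver kuryr \
  --subnet 10.10.0.0/24 demo-net

# Containers attached to this network receive Neutron-managed addresses.
docker run --net demo-net -it alpine ip addr
```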
So in the last release, we updated the mission statement of Magnum. We did that because there was some confusion about what Magnum is: some people thought that in Magnum the container is a first-class resource, and some people thought Magnum is used to manage containers. This is actually not true, so to clarify the confusion, the Magnum community decided to update the mission statement. Before, Magnum was called a container service; now it's called a container infrastructure management service, and the mission statement clarifies that Magnum manages COEs, not application containers. Along with the update of the mission statement, there were changes at the API level. On the left side are the API resources in the Mitaka release of Magnum: the Bay and BayModel resources for provisioning the COEs, a container resource for Docker Swarm containers, and a set of resources that belong to Kubernetes, such as pod, service, and replication controller. On the right side is the Newton release. We can see that what is kept as API resources in Magnum is the Bay and BayModel; all the other resources were removed, and the container resource was re-introduced in a new project, which is now called Zun (a sketch of the Bay and BayModel workflow appears a bit further below).

Then I'm going to talk about the new project, the Zun project. Right now there are two ways to consume the service from a COE: the first is to use the native API provided by the COE; the second is to use an OpenStack API, which is what Zun provides. Zun provides an API that abstracts container lifecycle management: a simple API that is generic across all the container technologies. On the back end, it has a deep integration between OpenStack and containers, and it integrates with several OpenStack services, such as Keystone, Nova, and Neutron.

So why did we create this new project? Because there was no perfect way to do this; right now in OpenStack, some container use cases cannot be addressed. For example, there are solutions such as nova-docker that allow users to use the Nova API to drive containers. If nova-docker is there, why did we create the Zun project? Because the APIs of virtual machines and containers are different. There is a set of operations, such as create and delete, that is shared, but VMs and containers also have their own sets of operations: for containers, run and exec, and several parameters on create, are specific to containers. So in order to expose the features that are specific to containers, we created a project with a new API designed for containers.

And then, Magnum is a project that allows a user to create a COE and use the COE to run containers. If Magnum is there, why do we need Zun? Because Zun enables different models for using containers. The left side shows how COEs are deployed by Magnum. In Magnum, a COE belongs to a tenant; the whole COE belongs to the tenant and cannot be shared between tenants. To speak more precisely, a COE actually belongs to a user: Magnum doesn't allow a COE to be shared between users, just as in Nova a user is not allowed to share a key pair. In Magnum, it is similar.
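For reference, here is a rough sketch of the Bay and BayModel workflow mentioned above, using the Mitaka-era Magnum CLI. The image, key pair, network, and flavor names are placeholders, and the exact flags may differ between releases (later releases renamed these resources to cluster template and cluster).

```sh
# Define a template that describes the COE cluster.
magnum baymodel-create --name k8s-model \
  --image-id fedora-atomic-latest \
  --keypair-id demo-key \
  --external-network-id public \
  --flavor-id m1.small \
  --coe kubernetes

# Provision a bay (a COE cluster) from that template.
magnum bay-create --name k8s-bay --baymodel k8s-model --node-count 2
```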
To continue that example: if there's a public cloud that has a large number of tenants and a large number of users, each user needs to create their own COE, and as a result there will be many COE deployments in the cloud that are essentially duplicates. And why is it not good to have many COEs? Because the deployment of each COE consumes resources. Each COE needs a set of master nodes configured to serve as the control plane, and these master nodes are wasted because they are not used to run workloads. In addition, each COE needs a floating IP and a load balancer, and infrastructure needs to be set up to monitor the status of each COE. So if we have many COEs in the cloud, a lot of effort goes into maintaining these deployments.

This problem is solved by Zun, because Zun provides a single, consistent API for all the tenants. All the containers are managed by Zun in a centralized way, which is good for resource utilization. It also helps to relax the requirement that containers must run in virtual machines: Zun does not assume that containers have to run in virtual machines; they can run on bare metal. There is a use case for that. For example, if the cloud has just one tenant, it doesn't make sense to run all the containers in virtual machines. As another example, if the cloud is using a container runtime such as Hyper, which has strong isolation capabilities, it also doesn't make sense to use virtual machines for isolation.

So here is a summary of why we created the Zun project. Zun provides simple, container-oriented APIs that are independent of any specific container technology. It provides a common infrastructure for VMs, bare metal, and containers. With Zun, users don't need to manage the container hosts or the clusters: if they want a container, they don't need to get a host first; they just say "give me a container," and the container will run in the pool of hosts set up by the cloud provider. This is different from Magnum, because in Magnum, if you want a container, you have to get a cluster first, wait for the cluster to boot up, and then run the container on the cluster. In Zun, this is simplified.

This is the architecture of Zun. It has an API service that processes the REST requests. It has zun-compute, which is deployed on each compute host to allow the service to scale out. And it has different drivers, each driving a different COE or container runtime, and the containers are managed by those COEs and runtimes.

Here is a concept in Zun: it has containers, but it also has sandboxes. A container is just a Linux container, like Docker's, but a container has to run in a sandbox, and a sandbox can have one or multiple containers. A sandbox is created to serve as a placeholder for the containers. It is a box: all the containers run in the box, and it creates an isolated environment for all the containers inside it. In the sandbox there could be a network interface, and there could be a volume; if the resource is there, it is shared by all the containers in the box. A sandbox can also be used to enforce resource constraints: for example, you can set the CPU or memory of the sandbox, and the aggregate resource consumption of all the containers inside cannot exceed this limit.
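One possible way to enforce that kind of aggregate limit with plain Docker, sketched below for illustration (this is not necessarily how Zun implements it), is to give all of a sandbox's containers a common cgroup parent and set the limit on the parent. The cgroup name and limit are made up.

```sh
# Create a parent cgroup for the sandbox and cap its total memory
# (cgroup v1 layout; paths differ under cgroup v2).
sudo mkdir /sys/fs/cgroup/memory/sandbox42
echo $((512 * 1024 * 1024)) | \
  sudo tee /sys/fs/cgroup/memory/sandbox42/memory.limit_in_bytes

# Run each container of the sandbox under that parent, so their
# combined memory usage cannot exceed 512 MB.
docker run -d --cgroup-parent=/sandbox42 nginx
docker run -d --cgroup-parent=/sandbox42 redis
```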
So what exactly is a sandbox? A sandbox can be implemented differently by different drivers. For example, with hypervisor-based container runtimes, a sandbox can be a VM; in Kubernetes, a sandbox with a set of containers can be implemented as a pod; and in the most general case of Linux containers, a sandbox could be a set of Linux namespaces. In our first implementation, we implement a container as a Docker container, and we implement the sandbox also as a Docker container. What this means is that if you create a container, it will actually create a sandbox for the container as well: two containers are created, one being the sandbox container, the other being the container requested by the user. Right now we support a one-to-one mapping between sandboxes and containers, so each sandbox is matched to exactly one container, but in the future we are going to support multiple containers in the same sandbox.

This is a simplified version of the commands we run to create a sandbox and create a container (see the sketch at the end of this passage). The first command creates the sandbox as a Docker container using the Kubernetes pause image. What this command does is create an empty container that doesn't do anything but reserve a set of namespaces, so that later, when we create a container in the sandbox, we can run it in those namespaces to share the resources. In the second command, we run the actual container, but we add a set of options to make sure the container doesn't create its own namespaces; instead it joins the namespaces of the sandbox container.

So why did we introduce the sandbox? For the same reason Kubernetes has pods. The sandbox allows a set of containers to be co-located and co-scheduled onto the same host. It allows them to share the network namespace, so all the containers share the same IP address and network device, and it shares the volumes and the resource limits. But most importantly, we created the sandbox because it allows us to use Nova to create it. The management of the container and the sandbox is split: we use Nova to manage the sandbox, and after the sandbox is created, we use the Zun API to create containers inside it. The reason we use Nova is that a sandbox created by Nova has all the things we want: it is plugged into Neutron, it is scheduled by the Nova scheduler, it just has everything we want. So we decided to use Nova.

This is how it works when we create a container in Zun. First, the user sends a request to Zun to create a container. Zun then sends a request to Nova, asking Nova for a sandbox container. Nova schedules the sandbox to a physical host running nova-compute, and that nova-compute has a Docker driver that creates a Nova instance which is actually the sandbox. After the sandbox is created, Zun asks the Zun agent, zun-compute, which runs on the compute host, to create the container inside the sandbox. That is how Zun creates containers.
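Here is the simplified two-command flow described above, as a minimal sketch. The container names are illustrative, and the pause image path varies by Kubernetes version.

```sh
# 1. Create the sandbox: an idle "pause" container whose only job is
#    to hold the network, IPC, and PID namespaces open.
docker run -d --name sandbox gcr.io/google_containers/pause

# 2. Run the user's container joined to the sandbox's namespaces
#    instead of creating its own.
docker run -d --name app \
  --net=container:sandbox \
  --ipc=container:sandbox \
  --pid=container:sandbox \
  nginx
```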
Another feature is container images. In Zun, how images are managed is also pluggable: different drivers can be provided to support different ways of storing container images. Right now we have two drivers: one pulls images from Docker Hub, the other pulls images from Glance.

Then I'm going to show a demo of the new project. This is the Horizon UI, and there's a Containers tab, which is the panel the Zun project provides. Inside the Containers panel, you can create containers. You can set the name of the container and the image it is going to use, and you can set the command it is going to run. In the spec, you can set the CPU and memory, and there's a further set of parameters you can set in the UI. After the parameters are set, you click the button to run the container. When the container is being created, it actually creates a sandbox through Nova first, so in the Nova UI you can see the sandbox here. Right now the status is Stopped, which means the container has been created. We can click the container to see its details, and we can see its logs. Then we start the container. This container executes a command, echo hello, so in the log there should be a hello. If we start the container again, it should print two hellos, and we can see there are two hellos there. So this is a very simple container created by the Zun project.

Then I'm going to show the CLI. All the functionality in the UI is also available in the CLI. You can list the containers and show the details of a container. You can run nova list and see that the sandbox is there, and if you run docker ps, it shows there are actually two containers: one is the sandbox, the other is the actual container. Then I delete these containers.

Another example I'm going to run is to use Zun to deploy an application that has two containers: one is a database, the other is an application server. First, I create the database container. It uses the mysql image, and in the environment variables I set a few values: the database name, the password of the database, and the user of the database. Then we wait for this container to start; it should start very fast. Right now the container is created; I start it, and the database should be up and running. This is the sandbox in Nova, and it shows the IP address. This IP address, provided by Neutron, can be used to access the container, and it has a security group. Now I try to use the MySQL CLI to access the database, but this should fail, because the security group is closing the port: by default, access to the database is not allowed. It shows that the container is secured by the security group, so you can use security groups to secure each port of the containers. Now I open the MySQL port in the security group: I add a rule that allows all traffic to port 3306. Then I try the MySQL client again, and now we can get into the database, and I run a command to verify the database is working. So right now we have the container that hosts the database.
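For readers following along, here is a hypothetical CLI version of that database step. The zun flag names are assumptions based on the client of that era, the credentials are placeholders, and the security group command is the standard OpenStack CLI.

```sh
# Create and start the database container, passing its credentials
# as environment variables (flag names may vary by release).
zun run --name db \
  --environment MYSQL_ROOT_PASSWORD=secret \
  --environment MYSQL_DATABASE=wordpress \
  --environment MYSQL_USER=wp \
  --environment MYSQL_PASSWORD=wppass \
  mysql

# Open the MySQL port in the sandbox's security group so clients
# can reach the database.
openstack security group rule create --protocol tcp \
  --dst-port 3306 default

# Connect from outside using the Neutron-assigned address.
mysql -h <sandbox-ip> -u wp -p
```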
I'm going to create another container that hosts the application server. We call it the web server, and we use the WordPress image pulled from Docker Hub. In the environment variables, I point this container to the database container: I enter the user, the password, and the IP address of the other container. This container hosting the application server should read the environment variables passed in by Zun to set up its connection to the database. Right now the container is created, and I start it. Now the container is running, and I go to Nova again and open the security group to allow traffic to reach the application server, so I open port 80. Then I get the IP address from Nova and go to that address, and we can see the application is up and running. I set up the application, which should create a set of tables in the database. So right now this is the application; this is WordPress. Then I go to the database again and verify that the set of tables was created. And that's all for the demo. I'm finished. Any questions?

Very interesting talk. We were wondering why you are using the nova-docker driver. If Zun is supposed to work with Magnum, which provisions the COE, why did you mention the nova-docker driver? How are you using it? How does it relate to the COE?

So we don't depend on specific COEs or container runtimes; everything is implemented in the drivers. Our first driver supports the Docker container runtime, which is just a runtime without a COE. We use the Nova Docker approach because it has everything we want: it has a scheduler, it has quota management, and it plugs the container into a Neutron port, so the container gets an IP address and everything goes through Neutron. Actually, we are not using nova-docker itself: we provide a customized Docker driver for Nova; it's not nova-docker. The key is that we want to use Nova: we use Nova to get everything we want, and we are going to leverage every feature in Nova.

Okay, so in other words, you're using Nova as a replacement for a COE, just like yet another driver for Zun?

Maybe you can think of it that way, yeah.

Okay, so in the end it will be up to the user whether they want to use Nova as the scheduler for scheduling and provisioning the sandbox, or whether they want to use, say, Kubernetes?

Actually, from the end user's point of view, they just have an API to get a container, and they don't know which backend runs it.

Right, but from the administrator's point of view, the person who sets up Zun itself, it will be up to them to choose whether they want to use Nova for provisioning the sandbox or whether they want to use Kubernetes or Mesos or whatever else.

Yeah, true.

Okay, okay, cool. And maybe just a quick last question: what are your plans down the road? How do you plan to proceed with this project? Any additional features you might want to add in the future? This is OpenStack; how do you see this thing going forward?
So the first feature I want to support is Kubernetes. In the next release, I think the priority of our team is to work on the Kubernetes integration. Maybe another feature is storage: we want to use Cinder or other OpenStack services to provide the storage for the containers, so maybe a Cinder integration. And then maybe we will add support for hypervisor-based container runtimes such as Hyper, because Hyper has strong isolation: if you use Hyper with Nova, you have everything strongly isolated and compatible with the multi-tenancy model of OpenStack. That's what I can think of. Any other questions?

Yeah, I just wanted to clarify something. With the current driver, with Nova, in that case you're not using Magnum, right?

No, it's not using Magnum.

But the plan is, if you implement a driver that is, for example, going to use Kubernetes, you'll leverage Magnum in that case?

The feedback we got so far is to decouple from Magnum.

Entirely?

Yeah. The driver will just have a URI set to the endpoint of the Kubernetes, and it should not care where that Kubernetes came from: it should not care whether the Kubernetes was provisioned by Magnum or by other tools.

Okay, thanks.

Any other questions? Thank you.