So I've been working at Alibaba for over a year, and I started to work on containerd after I joined Alibaba; I am an active contributor now. My partner Lantao Liu is here. — Hello everyone, I'm Lantao Liu. I joined Google in 2015, and I work on Kubernetes and am a maintainer of containerd. — We also have colleagues from the containerd community here, Thomas and Derek. This is a small meeting room, and it's my first time seeing so many participants. The session is 35 minutes; Lantao and I will speak for about 15 minutes each, and then we will leave 5 to 7 minutes for questions. So let's start.

For my 15 minutes I would like to focus on integration, because containerd is still new and different from Docker, so I will focus on how to use it. First, let's look at the status quo of containerd. containerd came out of Docker, and in 2017 it was donated to CNCF. It has developed very well over the past two years, moving very fast, and it has many adopters; it is now the fifth project to graduate from CNCF. A lot of large companies use containerd today: IBM Cloud, Alibaba, Microsoft, AWS, and others. Last month, colleagues from eBay said they are also using containerd. So containerd has many adopters, and I believe it is mature software that provides a high-performance runtime engine.

But not many people have migrated from Docker to containerd yet. containerd deliberately does not provide some user-facing features, for example image building; it is designed to be embedded into a larger system. So if you want to move from Docker to containerd, or to containerd's CRI integration, you need to understand its architecture, how it runs, and how to build a container platform on top of it.

Now let's look at the architecture. This is a well-drawn diagram by Derek. A lot of cloud platforms adopt containerd. At the bottom, you can see that containerd supports multiple container runtimes, such as runc and Kata, so there is more than one.
You can also select other kinds of runtimes. In this diagram you can see a lot of modules in containerd. Let's focus on the containerd API first: containerd provides a gRPC API which can be used to build various things on top, for example the Docker engine itself. containerd wants to be integrated into a larger system. So there is a client side, and on the server side we have different modules, such as content, snapshots, and runtime. These modules manage low-level resources, for example storage backends such as overlayfs. containerd also supports namespacing, which allows different clients to share the same containerd instance: because each client has its own namespace, their resources are kept separate.

In addition, containerd has a very important component, the snapshotter. It is similar to Docker's graph driver, but it is more capable and exposes simpler, easier-to-compose operations. Moreover, containerd exposes a Prometheus metrics API, which you can use to monitor both containerd's own process and the containers it manages. Lantao will talk about CRI later and give you more details.

So, a brief summary. containerd provides an API service, and you can build a lot of things on top of that API. But it is different from Docker: Docker exposes one large API call such as "pull image"; containerd doesn't do that. In containerd we have a smart client. The containerd server side wraps only low-level resources, including container resources, in gRPC; the client side composes those low-level resources, so on the client side you can do almost anything, from back end to front end.

Let's look at a concrete example. When we want to pull an image, it is not a call to one API but to four services: the diff service, the content service, the snapshotter, and the image service. The image service here may be different from the "image" we usually talk about: it manages metadata, connecting the snapshotter and the underlying content.
So a lot of work happens on the client side, and the server-side primitives are quite small. These services can be reused and composed, and the client implements "pull image" on top of those primitive APIs. Generally speaking, I don't want to call containerd a micro-service architecture, but there are clear boundaries between the different modules, so the client can do a lot on its own. For example, if I want to change where the pull step happens in my workflow, I can do that without changing any code in the back end. It's very flexible: the API can be composed however you like on the client side.

Of course, on the server side there is also work to be done. For example, if I need a new image format, or a secure container runtime such as Kata, how do we integrate them? We can't just maintain a fork, because that leads to high maintenance cost. So in containerd, every component is a plugin. How should we understand this? As I said, the modules have clear boundaries, so they can be grouped into different plugins, and the code base maintains a dependency graph between them. Based on a registration mechanism, plugins can be defined and extended.

So back to my topic: how do we integrate a third-party service? One method is to build a new containerd binary. containerd provides a common entry point, so you can write your own main package, import the built-in plugins plus your third-party plugin, and build your own containerd binary. This connects you to the third party very quickly, but maintaining your own binary may be troublesome.

The second method is external: an out-of-process approach where you don't need to recompile anything. There are two kinds of such plugins: one is the proxy plugin, the other is the runtime plugin, and I will give you more details on each. First, the proxy plugin. Simply speaking, containerd forwards requests from upstream to a third-party service. Probably most users would choose this for a snapshotter.
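For example, registering an out-of-process snapshotter needs only a configuration entry. This is a hedged sketch: `mysnapshotter` and the socket path are placeholder names for your own service, and the exact key layout may differ between containerd releases.

```toml
# /etc/containerd/config.toml — registering a proxy plugin.
# containerd forwards snapshot requests to the gRPC service
# listening on the given socket; no containerd code changes needed.
[proxy_plugins]
  [proxy_plugins.mysnapshotter]
    type = "snapshot"
    address = "/run/mysnapshotter/snapshotter.sock"
```

The same mechanism works for a content store by setting `type = "content"`.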
Currently only snapshotter and content proxy plugins are supported. It is a forwarding service: you implement the API in your third-party plugin and run it as a service, and the configuration you give containerd just tells it how to connect to your server; upstream requests are then forwarded to the third-party server. That means you don't have to change any code in containerd itself.

There is also the runtime plugin. How did runtime integration work previously? When we integrated a runtime such as Kata, we would add a configuration entry for it, and containerd would wrap it behind a shim so that it looked like runc. But that led to some problems and limitations. So now we have the runtime v2 API: if your container runtime implements this API, it works. The API is implemented by a shim binary, and the binary must follow a naming convention so containerd can find it when it starts your runtime. Our Windows container support and Kata both implement this API. For example, the runc runtime maps to a containerd-shim-runc binary, which containerd looks up by name. This is how third-party secure containers are integrated.

That's all about integration, and I'll hand over to Lantao.

Hello, I'm Lantao Liu. Today I would like to talk about containerd and Kubernetes, and I will cover containerd's integration with Kubernetes. In Kubernetes we have defined an interface called CRI, the Container Runtime Interface. It is a set of functions: if a runtime implements these functions, it can integrate with Kubernetes. We found that there were a lot of runtimes, including containerd, and we wanted to support different runtimes, so we defined this interface. After two years of development, there are even more runtimes. We'll focus on containerd, CRI-O, and dockershim, which is used to support Docker; some people in the community would like to remove dockershim.
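The naming convention mentioned above can be sketched as a small Go helper. This is an illustrative sketch of the rule, not containerd's actual code: a runtime named `io.containerd.<name>.<version>` resolves to a binary called `containerd-shim-<name>-<version>` on the PATH.

```go
package main

import (
	"fmt"
	"strings"
)

// shimBinary maps a runtime v2 name such as "io.containerd.runsc.v1"
// to the shim binary name containerd looks up on $PATH, following the
// convention containerd-shim-<name>-<version>.
func shimBinary(runtime string) string {
	parts := strings.Split(runtime, ".")
	if len(parts) < 2 {
		return "" // not a valid runtime v2 name
	}
	name, version := parts[len(parts)-2], parts[len(parts)-1]
	return fmt.Sprintf("containerd-shim-%s-%s", name, version)
}

func main() {
	for _, r := range []string{"io.containerd.runc.v1", "io.containerd.runsc.v1"} {
		// io.containerd.runc.v1 -> containerd-shim-runc-v1
		fmt.Printf("%s -> %s\n", r, shimBinary(r))
	}
}
```

So gVisor's runsc ships a `containerd-shim-runsc-v1` binary, and containerd finds it by name when that runtime is requested.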
So in the future we'll use CRI to do the integration. There were also Frakti, a precursor of the Kata integration, Pouch from Alibaba, and Rocket (rkt). That is the background on CRI.

In order to integrate with Kubernetes, containerd needs to implement CRI, so we built the containerd CRI plugin; you can find the code at this link. Since version 1.1, the plugin is built into containerd, so if you have a containerd binary, whether self-built or shipped as part of Docker, you already have CRI support. In April 2018 it became production ready.

Let me show you the test grid for the containerd integration. Under every dashboard there are a lot of test cases, and most of them pass; the few that fail also fail with the Docker integration, so you can check my claim: it is production ready. When we compare this with the previous integration through Docker, you can see there is no dockershim anymore; we only have the Kubernetes-to-containerd integration, so in terms of stability and performance it improves a lot.

This is a simple architecture. The kubelet tells the CRI plugin, "I want to pull an image," and we use containerd to pull the image. Then the kubelet wants to start a pod, so we set up the pod environment, and when we set up the pod network, we use a CNI plugin. Then the pod is ready.

This is a performance comparison with dockershim: a 105-pod startup benchmark, a 105-pod management-overhead benchmark, and then the resource usage of the components. The reason for using 105 pods is that it is around the default supported pod number per node. One line is containerd and the other is dockershim, and containerd is basically better across the board. We recommend that you gradually shift to CRI, because in our plan dockershim will be removed from Kubernetes. I don't know yet whether we will formally deprecate it or not, but someone would need to maintain it.
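The CNI setup step above is driven by the CRI plugin's configuration. A hedged sketch follows, using the containerd 1.1-era key layout (later releases renamed these sections), with the standard default paths:

```toml
# /etc/containerd/config.toml — CRI plugin section.
[plugins.cri]
  # Where the CRI plugin finds CNI plugin binaries and the
  # network configuration used when setting up a pod's network.
  [plugins.cri.cni]
    bin_dir  = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
```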
Then how can you shift from the Docker integration to this kind of new CRI integration, with containerd or CRI-O? It's very simple. If you have Docker 18.05 or later installed on your machine, you already have containerd: as long as you install Docker, containerd comes with it. So when you run Docker, you can also enable CRI. You add several kubelet flags so that the kubelet no longer talks to dockershim but to containerd directly, and you make sure the CRI plugin is enabled. If everything is ready, then it's done: you are running on containerd. So actually you are still using Docker, just a new version of Docker, while Kubernetes talks to the containerd underneath.

This is the integration of containerd with Kubernetes, and we hope you can gradually shift your production environment to it, because most new functionality will land here rather than in dockershim; as I said, dockershim needs someone to maintain it, otherwise a lot of new functions will not be there.

As for GKE: GKE is Google Kubernetes Engine, the hosted Kubernetes service provided by Google Cloud; basically every cloud has this kind of service. GKE supports containerd 1.1. Last year it was beta, and not long ago we started using containerd for the master nodes. I'm on vacation right now, so it may reach GA while I'm away, and probably at the end of this year or next year we will use containerd for all the nodes instead of Docker.

Just now I talked about the containerd and Kubernetes integration and containerd being used in GKE. Actually, with containerd in GKE we can do a lot of things, so let me briefly introduce GKE Sandbox. As we all know, containers are very convenient, but in terms of security there are some problems: all containers share the whole Linux kernel, so if there is a kernel bug, an attacker may escape from the container and then steal your data or interfere with your production environment. We have seen a number of such escapes before. To solve this problem, GKE has done one thing.
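The kubelet flags mentioned above look roughly like this. This is a sketch: the socket path assumes a default containerd installation, and in the containerd bundled with Docker you also need the `cri` plugin enabled in its config.

```shell
# Point the kubelet at containerd's CRI endpoint instead of dockershim.
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# In /etc/containerd/config.toml, make sure "cri" is not listed in
# disabled_plugins, e.g.:
#   disabled_plugins = []
```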
Outside the container, we add another layer of protection, so if you have a strong demand for isolation, you can have it. It is based on gVisor. A brief introduction to gVisor: it was open-sourced by Google, and it is a sandbox technology based on a user-space kernel written in Go, with an OCI-conformant runtime called runsc. Your application runs on top of this user-space kernel, and its system calls go through it, like a form of virtualization, so the application cannot directly access the host kernel. As a result, you get two layers of security isolation.

To support gVisor in containerd, we have to account for gVisor being different from runc. On one side is runc, on the other is gVisor, and you can see the difference: gVisor has its own kernel. With runc, containerd assumes your application runs on the host kernel, so it can directly send signals to the process, and when a process exits, containerd can directly observe it. But with gVisor, all container processes run on the gVisor kernel; you cannot get this information directly from the host kernel, so you need to talk to the gVisor kernel for it. This means containerd needs different behavior for gVisor than for runc, so we need a layer of abstraction, under which each runtime can behave differently.

We had a discussion with the containerd community about how to solve this. The benefit of containerd is that it has a lot of interfaces, and going through every interface, we found that the shim is the right layer. So we decided to introduce the abstraction at the shim level: shim v2. Sending a signal to a gVisor application, monitoring its exit, collecting metrics — things we could not do before can now be abstracted under shim v2, and gVisor can provide its own special implementation. With shim v2 we can do the gVisor integration: gVisor has its own shim, and all the behaviors that could not be handled previously can be handled now.
And shim v2 is now a standard; for example, Windows containers use it too. So if you have your own runtime, you can also use it. That is the containerd side of supporting gVisor.

On the Kubernetes side, we also need to make some changes. When we start a pod, we need to tell Kubernetes it should run in gVisor. So we have an API called RuntimeClass: for each runtime you can define a RuntimeClass, and when you start a pod, you reference it. So we have a gVisor RuntimeClass. At the CRI level, we need to pass this runtime information along, saying "this pod uses this runtime handler," and then in containerd we configure which shim that runtime handler corresponds to. When all of that is done, everything from the Kubernetes API through every interface down to containerd is connected.

We have an animation here; anyway, it's not important. You can see it end to end: the user says, "I want to create a pod, and it should use gVisor." Kubernetes sees that and uses CRI to create the pod with the gVisor runtime handler. containerd has a configuration entry saying which shim to use for gVisor, starts that shim, and the shim knows how to manage gVisor containers and how to communicate with them. The same mechanism can support Kata.

As for GKE Sandbox: last year it was alpha, and this May it went beta. We still cannot use it in China, but anyway, if you use gcloud, it looks like this.

A quick recap. The Kubernetes and containerd integration is ready for production use, and we recommend you shift to it. Otherwise, maybe two or three years from now, if you have not shifted, you may have some problems, because many functions in dockershim will not be updated and you may not get security fixes or bug fixes; or you can shift to another CRI runtime. Also, GKE's containerd support is beta, and in the future it will become the default, with all nodes running containerd.
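The pieces described above can be sketched like this. Field names follow the early RuntimeClass API (`node.k8s.io/v1beta1`), so treat the exact names as assumptions that may differ in your Kubernetes version:

```yaml
# RuntimeClass mapping a name to a CRI runtime handler.
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc            # handler name passed over CRI
---
# A pod that asks to run in the gVisor sandbox.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
```

On the containerd side, the handler is mapped to a shim v2 runtime in the CRI plugin's config, e.g. a `[plugins.cri.containerd.runtimes.runsc]` section with `runtime_type = "io.containerd.runsc.v1"`.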
GKE Sandbox is built on Kubernetes plus containerd plus gVisor. And containerd is extensible: it has a long list of interfaces, so whatever you want to do, look at the interfaces and you may find something suitable. That's all, thank you.

We are just on time, so we have five minutes for questions.

Q: Can you share, when you shifted from the Docker runtime to containerd, what should we pay attention to?

A: I guess you all heard the question. What you need to pay attention to is that because everything now lives in containerd, the state Docker used to hold moves to containerd: you cannot see the containers in Docker anymore, and you cannot see the images anymore. Docker and Kubernetes just share the underlying containerd. Whatever Kubernetes does, Docker will not know, and whatever is done in Docker, Kubernetes will not know. Because Docker has a different application scenario from Kubernetes, we don't want them to overlap.

Q: Then how can we do debugging?

A: Upstream, based on CRI, we have a tool called crictl. It is similar to the Docker CLI but built on CRI. For example, previously when you ran docker ps you would see a long list of containers with no pod metadata, so you could not tell what the containers were for. For Kubernetes we have made some optimizations: with containerd and crictl this output is redesigned, and you can clearly see the container name and which pod it belongs to. (Please use a microphone.) Most functions you want from the Docker CLI, you can get by switching to crictl, because it uses the same CRI interface.

Q: In the Kubernetes community there is another runtime called CRI-O, which is also CRI-compatible. Compared to containerd, what are the advantages and disadvantages, and would you suggest users pick containerd or CRI-O?

A: Actually, it's your own choice.
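For reference, the crictl commands the answer alludes to look like this. An illustrative sketch: crictl talks to whichever CRI runtime the node is configured with, so the same commands work against containerd or CRI-O.

```shell
crictl pods     # list pod sandboxes, with pod name and namespace
crictl ps       # list containers, annotated with the pod they belong to
crictl images   # list images pulled through the CRI runtime
```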
Because previously, when asked by other people, we would just say: we hope this choice is not important to you. In Kubernetes, crictl is a standard tool; no matter whether you use containerd or CRI-O, it is the same, and Kubernetes supports both runtimes. For Kubernetes users, you don't need to care whether it's containerd or CRI-O; we hope the operators, or the OS vendors, will decide. For example, on GKE we use Container-Optimized OS, and for now we still use Docker; Red Hat or SUSE will use CRI-O. So we hope to reach a state where you can make your own decision, and it's not important. As for the two projects: containerd is maintained by Google, Docker, Microsoft, and IBM, among others; CRI-O is maintained by Red Hat and Intel, and they also have a lot of contributors. You can take a look for yourselves.

Q: When we eliminate dockershim, a lot of functions will be missing, for example log management and log rotation. How do you view this?

A: Those functions will need to be covered on the Kubernetes side, in the kubelet or around CRI; someone has already sent a proposal for this upstream, and I think it is good groundwork. Another important consideration is that on GKE, as with most cloud solutions, each node is a VM. If we used Kata now, it would mean embedding virtualization inside virtualization, and without nested virtual machines Google doesn't have this scenario; gVisor works well here because it doesn't need to embed a virtual machine. There are different problems, and some don't need virtualization. So this is our use case, and different vendors have different use cases. Of course, we support Kata as well, but we want to solve the isolation issue without hardware virtualization. So again, both are useful.

Q: So containerd support exists now.
And how long until it is ready for production use? And what about Arm support?

A: I don't know the exact timeline. As for Arm: containerd itself doesn't have official support for that yet, and it still needs work. What we support now may be enough for many cases, but if you want to build for more platforms, you can build it yourself. Thank you very much.