So let's start. Hi everyone, I'm Tao Peng and I work for Ant Group. I'm also a Kata Containers Architecture Committee member, and I'm here today to introduce Kata Containers 3.0, a virtualization solution optimized for cloud-native workloads. I also have a co-speaker, but he had some visa issues and couldn't make it here; we actually prepared this talk together. So, let's start.

First, the agenda. I want to talk about what Kata Containers is and its main use cases. The project has been evolving for more than five years now, so I will walk through our architecture evolution over those years. Then I will move on to Kata Containers 3.0 in depth: I will explain what we have mainly done in the Kata Containers 3.0 development time frame, plus some features that are not part of the new components but are still very important to mention. Then we'll move on to some future work.

So first, what is Kata Containers? When we started the project, we had a slogan: the speed of containers, the security of virtual machines. What does that mean? We can look at it from the evolution of infrastructure environments. Before containers, we already ran many processes on the same host, and those processes were isolated only at the process level. Next came containers: with Docker and runc, processes are grouped into containers, isolated by Linux kernel cgroups and namespaces. Then with Kata Containers we introduced another layer of isolation, the virtual machine. We put the container abstraction into a virtual machine, so you can run containers inside a virtual machine, but in a container way. So when we started the project, instead of creating new virtual machines, you could just run docker run and get containers.
Right now, with Kubernetes, you can run kubectl, apply some pod YAML, and make it run in a virtual machine, but it's still a container experience. We have a lot of optimizations there, so it is way faster than creating new virtual machines. That's why we say we have the speed of containers and the security of VMs.

So what about the project? Kata Containers is now an Open Infrastructure Foundation top-level project, with many developers around the world: more than 200 contributors from more than 20 organizations, and more than 100 supporting companies. There are some pictures here from our events, and we received the Superuser Award last year.

And why Kata Containers? Well, we built a new container runtime implementation instead of just using runc, and right now we have two main use cases. The first one is the multi-tenancy scenario. For example, if you run a public cloud, you have to run untrusted applications and workloads in your containers. If you are running runc containers, that is not considered secure enough, because untrusted code can attack the host; there have been several CVEs in the past for container escapes. That's why we need the virtual machine as a second layer of isolation. With it, we can have multi-tenancy, running workloads from different users even when they are untrusted. Kata Containers' advantages in this case are strong security isolation with lower overhead compared to a normal virtual machine, because in Kata Containers the virtual machine is highly optimized. So we have lower overhead, quick start time, and higher concurrency, because we can run these lightweight virtual machines at high density on the same host. This also fits the multi-tenant SaaS scenario.
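As a concrete sketch of that Kubernetes flow: once a RuntimeClass for Kata is installed, one line in the pod YAML is enough to move the pod into a lightweight VM. This is a minimal, hypothetical example, not taken from the talk; the handler name `kata` and the nginx image are assumptions that must match your containerd configuration.

```shell
# Hypothetical sketch: a RuntimeClass plus a pod that selects it.
# "kata" as the handler name is an assumption; it must match the
# runtime name registered in containerd on your nodes.
cat > kata-pod.yaml <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata    # this single line runs the pod inside a VM
  containers:
  - name: nginx
    image: nginx
EOF
# Then apply it as usual:  kubectl apply -f kata-pod.yaml
echo "wrote kata-pod.yaml"
```

From the user's point of view this is still plain Kubernetes: the same `kubectl apply`, the same pod spec, with the VM isolation selected per workload.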
Then there's another use case: deploying Kata Containers in a private cloud, for example. Users usually have different requirements for different applications, and those applications are not expected to interfere with each other, from both a performance and an isolation point of view. With Kata Containers' second layer of virtual machine isolation, we have stronger performance isolation: different containers do not interfere with each other because they have different kernel instances, so the kernel states are isolated. We also get stronger fault isolation. We have seen cases where an application crashes or goes out of memory and causes a lot of system jitter when run as a runc container; with Kata Containers, that jitter is gone.

Next, let's look at the architecture evolution. Right now we are at 3.0, and we started with the 1.x era. That was the beginning of VM-based containers. We introduced Kata Containers as a compatible component: from the command-line interface point of view, it presents itself as runc. When Docker calls runc with various options, we accept exactly the same options, so it's a drop-in replacement for runc. But the architecture was kind of complex: we have containerd, and for each process in the container there is a containerd-shim, there is always a kata-shim, and also a kata-proxy. So it's a problem from the host process point of view.

Next, we evolved. We worked together with containerd to solve the too-many-shims problem. containerd introduced the shim-v2 API, and Kata Containers was the first runtime to adopt it. Instead of having many shims for a pod, we have a single shim per pod: the containerd-shim-v2 process.
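The shim-v2 integration described above is wired up in containerd's configuration. Here is a minimal sketch; the `runtime_type` value `io.containerd.kata.v2` is the one Kata registers, but the plugin section path varies across containerd versions, so treat the exact layout as an assumption to check against your containerd docs.

```shell
# Sketch: a containerd CRI runtime entry for Kata's shim-v2.
# With this, containerd launches one containerd-shim-kata-v2 per pod
# instead of the 1.x-era per-container shim + kata-shim + kata-proxy.
cat > containerd-kata.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
  runtime_type = "io.containerd.kata.v2"
EOF
echo "wrote containerd-kata.toml"
```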
In the 2.x era, we also replaced 9pfs with virtio-fs, a feature contributed by Red Hat. It is way faster than 9pfs and has better POSIX compatibility. We also supported device passthrough to bring more performance to containers, and we rewrote the Kata agent: it was originally written in Go, and we rewrote it in Rust. The RSS of the agent was reduced from 11 megabytes to about 300 kilobytes.

After that we started to work on Kata Containers 3.0, which has a more simplified control plane. Previously, on the host, we still had several processes per pod: one shim, a virtiofsd or nydusd process, and a virtual machine manager such as QEMU or Cloud Hypervisor. That is three processes for a pod on the host. In Kata Containers 3.0 we have only one process per pod. That is done by rewriting the Kata shim process in Rust, and by introducing a Rust VMM-based hypervisor called Dragonball. With this we also have built-in image management. So now we have combined the shim, the runtime, the virtiofsd process, and the virtual machine manager into a single shim process in Kata Containers. This gives us increased performance, with built-in virtio-fs and nydusd support, and reduced overhead, mostly memory overhead, because we have fewer processes. The Go runtime is quite demanding on memory: if you just start a Go process, several tens of megabytes are gone. With the Rust-ified runtime and virtual machine manager, we managed to cut that memory consumption.

So that's the overview; let's take a look at the details. First, the Kata Containers 3.0 architecture. I'm not going to explain the whole graph here, it is way more complex, but what's new is that we have a built-in hypervisor called Dragonball.
We have also rewritten the Go version of the runtime in Rust; we call it runtime-rs. And we have multiple sandbox implementations: for example, instead of just having a pod in a virtual machine, we are working (this is work in progress) on running pods in a WASM sandbox.

Also, early on when we introduced shim-v2 support in Kata Containers, Docker support was dropped, because Docker had not adapted to the containerd shim-v2 runtime at that time. In the Kata 3.0 time frame, we worked with developers from the community; actually we have a talk tomorrow explaining how we enabled Docker with Kata Containers. At a high level, we have brought the Docker support back. So now with Kata 3.0, you can simply run docker run and get containers running with Kata underneath.

With Kata 3.0, the question we get most is: why do we have another hypervisor, Dragonball? Why not just use QEMU, Cloud Hypervisor, or Firecracker? First of all, we still support QEMU, we still support Cloud Hypervisor, we still support Firecracker. So why another? QEMU is big and general-purpose; that's why people started working on rust-vmm, and that's why we have Cloud Hypervisor and Firecracker. But Cloud Hypervisor is positioned as something like a Rust counterpart of QEMU, so it has much broader target use cases, and Firecracker is very light and fast but lacks a lot of features that we want in Kata Containers, such as PCI support. So we introduced Dragonball, which is still based on rust-vmm, the common code base shared by Cloud Hypervisor and Firecracker. With Dragonball, we have more optimization opportunities for Kata Containers. To put it briefly, Dragonball is a hypervisor specially designed for Kata Containers. Why do we say that?
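For the restored Docker support, newer Docker releases can address containerd shims as runtimes. The sketch below is hypothetical: the `runtimeType` field and the exact workflow depend on your Docker version, so check your Docker documentation rather than taking these names as given.

```shell
# Hypothetical sketch: registering Kata's shim-v2 as a Docker runtime.
# Field names are assumptions; verify against your Docker version's docs.
cat > daemon-kata.json <<'EOF'
{
  "runtimes": {
    "kata": { "runtimeType": "io.containerd.kata.v2" }
  }
}
EOF
# Merge this into /etc/docker/daemon.json, restart dockerd, then e.g.:
#   docker run --runtime kata --rm alpine uname -r
# which would report the guest VM's kernel, not the host's.
echo "wrote daemon-kata.json"
```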
Because Dragonball is specially designed for Kata, it works out of the box, so you can run it very easily. It is optimized for containers, it's still very fast and very light, and most importantly, it is production ready.

Let's look at each point. Why is Dragonball out of the box? Dragonball is the reason we can have a single process per pod on the host: it is written in Rust, we have a Rust version of the runtime, we have written most of the host components in Rust, and we put them all into a single process.

It is also container-optimized. Instead of having separate virtiofsd and nydusd processes to pass the container root filesystem to the guest, with Dragonball we have built-in support for virtio-fs and Nydus. Nydus is an image service project that supports very fast container image pulling, so we can spawn the container very quickly instead of blocking on the image-pulling stage when starting new containers. With Dragonball, these two are built into the VMM, and that's why we think it is container-optimized.

Also, Dragonball is very light and fast. When we compare Dragonball with Kata 2.x plus QEMU, the container setup time and the memory consumption are both reduced dramatically: it is two times lighter and 1.8 times faster in terms of startup time.

And it is production ready. It has a lot of features already, and we are working on pushing the code out in an open-source manner, because it was developed in-house by Alibaba and Ant Group, and it is running in production with hundreds of thousands of containers. So when we added it to Kata Containers, this feature was expected to be production ready.

Now, more about Kata Containers 3.0. What about the Go runtime? Did we just stop it? No, definitely not.
In Kata Containers 3.0, the Go runtime is not considered legacy; it is supported officially and will keep being supported as before. We are actively working on the Go runtime as well, and we have added a lot of features there. First, we have GPU VFIO passthrough support, contributed by developers from multiple companies including Red Hat. We have also added host cgroups v2 support, and TDX and SEV support for the confidential containers use case. We switched from the C version of virtiofsd to the Rust version of virtiofsd, also developed by Red Hat, and the Go runtime has Docker support as well. And we updated QEMU, Cloud Hypervisor, and Firecracker to the latest stable versions to bring more hypervisor features to users.

Worth mentioning here is the support for confidential containers. Folks are very interested in this feature. Confidential containers expand the Kata Containers threat model: what Kata Containers used to focus on is protecting the infrastructure, so that users cannot attack the host. That's why we have the virtual machine there. With confidential containers, we are also protecting the workloads, so that no one on the host can access the contents of the container: the memory is encrypted, and even the CPU registers and cache lines are encrypted as well, so that even if someone hacks into the host, they still cannot see anything inside the container. How do we do it? Right now we have Intel TDX support for QEMU and Cloud Hypervisor, we have supported the AMD SEV-SNP feature, and there are a lot of artifacts and a specific runtime class for this, so that users can easily run confidential containers with Kubernetes.

So, future work. What's next? With Kata 3.0 released last year, right now we are focusing on a few areas.
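As an illustration of that runtime-class approach for confidential containers: selecting a confidential runtime class in the pod spec is all the user-facing change needed. The class name `kata-qemu-tdx` follows the naming convention used by confidential containers deployments, but both it and the image are assumptions here, not details from the talk.

```shell
# Hypothetical sketch: a pod selecting a confidential-computing runtime
# class. "kata-qemu-tdx" is an assumed class name; it must exist on the
# cluster (e.g. installed by a confidential containers operator).
cat > cc-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cc-demo
spec:
  runtimeClassName: kata-qemu-tdx   # run inside a TDX-protected VM
  containers:
  - name: app
    image: nginx
EOF
echo "wrote cc-pod.yaml"
```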
First, the Rust runtime, runtime-rs: because the component is newly written, we are working hard to make it production ready. Earlier I mentioned that Dragonball is production ready, but runtime-rs is new and still lacks several important features, and we are working to make it as stable as possible. For example, right now it only supports Dragonball, but we have developers from Intel, and I believe also from Red Hat, working on multi-hypervisor support, so that folks can run it with different hypervisors as well.

Also, in the confidential containers use case, we are working on image pulling on the host with verification. Right now with confidential containers, because we want the secrets inside the container image to stay protected, we pull the image in the guest. But that is slow: even if you have already run the container on the host once, the next time you start the same container with confidential containers, you have to wait for the container image to be fully pulled again. We are working on that.

Also, GPUs. With non-confidential containers we have the GPU support ready, but with confidential containers the GPU memory is not encrypted yet. We are working with NVIDIA developers to have a GPU-ready solution for confidential containers as well, so that folks can run, for example, their training and their models in confidential containers. This is actually very important for supporting AI use cases in data-sensitive domains in the cloud, such as medical and financial workloads.

And we are looking at more features. First, we want service mesh infrastructure protection. Right now, when we run service mesh workloads, the sidecar is actually running in the guest. It is an infrastructure component, but it is not actually protected, from the Kata threat model point of view,
because the workload containers can essentially attack the infrastructure container. So we want to make service mesh infrastructure safe in Kata use cases as well.

Also, as mentioned earlier, we are working on more sandbox implementations. The first one is WASM, which is expected to come very soon. And we are working with the containerd community on the containerd sandbox API support. Earlier, containerd shim-v2 gave us better Kata support, but one thing it still doesn't have is an abstraction for the sandbox. With the new sandbox API, we can finally add that abstraction back to the interface between containerd and Kata, and this will further improve the density and the startup speed for Kata Containers as well.

So that's all for the talk. We can take questions now. No questions? That's okay. Thank you, thank you for coming.