Sunny: My name is Sunny and I'm the Kata Containers Community Manager at the Open Infrastructure Foundation. Today I'm joined by Tao Peng, staff engineer at Ant Group. For people in the Kata community, you probably know Tao already, because he's a very active community member and contributor and a member of the Kata Containers Architecture Committee. As one of the largest Kata Containers users, Ant Group has very large Kubernetes deployments, and about 10,000 of those nodes are running Kata Containers. Tao is responsible for the runtime and cloud-native storage at Ant Group. So thank you, Tao, and your team for your contributions to the Kata project and the community. I'm very excited for you to join us today. For people who might not be very familiar with Ant Group, can you tell us a bit more about why Ant Group decided to use Kata Containers and what benefits you see?

Tao: At Ant Group, we aim to create the infrastructure and platform to support the digital transformation of the service industry. We strive to enable all consumers and small businesses to have equal access to financial and other services that are inclusive, green, and sustainable. To do this, we run many different workloads on many clusters. Our workloads mostly fall into two categories: online services and offline jobs. The online services are very sensitive to latency and performance jitter, while the offline jobs are less sensitive but can themselves be a source of performance jitter. We co-locate them on the same physical machines, so we would like a way to isolate them, mostly to prevent the offline jobs from creating jitter that affects the online services. That is one of the big reasons why we run Kata Containers in production. Another reason for us to use Kata Containers is, again, isolation: fault isolation.
Because we run many software applications in containers, and software has bugs, applications can panic occasionally, cause out-of-memory situations, or, in some rare cases, even cause kernel panics. These incidents can cause severe performance jitter for containers running on the same machine. So we would like to use a separate kernel to isolate them with virtualized containers, and again, Kata Containers is a natural fit for that. Right now, we have put many of our offline jobs inside Kata Containers in production, like MapReduce and machine learning jobs. We run Kata Containers on about 10,000 nodes in production. It helps us contain the jitter sources and gain much more stable application performance overall.

Sunny: Awesome. It's really great to hear about the large deployments that are running on Kata. As you mentioned, workload isolation is very important to Ant Group and many other users as well, and it's the main reason why your team is running Kata. Kata utilizes hardware-level virtualization to keep containers isolated both from each other and from their host system, so Kata is able to offer deeper isolation between individual workloads and achieve this to a greater extent than similar systems. So isolation is good. I know that Kata Containers also integrates with container management layers, including popular orchestration tools like Kubernetes. Is Ant Group integrating Kata with Kubernetes at all? If so, what are the benefits of this integration?

Tao: All of our containers in production are running with Kubernetes, and that includes both Kata Containers and traditional runC containers. Kubernetes eases the burden of container maintenance: it automatically handles container orchestration, scheduling, resizing, and recovery. And Kata Containers integrates smoothly with Kubernetes. It works as part of Kubernetes, and we can easily mix runC and Kata Containers in the same cluster.
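Mixing the two runtimes in one cluster is typically done with a Kubernetes RuntimeClass. A minimal sketch, assuming the node's container runtime has been configured with a Kata handler (the names `kata` and `offline-job` here are illustrative, not Ant Group's actual configuration):

```yaml
# RuntimeClass pointing at the Kata handler configured in the
# node's container runtime (e.g. a "kata" runtime in containerd).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# A Pod opts into Kata isolation by naming the RuntimeClass.
apiVersion: v1
kind: Pod
metadata:
  name: offline-job
spec:
  runtimeClassName: kata
  containers:
  - name: worker
    image: busybox
    command: ["sleep", "3600"]
```

Pods that omit `runtimeClassName` keep using the cluster's default runC runtime, which is how both kinds of containers coexist side by side in one cluster.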
Sunny: It's awesome to hear about the seamless integration of Kubernetes and Kata Containers. Now that you've explained how Ant Group is using Kata, what are Ant Group's future plans for your Kata Containers environment?

Tao: Right now we run all of our Kata Containers with 2.x releases. We have cooperated with the Alibaba Cloud container sandbox team to create a Rust version of the Kata runtime, integrating the runtime and virtual machine management components into a single process. The new architecture is being deployed in production right now, and we are working to contribute the features to the Kata community, make them part of Kata 3.0, and benefit more Kata users. Along with the new architecture deployment, we are still expanding the coverage of Kata Containers at Ant Group. We look forward to running more offline jobs in it in the future, and we want to experiment with putting online services inside Kata as well. It certainly requires a lot of work and optimization to make that happen, but the opportunity for Kata Containers at Ant Group is huge as well.

Sunny: I love that your team is collaborating with Alibaba Cloud and experimenting with Kata. Please keep us updated as Ant Group's deployment grows. I'm very excited, and I'm sure the community is as well, to hear more about Ant Group's innovative approach with Kata.