Let's get into the project updates, the basic project updates. Every year, CNCF publishes a project velocity report. The one I'm referencing was published in January 2024, based on data collected from January 2023 to January 2024. Based on that, containerd is the 13th most active project in the CNCF ecosystem, with 281 unique authors contributing to the project. Moving on to usage growth: adoption of containerd has almost doubled in the last year, based on a Datadog report published in November 2023. That adoption was mostly driven by the Kubernetes distros, and there are a bunch of them. Along with those, other services like AWS Fargate and Docker also use containerd underneath, so it's not just the cloud providers and the Kubernetes distros. That's about it for adoption.

Now let's get into the containerd ecosystem. We have a core set of services and APIs in containerd that has been there for almost six or seven years now, so you can say it's pretty mature. Beyond that, containerd is pretty extensible, and everything has been developed with this extensibility in mind. We have the clients and the backend, which includes the content store, the snapshotters, and the runtimes. On the client side you have the kubelet, which uses the CRI API; the container engines and BuildKit, which use the containerd client; and then nerdctl, Finch, Colima, etc. On the backend side there are the snapshotters and the containerd shims.

Now let's get into each one of those extensible pieces. Starting with the clients: we have ctr, a command-line tool that is part of the core containerd project itself. It provides basic functionality for working with containerd features and can also be used as a debugging tool. Then we have nerdctl, a non-core containerd project that is, so to speak, a Docker-like CLI. It can also be used to test new features of containerd; nerdctl basically goes hand in hand with containerd, you could say. Then you have crictl, a CLI for the CRI API, which is a Kubernetes project. And then you have Docker, or Moby, which uses containerd; recently Docker also started integrating further with the containerd image store, the snapshotters, and other containerd features. There are also a bunch of developer platforms that use containerd. These include Colima, which provides container runtimes on macOS and Linux with very minimal setup; Finch from AWS, another Docker-like CLI for macOS that was recently launched for Windows as well; and Rancher Desktop, which provides a Docker-like experience on macOS, Windows, and Linux.

Moving on to the snapshotters, which can also be extended via proxy plugins. There is a set of built-in, or core, snapshotters; blockfile is one such new snapshotter that was introduced recently, alongside existing core ones like devmapper. There are also a bunch of remote snapshotters, like Nydus, OverlayBD, stargz, etc., and AKS Artifact Streaming is a new vendor project that recently joined the remote snapshotters.
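To make that proxy-plugin extension point concrete, here is a minimal sketch of how an external snapshotter is typically exposed to containerd over gRPC, following the pattern documented in containerd's plugin docs. Backing it with the built-in native snapshotter and the socket path are illustrative assumptions, not something from the talk:

```go
package main

import (
	"log"
	"net"

	snapshotsapi "github.com/containerd/containerd/api/services/snapshots/v1"
	"github.com/containerd/containerd/contrib/snapshotservice"
	"github.com/containerd/containerd/snapshots/native"
	"google.golang.org/grpc"
)

func main() {
	// For illustration only: back the proxy plugin with the built-in
	// native snapshotter. A real remote snapshotter would implement the
	// snapshots.Snapshotter interface itself.
	sn, err := native.NewSnapshotter("/var/lib/example-snapshotter")
	if err != nil {
		log.Fatal(err)
	}

	// Wrap the snapshotter in the gRPC snapshots service and serve it on
	// a unix socket.
	rpc := grpc.NewServer()
	snapshotsapi.RegisterSnapshotsServer(rpc, snapshotservice.FromSnapshotter(sn))

	l, err := net.Listen("unix", "/run/example-snapshotter/snapshotter.sock")
	if err != nil {
		log.Fatal(err)
	}
	if err := rpc.Serve(l); err != nil {
		log.Fatal(err)
	}
}
```

containerd is then pointed at that socket via a `proxy_plugins` entry in its configuration, and the external snapshotter becomes selectable just like a built-in one.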
Now moving on to the runtimes and shims part. We have the OCI runtimes, like runc, which is the default Linux OCI runtime, plus the other runtimes that containerd supports, and there are external shim projects as well. For example, hcsshim helps run containerd on Windows, and runwasi and the containerd wasm shims are new projects that help run WASM, that is WebAssembly, workloads on Kubernetes.

Now let's quickly go through the currently supported releases of containerd. There are two active branches right now, 1.6 and 1.7. 1.6 is considered an LTS release, and recently we changed the 1.7 release and end-of-life timeline. This was because if 1.6 is an LTS and 1.7 is not, there is a chance that adopters could get stuck on a version of containerd that is end of life. So we changed the support policy so that 1.7 also gets an extended period of support: containerd 1.6 and 1.7 will reach end of life at the same time. And then we have containerd 2.0, which will be released in a few months.

Now, moving on to the containerd 2.0 features. First, the release plan: the containerd 2.0 beta was released last November, before KubeCon NA, and the containerd 2.0 RC was released two days ago. We would be very grateful if everyone could test out the RC and report issues, or any feedback you have, so that we can make a stable release. As for the features in containerd 2.0: there has been a huge refactoring of containerd, with quite a lot of code churn, moving packages around the code base and creating separate repos. This was done to make the Go client stable. As part of it, the client and core API packages will be stable, and there won't be any breaking changes in minor releases of the 2.x series. For the rest of the features, I'll hand over to Wei Fu.

I think before we introduce the new features, we need to look at the deprecations. One of the reasons to start a new major release is to remove the existing deprecated features. We've already deprecated quite a few features over the past several releases, so we think the 2.0 release is good timing to delete them. But as you can see, it's a long list, and it's challenging to review the items one by one. So we provide a deprecation-warning feature to detect whether any deprecated features are in use. You can run it with ctr (`ctr deprecations list`), and it will tell you which deprecated features you are using and suggest how to do the migration. This feature has also been backported to 1.6 and 1.7, so it's very convenient if you have a plan to upgrade to the 2.0 release. And since we have removed some deprecated features, the config structure has changed. So instead of rewriting the config yourself, we provide a config migration command (`containerd config migrate`) to help you cover that. In any case, you need to ensure you don't use any deprecated features in your production environment.
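Coming back to the Go client stability point above: this is roughly what using the containerd Go client looks like, a minimal pull sketch. Import paths here follow the 1.7-era layout (`github.com/containerd/containerd`); in the 2.x series the client moved under the v2 module path, so treat this as illustrative rather than definitive:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the containerd daemon over its default socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// All client operations are namespaced.
	ctx := namespaces.WithNamespace(context.Background(), "default")

	// Pull and unpack an image; this goes through the classic
	// client-side ("fat client") pull path discussed later in the talk.
	img, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s", img.Name())
}
```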
Next is the Sandbox API. This API is used to group multiple containers in one sandbox environment. Traditionally this has been done by the shim process: if you look at the shim v2 implementation, you'll see that containerd can connect to an existing running shim, so containerd can group multiple containers in one shim. But the shim process only handles the lifecycle of containers, so there's no difference between the containers. For the Kubernetes case, we need to introduce a sandbox concept: we need to create a pause container first, and that container holds all the open namespace resources related to the sandbox environment. For example, we can set up a networking interface in the network namespace, including the routing information, so that the other containers can use it. After creating the pause container, we can add the application containers into the sandbox environment. So we actually maintain the pod sandbox concept in the CRI plugin.

But a sandbox environment doesn't have to be a pause container, right? It can be a secure VM environment. Since we didn't have a sandbox concept, there was no defined way to deal with the sandbox lifecycle, so every developer had to build their own concept and their own logic to handle the sandbox environment's lifecycle. That's not what we want. So we introduced the Sandbox API. In the 2.0 release, the sandbox is first-class metadata, and we push the Sandbox API implementation down to the containerd shim, like an external plugin. We can use different configurations to choose different sandbox controllers, similar to what we did with the external plugins for the remote snapshotters. And there is another use case: someone may want to create a sandbox environment that has a different platform from the host, for example a Windows container environment on Linux. The Sandbox API can cover that as well.

Based on this design, we end up with this architecture, and this is for the CRI service. In Kubernetes, each pod has a unique runtime handler, and each runtime handler can have a different sandbox controller. Here, the podsandbox controller is the default one; it is the compatibility mode, so you can run with an existing shim v2 implementation without the Sandbox API. This is the workflow: when you try to create a sandbox environment, you go through the container service, then you use the task service to invoke a containerd shim process, and then you create the task. What is new here is that we introduce shim management. We can use the shim manager to create a shim environment that is similar to the sandbox environment, so we can make adjustments to the shim environment just like we make adjustments to the sandbox, instead of making adjustments to the pause container. And with shim management, we can define what the shim is and what the shim can provide us through the API. If you choose an external sandbox controller, you'll see the workflow is very simple: we just set up a shim process and call the sandbox creation. All the detail is behind the containerd shim implementation, so it is up to the shim author; containerd doesn't need to care about how the sandbox environment is created. So this is about the Sandbox API.
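To give a feel for the shape of this, here is a simplified, illustrative sandbox controller interface in Go. The method names and signatures below are a paraphrase of the idea described in the talk, not the exact upstream containerd definitions:

```go
// Package sandboxdemo sketches the Sandbox Controller idea: containerd
// drives the sandbox lifecycle through an interface like this, and each
// controller (the compatibility podsandbox one, or an external shim/VM
// based one) supplies its own implementation.
package sandboxdemo

import "context"

// Controller is an illustrative stand-in for containerd's sandbox
// controller abstraction; signatures are assumptions for this sketch.
type Controller interface {
	// Create allocates the sandbox environment, e.g. a pause container
	// or a lightweight VM for secure runtimes.
	Create(ctx context.Context, sandboxID string) error
	// Start boots the sandbox and returns when it can host containers.
	Start(ctx context.Context, sandboxID string) error
	// Platform reports the platform the sandbox provides, which may
	// differ from the host (e.g. a Windows sandbox managed from Linux).
	Platform(ctx context.Context, sandboxID string) (string, error)
	// Stop and Shutdown tear the environment down.
	Stop(ctx context.Context, sandboxID string) error
	Shutdown(ctx context.Context, sandboxID string) error
}
```

The point of the design is that the CRI layer only talks to this abstraction; everything behind it is up to the controller author.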
Next, let's talk about NRI a little bit. NRI plugins hook in when the CRI plugin is generating the OCI runtime spec: an NRI plugin can take the spec and make adjustments to it. So you can think of anything you want to do around dynamic node resources, for example allocating part of a GPU's resources to your container. And it's a pluggable interface, so it keeps containerd very simple: we don't need to make any significant change to containerd to support whatever type of node resource you want to allocate to your container. In the diagram, the showcase adjuster is the NRI plugin that is listening. When a new container request comes in, containerd sends the OCI spec to the NRI plugin, the NRI plugin makes adjustments to the spec and sends it back to containerd, and containerd then sends the new OCI spec on to the runtime. So it's like an OCI hook, but much simpler to use.

The next part is also related to runtime changes. As you know, we already support user namespaces in the 1.7 release. Creating a process in a new user namespace is very simple: you just call clone with the right flag. But it's not just the process; we also need to adjust the file system for the container. A lot of container images are built as the root user, so before you run a pod with user namespaces, you need to change the ownership of the container's file system. In 1.7 we used a very, very simple approach: we walked through every file in the container file system, as you can see here, and used chown to change the file ownership one by one. On Linux we usually use the overlay file system, and as you can see in this table, the TensorFlow image is huge and has a lot of files. When we touch a file in a lower layer, it gets copied up, which causes a lot of IO. For this image it takes about three minutes, and even if we enable the metacopy option it still takes around a second, so it's still slow.

Instead of that, we now use ID mapping. ID-mapped mounts give us a temporary view of the file system: we don't need to change file ownership physically, we just create a temporary mount point. As you can see here, we just make a system call and we get the shifted view. The performance improvement is huge. So this is all about the runtime changes.
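Before moving on, here is a minimal sketch of the ID-mapped mount trick (Linux 5.12+) using golang.org/x/sys/unix. The paths, the target user-namespace fd, and the error handling are all illustrative assumptions; real code in a runtime would be considerably more careful:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Hypothetical: a user namespace already exists for the pod; we open
	// its fd so the kernel knows which ID mapping to apply.
	usernsFd, err := os.Open("/proc/1234/ns/user")
	if err != nil {
		log.Fatal(err)
	}
	defer usernsFd.Close()

	// Clone a detached mount of the layer instead of chowning every file.
	treeFd, err := unix.OpenTree(unix.AT_FDCWD, "/var/lib/layer/rootfs",
		unix.OPEN_TREE_CLONE|unix.OPEN_TREE_CLOEXEC|unix.AT_RECURSIVE)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(treeFd)

	// Attach the ID mapping to the cloned mount: the kernel shifts
	// ownership in this view, with no per-file chown and no copy-up.
	attr := &unix.MountAttr{
		Attr_set:  unix.MOUNT_ATTR_IDMAP,
		Userns_fd: uint64(usernsFd.Fd()),
	}
	if err := unix.MountSetattr(treeFd, "", unix.AT_EMPTY_PATH, attr); err != nil {
		log.Fatal(err)
	}

	// Splice the ID-mapped view into the container's mount table.
	if err := unix.MoveMount(treeFd, "", unix.AT_FDCWD, "/run/container/rootfs",
		unix.MOVE_MOUNT_F_EMPTY_PATH); err != nil {
		log.Fatal(err)
	}
}
```

This is why the table in the slides shows the jump from minutes of recursive chown to an essentially constant-time mount operation.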
Now, handing back: another feature that was introduced is called the transfer service. Before we dive into it, let's see why we needed something like the transfer service, and what the traditional containerd client does. Consider a simple pull command, where you're pulling an image from a registry. The client first, via the distribution API, gets the manifest, then gets the config; at that point it knows which layers need to be fetched. The layers are then fetched one by one and stored into the content store. After all the layers are fetched, the unpacking starts: preparing the snapshot, applying the diffs one by one, and finally committing the snapshot and creating the image. So all the operations happen one after another. Also, the containerd client is what you could call a fat-client model, because all the pull and push operations are handled on the client side. So if you want to do some refactoring or a rewrite (for example, CRI had to reimplement this), it results in a lot of import cycles. That's why we moved to the transfer service. It was introduced in 1.7 as an experimental feature; now let's see what the transfer service does.

The transfer service has a very simple interface: you have a source and a destination, and the source/destination pair maps to the operation you want. For example, a pull is basically a transfer from a registry to the image store, and a push is from the image store to the registry. Similarly, you can see import, export, unpack, diff, tag, and so on. A registry-to-registry transfer would mirror a registry image, which is still not implemented.

Now let's look at the general architecture of the transfer service. You have the client, and the transfer service itself basically lives in the daemon. Another feature we implemented in the transfer service is support for streaming: the client can get progress from the daemon via a stream, and the daemon can request credentials from the client via a stream as well. In the same way, for remote registry operations, if the source is a registry, you resolve it and get the remote registry; this helps with different configurations, like support for multiple registries. And if the destination is an image store, it can resolve into the content store, the image store, or the snapshotter.

Now let's see how the transfer service works with a parallel unpack model. It's the same pull, but after getting the manifest and the config, we know which snapshots we need, and based on that we can fetch the layers and apply the diffs in parallel. We don't have to wait for all the layers to be completely fetched; we can do those operations in parallel, commit the snapshots, and finally create the image. This is much more performant. The same model also helps with lazy loading of images. Here too, after getting the config, you know which snapshot you need, and the snapshotter just has to be smart enough to know how to fetch the content needed to build the file system. In that case, when the snapshot is requested, the snapshotter service can immediately report that the snapshot is present, so containerd can commit the snapshot and create the image, while in the background the snapshotter requests and fetches the layers.

These are some of the use cases we want to solve with the transfer service. One of the main ones is confidential computing: if you have a guest sandbox environment, we want to pull the image or content directly into the guest without going via the host. Another is OCI referrers. Then we have plugins: plugins to customize image-pulling logic. This is another place where the extensibility of containerd shows, since you can write custom image-pulling logic on top of the transfer service. And then there is credential management, and making CRI use the transfer service in the CRI plugin, which is still in development.
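As a rough illustration of that source/destination model, here is a sketch of a pull driven through the transfer service from Go. This assumes the experimental 1.7-era API; package locations and signatures have shifted between releases, so treat names like `registry.NewOCIRegistry` and `client.Transfer` as version-specific:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/pkg/transfer/image"
	"github.com/containerd/containerd/pkg/transfer/registry"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "default")

	// Source: a remote registry reference; destination: the local image
	// store. The pairing is what selects the operation: this combination
	// is a pull, and reversing it would be a push.
	src := registry.NewOCIRegistry("docker.io/library/alpine:latest", nil, nil)
	dst := image.NewStore("docker.io/library/alpine:latest")

	// The daemon, not the client, performs the fetch and unpack, and can
	// stream progress and credential requests back to the client.
	if err := client.Transfer(ctx, src, dst); err != nil {
		log.Fatal(err)
	}
	log.Println("pull via transfer service complete")
}
```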
Besides the new features, we are also doing a lot of things to improve quality. First, we enabled arm64 runners in CI for every pull request, and we backported this pipeline to 1.6 and 1.7. Since we did a lot of refactoring and introduced the Sandbox API and the transfer service, we also added a release upgrade test. This test ensures there are no regressions when you upgrade containerd in place; for example, we need to ensure we can recover all the existing running pods and all the existing images. We also ran into several cases where, for example, IO pressure could cause the shim to leak. We want to prevent that from happening again, so we introduced failpoint tests in the CI pipelines. And we still have a lot of things we want to handle around the transfer service and the Sandbox API, so there is still a lot to do. Since we just released the first release candidate for the 2.0 release, we hope everyone will try it and give us feedback. That's it, thank you. Any questions?

Hi, thank you for the presentation. I want to know if you're planning to introduce support for ONBUILD images in containerd in the near future, as it used to work in Docker.

You mean the snapshots?

If you use ONBUILD images, the final image is much smaller. I don't know whether you'd call it a snapshot or not, but as far as I know you don't support ONBUILD images.

Oh, you mean the storage? Sorry, I didn't get it.

You can share things between images and use a different base image, a build image and a runtime image. You can copy from the previous image into the new image, so with the layered images, the largest parts are shared.

I think that's already supported; I think BuildKit already supports it, because we don't do the build function in containerd. So, yeah. Thank you.

Thanks very much for the presentation. Unless I'm mistaken, at the last KubeCon you were expected to already have containerd 2.0 released by now. It seems like there was a delay, and I'm curious if things were much more complicated than expected. If so, what was the challenge, and what's been happening?

I think we still have tasks around the Sandbox API that we need to improve. The delay is partly because we want to introduce streaming IO: streaming IO will use an API instead of named pipes. Right now we copy the container IO from one pipe to another, but we want to introduce a streaming IO API to handle that; we don't want to rely on the pipes. So we still have a couple of requests related to the sandbox. I think those are nice to have, because we want to provide good functionality to help developers implement their own sandbox controllers. So right now, the streaming part is still something we need to wait for.

Yeah, that makes sense. And I think the last time I checked, Kuasar was the only one that had implemented the Sandbox API on the shim side. Is it still them, or has runc also made progress on that?

I think right now we don't have a sandbox implementation in shim v2; we just added the podsandbox controller in the CRI layer, which is used to stay compatible with the existing runc shim. I think we can implement the Sandbox API there in the future; maybe we need to wait for the existing shim v2 implementations to handle that.

Unless I'm mistaken, Kuasar has an implementation that is already working against the beta.

We are not working on it right now ourselves. Yeah, they just want to implement this in...

Kuasar.

Kuasar, yes. It was donated by Huawei.
And it's now a CNCF sandbox project. Yeah, they have been pushing us to merge things. Yeah, Kuasar. They're working on the implementation.

So they're the only ones at the moment who are implementing it on the southbound side?

I think so. I mean, I don't know for sure.

I was watching their application to join the CNCF carefully, and I really wanted them to join, because I'm really quite excited about the Sandbox API. And they joined, I think, just before Christmas, and I was like: yes. I'm quite interested in the Sandbox API from the security perspective: better runtime isolation, and potentially Firecracker. As far as I understand, once the Sandbox API is implemented, Firecracker would be a possible native shim implementation, right?

Yeah, you could implement that. I mean, any lightweight VM that provides a wrapper and maps a pod to a container.

Because right now they require a fork of containerd, but with the Sandbox API they won't need that fork anymore, right?

Right.

Okay, cool. I'm quite excited about that. Cool. Yeah. Okay, thanks.