All right, welcome, everybody. I realize we're in the first slot here, which is kind of interesting. I'm excited to get started. As I said, Phil's not going to be here today. He's in Valencia, but he can't be here, unfortunately. He might be on the virtual platform somewhere, so hi, Phil. Today we're going to give an introduction to containerd, go through its architecture, and cover where the project has been in the last year since the last KubeCon. There are other containerd talks at KubeCon too. I recommend the first one we have from Akihiro, another containerd maintainer; it should be really interesting. And Anusha also has a talk that's going to be covering containerd as well. Both of those are tomorrow.

So I'm going to start off with where containerd has been in the last year. We've seen tremendous growth in the project. A lot of that comes from Kubernetes 1.24, where we finally saw the removal of the deprecated dockershim, which has brought a lot of people over. Is that Dims right there? Yeah, you can thank Dims for that; he worked hard on it. That's really driven a lot of people to containerd. And luckily, what we've seen is that even though the number of users has gone way up, the project itself hasn't seen a lot of new issues. People seem to have had a pretty good experience moving to containerd. So if you haven't already, I'm sure you're thinking about it, and you'll have to at some point, really. And then there's nerdctl. I don't know how many of you are using nerdctl today. It's a fairly new tool, originally developed by Akihiro. We're seeing a lot of uptick in its usage, and it's driving a lot of users to containerd. It's a pretty amazing project; you should check it out if you haven't already.

We also have some new maintainers who have joined since last KubeCon. Kazuyoshi is our new committer, and we also added Mike and Danny as reviewers. One thing about our project and where we're at: we could always use documentation improvements, especially as we have a lot of new users coming in. This is a great opportunity for those who want to contribute but aren't quite sure how. It's understandable that contributing to a codebase like containerd is kind of hard to get started with. But each of you is going through your own journey, whether you're switching to containerd from Docker or starting fresh. When you get frustrated or you hit those little snags, it's good to come to the maintainers and help us fix that, so we can fix it for others in the future.

So I put this up here. One of the common questions we get asked is about registry mirroring for containerd: how do you configure it, and so on. You can go looking for it in our docs, but our website isn't really updated very well. When in doubt, go to our source code. There's a docs directory in the repo with quite a bit of information where you can find these things.
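For reference, here is a minimal sketch of what that mirror configuration can look like in containerd's config.toml, using the CRI plugin's registry mirrors section (the authoritative version lives in docs/cri/registry.md in the repo; the mirror URL here is just a placeholder):

```toml
# /etc/containerd/config.toml
version = 2

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  # Pulls for images on docker.io try the mirror first, then fall
  # back to the default registry endpoint. Restart containerd after
  # editing this file for the change to take effect.
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://mirror.example.com", "https://registry-1.docker.io"]
```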
Now I'll talk a little bit about what's supported today in terms of containerd releases. We just end-of-lifed 1.4 a few months ago. 1.6 was released back in February, and 1.5 has been our stable branch for a little while; we're trying to move users onto 1.6 as we can. 1.6 is probably going to be our longer-supported release. We have this extended support time period that we're probably going to extend quite a bit for 1.6. We don't have formal long-term releases, but the way we tend to end-of-life things is when the community is ready. That's another area where, if you're a user and you see something and you think "that can't happen," come to the maintainers and we'll listen to you. 1.7 is on the horizon; I'll talk about 1.7 in a bit.

If you're a Kubernetes user, you probably care a lot about which version of containerd you should be using. We have this in our releases document (RELEASES.md) in the root of our repo. The chart there represents what's been tested, not necessarily what's going to work for you. We always recommend using the latest if you can. This roughly follows the Kubernetes support cycle, because Kubernetes only supports a few releases at a time, I think three or four, and our releases tend to live a little longer than that. So 1.5 is supported with pretty much every version of Kubernetes out there today, 1.6 has been tested with 1.23 and later, and all of the new Kubernetes versions are going to be on 1.6 for a while.

So let me talk about where we're going with containerd 1.7. Unlike some of our previous releases, we're actually developing a lot of features for 1.7. This is one of the reasons I say 1.6 is probably going to have a longer support cycle: many of you don't want features, and that's understandable. You just want the runtime; you don't want to worry about what's hot and what's cool in your container runtime. You just want it to be boring. And it is boring, but there's also stuff we're trying to add to make things easier for the maintainers and to make the project more extensible.

One of the first things we've been developing is the sandbox service and API. One of the major uses of containerd over the past few years has actually been managing sandboxed environments: containers, VMs, whatever, people are using containerd to manage those. So we're trying to improve our interfaces around how those are managed, because VMs and containers are not exactly the same. They have different requirements and different abilities, and you want to be able to leverage those from different parts of the stack. Most of the other new components follow a common theme. The image transfer service is a way of improving how we do pull and push in containerd. Right now we have kind of a fat-client approach where the client does everything, but we want a proper service that we can develop against; I'll talk more about that later. A lot of the other work is along the lines of cleaning up how we do things internally, especially how we have our CRI plugin today.

So this is what the architecture of containerd ends up looking like. It's an eye chart; don't try to memorize it. I'm going to go through the different parts and highlight some of the things I mentioned that are new in 1.7. We have the new transfer service APIs, new stuff for sandboxes, and some new stuff in our API layer that we're working on. But everything else is stable and stays the same; we're trying to add things in a very backward-compatible way.

I mentioned the containerd client. It's kind of fat today; a lot of functionality lives inside our client. All the image management you're familiar with, pull, push, import, export, and everything related to actually creating the container, including creating the OCI specification, is done inside the client. The way we've developed it is that we have a bunch of service APIs, the client uses those same service APIs, and a client-side proxy carries them over gRPC to the containerd daemon.
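To make that client-side flow concrete, here is a minimal sketch using containerd's Go client, along the lines of the project's getting-started docs (the socket path, namespace, and image reference are just examples):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd daemon over its gRPC socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Everything in containerd is namespaced.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// The pull (resolve, fetch, unpack) is driven from the client today.
	image, err := client.Pull(ctx, "docker.io/library/redis:alpine", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// The OCI runtime spec is also generated client-side.
	container, err := client.NewContainer(ctx, "redis-example",
		containerd.WithNewSnapshot("redis-example-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)
}
```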
Then in the actual daemon, we've broken things out into roughly four parts. First there's the API, which is gRPC today, though we're also looking into supporting ttrpc more at this layer. This has all of our service definitions and service APIs; CRI is also an API that we support at this layer. In the core of containerd, we have all of our core services. They each have their own Go interface that the rest of the components can interact with. Some of them are stateless; some of them store metadata. We store all our metadata together, and the reason we do that is so we can namespace things nicely and garbage-collect them. Say you delete an image: containerd can delete all the stuff related to it. Or you delete a container: it can go ahead and delete all of its resources, without you having to track those yourself as the client.

The backend is where we have the concrete implementations of things like the snapshotters. We have the overlay snapshotter, which handles the on-disk storage of our snapshots. We have our content store, which stores all the blobs we get from the registry and any artifacts you create along the way. If you're BuildKit, say, and you're building something, you might store artifacts in the content store and push them up to a registry. Then we have our runtime backends. These are what manage all the actual container processes, the sandboxes, and all the tasks that are running.

And we focus a lot on pluggability in containerd. We try to add as many plugin interfaces as we can. In fact, pretty much every box and circle that you see here is designed around our plugin interface. CRI itself is actually written as a plugin that uses every other plugin in containerd. Registry access is something that can be plugged in. There are also spots where snapshotters and content stores can be plugged in, so you'll see this quite a bit. There are other implementations of snapshotters out there; a common one that's become very popular is the stargz snapshotter, which gives you the ability to do lazy loading of images. We also have plugins around the runtime which can restart your containers for you.
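To give a feel for that plugin model, here is a minimal sketch of how an in-process containerd plugin registers itself through the plugin package; the ID and the body are hypothetical (a real snapshotter plugin would return a snapshots.Snapshotter from InitFn):

```go
package mysnapshotter

import (
	"errors"

	"github.com/containerd/containerd/plugin"
)

func init() {
	// Built-in components like CRI and the overlay snapshotter register
	// themselves through this same mechanism at daemon start-up.
	plugin.Register(&plugin.Registration{
		Type: plugin.SnapshotPlugin,
		ID:   "my-snapshotter",
		InitFn: func(ic *plugin.InitContext) (interface{}, error) {
			// A real implementation would return a snapshots.Snapshotter
			// rooted at ic.Root; this sketch just declines to load.
			return nil, errors.New("my-snapshotter: not implemented")
		},
	})
}
```

Out-of-tree snapshotters like stargz typically run out-of-process instead and are wired in through the proxy_plugins section of the containerd config.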
Our shim layer is what actually manages the container process. Think of the shim as what parents your container; it's kind of like it's babysitting it. It works on behalf of containerd to watch the container, track exits and status, and handle all of that. containerd uses a ttrpc API to communicate with the shim. We try to make this layer as thin and lightweight as possible, because there's an instance of the shim per container or per sandbox. This also gives us the ability to restart or upgrade containerd without actually taking down all of your containers.

So let's go through what a container run looks like in containerd. It's a little small, so I'll describe what we have in the five boxes: the client on the left, a snapshotter service, the container service, the task service, and then our actual shim. The first thing that happens is the client sets up the snapshot that's used for the container; this is the root file system the container is going to use. Then we go to the container service and create the container. And then we go to the task service to actually run a task. We keep them separate: the container is the metadata object; the task is the actual instance of your contained process. The task service is what's responsible for creating the shim. It does that via an exec, and a new shim process gets created. The task service then creates the process in the shim and returns the container ID and the PID back to the client. At that point, the client sends a wait call to wait for the container to finish, and then starts the container. While the container is running, the client is doing nothing but waiting. As soon as the process exits, the shim returns the exit status back to the client, and then the client can do all the cleanup: it deletes the task, the task service shuts down the shim when it's done, and then we can clean up other resources, such as deleting the container and the snapshots.
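Continuing the earlier client sketch, that run flow looks roughly like this with the Go client (again following the getting-started docs pattern; the container argument is the object created in the previous example):

```go
package example

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
)

// runTask walks the same steps as the diagram: create the task (which
// execs a new shim), wait, start, collect the exit status, then clean up.
func runTask(ctx context.Context, container containerd.Container) error {
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		return err
	}
	// Deleting the task at the end lets the task service shut down the shim.
	defer task.Delete(ctx)

	// Wait is set up before Start so the exit can't be missed.
	exitStatusC, err := task.Wait(ctx)
	if err != nil {
		return err
	}
	if err := task.Start(ctx); err != nil {
		return err
	}

	// While the container runs, the client just waits.
	status := <-exitStatusC
	code, _, err := status.Result()
	if err != nil {
		return err
	}
	log.Printf("container exited with status %d", code)
	return nil
}
```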
The transfer service is one of the new things being worked on. It's another stateless service; it handles the pull operation, which previously was done client-side. The transfer service actually adds that as a service inside containerd. This is significant because, for example, the CRI plugin has its own implementation of image pull, while nerdctl uses the client. So we have multiple implementations of this, and it's really difficult to configure. We want to put everything together, have one place where we can do the configuration, and give that experience back to the client as well as to CRI, with common libraries for the image distribution part: everything related to transfer, the actual transfer implementations, the resolvers, authentication, and so on.

With the transfer service, the process looks a little different. The client makes a request to the containerd API and says, hey, transfer from this registry to local storage. The transfer service talks to the registry, and the registry may say, hey, you need to authorize. And this is the complicated part, and why we haven't added it before: we're building a way for the transfer service to go back to the client and say, hey, I need credentials, and the client can send those credentials up to the transfer service. That's really important to us, because we're very strict that we do not store credentials in containerd. Our metadata store will not store credentials. Kubernetes provides them, the client provides them, or you have a plugin that provides them, in any sort of way except from our metadata database. At that point, the transfer service goes back to the registry, resolves the image, and returns the image digest.

So the transfer service is going to be responsible for handling the pull flow, and we use image handlers to do that. Think of the image handlers as responsible for figuring out what artifacts are related to this image. The service fetches each artifact that's needed, in the order it's needed: usually that's a manifest, then the image configuration, and then all the image layers, one by one. It returns progress back to the client so you can know what's going on and where it is in the process. Afterwards, it completes the unpack and stores the image, and that's it. The client gets a response that the transfer has succeeded. There's much less that the client needs to do there.
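As a rough sketch of the direction, client code against the transfer service could look something like this. The API was still in flight at the time of this talk, so the package and function names here reflect the work-in-progress pkg/transfer packages and may differ from what ultimately ships:

```go
package example

import (
	"context"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/pkg/transfer/image"
	"github.com/containerd/containerd/pkg/transfer/registry"
)

// pull asks the daemon-side transfer service to move an image from a
// registry source into local image storage, instead of driving the
// whole pull from the client as in the earlier sketch.
func pull(ctx context.Context, client *containerd.Client, ref string) error {
	// nil credential helper: credentials flow back to the client
	// via the callback API rather than being stored in containerd.
	src := registry.NewOCIRegistry(ref, nil, nil)
	dst := image.NewStore(ref)
	return client.Transfer(ctx, src, dst)
}
```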
Now I'm going to talk a little bit about CRI today and how we're trying to break it up into different services. For example, when you do sandbox creation today, the box here that says "pod sandbox" is drawn the same size as everything else, but it's actually much bigger. It does a lot for managing these sandboxes; all the logic around a pod is pretty large, managing and creating the sandbox and the containers for it. There's a lot involved in that process. So we're adding these new interfaces to make it easier, not just to create the sandbox, but to give us the option to do things from the client as well as from other plugins. I talked about working on a CRI v2 plugin; you can think of CRI v2 as a simpler implementation that can leverage these services. And there are different parts of the stack. Everything is pretty layered in containerd: we have services, metadata, and backend implementations. There's an implementation here on the shim side that's mostly been completed. This will give us the ability, for example, to move the pause logic down into the shim. That's being worked on.

And then I mentioned the ttrpc work and the proxy. One of the things we're working on to enable this is that, by defining these new services and service APIs, we're able to move logic into the shim when that's needed. When it's not needed, you can use the local services. But in cases such as confidential computing, and some of the other use cases being worked on, more logic is actually wanted by some shims. Not all of them: in most cases, you're just running a container, and if you want it thin, we keep it thin. But we want to have options, which is what we try to design for; you can implement whatever you need for your use case using our APIs. The proxy basically enables using another implementation of a service that can be provided by the shim. If you think of the pull I demonstrated earlier, a pull goes to the containerd API, which goes to our local service implementation. With the proxy implementation, the containerd API instead forwards that request directly to the shim API, and the shim can handle whatever it needs to implement that interface and return the result back to the client. From the client's perspective, it doesn't have to do things like go around containerd to talk to shims, or understand too much about what implements which parts. We can handle that transparently, so the client doesn't have to see it.

All right, I'm going to go back to the overview slide, and that's all I have today. It's a reminder of when the other containerd talks that I recommend are. And I mentioned confidential computing; there's a confidential computing talk today at 3:25, so don't miss that either. All right, do we have any questions? Thank you.