But no, I'll keep it to 20 minutes or less. This is my third year presenting containerd, so this is partly a status update, but this year I'd like to focus on how people are actually using it. Most people have seen the architecture, so I'm not going to drag you through all of those details. Specifically, there are some great ways that people are extending it and embedding it in other projects that weren't really part of the initial design, but thanks to some of the smarter people on the project, there are some cool ways people are using it. That's really the focus, but I will take you through the current state of containerd first.

There are a couple of things we can say about what it is. It's a container runtime; yes, lots of people have debated what we call a runtime versus an engine, and we can get stuck there. But we think of it as sitting below platforms like Docker or Kubernetes, and above lower-level runtimes like the OCI runc or some of the other things you've heard about in this room today: Kata Containers, Firecracker, gVisor. We play a resource-management role between those two levels, with a platform above us and these low-level runtimes below us, managing the container process, the image artifacts, the snapshots, the metadata. And we've tried to remain tightly scoped. A lot of people early on said, oh, containerd is going to become another Docker and grow all these features. Our governance requires 100% maintainer approval to increase scope, and the only thing we've really changed from that initial 2016 scope is adding the CRI implementation into containerd itself. There's a picture I think Thierry showed earlier today with CRI-containerd separate from containerd. That was definitely true a few years ago, but now it's actually the same binary. As we'll see, where people have extended containerd, we haven't added those pieces into the core; we've provided plug points and ways to do it without even modifying containerd.

So where are we today? We're the fifth project to graduate in the CNCF, which happened just a few weeks after FOSDEM last year. The great thing, again, is that it's not just a project from a small group of people: we've now had over 200 different contributors representing more than 100 companies. Thanks to the CNCF DevStats project, you can search all of that data and check out who's contributing and who's involved. As for our current governance status, we have 13 maintainers representing nine different companies. There was a sense early on that containerd was just another Docker company project; clearly that's not the case. We have maintainers from Amazon and Alibaba and IBM and Google and a lot of other individuals, which reflects that all the major cloud providers are using containerd in some way. We'll see how in a few minutes. We support both Linux and Windows across multiple architectures, and we've added sub-projects to our governance. What I mean by that is there are interesting pieces we're going to see in a few minutes, things that people have created, such as a Rust-based ttrpc implementation and an image encryption library IBM Research has contributed. Instead of expanding the scope of containerd and adding these pieces into the main project, these are all sub-projects within our containerd organization, maintained by the people who created them.
Our most recent release, containerd 1.3, added Windows support via the shim v2 API. Amazon's Firecracker team contributed a devmapper snapshotter, which again started as an external project; we've accepted that into the project. There's also a new plug-in interface for things like image encryption or special compression modes, so you don't have to modify containerd code to use those capabilities; you can use a separate plug-in binary. And in the CRI we now support per-pod shims, so you're not starting a different shim for every container in the pod, which brings some memory and CPU usage improvements.

Things that are in progress: in the second talk of the day in here, if you caught it, Akihiro talked about remote snapshotters. That work is in progress, but there are also ways you can use those features today, as Akihiro showed. For cgroups v2, there have been a ton of PRs in our cgroups sub-project in the last month or two, and we're fairly close to having that complete. The Windows team is still working on their CRI implementation, and we're hoping to clarify how mount and resource management work, given all the interesting snapshotter features coming down the pipe. And I already mentioned image encryption.

So, all that said, who's using it today? I've already mentioned the public clouds; there's also the Kubernetes infra team, lots of end users, various dev tools like kind, custom sandboxes like gVisor, et cetera. The interesting thing is how they're using it. I'll start at the bottom, with maybe the least interesting use case: using containerd as a daemon to handle the resource management of containers through runc or other binaries. Docker and BuildKit are using it that way today. A step up from that, who's using it as a Kubernetes runtime? There are a couple of public clouds, including ours. There are end users like Ticketmaster. There's Alibaba, another cloud, which is both using it as a runtime and extending it with their PouchContainer project. There's MicroK8s in Canonical's Ubuntu, kind, k3s from Rancher, and AWS Fargate is now using it to drive their Firecracker-based isolation. I'm not going to dig into the details of those; they're fairly straightforward. You've heard today, if you've been in here, about the CRI interface: you can implement that with the CRI containerd component and then drive containers using the OCI runc and other potential isolators.

So really, where we'll focus for the last few minutes is who's using containerd as a library. There are a couple of ways you can do that, starting with the Go client API. That's an abstraction we'll look at in a moment, but a ton of projects have chosen the Go client API as a simple way to run containers from a larger project. OpenFaaS is one of the most recent ones; Alex has been tweeting about faasd. I already mentioned Alibaba's PouchContainer, which is an open source project, so you can go look at how they've used the Go client API to drive their container runtime; they've essentially built a Docker clone, with all the registry and runtime operations, in the PouchContainer offering. And our IBM Cloud Functions team has a driver that uses containerd as the runtime.
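Since so many of these projects start from that same Go client pattern, here is a minimal sketch of what running a container through the Go client API looks like, modeled on containerd's getting-started documentation; the socket path, namespace, image, and IDs are purely illustrative:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd daemon over its Unix socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Every containerd operation is scoped to a namespace.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull and unpack an image.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container with a snapshot and an OCI runtime spec
	// generated from the image's config.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// A task is the actual running process inside the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```

Everything after `containerd.New` looks much the same whether the caller is a serverless platform like faasd or a full engine like PouchContainer; the client hides the individual services behind these helpers.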
Weaveworks has Ignite, which wraps Firecracker. And some of the Helm team and CNAB folks, if you've heard of those projects, built a very nice library called ORAS, which allows you to do very flexible things with registry interactions, again via a nice Go client implementation. So there's that aspect of using containerd.

There's also extensibility: plug points to make a custom resolver that talks to your registry, maybe over an enhanced protocol that's not the default Docker registry protocol. Amazon has built a resolver for ECR with that. Maybe you saw a blog post from the Azure team about Teleport; they've written a custom snapshotter that is not open source, so we can't see what it does, but a talk earlier today mentioned how it uses the SMB protocol and VHD images to do very interesting sharing of images across your cluster, and I believe even within data centers. And on remote snapshotters, Akihiro did a great job this morning sharing the stargz implementation, and CERN and the CVMFS team are also working on a remote snapshotter. So there's Go API usage, there are the extensibility points, and then there are all the different sub-projects within containerd, like our cgroups and runc wrappers and other tools like our console implementation. Even CRI-O, for example, imports containerd/cgroups, because it's just a nice default Go implementation of cgroup functionality; there's a small sketch of that below.

So let's look a little more at how this is actually happening. I promised I wouldn't dig into the architecture and belabor all these points, so let's focus on a few things; if you do want a more in-depth talk, look for the KubeCon San Diego containerd talk on YouTube, which walks through this whole architecture. Let's focus on the API. I said a lot of people are using containerd via the Go API, and there's also the method by which the GRPC API is exposed from containerd. For example, in the case of the CRI plug-in, it's simply the kubelet talking CRI to the containerd socket, with the CRI sub-project of containerd handling those requests and then using the Go API to call into containerd: start the containers for your pod, set up the CNI networking, et cetera. So that's a very clear usage of the Go API from the CRI implementation.
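On that sub-projects note, containerd/cgroups is consumable entirely on its own, which is why CRI-O imports it. A minimal sketch, assuming a cgroups v1 host and a hypothetical cgroup path, along the lines of that library's README:

```go
package main

import (
	"fmt"
	"log"

	"github.com/containerd/cgroups"
)

func main() {
	// Load an existing v1 cgroup by a static path (the path is hypothetical).
	control, err := cgroups.Load(cgroups.V1, cgroups.StaticPath("/example"))
	if err != nil {
		log.Fatal(err)
	}

	// Read current stats, ignoring subsystems that don't exist on this host.
	stats, err := control.Stat(cgroups.IgnoreNotExist)
	if err != nil {
		log.Fatal(err)
	}
	if stats.Memory != nil {
		fmt.Println("memory usage (bytes):", stats.Memory.Usage.Usage)
	}
}
```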
You also have low-level access to all the GRPC services within containerd. If you think our Go API isn't giving you the level of detail you need from the metadata service or the snapshot service, you can talk directly to the GRPC API endpoints for those services. The Godoc is all online, and that part of containerd is strongly versioned with all the guarantees of semver; I don't think we've broken any of those GRPC-level service APIs across all the releases to date. So you can talk to snapshots, content, containers, tasks, and events directly through that API. I won't go through all of those services, but they are the core services, each with its own GRPC API definition that is strongly versioned, and all abstracted for you nicely by the generic Go API. A lot of the projects I said are using the Go library have stuck with that abstraction rather than talking directly to the GRPC service endpoints; there's a small sketch of both levels at the end of this passage.

So let's talk more about pluggability at the bottom end. The content store obviously has a default implementation in containerd, but you can write your own. Snapshotters I've already mentioned a couple of times, including remote snapshotters; btrfs, overlay, and devmapper are among the built-in ones, and then there's the pluggability on top. And then we'll talk about shims: there's a shim API, and we obviously provide the implementation for runc, but that's where things like gVisor and Kata and Firecracker can write their own shim. I'll show that API at the end.

Let's walk through it, start to finish, beginning with the content store. I mentioned that the ORAS project has written their own content store plugin. That means they don't have to modify containerd, and it effectively gives them a daemonless way to interact with registries using their own content store. I don't know if anyone has used my manifest-tool project, which also doesn't need a Docker daemon or a containerd daemon; it just talks to a registry to build or push multi-architecture manifests. I've actually been rewriting it on top of the ORAS content store implementation, which allowed me to throw away hundreds of lines of code in my project, because they made a really nice interface for interacting with a registry via a very simple Go API.

We'll talk about runtime shims in a minute; those are separate binaries, so if you go follow the firecracker-containerd installation guide, you'll install their actual shim binary and configure containerd to drive that shim. I already mentioned client plugins; the thing I wanted to focus on here is the RemoteOpt interface. It's maybe a little small, and these slides will be online later if you want to look deeper, but this is how you can customize a resolver. Say you write your own registry that's not OCI compliant and you have your own way of resolving hashes to layers and manifests: you can customize that fully with a RemoteOpt, again without changing a line of containerd code. That's how Amazon wrote their ECR resolver; there's a sketch of the idea below. And you can also replace any service, the leases service, the events, the diff service, the content store: you can use all these handlers to supply your own custom implementations, again without having to change a line of containerd code.
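Here's the small sketch of that lower level promised above: the same Go client reaching past the high-level helpers to the snapshot and content services directly. The snapshotter name is one of the built-ins; the namespace and snapshot key are arbitrary:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Talk to the snapshot service for a specific snapshotter directly.
	sn := client.SnapshotService("overlayfs")
	if info, err := sn.Stat(ctx, "demo-snapshot"); err == nil {
		fmt.Println("snapshot:", info.Name, "parent:", info.Parent)
	}

	// Walk everything in the content store, addressed by digest.
	if err := client.ContentStore().Walk(ctx, func(i content.Info) error {
		fmt.Println(i.Digest, i.Size)
		return nil
	}); err != nil {
		log.Fatal(err)
	}
}
```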
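And a sketch of the RemoteOpt idea just described: the stock Docker registry resolver stands in here for a custom one, such as Amazon's ECR resolver, which implements the same remotes.Resolver interface:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Any remotes.Resolver can be swapped in here; docker.NewResolver is
	// the default implementation, and custom resolvers (ECR, non-OCI
	// registries, enhanced protocols) plug into the same interface.
	resolver := docker.NewResolver(docker.ResolverOptions{})

	// WithResolver is a RemoteOpt: it changes how refs are resolved and
	// fetched without touching a line of containerd code.
	if _, err := client.Pull(ctx, "docker.io/library/alpine:latest",
		containerd.WithResolver(resolver),
		containerd.WithPullUnpack,
	); err != nil {
		log.Fatal(err)
	}
}
```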
You can also create your own containerd binary embedded in your project; that's what Darren Shepherd did with k3s. When you install k3s, you get a sort of minified Kubernetes, with a lot of things you "don't need", quote unquote, removed, and containerd built in using the same model, so that, again, you don't have to install and maintain it separately. It's all back to the beautiful world of one big static binary: you plop it down and you have everything you need.

Snapshotters: we just voted to accept the stargz remote snapshotter as a sub-project in containerd, so Akihiro showed you the current GitHub location, and that will be moving into the containerd organization. It's one implementation of a snapshotter plugin, so again, without having to change containerd, you can run the stargz snapshotter or CERN's CVMFS snapshotter, and you can configure and run containerd to use those special snapshotters without getting PRs into containerd. You can have your own custom file system; there's an API you have to implement to become a snapshotter, and you can run that as a separate process. The slide shows an example of how you would simply do that in a Go program: listen on a socket, change the proxy plugin configuration to point to your new snapshotter, and now you have the ability to use it within containerd. A sketch of that pattern follows at the end of this walkthrough.

Finally, I mentioned the shims. We provide the runc shims, and we have a couple of variants of that, because our most recent release has the per-pod shim implementation; you can switch between them in your containerd config. Microsoft provides runhcs for their Windows implementation, and then you have shims for Kata, Firecracker, gVisor, and maybe others I don't know about; those are the major ones that we're aware of, that have talked to us, and that we've played around with. These are separate binaries: you install them from those projects, you configure containerd, and you can now use these runtimes without changing any of the rest of the containerd architecture. A little more detail on that: what you have to implement is fairly minimal, and it's all about the life cycle of a container. If you want to drive VMs, you just handle start and stop, pause and unpause, all of those capabilities, in the way that your runtime needs. And there's a simple naming convention, sketched below as well, so when you start a container with containerd you can say "use this runtime" by providing its type. The slide lists the full API you'd have to implement to become a shim in the shim v2 API; it's effectively the task service within containerd. This last slide is a copy of the earlier chart, and hopefully you can see the highlighted areas, especially in the top half, showing where on those architecture charts these projects fit in, having extended containerd or found the plug point that allowed them to do the special thing they wanted to do.
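To echo the proxy-plugin slide, here is a hedged sketch of serving a snapshotter as a separate process. The built-in native snapshotter stands in for your custom file system, and the paths are hypothetical:

```go
package main

import (
	"log"
	"net"
	"os"

	snapshotsapi "github.com/containerd/containerd/api/services/snapshots/v1"
	"github.com/containerd/containerd/contrib/snapshotservice"
	"github.com/containerd/containerd/snapshots/native"
	"google.golang.org/grpc"
)

func main() {
	// Any snapshots.Snapshotter works here; the native snapshotter is
	// just a stand-in for a custom implementation like stargz or CVMFS.
	sn, err := native.NewSnapshotter("/var/lib/custom-snapshotter")
	if err != nil {
		log.Fatal(err)
	}

	// Expose it as the standard snapshots GRPC service on a Unix socket.
	rpc := grpc.NewServer()
	snapshotsapi.RegisterSnapshotsServer(rpc, snapshotservice.FromSnapshotter(sn))

	const socket = "/run/custom-snapshotter.sock"
	os.Remove(socket)
	l, err := net.Listen("unix", socket)
	if err != nil {
		log.Fatal(err)
	}
	if err := rpc.Serve(l); err != nil {
		log.Fatal(err)
	}
}
```

You then point containerd at it with a proxy_plugins entry in its config file (type "snapshot", with the address set to that socket), and the snapshotter becomes selectable like any built-in one, with no PRs into containerd required.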
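And here is a sketch of the runtime naming convention from the shim slides, using the Go client. The Kata runtime type is just an example: containerd resolves a type like io.containerd.kata.v2 to a containerd-shim-kata-v2 binary on its path, the same way the default runc shims are found:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "example")

	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// WithRuntime selects the shim by its runtime type name instead of
	// the default runc shim; the shim binary itself comes from the
	// runtime project (Kata, Firecracker, gVisor, runhcs, ...).
	container, err := client.NewContainer(ctx, "vm-demo",
		containerd.WithNewSnapshot("vm-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
		containerd.WithRuntime("io.containerd.kata.v2", nil),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)
}
```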
So with that, I think we've got a couple of minutes, so any questions? I was a little bit quick, but hopefully that gave you an idea of where people are extending and using containerd today. Or everyone wants to leave; everyone's done with FOSDEM.

Q: This is probably a question from somebody who doesn't really have an idea of how Kubernetes works, but how does CRI-O relate to containerd? You said the CRI shim is now part of containerd, right?

A: Yes.

Q: So what's the difference between CRI-O and containerd, basically?

A: Effectively, the kubelet points to a particular CRI implementation. If I'm sitting on a worker node of a Kubernetes cluster and it's configured to use containerd, the kubelet is pointing to our socket. If you point it to CRI-O's socket, then CRI-O handles how they've implemented driving runc; they have support for Kata, they have support for runc, and many of those things are similar.

Q: Okay, so they're basically similar projects, containerd and CRI-O?

A: Yes, similar. When they're used as a Kubernetes runtime, I tell people they're effectively quite similar; there are some design choices that differ. What's different is the rest of the architecture and the extensibility. Not that those are missing from CRI-O, but they weren't a design point; it was meant to be a Kubernetes runtime, and that's the path CRI-O implements.

Q: Interesting. Thanks.

A: Yep. Okay, thank you. Thank you.