Alright, we've started. So, to set the stage, I'm going to take you through a little bit of background, a little bit of history of the project, where we are now, and where things are going.

First of all, who's heard of containerd? It's been mentioned at least a couple of times already today if you've been in this room. containerd showed up around the same time that the Open Container Initiative was formed. OCI runc has been mentioned here several times already this morning as well; runc is the executor, the reference implementation of the OCI runtime spec. At the point that Docker decided to use runc, containerd was created as a simple management supervisor on top of the runc processes. Then it was announced that containerd would expand to be more of a core, fully functional container runtime that could be used separately from Docker; that announcement came in December 2016. That would become containerd 1.0, and to differentiate it from the old containerd, we call the old one the 0.2 branch. Then in March of last year, based on promises made at the announcement, containerd was contributed to the CNCF, the Cloud Native Computing Foundation, the same foundation under which Kubernetes and other cloud native projects sit. And then in December, which is just a little over a month ago, we announced our 1.0 release, so that's out there and available to you.

That gives you a quick flavor of where we've been. Obviously you can find out a lot more on our GitHub project page. As we all know, GitHub stars are the gold standard by which you decide whether a project is worthy. No, that's not really that important. But we've got a broad base of contributors, and we've actually made a couple of point releases since 1.0, and we'll talk through that.

So first of all, it might be helpful to stop for a minute and ask: why containerd 1.0? Why was the initial little supervisor for runc not sufficient? Why did we need to create this larger core container runtime? Well, it really continued the spin-out that began with OCI runc. I don't know how many people used Docker three or four years ago, when it was a single statically linked binary you could drop onto a Linux system, and you had your image builder, your daemon, and your client all in one thing. That was kind of nice from a usability standpoint, but not so nice for using Docker in different ways. And obviously, as Kubernetes showed up on the scene along with other use cases, Docker as a monolithic project is not really that valuable for the broader ecosystem. containerd was the next step in that spin-out.

If you look at it like a stack, you've got runc as an executor of the runtime spec, and you've got containerd as this core runtime without the Docker twist on how things are done, whether that's networking or volume management. containerd provides a place where, if you're not interested in Docker's ecosystem, you can still have a full core container runtime. So things like Kubernetes, through the CRI, can rely on containerd and not have to use the full Docker engine, and other cloud providers and other use cases can use it too. And then, of course, combining that with donating the code outside of Docker was valuable to get broader collaboration, rather than it just being seen as another one of Docker's projects.
So with containerd, you might look at it and think: is this just a cut-down Docker? Is it just a smaller version of the big Docker engine? No. In a lot of ways we used the learnings of those first two or three years of the Docker runtime to rethink some of the things that were done, some of which weren't done as well as we would have liked. So really there was a set of technical goals: that we would use gRPC for the API; that we would start from the beginning with everything fully based around OCI, both the image spec and the runtime spec; that the focus would be stability and performance, not so much new features and new releases, so there's a well-defined core of base functionality; and that each part of containerd would be fully decoupled, so images, filesystems, and the runtime could potentially be pluggable and reusable even without the rest of containerd.

So here's the basic architecture. In 20 minutes I obviously can't deep dive into all of these areas; we're going to look at a couple that are maybe of interest. But effectively, each of these is a gRPC service with an API. They all have either metadata or storage: container metadata, image references to the blobs that make up the layers of your container filesystem, and snapshotters, which map to what in Docker we call graph drivers. You probably know about AUFS or overlay or devicemapper; those are snapshotters in the containerd world. And that links through to a runtime where you can pass in an OCI spec and run a container via runc.

We also developed a rich Go library. containerd is not necessarily the best thing for replacing your use of, for example, the Docker client, but for embedding it's actually very powerful. Feedback so far on the API is that it's highly usable and easy to use. I've given a few talks on it as well; you can find online how quickly you can write a 60, 70, 80 line client that can pull images, start containers, pause containers, and remove containers and tasks. So you can check that out. These slides will be online; you don't need to write these down.

Let's just talk through a few pieces of the architecture. The snapshotters I mentioned are similar to what Docker calls graph drivers: how your root filesystem is translated from a set of blobs into a running image is all handled within a snapshotter. To understand the simplicity of snapshotters, it might be useful to think about how Docker dealt with this: a graph driver that dealt with layers and mounts, a layer store with the content addressability of how you assemble an image like Ubuntu or Alpine, and above that a referenceable image store that held the mapping from names to images. The problem was not that you don't need these components, but that there are a lot of interconnections between them that made it hard to write a new graph driver without making a lot of modifications throughout Docker. So when we designed containerd, we revisited that with a much simpler snapshotter interface, where the metadata store is basically the intermediary between my set of layers, the things I've downloaded from a registry, and my runtime that needs a root filesystem. And we can actually look at the snapshotter interface; it's very simple. Again, we don't have time to deep dive into how it operates, but the nice thing about a snapshotter is that it just hands you the set of mounts.
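To give a flavor of that, here's an abridged Go sketch of what the snapshotter interface looks like. I've trimmed away several methods and the option arguments that the real github.com/containerd/containerd/snapshots interface carries, so treat it as illustrative rather than the exact definition:

```go
// A trimmed-down sketch of the snapshotter idea. The real interface in
// github.com/containerd/containerd/snapshots has more methods (Stat, Update,
// Usage, Walk, ...) and option arguments; this keeps only the core shape.
package snapshotsketch

import (
	"context"

	"github.com/containerd/containerd/mount" // mount.Mount: type, source, options
)

// Snapshotter manages layered filesystem snapshots and, crucially, only ever
// hands back a list of mounts -- there is no deep linkage into the runtime.
type Snapshotter interface {
	// Prepare creates a new active snapshot on top of a committed parent
	// and returns the mounts needed to access its filesystem.
	Prepare(ctx context.Context, key, parent string) ([]mount.Mount, error)

	// View is like Prepare, but the returned mounts are read-only.
	View(ctx context.Context, key, parent string) ([]mount.Mount, error)

	// Mounts returns the mounts for an existing active snapshot -- this is
	// what gets handed to runc as the container's root filesystem.
	Mounts(ctx context.Context, key string) ([]mount.Mount, error)

	// Commit freezes an active snapshot under a name so it can be used as
	// the parent of future snapshots (i.e. it becomes a layer).
	Commit(ctx context.Context, name, key string) error

	// Remove deletes the named snapshot.
	Remove(ctx context.Context, key string) error
}
```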
So there's no more deep interlinking from the graph driver all the way up to how a runtime sees that assembly of layers, and this makes it very simple to ask runc to run my container: I just hand over the list of mounts I got straight from the snapshotter, and runc can assemble those, mount them, and I'm off and running. So again, it's a much smaller interface than graph drivers, with simpler relationships. And there's this external mount lifecycle I just talked about, which you can actually play with using our simple ctr tool, the command-line client for containerd. I can list snapshots, I can view them in tree form, and I can even tell it to hand me that set of mounts. Then I can actually mount that on Linux and start playing with my filesystem directly from the command line, without actually running a container. So that's one valuable area where containerd took the things we learned from Docker and actually improved on them.

Running a container, again, I'll just hit briefly. It comes down to getting that list of mounts and the OCI configuration; those are the two pieces of information I need, filesystem and config. Those services can hand that to any supported runtime. So you can use runc, which is the default Linux runtime; Microsoft is working on their Windows containers on Windows and Linux containers on Windows runtimes through their shim; and obviously you can replace runc with hyper.sh or Intel Clear Containers. And because of this decoupled architecture, these other pieces don't have to know about that runtime. Any OCI-implementing runtime will be able to take that information and run your container in that environment.

I mentioned the API. I'm going to skip through these charts quickly so we don't run out of time for wrapping up. The ctr tool obviously has simple commands like push, pull, run, start, and create, and you can see on the right the same actions in code form using the containerd Go API client. I connect to the daemon, I pull an image, and I can then run that image by creating a task and starting it. You can do the same things with the ctr command-line tool, kill a task, and so on. You get the picture: there's both a client tool for these operations and a nice Go API for embedding these capabilities within your own application.

There are also ways to customize the OCI configuration. We've provided a lot of "With" helpers in the containerd API. This one is WithHostNamespace: I want to start a container, but I want it to join, for example, the PID namespace of the host in this case. You can use these With helpers as we provide them, or you can write your own; say you want to take containerd and build a Docker-like full runtime, you can create With helpers to do volume support or specialized networking. So again, these are ways to customize the OCI config as you process it through your client. There's a small code sketch of this coming up in a moment.

I want to talk just briefly about releases. I mentioned that our 1.0 release came out in December at KubeCon + CloudNativeCon in Austin. We spent a lot of time defining a release process; one of the complaints about Docker, especially as it was used as the runtime underneath Kubernetes, was about breaking changes and moving too quickly. So the key points of the release process: we are using semver, and major releases have a support horizon that includes backporting fixes until that support horizon is over.
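To make the client API and those "With" helpers concrete, here's a minimal sketch of a containerd Go client that pulls an image, customizes the OCI spec so the container joins the host's PID namespace, and runs a task. The socket path, namespace, image reference, and IDs are placeholders, and helper names have shifted slightly between client versions, so check the Go docs for the release you're using:

```go
package main

import (
	"context"
	"log"
	"syscall"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// Connect to the containerd daemon over its gRPC socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// All containerd resources live in a namespace.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull an image from a registry and unpack it into a snapshot.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create the container: a new snapshot for the rootfs, plus an OCI spec
	// built from the image config and customized with a "With" helper so the
	// container shares the host's PID namespace.
	container, err := client.NewContainer(ctx, "example",
		containerd.WithNewSnapshot("example-rootfs", image),
		containerd.WithNewSpec(
			oci.WithImageConfig(image),
			oci.WithHostNamespace(specs.PIDNamespace),
		),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// A task is the running instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	// Wait must be set up before Start so we don't miss the exit event.
	status, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	// Stop the task and wait for it to exit.
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	<-status
}
```

This is roughly the same flow the ctr commands (pull, run, kill) wrap for you from the command line.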
Already since December we've done two point releases, and we've backported many fixes out of master, which is leading toward our 1.1 release. So again, we're very focused on support and bug fixes all the way through a support horizon, not forcing people to go to the next point release. The next release should include the Windows container runtime support that Microsoft is working on. And all the stability and backward compatibility guarantees are spelled out in a document; it's clearly stated, for the ctr tool versus the Go API versus the gRPC API, which things are supported and for what time frame, and it looks like this if you read the document.

What does it look like as a project? As I mentioned, we have a lot of contributors, especially since joining the CNCF; other groups have gotten involved. Obviously Docker is still strongly involved for their use case as part of the Docker engine stack. It may be interesting to see that Tesla is the number four contributor; that's only because a Docker employee left Docker and joined Tesla, and they're still very involved in the project. I don't think there's going to be any containerd running in your Tesla vehicle any time soon. As I mentioned, we're part of the CNCF. Another interesting note is that the Moby project governance has changed. I don't know if anyone cares much about open source governance, but there was a sticking point with Docker having a BDFL model codified in a lot of their projects. We changed that last year and now have the Moby Technical Steering Committee that oversees all the Moby projects, including containerd. I'm on that TSC, along with representatives from five other companies. So we're excited to see a growing number of contributors to the project, more than just a limited set of companies.

So obviously containerd is part of the Docker stack, and you've heard about CRI-containerd in a few talks already in this room. There's been an experimental combination of SwarmKit and containerd: think of Docker and Docker swarm mode as the Docker Inc. official release; you could take containerd and SwarmKit, which is an open source Moby project, and basically create your own swarm mode without any Docker Inc. products. LinuxKit and BuildKit, we don't have time to go through those, but they're using containerd. And then this week, just about four days ago, the Cloud Foundry runtime community proposed switching from runc to containerd to get rid of some of the code they've added around the Garden container runtime for Cloud Foundry, so we're excited to see them use containerd. The Apache OpenWhisk serverless project is using Docker today, but they're planning on switching to containerd. There are folks at PuppetRD who have been contributing to containerd, and we hope to see that list of use cases grow in the future.

Last couple of slides, just to give a visual: Kubernetes today with the Docker shim. Each one of these steps in the stack means another gRPC or API call layer, so as you can see, it's fairly deep today for a pod to go through the CRI Docker shim, to the Docker engine, to containerd, to runc. The cri-containerd project has just merged into our GitHub code base, and we're planning to have it as a plugin, so this will be a single binary: containerd plus the CRI.
So obviously you can see a lot of those hops will be reduced, and we've been spending a lot of time even creating a lightweight gRPC protocol for the shim that saves memory and is definitely much faster. So we hope this will be a better stack for Kubernetes' use of the containerd runtime: performance, memory use, stability, and everything else. If you go to Kelsey Hightower's Kubernetes the Hard Way, you can actually deploy it today and try out cri-containerd. You can also do it with LinuxKit: build the kubelet with the runtime set to cri-containerd and get an ISO with containerd and Kubernetes 1.9+, and try it out today.

So again, I've talked through this, but here are the takeaways. We hope that containerd gets broad usage as the core container runtime for the Docker community, for Kubernetes, OpenWhisk, and Cloud Foundry. We've been doing a ton of stress testing; we have 24/7 stress runs with statistics and data that will hopefully keep containerd stable, along with the stability guarantees I talked about with our release process and bug-fix backports. So if you're interested, there are ways to contribute. There's plenty of documentation out there, and there's room for more contributors. These slides are online at the link shown, fosdem-18-ctrd. If you have questions, you can reach me on Twitter, email me, or catch me on GitHub. And I think we're on time. Yeah, we are, right at the second, which means no questions, unfortunately, so we can hand over to the next speaker. If anyone has any questions, I'm sorry. Thank you.