Welcome to the talk on container standards. We're going to talk about CNI, CRI, and OCI. Oh, my. The slides for this deck are available at this URL here, so snap a photo of it. If you don't have your phone out right now, we'll be showing that URL again in a couple slides just so you can capture it. There will be a lot of references to external content that does more of a deep dive into each of these standards, so it should be handy for that context. I've also linked to it on the schedule for the KubeCon talk and tweeted it out, so you can get it through other avenues as well. Cool. So first off, who are we? How do we have any authority to talk about these things? Well, we work at CoreOS, where a lot of these standards got a lot of attention. So I'm Paul Burt. This is my associate, Elsie Phillips. And we work on the community side of things for CoreOS. So we're answering technical questions, matching you with our technical people if you're coming in the door, answering questions on Kubernetes, Stack Overflow, IRC, that sort of thing. So you've probably seen us floating around. And this talk is explicitly about standards and containers, so we're going to try and cover both of those angles. And before we jump into it, we should get on the same page about both of those things. So we're going to do a brief dive, so that we're all talking the same language when we talk about standards. So what is a standard? It's something those ISO folks make, is the short answer. The long answer is that ISO is actually somewhat poetic. The name ISO isn't an acronym; it doesn't stand for anything, because the organization realized that being abbreviated one way in English and a different way in French (I'm not even going to try and butcher the French) was somewhat weird. So they chose a Greek word, isos, which means equal, as their kind of stand-in. So it means the same thing in any language.
And that kind of gets to the heart of what we're shooting for when we go for a standard: we want everybody to be speaking the same language. So we can look at a standard we know, which is JavaScript. You might think, why would you use JavaScript as the example? And that's a good question, but bear with me. So if we go back to the dark days of the web, 1996 or so, this was the state of the art. This was Yahoo around Christmas time. And when we wanted to develop web pages, we were developing in JScript, JavaScript, ActionScript, and a new thing called DHTML, Dynamic HTML. Which means for any web application we were developing, we were potentially developing in four different languages. But all four of those languages were really doing the same thing. So that's kind of a pain in the butt. And that's the world JavaScript, as a standard, came from. So you may hate JS now, but you could hate it a lot more than you currently do if you had to write the same JavaScript five or six different times for every browser that you're supporting. That is a lot of the history and the context behind these container standards: there are a lot of different flavors of containers floating around, and a lot of them existed way before Docker. Docker really popularized the idea and did a great job of disseminating it to the masses. So we want to support that whole ecosystem and allow it to grow and thrive. That is what a standard does. And with JavaScript in particular, while not everyone is the biggest fan of it, I think a lot of people are fans of things like JSON and JWT. There are a lot of follow-on standards that have been built on top of JavaScript that have thrived and really made a big difference in our lives. One example of this in the container space that you may not be aware of is Frakti. Its goal is to allow virtual machines to potentially plug in as a runtime. There's another one announced during this conference called Kata Containers, I believe.
Also incredibly cool, same sort of mindset. So these are the things that we want to support without bending over backwards and re-architecting everything from the start. Cool. So what's a container? This will go pretty quickly; I just want to make sure we're on the same page when we say container, because it does mean a lot of things to a lot of different people. Essentially, a container is just a tar file. It's a bundle of files. If you're not a Linux user, it's more akin to a zip file. We've all messed with those in the old heyday of MP3s, bundling stuff up in WinRAR and sharing it with our friends, that sort of thing. That is what we're working with when we're working with a container. But more specifically, we're working with tar files plus some low-level Linux system calls. Those calls either restrict what resources the tar file's contents can access, or isolate them in a nice way that lets them play nicely with other stuff on the system. So if I'm using Node 4.0 and my production system is using the latest version of Node.js, I don't have to worry about any headaches that come from deploying my app to that environment. Those two versions of Node never talk to each other, which is quite nice. So "containers are Linux magic" is the way I like to sum up all of those things. And if you're curious about more of the technical details behind those Linux system calls and just what a container is, there's a blog post by a colleague of mine from CoreOS, "Containers from Scratch." And Brian Redbeard also has a great talk on building minimal containers from scratch, which really gets you into the meat of what a container is. Cool. So why containers? Why do we all use containers? Well, we use them because hell is other people's development environments. This, I think, is my favorite reason for containers. There are a lot of reasons for them, but the isolation, I think, is one of the biggest reasons we're in this world.
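Since a container image really is just a tarball, the core of that idea fits in a few lines. This is a toy sketch, not any real runtime's code; all the names are made up, and the actual isolation steps (chroot, namespaces, cgroups) require root privileges, so they only appear as comments:

```python
import os
import tarfile
import tempfile

def pack_rootfs(src_dir: str, image_path: str) -> None:
    """Bundle a directory into a tarball -- essentially an image's filesystem layer."""
    with tarfile.open(image_path, "w") as tar:
        tar.add(src_dir, arcname=".")

def unpack_rootfs(image_path: str, dest_dir: str) -> None:
    """Extract the tarball into a directory that could serve as a container's root filesystem."""
    with tarfile.open(image_path) as tar:
        tar.extractall(dest_dir)
    # A real runtime would now isolate this directory with Linux primitives,
    # which need privileges, so they are only sketched here:
    #   os.chroot(dest_dir)   # restrict the filesystem view
    #   unshare(...)          # new PID / network / mount namespaces
    #   cgroups               # limit CPU and memory usage

# Build a toy "image" containing one file and unpack it elsewhere.
src = tempfile.mkdtemp()
with open(os.path.join(src, "hello.txt"), "w") as f:
    f.write("hi from inside the container\n")

image = os.path.join(tempfile.mkdtemp(), "app.tar")
pack_rootfs(src, image)

rootfs = tempfile.mkdtemp()
unpack_rootfs(image, rootfs)
print(open(os.path.join(rootfs, "hello.txt")).read().strip())
```

The tarball part is the boring half; the commented-out system calls are the "Linux magic" half that the standards below keep coming back to.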
So that's a theme that will pop up in a lot of these standards that we talk about. Cool. So we're on to the meat now. What is CNI, the Container Network Interface? The Container Network Interface is a project that was started by CoreOS way back when, and it's rapidly approaching version 1.0 now. If we want to know what the goal of the project really is, we can just zoom in on the readme on GitHub. It says, "CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted." So in a nutshell, what this is saying is that CNI is an API that basically has two calls. It has an add and a delete, and that's it. It's very simple. There's actually a third call for version, but you're probably not going to call that too often. And the context that CNI emerged from was that Docker was dominant, and there wasn't any real standard other than "just do it the Docker way." The pressure for a standard eventually led to CNM emerging on the Docker side. And if you're looking for a good evaluation of these two standards, of how CNI differs from CNM, I think The New Stack has a really great piece that dives into it. The choice quote for you all today is that both of them really try to accomplish the same thing: they're driver-based, plug-in-based, and great for creating and managing network stacks for containers. But the real wart on CNM is that it's designed to only support Docker, and that's not great, especially if you're a Kubernetes user. Like we said before, we want multiple products to be able to thrive the same way the web supports multiple browsers. This was echoed on the Kubernetes blog in a post from January of 2016, where they say that Kubernetes is a system that supports multiple container runtimes, of which Docker is just one. And eventually CNI won. CNI has been adopted by the CNCF as its 10th project; that happened sometime this year. And how's it work?
What does it do? So we mentioned those two calls, the add and the delete; those are the two really important ones. Here's a quick visual of just what a CNI plug-in is at its base: it's really just an executable that does something, and it responds to those two calls. So what happens when a container fires up? The runtime has to create a network namespace first. After that network namespace is created, the runtime reads a JSON config for the network information it wants to stuff in there. That config will contain the name of an executable it wants to run, maybe many executables. And it will execute a plug-in with that name, calling the add command. That plug-in then has the responsibility of reading that same JSON config, which it receives on standard in, and kind of doing its magic, whatever that is. That usually means setting something up in the network namespace inside the container, and then connecting that to the outside world as well, doing whatever it needs on the host system. If there's an error, the runtime will just tell the plug-in to run the delete operation right there, so it'll safely remove what it was trying to do. Otherwise, if everything ran normally, the runtime calls the delete operation when the container reaches the end of its life cycle. So part of the spec is that for every add you do, there's a corresponding delete to remove the work. And that's it. It may seem like I'm oversimplifying things, but it really is that simple. You can check the CNI GitHub page, and the spec, I think, is highly readable. I am not a developer by trade; I play a developer on TV, sort of thing. So I found it highly readable and highly approachable, as someone who messes around with containers day to day and has my own toy Kubernetes cluster, but isn't hardcore in the guts of Kubernetes maintenance. And this is an example of what a configuration file would look like for the bridge plug-in for CNI.
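To make that flow concrete, here's a hedged sketch: a network config modeled on the bridge plug-in's documented format, and a toy stand-in for a plug-in honoring the add/delete/version contract. A real plug-in is an executable that receives the verb via the CNI_COMMAND environment variable and the JSON config on standard in, then writes a result JSON to standard out; this sketch just fakes the network setup in a dictionary:

```python
import json

# Example network config, modeled on the CNI bridge plug-in's documented format.
BRIDGE_CONF = {
    "cniVersion": "0.3.1",
    "name": "mynet",
    "type": "bridge",          # the runtime execs a plug-in binary with this name
    "bridge": "cni0",
    "isGateway": True,
    "ipMasq": True,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}],
    },
}

ALLOCATED = {}  # pretend state: container id -> fake "interface"

def toy_plugin(command: str, container_id: str, conf: dict) -> dict:
    """Mimic the CNI contract. In reality the verb arrives in the CNI_COMMAND
    environment variable and conf arrives as JSON on stdin; here they are
    plain function arguments, and no real interfaces get created."""
    if command == "ADD":
        ALLOCATED[container_id] = {"interface": "eth0", "network": conf["name"]}
        # The result JSON goes back to the runtime on stdout.
        return {"cniVersion": conf["cniVersion"], "interfaces": [{"name": "eth0"}]}
    if command == "DEL":
        # Every ADD has a corresponding DEL that undoes its work.
        ALLOCATED.pop(container_id, None)
        return {}
    if command == "VERSION":
        return {"cniVersion": conf["cniVersion"], "supportedVersions": ["0.3.1"]}
    raise ValueError("unknown CNI command: " + command)

result = toy_plugin("ADD", "ctr-123", BRIDGE_CONF)
print(json.dumps(result))
toy_plugin("DEL", "ctr-123", BRIDGE_CONF)
```

The field names in BRIDGE_CONF are the bridge plug-in's real ones; everything inside toy_plugin is invented for illustration.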
Normal stuff we're used to messing around with when it comes to networking. Nothing too complicated. So what's happened in the CNI space that's benefiting us? Well, we got IPv6 support this year. Plug-in chaining is a thing. Plug-in chaining basically just says: if I'm calling a plug-in and it adds successfully, maybe I want to pass the successful result from that plug-in to the next plug-in down the line, because maybe something relies on the action that just happened. So that's a nice gimme. And then we also got port forwarding. But really the big thing for CNI is that CNI is everywhere. It's kind of hard not to use CNI at this point. There's a flag you can set on your kubelet to disable it when you're running Kubernetes, but really, if you're using any kind of networking plug-in these days, there's a good chance you're using CNI. So it's really taken off, and it's rapidly approaching 1.0. But there's always room for improvement beyond 1.0. So if you want to get your foot in the door before 1.0, they're always looking for contributors. And vice versa, if you have grand ideas for how to improve things, I know they'd love to hear them. All right. This next one is the Kubernetes one: the CRI, the Container Runtime Interface. And just out of curiosity, how many folks here have worked directly on CNI, CRI, or OCI? Oh, wow. All right. So I'm giving a book report on The Hobbit to J.R.R. Tolkien here. Please correct me if anything I say up here ends up being wrong, or if you have any flavor to add. Yeah. So CRI was talked about in depth late last year, late 2016. And the motivation behind it really came from the fact that multiple container standards were emerging. There was Docker and there was rkt, and when we went to add rkt to Kubernetes, it required a lot of modifications to the kubelet, the kubelet being the piece of software that runs on the worker nodes in your Kubernetes cluster.
And adjectives like "volatile" are not generally good in software, especially when what's volatile is an interface. It usually means things are tightly coupled. So CRI emerged as the solution to that volatility. It allowed us to fix that problem so that multiple standards would be easy to support in the future. And what does that look like? Well, it looks like an adapter. So really, what CRI is, is a wrapper that acts as an adapter around the kubelet, based on gRPC and protobuf and a lot of other technical fancy things. It allows us to plug in these various different container standards in an easy, repeatable way, which is very nice. So it's defining that interface in a consistent way. And the timeline for this: like we mentioned, late last year, in December, the alpha came out in 1.5. In 1.6, we got the Docker CRI enabled in beta. In 1.7 this year, around June, the Docker CRI went to general availability. In 1.8, the most recent release that most of us are hopefully on, we had the CLI tools come out. So there's a tool called crictl; very cool, very easy to play with, and I encourage you all to give it a shot. And in the upcoming release, stats is the big thing coming out of CRI. So we're getting a lot of compatibility on the stats side; things that may have worked just for Docker, we're getting for all of the container runtimes going forward, which is super cool. But it's super cool in a "mom and dad got me socks for Christmas" kind of way. It's cool if you're an adult and that's your thing or that's your space, but it's not exciting like a toy you can play with. So how can you mess around and get excited about CRI today? Well, CRI-O is a great way to do that. CRI-O is sort of a minimal implementation of a container runtime. It's based on runc and a lot of other stuff kind of glued together to build that runtime; you basically need to be able to pull an image and then run that image.
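The adapter idea behind CRI can be sketched without any of the gRPC machinery. This is illustrative only; the class and method names below are invented and only loosely echo the real RuntimeService/ImageService calls, and the "runtimes" here just return strings:

```python
from abc import ABC, abstractmethod

class ContainerRuntime(ABC):
    """Toy stand-in for the CRI: one interface the 'kubelet' codes against.
    (The real CRI is a gRPC/protobuf service; these names are illustrative.)"""

    @abstractmethod
    def pull_image(self, image: str) -> str: ...

    @abstractmethod
    def run_container(self, image_ref: str) -> str: ...

class DockerShim(ContainerRuntime):
    def pull_image(self, image: str) -> str:
        return f"docker-image:{image}"
    def run_container(self, image_ref: str) -> str:
        return f"docker-container({image_ref})"

class CriO(ContainerRuntime):
    def pull_image(self, image: str) -> str:
        return f"oci-image:{image}"
    def run_container(self, image_ref: str) -> str:
        return f"crio-container({image_ref})"

def kubelet_start_pod(runtime: ContainerRuntime, image: str) -> str:
    # The kubelet no longer cares which runtime sits behind the interface:
    # pull the image, then run it -- the same two verbs for every runtime.
    ref = runtime.pull_image(image)
    return runtime.run_container(ref)

print(kubelet_start_pod(DockerShim(), "nginx:1.13"))
print(kubelet_start_pod(CriO(), "nginx:1.13"))
```

The point of the sketch is the shape: one caller, one interface, many interchangeable implementations, instead of a volatile kubelet patched per runtime.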
So Red Hat and a lot of other folks have done a lot of great work on CRI-O. And it is extremely easy to demo. Unfortunately, I realized that the demo portion of this would push us over the half-hour time slot we have today. So in lieu of giving you a live demo, I'm just going to encourage you to visit the CRI-O repository in the Kubernetes incubator. They have a great tutorial where literally you just run a script to set up a local Kubernetes cluster and then point to CRI-O as your runtime of choice. So the real goal, the gift that CRI has given us, is that we don't need an entirely different ecosystem of software for every different container format. And thank goodness for that. The idea is, if we're using something like Prometheus for monitoring, we don't have Prometheus for monitoring Docker and then an entirely different product, some "Athena," for monitoring rkt. We can use the same thing for multiple different projects, which is wonderful. All right, so to wrap up this presentation, we'll be discussing how the OCI, the Open Container Initiative, came to be and what standards it has created. When we were talking about the CRI, we talked about the value of having an adapter for various plugs, the plugs being a metaphor for various container runtimes. Well, the OCI is a set of standards for each of those plugs. They might not all be the same shape, but we still might want the packaging or the cabling, for instance, to be standard, and this is the OCI. We want to standardize both sides of the equation. So first, it's hard to deny how important Docker is to this story. And when I say Docker, I want to be clear about what I'm talking about, because Docker is kind of an overloaded term. First, there's the image format, which is usually what people refer to when they say Docker images.
Additionally, Docker can refer to the actual runtime, which also does things like collecting logs and babysitting processes, and which is a build system for container images. Finally, Docker also defines an API. Docker could mean any of those things, but what we really care about for the purposes of this presentation is the image format. Okay, so we're going to start our story in about mid-2014, when Docker was beginning to become very popular. The Docker image format at this time was not static. There wasn't a formal specification; it was only implicitly defined through Docker's implementation. That isn't a problem if all you're using is Docker, but if you wanted to integrate with other systems, good luck. There were no guarantees that your integrations would work. There were also some technical gaps in the format, like a lack of content-addressable images or image signing, and the fact that image distribution couldn't be delegated. These were things that we really cared about, and we saw other people really care about these things too. And unfortunately, they weren't being addressed in the Docker community. So we're going to fast forward a little bit to December 2014, exactly three years ago. CoreOS at this time announced a new project, a container standard, and that container standard was called appc. The motivation behind this project was a combination of the use cases we just talked about. The two key areas of appc are the image format and the runtime environment. appc was built to address our concerns with the Docker image format. It had a formal image specification, allowed images to be discoverable, made image integrity verifiable, and had a mechanism for signing. So we're going to jump forward a couple of months to April 2015. The Docker image format had progressed; arguably the most important addition was a first pass at writing a formal specification. There were things missing from it, but it was a great first step.
We were also pleased to see that there was some work done on some of the other things we cared about. So at this point in time, we had two image formats that had gone their separate ways. The tech media had branded this divergence as "the container wars," which in my opinion is a little melodramatic. We at CoreOS were eager to work with Docker to help build a solution that would work and that everybody could build on. And this is where the OCI comes in. So what is the OCI? As their homepage says, it is an open governance structure for the purpose of creating open industry standards around container formats and runtime. So in short, the OCI set out to define what a container is, so that everyone can implement it and build tooling for it, and also to pull in the best ideas from around the community. A lot of people have bought into this idea. This is where the current list of OCI members stands, and you can see some pretty big names there. So what standards has the OCI produced? Today we're going to be talking about two different but related specs: the image spec and the runtime spec. The runtime specification outlines how to run a filesystem bundle that is unpacked on disk. At a high level, an OCI implementation would download an OCI image, then unpack that image into an OCI runtime filesystem bundle. At that point, the OCI runtime bundle would be run by an OCI runtime. That's a tongue twister. Because we're running a little short on time, we're going to skip through that slide. So if you're curious about the nitty-gritty details of an OCI image, this talk by Jonathan Boulle is really excellent, and it clocks in at just under 10 minutes. The gist that you want to walk away with is that the image is a Linux tarball plus some metadata. So moving on to talk about the runtime spec briefly: as we mentioned earlier, this is basically the on-disk layout of a container just before it executes.
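That "tarball plus metadata" image side can be sketched as a minimal content-addressed layout before we get into the runtime spec's verbs. This is a simplified illustration of the OCI image-layout idea (an index pointing at a manifest, which points at layer blobs, all stored by digest); real images also carry mediaTypes, a config blob, and compressed tar layers, all omitted here:

```python
import hashlib
import json
import os
import tempfile

def put_blob(root: str, data: bytes) -> dict:
    """Store content-addressably under blobs/sha256/<digest>, as the OCI
    image layout does, and return a descriptor for the stored blob."""
    digest = hashlib.sha256(data).hexdigest()
    os.makedirs(os.path.join(root, "blobs", "sha256"), exist_ok=True)
    with open(os.path.join(root, "blobs", "sha256", digest), "wb") as f:
        f.write(data)
    return {"digest": "sha256:" + digest, "size": len(data)}

root = tempfile.mkdtemp()

# Layer: in a real image this is a (usually gzipped) tarball of filesystem changes.
layer_desc = put_blob(root, b"pretend this is a tar.gz layer")

# Manifest: metadata pointing at the layers (fields heavily abbreviated).
manifest = {"schemaVersion": 2, "layers": [layer_desc]}
manifest_desc = put_blob(root, json.dumps(manifest).encode())

# Top-level index and layout marker, per the OCI image-layout spec.
with open(os.path.join(root, "index.json"), "w") as f:
    json.dump({"schemaVersion": 2, "manifests": [manifest_desc]}, f)
with open(os.path.join(root, "oci-layout"), "w") as f:
    json.dump({"imageLayoutVersion": "1.0.0"}, f)
```

Because every blob is named by the SHA-256 of its content, images become content-addressable and their integrity is verifiable, which is exactly the gap in the early Docker format that was mentioned a moment ago.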
The runtime spec defines various life-cycle verbs: what it means to create, start, and kill a container. It also has platform agnosticism, in that a bunch of the configuration in the runtime specification is shared and cross-platform, and then there are platform-specific configuration files as appropriate. So what happened to appc in all of this? We're just going to take a look at this handy-dandy little chart here. As we can see, all of the things that we set out to do in appc, we got into the OCI. So it didn't really make sense for appc to continue to exist when the OCI encapsulated everything we were setting out to achieve. And so once OCI hit version 1.0, we deprecated it. All right, cool. So OCI is the first of these standards to hit 1.0, and that's all for the better for us, because it's sort of the nucleus of a lot of these standards. It standardizes the image format, as Elsie said, which is super important to having all this stuff operate together nicely. You know, so those are the standards. Well, not quite; there's a bit more. In terms of OCI, you would potentially see OCI today in image registries. It would be the container standard that you download when you pull from Docker Hub or quay.io or Google's trusted registry, whatever it is. Unfortunately, the industry hasn't quite shifted to that yet, but that's where things are headed. And in the meantime, we've had some space to look at other standards. So, you know, there is a lot more to be done in the standards space, and one that we weren't able to touch on today is the Container Storage Interface. If you are in the storage world or passionate about storage, that is a young standard being worked on currently, and your feedback would be invaluable when it comes to improving it. So, sort of wrapping this up: standards allow this whole ecosystem to grow and thrive. And as that ecosystem grows, the standards will need to evolve to meet the ecosystem's needs.
That's all the better for us, because we get more options as users. That's sort of the dream when you're making a standard: to provide accessibility and optionality for the people who are using your software. And that's something worth being passionate about. So thank you to all the people here who have worked and contributed on each of those standards. You are giving all of us who use this software a lot more options and choice, and I'm definitely very grateful for that. I know that CoreOS is also very grateful; that move towards openness is behind a lot of the work that we do. We just announced a project called Open Cloud Services, somewhat separate from standards, so I'll just direct you to the CoreOS blog if you're interested in reading more about that. And yeah, thank you. Are there any questions? I think the question is, are images built by the latest Docker version OCI? Is that right? They're OCI compatible, and there's a tool that you'll see Jonathan Boulle use if you check out his talk that Elsie referenced; it's linked in these slides. You can inspect an image and transform it easily into OCI using a tool called skopeo. So if you're interested in seeing what's in there or how it fits into the OCI world, you'll essentially open it and see an index file, and that index file may point to a number of manifests that are maybe operating-system or hardware specific. Cool, all right, well, thank you.