on Linux, Android, they're all based on Linux distributions, right? So really what this allowed us is lots and lots of people contributing and lots and lots of people going off and doing what they want. These distributions helped make Red Hat Linux what it is today. Thanks. I can get off the stage. I hate being on stage.

So now we look at containers. All of a sudden we get into this container world, and the container world has been dominated by one player, or at least people think of it that way, right? People talk about it that way. So how do I make Docker containers into just containers? Everybody talks about Docker all the time. Docker containers. Everything has to go through Docker. We need to make containers as generic as PDF. That's really what the goal of all this stuff is. And containers need to be open. They can't be controlled by a single company. Really what I want to do is open up the whole idea of containers so that we can have more innovation than what's going on right now.

So what exactly is a container? As I go through this talk, we've been asked not to use the word Docker as much anymore, so I'm going to talk about it as "the package formerly known as Docker." So we're going to have TPFKD, which I can't say, so I'll probably have to say it in full.

One of the big pet peeves I have about the entire Docker effort right now is that everything has to go through a big fat container daemon. Everybody has to go to the container daemon and ask for permission to do something. If you want to innovate, you have to get your tool, your innovation, into the container daemon. So people have this understanding of containers as being: I have set up this big, huge fat daemon out there and I run tools that talk to the daemon in a client-server fashion. Well, containers are far more a part of the Linux operating system than something that goes through this big fat daemon.

What a container is, basically, is a single process running on a system that has certain kernel configurations set on it. So when I talk about containers: first, a container has resource constraints. It has certain cgroups associated with it; it's a process with certain cgroup flags set on it. Secondly, a container has some security constraints. It usually has some kind of isolation: something like a different SELinux label might be considered a container, or dropped Linux capabilities, or seccomp, where it drops syscalls. And the third part of a container is namespaces. With namespaces, you go into your namespace and you view the system differently. You view process tables differently, things like that. You might have different mount points inside of your container than other containers have. But it's basically a process that has these namespaces associated with it.

If you pulled up a Fedora system right now, booted it up, and looked at PID 1, systemd, guess what you'd see? systemd is in namespaces, right? If you go on your system right now and cd /proc/1/ns, you will see a whole list of namespaces. If you look at its cgroups, go to /proc/1/cgroup, and you will see that systemd is running in a cgroup. If you look at it, it's owned by root, so that's security constraints, right? It has capabilities associated with it. It has an SELinux label.
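To make that concrete, on a Fedora box you can poke at PID 1 yourself with something like this (just a rough sketch using the standard /proc interfaces):

    ls -l /proc/1/ns        # the namespaces PID 1 (systemd) is running in
    cat /proc/1/cgroup      # the cgroups PID 1 belongs to
    ps -Z -p 1              # its SELinux label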
PID 1 is init. So if you boot up a Fedora system right now, everything is in a container, right? By the definition of a container, it just has these constraints around it. So tools like Docker and the other tools are basically just about configuring the kernel and then launching a process with that configuration. One of the interesting things I get asked a lot about containers, or about applications, is: can I do this in a container? And all I say is, if you can do it on Linux, then you can do it in a container, because by definition, everything in Linux is in a container at startup time. Now, the container might have to be running in the host container environment. It might have to be fully privileged and stuff like that. But yes, you can do it in a container. You can load kernel modules in a container. You can do anything. They're just processes on a Linux system.

So let's look at OpenShift and Kubernetes. OpenShift and Kubernetes are really where we want to do container orchestration. We want to do container orchestration, but we really want to look at the requirements of OpenShift and Kubernetes. So what do OpenShift and Kubernetes need to do to run a container? Well, first of all, containers are not just those process things; they also come with some kind of user space associated with them, right? So when I download a container, if I download the alpine container from docker.io, that comes with some user space. What we need is a standardized way of identifying that image that we're going to be pulling down. So we need a standard container image format. The next thing I need is the ability to go out to a container registry, a place where these things are stored, and pull them down. And that has to be standardized, or somewhat standard, at least de facto standardized, so there's an easy way for me to go to container registries and pull down images and put them on my box. Next, I need to take that image I pulled down and explode it onto disk, usually on top of a copy-on-write file system. That's the way people expect containers to work. And then lastly, I need to execute that container image; I need to execute the software inside of it. So these are the steps you have to do to get a container up and running. And then I need some kind of management API, theoretically, to be able to manage that environment, to list what containers are running, things like that. But that's optional. I don't need that.

So, a standardized container image format. A couple of years ago the Open Container Initiative, OCI, got started. And the Open Container Initiative really went off and tried to standardize two things about the container environment. The first one we're going to talk about here is the OCI image format. This is basically the stuff that's stored in container registries, the image blobs that we store in container registries. Really, all they are is a tarball and JSON. Well, they're a tarball of tarballs and JSON files. That's what a container image actually looks like. And mostly what the standardization of the format was about was specifying what that tarball of tarballs is, and what the format of the JSON had to be. So this has actually been standardized now: the OCI image format allows you to store images in container registries.
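If you want to see what that tarball-of-tarballs actually looks like, you can copy an image into an OCI layout on disk with Skopeo, roughly like this (a sketch; the paths are just examples):

    skopeo copy docker://docker.io/library/fedora:latest oci:/tmp/fedora:latest
    find /tmp/fedora
    # you get oci-layout, index.json, and blobs/sha256/<digests>:
    # the manifest and config JSON files plus the layer tarballs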
And in my opinion, this is the most important thing in containers. This is RPM versus Debian. This had to be standardized. If all of a sudden people started to create different container image formats, we'd end up with people basically having to build applications in multiple formats. One of the big problems in Linux over the last 20, 30 years is that RPM and Debian never got together, the Debian format and the RPM format. So anybody that wanted to build applications for Linux had to build two different formats. We didn't want that to happen in containers. And this has actually prevented that, or hopefully has prevented that, from happening.

So the next thing we want to do is pull and push images from container registries. How do I get an image from a container registry? I'll give you a little history; this is the first new tool we're going to be talking about. Several years ago, anybody who's played with Docker, you've done docker inspect, right? docker inspect basically looks at the JSON that's associated with a container image, or with the container itself, and you can look at that JSON. And what we added to the JSON is things like labels. So if I go in and inspect a container image, it'll come up and say, oh, this is the Apache container image from Fedora. These nice label things. But the problem with that is we actually wanted to be able to go out to a registry and look at the JSON associated with images at the registry before we pulled them down. So we actually opened up a pull request to Docker called docker inspect --remote. What we wanted to do is go out to the remote registry, grab the JSON, and pull it down. Otherwise, the only way to look at a container image was actually to pull the container image to your local machine and then run docker inspect. That was a huge problem, because you'd be pulling down potentially gigabytes of data just to be able to look at it and say, oh no, that's not what I wanted, and now you have to throw it away. So we wanted that, and Docker basically said no. They said, you should just implement that in a tool yourself; don't do it through this.

So we implemented a tool called Skopeo. Skopeo is Greek; it means something like "remote viewing". The original idea was that we'd be able to view the JSON associated with a remote image. Over time, after we built Skopeo originally just to look at the remote image, we said, well, we've sort of implemented a lot of the Docker protocol to look at the image and talk to a registry; why don't we just add pulling and pushing? So we added pulling and pushing of images to and from the registry. And then we started working with CoreOS, and CoreOS said, well, they don't really want to use Skopeo, but they'd really like a Go library to be able to move images back and forth in their environment. They were looking at using it with Rocket; they wanted Rocket to go out to a registry and pull images down, but they didn't want to exec Skopeo. So they said, why don't you break it out into a Go library? So we created the GitHub containers/image project. containers/image now contains all the libraries and tooling to move images back and forth between container registries. Skopeo then wraps it up, so you can use Skopeo as a CLI tool to do it.
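So today, looking at the JSON for an image that's still sitting in a registry is basically a one-liner, something like this (example image name; no pull to local storage needed):

    skopeo inspect docker://docker.io/library/fedora:latest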
But we didn't stop there. Traditionally, the only way to move container images around was: you'd have a container registry, you'd download the image to the Docker daemon, and the Docker daemon would store it locally. But we started enhancing Skopeo and containers/image to be able to move images around in other ways. So we can actually move images between two registries. We can move them between two Docker daemons. We can pull them out of a Docker daemon. We're going to be talking about container storage in a minute; we can move images out to local files, we can move them to OCI image bundles, we can move them all different ways with Skopeo. And it's all because of containers/image. containers/image is also being used elsewhere. We're getting contributions from all different vendors that don't necessarily use Docker. Pivotal, for instance, is contributing heavily to containers/image because they want to use it to interact with their software.

The next thing, after we pull the image, we talked about having to explode it onto disk. So we need to pull the image down and explode it onto disk, onto some kind of copy-on-write file system. And this is really where the big fat daemon becomes a problem. Traditionally we, Red Hat, over the time I've been working on Docker, contributed lots and lots of code to the copy-on-write file system support. Originally because the original Docker only supported AUFS, and AUFS only worked on Ubuntu; it never got accepted into the upstream kernel, so it wouldn't work on RHEL, wouldn't work on Fedora. So we actually contributed the Btrfs driver, we contributed the devicemapper driver, and we contributed the original version of the overlay driver, which has since been enhanced. So basically all the graph driver stuff, a lot of that code came out of Red Hat. We continue to support it upstream, but we basically took all that code and moved it out, and said, really what we wanted to do is... well, let me talk about it a little bit further.

So where do I explode my image onto disk? Going back a couple of years, we added this command, atomic mount. Has anybody played with atomic mount? You can actually take container images and mount them onto the file system. Then you can go to the mount directory and actually look at the contents of an image. You don't have to run a container on it; you can take stuff right out of Docker's storage and mount it. The problem with atomic mount is that it was racy, because the Docker daemon doesn't know that you're mounting this stuff underneath the covers. Docker does all of its locking inside of its process space. So if another tool comes in and tries to muck around with something, Docker can potentially get confused. If you ran this command and then went in there and did a docker rmi fedora, Docker's going to get an error and not understand what happened. So it's kind of a racy situation.

So what we want to do is allow other tools to use the storage simultaneously with the container runtime daemons. We wanted to take that tooling out and move it down to disk. We wanted to take the locking out of the daemon and put it into file locks, to allow other tools to work beside it. So the graph driver code from the package formerly known as Docker was pulled out and put into a separate library, and that library is now called containers/storage. We've continued to enhance containers/storage, add new features, and I think I'll be talking about that later.
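Just to make the image-moving part from a minute ago concrete, here's roughly what that looks like with Skopeo's transports (the registry names here are made up; the exact transport syntax is in the Skopeo docs):

    # registry to registry, without ever storing the image locally
    skopeo copy docker://registry.example.com/myapp:1.0 docker://registry.internal.example.com/myapp:1.0
    # out of a local Docker daemon into an OCI image layout on disk
    skopeo copy docker-daemon:myapp:1.0 oci:/tmp/myapp:1.0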
So the next thing: now I've gone to a container registry, pulled down the standard format using containers/image, exploded it onto disk using containers/storage, and now I need to execute the container. Well, the good thing is that the second part of OCI was the runtime specification. What they've specified is basically how to describe what a rootfs looks like, and then the JSON data that you put on disk, so any tool that implements containers can read that JSON file, figure out what the user expected to happen, and then run a container from it. That's what the OCI runtime specification is. And runc was the default implementation of the runtime spec for Linux containers. As of the package formerly known as Docker 1.11, Docker actually uses runc. So all the tooling I'm going to be talking about going forward is using the exact same container runtime to configure the Linux kernel to run these processes on the system. But because it's now a specification, other tools, other container runtimes, people who want to run containers in different ways, have come along. So you see runV and Clear Containers, which are basically both tools for running containers inside of virtual machines, inside of KVM. And I believe the way Microsoft runs containers on their system uses the OCI specification as well. This gives us alternatives for the container runtime. So now we can run containers using different types of runtimes. We can get the same effect, but we can start to run these containers in different ways.

So now that we have these tools: again, we took the big container daemon and broke out the core components of it into separate libraries that can be used and innovated on at different rates. What my team's been working on over the last year or so is basically how to innovate on top of those. And we've been getting a decent amount of contribution from the upstream.

One of the things we wanted to do was simplify signing, image signing. We've been working very heavily on allowing PGP signing, a thing we call simple signing. We wanted to have something much simpler than Docker Notary. We felt that Docker Notary was very difficult to do and also tied people to a specific registry. We don't believe there should be any tying to a specific container registry. A huge amount of the innovation that's been happening in containers is at the container registry level, right? Everybody has a registry. We have the Atomic Registry or the OpenShift registry, there's docker.io, CoreOS has their version of a registry, Google has their version of a registry, Amazon's cloud has their version of a registry. And most of our customers are all using Artifactory, which is another registry. So there are lots and lots of these registries, but something like Docker Notary is all about tying you to a specific registry. All people wanted was signing. And we've been signing images, called RPMs, for 20 years. We wanted to make the experience of signing container images as simple as the experience of signing RPMs, and that's what simple signing is all about. What we wanted to do is allow multiple people to sign images at the same time. Signatures can be stored in an OpenShift registry, and signatures are totally separate from the registries. So you can put your signatures anywhere you want.
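The way that trust gets expressed is just a plain file on your box. Roughly, the /etc/containers/policy.json that containers/image reads looks something like this (the registry name and key path are made-up examples):

    cat /etc/containers/policy.json
    {
      "default": [ { "type": "insecureAcceptAnything" } ],
      "transports": {
        "docker": {
          "registry.example.com": [
            { "type": "signedBy",
              "keyType": "GPGKeys",
              "keyPath": "/etc/pki/containers/dwalsh.gpg" }
          ]
        }
      }
    }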
You can put your container registries anywhere you want. When you download a container image, it'll go out to the registry and to a signature store, download both, and compare the two things to see if they match. It's a signed image if they match, and not if they don't. And you set up the policy. You say: I trust Dan Walsh; if he signed this image, I trust that the image is secure. And you're in charge of it; it's totally independent of the registry. So signatures can be stored on any web server. This allows you to sign any content, even content on docker.io. You build policy rules and engines and control which image registries you trust. We have all this tooling built in; right now it's built into the atomic tooling, it's being built into OpenShift, and other people are looking at it. Right now we're working with third parties; major cloud vendors are looking at this rather than tying people to Notary. So again, it's innovation. But it's innovation inside of the pulling and pushing of images, right? It's not innovation in the big fat daemon. We can actually do the innovation at the low level. There are a couple of videos; when you get the presentation, watch the video on signing.

So another interesting thing that we've been innovating on is a thing called system containers. System containers, it's kind of a horrible name, because everybody sees "system containers" and thinks of different things. When I think of containers, I'm thinking about a way of packaging software. Containers are a way of packaging software, and then I pull it down and I run it. So I want to get my software from a container registry and I want to install it on the machine. If I want to pull down that container image to my machine and run it, but I don't want to go through a big fat container daemon, I use a system container. So a system container is just a way of pulling container software from a remote repository, installing it on my machine, and then running it. And one of the ways I want to run it is probably at boot time.

So let's look at it. On Atomic Host, with OpenShift shipping as a container image, we had a use case where Kubernetes required etcd and flanneld to be running. There are two services that are required to be running, etcd and flanneld, and they need to start before your big fat container daemon, before the package formerly known as Docker. So we needed a way to download software, install it on the machine inside of a container image, and then have it running before the container runtime comes up and starts running. These containers can be run with read-only images. I'm not expecting these tools to be out there mucking around with images; they're not doing builds. They're just going to download an image and launch it on the system. And then, even if you wanted to, Docker has no way of specifying startup priority. The different efforts at specifying priority are always going to fail, or they're probably going to eventually evolve into something like System V init, where we'd spend our time on priority numbers. But there's a really good tool on Linux for launching services on the system. It's called systemd. systemd doesn't have problems with priority. You can set up all sorts of really complex priorities and different ways of booting your system using systemd.
So what we wanted to do is take these container images off of a container registry, download them, and then use systemd to set up the priority for running them. So, system containers: we use the atomic tool for installing system images. But really all the atomic tool is doing is wrapping up Skopeo and a few other tools and creating a systemd unit file for launching the container. So it uses Skopeo to pull the image from the registry. Then we store the images on top of OSTree. OSTree is really good for allowing us to have multiple images on the system without using up a hell of a lot of disk space. And finally, the atomic command creates a systemd unit file and uses runc. runc is actually optional here; I probably should change this slide. You don't have to use a container runtime or container tool for launching system containers. A container is just a process on the system. If you just wanted to download a container image to load a kernel module, you download the image, you have a tool that basically chroots into where the image is and executes the command to load the kernel module. You don't need a container runtime; you don't need to be mucking around with containers to do that type of thing. All I really want to do is get that container image blob, pull it down to the system, store it somewhere, and then be able to execute commands inside of it. If I want to run containers, I can use runc. So optionally we'll use runc to run the containers, but you could use chroot, I don't care. It's up to you guys to determine the security. And people that are packaging these tools can actually go out and package them that way.

So when I download and install, this is the command to install the etcd container on the system: atomic install --system etcd. It goes out to a container registry, grabs etcd, pulls it down, sets up the systemd unit file, and starts up the container. It's up and running. I want to install flannel: same command. systemd makes sure etcd starts before flannel. systemd makes sure that both of those start before the container runtime. So the packager can specify the criteria for starting up these system containers. Even the package formerly known as Docker is now being shipped inside of a system container. There's nothing to say you can't ship a full big fat container daemon inside of a container, and that's really what's going on here. And you can actually install it that way.

So, standalone containers; I call them system containers, and we sometimes call them standalone containers. One of the things to look forward to, and I think this is starting to happen in Fedora, is packaging up regular software, think about modules, inside of containers. So as we move forward, you're going to have, say, Fedora 26, Fedora 27, Fedora 28, and someone's going to package up an application that runs inside of Apache. Do I need a big container runtime if all I'm going to do is run that service, listening on port 80? No. So we want to allow people to start packaging standard software as containers and running it on the system. I want to run Apache, but instead of installing it as an RPM, I want to install it as a container. It'll listen on port 80 on my host. It will use /var/www on my host. But that Apache could be, I could be running a Fedora 28 system, but I might be running Fedora 26's version of Apache. And I can continue to run that forever.
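Just to put the system-container flow from a minute ago in one place, it's roughly this (using the etcd and flannel examples from above; the actual image and unit names depend on how the packager built them):

    atomic install --system etcd
    atomic install --system flannel
    systemctl start etcd flanneld    # systemd takes care of the ordering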
So we want to start to break apart running the host operating system from running individual applications on it. And I don't need to run everything inside of an orchestrated container. The idea is to use container images as the standard transport for packaging the apps: OCI images instead of RPMs, or in addition to RPMs. So things like MariaDB, PostgreSQL, Apache, think about it, right? We can start to actually think about packaging these things as container images instead of just RPMs. And as we move forward with modularity, this is the way you'll be able to have modularity and have multiple different applications running with different user spaces on the same machine at the same time, without having some big container daemon managing the whole thing.

So let's look at improved storage; I mentioned that earlier. One of the things we want to do with that storage is have things like read-only container images. Right now, the way Docker works is not really good with read-only container images. The big thing about read-only container images is that they provide better security. Right now, everything in the Docker world is shipped with a copy-on-write file system underneath it, which means that if I hack into your system, if I get control of your Apache, guess what? I can rewrite /usr/sbin/httpd. If I had a read-only container running, then when I take over your Apache, I can't rewrite the executable that I'm running inside of. So I get good isolation between the two things. That means I have to hack your Apache every time it comes up, as opposed to leaving a backdoor that makes it easier for me to attack it all the time. My belief is that in production, most applications should always run with read-only containers. /usr should never be writable by the processes inside of a container unless you are building containers. But because Docker ties together the ability to build as well as the ability to run, all the applications underneath basically have the same security. It's basically wide open: anybody is allowed to write anywhere inside of the container image by default. I'd like to change that default going forward. So: get rid of copy-on-write file systems where they're not needed.

So let's talk about container image size. One of the big things, right, everybody's heard about Alpine, and in Fedora right now we have the Fedora minimal container image. Everybody's trying to shrink these container images all the time. People come out and say, oh, my container image is only 80 megabytes. Oh, mine's 120 megabytes. Well, mine's better than yours. Everything's about shrinking container images. And the reason for that is that when developers are working with container images, they're going out and pulling them, and they just don't want to wait that extra 10 seconds for the container image to come down. So that's the whole idea of trying to shrink these container images. But I would argue the real problem here is that we're moving these container images back and forth all over the place. So why do you care? Why can't we use shared file systems for storing our container images? Why do I have to go out and pull at all? Think about Kubernetes. It's going to come up. It's going to launch 100 containers on 100 different nodes. Each one of those containers is, say, a gigabyte in size. So it now goes out to 100 nodes.
Each one of those nodes is going to have to move one gigabyte of data from a container registry to the node. Now we go two months later, there's a security vulnerability, we update the container image, and all of a sudden all those nodes have to go out and move huge amounts of data again. Why are we doing that? If I'm running in an orchestrated environment, guess where my data for containers sits? It usually sits on local host storage. But if I'm running in an orchestrated environment, that host storage has to be networked. If I want to fail over from node A to node B, my data probably has to be on some kind of shared storage. It has to be NFS, CephFS, Gluster, iSCSI; some way, it has to be on shared storage. So the image is not being shared that way, but all the data is. So why aren't we using shared network storage for sharing our images? We want instantaneous updates for container images. So basically what we've done with containers/storage now is we allow you to store your images on NFS, Gluster, CephFS. So you can actually set that up; when we get to the final part, CRI-O, kpod, or any of the other tools that we build can all have shared storage over NFS. So now if I go out and I'm running 100 containers underneath Kubernetes, the nodes don't pull the images down. They just mount the NFS share, and all of a sudden the image is there, ready for the container to run. So we're moving the images into shared container storage.

So let's look at container image development tools. Why do we care how someone builds an OCI image? OCI images are standardized. Why do you care what kind of tool builds it? Again, going back to my PDF description at the beginning: do you care that someone built it with Acrobat Reader? I mean, Acrobat Writer, Acrobat. Do you care how that PDF was created? No. As long as it works, as long as it has the functionality when you're looking at it, why do you care how it was written? So four or five years ago, when we first started using it, Docker came up with the concept of the Dockerfile. The sad thing is that the way we build containers now is the same way we built them four years ago. And people are jumping through hoops to try to figure out ways to get different activity to happen inside of a Dockerfile. So we have a standardized version. And Docker itself has called the Dockerfile a really crappy version of bash, right? It works well for describing an application, but up to this point, the only way to build Docker images or OCI images has been to do a docker build. So can we have alternatives to the Dockerfile? Well, there have been some tools coming along. Has anybody played with Ansible Container? Ansible Container is a way of describing an application in Ansible as opposed to describing it inside of a Dockerfile. And then there's OpenShift S2I. OpenShift S2I takes out Dockerfiles altogether; it hides it all totally behind the scenes. You basically do a git check-in, and as soon as you do a git check-in, OpenShift somehow magically takes that information and creates a new Docker image or a new OCI image. So there are new ways of developing these. The problem is every one of these has to talk to the Docker daemon, because the only way anybody's ever built container images is through the Docker daemon. So I wanna be able to build container images without requiring a big fat daemon.
And I talked to this man over here, Nalin, at DevConf, and I said, hey, we're doing all this stuff with container storage, could we build a container image with it? That was on day one, and on day two he got up at a lightning talk and actually demonstrated building a container image without a container daemon. So we created a thing. He made fun of the way I say the word "builder", so it's Buildah. We created a GitHub project that I'm gonna call Buildah. The goal here was basically to use bash as the way of building container images. So if I'm building a container, really what I wanna do is build it from a base container, so I needed the from command. If you look at a Dockerfile, you see the first command is almost always FROM: from Fedora, or from something. So he built a command line: buildah from fedora. When I execute buildah from fedora, guess what it does: it uses containers/image to go out to a registry, grab Fedora, pull it to the local machine, explode it on top of containers/storage, and create a container ID. As soon as I do that, the next command I can do is mount it, right? I don't have to go through a daemon, I don't have to ask "mother, may I?". I can actually go out and mount it. So he added a command called buildah mount. You give it a container ID, and it gives you a mount point. The slide is somewhat lying here, so we'll ignore that for now; actually what it does is return the mount point where it mounted it. If I wanna run a command inside of the container image that I'm building, if I wanna do the equivalent of a RUN command inside of a Dockerfile, I can do a buildah run. So I can do a buildah run of the container and do a dnf install. What that will do is actually use runc to create a container on top of my image and run dnf inside of it. So it'll be locked down. If I wanna control what's going on, I can do that. But if I've done the buildah mount, I can actually just do a make install with DESTDIR set to the mount point, right? There's nothing special about these container images; they're just space on disk. If I've mounted it up, I can copy content directly in. I can do a dnf install with the install root pointed there.

But one of the problems with Dockerfiles right now is that I have to have all my tools inside of the Docker image to be able to build. So when we're looking at minimizing the size of container images, one of the big things is I gotta have dnf in it, and dnf requires Python. So I have these big tools imported into my container. If all I wanna run is Apache, but I have to have Python and dnf and all these other tools inside of the container, it's bloating it up, and it's also increasing the attack surface. If I build my container using the host machine, as opposed to building inside of the container, I can actually start to shrink the requirements for tools inside of it. When I'm done, I need to commit the image, right? I'm building a container, I'm modifying the existing image, and I need to commit it. So this is the buildah commit command. And I can actually specify all sorts of things on the command line; all those other flags that are inside of a Dockerfile, we can specify on the command line. So with Buildah, now that it's implemented, I can do the entire build: every single command that you would expect to see inside of a Dockerfile, you can execute inside of a bash script, okay? With no container daemon in the way.
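So a whole build can literally be a short shell script. Here's a rough sketch (the httpd example and file names are made up; check the Buildah docs for exact flags):

    #!/bin/bash
    ctr=$(buildah from fedora)                  # pull the base image, get a working container
    mnt=$(buildah mount "$ctr")                 # mount its root filesystem
    buildah run "$ctr" -- dnf install -y httpd  # run a command inside it, via runc
    cp index.html "$mnt"/var/www/html/          # or just copy content straight into the mount
    buildah umount "$ctr"
    buildah commit "$ctr" my-httpd              # commit once, at the end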
So there's no container daemon causing you any problems in this environment. Yep. Well, all right, so, again, the daemon forces everybody to go and work in the same type of environment. The problems with the daemon are the locking, so I can't access the storage without going through the daemon. I can't get any new features into the way I want to deal with the tools without going through the daemon. So if I want to build a tool that does special things, like mounting, I have to get that daemon to agree to mount these things out to external storage. All right, so the real goal here, and I want to say this because everybody always says, oh, that's what systemd does, is basically to break things apart into core components and then be able to innovate and use the core components separately from the big centralized daemon, okay?

Right now Ansible Container and OpenShift are both looking into using Buildah as their method of building. We've also talked with several companies that are looking at streaming continuous builds. It turns out Buildah is a lot faster for building, because Buildah only commits once. Well, you control how often Buildah commits. When you're using a Dockerfile, every single line has to be committed. So if you look at Dockerfiles, you'll see all these people doing these huge RUN lines with backslashes and stuff, because they want all these commands to be executed at the same time inside of the build environment. With Buildah, we've taken that apart and basically allowed you to do individual steps as you're building. So you can say, yeah, that's good enough, I'm gonna commit now, and then I can continue adding to it. Or you can just do everything in one huge hundred-line script and commit at the end. If you look at some of the JBoss applications, they end up with 50, 75 layers being added when they're built with docker build. If you do a Buildah build, you can really shrink that and speed it up, because you're not adding a layer for every single command.

So what about Dockerfiles? Yep, yep. You define it as a bash script instead of a Dockerfile. Yeah, I mean a Dockerfile is just a bad bash script, right? Yeah, so. No, no, we're not doing anything there; if someone wants to innovate on top of Buildah, they can do that. So, Buildah: what about Dockerfiles? Everybody in the world is working with Dockerfiles, so one of the things we had to do with Buildah is actually support the Dockerfile format. So we created buildah build-using-dockerfile -f Dockerfile; we actually call it bud. So buildah bud -f Dockerfile basically allows you to build; it'll walk through the entire Dockerfile and run it exactly like you did a docker build. So if you have a lot of applications that people are submitting, like in Fedora, we were at the Fedora container talk yesterday and they were talking about people submitting Dockerfiles and them building them inside of the Fedora registry. One of the things we asked at the end is: are you looking at Buildah for doing that? And yes, the answer was yes. So they're gonna look at using Buildah for building container images. Theoretically, at some point in the future, Buildah might allow us to get better security than using docker build. We might be able to do builds with fewer privileges than we require right now, but at the moment it requires sysadmin privileges.
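Anyway, if you already have a Dockerfile, running it through Buildah is basically just this (a sketch; same Dockerfile, no daemon):

    buildah bud -f Dockerfile -t my-image .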
So Buildah's kind of a really cool tool, but the real goal in this whole thing was that we wanted to not require the Docker daemon. So guess what? I've been insulting big fat container daemons, and guess what I'm about to introduce? A somewhat thinner big fat container daemon. So, container management. Now going back to Kubernetes. Kubernetes wants to support more than one container runtime. If you looked at Kubernetes up to a year and a half ago, it basically had Dockerisms all through it. It was talking to the Docker Engine API to do all that stuff. CoreOS went to them and said, we wanna have Rocket running containers inside of Kubernetes. So here's a huge patch that basically said, the Go equivalent of an ifdef: if Docker, do this command, else do this Rocket command. And Google and the Kubernetes people said, hold on, we can't do this. We can't end up with Go code with like 4,000 different if-then-else statements. So what they did is they said, we're gonna define the API that we expect a container runtime to implement if it wants to work as a container runtime for Kubernetes. And that was called CRI, the Container Runtime Interface. So when they defined the CRI for Rocket, we jumped in and said, hey, why don't we build a little tiny tool, a daemon that just implements the CRI interface and launches containers for Kubernetes?

If you look at another thing in Kubernetes: Kubernetes has had an incredible problem keeping up with Docker, okay? Docker changes all the time, especially last year, right? There was Docker 1.8, 1.9, 1.10, 1.11, 1.12, 1.13, all within a span of a few months. Every single release broke Kubernetes. Matter of fact, Kubernetes right now only supports Docker 1.12, which has been out for almost a year. They haven't moved to Docker 1.13 yet. Docker has now changed their whole thing, when they changed the name to Moby; they release on a monthly basis now, so this is the 17.06 release. I don't even know what the latest release is, right? But every single release, they broke Kubernetes. So we said, why don't we implement a container runtime daemon that is guaranteed to never break Kubernetes? And the way we want to do that is: any pull request, any change, has to pass the Kubernetes test suites, the CRI test suite and the end-to-end test suite. So we're creating a thing called CRI-O, which stands for Container Runtime Interface for OCI, the Open Container Initiative, okay? We introduced it last year. It sits underneath Kubernetes, implements the Kubernetes Container Runtime Interface, and the kubelet launches container pods through it. This is fully open, with contributors from Red Hat, SUSE, Intel, Hyper, IBM, and lots of other people contributing to it, okay? As I like to say, there's a whole bunch of people that are like dogs sniffing around the edge of it right now. We're hearing from everybody that's interested in it. Whether or not they're willing to contribute, I don't know, but every big major company you can think of that has anything to do with containers, other than Docker, is interested in CRI-O right now. Intel added support for CRI-O to run KVM-based containers, Clear Containers.
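And for what it's worth, pointing a kubelet at CRI-O is basically just telling it to use a remote runtime over the CRI socket, something like this (the flag names and socket path here are from memory, so check the CRI-O docs for your version):

    kubelet --container-runtime=remote \
            --container-runtime-endpoint=unix:///var/run/crio/crio.sock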
If you go to cri-o.io/blog right now, you'll see I published today an Intel blog talking about how they run Clear Containers under CRI-O. The CRI-O package is now available for Fedora 25, 26, and Rawhide, all the Fedora versions; Fedora is where we've been delivering all this stuff first. Everything comes to Fedora first, but we're also working on getting it onto Ubuntu and other distributions. And it's the first CRI-based daemon to pass the full Kubernetes CRI end-to-end test. Right now, when you submit a pull request to CRI-O, it launches hundreds of tests. As a matter of fact, it takes between an hour and two hours to run the entire test suite. Until you pass all those tests, we're not accepting the pull request. You have to pass all the tests to get it in. And again, it has to be that way, because we can't break Kubernetes. Kubernetes is what matters.

So does everybody know who Kelsey Hightower is? Kelsey Hightower is sort of the lead evangelist for Kubernetes, and he has a massive following, 35,000 followers. He has huge influence. So last week, he announced that as of this Friday, I guess, or Friday the 31st, a huge update to Kubernetes the Hard Way is going to ship. He talks all the time about Kubernetes, and it's all new labs, including encrypted secrets and using CRI-O as the container runtime, that he's about to ship. And look at the first person that chimed in underneath it: Solomon Hykes. And he's not happy about this.

And one of the things that happened while we were developing CRI-O is this thing called containerd. Originally, CRI-O was basically an alternative to the Docker daemon. If you look at the way Kubernetes worked with the Docker daemon: it goes out to the Docker daemon, the Docker daemon pulls down the image, stores it inside of its internal storage, and then launches runc. That's the way the Docker daemon worked. Docker, as of 1.11 or 1.12, added this thing originally called containerd. containerd was tied to Swarm, Docker Swarm, a competitor to Kubernetes. containerd existed because Docker was too heavyweight, so they wanted to have a thinned-down containerd, and that worked. So all of a sudden Docker went from Docker launching runc, to Docker talking to containerd and containerd launching runc. Actually, if you go back further, Docker used to launch containers directly, then moved to runc, and then moved to containerd launching runc. So all of a sudden containerd was out there as part of the Docker project. After CRI-O started getting a lot of press last fall, Docker decided to add more functionality to containerd. They created a project called containerd that was separate from Docker. They then took all the functionality that we were doing in CRI-O, right, the separate containers/image and containers/storage, and moved that into containerd. So now if you talk to containerd, it will go out to a container registry, pull the image down, put it onto storage, and so on. We've been asked to contribute to that. And the first thing we said is, sure, we'll contribute to that: let's use containers/image as the way of pulling images, let's use containers/storage as the way of handling storage. And they said no, we're not interested. They also tied it to what's called a BDFL, which is a benevolent dictator for life. And guess who the dictator for life is? Solomon Hykes.
So we basically said we wanna separate all this functionality out into different libraries so we can innovate at different rates. They turned it down. And so what Docker is trying to do is get everybody to move to containerd. So containerd and CRI-O are the two competitors in this environment, and that's what's going on. But containerd is also a daemon for supporting Swarm, Mesosphere, Kubernetes, and any other Tom, Dick, and Harry that comes along and wants to do container orchestration; it's all supposed to go into that. So guess what it's doing? It's gonna blow up into a big fat container daemon. We built a container daemon that's dedicated just to Kubernetes. We'll see who wins.

One of the problems with moving away from Docker as a backend for Kubernetes is: if you use Kubernetes right now and you wanna find out what's going on in the system, what do you do? You get onto the node that's running Kubernetes and you start to run Docker commands. You run different Docker commands to diagnose what's going on and to understand what's happening behind the scenes in the Kubernetes environment. So we needed a debugging tool for doing that. So we started this effort inside of CRI-O called kpod. kpod is a management tool for managing and administering CRI-O storage and pods. So we've added kpod diff, kpod export, kpod history, kpod images, kpod info, kpod inspect, kpod load, kpod ps. Anybody notice a trend here? Those are Docker commands, okay. So what we're doing is we're implementing the entire Docker CLI in kpod, without a big fat daemon. So again, you don't even need CRI-O running. This happens behind the scenes, and you're able to do almost everything you can do with the Docker CLI, but you're not talking to a daemon. It's not a client-server operation. You're just running the containers as a child of your kpod environment. So this is how far we are with kpod right now. All these commands are implemented, and we're probably adding about one command a week. If you go to cri-o.io/blog, you'll find lots and lots of blog posts and videos showing you how we're implementing it. We haven't implemented the most important ones yet, because they're the hardest to do: kpod run, kpod exec, kpod attach. Those are all being worked on heavily, but we really wanna make sure we're doing them correctly. Once we have kpod, we're actually creating a new library called libpod, which kpod and CRI-O will end up sharing, so they can do all of this activity through the same interface.

kpod, when it originally comes out, is just gonna be tied to what the Docker CLI can do, but we really wanna get to the point where it's actually launching pods. We wanna do more experimenting around what it means to be in a pod. So when I'm talking about pods, for those that don't know: a pod is basically a way of running more than one container in the same environment. Kubernetes basically says, Kubernetes runs pods, not containers, but you can run a single container inside of a pod. Almost everybody tends to do that, but you can actually have other containers that monitor what's going on with the main container inside of a pod. You can have what are called init containers. You can start to run containers with different privileges inside of a pod environment, and the pods move around from node to node.
Pods cannot span nodes, but the pod becomes a single unit which can run one or more containers inside of it. So we wanna experiment with kpod in that environment once we have the entire Docker CLI implemented.

So, CRI-O next steps. We wanna move it out of the Kubernetes incubator project; we're working very hard on that. We wanna get 1.0 out. CRI-O 1.0 is gonna be tied to Kubernetes 1.7. After we get our 1.0 release, and it's because the engineers wanna have a 1.0 release, from then on every CRI-O release will be the exact same version as Kubernetes. So the CRI-O that supports Kubernetes 1.8 is gonna be 1.8: CRI-O 1.8 will be Kubernetes 1.8, CRI-O 1.9 will work with Kubernetes 1.9. So we will have them matched up. The only one that breaks that is that you have to have a 1.0 release, at least they want one. We need to pass the OpenShift test suite. Right now we're tied into Kubernetes, but we also wanna tie into OpenShift. So eventually we wanna get to the point where CRI-O won't be updated unless your patch will not break Kubernetes or OpenShift.

So: OpenShift tells Kubernetes to execute a pod. Kubernetes communicates with CRI-O. CRI-O uses containers/image to pull the image to the host. CRI-O stores the image using containers/storage. CRI-O then launches the pod using runc. Or, as the blog today tells you, it can run using Clear Containers. Standards-based container runtimes: alternatives to the package formerly known as Docker and to Rocket. Conclusion: break up container runtimes into their core functionality, pulling and pushing images from registries, storing images, running containers, and innovate new and interesting ways of using containers. And the end goal is that Linux containers become as generic as PDF. Any questions? Or does everybody understand it thoroughly? Yes, Matt?

Well, when CRI-O has full... so I guess I should repeat the question. The question is: is CRI-O interesting to a person who just wants to use the Docker CLI? Someone who just wants to play with containers on their local host, doesn't understand orchestration, doesn't wanna worry about orchestration, just wants to launch containers and play with them. Our goal with the kpod tool is to actually give you that experience. So you can have kpod as a way to do all of the Docker CLI that anybody uses; you'll be able to execute it with kpod. You don't start CRI-O, you don't run the CRI-O daemon, you just use the kpod tools. Eventually we might break kpod apart from CRI-O, but right now it's sort of tied together. And you can use Buildah at the same time. So if you wanna build container images on your host, you can use Buildah to build them, and as soon as Buildah's done building, you can do kpod run to run a container from it. You can use Skopeo to copy those images out to a registry. You can use buildah push, you can use Skopeo, kpod push; all the tools have the ability to interact with it. Right, so the advantage is that you have lots of flexibility in the tools that you can use.

You mean the advantage over using the Docker daemon? Well, I think eventually, I mean, the advantage is that you could fire up the CRI-O daemon and CRI-O would instantaneously see the images you want. So you can go right into Kubernetes and say, launch this container I just built, and it'll already be inside the storage. But yeah, I mean, I think there are advantages because we're innovating at different levels in the different tools.
But yeah, overall the goal is to make it sort of a replacement, you know, the same experience. Our goal with kpod is to give you muscle memory. So if you typed in, you know, run -ti as the way you launch a container with Docker, you will type in kpod run -ti for launching containers here. We're taking almost exactly the same API. Things you won't be able to do: you're not gonna be able to do the Docker swarm stuff. You won't have a kpod swarm; we're not gonna implement that. If someone else wants to implement it, we're fully open to accepting patches from anybody that wants to do that. Some companies have contacted us and they wanna use pods without using Kubernetes, so they're looking at using our pod libraries to run underneath that. We're open to that. Again, our goal here is not to be a BDFL. This is not gonna be Red Hat's only way of doing it. Red Hat has not decided whether we're gonna support Clear Containers or other runtimes, but anyone who wants to contribute: it's a fully open source project. So anybody that comes in and contributes, gives us a good reason to have it, or makes lots of contributions, we will accept it. Anybody else have any other questions? I'm out of time anyway. Okay, well, thanks for coming. Does anybody wanna contribute? At the end, these are all the different places, right? There isn't just one centralized location: you can contribute to containers/image, containers/storage, the Atomic project, Buildah, Skopeo, CRI-O; there are blogs, Mediums, things like that. So thanks for coming.