So my name is Dan Walsh, and I lead the container team at Red Hat. She talked about Kubernetes under the covers, or under the hood; well, I do what's under the hood of Kubernetes. We do everything to do with running containers at the host level. Under my team there are probably 20 or 30 engineers, spread cross-functionally. I've actually always worked on RHEL. I've been at Red Hat for almost 17 years. In that time I've mainly done security; SELinux is what I'm fairly famous for. But I've been doing container technologies all the way back to RHEL 5, the 2005 timeframe. So when the container revolution started a few years ago, I got picked to look at it at the low level, on the operating system. I now work for the OpenShift organization, but in my group there are people who work on RHEL, work on storage, work all over, so it's really cross-functional. And I'm an engineer, not a manager.

Anyways, this talk. Hopefully this will work. Anybody else have one of these? Okay, hopefully I won't be running back and forth. As Diane said, last night about five o'clock I got a note from management saying that we should attend a meeting after the close of the stock market at four o'clock yesterday. And I noticed that the email went to me and most of the people on my team, even though we're cross-functional. So I said, there's something going on here that I don't know anything about. Basically, Red Hat and CoreOS have just decided to join forces. There's very little I can answer in questions; I'm not sure what this all means. I have some ideas, but we haven't really even talked to the CoreOS guys, other than tweeting at them and saying welcome to Red Hat. But when I've done this presentation in the past, I've often talked about the contributions of CoreOS to the container environment, and now I can put their logo on the slide. So I'll show you where they've contributed greatly. Is that working? Awesome. Thank you very much. You're the man.

Okay, so I've given a version of this presentation back at the Red Hat Summit. One of the things I like to talk about at the beginning is three letters, and what those three letters mean to you in this room. When you see something that ends in .pdf, what does it mean to you? I believe it means you know you can look at it, right? It's a document. You see it, you know it's a document, you can view it. What can you view it in? All the web browsers; you can view it in different tools; you can use it just about anywhere. How can you create PDFs? There are lots of tools to create PDFs, right? You can create one from the web browser, from your e-mailer. There are tools that allow you to create special PDFs. But when you see that PDF, do you instantaneously say Adobe? Do you think the only way you could ever look at one of these is with Adobe Reader? That the only way you could ever print it is from Adobe products? It became sort of a generic thing, and it's great. And it actually made Adobe stronger, because it became everywhere, right? It became a standard. Linux: when you see the keyword Linux, do you have to think of Red Hat? No, right? Linux is everywhere. It's in your cell phones, it's in your cars, it's on your routers, it's in your IoT. Linux is everywhere. But what if only one company had ever provided Linux, even if it was Red Hat?
I don't believe Red Hat would be as successful as it is. By not controlling it, Linux is a standard. People see the word Linux and they know it's an operating system that can run everywhere. Now we get to containers. We have to make containers generic. We have to allow different ways of creating them.

How many people in this room know what a container is? Okay, that's good. But let me give you my definition of what a container is. A container is simply a process on a Linux system that lives with, first, some resource constraints; in Linux we call those cgroups. Secondly, it has some security constraints: it has things like seccomp rules, it has file ownership, it has Linux capabilities associated with it, and if you're running on an SELinux system, it has SELinux labels on it. And thirdly, it has this concept of namespaces. Namespaces are things like the PID namespace, where you sort of get that feeling of virtualization, or a network namespace, where I have my own network device; so I start to have a virtual feel.

So if I booted up a RHEL 7 system right now, or a Fedora or Ubuntu system, and I looked at the first process that comes up on the system, and I catted out /proc/1/cgroup — guess what? PID 1 is in a cgroup. If I went to /proc/1/ns, I would see that PID 1 is in a group of namespaces. If I asked what its SELinux label is, it has an SELinux label. If I looked at its capabilities, it has capabilities. So I would argue that every process on a Linux system is in a container. And containers then just become us modifying and manipulating those fields: I can modify the cgroup it's in, I can change its resource constraints, I can change its SELinux label, I can change the namespaces. But every process on a Linux system is in a container. I saw someone earlier had it — we have a shirt that Red Hat puts out all the time. On the front it says Linux is containers, and on the back it says containers are Linux. And that's what it means. Actually, it's right here — you have the shirt on, okay? So when you look at a Linux system, everything is a container. If people come up to me and say, can I do this in a container? I say, if you can do it on Linux, you can do it in a container. So let's contain it. Containers are just Linux.

Okay, this might be a US thing, but do you guys know what a swear jar is? In the US, when you're raising kids, any time they swear they have to throw a coin — in the US it would be a quarter — into the swear jar. So I've been asked by the D company not to use the D word anymore. So if I use the D word, I will throw money in — this is my swear jar here. I wish I had a better jar. So I'm gonna try like hell not to use the D word.

So when OpenShift or Kubernetes comes along — actually, forget about OpenShift and Kubernetes — when you wanna run a container, what does that mean? I wanna run a container on the box. Well, the first thing I need is a definition of what a container is. Or really, what a container image is. Because usually, as I said, all processes are containers, but really what I wanna say is: I wanna run the Nginx container, or the Fedora container, or an Apache container, or an application container.
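If you want to see that "every process is in a container" claim for yourself, a minimal check on just about any Linux box looks roughly like this — the exact output will differ by distribution and version:

    cat /proc/1/cgroup          # the cgroups PID 1 belongs to
    ls -l /proc/1/ns            # its namespaces (pid, net, mnt, ipc, uts, ...)
    grep Cap /proc/1/status     # its Linux capability sets
    cat /proc/1/attr/current    # its SELinux label, on an SELinux-enabled system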
So what does that mean when I say that? Well, we have to have a standard that defines what those containers are, okay? And I give them credit — I really can't say it without it — but Docker developed a standard for that. They defined a standard that was basically an image format. And the image format is — this is real technical here — a tarball and a JSON file, okay? You create a rootfs, which basically looks like the slash of an operating system. You tar it up, and then you have some JSON data that you associate with that tarball, and the JSON data defines things like: this is the entry point to my container; these are the environment variables I want set when you run the container; Dan Walsh created it, so put a maintainer flag in it. So that JSON file describes what's in the container.

Now we have the concept of layered containers, or layered container images. A layered container image is basically: I create that container, now I add new content to that rootfs. I tar up that difference, and I create a JSON file that's slightly different than the original one. I bundle both tarballs together and both JSONs together, and that's how I create a layered image. If I wanna add another layer, I keep on doing that. So I end up with a bunch of tarballs and a bunch of JSON files inside of a tarball, and that's what the standard image is.

So CoreOS actually wanted to standardize this. Several years ago, they decided they wanted to standardize what that JSON file and the format look like, and they created the appc spec. appc was a competitor to the Docker image format. And so the world went wild. I was dead set against this at the time, because I didn't want us to end up with RPM versus Debian, right? Where for the last twenty-some-odd years, people have had to package software for Linux in two different formats. We wanted a single format. And because CoreOS wanted to develop a standard for it, it forced the D company to come to the table, and we formed a standards body that included companies like CoreOS, Red Hat, Microsoft, Google, IBM, and about four or five others. We formed an organization called the Open Container Initiative. And the first thing it did was standardize the OCI image format. That got standardized last year, about this time. So now I have a standard way of putting content into what I call a container image — of defining my software, of packaging my software into a container image.

I can now take this container image and I can actually store it at a website: here's my application, I put it at a website. Those websites are often called container registries. But really a container registry is just a website that has a whole bunch of these OCI image format files. So the next thing I need is: how do I get that image off of the registry and copied to my host? How do you install an application? How do you pull an application down? Someone in the room tell me — how do you get containers onto your system? Anybody? You got a quarter? Okay, that's the only way. That's the only way to copy a tarball off of a website to the host. We're four years into this. Four years into it, and that's the only way. Ain't that sad? It gets sadder.

Okay, so for the image-pulling stuff, a little history — some of this talk looks back. A few years ago we decided to create a tool called Skopeo. I'm gonna talk about that at the end. We wanted to be able to go out to a container registry and actually look at the JSON file.
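To make the tarball-plus-JSON idea concrete, here's a rough sketch of what an image looks like once it's laid out on disk in the OCI format, using Skopeo, which I just mentioned. The image name is just an example and the digests are elided:

    # Copy an image from a registry into the OCI on-disk layout (the "oci:" transport)
    skopeo copy docker://docker.io/library/alpine:latest oci:alpine-oci:latest

    find alpine-oci
    # alpine-oci/oci-layout              version marker
    # alpine-oci/index.json              points at the image manifest
    # alpine-oci/blobs/sha256/<digest>   the manifest, the config JSON
    #                                    (entrypoint, env, labels), and the
    #                                    layer tarballs themselves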
Remember I said there's a JSON file in a tarball? Well, those tarballs can get awfully big. I've heard rumors that some of our JBoss container images are like 1.5 gigabytes. So you're pulling this thing over — what happens if you just want to look at that JSON file? To copy it to your box, you have to copy 1.5 gigabytes. You get it down to your box: oh, that ain't what I wanted, let me destroy it. So we wanted to add something to inspect — to let us just pull down the JSON file, not the entire image. And we went to the D company and asked them to take a patch, a docker inspect --remote. And they said, ah, you don't need to do that, it's just a website, go build your own. So we built a tool called Skopeo. Skopeo implemented the protocol. Skopeo means remote viewing in Greek. And what we did is we were able to remotely view the image and figure out whether you wanted to pull it down or not. Or if there was an update — maybe you had an image locally — see what's on the host and figure out if you want to pull images. Eventually the engineer who did that for me said, well, if I'm going to do that, I might as well just implement the entire protocol for pulling the image to the host. Eventually we implemented push as well. So we had Skopeo. Skopeo is probably our most used open source project of the tools I'm going to be talking about here. Lots and lots of companies are using it now to move images around. We'll talk about it more at the end.

But we were working with CoreOS at the time, and they were interested in basically using it to pull images into rkt. But they said they didn't want to use a command line tool to do it; they wanted a library, a Go library, to be able to do that. And so we created this thing called containers/image. So github.com/containers/image now has the entire protocol for moving images back and forth between container registries and local storage. But we've actually added a whole bunch of additional functionality. You can move images from one container registry to another container registry using containers/image. You can move container images onto your host into a directory structure. You can move them into the Docker daemon directly. You can move them into the thing I'm about to talk about. Basically containers/image becomes this protocol for moving images around, moving these tarballs around, and it can actually convert them: you can convert a v1 image into an OCI image and back and forth. It's really, really cool that we have this library now.

So the next thing: we talked about the container image as being this layered thing, right? It has two, three, four layers. And the way that you can create and uncreate these things is based on a layering, or copy-on-write, file system. A copy-on-write file system is a file system where you can create a directory, write to it, and then you create another layer: you basically put some kind of storage over the original layer, untar the second tarball onto it, and you put another layer on top. Some of the layering file systems that have been developed over the last few years: there's device mapper, there's a Btrfs version, there's AUFS, which was Ubuntu only, and overlay is now the most popular one. Red Hat actually developed three of those — we developed overlay, Btrfs, and device mapper.
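Concretely, the Skopeo side of that looks roughly like this; the image names and the mirror registry here are just examples:

    # Look at an image's manifest and config without pulling the whole thing down
    skopeo inspect docker://docker.io/library/nginx:latest

    # Copy an image between two registries, no daemon in the middle
    skopeo copy docker://docker.io/library/nginx:latest \
                docker://registry.example.com/mirror/nginx:latest

    # Or pull it straight into a local directory, or into the Docker daemon
    skopeo copy docker://docker.io/library/fedora:latest dir:/tmp/fedora
    skopeo copy docker://docker.io/library/fedora:latest docker-daemon:fedora:latest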
We contributed those copy-on-write drivers upstream to the D project, and we decided to pull them out into a separate library. So we created containers/storage. Now we have a library that implements all the copy-on-write you need to run a container image — to unpack a container image onto storage.

So the last thing, when you wanna run a container on a system: you know what an image is, you pulled the image to the host, you untarred it onto the system, now you need to run it. Well, the OCI didn't just do an image format; it also defined what it means to run a container. And again, it's a JSON file and an exploded rootfs. I have to have the rootfs on my disk — basically a directory that has something that looks like a root file system — and I have a JSON file associated with that. That JSON file defines the things the user adds on top of the original image: I wanna run this executable in it. So when a container runtime runs, it goes into the image, figures out what its JSON is, takes the user input, combines those together, and creates the runc spec — well, it's not the runc spec, it's the OCI runtime spec. So we have the OCI runtime spec, and also the default implementation, which is called runc. runc is a Go program for running containers. Pretty much every project in the whole world right now that runs OCI containers is actually using runc. If you download the D project and run it, it's executing the container with runc. If you download rkt, it's moving towards using runc. If you download any of the projects I'm gonna be talking about from here on out, we're all using runc. So we're using the same image format at the container registries, and we're using the same runtime on the host. And all the runtime does is configure the kernel — configure those three things in the kernel: security, resource constraints, and the namespaces — to run a container. That's what runc does. And we're gonna be talking later on about other container runtimes that have been developed, because it's a standard: runc is the default implementation, but other people have been implementing these.

Okay, did anything I just said talk about container daemons? Everything I just talked about was all about things that you can do in an individual process, right? Pulling the image, writing it to disk, putting it in storage, and running it. And yet in the market, everybody's putting out daemons, and they're getting fatter and fatter and fatter. So I have a big push — I'm trying to get it trending — that says no big fat container daemons. I wanna stop all the proliferation of daemons. If you run Kubernetes right now, say you're in OpenShift and I say I wanna run Kubernetes: the first thing it does is talk to the Kubernetes daemon. The Kubernetes daemon calls out to the Docker daemon. That's two daemons. The Docker daemon then calls out to containerd. That's three daemons. containerd then goes out and talks to runc to run the container. So there are basically four different processes in between. When you run something, you're going through all these different processes, and if anything goes wrong in any one of those steps, we end up with one of these going on. So I'm really down on this proliferation of daemons — although guess what I'm gonna do right now? Introduce a new daemon.
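Back to that runtime piece for a second: running a container by hand with runc looks roughly like this — a sketch, with a made-up bundle path and container name:

    # An OCI "bundle" is just a directory with a rootfs and a config.json
    mkdir -p /tmp/mybundle/rootfs
    # ...populate the rootfs somehow, for example by extracting an image into it

    cd /tmp/mybundle
    runc spec                 # generates a default config.json (the runtime spec)
    runc run mycontainer      # sets up the cgroups, namespaces, etc. and runs it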
Okay, so now we've talked about those four components. Now let's look at Kubernetes and OpenShift. What happens when Kubernetes or OpenShift wants to run a container? Well, the first thing it does — well, again, let's take a step back. Kubernetes was originally built totally around Docker, as I said. All the code was embedded directly in the program. And along came CoreOS; they wanted rkt support. So what did they do? They wrote the biggest patch in the universe to Kubernetes, which basically did the equivalent of an if-then-else statement: if I'm running rkt, do these steps, otherwise do the original code. And the Kubernetes developers said at that point: time out. If we do this for rkt, someone else is gonna come along and ask for the same thing. So Kubernetes turned it on its ear and said: instead of us taking the container runtimes into Kubernetes, we're gonna define a protocol called CRI, the Container Runtime Interface. When Kubernetes wants to run something, it will call out to a daemon and basically say: run this for me, exec into this for me, give me the stats on this. So they defined the protocol they would talk. What happened then is CoreOS went back and created rktnetes, a CRI front-end for rkt, and the D guys basically created a shim program, the Docker shim, that would front-end their daemon. All this became possible about a year, year and a half ago.

So Kubernetes tells the CRI implementation that it wants to run a container. The CRI implementation needs to know what it means to be a container, so it uses the OCI standard for running a container. It needs to be able to pull that image onto the copy-on-write file system, so it needs to pull an image, and then it needs to execute it. That's what happens when I run a Kubernetes container. Seems very similar to the previous slides. So my engineers, after we had done all this work, came to me about a year and a half ago and said: why don't we build a very lightweight tool to run containers for this? And that was called CRI-O. CRI-O is the name of a daemon we have created — a very lightweight daemon, not a big fat daemon, that's my excuse — that is scoped to the Kubernetes CRI. Its only user is Kubernetes, and it uses standard components as building blocks. Nothing more and nothing less.

Does everybody know what version of the D word Kubernetes currently supports? It supports 1.12. That's what we're shipping right now in RHEL. The problem was, Docker was updating so fast and constantly breaking backwards compatibility that Kubernetes finally said: that's it, we're only supporting this. And Kubernetes has just moved to 1.13, which came out about nine months ago. So we're in kind of a sticky place right now, because we can't update to the latest things the D command has, because they keep breaking backwards compatibility. There have been a lot of stability problems underneath Kubernetes because of this. As a matter of fact, even Docker has admitted this, and they're creating new products to be able to run Kubernetes — this thing called cri-containerd. So we wanted to build a lightweight container daemon that is totally dedicated to Kubernetes workloads. CRI-O loves Kubernetes. It is totally dedicated to Kubernetes; Kubernetes is everything to us. Mesosphere — she's a cute chick, we kind of like her, but we're a one-woman man, so we don't do Mesosphere. Swarm — not my favorite-looking gal; we're sticking with Kubernetes. The new chick on the block? Not for us. Sticking with Kubernetes. The old gal? Not for us.
CRI-O is all about Kubernetes. That's it. If Kubernetes says we need an interface that does this, we implement it in CRI-O. We implement nothing else. So let's look a little deeper at CRI-O. CRI-O not only takes advantage of containers/storage and containers/image and the OCI image bundle and the OCI runtime, it actually has to create that JSON file on disk. Part of the Open Container Initiative is some libraries and tooling that were built to create that JSON, the OCI runtime spec, and we use that OCI runtime tools library to generate the OCI config. The next thing we use is CNI. CNI was, again, actually developed by CoreOS. So CoreOS developed a standard that everybody in the industry has sort of glommed onto: it is the default standard for setting up networking in a Kubernetes environment, and CRI-O is using it. So when you set up your networks, we will use CNI to do it. It's been tested with different backends — Flannel, Weave, OpenShift SDN — and all the new networking tools that are coming out are implementing CNI backends.

Finally: when you run containers on the box, they're just processes living on the box. So you usually need something that monitors them, keeps track of them. A lot of times with the D package, in the olden days, when you stopped the Docker daemon and restarted it, all the containers would go away, because the only thing monitoring them was the daemon. So what you want is a little lightweight program that just sits out there and runs while the container is running. It listens to things like standard out and standard error of the container, and that's how you can manage your containers. So you have a monitoring program that does it. It used to be the daemons, but we built a very lightweight process that sits there and just watches the container. It monitors logging, it handles the TTY so you can connect back into the TTY and out, and it detects if the container dies — if a process in the container dies, it'll finish off the container. That's called conmon. It's written in C, very lightweight, incredibly small memory footprint.

So this is what a pod looks like inside CRI-O. Does everybody know what a pod is? Okay, about half the room. Kubernetes doesn't run containers, it runs pods. Pods are one or more containers running in the same environment. They share the same network, they share the same IPC, and they basically run together. So you see up here that they're sharing the IPC and network namespaces — the PID namespace is actually optional — and they also run in the same cgroups. It's kind of a cool idea. Most people still think in terms of containers, but pods are all about things like sidecar containers or monitoring containers. You might have your primary workload running inside a container, and then you have a secondary container that watches it. Some security companies are doing that now.
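For that CNI piece: a backend is configured with a small JSON file on the node. Here's a minimal sketch — a plain bridge network with host-local IPAM; the network name and subnet are just examples:

    # CRI-O (and other CRI implementations) read CNI network definitions from
    # /etc/cni/net.d/.  A file like /etc/cni/net.d/10-mynet.conf might contain:
    {
      "cniVersion": "0.3.0",
      "name": "mynet",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }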
Another idea I have: a lot of times containers come along and they need really high privileges to modify the kernel. Say an NVIDIA container comes along and it wants to load a kernel module, and then different container applications are actually gonna use, say, that special device. So you might wanna have a sidecar container that's able to load kernel modules, and then the secondary container is locked down. So you have some interesting ideas with pods. A pod in a CRI-O environment basically looks like this. In a Kubernetes environment there's always this infra container — the pause container — that really just holds open the network namespace. Then you have container A, and optionally container B, and then you have conmon. So every time CRI-O creates a pod, it looks like this.

Then we get up to a higher level, and this is what the whole architecture of CRI-O looks like. We have the kubelet, which is part of Kubernetes, and that talks — I'm trying to find the pointer; yeah, I'll do it by finger — gRPC here. gRPC in this case is carrying the CRI: gRPC is the Google RPC, but the protocol is actually the CRI, and that talks to CRI-O. Inside of CRI-O we have a library that's using containers/image, which we talked about at the beginning. We also have containers/storage. It also has CNI for setting up the network. It has that OCI generate tooling so it can generate the runtime spec before launching runc or some other runtime. And then it has the runtime service, and it also has the image service, which is basically how we manage the container storage: what images are currently on the box, whether we need to pull, things like that. And then you have the two pods — in this case we're showing two pods running. We have pod one with two containers, so we have the infra container plus the two others; that's the previous picture. And the other one is probably the most common way people run pods: one container running in it, along with the infra container. That is the entire infrastructure. That's the entire thing of CRI-O. CRI-O is actually very thin, very lean.

So let's talk a little bit about CRI-O status at this point. We came out with CRI-O a few months ago. But oh — one of the things about CRI-O: if you wanna contribute to CRI-O, that's great, but in order to get anything merged into CRI-O, it has to pass our test suite. Our test suite is currently running over 500 Kubernetes tests. Our goal is that if you cannot pass the entire Kubernetes test suite and the entire OpenShift test suite, you are not gonna get your patch merged. No PRs get merged without that. That means that every time we get a patch in, it takes hours — like one, two, three hours — to actually pass the tests. If you fail, you're out.

So we shipped CRI-O 1.0 back in, I think, the November timeframe. The guys on my team wanted to have a 1.0; I wanted no part of a 1.0, okay? Because it becomes a hassle. So we have CRI-O 1.0, which supports Kubernetes 1.7. Currently that's in tech preview. So if you're running OpenShift on RHEL right now, you can actually set up CRI-O to run underneath your Kubernetes environment. It's in tech preview, not supported, but you can play around with it. Later, we came out with 1.8; 1.8.4 is the current version. Notice we jumped from 1.0 to 1.8. From now on, Kubernetes and CRI-O are gonna have the same release number. If you wanna run Kubernetes 1.9, you will run CRI-O 1.9. If you wanna run 1.8, you will run CRI-O 1.8. So when we get to Kubernetes 1.10, can anybody in the room tell me what version of CRI-O you'll use with it? Anybody? That's a slow crowd. The idea is basically: we don't wanna have any confusion about it.
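If you want to try that tech preview, the hook-up is roughly: run the CRI-O daemon, then point the kubelet at its socket instead of at the D daemon. The exact flag names and socket path have shifted between releases, so treat this as a sketch and check the docs for your version:

    # Start the CRI-O daemon (normally via systemd)
    systemctl enable --now crio

    # Tell the kubelet to use a remote CRI runtime instead of the built-in one
    kubelet --container-runtime=remote \
            --container-runtime-endpoint=unix:///var/run/crio/crio.sock \
            # ...plus the rest of your normal kubelet flags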
Kubernetes 1.8 is not something that OpenShift is gonna ship. OpenShift is actually skipping shipping 1.8, except for Online. So as of right now, Kubernetes 1.8 is being shipped on OpenShift Online. Origin right now is at 3.8 — that's OpenShift version 3.8, which supports Kubernetes 1.8 — but you can't buy it from Red Hat. CRI-O is running in OpenShift Online now; we'll talk about that again in a second. Kubernetes 1.9 is being released right now, so CRI-O 1.9 is available. OpenShift 3.9, which is scheduled for springtime, will have full CRI-O support built into it: Docker will be the default and CRI-O will be the alternative. The goal at OpenShift 3.10 is to flip it and make CRI-O the default and the D word the alternative. That is scheduled, I think, sometime in the summertime.

Maintainers and contributors to the CRI-O project: Red Hat and Intel have been working very heavily on this. Lately we've been getting a lot of contributions from Lyft. SUSE has been involved. Now I could probably put CoreOS up there, since they will be involved. So those are the heavy maintainers of it. CRI-O is now powering nodes on OpenShift Online. Basically, as of right now, if you get on OpenShift Online, you will be using CRI-O. We've been dogfooding it totally, and we are getting really, really good results with it. One company contacted us — we had heard rumors that they were using it. They have not given us liberty to say who they are yet, but we asked them, why haven't you told anybody you're using CRI-O in production? And this was their quote: CRI-O just works for them, so there's no reason to complain. And I think that is the perfect reason. That is the reason we built CRI-O. We want CRI-O to make containers in production boring. It just works. Our goal with CRI-O is to simplify, to make it as simple as possible to run containers under Kubernetes. Kubernetes and OpenShift are complex enough; we don't need to make an adventure of running containers on the host.

So everything we do — the reason I get paid and my team gets paid — is to make OpenShift successful. One of the reasons we did CRI-O is we wanted to make OpenShift more stable running in the environment. But OpenShift actually has other features than just running Kubernetes. So what else does OpenShift need? It needs the ability to build container images. It needs the ability to push container images to container registries, right? Anybody that's played with OpenShift has used source-to-image — basically, you want to be able to build containers as well.

So this guy, Nalin Dahyabhai: one year ago this week, we were out at DevConf. DevConf is a big developer conference out in Brno, Czech Republic. We were talking about containers/storage and containers/image at that point. And I turned to him and I said, you know, what I really need is a core-utilities package for building containers, right? If a container image is a tarball and a JSON file, I want tools to build them together. And he said, well, what should we call it? I said, well, just call it builder, okay? And I happen to have a slight Boston accent, so he came out and called it Buildah. Now I'm going to ruin everybody's picture of our icon right here: that's a Boston Terrier in there, because it's making fun of the Boston accent. But what's he wearing on his head? The first thing when this icon went out, someone said, why do you have a dog wearing tighty-whities on his head?
The newer icon actually has more of a hard hat, but I keep that one just for the joke. Okay, so the goal with Buildah was — again, looking at container technology — how do you build containers? Someone shout out in the room: how do you build container images now? Docker build, okay. Can someone name another way to build container images? S2I. And what does S2I use under the covers? It uses docker build, okay? Here we are, four years into the container revolution, and the only way to create a tarball and a JSON file is with the D word. Don't we suck? Isn't that horrible? I can tar things up; I can do that with a shell script. So I wanted a series of tools to be able to do that — core utilities for building container images, with a simple interface.

So the buildah command actually has buildah from, because you want some way to say: I wanna get a container image off a container registry and pull it. So if I wanted to build from a container image, I could do buildah from fedora, and it creates a container, and then I can mount that container onto my host. From that point on, I can just interact with that mount point on the host. Segue: does anybody ever use this command? Okay. This command allows you to copy content into a container image, and allows you to take content in a container image and copy it out to your host. They had to build a tool to do that. Now wait till you see the tools I built. I built this tool called cp. Okay, I put it in the coreutils package. So you're able to use this cp command to actually copy content in and out of containers. And how do you do it? You just say cp -r source-directory into the container's mount point. Pretty cool. I didn't stop there, though. Wait, wait — do you see this? I created this tool called DNF — or YUM, for those guys. It used to be called YUM, I changed the name to DNF, and now I'm gonna change it back to YUM, because I'm schizophrenic. So I can use DNF, and I added a flag to it called --installroot, and I can point it at a directory and actually install content into that rootfs directory. But I didn't stop there. I created a tool called make. It's a make tool, it's fairly popular in the C community, and I can do a make install with a DESTDIR pointed at a directory and install directly into the directory. So I have lots and lots of tools that I've built over the years to be able to move data into a directory.

One of the cool things here is, when I create this container, I need to add some stuff to — remember I talked about the OCI JSON file — so we have buildah config, which can set the entrypoint, the environment variables, all the flags to put inside the container image. And then I can commit it to create a container image: I take the container, and when I'm happy with it, I create a container image, and finally I push it somewhere. And guess what — I can push it anywhere. I can push it to docker.io (I guess that costs me money), I can push it to any container registry. So I can do all this stuff, moving content around, really simply.

But here's the really cool thing about this. When you run container images that are built with docker build, what's the problem with them? They come with not just Apache or not just Nginx — it's a benefit and a cost — they come with DNF inside of them. If you wanted to run a make inside of one, you have to have make in it, you have to have GCC.
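Pulled together, the Buildah workflow I just described looks roughly like this — a sketch, with example package, image, and registry names:

    # Create a working container from a base image (or from scratch)
    ctr=$(buildah from fedora)

    # Mount its root filesystem and use ordinary host tools on it
    mnt=$(buildah mount "$ctr")
    dnf install -y --installroot "$mnt" httpd   # may also need --releasever / repo config
    cp -r ./site-content "$mnt"/var/www/html/   # plain cp into the rootfs
    buildah umount "$ctr"

    # Set the image metadata (the JSON side), commit, and push
    buildah config --entrypoint '["/usr/sbin/httpd","-DFOREGROUND"]' "$ctr"
    buildah commit "$ctr" my-httpd
    buildah push my-httpd docker://registry.example.com/dwalsh/my-httpd:latest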
When you run the container images out in the world, they come with all the build artifacts required to build them. I often work with security people who say: I need to get all that stuff out of there. I don't want a hacker getting in and having access to all those tools when he gets onto a machine. So they want small images — everybody's after small images. And yet the only way we build images right now is to stick every build tool in the universe in there. Python gets stuck in every single container. If you run Apache, you have Python in there. Why? Because DNF uses Python. Do you need DNF in there? No. The way you're supposed to update container images is not to go into the container and do a dnf or yum update; what you're supposed to do is replace the image. So this tooling actually allows you to build a minimal image — container images with very minimal content.

And so you say to me: Dan, wait, what about Dockerfiles? I actually went to the concierge today and asked them to change a five-pound note into coins. He brought me back a note, and I said, no, no, I need a lot of little change. He looked at me strangely and gave me a big handful of change. So, what about Dockerfiles? Buildah supports the Dockerfile format, and we call that buildah build-using-dockerfile, with -f. It basically follows the same syntax as docker build. But we're lazy — engineers are lazy — so we actually have buildah bud. Anheuser-Busch has not approved the name, but we're gonna go with it for now. So you can build containers using the traditional method for building containers.

Everybody read that line? I should have made you read the last one. Someone read the line — this is supposed to be the interactive part of the talk. What about other formats? I wrote a brand new tool called bash. Okay: shell scripting. The way you build containers is you can use a Dockerfile or you can use bash, either one. We're not gonna invent a Buildah file, right? There's not gonna be some special language for doing this. The goal is basically to use the standard tools you have available on a Linux system to build tarballs with JSON files. But if we wanna use higher level tools like source-to-image: we're working to make OpenShift use Buildah for source-to-image builds rather than using the D word. We also wanna work with Ansible Container. So if you want to specify in an Ansible playbook what you want in the contents of your container, we're gonna work with Ansible Container on that; they're currently using the D word underneath. A lot of people looking at Buildah really wanna run builds inside of Kubernetes — distributed build systems, things like that. Currently when people do this, they're always bind-mounting the Docker socket into the container, which gives you full root access on the host as soon as you do. So Buildah might be a simpler tool for running in, say, a large Kubernetes environment. Buildah has some shortcomings and some positives around build speed, but basically, if you're building containers in a production environment, this workflow is actually gonna be faster than docker build. So that's Buildah.
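For the Dockerfile path, a one-line sketch (the tag and build context are just examples):

    # Build from an existing Dockerfile — same syntax, no daemon involved
    buildah bud -f Dockerfile -t my-app:latest .

    # Or drive the whole build from plain bash instead of a Dockerfile,
    # using the buildah from / mount / config / commit sequence shown earlier.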
So what else does OpenShift need? Well, you need a way to debug this thing, okay? Currently, in an OpenShift environment with the D word running, if something goes wrong on the host, what do you do? You SSH onto the box and you start running D commands: let me look and see what images are installed, let me look at what containers are running on the system. Well, in the CRI-O world, there are two tools that are being added. One of them is called crictl. I don't cover it closely in this talk, although we're about to start shipping it. Originally it was a test tool for testing your CRI implementations. It implements the Kubernetes CRI protocol, and it can talk to the daemon and ask it things like: show me the pods that are running, show me this, show me that. So it covers a lot of the diagnosis you might wanna do, but from outside of Kubernetes, against the container runtime.

But what happens if the container runtime is hung and you wanna look behind it? Well, remember, all the storage, all that stuff, is happening on disk. It's not tied to CRI-O. Buildah is using the same database, the same storage, that CRI-O does. Everybody's able to use it together, because I invented another thing called file-system storage, okay? And I created a thing called file-system locks. You can put locks on file systems now, thanks to me. So what we're doing is basically allowing tools to work together without requiring a big fat container daemon that controls everything — everybody going mother-may-I, mother-may-I, mother-may-I. So we needed tooling that actually works underneath the covers on the backing storage. So we created a project called libpod. We wanted a Go language library that allows us to manage pods. We wanted to separate it all from CRI-O, but basically just allow us to manage pods, and eventually that library is hopefully gonna get sucked back into CRI-O and into Buildah and other tools.

Secondary to that, we created a tool called Podman. Anybody ever hear of kpod? Podman actually used to be called kpod, but we had to wait forever for legal and marketing and stuff, so they came up with Pod Manager — Podman. One of the things we wanted to do with Podman was implement the entire Docker CLI without a big fat container daemon. So we copied the exact CLI. If you wanna list the containers that are running on the system, it's podman ps. If you wanna run a container on the system, it's podman run -ti fedora sleep. If you wanna exec into a container, if you wanna list the images on the system — same thing. Podman is just about to be released into Fedora, and we're looking to get it into RHEL probably around the 3.9 timeframe, lining up with that. Basically you can do everything you want with a container — have that entry-level experience with Podman that you traditionally get with the D command. That's the goal with Podman: implement the entire stack. We're not implementing Swarm. We're not implementing Compose. We're implementing sort of the basic tools — I don't have the list here — but probably about 95% of everything you'd ever wanna do with the D command is now implemented in Podman.
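A quick sketch of what those two command-line surfaces look like — the container name and image tag are just examples:

    # The docker-style experience, no daemon required
    podman pull fedora                              # pull an image
    podman images                                   # list local images
    podman run -d --name sleeper fedora sleep 600   # run a container in the background
    podman ps                                       # list running containers
    podman exec -ti sleeper /bin/bash               # exec into it
    podman build -t my-app .                        # calls into Buildah under the covers

    # And on a CRI-O node, crictl speaks the CRI to the daemon for debugging
    crictl pods      # list pod sandboxes
    crictl ps        # list containers
    crictl images    # list images the CRI runtime knows about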
So, we talked in the beginning about Skopeo — I'm just gonna follow up. Skopeo is being used heavily with OpenShift, under the covers, managing container images and moving them around the environment. You can do all these cool things with it. You can inspect — remember, I talked about its original goal being to inspect. You can copy — in this case copying off of a container registry and moving it into an Atomic Registry. You can copy directly from docker.io into a directory. You can create OCI images. You can delete images off of container registries. Basically this tool lets you work with container registries, and it can work directly with the other pieces too: if you wanna copy off of a container registry and push directly into the Docker daemon, that's supported; if you wanna copy directly into CRI-O's database, that's supported; it works with Buildah; it works with everything. Again, it's using containers/image under the covers — the same library that's being used by Buildah, Podman, and CRI-O — so they can all share the database, they can all share the content on the system. Skopeo, as I said, is being used all over the place. Pivotal is a major contributor to it and runs it in their PaaS environments. We're getting contributions from companies you don't normally think of as running containers — lots and lots of big industrial companies now are running containers in their environments, and they need to be able to manage these container images and move them around, and Skopeo tends to be the tool to do that.

So everything I talked about in this talk is listed here. Everything is fully open source; it's all up on GitHub. There's CRI-O, Buildah, Skopeo; libpod is a little different — that's what you want if you wanna play with Podman. We also sit on two freenode IRC channels, for CRI-O and Podman, and we have a site. Any questions? Everybody's taking a picture — I'm sure they wanna get me in this picture. Yes?

What's this about image signing? So, Red Hat and partners, inside of containers/image, developed what we call simple signing. There's a real problem in the world right now in that nobody does a really good job of signing images. People in the room might have heard of Notary. Notary was the effort by Docker to create a capability for signing images — people want something like an RPM trust signature. We found that Notary was way, way too complex, and we found that almost no one was using it. So what we did was go off and create our own signing capability. We built what we call simple signing. It's basically GPG signatures. We allow you to sign images that exist on any container registry; we don't make you stand up some big specific container registry or run some specific daemon. You create signatures as image artifacts, and you can store those signatures on any web server you want, or in local files. We actually built it into the OpenShift registry, so if you pull images off OpenShift, we can do signatures on them. And it actually works pretty well.

The problem with signatures right now, though, is that Kubernetes doesn't know about them. So we built them into our tools. All of our tools — Podman, Buildah, CRI-O — support signatures. You can configure a system that basically says: I only trust images that come from this registry, or I only trust images that are signed by Dan Walsh, if you wanted to do that. Not a good idea, but you might want to do that. And you can set all this up. And what happens is, Kubernetes comes down and says, run a container — I want to run the Nginx container — and the container runtime comes back and says: wait a minute, that's not signed by Dan Walsh, so it's not allowed.
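Roughly what that setup looks like with simple signing — the key, email address, and registries here are all made-up examples, and the signature store itself is configured separately under /etc/containers/registries.d/:

    # Sign while copying: attach a GPG signature to the image being pushed
    skopeo copy --sign-by dan@example.com \
        docker://registry.example.com/myapp:latest \
        docker://mirror.example.com/myapp:latest

    # On the consuming host, /etc/containers/policy.json can then require it,
    # roughly along these lines:
    {
      "default": [ { "type": "reject" } ],
      "transports": {
        "docker": {
          "mirror.example.com": [ {
            "type": "signedBy",
            "keyType": "GPGKeys",
            "keyPath": "/etc/pki/containers/dan-walsh.gpg"
          } ]
        }
      }
    }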
But that decision doesn't go back up to Kubernetes — in both the Notary case and the simple-signing case. So what does Kubernetes do? The container runtime says, I'm not running it. And Kubernetes says, no, you're gonna run it. And it says, no, I'm not. And you end up like you're arguing with a five-year-old, because there's no protocol built for it. So lately Kubernetes has started an effort called Grafeas, which is looking at moving signatures into the Kubernetes protocol. And we're looking to get our simple signing in as the default implementation. We're trying to work with Google to basically say: we just need GPG signing keys. We don't need CAs, we don't need huge infrastructure for this — we just need the same stuff we've been using to sign RPMs forever. And hopefully we'll be able to work with Kubernetes to get simple signing up a layer, into Kubernetes. So Kubernetes will know: this node is not allowed to run images that aren't signed by Dan Walsh, so therefore I won't schedule images that aren't signed by Dan Walsh onto that node. That's why we're not pushing signing to that degree yet — because it's not built into the Kubernetes protocol. And because Docker has control — and that just cost me money — there's no way that company's gonna allow simple signing to get in when they're trying to sell this thing called Notary.

Other questions? Yes? Well, Podman gives you every single thing you just said. Every one of those commands is underneath Podman. So if you need that experience, Podman will do it. Podman even has a podman build that calls into Buildah to do a buildah bud. So we have that for people who need it. But in the Kubernetes world, that doesn't make any sense; those interfaces aren't necessary. One of the big advantages — and one of the big problems — with the D word is that it's a client-server operation. You have to have this big friggin' daemon sitting out there for every application. Say you wanna just run a container from a systemd unit file that comes up at boot — say I wanna just run Apache at boot time. If you put the Docker CLI into the systemd unit file, the container actually ends up not being a child of systemd; it ends up under a totally different parent. It ends up being a child of the Docker daemon. So it's kind of a weird situation that got built. Now, because they have a daemon, they can allow remote access to the daemon over the network, and that's something we can't do with Podman. So there is a different experience there. But as a security guy, I'm not really into allowing this big fat daemon that gives you full root access to my machine with no authorization; I'm a little skeptical of that. My opinion is, as soon as you go across machines, you really wanna use something like OpenShift and Kubernetes, because they built in authorization and authentication and all that stuff that Docker has never built into their tooling. So yeah, we're really talking about different use cases, and Docker is actually moving away from that also. At their conference this past spring, they announced — they've taken a look at CRI-O and said there are some good ideas going on there.
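A rough sketch of that systemd case using Podman, so the container stays under the service's own process tree; the unit name, image, and port are just examples:

    # /etc/systemd/system/apache-container.service
    [Unit]
    Description=Apache running in a container via podman

    [Service]
    ExecStart=/usr/bin/podman run --rm --name apache -p 8080:80 docker.io/library/httpd
    ExecStop=/usr/bin/podman stop apache
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    # then:
    #   systemctl daemon-reload
    #   systemctl enable --now apache-container.service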
Remember I talked about this containerd thing? With the original setup, you would talk to the D daemon, and the D daemon would do the pulling of images and would put them into its storage, and then would talk to the containerd daemon to actually launch the containers. The reason they did that is they wanted Swarm to have better performance: if you're going through the D daemon, your performance tends to be bad, because you're going through this layer and it's a really complex daemon. So they wanted Swarm to talk directly to containerd so they could get rid of that bottleneck. But containerd originally didn't do anything about pulling images or storing images — that still happened in the Docker daemon. After they saw us doing all this work, they actually moved that code into the CRI-O daemon — I mean, into the containerd daemon. One of the problems, though, is that the Docker company is being a little schizophrenic right now, because they still want Swarm, and they want Mesosphere, and they want Kubernetes. So they're constantly chasing after what CRI-O is doing, but they're doing it with, in my view, a very big container daemon, and I'm not interested in supporting that. If those guys want to build daemons to support Mesosphere, there should be a separate Mesosphere container daemon that just implements whatever Mesosphere needs — not merge them all together into one big daemon.

Anybody else? Okay, I've gone way over, but yeah, I'll be around. I'll be around all day, and there is a session where you guys can ask more questions. So thanks for having me and listening, and if you've got a favorite charity, I'll donate this. You have a favorite charity? Sorry. We have...